History and construction

This configuration is named after Pappus of Alexandria. Pappus's hexagon theorem states that every two triples of collinear points ABC and abc (none of which lie on the intersection of the two lines) can be completed to form a Pappus configuration, by adding the six lines Ab, aB, Ac, aC, Bc, and bC, and their three intersection points X = Ab·aB, Y = Ac·aC, and Z = Bc·bC. These three points are the intersection points of the "opposite" sides of the hexagon AbCaBc. According to Pappus's theorem, the resulting system of nine points and eight lines always has a ninth line containing the three intersection points X, Y, and Z, called the Pappus line.

The Pappus configuration can also be derived from two triangles XcC and YbB that are in perspective with each other (the three lines through corresponding pairs of points meet at a single crossing point) in three different ways, together with their three centers of perspectivity Z, a, and A. The points of the configuration are the points of the triangles and centers of perspectivity, and the lines of the configuration are the lines through corresponding pairs of points. The Desargues configuration can also be defined in terms of perspective triangles, and the Reye configuration can be defined analogously from two tetrahedra that are in perspective with each other in four different ways, forming a desmic system of tetrahedra.

For any nonsingular cubic plane curve in the Euclidean plane, three real inflection points of the curve, and a fourth point on the curve, there is a unique way of completing these four points to form a Pappus configuration in such a way that all nine points lie on the curve.

A variant of the Pappus configuration provides a solution to the orchard-planting problem, the problem of finding sets of points that have the largest possible number of lines through three points. The nine points of the Pappus configuration form only nine three-point lines.
However, they can be arranged so that there is another three-point line, making a total of ten. This is the maximum possible number of three-point lines through nine points.
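The incidence computation described above is easy to verify numerically. The sketch below (plain Python; the two collinear triples are arbitrary example coordinates, not taken from the text) works in homogeneous coordinates, where both the line through two points and the intersection of two lines are given by cross products, and checks that X, Y, and Z land on a common Pappus line:

```python
# Numerical check of Pappus's hexagon theorem in homogeneous coordinates.
# The two collinear triples below are arbitrary examples.

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def point(x, y):
    """Affine point (x, y) in homogeneous coordinates."""
    return (float(x), float(y), 1.0)

# A, B, C collinear on y = 0; a, b, c collinear on y = 1.
A, B, C = point(0, 0), point(1, 0), point(2, 0)
a, b, c = point(0, 1), point(1, 1), point(3, 1)

# The line through two points and the intersection of two lines
# are both cross products in homogeneous coordinates.
X = cross(cross(A, b), cross(a, B))  # X = Ab . aB
Y = cross(cross(A, c), cross(a, C))  # Y = Ac . aC
Z = cross(cross(B, c), cross(b, C))  # Z = Bc . bC

# X, Y, Z are collinear exactly when their 3x3 determinant vanishes,
# i.e. when the scalar triple product X . (Y x Z) is zero.
det = sum(x * yz for x, yz in zip(X, cross(Y, Z)))
print(abs(det) < 1e-9)  # True: the ninth (Pappus) line exists
```

Rerunning with other triples (avoiding points on the intersection of the two base lines) keeps the determinant at zero, which is precisely the content of the theorem.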
Today, the Earth got a little hotter, and a little more crowded. Daily Climate Change: Global Map of Unusual Temperatures, Feb 27 2014. How unusual has the weather been? No one event is "caused" by climate change, but global warming, which is predicted to increase unusual, extreme weather, is having a daily effect on weather, worldwide. Looking above at recent temperature anomalies and the jetstream, the polar aneurism over the US has changed but persists, while the North Pole and surroundings are experiencing much warmer than normal temperatures - not good news for our Arctic thermal shield of ice. Hotter than usual temperatures continue to dominate human habitats. (Add 0.3-0.4 C to these anomaly values to calibrate them with those of NASA.) Daily updates can be seen here for both the temperature anomalies map and the jetstream map. For real time animated US surface wind patterns, click here, and here for the planet. (Clicking on "earth" there reveals data and map options.) Obama's New Truck Fuel Standards To Result In Big Emission Cuts, reports Ari Philipps at Climate Progress. The higher fuel efficiency standards will affect trucks that created 7% of all US carbon emissions in 2011, and will save users $50 billion in lifetime fuel costs. The rules should roll out by 2016, says the UK Guardian, and Obama's action illustrates his new operating mode: sidestepping the 'Do Nothing on Climate' Congress with direct executive action. Shame on you, Congress, for not doing YOUR duty! January 2014 Fourth Warmest Globally since records began in 1880, reports Brian Kahn at Climate Central. It continued a solid streak of above-average temperatures lasting almost 29 years. Global average temperatures were also among their top 10 warmest for the ninth straight month, according to NOAA data.
January 2014 set other records, too, reports Jeff Spross at Climate Progress: third-wettest January on record for Britain, the fifth-wettest for Australia, the fifth-driest on record for the U.S., and the second-warmest since 1961 for China. Arctic sea ice hit its fourth-lowest level on record. Video screen capture: Mary Ellen Harte. OO Watch 27 Years Of 'Old' Arctic Ice Melt Away In Seconds Earth's Thermal Shield, Arctic Ice, Drops to Record Lows in February 2014, reports Ryan Koronowski at Climate Progress, when temperatures warmed above normal, even in perpetual Arctic winter darkness. The Arctic has already warmed over 3.5°F since the 1970s, and federal scientists warned recently it could warm up to 23°F by 2100, likely ensuring the disappearance of Earth's thermal shield during the entire Arctic summer, when it is most needed. Total loss of summer Arctic ice is estimated to double global warming. Ready to act yet? OO Total loss of summer Arctic ice is estimated to DOUBLE global warming. It's Melting, Melting... Loss of Earth's Thermal Shield To Speed Warming Faster Than Thought, says a new study, reports Ari Philipps at Climate Progress. The Arctic ice cover is shrinking dramatically in area, volume and ice density - translation: Earth's Arctic thermal shield is a shell of its former self, can be broken up by storms, and is getting worse. As white ice is replaced by dark water, much more sunlight gets absorbed, heating the water. The new study finds that the darkness of the water, and hence its ability to absorb sunlight, is much greater than previously thought. Thus, it will speed up global warming far more than previously estimated. OO In this British YouTube video, The Banker, "State of Play" star Bill Nighy is deadpan hilarious: a banker squirms, downplaying how a tiny tax on financial speculations could produce over $100 billion in funds to address poverty and climate change. See its equally hilarious sequel, set in 2024, as well -- both are must-sees!
Screen capture, Mary Ellen Harte Bankers Could Be Heroes -- And Even Stop Floods! reports Bill Nighy at the UK Guardian, by adopting a teeny weeny tax (0.05%) on speculative banking. The Financial Transaction Tax (aka the Robin Hood Tax) could potentially raise over $30 billion in funds yearly in the UK alone to address poverty and climate change. It has widespread support from over 1000 economists, senior financiers, Bill Gates, the Vatican, and people across the UK and Europe. As it is, reports Sophie Yeo at Responding To Climate Change, France and Germany want to launch such a tax along with 9 other European nations by May 2014. Now imagine what the US could do if it followed their example... With No End In Sight, California's Drought Endangers Public Health reports Jeff Spross at Climate Progress, in several ways: as groundwater sources shrink, contaminants within them - farming, fracking, and industrial pollutants, and natural arsenic - can concentrate to hazardous levels. Dry conditions turn creeks and ponds into stagnant pools, perfect breeding grounds for disease-carrying mosquitoes, while the resulting dust worsens asthma and lung problems. Meanwhile, the weather forecast almost certainly ensures drought for the rest of 2014. Sounds like a great time to get rooftop solar... Which is the better investment, wind power or natural gas? The answer might surprise you. Credit Mary Ellen Harte (left), University of Texas (right). Wind Farms Remain 'Highly Efficient' For 25 Years says a new study, reports Sophie Yeo at Responding To Climate Change. This means their 'shelf life' is longer than previously thought, which makes them a good choice for investors. Furthermore, wind power can fill in the gap when demand surges, as wind did recently during a Texas cold snap. 
Now compare wind farms with US fracking wells: nearly 30% of US oil now comes from expensive fracking wells whose production declines 60-70% within their first year, reports Asjylyn Loder at Bloomberg Businessweek. Which would you rather invest in? Wait, Wait, Don't Tell Me! Exxon CEO Comes Out Against Fracking Project -- Wait, wait, don't tell me --- uhhhhhhh - because it will affect his property values? "Actually, honey, we could pass it off as an art installation, dontchya think?" Source: Fort Worth Citizens Against Neighborhood Drilling Ordinance. This Year's Weird Winter Extremes writes Philip Newell at Climate Nexus, include a record-breaking dry year for California, record-breaking precipitation in the United Kingdom, record-breaking warmth in Siberia and other parts of Russia, and record-breaking cold and heat in the US caused by abnormal behavior of the jetstream, creating a polar vortex that pumped heat into Alaska as it funneled Arctic cold deep into the midwest and south to Georgia. How exciting!! I can hardly wait for next year! Discovery of the Week: 1 in 4 Americans Don't Know That the Earth Orbits the Sun OO an NSF poll finds. Remember, folks: Friends let other friends know - gently! - that the Earth revolves around the Sun. Book of the week: OO Facing the Change, Personal Encounters with Global Warming @@ NOW You too can be part of the Fossil Fuel Follies Future! This satire of the American Petroleum Institute will have you laughing ... or crying, if you think about the truth of it too much... a must-see! Screencapture by Mary Ellen Harte. ☼☼☼ On the Bright Side ☼☼☼ (>(>(> PEOPLE SPEAK OUT <)<)<) OO Global Access to Family Planning Critical For Sustainability and slowing climate change, say policy experts Paul and Anne Ehrlich. 
OO Doctors Call On President Obama For More Regulation On Fracking OO Obama Says Climate Must Influence 'All' Government Policies OO Scientists Are Worried The IPCC Is Underestimating Sea Level Rise - the Intergovernmental Panel on Climate Change, the global climate change authority, which forges a consensus over several years with thousands of scientists worldwide for its reports. OO Filipino Envoy Yeb Sano: Kerry's Climate Warning Needs Political Backing OO Energy Policy Expert Amory Lovins Sees Renewables Revolution In Full Swing - and lives his ideas: last year he grew 30 lbs of tropical fruit in his small garden - in his unheated house at 6,500 feet in the Colorado Rockies. via Earth Hour Global OO Environmental Advocates Target Climate Change As Democratic Election Issue OO Native Americans Vow A Last Stand To Block Keystone XL Pipeline OO Ex US Hockey Player Mike Richter: Climate Change Threatens Winter Olympics Labrecque family photo, via Climate Progress. OO Meet The Family The Tar Sands Industry Wants To Keep Quiet OO California's Fracking Opponents Introduce New Moratorium Bill OO Colorado Adopts Tougher Air Rules For Oil, Gas Industry OO Group Seeks Fracking Ban In Texas Town Paris. Source: Flickr, Never House. OO Over 1,000 Cities Demand Ambitious EU Climate Targets OO UK: Climate Change Is A 'National Security' Issue Say Military Experts OO German Village Resists Its Destruction For Site of Coal Strip Mine The most comprehensive newsfeed on climate change online. For more news on our changing climate around the world, click here. OO Brussels Sues Britain Over Dirty-Air Breach One of the corner towers of the Forbidden City, Beijing. Source: wikipedia. OO China State Media Criticizes Government For Lackluster Smog Efforts OO Renewables Revolution In China Within Reach, Says WWF ☼☼☼ BRIGHT IDEAS ☼☼☼ OO LEDs Much More Efficient For Greenhouses OO Using Genetics In Forest Conservation OO Ways To Harness The Waves are now several.
☼☼☼ BRIGHT DEVELOPMENTS ☼☼☼ OO 50% More US Coal Plants Are Slated For Earlier Retirement OO US Solar Power Continues To Get Cheaper Check it out here, right now! OO Clean Energy Boosts Remote Island Nations OO Billionaire Tom Steyer To Launch A $100 Million Climate Change Ad Campaign OO US Building Efficiency Revenues Outpaced Those For Clean Electricity In 2013 OO Energy Efficiency Is Credited For A Milestone Drop In Midwest Energy Demand OO India's Schools Switch To Solar Power OO US & China To Speed Up Cooperation On Climate Change For more news on green technology, click here. OO Latin America Is Emerging As The Next Green Tech Hub OO Texas Now Hosts One of the Largest US Solar Workforces OO CA And Nevada Projects Will Create More Solar Jobs The flag flies, May 21, 2013, at Moore, OK, the day after a huge tornado demolished it. Credit Major Jon Quinlan/defenseimagery.mil ***** US Climate Change News ***** OO Coal, Oil Trains To Clog Western Rails, Drive Up Rail Rates by 2023 OO Warming US Weather Helps Create Killer Avalanches OO Rising Sea Levels Threaten Los Angeles "You mean you're not going to fight for MY future because of YOUR pension?" Credit: Ajsmen91 at Wikimedia Commons. OO Parents: Your Pensions Are Screwing Up Your Children's Future OO Drought Threatens California Wildlife drying up fisheries, and causing animals to migrate in desperate search for food. Further drought could cause irreversible harm. OO Switch to Gas From Coal Can Threaten Water Supply as the huge needs of fracking collide with drought in many US states. <><><><> FOSSIL FUEL FOLLIES <><><><> OO West Virginia: Blackwater, A Type Of Coal Waste, Now Leaking Into Creek OO Massachusetts: Public Not Told: Derailed Gas Train Car Could Have Burned Their Town What can happen when a propane train derails, as this one did in Wisconsin in 1996.
Source: www.weyauwegachamber.com/history OO Texas: Fracking Boom Spews Toxic Air Emissions On Residents OO The Year 2013 In US Fossil Fuel Disasters ☼☼☼ Acting Like You Care: The Keystone XL pipeline will make possible far more climate change, but President Obama might okay it anyway. Credo, 350.org and others are asking people to stand up and be counted as nonviolent resisters or help in other ways. I did. If you ever wanted to do something big for your future, now's your chance - here. <><><><> GOPPING IT UP <><><><> Even elephants can't survive on oil. Credit: Mary Ellen Harte Via the University of East Anglia Climate Change webpage <<<< Climate Change Round the World >>>> OO The Symptoms Of Climate Change Are Spreading And Deadly as extreme weather eases the spread of a variety of pathogens throughout the world. OO Extreme Weather Could Derail Developing Country Growth - report OO Coffee Shortage May Arise Due To Drought, Climate Change, Rising Demand say analysts. An extreme drought in Brazil has forced 140 cities there to ration water. A luxury in the future? An extreme drought in Brazil has destroyed much of the coffee crop, sending coffee prices soaring. Source: wikipedia. OO Brazil Drought Drying Out Cities, Crops, With Worse to Come say scientists. OO UK Storms 'Have Changed Coastline Forever' say authorities. OO Australian Cities Already Hotter Than 2030 Forecasts OO Canada: Unreported Spills, Alarmed Communities Along 400+Mi Enbridge Pipeline OO Ireland: Extreme Weather Leaves Farmers And Fishermen On Brink Of Ruin OO Slovenia Puts Freak Ice Storm Damage To Woods At 194 Million Euro A rarity: a tornado in winter, recently spotted in Illinois. Note snowy foreground. Credit: Dana Cottingham Fricke OO Extreme Weather Increases UK Winter Lightning and a winter tornado is seen in the US, which meteorologist Paul Douglas has never seen before in his 30+ years of watching the weather, he tells me.
OO Report: China Outspent US on Smart Grid in 2013 OO Climate Change Could Worsen Sahel Conflicts: UN Ten Global Warming Indicators. Credit NOAA (((((((( Seeking the Science )))))))) OO We Can't Geoengineer Our Way Out Of Global Warming, say researchers in a new study; even IF we could build the machines needed and had the money to run them, we'd have to run them 500-1,000 years without disruption, through social unrest, other environmental disasters, or any other threat to our civilization. Look back at our history: how likely is this to happen? OO Arctic Temperatures Could Increase 23F By 2100 - very likely ensuring the destruction of Earth's Arctic thermal shield, the summer Arctic ice. A large iceberg breaks off from the Antarctic Pine Island glacier, which is melting rapidly. Credit NASA OO Melting Antarctic Glacier Passes Tipping Point, Will Keep Raising Seas - the glacier's history shows that once started, the melting persists for decades to centuries. OO Wildlands Absorb More Carbon Pollution Than Croplands - Study OO Emptying the Seas: Overfishing Top Down, Acidifying Bottom Up - the marine food chain, that is. Credit Aaron Escobar via Wikimedia Commons OO Tree Roots Help The Ground Store Carbon by physically breaking up rocks, accelerating the chemical changes that allow carbon to be absorbed by rocks and new soil, a new study indicates. @@@ Climate Change in the Media @@@ @@ LIVING ON EARTH @@ PRI's Environmental News Magazine covers climate change and other fascinating topics as well. Check it out! @@ Powering The Planet - Earth: The Operators' Manual This uplifting, informative and compelling program presents 8 fast-paced case studies and stories about nations and communities transitioning from fossil fuels to renewable or low-carbon sources of energy. It covers all the important bases: what we stand to lose with business as usual, but what we stand to gain by switching! The "Can We Do It?"
section reminds us that we can, pointing out how we overcame an equally great challenge - sanitation - with indoor plumbing and sewer systems. One of 3 parts. Plan a party or just download it and watch it yourself, then spread the word! @@ Weather Nation Meteorologist Paul Douglas Explains How Climate Matters - A meteorologist who's watched weather for decades, he is able to show the connection between the weather and climate change. Check out his video series now! @@ This Is NOT Cool Climate Change youtube series by Peter Sinclair can be found at his Climate Denial Crock of the Week playlist here. Hard-hitting shorts with great graphic imagery send the points home. !!!! Want to Save Energy & Help the Planet? Check out these helpful EPA Climate Change youtubers here! !!!! @@ At Last, a Green Radio Show to counter the Lush Bimbaugh wasteland. Check out On the Green Front with Betsy Rosenberg at Progressive Radio Network to keep updated on climate change, and other green topics. Every day is Earth Day, folks, as I was reminded by these starfishes I photographed one spring. Making the U.S. a global clean energy leader will ensure a heck of a lot more jobs, and a clean, safe future. If you'd like to join the increasing numbers of people who want to TELL Congress that they will vote for clean energy candidates you can do so here. It's our way of letting Congress know there's a strong clean energy voting bloc out there. For more detailed summaries of the above and other climate change items, audio podcasts and texts are freely available.
Why would it be important to detect water movement and vibrations? Rheotaxis is an innate behavioral response by an organism to a directional stimulus or a gradient of stimulus intensity. Rheotaxis is seen in fish, which will turn to face into an oncoming current (positive rheotaxis). The opposite of that is negative rheotaxis, where a fish orients itself away from a current or oscillations in the water. Negative rheotaxis can be seen in salmon that migrate downstream. Detection of water movement and vibration ... The solution answers the question completely and provides further background information. As well, the solution provides some references so that the student may fully understand the problem.
All aboard for the Southern Ocean Mission! Over six months after returning from the Southern Ocean, scientists from the SOCLIM project are sharing their experience through a web series that tells the story of their adventure aboard the Marion Dufresne. The series' nine episodes explain what was at stake in the mission and share the researchers’ day-to-day lives during the 27-day expedition. Check out the first episode of the web series! 27 days, 12,000 kilometres In autumn of 2016, a team of scientists from the French National Centre for Scientific Research (CNRS) and Pierre and Marie Curie University (UPMC), supported by the French Polar Institute Paul-Emile Victor (IPEV), set sail on the Marion Dufresne as part of the SOCLIM project, supported by the BNP Paribas Foundation. The researchers’ mission? To cross the turbulent waters of the Southern Ocean in order to better understand how the ocean works and how it influences the climate. For a month, Universcience.tv teams followed the SOCLIM project scientists aboard the Marion Dufresne, documenting everything from the scientists’ life and work on board and the challenges they face, to discoveries on the Kerguelen Islands and the climate conditions they encounter. To see all this and more, tune in every week for a new episode of the Southern Ocean Mission web series! Episode #1 - The Southern Ocean and climate: the stakes for SOCLIM In the first episode, Southern Ocean Mission scientists reflect on the stakes of the expedition and the adventure that awaits them before they set sail on the Marion Dufresne. “Of course, what we are looking for is an answer to the questions we are asking ourselves, notably about the role of the Southern Ocean in absorbing CO2. But we are also interested in understanding the underlying mechanisms, the different processes that transfer atmospheric CO2 to the bottom of the ocean where it is stored.
”
Help: A2 Maths - Thread Starter - 13-12-2003 21:53

1. Find the values of a, b and c for which the function has all the following properties:
- it is self-inverse
- its range is the set of all real numbers except 3
- its graph passes through (2, -2)

2. Find the values of a, b and c for which the graph of y = f(x) can be obtained from that of y = 1/x by a translation of 1 unit parallel to the x-axis followed by a translation of 3 units parallel to the y-axis.

Please write down your solution, thanks.
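The post never reproduces the definition of f(x); a common form in this exercise family is f(x) = (ax + b)/(x + c), and under that assumption the candidate answers (a, b, c) = (3, -4, -3) for question 1 and (3, -2, -1) for question 2 can be sanity-checked numerically. Treat both the assumed form and these values as hypotheses, not the thread's confirmed answer:

```python
# Hedged check: assumes f(x) = (a*x + b)/(x + c), which the original post
# does not state explicitly -- the form and the answers are candidates only.

def make_f(a, b, c):
    return lambda x: (a * x + b) / (x + c)

# Question 1 candidate: a=3, b=-4, c=-3, i.e. f(x) = (3x - 4)/(x - 3).
f = make_f(3, -4, -3)
assert abs(f(2) - (-2)) < 1e-12       # graph passes through (2, -2)
for x in [0.5, 1.7, 4.0, 10.0]:       # self-inverse: f(f(x)) == x
    assert abs(f(f(x)) - x) < 1e-9
# The range excludes y = 3 because the horizontal asymptote is y = a = 3.

# Question 2 candidate: shifting y = 1/x right by 1 and up by 3 gives
# 1/(x - 1) + 3 = (3x - 2)/(x - 1), i.e. a=3, b=-2, c=-1.
g = make_f(3, -2, -1)
for x in [0.5, 2.0, 5.0]:
    assert abs(g(x) - (1 / (x - 1) + 3)) < 1e-9

print("both candidates check out")
```

The self-inverse condition forces a = -c here, and the excluded value of the range pins down the horizontal asymptote y = a, which is why the remaining coefficient follows from the single point (2, -2).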
Heat capacity ratio

In thermal physics and thermodynamics, the heat capacity ratio, also called the adiabatic index, the ratio of specific heats, or the Poisson constant, is the ratio of the heat capacity at constant pressure (CP) to the heat capacity at constant volume (CV). It is sometimes also known as the isentropic expansion factor and is denoted by γ (gamma) for an ideal gas or κ (kappa), the isentropic exponent, for a real gas. The symbol gamma is used by aerospace and chemical engineers.

γ = CP / CV = cP / cV

where C is the heat capacity, and c the specific heat capacity (heat capacity per unit mass), of a gas. The suffixes P and V refer to constant-pressure and constant-volume conditions respectively.

Heat capacity ratio for various gases

| Temp. | Gas | γ | Temp. | Gas | γ | Temp. | Gas | γ |
| −181 °C | H2 | 1.597 | 200 °C | Dry air | 1.398 | 20 °C | NO | 1.400 |
| −76 °C | | 1.453 | 400 °C | | 1.393 | 20 °C | N2O | 1.310 |
| 20 °C | | 1.410 | 1000 °C | | 1.365 | −181 °C | N2 | 1.470 |
| 100 °C | | 1.404 | | | | 15 °C | | 1.404 |
| 400 °C | | 1.387 | 0 °C | CO2 | 1.310 | 20 °C | Cl2 | 1.340 |
| 1000 °C | | 1.358 | 20 °C | | 1.300 | −115 °C | CH4 | 1.410 |
| 2000 °C | | 1.318 | 100 °C | | 1.281 | −74 °C | | 1.350 |
| 20 °C | He | 1.660 | 400 °C | | 1.235 | 20 °C | | 1.320 |
| 20 °C | H2O | 1.330 | 1000 °C | | 1.195 | 15 °C | NH3 | 1.310 |
| 100 °C | | 1.324 | 20 °C | CO | 1.400 | 19 °C | Ne | 1.640 |
| 200 °C | | 1.310 | −181 °C | O2 | 1.450 | 19 °C | Xe | 1.660 |
| −180 °C | Ar | 1.760 | −76 °C | | 1.415 | 19 °C | Kr | 1.680 |
| 20 °C | | 1.670 | 20 °C | | 1.400 | 15 °C | SO2 | 1.290 |
| 0 °C | Dry air | 1.403 | 100 °C | | 1.399 | 360 °C | Hg | 1.670 |
| 20 °C | | 1.400 | 200 °C | | 1.397 | 15 °C | C2H6 | 1.220 |
| 100 °C | | 1.401 | 400 °C | | 1.394 | 16 °C | C3H8 | 1.130 |

To understand this relation, consider the following thought experiment. A closed pneumatic cylinder contains air. The piston is locked. The pressure inside is equal to atmospheric pressure. This cylinder is heated to a certain target temperature. Since the piston cannot move, the volume is constant. The temperature and pressure will rise. When the target temperature is reached, the heating is stopped.
The amount of energy added equals CVΔT, with ΔT representing the change in temperature. The piston is now freed and moves outwards, stopping as the pressure inside the chamber reaches atmospheric pressure. We assume the expansion occurs without exchange of heat (adiabatic expansion). Doing this work, air inside the cylinder will cool to below the target temperature. To return to the target temperature (still with a free piston), the air must be heated, but is no longer under constant volume, since the piston is free to move as the gas is reheated. This extra heat amounts to about 40% more than the previous amount added. In this example, the amount of heat added with a locked piston is proportional to CV, whereas the total amount of heat added is proportional to CP. Therefore, the heat capacity ratio in this example is 1.4. Another way of understanding the difference between CP and CV is that CP applies if work is done to the system, which causes a change in volume (such as by moving a piston so as to compress the contents of a cylinder), or if work is done by the system, which changes its temperature (such as heating the gas in a cylinder to cause a piston to move). CV applies only if P dV – that is, the work done – is zero. Consider the difference between adding heat to the gas with a locked piston and adding heat with a piston free to move, so that pressure remains constant. In the second case, the gas will both heat and expand, causing the piston to do mechanical work on the atmosphere. The heat that is added to the gas goes only partly into heating the gas, while the rest is transformed into the mechanical work performed by the piston. In the first, constant-volume case (locked piston) there is no external motion, and thus no mechanical work is done on the atmosphere; CV is used. 
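The arithmetic of the thought experiment can be sketched directly, assuming air behaves as an ideal diatomic gas with CV = (5/2)R and CP = (7/2)R per mole (an idealisation, not measured data):

```python
# Sketch of the piston thought experiment for air, modelled as an ideal
# diatomic gas: Cv = (5/2)R and Cp = (7/2)R per mole (an idealisation).
R = 8.314          # J/(mol*K), gas constant
n = 1.0            # mol of air
dT = 100.0         # K, temperature rise toward the target

Cv = 2.5 * n * R   # heat capacity with the piston locked (constant volume)
Cp = 3.5 * n * R   # heat capacity with the piston free (constant pressure)

Q_locked = Cv * dT   # heat added in the locked-piston step
Q_total = Cp * dT    # total heat needed at constant pressure

gamma = Q_total / Q_locked
extra = gamma - 1    # the "about 40% more" heat mentioned above

print(round(gamma, 2), f"{extra:.0%}")  # 1.4 40%
```

The ratio is independent of dT and n, which is why the thought experiment pins down γ itself rather than any particular quantity of heat.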
In the second case, additional work is done as the volume changes, so the amount of heat required to raise the gas temperature (the specific heat capacity) is higher for this constant-pressure case.

Ideal gas relations

For an ideal gas, the heat capacity is constant with temperature. Accordingly, we can express the enthalpy as H = CPT and the internal energy as U = CVT. Thus, it can also be said that the heat capacity ratio is the ratio of the enthalpy to the internal energy:

γ = H / U = CP / CV

Furthermore, the heat capacities can be expressed in terms of the heat capacity ratio (γ) and the gas constant (R):

CP = γnR / (γ − 1) and CV = nR / (γ − 1)

where n is the amount of substance in moles. Mayer's relation allows one to deduce the value of CV from the more commonly tabulated value of CP:

CV = CP − nR

Relation with degrees of freedom

The heat capacity ratio (γ) for an ideal gas can be related to the degrees of freedom (f) of a molecule by

γ = 1 + 2/f

Thus we observe that for a monatomic gas, with 3 degrees of freedom,

γ = 5/3 ≈ 1.67

while for a diatomic gas, with 5 degrees of freedom (at room temperature: 3 translational and 2 rotational degrees of freedom; the vibrational degree of freedom is not involved, except at high temperatures),

γ = 7/5 = 1.4

For example, terrestrial air is primarily made up of diatomic gases (around 78% nitrogen (N2) and 21% oxygen (O2)), and at standard conditions it can be considered to be an ideal gas. The above value of 1.4 is highly consistent with the measured adiabatic indices for dry air within a temperature range of 0–200 °C, exhibiting a deviation of only 0.2% (see table above). As temperature increases, higher-energy rotational and vibrational states become accessible to molecular gases, thus increasing the number of degrees of freedom and lowering γ.
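The degrees-of-freedom formula is easy to check against the measured values quoted in the table above (He ≈ 1.660 at 20 °C, N2 ≈ 1.404 at 15 °C):

```python
# gamma = 1 + 2/f for an ideal gas with f active degrees of freedom,
# compared against measured table values (He and N2) quoted above.
def gamma(f):
    return 1 + 2 / f

monatomic = gamma(3)  # 3 translational dof -> 5/3
diatomic = gamma(5)   # 3 translational + 2 rotational dof -> 7/5

print(round(monatomic, 3))  # 1.667, vs 1.660 measured for He at 20 C
print(round(diatomic, 3))   # 1.4, vs 1.404 measured for N2 at 15 C
```

The small residual gap for real gases comes from non-ideal behaviour and, at higher temperatures, the gradual activation of vibrational modes, which the fixed-f formula ignores.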
For a real gas, both CP and CV increase with increasing temperature, while continuing to differ from each other by a fixed constant (as above, CP = CV + nR), which reflects the relatively constant PV difference in work done during expansion for constant-pressure vs. constant-volume conditions. Thus, the ratio of the two values, γ, decreases with increasing temperature. For more information on mechanisms for storing heat in gases, see the gas section of specific heat capacity.

Values based on approximations (particularly CP − CV = nR) are in many cases not sufficiently accurate for practical engineering calculations, such as flow rates through pipes and valves. An experimental value should be used rather than one based on this approximation, where possible. A rigorous value for the ratio CP/CV can also be calculated by determining CV from the residual properties, expressed as

CV − CVideal = T ∫ (∂²P/∂T²)V dV (integrated from infinite volume to the volume of interest)

Values for CP are readily available and recorded, but values for CV need to be determined via relations such as these. See relations between specific heats for the derivation of the thermodynamic relations between the heat capacities. The above definition is the approach used to develop rigorous expressions from equations of state (such as Peng–Robinson), which match experimental values so closely that there is little need to develop a database of ratios or CV values. Values can also be determined through finite-difference approximation. For an isentropic process of an ideal gas,

p v^γ = constant

where p is pressure, and v is the gas specific volume.
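As a numeric illustration of the ideal-gas shortcut that the paragraph above warns about, γ for dry air can be estimated from a tabulated CP via Mayer's relation. The CP value used here (about 29.1 J/(mol·K) near room temperature) is a rounded textbook figure, not taken from this article:

```python
# Ideal-gas estimate of gamma for dry air from a tabulated Cp.
# Cp ~ 29.1 J/(mol*K) near room temperature is an assumed round value;
# as noted above, this shortcut is too crude for engineering work.
R = 8.314   # J/(mol*K), gas constant
Cp = 29.1   # J/(mol*K), molar heat capacity of dry air (approximate)

Cv = Cp - R        # Mayer's relation for one mole of ideal gas
gamma = Cp / Cv

print(round(gamma, 3))  # close to the tabulated 1.400 for air at 20 C
```

For real-gas work, the residual-property route sketched above replaces the constant nR offset with an equation-of-state integral, so γ correctly picks up its pressure and temperature dependence.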
Build Your Own ASP.NET Website This PDF tutorial is aimed at beginner, intermediate, and advanced Web designers, looking to build their first web application with ASP.NET. It's a free and complet training document for download under 183 pages Table of contents - Introduction to .NET and ASP.NET - What is .NET? - What is ASP.NET - ASP.NET Basics - Your First ASP.NET Page - VB.NET and C# Programming Basics - Variables and Variable Declaration - Object Oriented Programming Concepts - Working with HTML Controls - Processing a Simple Form - Introduction to Web Forms - Introduction to Web Controls - Formatting Controls with CSS - An Introduction to Databases - The Database Management System - An Introduction to ADO.NET - The DataGrid and DataList Controls - Using the DataList Control - Overview of ASP.NET Applications - Building an ASP.NET Shopping Cart - Security and User Authentication - XML Web Services - File Size: - 1,426.83 Kb - Submitted On: Take advantage of this course called Build Your Own ASP.NET Website to improve your Web development skills and better understand asp. This course is adapted to your level as well as all asp pdf courses to better enrich your knowledge. All you need to do is download the training document, open it and start learning asp for free. PHP5 web programming This PDF tutorial shows how to program a dynamic web site using PHP5 ,free training lesson under 24 pages designated to the beginners. Django Web Framework and Python Download free PDF tutorial about Django framework with Python, document under 40 page by Zhaojie Zhang. Document Object Model Tutorial Download free eBook about DOM, (Document Object Model), learn how to navigate an XML structure. Introduction to ASP With this tutorial you will learn how to create dynamic web pages with ASP ,a brief introduction in PDF under 8 pages designated to beginners. 
ASP.NET and Web programming This tutorial shows you the basics of ASP.NET programming, a free training document for download designed for intermediate-level users. Getting started with ASP.NET This tutorial guides you step by step in creating ASP.NET Web pages, a free training document of 62 pages by Erik Reitan.
<urn:uuid:a33adb90-d73c-44e7-9cb2-f5956dd97020>
2.53125
491
Product Page
Software Dev.
55.631314
95,589,201
Mechanisms linking plant productivity and water status for a temperate Eucalyptus forest flux site: Analysis over wet and dry years with a simple model - Publication Type: - Journal Article - Functional Plant Biology, 2008, 35 (6), pp. 493 - 508 - Issue Date: A simple process-based model was applied to a tall Eucalyptus forest site over consecutive wet and dry years to examine the importance of different mechanisms linking productivity and water availability. Measured soil moisture, gas flux (CO2, H2O) and meteorological records for the site were used. Similar levels of simulated H2O flux in 'wet' and 'dry' years were achieved when water availability was not confined to the first 1.20 m of the soil profile, but was allowed to exceed it. Although the simulated effects of low soil and atmospheric water content on CO2 flux, presumably via reduction in stomatal aperture, also acted on transpiration, they were offset in the dry year by a higher vapour-pressure deficit. A sensitivity analysis identified the processes that were important in wet versus dry years, and on an intra-annual timeframe. Light-limited productivity dominated in both years, except for the driest period in the dry year. Vapour-pressure deficit affected productivity across more of each year than soil moisture, but both effects were larger in the dry year. The introduction of a reduced leaf area tended to decrease sensitivity in the dry year. Plant hydraulic architecture that increases plant available water, maximises productivity per unit water use and achieves lower sensitivity to low soil moisture levels should minimise production losses during dry conditions. © CSIRO 2008.
<urn:uuid:a345931f-6333-4e49-9fde-cb84c834c206>
2.59375
359
Academic Writing
Science & Tech.
24.150343
95,589,221
The Ruby rescue clause is used along with begin and end to define blocks of code that handle exceptions. For example:

begin
  puts 10 / 0
rescue
  puts "You caused an error!"
end

Here, begin...end defines a section where, if an exception arises, it is handled by the code inside the rescue block. Ten divided by zero raises an exception of class ZeroDivisionError. Because the division happens inside a block containing a rescue section, the exception is handled by the code inside that rescue section. Rather than exiting with a ZeroDivisionError, the program prints "You caused an error!" to the screen.
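A rescue clause can also name the specific exception class it handles and capture the exception object with the `=> e` syntax. A minimal sketch extending the example above; the `safe_divide` helper is illustrative, not part of the original text:

```ruby
# Hypothetical helper illustrating class-specific rescue.
def safe_divide(a, b)
  begin
    a / b
  rescue ZeroDivisionError => e
    # Only ZeroDivisionError is caught here; e.message describes the error.
    "Error: #{e.message}"
  end
end

puts safe_divide(10, 2)   # prints 5
puts safe_divide(10, 0)   # prints the rescue message instead of crashing
```

Naming the class keeps unrelated exceptions (say, a TypeError) from being silently swallowed by a bare rescue.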
<urn:uuid:67ac7971-4ee1-4db1-94fc-c5f5a3107031>
3.796875
139
Documentation
Software Dev.
62.582714
95,589,242
In essence, and due to the seasonal shift of the subtropical high-pressure belts with the apparent movement of the Sun, a Mediterranean climate is an intermediate type between these other climates, with winters somewhat mimicking winters in oceanic climates and summers imitating dry seasons in semi-arid and arid climates. But contrary to oceanic climates, there are always a number of clear, sunny days in the wet season. The resulting vegetation of Mediterranean climates is the garrigue in the Mediterranean Basin, the chaparral in California, the fynbos in South Africa and the Chilean scrubland in Chile. Areas with this climate are where the so-called "Mediterranean trinity" has traditionally developed: wheat, vine and olive. Under the Köppen climate classification, "hot dry-summer" climates (classified as Csa) and "cool dry-summer" climates (classified as Csb) are often referred to as "Mediterranean". Under the Köppen climate system, the first letter indicates the climate group (in this case temperate climates). Temperate climates or "C" zones have an average temperature above 0 °C (32 °F), but below 18 °C (64 °F), in their coolest months. The second letter indicates the precipitation pattern ("s" represents dry summers). Köppen has defined a dry summer month as a month with less than 30 mm (1.2 in) of precipitation and with less than one-third that of the wettest winter month. Some, however, use a 40 mm (1.6 in) level. The third letter indicates the degree of summer heat: "a" represents an average temperature in the warmest month above 22 °C (72 °F), while "b" indicates the average temperature in the warmest month below 22 °C (72 °F). Under the Köppen classification, dry-summer climates (Csa, Csb) usually occur on the western sides of continents.
Csb zones in the Köppen system include areas normally not associated with Mediterranean climates but with Oceanic climates, such as much of the Pacific Northwest, much of southern Chile, parts of west-central Argentina, and parts of New Zealand. Additional highland areas in the subtropics also meet Cs requirements, though they, too, are not normally associated with Mediterranean climates, as do a number of oceanic islands such as Madeira, the Juan Fernández Islands, the western part of the Canary Islands, and the eastern part of the Azores. Under Trewartha's modified Köppen climate classification, the two major requirements for a Cs climate are revised. Under Trewartha's system, at least eight months must have average temperatures of 10 °C (50 °F) or higher (subtropical), and the average annual precipitation must not exceed 900 mm (35 in). Thus, under this system, many Csb zones in the Köppen system become Do (temperate oceanic), and the rare Csc zones become Eo (subpolar oceanic), with only the classic dry-summer to warm winter, low annual rainfall locations included in the Mediterranean type climate. During summer, regions of Mediterranean climate are strongly influenced by cold ocean currents which keep the weather in the region very dry, stable, and pleasant. Similar to desert climates, in many Mediterranean climates there is a strong diurnal character to daily temperatures in the warm summer months due to strong heating during the day from sunlight and rapid cooling at night. In winter, Mediterranean climate zones are no longer influenced by the cold ocean currents and therefore warmer water settles near land and causes clouds to form and rainfall becomes much more likely. As a result, areas with this climate receive almost all of their precipitation during their winter and spring seasons, and may go anywhere from 3 to 6 months during the summer without having any significant precipitation. 
In the lower latitudes, precipitation usually decreases in both the winter and summer because they are closer to the Horse latitudes, thus bringing smaller amounts of rain. Toward the polar latitudes, total moisture usually increases; the Mediterranean climate in Southern Europe has more rain. The rainfall also tends to be more evenly distributed throughout the year in Southern Europe, while in the Eastern Mediterranean (the Levant) and in Southern California the summer is nearly or completely dry and the dry season is most severe. In places where evapotranspiration is higher, steppe climates tend to prevail, but still follow the weather pattern of the Mediterranean climate. The majority of the regions with Mediterranean climates have relatively mild winters and very warm summers. However, winter and summer temperatures can vary greatly between different regions with a Mediterranean climate. For instance, in the case of winters, Valencia and Los Angeles experience mild temperatures in the winter, with frost and snowfall almost unknown, whereas Tashkent has colder winters with annual frosts and snowfall. Considering summer, Athens experiences rather high temperatures in that season (48 °C (118 °F) has been measured in nearby Eleusis). In contrast, San Francisco has cool summers with daily highs around 21 °C (70 °F) due to the continuous upwelling of cold subsurface waters along the coast. Because most regions with a Mediterranean climate are near large bodies of water, temperatures are generally moderate with a comparatively small range of temperatures between the winter low and summer high (although the daily range of temperature during the summer is large due to dry and clear conditions, except along the immediate coasts). Temperatures during winter only occasionally fall below the freezing point and snow is generally seldom seen. In the summer, the temperatures range from mild to very hot, depending on distance from a large body of water, elevation, and latitude.
Even in the warmest locations with a Mediterranean-type climate, however, temperatures usually do not reach the highest readings found in adjacent desert regions because of cooling from water bodies, although strong winds from inland desert regions can sometimes boost summer temperatures, quickly increasing the risk of wildfires. As in every climatological domain, the highland locations of the Mediterranean domain can present cooler temperatures in winter than the lowland areas, temperatures which can sometimes prohibit the growth of typical Mediterranean plants. Some Spanish authors opt to use the term "Continental Mediterranean climate" for some regions with lower temperature in winter than the coastal areas (direct translation from Clima Mediterráneo Continentalizado), but most climate classifications (including Köppen's Cs zones) show no distinction. Additionally, the temperature and rainfall pattern for a Csa or even a Csb climate can exist as a microclimate in some high-altitude locations adjacent to a rare tropical As (tropical savanna climate with dry summers, typically in a rainshadow region). These have a favourable climate with mild wet winters and fairly warm, dry summers. The Mediterranean forests, woodlands, and scrub biome is closely associated with Mediterranean climate zones, as are unique freshwater communities. Particularly distinctive of the climate are sclerophyll shrublands, called maquis in the Mediterranean Basin, chaparral in California, matorral in Chile, fynbos in South Africa, and mallee and kwongan shrublands in Australia. Aquatic communities in Mediterranean climate regions are adapted to a yearly cycle in which abiotic (environmental) controls of stream populations and community structure dominate during floods, biotic controls (e.g. competition and predation) become increasingly important as the discharge declines, and environmental controls regain dominance as environmental conditions become very harsh (i.e.
hot and dry); as a result, these communities are well suited to recover from droughts, floods, and fires. Aquatic organisms in these regions show distinct long-term patterns in structure and function, and are also highly sensitive to the effects of climate change. The native vegetation of Mediterranean climate lands must be adapted to survive long, hot summer droughts and prolonged wet periods in winter. Much native vegetation in Mediterranean climate valleys has been cleared for agriculture. In places such as the Sacramento Valley and Oxnard Plain in California, draining marshes and estuaries combined with supplemental irrigation has led to a century of intensive agriculture. Much of the Overberg in the southern Cape of South Africa, once covered with renosterveld, has likewise been largely converted to agriculture, mainly wheat. In hillside and mountainous areas, away from urban sprawl, ecosystems and habitats of native vegetation are more sustained. This subtype of the Mediterranean climate (Csa) is the most common form of the Mediterranean climate, and it is therefore also known as a “typical Mediterranean climate”. As stated earlier, regions with this form of the Mediterranean climate experience average monthly temperatures in excess of 22.0 °C (71.6 °F) during the warmest month and an average in the coldest month between 18 and −3 °C (64 and 27 °F) or, in some applications, between 18 and 0 °C (64 and 32 °F). Also, at least four months must average above 10 °C (50 °F). Regions with this form of the Mediterranean climate typically experience hot, sometimes very hot and dry summers and mild, wet winters. In a number of instances, summers here can closely resemble summers seen in arid and semi-arid climates. However, high temperatures during summers are generally not quite as high as those in arid or semiarid climates due to the presence of a large body of water.
All areas with this subtype have wet winters. However, some areas with a hot Mediterranean subtype can actually experience very chilly winters, with occasional snowfall. Occasionally also termed “cool-summer Mediterranean climate”, this subtype of the Mediterranean climate (Csb) is the less common form of the Mediterranean climate. Cool ocean currents and upwelling are often the reason for this cooler type of Mediterranean climate. As stated earlier, regions with this subtype of the Mediterranean climate experience warm (but not hot) and dry summers, with no average monthly temperatures above 22 °C (72 °F) during the warmest month, and an average in the coldest month between 18 and −3 °C (64 and 27 °F) or, in some applications, between 18 and 0 °C (64 and 32 °F). Also, at least four months must average above 10 °C (50 °F). Winters are rainy and can be mild to chilly. In a few instances, snow can fall on these areas. Precipitation occurs in the colder seasons, but there are a number of clear sunny days even during the wetter seasons. (Map caption: Distribution of the relatively rare cold-summer Mediterranean climate (Köppen type Csc) in Washington, Oregon and California.) The cold-summer subtype of the Mediterranean climate (Csc) is rare and predominantly found at scattered high-altitude locations along the west coasts of North and South America. This type is characterized by cool summers, with fewer than four months with a mean temperature at or above 10 °C (50 °F), as well as by mild winters, with no winter month having a mean temperature below 0 °C (32 °F) or −3 °C (27 °F), depending on the isotherm used. Regions with this climate are influenced by the dry-summer trend that extends considerably poleward along the west coast of the Americas, as well as the moderating influences of high altitude and relative proximity to the Pacific Ocean.
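The Cs criteria quoted above (the temperate "C" bounds, the 30 mm and one-third dry-summer rules, the 22 °C a/b split, and the four-months-above-10 °C test that separates Csc) can be collected into a small classifier. A minimal sketch, assuming a Northern Hemisphere station (so "summer" is a fixed set of month indices) and synthetic illustrative data:

```python
# Hedged sketch of the Köppen dry-summer ("Cs") tests stated in the text.
# The "summer" month indices and the sample data are assumptions.

def koppen_cs_subtype(temps_c, precip_mm, summer=(5, 6, 7, 8)):
    """Classify 12 monthly means as 'Csa', 'Csb', 'Csc', or None."""
    coolest, warmest = min(temps_c), max(temps_c)
    # "C" group: coolest month above 0 °C (32 °F) but below 18 °C (64 °F)
    if not (0 < coolest < 18):
        return None
    driest_summer = min(precip_mm[m] for m in summer)
    wettest_winter = max(precip_mm[m] for m in range(12) if m not in summer)
    # "s": driest summer month under 30 mm and under one-third of the
    # wettest winter month
    if not (driest_summer < 30 and driest_summer < wettest_winter / 3):
        return None
    if sum(1 for t in temps_c if t >= 10) < 4:
        return "Csc"  # cold-summer subtype: fewer than four months >= 10 °C
    return "Csa" if warmest >= 22 else "Csb"

# Synthetic hot-summer Mediterranean profile
temps = [10, 11, 13, 15, 18, 22, 25, 25, 22, 18, 14, 11]
rain = [80, 70, 60, 50, 30, 10, 5, 5, 20, 50, 70, 90]
print(koppen_cs_subtype(temps, rain))  # → Csa
```

A Southern Hemisphere station would simply pass different `summer` indices; a real classifier would also handle the alternative −3 °C isotherm and the 40 mm variant mentioned in the text.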
In North America, areas with a Csc climate can be found in the Olympic, Cascade, Klamath, and Sierra Nevada ranges in Washington, Oregon and California. These locations are found at high altitude, near lower-altitude regions characterized by a warm-summer Mediterranean climate (Csb) or hot-summer Mediterranean climate (Csa). A rare instance of this climate occurs in the tropics, on Haleakalā Summit in Hawaii. In South America, Csc regions can be found along the Andes in Chile and Argentina. The town of Balmaceda is one of the few towns confirmed to have this climate.
<urn:uuid:35fcf2e1-8a43-4a11-b179-8cc3262bc09b>
3.921875
2,521
Knowledge Article
Science & Tech.
26.668792
95,589,246
The word calixarene is derived from calix or chalice, because this type of molecule resembles a vase, and from the word arene, which refers to the aromatic building block. Calixarenes have hydrophobic cavities that can hold smaller molecules or ions and belong to the class of cavitands known in host-guest chemistry. Calixarene nomenclature is straightforward and involves counting the number of repeating units in the ring and including it in the name. A calix[4]arene has 4 units in the ring and a calix[6]arene has 6. A substituent in the meso position Rb is added to the name with a prefix C-, as in C-methylcalixarene. The aromatic components are derived from phenol, resorcinol, or pyrogallol. For phenol, the aldehyde most often used is simple formaldehyde, while larger aldehydes, like acetaldehyde, are usually required in condensation reactions with resorcinol and pyrogallol. The chemical reaction qualifies as electrophilic aromatic substitution, followed by an elimination of water, and then a second aromatic substitution. The reaction is catalyzed by acids or bases. Calixarenes are difficult to produce because random polymerization occurs, yielding complex mixtures of linear and cyclic oligomers with different numbers of repeating units. With finely tuned starting materials and reaction conditions, synthesis can also be surprisingly facile. In 2005, researchers produced a pyrogallolarene simply by mixing a solvent-free dispersion of isovaleraldehyde with pyrogallol, and a catalytic amount of p-toluenesulfonic acid, in a mortar and pestle. Calixarenes are sparingly soluble as parent compounds and melt at high temperatures compared to other crystalline solids. Calixarenes are characterised by a three-dimensional basket, cup or bucket shape. In calixarenes the internal volume is around 10 cubic angstroms. Calixarenes are characterised by a wide upper rim, a narrow lower rim and a central annulus.
With phenol as a starting material, the 4 hydroxyl groups are intraannular on the lower rim. In a resorcinarene, 8 hydroxyl groups are placed extraannular on the upper rim. Calixarenes exist in different chemical conformations because rotation around the methylene bridge is not difficult. In calix[4]arene, 4 up-down conformations exist: cone (point group C2v, C4v), partial cone (Cs), 1,2-alternate (C2h) and 1,3-alternate (D2d). The 4 hydroxyl groups interact by hydrogen bonding and stabilize the cone conformation. This conformation is in dynamic equilibrium with the other conformations. Conformations can be locked in place with proper substituents replacing the hydroxyl groups which increase the rotational barrier. Alternatively, placing a bulky substituent on the upper rim also locks a conformation. The calixarene based on p-tert-butyl phenol is also a cone. Calixarenes are structurally related to the pillararenes. (Images: a calixarene with para-tert-butyl substituents, and a 3D representation of a cone conformation.) In 1872 Adolf von Baeyer mixed various aldehydes, including formaldehyde, with phenols in a strongly acidic solution. The resultant tars defied characterization but represented the typical products of a phenol/formaldehyde polymerization. Leo Baekeland discovered that these tars could be cured into a brittle substance which he marketed as “Bakelite”. This polymer was the first commercial synthetic plastic. The success of Bakelite spurred scientific investigations into the chemistry of the phenol/formaldehyde reaction. One result was the discovery made in 1942 by Alois Zinke that p-alkyl phenols and formaldehyde in a strongly basic solution yield mixtures containing cyclic tetramers. Concomitantly, Joseph Niederl and H. J. Vogel obtained similar cyclic tetramers from the acid-catalyzed reaction of resorcinol and aldehydes such as benzaldehyde.
A number of years later, John Cornforth showed that the product from p-tert-butylphenol and formaldehyde is a mixture of the cyclic tetramer and another, then-ambiguous cyclic oligomer. His interest in these compounds was in the tuberculostatic properties of their oxyethylated derivatives. In the early 1970s C. David Gutsche recognized the calix shape of the cyclic tetramer and thought that it might furnish the structure for building an enzyme xenologue. He initiated a study that lasted for three decades. His attention to these compounds came from acquaintance with the Petrolite company’s commercial demulsifiers, made by oxyethylation of the still ambiguous products from p-alkylphenols and formaldehyde. He introduced the name “calixarene”: from “calix”, the Greek name for a chalice, and “arene” for the presence of aryl groups in the cyclic array. He also determined the structures for the cyclic tetramer, hexamer, and octamer, along with procedures for obtaining these materials in good to excellent yields. He then established procedures for attaching functional groups to both the upper and lower rims and mapped the conformational states of these flexible molecules. Additionally, he proved that the cyclic tetramer can be frozen into a cone conformation by the addition of suitably large substituents to the lower rim of the calix shape. Concomitant with Gutsche’s work was that of Hermann Kämmerer and Volker Böhmer. They developed methods for the stepwise synthesis of calixarenes. Chemists of the University of Parma, Giovanni Andreetti, Rocco Ungaro and Andrea Pochini, were the first to resolve X-ray crystallographic images of calixarenes. In the mid 1980s, other groups of investigators joined the field of calixarene chemistry. It has become an important aspect of supramolecular chemistry and attracts the attention of hundreds of scientists around the world. The Niederl cyclic tetramers from resorcinol and aldehydes were studied in detail by Donald J.
Cram, who called the derived compounds “cavitands” and “carcerands”. An accurate and detailed history of the calixarenes, along with extensive discussion of calixarene chemistry, can be found in the 1989 publication (ref 1) as well as in the second edition of 2008.
Host–guest interactions
Some calixarenes are efficient sodium ionophores and are potentially useful in chemical sensors. Calixarenes are used in commercial applications as sodium selective electrodes for the measurement of sodium levels in blood. Calixarenes also form complexes with cadmium, lead, lanthanides and actinides. Calixarene and the C70 fullerene in p-xylene form a ball-and-socket supramolecular complex. Calixarenes also form exo-calix ammonium salts with aliphatic amines such as piperidine. Derivatives or homologues of calixarene exhibit highly selective binding behavior towards anions (especially halogen anions) with changes in optical properties such as fluorescence. Molecular self-assembly of resorcinarenes and pyrogallolarenes leads to larger supramolecular assemblies. Both in the crystalline state and in solution, they are known to form hexamers that are akin to certain Archimedean solids, with an internal volume of around one cubic nanometer (nanocapsules). (Isobutylpyrogallolarene)6 is held together by 48 intermolecular hydrogen bonds; the remaining 24 hydrogen bonds are intramolecular. The cavity is filled by a number of solvent molecules. Calixarenes in general, and calix[4]arenes more specifically, have been extensively used as molecular platforms to build up supramolecular catalysts. The design of this kind of catalyst consists of functionalizing the upper rim or the lower rim of calixarenes with ligands able to bind metal cations, notably Cu(II) or Zn(II), or other active functions. These compounds are active in the catalysis of hydrolytic reactions.
Calixarenes are of interest as enzyme mimetics, components of ion sensitive electrodes or sensors, selective membranes, non-linear optics and in HPLC stationary phases. In addition, in nanotechnology calixarenes are used as negative resists for high-resolution electron beam lithography. A tetrathiaarene is found to mimic some properties of the aquaporin proteins. This calixarene adopts a 1,3-alternate conformation (methoxy groups populate the lower ring) and water is not contained in the basket but grabbed by two opposing tert-butyl groups on the outer rim in a pincer. The nonporous and hydrophobic crystals are soaked in water for 8 hours, in which time the calixarene:water ratio nevertheless reaches one. Calixarenes accelerate reactions taking place inside the concavity by a combination of a local concentration effect and polar stabilization of the transition state. An extended resorcinarene cavitand is found to accelerate the reaction rate of a Menshutkin reaction between quinuclidine and butyl bromide by a factor of 1600. In heterocalixarenes the phenolic units are replaced by heterocycles, for instance by furans in calix[n]furans and by pyridines in calix[n]pyridines. Calixarenes have been used as the macrocycle portion of a rotaxane, and two calixarene molecules covalently joined together by the lower rims form carcerands. Calixarenes with XXYZ or WXYZ substitution patterns at the upper rim are inherently chiral and their enantiomers can be resolved by chiral column chromatography. Recently, inherently chiral calixarenes have been synthesised in good yields by asymmetric ortholithiation using a chiral oxazoline directing group. This removes the need for resolution techniques. - Gutsche, C. David (1989). Calixarenes. Cambridge: Royal Society of Chemistry. ISBN 0-85186-385-X. - IUPAC, Compendium of Chemical Terminology, 2nd ed. (the "Gold Book") (1997). Online corrected version: (1995) "Calixarenes". - Moss, G. P.; Smith, P. A. S.; Tavernier, D.
(1 January 1995). "Glossary of class names of organic compounds and reactivity intermediates based on structure (IUPAC Recommendations 1995)". Pure and Applied Chemistry. 67 (8-9): 1307–1375. doi:10.1351/pac199567081307. - Antesberger J, Cave GW, Ferrarelli MC, Heaven MW, Raston CL, Atwood JL (2005). "Solvent-free, direct synthesis of supramolecular nano-capsules". Chemical Communications (7): 892–894. doi:10.1039/b412251h. PMID 15700072. - McMahon G; O’Malley S; Nolan K; Diamond D (2003). "Important Calixarene Derivatives – their Synthesis and Applications". Arkivoc. Part (vii): 23–31. ISSN 1551-7012. - Atwood, Jerry L.; Barbour, Leonard J.; Heaven, Michael W.; Raston, Colin L. (2003). "Association and orientation of C70 on complexation with calixarene". Chemical Communications (18): 2270–2271. doi:10.1039/B306411P. PMID 14518869. - Nachtigall FF, Lazzarotto M, Braz FN (2002). "Interaction of Calixarene and Aliphatic Amines: A Combined NMR, Spectrophotometric and Conductimetric Investigation". Journal of the Brazilian Chemical Society. 13 (3): 295–299. doi:10.1590/S0103-50532002000300002. - Jin, Jaehyeok; Park, Ji Young; Lee, Yoon Sup (2016). "Optical Nature and Binding Energetics of Fluorescent Fluoride Sensor Bis(bora)calixarene and Design Strategies of Its Homologues". The Journal of Physical Chemistry C. 120 (42): 24324–24334. doi:10.1021/acs.jpcc.6b06729. ISSN 1932-7447. - Atwood JL, Barbour LJ, Jerga A (2002). "Organization of the interior of molecular capsules by hydrogen bonding". Proceedings of the National Academy of Sciences. 99 (8): 4837–41. Bibcode:2002PNAS...99.4837A. doi:10.1073/pnas.082659799. PMID 11943875. - Reactivity of carbonyl and phosphoryl groups at calixarenes. R. Cacciapaglia, S. Di Stefano, L. Mandolini and R. Salvio; Supramol. Chem.
2013, 25, 537-554. doi:10.1080/10610278.2013.824578 - Calixarenes and resorcinarenes as scaffolds for supramolecular metallo-enzyme mimicry. J.-N. Rebilly, O. Reinaud; Supramol. Chem. 2014, 1-27. doi:10.1080/10610278.2013.877137 - Hennrich, Gunther; Murillo, M. Teresa; Prados, Pilar; Song, Kai; Asselberghs, Inge; Clays, Koen; Persoons, André; Benet-Buchholz, Jordi; de Mendoza, Javier (2005). "Tetraalkynyl calixarenes with advanced NLO properties". Chemical Communications (21): 2747–2749. doi:10.1039/B502045J. PMID 15917941. - Thallapally PK, Lloyd GO, Atwood JL, Barbour LJ (2005). "Diffusion of water in a nonporous hydrophobic crystal". Angewandte Chemie International Edition. 44 (25): 3848–3851. doi:10.1002/anie.200500749. PMID 15892031. - Purse, BW; Gissot, A; Rebek Jr., J (2005). "A deep cavitand provides a structured environment for the Menshutkin reaction". Journal of the American Chemical Society. 127 (32): 11222–11223. doi:10.1021/ja052877+. PMID 16089433. - Subodh Kumar; Dharam Paul; Harjit Singh (2006). "Syntheses, structures and interactions of heterocalixarenes" (PDF). Arkivoc. 05-1699LU: 17–25.
<urn:uuid:b30d393e-e2c1-4de0-9411-6311a40fd7c3>
3.390625
3,312
Knowledge Article
Science & Tech.
36.322028
95,589,247
The flute-nosed bat Murina florium is a poorly known species that was first discovered in Australia at Mt Baldy State Forest on the Atherton Tablelands in north-eastern Queensland in 1981. Subsequently there have been few other documented records despite intensive harp trapping studies, with the species only recorded from an additional six localities up until December 1995. This study provides four new locality records for the species, including two records which extend the known southern range limits of M. florium by 150 km across the Herbert River discontinuity within the Wet Tropics bioregion. The broad habitat characteristics of all known localities for the species are reviewed and the paper presents the first account of this bat occurring in non-rainforest habitat. Occurrence of M. florium in this habitat is discussed using current knowledge of roosting and ecomorphology characteristics. A predicted distribution of M. florium based on the 11 locality records, is calculated using DOMAIN and 16 biophysical parameters.
<urn:uuid:05f4781f-7d68-40a7-9b6d-9c6733267636>
3.40625
229
Academic Writing
Science & Tech.
31.245202
95,589,253
Authors: Yannan Yang From the distribution characteristics of the magnetic field created by a current wire, we speculate that a current wire will experience force from a magnetic field nearby even if the current does not go through the field. From the same analysis, a similar effect should also exist for a moving charged particle, i.e., a moving charged particle will experience force from a nearby magnetic field, although the particle does not cut across the magnetic field. To prove the existence of this force, two experiments are performed and the results support our speculation. Considering the experimental results of this paper, if the relativistic transformation of field is universally valid, a motional electromotive force should be created in a neutral conductor moving through a space where there is no magnetic field, as long as there is a magnetic field around. Experimental designs are proposed to prove this motional electromotive force. Comments: 8 Pages. [v1] 2017-12-11 06:43:19 Unique-IP document downloads: 20 times Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary.
<urn:uuid:c462650f-821b-4797-9520-b2cdb091e350>
2.953125
347
Knowledge Article
Science & Tech.
37.365839
95,589,258
Mysterious bursts of radio waves identified far outside galaxy - July 16, 2014 Mysterious split-second pulses of radio waves are coming from deep in outer space, and nobody knows what causes them, according to astronomers. Researchers led by Laura Spitler from the Max Planck Institute for Radio Astronomy in Bonn, Germany say they have found the first so-called "fast radio burst" in the sky's northern hemisphere, using the Arecibo radio telescope in Puerto Rico. The mystery is reminiscent of that of gamma-ray bursts, discovered in the 1960s and now thought to come from giant stars collapsing to form black holes. The new phenomenon, in the form of radio rather than gamma rays (a different form of light), remains an enigma. The flashes last only a few thousandths of a second. Scientists using the Parkes Observatory in Australia had recorded such events before, but the lack of similar findings by other telescopes led to speculation that the Australian instrument might have been picking up signals from sources near Earth. The finding at Arecibo is the first detection using a different telescope: the burst came from the direction of the constellation Auriga in the northern sky, according to the scientists, who detail their findings in the July 10 online issue of The Astrophysical Journal. “There are only seven bursts every minute somewhere in the sky on average, so you have to be pretty lucky to have your telescope pointed in the right place at the right time,” said Spitler, the paper's lead author.
“The characteristics of the burst seen by the Arecibo telescope, as well as how often we expect to catch one, are consistent with the characteristics of the previously observed bursts from Parkes.” “The radio waves show every sign of having come from far outside our galaxy – a really exciting prospect,” added Victoria Kaspi of McGill University in Montreal, principal investigator for the pulsar-survey project that detected the burst. Possible causes, scientists said, include a range of exotic astrophysical objects, such as evaporating black holes, mergers of neutron stars, or flares from magnetars, a type of neutron star with extremely powerful magnetic fields. The pulse was detected on Nov. 2, 2012, at Arecibo, the world's largest and most sensitive single-dish radio telescope. The result confirms previous estimates that the bursts occur roughly 10,000 times a day over the whole sky, said the astronomers, who inferred the huge number by calculating how much sky was observed, and for how long, to make the few detections so far reported. The bursts appear to be coming from beyond the Milky Way galaxy based on measurements of an effect known as plasma dispersion. Pulses that travel through the cosmos are distinguished from man-made interference by the effect of electrons in space, which cause longer radio waves to travel more slowly. Source: http://www.world-science.net
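The plasma-dispersion test mentioned above is quantitative: free electrons along the line of sight delay lower frequencies more, with the delay scaling as the inverse square of the observing frequency times the dispersion measure (DM). A minimal sketch of that arithmetic (the DM value and frequencies below are illustrative assumptions, not figures from the article):

```python
# Dispersion delay between two observing frequencies.
# dt [ms] ~= 4.149 * DM * (f_low^-2 - f_high^-2), with f in GHz, DM in pc cm^-3.
K_DM = 4.149  # standard dispersion constant, ms GHz^2 pc^-1 cm^3

def dispersion_delay_ms(dm, f_low_ghz, f_high_ghz):
    """Extra arrival delay of the low-frequency edge relative to the high one."""
    return K_DM * dm * (f_low_ghz ** -2 - f_high_ghz ** -2)

# Illustrative: a DM of 557 pc cm^-3 observed between 1.2 and 1.5 GHz
delay = dispersion_delay_ms(557.0, 1.2, 1.5)
print(round(delay, 1))  # a ~0.6 s frequency sweep across the band
```

A DM far in excess of what the Milky Way's electrons can supply along that sightline is what points to an extragalactic origin.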
<urn:uuid:7bcb3b46-4974-4af7-9717-86cfe054b110>
3.515625
628
News Article
Science & Tech.
33.931771
95,589,263
Comammox—a newly discovered nitrification process in the terrestrial nitrogen cycle Nitrification, the microbial oxidation of ammonia to nitrate via nitrite, is a pivotal component of the biogeochemical nitrogen cycle. Nitrification was conventionally assumed to be a two-step process in which ammonia oxidation is catalyzed by ammonia-oxidizing archaea (AOA) and bacteria (AOB), and nitrite oxidation by nitrite-oxidizing bacteria (NOB). This long-held assumption of a division of labour between the two functional groups, however, was challenged by the recent unexpected discovery of complete ammonia oxidizers within the Nitrospira genus that are capable of converting ammonia to nitrate in a single organism (comammox). This breakthrough raised fundamental questions about the niche specialization of comammox organisms and their differentiation from other canonical nitrifying prokaryotes in terrestrial ecosystems. Materials and methods This article provides an overview of the recent insights into the genomic analysis, physiological characterization and environmental investigation of comammox organisms, which have dramatically changed our perspective on the aerobic nitrification process. Using quantitative PCR, we also compared the abundances of comammox Nitrospira clade A and clade B, AOA, AOB and NOB in 300 forest soil samples from China spanning a wide range of soil pH. Results and discussion Comammox Nitrospira are environmentally widespread and numerically abundant in natural and engineered habitats. Physiological data, including ammonia oxidation kinetics and metabolic versatility, and comparative genomic analysis revealed that comammox organisms might functionally outcompete other canonical nitrifiers under highly oligotrophic conditions.
These findings highlight the necessity in future studies to re-evaluate the niche differentiation between ammonia oxidizers and their relative contributions to nitrification in various terrestrial ecosystems by including comammox Nitrospira in such comparisons. The discovery of comammox and their broad environmental distribution adds a new dimension to our knowledge of the biochemistry and physiology of nitrification and has far-reaching implications for refined strategies to manipulate nitrification in terrestrial ecosystems and to maximize agricultural productivity and sustainability. Keywords: Ammonia oxidation · Comammox · Complete nitrification · Niche separation · Nitrite oxidation · Nitrospira This work was financially supported by the Natural Science Foundation of China (41230857) and the Australian Research Council (DE150100870; DP160101028).
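The quantitative PCR comparison described in the methods rests on simple exponential arithmetic: each amplification cycle multiplies the template by roughly the reaction efficiency, so a difference in threshold cycle (Ct) maps to a fold difference in starting gene copies. A minimal sketch of that calculation (the helper name and numbers are illustrative assumptions, not values from the study):

```python
# Relative qPCR quantification: a target crossing threshold earlier
# than a reference assay started from proportionally more template copies.
def relative_abundance(ct, ct_reference, efficiency=2.0):
    """Fold abundance of a target gene relative to a reference assay.

    efficiency=2.0 assumes perfect doubling per cycle; real assays are
    calibrated against standard curves.
    """
    return efficiency ** (ct_reference - ct)

# A gene detected 3 cycles before the reference is ~8-fold more abundant.
print(relative_abundance(ct=22.0, ct_reference=25.0))  # 8.0
```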
<urn:uuid:20ceacec-767e-4898-8975-0abc9bb1fa9b>
2.921875
1,222
Academic Writing
Science & Tech.
24.241964
95,589,283
A study that analyzed data from the Hubble Space Telescope and Kepler space observatory recently concluded that many factors support the idea that more life is yet to come. Astronomers found a rare type of growing galaxy that appears to feed off stolen gases. NASA Hubble Space Telescope images revealed 2,753 young, blue star clusters in a neighboring galaxy. This sheds light on the formation history of stars in our universe. The Hubble Space Telescope recently captured detailed images of Twin Jet Nebula, which has recently ejected its outer layers, and illuminated them, signifying it is in its final stages of life. Astronomers recently discovered two supermassive black holes in the quasar known as Markarian 231, using NASA's Hubble Space Telescope. This suggests that these massive black holes form from violent mergers. Analysis of some Hubble Space Telescope images has added to research on how cosmic winds prevent stars from being born, and which areas might produce fewer stars after this. The Universe may seem like a big, wide-open expanse, but the truth is that most galaxies are clumped together in groups or clusters, and a neighbor is never far away. However, using the Hubble Space Telescope, researchers have now imaged in greater detail one unique, but lonely galaxy that is "lost in space." Two of Pluto's moons, Nix and Hydra, are wobbling unpredictably in some kind of "cosmic dance," according to new data discovered by NASA's Hubble Space Telescope, adding more mystery to this former planet. The farthest known galaxy has recently been captured on camera, giving scientists a glimpse back in time to when the Universe was only five percent of its present age, according to new research. In the beginning, there was only darkness... Then stars began to fill the heavens, lighting our Universe. No matter what your denomination or beliefs, this is one point you likely won't dispute. 
Astronomers have long been fascinated with the dawn of light, and now, with the help of the Hubble Space Telescope, they believe they have determined how that beginning ended. Happy 25th anniversary Hubble! As of April 24, it has been a whopping quarter of a century since the Hubble Space Telescope (HST) rocketed out of Earth's atmosphere to begin its mission of surveying the stars. Since then, it's had a hand in... well, just about any space research you can think of. Now, on the advent of this landmark anniversary, we take a look at what the Hubble has accomplished, and what the future has in store for it. The Hubble Space Telescope recently captured images of some eerie and beautiful objects floating around space. A series of green and ghostly wisps were recently spotted by the unmanned orbital telescope, betraying the past presence of quasars, the brightest objects in the Universe. Dark matter: unless you're a theoretical physicist, you probably will have a hard time explaining just what this mysterious thing in our Universe really is. Even experts have long viewed the material as one giant question mark looming over blackboards and telescopic lenses. Now, a new discovery has made things even more complicated. Dark matter, as it turns out, barely even interacts with itself, explaining why it is so undetectable. Ever look up at the stars and wish you could see all the brilliant displays they have to offer for yourself? Yes, we have powerful telescopes, but even they cannot see into the furthest reaches of the Universe. Luckily, astronomers are finding that on rare occasions, the Universe itself provides.
<urn:uuid:7064a8c4-fa46-45eb-9700-9e26825556dd>
3.28125
718
Content Listing
Science & Tech.
41.617197
95,589,298
Hi Michael Thomas and Michael Bath, Michael T, I am glad you visited these definitions, because I hope I have not been using them loosely and perhaps incorrectly over the years rather than by their true definitions! I have not been corrected by anyone on this issue to this point. Just for your interest, an explanation is listed quite well here: Jeff Snyder, a meteorologist in the US, has an excellent and perhaps more detailed and complex explanation here on storm track (2nd post): We can also talk about veering and backing winds in terms of a particular/constant time and location but varying height. In this case, backing winds mean that the wind direction is changing in a counterclockwise direction WITH HEIGHT. To help avoid confusion, we typically say that winds are "veering with height" or "backing with height". This is the veering or backing wind profile (profile indicating constant time, varying height) that you hear about. So, in summary: + Veering/backing winds (w/o reference to "with height" or "profile") usually refers to winds that are changing IN TIME at a fixed location and height. + Veering/backing of winds WITH HEIGHT implies winds that change in a clockwise/counterclockwise manner at increasing heights at a fixed time and ground location. As a reminder, veering winds with height implies warm-air advection, while backing winds with height (or a backing wind profile) implies cold-air advection. Typically, in the US, we want to see a veering low-level wind PROFILE (so, veering with height) with a backing surface wind tendency. It's also pretty common to see backing winds with height in the mid and upper levels, which can be favorable since it implies cold-air advection aloft, which can increase instability. Jeff Snyder – KC0HJX University of Oklahoma Graduate Student Note that Jeff also mentions a changing time scale (evolving) as well as a "fixed location".
Remember, sometimes a sounding already has the veering or backing profile in place. An evolving event may change a profile to veering/backing or non-veering/non-backing. The main focus, though, is the advection of warm air and cold air – again, Jeff covers this well. Regardless of approach, we should be careful to state which hemisphere we are describing.
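The "with height" definition quoted above lends itself to a short worked example. A minimal sketch (hypothetical helper, Northern Hemisphere convention as in Jeff's summary) that classifies a profile of (height, wind direction) pairs by its net clockwise or counterclockwise turning:

```python
# Classify a wind profile as veering (clockwise with height) or
# backing (counterclockwise with height) from sorted (height, deg) pairs.
def turning_with_height(profile):
    total = 0.0
    for (_, d1), (_, d2) in zip(profile, profile[1:]):
        # Signed smallest rotation between successive levels, in degrees.
        delta = (d2 - d1 + 180) % 360 - 180
        total += delta
    if total > 0:
        return "veering"   # clockwise: suggests warm-air advection (NH)
    if total < 0:
        return "backing"   # counterclockwise: suggests cold-air advection (NH)
    return "unidirectional"

# Surface SE (150 deg) turning to SW (240 deg) aloft: a veering profile.
print(turning_with_height([(10, 150), (850, 190), (1500, 240)]))  # veering
```

As the quoted text stresses, this is the fixed-time, varying-height sense of the terms; a veering or backing *tendency* at one station over time is a different measurement entirely.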
<urn:uuid:76f91b72-6866-45f3-99aa-2bce3fa47865>
3
507
Q&A Forum
Science & Tech.
44.551927
95,589,312
To transform a program written in a high-level programming language from source code into object code. Programmers write programs in a form called source code. Source code must go through several steps before it becomes an executable program. The first step is to pass the source code through a compiler, which translates the high-level language instructions into object code. The final step in producing an executable program -- after the compiler has produced object code -- is to pass the object code through a linker. The linker combines modules and gives real values to all symbolic addresses, thereby producing machine code.
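The compile step can be observed directly even from an interpreted language. A minimal Python sketch (an analogy for the definition above, not the C-style toolchain itself): the built-in compile() plays the compiler's role, turning source text into a lower-level code object before anything runs; Python resolves names at run time, so there is no separate linker step:

```python
# Source code: plain text, not yet executable.
source = "result = 2 + 3"

# "Compiler" step: translate the source text into a bytecode object.
code_obj = compile(source, "<example>", "exec")

# Execution step: run the compiled object in a fresh namespace.
namespace = {}
exec(code_obj, namespace)
print(namespace["result"])  # 5
```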
<urn:uuid:9828f4df-86e7-47da-aa72-4d4ae8cac467>
3.765625
403
Content Listing
Software Dev.
43.167005
95,589,328
How does a male moth find the right sort of female for mating, when there are two similar types luring him with their pheromones? In many species, differences in the antenna used by the male to smell these perfumes are responsible for his choice. But in the European Corn Borer, changes in the male's brain seem to dictate his choice between two types of available females, as shown by researchers from the University of Amsterdam, the Swedish University of Agricultural Sciences, and the Max Planck Institute for Chemical Ecology. Female moths produce a sex pheromone, a different blend of chemicals for each species, which attracts males from a distance. Males detect these chemicals with exquisitely sensitive hair-like structures in the antenna. These hairs contain specialized neurons, nerve cells that express pheromone receptors which are activated when they bind to individual pheromone components. Different species have different pheromone receptors, and so the ability to most accurately smell females of the same species prevents attraction to other females. Solving the puzzle of why a certain pheromone receptor is activated only by a specific chemical has motivated much past research. "Our previous work in mapping the pheromone receptors of the European Corn Borer convinced us that this species doesn't fit the mold, and so we took another approach," says lead author Fotini Koutroumpa. The European Corn Borer uses a simple pheromone with only two isomeric compounds, identical except for the orientation of a double bond. The two "pheromone strains" of this species produce them in different proportions. E-strain females make mostly the E isomer with traces of the Z isomer, which is highly attractive to E-strain males. Z-strain females release the opposite ratio, attracting Z-strain males. In both cases, both components are absolutely necessary for attraction, and males of both strains can smell both, with similar or identical antennal structures and pheromone receptors. 
So what difference between the E and Z males could explain their opposite preferences? "We decided to look for a difference at the genetic level," says co-author Astrid Groot. By crossing the E and Z strains in the laboratory and mapping the gene governing male preference, the researchers found that the pheromone receptors had little or no effect. Instead, a chromosomal region containing genes involved in neuronal development explained most of the male behavioral response. "This result fits with our previous work showing that E and Z males have different connections between the brain and the neurons containing pheromone receptors," explains co-author Teun Dekker. This suggests that females of the E or Z strain smell the same to both E and Z males, whose preferences are controlled not by their noses but by their brains. "This result will point future research towards the tiny but complex moth brain, and shed light on how the diverse pheromone systems of the thousands of moth species have changed throughout evolution," concludes co-author David Heckel. [DGH] Koutroumpa, F. A., Groot, A. T., Dekker, T., Heckel, D. G. (2016). Genetic mapping of male pheromone response in the European Corn Borer identifies candidate genes regulating neurogenesis. Proceedings of the National Academy of Sciences of the United States of America (Early Edition), DOI: 10.1073/pnas.1610515113 David G. Heckel, Max Planck Institute for Chemical Ecology, Hans-Knöll-Str. 8, 07743 Jena, Germany, +49 3641 57 1500, firstname.lastname@example.org Contact and Media Requests: Angela Overmeyer M.A., Max Planck Institute for Chemical Ecology, Hans-Knöll-Str.
8, 07743 Jena, +49 3641 57-2110, E-Mail email@example.com Download high-resolution images via http://www.ice.mpg.de/ext/downloads2016.html Angela Overmeyer | Max-Planck-Institut für chemische Ökologie
<urn:uuid:e60ddd19-45da-47c1-a424-f098b18616cc>
3.34375
1,522
Content Listing
Science & Tech.
42.738139
95,589,362
HOW CAN OXYGEN BE CREATED FROM WATER? Water is made of two elements: hydrogen and oxygen. These are bonded together and can often be manipulated in chemical reactions. Many processes involve adding water to a compound, a reaction called hydrolysis. Electrolysis is a related process, in which an electrical current is passed through a water sample containing a dissolved material known as an electrolyte. This current provides energy to the water molecules, and with enough applied energy the bonds that keep the oxygen joined to the hydrogen can be broken. This is often done in a school laboratory by placing two electrodes (connected to a battery) in a small flask of impure water, applying a current and capturing the gas created. This gas is then lit with a match and produces a squeaky pop, a telltale sign that hydrogen is present. This has yet to be scaled up on Earth, with the infrastructure yet to be established. Another way is to use a photocatalyst, which works by absorbing light particles – photons – into a semiconductor material inserted into water. The energy of a photon is absorbed by an electron in the material, which then jumps, leaving behind a hole. The free electron can react with protons (which make up the atomic nucleus along with neutrons) in water to form hydrogen. Meanwhile, the hole can absorb electrons from water to form protons and oxygen. The process can also be reversed: hydrogen and oxygen can be brought together, or 'recombined', using a fuel cell, returning the solar energy taken in by the 'photocatalysis' – energy which can be used to power electronics. Recombination forms only water as a product – meaning the water can also be recycled. Read articles that feature this panel: A step closer to a colony on Mars? Method of making oxygen from water in zero gravity raises hopes for long-distance space travel – Dr Charles Dunnill from Swansea University explains how being able to create oxygen from water in microgravity could help humans colonise...
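The splitting and recombination described above are the standard textbook half-reactions (a sketch of the chemistry, not equations taken from the article itself):

```latex
% Net electrolysis of water: energy in, gases out
2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}

% Cathode (reduction): hydrogen evolution
4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2}

% Anode (oxidation): oxygen evolution
2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^-
```

A fuel cell runs the net reaction in reverse, releasing the stored energy and regenerating water.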
<urn:uuid:cc13befa-b5c6-4d57-b5e8-7c64010fa5d1>
3.875
589
Knowledge Article
Science & Tech.
36.345643
95,589,371
Micropia, a museum of microbes located in the heart of Amsterdam, uses art to address the big knowledge gap between science and the general public. Microbiology originated just a few kilometers from Amsterdam, when Antoni van Leeuwenhoek built one of the first microscopes and discovered that everyday objects, as well as he himself, were teeming with minuscule forms of life. More than 200 years after Leeuwenhoek observed microorganisms for the first time, Micropia opened in 2014 with the aim of erasing the negative view most people have of microbes. To do so, it features art installations that reveal the most amazing facts about these tiny creatures, still unknown and feared by many. Professor Remco Kort from the Vrije Universiteit Amsterdam works in collaboration with the museum and is behind exhibits like the kiss-o-meter, which shows visitors how many bacteria are exchanged during a kiss and the many benefits it can have. This piece is part of an exhibition titled ‘Rise and shine with microbes’, which explores the role of microorganisms in our everyday lives. Did you know that the microbes living in and on you weigh 1.5 kg? That they protect your skin and teeth? That they're necessary to make most of the ingredients in your breakfast? With living pieces and plenty of interactive installations, these are the kinds of questions that the museum answers for curious visitors of all ages. Micropia also highlights the big role microbiology can play in solving global problems, being used to purify water, develop new medicines, and produce energy or food. Its most recently added display focuses particularly on bio-plastics and how bacteria could turn the production of this ever-present material into a sustainable process.
What impressed me was that despite the overwhelming number of microorganisms that exist, this artistic space has somehow managed to include info and fun facts about all types of microbes, from bacteria and algae to the popular tardigrades or water bears — cute superhero organisms that can even survive in space. It's just great to see science and art come together like this to make these exciting topics accessible to everyone. This article was originally published in December 2016 and updated in February 2018 All media from Micropia
<urn:uuid:5db6610a-1f6e-4f54-94ee-cc99336a7567>
3.234375
501
Truncated
Science & Tech.
32.891445
95,589,373
An international team of researchers from Russia, Germany, the USA and Austria has conducted a deep drilling programme in the far northeast of Russia during the last six months to retrieve several hundred metres of lake sediments, impact breccias and permanently frozen soil. These make possible new insights into the climate history of the Arctic, the crater formation of Lake Elgygytgyn and permafrost dynamics. A milestone was reached at the beginning of May with the first results of the drilling campaign. The cores gained will help to answer crucial open questions of Arctic geology. At the northernmost fringe of north-eastern Siberia, about 900 kilometres west of the Bering Strait and 100 km north of the Arctic Circle (67°30' N, 172°05' E), lies Lake Elgygytgyn, which originated 3.6 million years ago from a meteorite impact. The lake has, in contrast to other areas at this latitude, never been glaciated - the sediments which accumulated continually at the bottom of the lake are therefore an invaluable Arctic climate archive. International researchers from various disciplines have set themselves the goal of retrieving this archive. Preparations took eleven years before the large-scale deep drilling campaign began at the end of last year. Infrastructure for up to 40 people had to be created in this remote area under the most difficult conditions - accommodation, sanitary installations and supply utilities. "Humans and technical appliances need sufficient energy in temperatures down to -45°C, for instance for storing the drilling cores above freezing point," says Martin Melles from the University of Cologne, project manager of the Elgygytgyn Drilling Project on the German side. The drilling equipment employed for drilling into the lake bed weighs about 70 tons, a great challenge for its safe positioning on the lake ice. At the end of last year, permafrost drillings were performed by a Russian construction company from Pevek, 260 km away.
It yielded impressive results: the team reached a drilling depth of 142 metres despite heavy snowstorms and low temperatures. The cores contain information on the permafrost history and its influence on lake sedimentation. "It is possible to read lake level fluctuations from the cores," reports Georg Schwammborn from the Research Station Potsdam of the Alfred Wegener Institute, who headed the permafrost drillings. Of great importance is the installation by the Potsdam researchers of a temperature measurement chain in the borehole. It documents the current changes in the permafrost soil. Understanding these is of great value for climate research, since the release of the gases bound in the thawing permafrost might further reinforce the greenhouse effect. The lake drillings, which have just been completed, were no less successful: lake sediments were drilled to 315 metres below the lake bottom; the upper 110 metres overlapped with the first drilling to close the remaining gap in the archive. First results indicate that the climate and environmental history of the last 3.6 million years is largely documented. Measurements of the magnetic properties in the upper part of the sediment layers show numerous warm and glacial periods with different intensities and characteristics. "We can learn from detailed examinations of the transition from a glacial to a warm period how the Arctic reacted to global warming in the past; it is therefore safe to assume that it will also react to it in the future," explains Catalina Gebhardt from the Alfred Wegener Institute in Bremerhaven. The deepest lake sediment cores reached back into the Pliocene, 2.6 million years ago. "These sediments are of unique importance because the climate of this time was considerably warmer than it is today," says Martin Melles. "The insights gained from these sediments can serve as a perfect example for the Arctic in a few years' time, in case global warming takes place as projected by climate models."
An important goal of the lake drilling was the coring of the impact breccias. This clastic rock, created by the meteorite impact, was found 315 metres below the lake bottom. The cores drawn by drilling 200 metres into the breccias are invaluable. "We expect new insights not only about the trajectory and composition of the meteorite, but particularly about the reactions of the volcanic rocks to the impact," says Christian Koeberl from the University of Vienna, who coordinates the international team processing the impact rocks. The insights will serve risk assessments in areas with similar rock formations. The 3.5 tons of cores drilled in 2009 will be brought to the Russian Arctic and Antarctic Research Institute (AARI) in St. Petersburg at the start of June. The cores of the whole drilling campaign will thereafter be brought to Germany: the permafrost cores to the Alfred Wegener Institute for Polar and Marine Research, the lake sediments to the University of Cologne and the impact breccias to the ICDP in Potsdam. The examinations will take two years. Altogether, about 30 guest researchers, in addition to the German researchers and students, will work on the cores. You can find information on the project here: http://www.geologie.uni-koeln.de/elgygytgyn.html Partner research institutes: The Alfred Wegener Institute carries out research in the Arctic and Antarctic as well as in the high and mid latitude oceans. The institute coordinates German polar research and provides international science with important infrastructure, e.g. the research icebreaker Polarstern and research stations in the Arctic and Antarctic. The Alfred Wegener Institute is one of 15 research centres within the Helmholtz Association, Germany's largest scientific organization.
Global study of world's beaches shows threat to protected areas 19.07.2018 | NASA/Goddard Space Flight Center NSF-supported researchers to present new results on hurricanes and other extreme events 19.07.2018 | National Science Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
Yet it has been challenging to figure out how to sustain the many benefits people obtain from nature — so-called "ecosystem services" — in any given landscape because an improvement in one may come at the cost of another. Two ecologists at the University of Wisconsin–Madison report this week (July 1) in the Proceedings of the National Academy of Sciences a novel approach to analyzing the production and location of 10 different ecosystem services across a landscape, opening the door to identifying factors governing their synergies and tradeoffs. Monica Turner, the Eugene P. Odum Professor of Zoology, and graduate student Jiangxiao Qiu mapped the production, distribution, and interactions of the services in three main categories: provisioning (providing resources like food, fiber, or fresh water), cultural (such as aesthetics and hunting), and regulating (including improving ground and surface water quality, handling floodwater, preventing erosion, and storing carbon). They focused on the Yahara River watershed, which covers much of the central portion of Dane County and parts of Columbia and Rock Counties in southern Wisconsin and includes the chain of Madison lakes. "We found that the main ecosystem services are not independent of each other. They interact spatially in very complex ways," says Qiu, lead author of the new study. Some of those interactions were not surprising — for example, higher levels of crop production were generally associated with poorer surface and ground water quality. However, two other sets of services showed positive associations: flood regulation, pasture and freshwater supply all went together, as did forest recreation, soil retention, carbon storage and surface water quality. "If you manage for one of these services, you can probably enhance others, as well," says Turner. "It also means that you can't take a narrow view of the landscape.
You have to consider all of the things that it produces for us and recognize that we have to manage it very holistically." Even in the expected tradeoff between crop production and water quality, the researchers found something unexpected. "There is a strong tradeoff between crop production and surface and groundwater quality," Qiu says. "But despite this, there are still some locations that can be high for all three services — exceptions that can produce high crop yield and good water quality in general." Preliminary analysis of these "win-win" areas suggests that factors like flat topography, a deep water table, less field runoff, soil with high water-holding capacity, more adjoining wetlands and proximity to streams with riparian vegetation may contribute to maintaining both crop production and good water quality. The results also show that nearly all of the land in the watershed provides a high level of at least one of the measured services but that they are not uniformly distributed. Most areas offer a high level of just one or two services. But a few, termed "hotspots" and making up just three percent of the watershed (largely parks and protected areas), provide high levels of at least six of the measured services. "A single piece of land can provide different kinds of services simultaneously but you cannot expect that this land can provide all of the benefits," Qiu says. The work was undertaken as part of a larger project to improve water sustainability in a mixed urban and agricultural landscape, supported by the Water Sustainability and Climate Program of the National Science Foundation (NSF). "This paper is an initial assessment that gives us a picture of the spatial distribution of ecosystem services in contemporary times, a starting point for comparison," says Chris Kucharik, a UW–Madison professor of agronomy and environmental studies and principal investigator of the overall NSF project. 
The project aims to use a combination of contemporary and historical data to understand how the watershed may change over the next 50 to 60 years. "We ultimately want to be able to look at future scenarios for this watershed," Turner says. "If climate changes or land use changes, what's going to happen to the values that we care about?" Jiangxiao Qiu | EurekAlert!
In Unreal Engine 4 we wanted to make binding input events as easy as possible. To that end, we created Input Action and Axis Mappings. While it is certainly valid to bind keys directly to events, I hope I can convince you that using mappings will be the most flexible and convenient way to set up your input. So what are Action and Axis Mappings? Action and Axis Mappings provide a mechanism to conveniently map keys and axes to input behaviors by inserting a layer of indirection between the input behavior and the keys that invoke it. Action Mappings are for key presses and releases, while Axis Mappings allow for inputs that have a continuous range. Why would I want to use a mapping instead of binding directly to the key? Using input mappings gives you the ability to map multiple keys to the same behavior with a single binding. It also makes remapping which keys are mapped to the behavior easy, both at a project level if you change your mind about default settings, and for a user in a key binding UI. Finally, using input mappings allows you to interpret input keys that aren’t an axis input (e.g. gamepad thumbstick axes which have a range of [-1,1]) as components of an axis (e.g. W/S for forward and back in typical FPS controls). Alright, these sound great. How do I set them up? In the Input section of Engine Project Settings you can see the list of existing mappings and create new ones. Actions are pretty straightforward: give the action a name, add the keys you want mapped to the action, and specify which modifier keys need to be held when the key is pressed. Axis mappings are also reasonably straightforward. Instead of specifying any modifier keys, however, you specify a Scale. The Scale is a multiplier on the value of the key when summing up the Axis’ value. This is particularly useful for creating an axis out of keyboard keys (for example, W can represent pressing up on the gamepad stick while S represents pressing down). 
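Mappings created in Project Settings are saved into the project's Config/DefaultInput.ini, so you can also add or version-control them by hand. A sketch of what mappings like the examples above might look like there (the action/axis names and key choices here are illustrative, not required by the engine):

```ini
[/Script/Engine.InputSettings]
; Two keys mapped to the same "Fire" action
+ActionMappings=(ActionName="Fire",Key=LeftMouseButton,bShift=False,bCtrl=False,bAlt=False,bCmd=False)
+ActionMappings=(ActionName="Fire",Key=Gamepad_RightTrigger,bShift=False,bCtrl=False,bAlt=False,bCmd=False)
; Keyboard keys with opposite Scales form the two halves of an axis
+AxisMappings=(AxisName="MoveForward",Key=W,Scale=1.0)
+AxisMappings=(AxisName="MoveForward",Key=S,Scale=-1.0)
+AxisMappings=(AxisName="MoveForward",Key=Gamepad_LeftY,Scale=1.0)
```

Each +AxisMappings line contributes one key with its Scale; W and S with opposite scales act like pressing up and down on the gamepad stick.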
Now that I've defined some mappings, how do I use these things? Mappings can be bound to behaviors from both Blueprints and C++. In C++ you will most typically set up your bindings in the Pawn/Character::SetupPlayerInputComponent or PlayerController::SetupInputComponent functions; however, anywhere you have an InputComponent is valid. The bindings are formed by calling BindAction/BindAxis on the InputComponent:

InputComponent->BindAxis("MoveForward", this, &ASampleCharacter::MoveForward);
InputComponent->BindAction("Fire", IE_Pressed, this, &ASampleCharacter::OnBeginFire);
InputComponent->BindAction("Fire", IE_Released, this, &ASampleCharacter::OnEndFire);

In Blueprints you can place an Axis or Action Event node from the Input section of the context menu or palette of any Actor blueprint. In both C++ and Blueprints, Axis events will fire every frame passing the current value of the Axis, while Action events will have the Pressed and Released outputs fire as the key(s) are pressed. An Axis' value is the sum of the values of each key's state in that frame. So in the MoveForward case pictured above, if you have only W held down the Axis' value is 1, but if you had both W and S held down then the Axis' value would be 0. It should also be noted that if you had both W and Up pressed then the value is 2, so you would likely want to clamp the value in the bound function. Actions that are bound only to a pressed or released event will fire every time any key that is mapped to it is pressed/released. However, in the case of Paired Actions (actions that have both a pressed and a released function bound to them) we consider the first key to be pressed to have captured the action. Once a key has captured the action, the other bound keys' press and release events will be ignored until the capturing key has been released. Is that all?
There are a lot of other important input concepts (some of which are covered in the input documentation) such as the input stack, which Actors have input enabled by default and how to enable input for other Actors, and how input consumption works, but we'll leave diving into those for another post. I hope I've convinced you that using Action and Axis Mappings will be the best way to set up input in your project, but if not, that's fine! You can always bind directly to Keys if that's easier for you and convert to using Actions and Axes when they provide value for you. Have questions? Don't forget you can join us over in the forums – we're always happy to help!
AIPS HELP file for TO in 31DEC18 As of Sun Jul 22 10:18:41 2018 Use: In the FOR loop construction TO is used to separate the starting value for the incrementing variable from the limiting value. (See FOR). Grammar: FOR variable = start TO fini BY increment statement1; statement2;... ; END where 'variable' is a scalar variable that will change each time through the loop, beginning at 'start', having 'increment' added to it each time through the loop (the increment may be positive or negative), and ending when the loop has been executed with 'variable' greater than or equal to 'fini' (or less than or equal to 'fini' for a negative increment). Note that this implies that the loop will be executed at least once no matter what the start, fini and increment parameters are. If the BY section is omitted, the increment will default to 1. The statementi are AIPS statements. These statements may not have an omitted optional immediate numeric argument. Such statements depend on the POPS stack being empty in the absence of the argument. But the FOR loop uses the stack for the 'fini' and 'increment' values and so the stack is not empty. Thus TVON with no argument will see a non-empty stack and assume that it is getting an immediate argument from the stack. Use in this case TVON(2**(TVCHAN-1)) inside your loop. Verbs of this sort include EHEX, GROFF, GRON, HUEWEDGE, IMWEDGE, REHEX, TVOFF, TVON, TVWEDGE. Examples: FOR I = 1 TO 10; SUM = SUM + A(I); END FOR I = K+3 TO K BY -1; A(I+1) = A(I); END In the first example SUM will be incremented by the sum of the first 10 elements in array A. The second example will shift the section of array A between A(K) and A(K+3) forward 1 element.
Latest posts by H. Sterling Burnett (see all) - Opposing Carbon-Dioxide Taxes Supports American Energy - July 19, 2018 - Carbon Taxes Are Uneconomic And Misanthropic - July 19, 2018 - Paris Climate Participants Miss Targets While U.S. Reduces Its Emissions - July 12, 2018 While some social scientists continue to undertake purported surveys of the literature they claim show almost all scientists agree human greenhouse gas emissions are causing dangerous climate change, real climate scientists are showing the climate is far more complex than is acknowledged by climate models and by those who rely almost solely on them to predict climate disaster. In his March 29 testimony before the U.S. House Science, Space, and Technology Committee's hearing on "Climate Science: Assumptions, Policy Implications and the Scientific Method," climate scientist and satellite expert John Christy, Ph.D., notes that when climate model projections are tested against actual observations and measurements, the models' outputs fail to match observed phenomena and data; as a result, they should not be used to shape climate policies. As Christy observes, the time- and experience-tested scientific method is not a set of facts but a process establishing a way for humans to discover information in the pursuit of understanding and knowledge. He explained, "In the method, a 'claim' or 'hypothesis' is stated such that rigorous tests might be employed to test the claim to determine its credibility. If the claim fails a test, the claim is rejected or modified then tested again." Christy points out that the average outputs of the models grossly misrepresent climate variations and changes of recent decades. From 1979 through 2016, climate models project significant warming should have occurred due to ever-increasing atmospheric greenhouse gas levels. The average warming estimated by 102 model runs from 32 groups of modelers over the period is 1°C.
By contrast, the actual observed warming experienced during the period, as recorded by three independent sources – weather balloons, satellites, and weather center reanalyses – is less than 0.5°C, less than half the amount predicted by climate models. The most likely reason for the failure, Christy testified, is "the models are simply too sensitive to the extra [greenhouse gases] that are being added to both the model and the real world." Christy said "applying the traditional scientific method, one would accept this failure and not promote the model trends as something truthful about the recent past or the future." If not carbon dioxide, what could be behind climate changes so much in the news? Three international studies may provide at least part of the answer. The three studies, from researchers in France, Germany, Portugal, Spain, and Switzerland, reinforce the fact that solar activity has a significant effect on climate changes. Research funded by the Swiss National Science Foundation quantified the contribution solar fluctuations make to temperature changes on Earth. While the U.N. Intergovernmental Panel on Climate Change assumes solar activity has an insignificant effect on Earth's temperature, historical data show that is not true. Using robust computer models, the Swiss scientists say as solar activity reaches its next minimum, the weaker sun should result in temperature falling by half a degree during this century. Temperature reconstructions of data for the Iberian Peninsula for the past 400 years published in Climate of the Past find temperature changes in the region track solar activity well. Using tree-ring data for the period 1602 through 2012, the Spanish-led research team shows warm phases coincide with periods of high solar activity.
The region as a whole has warmed almost 3°C over the past 400 years, reflecting the recent recovery from the Little Ice Age, but even during the Little Ice Age, there were phases around 1625 and 1800 when temperatures were as high as the present for short periods of time corresponding to increased solar activity. Reinforcing the Climate of the Past study, research in the Journal of Atmospheric and Solar-Terrestrial Physics, using data from three Portuguese meteorological stations from 1888 to 2001, finds a statistically significant association between temperatures and changes in solar and geomagnetic activity. Temperature changes consistently track the 11-year solar cycle and the 22-year solar magnetic cycle, lagging by approximately one to two years, showing solar forcing significantly affects temperature. The sun, the very center of our solar system, affects climate. Who would have guessed?
Thermal ionization mass spectrometry
Thermal ionization mass spectrometry is a technique which has been developed chiefly for the analysis of geological samples. The technique is used extensively for the isotope ratio measurements required for Rb–Sr, Nd–Sm and Pb–Th–U geochronology studies, as well as for the determination of rare-earth elements and, less frequently, other selected elements by isotope dilution analysis.
Keywords: Isotope Dilution, Mass Peak, Thermal Ionization Mass Spectrometry, Isotope Dilution Mass Spectrometry, Isotope Ratio Measurement
According to the textbooks, chromatin, the natural state of DNA in the cell, is made up of nucleosomes. And nucleosomes are the basic repeating unit of chromatin. When viewed by a high powered microscope, nucleosomes look like beads on a string (photo at right). But in the August 19th issue of the journal Molecular Cell, UC San Diego biologists report their discovery of a novel chromatin particle halfway between DNA and a nucleosome (photo at left). While it looks like a nucleosome, they say, it is in fact a distinct particle of its own. "This novel particle was found as a precursor to a nucleosome," said James Kadonaga, a professor of biology at UC San Diego who headed the research team and calls the particle a "pre-nucleosome." "These findings suggest that it is necessary to reconsider what chromatin is. The pre-nucleosome is likely to be an important player in how our genetic material is duplicated and used." The biologists say that while the pre-nucleosome may look something like a nucleosome under the microscope, biochemical tests have shown that it is in reality halfway between DNA and a nucleosome. These pre-nucleosomes, the researchers say, are converted into nucleosomes by a motor protein that uses the energy molecule ATP (see graphic). "The discovery of pre-nucleosomes suggests that much of chromatin, which has been generally presumed to consist only of nucleosomes, may be a mixture of nucleosomes and pre-nucleosomes," said Kadonaga. "So, this discovery may be the beginning of a revolution in our understanding of what chromatin is." "The packaging of DNA with histone proteins to form chromatin helps stabilize chromosomes and plays an important role in regulating gene activities and DNA replication," said Anthony Carter, who oversees chromatin grants at the National Institute of General Medical Sciences of the National Institutes of Health, which funded the research. 
"The discovery of a novel intermediate DNA-histone complex offers intriguing insights into the nature of chromatin and may help us better understand how it impacts these key cellular processes." Kim McDonald | EurekAlert! Barium ruthenate: A high-yield, easy-to-handle perovskite catalyst for the oxidation of sulfides 16.07.2018 | Tokyo Institute of Technology The secret sulfate code that lets the bad Tau in 16.07.2018 | American Society for Biochemistry and Molecular Biology For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. 
The Biotechnology and Biological Sciences Research Council (BBSRC) funded team from UCL (University College London) and the University of Tromsø, Norway, showed that the colour change helps reindeer to see better in the continuous daylight of summer and continuous darkness of Arctic winters, by changing the sensitivity of the retina to light. Arctic reindeer, like many animals, have a layer of tissue in the eye called the tapetum lucidum (TL) which lies behind the retina and reflects light back through it to enhance night vision. By changing its colour the TL reflects different wavelengths of light. In the bright light of summer the TL in Arctic reindeer is gold, similar to many other mammals, which reflects most light back directly through the retina. However, by winter it has changed to a deep blue which reflects less light out of the eye. This change scatters more light through photoreceptors at the back of the eye, increasing the sensitivity of the retina in response to the limited winter light. The team believes this would be an advantage in the prolonged murk of winter, allowing reindeer to more easily detect moving predators and forage. Lead researcher Professor Glen Jeffery from UCL, said: "This is the first time a colour change of this kind has been shown in mammals. By changing the colour of the TL in the eye reindeer have flexibility to cope better with the extreme differences between light levels in their habitat between seasons. "This gives them an advantage when it comes to spotting predators, which could save their lives." The colour change may be caused by pressure within the eyes. In winter pressure in the reindeers' eyes is increased, probably caused by permanent pupil dilation, which prevents fluid in the eyeball from draining naturally. This compresses the TL, reducing the space between collagen in the tissue and thus reflecting the shorter wavelengths of the blue light common in Arctic winters.
Previous work from Professor Jeffery and Norwegian colleagues from Tromsø had shown that Arctic reindeer eyes can also see ultraviolet, which is abundant in Arctic light but invisible to humans, and that they use this to find food and see predators. The blue reflection from the winter eye is likely to favour ultraviolet sensitivity. "Shifting mirrors: adaptive changes in retinal reflections to winter darkness in Arctic reindeer" is published in Proceedings of the Royal Society B and can be viewed online at http://dx.doi.org/10.1098/rspb.2013.2451 from October 30. Notes to editors Images available on request Chris Melvin, BBSRC Media Officer, 01793 414694, firstname.lastname@example.org The Biotechnology and Biological Sciences Research Council (BBSRC) invests in world-class bioscience research and training on behalf of the UK public. Our aim is to further scientific knowledge, to promote economic growth, wealth and job creation and to improve quality of life in the UK and beyond. Funded by Government, and with an annual budget of around £467M (2012-2013), we support research and training in universities and strategically funded institutes. BBSRC research and the people we fund are helping society to meet major challenges, including food security, green energy and healthier, longer lives. Our investments underpin important UK economic sectors, such as farming, food, industrial biotechnology and pharmaceuticals. For more information about BBSRC, our science and our impact see: http://www.bbsrc.ac.uk For more information about BBSRC strategically funded institutes see: http://www.bbsrc.ac.uk/institutes About UCL (University College London) Founded in 1826, UCL was the first English university established after Oxford and Cambridge, the first to admit students regardless of race, class, religion or gender and the first to provide systematic teaching of law, architecture and medicine.
We are among the world's top universities, as reflected by our performance in a range of international rankings and tables. According to the Thomson Scientific Citation Index, UCL is the second most highly cited European university and the 15th most highly cited in the world. UCL has nearly 27,000 students from 150 countries and more than 9,000 employees, of whom one third are from outside the UK. The university is based in Bloomsbury in the heart of London, but also has two international campuses – UCL Australia and UCL Qatar. Our annual income is more than £800 million. http://www.ucl.ac.uk | Follow us on Twitter @uclnews | Watch our YouTube channel YouTube.com/UCLTV

About the University of Tromsø

UiT The Arctic University of Norway is the northernmost university in the world. Its location on the edge of the Arctic implies a mission. Climate change, the exploration of Arctic resources and environmental threats are topics of great public concern in which the University of Tromsø takes a special interest. Our key research focuses on the polar environment, climate research, indigenous people, peace and conflict transformation, telemedicine, medical biology, space physics, fishery science, marine bioprospecting, linguistics and computational chemistry.

Chris Melvin | EurekAlert!

For the first time ever, scientists have determined the cosmic origin of the highest-energy neutrinos.
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...

For the first time, a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: when the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...

Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...

Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy....
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy.

Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Despite their importance for evaluating anthropogenic climatic change, quantitative temperature reconstructions of the Holocene remain scarce from northern high-latitude regions. We conducted high-resolution midge analysis on the sediments of the past 6000 years from a lake in south-central Alaska. Results were used to estimate mean July air temperature (TJuly) variations on the basis of a midge temperature transfer function. The TJuly estimates from the near-surface samples are broadly consistent with instrumental and tree-ring-based temperature data. Together with previous studies, these results suggest that midge assemblages are more sensitive to small shifts in summer temperature (∼0.5 °C) than indicated by the typical error range of midge temperature transfer functions (∼1.5 °C). A piecewise linear regression analysis identifies a significant change point at ca 4000 years before present (cal BP) in our TJuly record, with a decreasing trend after this point. Episodic TJuly peaks (∼14.5 °C) between 5500 and 4200 cal BP and the subsequent climatic cooling may have resulted from decreasing summer insolation associated with the precessional cycle. Centennial-scale climatic cooling of up to 1 °C occurred around 4000, 3300, 1800-1300, 600, and 250 cal BP. These cooling events were more pronounced and lasted longer during the last two millennia than between 2000 and 4000 cal BP. Some of these events have counterparts in climatic records from elsewhere in Alaska and other regions of the Northern Hemisphere, including several roughly synchronous with known grand minima in solar irradiance. Over the past 2000 years, our TJuly record displays patterns similar to those inferred from a wide variety of temperature proxy indicators at other sites in Alaska, including fluctuations coeval with the Little Ice Age, the Medieval Climate Anomaly, and the First Millennial Cooling (centered around 1400 cal BP).
To our knowledge, this study offers the first high-resolution, quantitative record of summer temperature variation that spans longer than the past 2000 years from the high-latitude regions around the North Pacific. © 2010 Elsevier Ltd.
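The change-point detection described in the abstract is usually formulated as a piecewise linear (broken-stick) regression; the generic form below is supplied for context and is not quoted from the paper:

```latex
T_{\mathrm{July}}(t) \;=\; \beta_0 + \beta_1 t + \beta_2\,(t - t_c)\,H(t - t_c) + \varepsilon_t
```

where $t$ is age (cal BP), $H$ is the Heaviside step function, and the change point $t_c$ (here ca 4000 cal BP) is chosen to minimize the residual sum of squares; a statistically significant $\beta_2$ indicates a change in trend at $t_c$.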
ECHINODERMATA : APODIDA : Synaptidae | STARFISH, SEA URCHINS, ETC.

Description: A tiny transparent worm-like holothurian which lives at the surface of soft mud. There are twelve tentacles, each with four digits. The body wall is transparent and five longitudinal muscle-bands are visible internally by transparency. The spicules consist of anchors and racket-shaped anchor-plates with handles. 2-3 cm in length.

Habitat: Lives at the surface or just buried in flocculent mud.

Distribution: Originally described from Scandinavia, this species had been found in the British Isles only in Strangford Lough, N. Ireland. It has recently been discovered in similar very sheltered habitats in the sea lochs of the Outer Hebrides.

Similar Species: Labidoplax buski (McIntosh, 1866) is very similar but has 11 tentacles with a long terminal digit and a single pair of lateral digits.

Key Identification Features:

Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.

WoRMS: Species record : World Register of Marine Species.

Picton, B.E. & Morrow, C.C. (2016). Labidoplax media (Ostergren, 1905). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZB5340 Accessed on 2018-07-18

Copyright © National Museums of Northern Ireland, 2002-2015
+44 1803 865913

Edited By: Staffan Kjelleberg and Michael Givskov

300 pages, 30 black & white illustrations, 5 black & white tables

A biofilm is a complex aggregation of microbes, usually attached to a solid surface. Traditional studies of bacteria sometimes implied that microbes live as single organisms; it is now clear that in nature, microbes usually live in co-operative groups attached to surfaces. This book, written by leading international scientists, presents an overview of the most recent and exciting new research into the mechanisms that underpin the biofilm mode of life. It is essential reading for anyone interested in biofilms.
The persistence of summer sea surface temperature anomalies (SSTAs) shows strong seasonal dependence, which might prove useful as a reference for seasonal to interannual climate predictions. For the North Pacific, previous studies on SSTA persistence and large-scale air–sea interaction have tended to focus on the cold season. However, there is an evident contradiction for the persistence of summer SSTAs (Fig. 1).

Fig. 1. Lag correlation of SSTAs as a function of the start month (ordinate) and lag month (abscissa): leading principal component of the North Pacific SSTA (left); SSTA over (32°N, 159°W) (right). Contour interval: 0.1; the 0.4 and 0.6 contours are thickened.

Associate Professor Xia Zhao and Jing Wang, from the Institute of Oceanology, Chinese Academy of Sciences, and Assistant Professor Guang Yang, from the First Institute of Oceanography, State Oceanic Administration, investigated the persistence of summer SSTAs in the North Pacific. Their findings, published in Advances in Atmospheric Sciences (Zhao et al., 2018), show that summer SSTAs can persist for a long time (approximately 8–14 months) around the Kuroshio Extension (KE) region. In addition, they also examined its mechanism and interdecadal variability.

Associate Professor Xia Zhao explains what they discovered: "This long persistence may be strongly related to atmospheric forcing, because the mixed layer is too shallow in the summer to be influenced by the anomalies at depths in the ocean. The longwave radiation flux has a dominant influence. The effect of shortwave radiation flux anomalies is not significant. This result is different from that of the previous studies."

Associate Professor Jing Wang and Assistant Professor Guang Yang further indicate that the persistence of summer SSTAs displays pronounced interdecadal variability around the KE region: it appears very weak during 1950–82, but becomes stronger during 1983–2016.

Zhao, X., G. Yang, and J.
Wang, 2018: Persistence of summer sea surface temperature anomalies in the midlatitude North Pacific and its interdecadal variability. Adv. Atmos. Sci., 35(7), https://link.springer.com/article/10.1007/s00376-017-7184-1 .

Contact: Ms. Fengfan Yang
Email: email@example.com
Address: 7 Nanhai Road, Qingdao, Shandong 266071, China
Tel: 86-532-82898902
Fax: 86-532-82898612
E-mail: firstname.lastname@example.org
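The "persistence" plotted in Fig. 1 is the lag correlation of the anomaly time series; in standard form (not spelled out in the press release):

```latex
r(\tau) \;=\; \frac{\operatorname{cov}\!\left(T'_{m},\, T'_{m+\tau}\right)}
                   {\sigma\!\left(T'_{m}\right)\,\sigma\!\left(T'_{m+\tau}\right)}
```

where $T'_m$ is the SSTA for start month $m$ and $\tau$ is the lag in months; persistence is then the longest lag for which $r(\tau)$ stays above a chosen threshold, which is why the 0.4 and 0.6 contours in Fig. 1 are thickened.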
A team of Rice University computational and applied math students have developed a technique to simplify the placement of electrodes in the brains of patients with epilepsy

Mathematics is about numbers, shapes, symmetry, chance, change and more. Much more! Math is not only the most rigorous mental discipline ever invented, it's among the richest, most wide-ranging and most useful. Mathematics is also central to the information revolution. Downloadable music files, DVD movies, digital special effects and secure online credit card transactions, essentially any software application you can think of, owes its existence not just to computers, but to the mathematical algorithms that run on computers.

Scientists describe the advancements in scientifically based earthquake research, which today relies on detailed simulations of ground movements using some of the world's largest and most capable supercomputers

What is the future of deep learning? Charles Cadieu, co-founder and CEO of Bay Labs, answers the question in this edition of Ask a Scientist

MapLite -- a framework that allows self-driving cars to drive on roads they've never been on before without 3-D maps -- combines simple GPS data that can be found on Google Maps with a series of sensors that observe road conditions

Someday self-driving cars could react to hazards before a passenger even sees them, thanks to a laser-based imaging technology being developed by Stanford University researchers

A team funded by the National Science Foundation demonstrated that -- if controlled effectively -- driverless cars are able to reduce stop-and-go waves that can arise in normal traffic patterns

San Diego Supercomputer Center's chief data science officer Ilkay Altintas describes a National Science Foundation-funded project that uses data-driven knowledge and predictive tools to battle wildfires, such as those that destroyed thousands of homes and businesses in 2017

A novel approach to connecting everyday appliances via the Internet manages to wirelessly link objects without the use of batteries or electronics

By applying a novel computer algorithm to mimic how the brain learns, a team of researchers -- with the aid of San Diego Supercomputer Center at the University of California, San Diego's Comet supercomputer and the center's Neuroscience Gateway -- has identified and replicated neural circuitry that resembles how an unimpaired brain controls limb movement

University of California, Berkeley, researchers have developed a robotic learning technology that enables robots to imagine the future of their actions so they can figure out how to manipulate objects they have never encountered before

By compressing the data at its source, researchers at Purdue University have developed a technology that allows real-time holographic image transmission, small enough to be streamed over existing consumer data networks and received by any cellphone or web browser

Dr. Ajay Sharda and his colleagues and students work in precision agriculture to increase the efficiency of farming through innovations in farm machinery, sensors and technology, and agronomic algorithm development

Have you ever wondered if planets are still being formed? Dr. Debra Fischer answers your question in this special "Mysteries of the Cosmos" edition of Ask a Scientist

With support from the National Science Foundation, solar plasma physicists at the University of Michigan study solar storms as they form and then barrel off the sun, sometimes hitting Earth with damaging force

With support from the National Science Foundation, Ragib Hasan of The University of Alabama at Birmingham is retrofitting everyday objects with next-generation, highly secure, personal cloud computing capability

How might professional development (PD) be designed to help elementary mathematics teachers develop knowledge and skills that are usable in practice?

Is the universe infinite and will it last forever? Saul Perlmutter, a professor of physics at the University of California, Berkeley, answers your question in this special "Mysteries of the Cosmos" edition of Ask a Scientist

On Aug. 17, 2017, the Laser Interferometer Gravitational-wave Observatory (LIGO) and Virgo detected, for the first time, gravitational waves from the collision of two neutron stars

Interactive Robogami uses simulations and interactive feedback with algorithms for design composition, allowing users to focus on high-level conceptual design

We asked Mark Mote, a graduate researcher at the Georgia Institute of Technology's Robotarium, what is coding?
Seawater Can Be Turned Into Fuel, While Reducing Carbon Dioxide, And This Is What All Of Us Need

With climate change as serious as it is, just trying to lower our pollution levels isn't enough; we need to be actively fighting the damage we've caused. Now, a new study shows we may actually be able to do that, with plain old seawater.

A study led by Greg Rau, from the University of California in Santa Cruz, shows that we might have a powerful tool at our disposal to scrub carbon dioxide from the air. Get this: we can do it by splitting the water molecules in seawater, producing hydrogen gas for fuel at the same time.

Electrolysis is a process that uses a direct electric current to drive a chemical reaction that would not otherwise occur spontaneously. In this case, the study talks about using electrolysis to split seawater into hydrogen gas, though with a slight difference. One change they suggested is using special membrane filters to separate the hydrogen and hydroxide ions produced during electrolysis. Adding that hydroxide to the water allows it to absorb CO2 from the air and turn it into bicarbonate. Without the filter, the presence of the hydrogen ions would instead dissolve the CO2 in the water, which is also bad.

The basic idea is that CO2 in the atmosphere is converted into bicarbonate that goes into the ocean, where it won't harm the ecosystem. On top of that, the study points out, if you use renewable energy like solar and wind to power the electrolysis, you're effectively converting it into hydrogen fuel. Additionally, the researchers estimate the cost of such an operation would be between $3 (approximately Rs 200) and $161 (approximately Rs 11,000) per ton of captured CO2, depending on the kind of renewable energy used to power it. That's cheaper even than the biofuels being considered as a fossil fuel substitute today. And it would be a pretty effective CO2 scrubber, if we could implement it on a large enough scale.
Hypothetically, if every renewable energy resource in the world were devoted to this kind of plant, we could capture and eliminate twice as much carbon dioxide in a year as we emit. Despite its drawbacks (and there are a couple), the researchers argue it's worth pursuing this idea further. After all, it might be the tool we need to save our planet from ourselves.
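The chemistry behind this scheme can be sketched with two standard reactions (general alkaline-electrolysis and carbonate chemistry, not equations quoted from the study): the cathode yields hydrogen fuel plus hydroxide, and the hydroxide then captures CO2 as bicarbonate:

```latex
\begin{aligned}
2\,\mathrm{H_2O} + 2e^- &\longrightarrow \mathrm{H_2} + 2\,\mathrm{OH^-} \\
\mathrm{CO_2} + \mathrm{OH^-} &\longrightarrow \mathrm{HCO_3^-}
\end{aligned}
```

This is why the membrane filter matters: it keeps the acidic H+ produced at the other electrode away from the capture step, since in acid the carbonate equilibrium runs the other way and CO2 is released rather than stored.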
Hindu Kush Known For Seismic Activity

AP Science Writer

Earthquake experts aren't surprised to see faultlines popping like firecrackers beneath Afghanistan's Hindu Kush mountain range. Powerful earthquakes happen by the handful there every year. But scientists at the National Earthquake Information Center in Golden, Colo., say the magnitude 6.1 quake Monday night in northeastern Afghanistan was destructive because it was so shallow. It struck less than three miles below the surface, and with little surrounding bedrock to absorb the energy, flimsy mud-brick villages were flattened. At least 1,800 people were killed.

It was followed by at least seven significant aftershocks, they said. All the aftershocks occurred about 6 miles below the surface, or slightly deeper than the primary quake. They ranged from magnitude 4.4 to 5.0 and rattled off and on for about eight hours, including three within the first hour after the initial jolt, the NEIC reported.

The shallow primary earthquake occurred near the boundary of the Eurasian and Indian tectonic plates within the Earth's crust. The area is deeply riddled with underground faults. The plates constantly grind against one another, and the Eurasian plate overrides the Indian plate. Their convergence generates extraordinary stresses that result in faults slipping, as well as the gradual uplifting of the Himalayas and other mountain ranges. Scientists said they were still examining data to determine which of the area's major faults might have been responsible for the earthquakes.

"The Indian and Asian plates approach each other here at roughly 4 centimeters per year," said University of Colorado geophysicist Roger Bilham. "Unlike other parts of the Himalayan collision, this region of convergence is quite narrow, about 900 miles. So the seismicity is quite high."

On average, according to the U.S.
Geological Survey, there are at least five earthquakes annually with magnitude 5.0 or greater that occur within a 100-mile radius of this latest epicenter. On March 3, a powerful magnitude 7.4 earthquake rocked the same region, killing 100. However, it occurred 150 miles below the surface. Researchers said the recent quakes were natural and had no ties with the bombing of Afghanistan during the current military campaign. There is no direct connection between fatalities and the magnitude of an earthquake. In 1998, a pair of moderate, but shallow earthquakes in the same region of the Hindu Kush killed more than 6,200 people. In contrast, a magnitude 6.8 earthquake that rocked metropolitan Seattle on Feb. 28, 2001, caused only moderate damage and no direct deaths. Experts said that was not considerably stronger than the latest Afghan quake, but it occurred 33 miles below the surface and its wallop was blunted. The largest earthquake so far in 2002 was a magnitude 7.5 on March 5 in Mindanao, Philippines, that killed 15. It was recorded at a depth of about 20 miles.
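The article's point that depth, not magnitude alone, determines damage can be put in context with the standard Gutenberg-Richter magnitude-energy relation (background, not a formula quoted in the article):

```latex
\log_{10} E \;=\; 1.5\,M + 4.8 \quad\Longrightarrow\quad
\frac{E_2}{E_1} \;=\; 10^{\,1.5\,(M_2 - M_1)} \;=\; 10^{\,1.5 \times (7.4 - 6.1)} \;\approx\; 90
```

with $E$ in joules. The March magnitude 7.4 quake thus released roughly 90 times more seismic energy than the magnitude 6.1 event, yet killed far fewer people because that energy was spent 150 miles below the surface.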
Wednesday, 19 October 2011

The First Monstrous Objects of the Early Universe

New observations from NASA's Spitzer Space Telescope strongly suggest that infrared light detected in a prior study originated from clumps of the very first objects of the Universe. The recent data indicate this patchy light is splattered across the entire sky and comes from clusters of bright, monstrous objects more than 13 billion light-years away.

"We are pushing our telescopes to the limit and are tantalizingly close to getting a clear picture of the very first collections of objects," said Dr. Alexander Kashlinsky of NASA's Goddard Space Flight Center. "Whatever these objects are, they are intrinsically incredibly bright and very different from anything in existence today."

Astronomers believe the objects are either the first stars -- humongous stars more than 1,000 times the mass of our sun -- or voracious black holes that are consuming gas and spilling out tons of energy. If the objects are stars, then the observed clusters might be the first mini-galaxies containing a mass of less than about one million suns. The Milky Way galaxy holds the equivalent of approximately 100 billion suns and was probably created when mini-galaxies like these merged.

Scientists say that space, time and matter originated 13.7 billion years ago in a tremendous explosion called the Big Bang. Observations of the cosmic microwave background by a co-author of the recent Spitzer studies, Dr. John Mather of Goddard, and his science team strongly support this theory. Mather is a co-winner of the 2006 Nobel Prize for Physics for this work. Another few hundred million years or so would pass before the first stars would form, ending the so-called dark age of the Universe. With Spitzer, Kashlinsky's group studied the cosmic infrared background, a diffuse light from this early epoch when structure first emerged.
Some of the light comes from stars or black hole activity so distant that, although it originated as ultraviolet and optical light, its wavelengths have been stretched to infrared wavelengths by the growing space-time that causes the Universe's expansion. Other parts of the cosmic infrared background are from distant starlight absorbed by dust and re-emitted as infrared light. "There's ongoing debate about what the first objects were and how galaxies formed," said Dr. Harvey Moseley of Goddard, a co-author on the papers. "We are on the right track to figuring this out. We've now reached the hilltop and are looking down on the village below, trying to make sense of what's going on." The analysis first involved carefully removing the light from all foreground stars and galaxies in the five regions of the sky, leaving only the most ancient light. The scientists then studied fluctuations in the intensity of infrared brightness, in the relatively diffuse light. The fluctuations revealed a clustering of objects that produced the observed light pattern. "Imagine trying to see fireworks at night from across a crowded city," said Kashlinsky. "If you could turn off the city lights, you might get a glimpse at the fireworks. We have shut down the lights of the Universe to see the outlines of its first fireworks." "Spitzer has paved the way for the James Webb Space Telescope, which should be able to identify the nature of the clusters," said Mather, who is senior project scientist for NASA's future James Webb Space Telescope. The image at the top of the page reveals a background glow of light from a period of time when the universe was less than one billion years old. This light most likely originated from the universe's very first groups of objects -- either huge stars or voracious black holes. The image from NASA's Spitzer Space Telescope shows a region of sky in the Ursa Major constellation. To create this image, stars, galaxies and other sources were masked out. 
This infrared image covers a region of space so large that light would take up to 100 million years to travel across it. Darker shades in the image on the left correspond to dimmer parts of the background glow, while yellow and white show the brightest light.

Source: The Daily Galaxy

Posted by Karla Segura Chavarría at 18:20
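The stretching of the first objects' ultraviolet and optical light into the infrared, described above, follows the standard cosmological redshift relation (general background, not a formula from the article):

```latex
\lambda_{\mathrm{obs}} \;=\; (1 + z)\,\lambda_{\mathrm{emit}}
```

For a source at redshift $z \approx 10$, for example, light emitted at $0.2\ \mu\mathrm{m}$ in the ultraviolet arrives stretched to about $2.2\ \mu\mathrm{m}$, well into the infrared.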
Index out of bounds exception with ArrayList

ArrayList out of bounds exception (Stack Overflow): if you declare an ArrayList with an initial capacity of 10 but add no elements, the list is still empty. set() can only replace an existing element, so calling it on an empty list throws an exception; you must add elements with the add() method first. The initial capacity only means the array list maintains an internal array of size 10.

Java ArrayList IndexOutOfBoundsException Index: 1, Size: 1: a question about reading a file into a multidimensional array in Java, where reading a line from the script causes java.lang.IndexOutOfBoundsException in the console.

Object-Oriented C Style Languages: C++, Objective-C, Java: notes observing, among other things, that for postfix increment the compiler allocates a temporary variable to hold the old value, which makes the postfix version slower.

Exception in thread "main" java.lang: if you are coming from a C background there is a pleasant surprise; the Java programming language provides implicit bound checks on arrays, which means an invalid array access is not allowed and results in java.lang.ArrayIndexOutOfBoundsException.

System class Java properties (JournalDev): the System class is one of the core Java classes, and its print functions are commonly used to log information while debugging. System is final with static members and methods, so a subclass cannot override its behavior through inheritance.

Related Post : Index out of bounds exception arraylist java

- ArrayList out of bounds exception - Stack Overflow
I have the following code: ArrayList<Integer> arr = new ArrayList<Integer>(10); arr.set(0,5); I am getting an index out of bounds error, and I don't know why.... Last update Thu, 12 Jul 2018 05:55:00 GMT Read More

- Java ArrayList IndexOutOfBoundsException Index: 1, Size: 1
I'm attempting to read a certain file in Java and make it into a multidimensional array. Whenever I read a line of code from the script, the console says: Caused by: java.lang.IndexOutOfBoundsExce... Last update Thu, 12 Jul 2018 01:45:00 GMT Read More

- Object-Oriented C Style Languages: C++, Objective-C, Java
Version. version used.
The compiler version used for this sheet; "show version": how to get the compiler version; "implicit prologue": code which examples in the sheet assume to have already been executed.... Last update Wed, 11 Jul 2018 19:40:00 GMT Read More

- Exception in thread "main" java.lang
If you are coming from a C background then there is a pleasant surprise for you: the Java programming language provides implicit bound checks on arrays, which means an invalid array index access is not allowed in Java and will result in java.lang.ArrayIndexOutOfBoundsException.... Last update Wed, 11 Jul 2018 07:51:00 GMT Read More
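The capacity-versus-size confusion behind the first question can be reproduced in a few lines. This is a minimal sketch (the class name and values are illustrative, not taken from the original posts):

```java
import java.util.ArrayList;
import java.util.List;

public class CapacityVsSize {
    public static void main(String[] args) {
        // new ArrayList<>(10) only pre-sizes the internal array;
        // the list itself still contains zero elements.
        List<Integer> arr = new ArrayList<>(10);

        try {
            arr.set(0, 5); // set() replaces an EXISTING element -> throws here
        } catch (IndexOutOfBoundsException e) {
            System.out.println("set failed, size is " + arr.size()); // size is 0
        }

        arr.add(5);    // add() actually grows the list; size becomes 1
        arr.set(0, 7); // now index 0 exists and can be replaced
        System.out.println(arr.get(0)); // prints 7
    }
}
```

In other words, capacity is an allocation hint: only add() (or addAll()) changes size(), and set()/get() are bounds-checked against the size, not the capacity.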
where the denotes the disjoint union, and ∼ is the equivalence relation generated by That is, the mapping cylinder is obtained by gluing one end of to via the map . Notice that the "top" of the cylinder is homeomorphic to , while the "bottom" is the space . It is common to write for , and to use the notation or for the mapping cylinder construction. That is, one writes with the subscripted cup symbol denoting the equivalence. The mapping cylinder is commonly used to construct the mapping cone , obtained by collapsing one end of the cylinder to a point. Mapping cylinders are central to the definition of cofibrations. The bottom Y is a deformation retract of . The projection splits (via ), and a deformation retraction is given by: (where points in stay fixed, which is well-defined, because for all ). The mapping cylinder may be viewed as a way to replace an arbitrary map by an equivalent cofibration, in the following sense: Thus the space Y gets replaced with a homotopy equivalent space , and the map f with a lifted map . Equivalently, the diagram gets replaced with a diagram together with a homotopy equivalence between them. The construction serves to replace any map of topological spaces by a homotopy equivalent cofibration. Mapping cylinders are quite common homotopical tools. One use of mapping cylinders is to apply theorems concerning inclusions of spaces to general maps, which might not be injective. Consequently, theorems or techniques (such as homology, cohomology or homotopy theory) which are only dependent on the homotopy class of spaces and maps involved may be applied to with the assumption that and that is actually the inclusion of a subspace. Another, more intuitive appeal of the construction is that it accords with the usual mental image of a function as "sending" points of to points of and hence of embedding within despite the fact that the function need not be one-to-one. 
Categorical application and interpretation One can use the mapping cylinder to construct homotopy colimits: this follows from the general statement that any category with all coproducts and coequalizers has all colimits. That is, given a diagram, replace the maps by cofibrations (using the mapping cylinder) and then take the ordinary pointwise colimit (one must take a bit more care, but mapping cylinders are a component). Conversely, the mapping cylinder is the homotopy pushout of the diagram Y ← X → X, with the maps f : X → Y and id : X → X. Given a sequence of maps X_1 → X_2 → X_3 → ⋯, the mapping telescope is the homotopical direct limit. If the maps are all already cofibrations (such as the inclusions of orthogonal groups O(n) ⊂ O(n + 1)), then the direct limit is the union, but in general one must use the mapping telescope. The mapping telescope is a sequence of mapping cylinders, joined end-to-end. The picture of the construction looks like a stack of increasingly large cylinders, like a telescope. Formally, for maps f_i : X_i → X_{i+1}, one defines it as the quotient of the disjoint union ⊔_i (X_i × [i, i + 1]) by the identifications (x, i + 1) ∼ (f_i(x), i + 1) for x ∈ X_i, gluing the top of each cylinder to the bottom of the next.
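The telescope construction described above can be written compactly (standard notation, for a sequence of maps f_i : X_i → X_{i+1}):

```latex
\operatorname{Tel}(f_1, f_2, \dots)
  \;=\;
  \Bigl( \coprod_{i \ge 1} X_i \times [\,i,\, i+1\,] \Bigr) \Big/ \sim,
  \qquad
  (x,\, i+1) \,\sim\, \bigl(f_i(x),\, i+1\bigr)
  \ \ \text{for } x \in X_i .
```

Each cylinder X_i × [i, i+1] is a mapping cylinder of f_i, and the identifications glue its top to the bottom of the next one, giving the homotopy colimit of the sequence.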
These molecules bind to the protein actin, which is implicated in cell movement and cell division. According to experimental results published recently in the "Biophysical Journal", the assembly of actin into long chains is either hindered or enhanced. Surprisingly, it has been shown that these substances also affect the rate at which genetic information is processed in the cell's nucleus. A large family of plant pigments, the flavonoids, comprises over 6,000 structurally related substances found in the fruit and vegetables of our daily diet. They appear to be responsible for the positive health effects attributed to green tea or red wine. However, their functional mechanisms are diverse and not well understood. This complicates the reliable assessment of their beneficial effects as well as of possible health risks. Many scientists try to understand these mechanisms at the molecular level, hoping to learn from nature in order to design new compounds that can be used in therapies for cancer or heart disease. The recent study reports two surprising results related to the binding of flavonoids to the protein actin. Actin is one of the best-studied and most abundant proteins. Together with other biomolecules, it enables muscle contraction, changes in cell shape, and the separation of daughter cells during cell division. Two years ago, biologists from the Technische Universität Dresden were surprised to find that flavonoids can dock to actin in the nucleus of living cells (Publication No. 1). Now, together with the biophysics group at the Forschungszentrum Dresden-Rossendorf (FZD), they proved in a test tube that flavonoids influence the growth of chains of actin molecules, a process that is linked to the cellular functions of actin (Publication No. 2). Flavonoids can strengthen or weaken this process. Astonishingly, the same dependence on flavonoids was observed for the speed at which the genetic material is read from the DNA in the cell nucleus. These results, according to Prof. Herwig O.
Gutzeit from the TU Dresden, show that the direct biological effects of flavonoids on actin may also influence the activity of genes in a cell. The biophysicist Dr. Karim Fahmy from the Forschungszentrum Dresden-Rossendorf (FZD) was able to demonstrate the molecular mechanism by which flavonoids affect actin functions: the flavonoids act as switches that bind to actin and promote or inhibit its functions. Using infrared spectroscopy, Fahmy studied the interaction of actin with the activating flavonoid "epigallocatechin" and the inhibitor "quercetin". This method is well suited for demonstrating structural changes in large biomolecules without interventions that might affect the extremely sensitive proteins. Upon addition of the selected flavonoids to actin, the structure of the actin changes in a pronounced and characteristic way. Depending on the type of flavonoid, the "actin switch" is set to increased or reduced functional activity. The mechanism appears obvious to the scientists: the effects of the flavonoids are a function of their shape. Actin itself is a flexible molecule, which explains why various flavonoids can bind to actin in a very similar way but nevertheless produce effects ranging from inhibition to stimulation. Flexible flavonoids match the structure of the actin and form complexes that improve actin functions. More rigid flavonoids force the actin into a structure that is less compatible with its natural functions, thereby inhibiting actin-dependent cellular processes. Simulations of flavonoid binding to actin, performed in the bioinformatics group of Dr. Apostolakis at the Ludwig-Maximilian University of Munich, identified the putative site where flavonoids interact with actin. These collaborative and highly interdisciplinary efforts made it possible to determine previously unknown structure-specific functional mechanisms of flavonoids.
This knowledge facilitates the future search for compounds with improved effectiveness and specificity that can be used to modulate actin functions for therapeutic purposes.
On Friday the Earth will be involved in an astronomical "near miss" with a 1,000-metre-wide asteroid called 2014-YB35. The so-called "near miss" means the space object will "skim" Earth from a distance of 2.8 million miles - so there's absolutely no need to panic. However, there are a million asteroids in the solar system with the potential to strike Earth, and only 10,000 of them have been discovered. Working out a way to deflect an asteroid could therefore be vital in saving the human race from a potential catastrophe. That's where NASA comes in. The US space agency has been working on techniques for asteroid "redirection" - deflecting space rocks from a crash course with planet Earth. This week, it revealed how in the 2020s it is going to capture an asteroid boulder - using a robotic spacecraft - and move it into an orbit around the Moon so that astronauts can explore it. The Asteroid Redirect Mission is part of training in the run-up to a manned Mars mission. The plan is to land on an asteroid, retrieve a boulder from it and put the smaller rock into orbit around the Moon. NASA will pick a specific asteroid for the mission in 2019 at the earliest - a year before launching the robotic spacecraft. So far there are three asteroids in the running for the mission: Itokawa, Bennu and 2008 EV5. The spacecraft will rendezvous with the target asteroid and use robotic arms to grab a large boulder from its surface. It will then drag that boulder into the Moon's orbit. In the process, the spacecraft will test a new type of propulsion system that uses sunlight to create power. This sort of system could eventually help get humans to Mars. The scope of the mission has been somewhat reduced, however. The original plan was to redirect an entire asteroid into the Moon's orbit as a means of learning how to protect the planet from catastrophe should a large space rock find itself hurtling towards us.
The more ambitious version of the mission could have enabled NASA to protect Earth from an asteroid like 2014-YB35. The space agency does say that before it moves the boulder to the Moon it will "use the opportunity to test planetary defence techniques to help mitigate potential asteroid impact threats in the future". NASA has also revealed that it’s increased the detection of near-Earth asteroids by 65% since starting the asteroid initiative three years ago.
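The "near miss" distance quoted above is easier to judge with a quick back-of-the-envelope calculation. A minimal sketch in Python, using the article's 2.8-million-mile figure and the well-known average Earth-Moon distance (about 238,855 miles, a standard reference value not taken from the article):

```python
# Put the 2014-YB35 flyby distance in perspective.
MISS_DISTANCE_MILES = 2_800_000   # flyby distance quoted in the article
EARTH_MOON_MILES = 238_855        # average Earth-Moon distance (reference value)

# How many Earth-Moon distances does the asteroid pass at?
lunar_distances = MISS_DISTANCE_MILES / EARTH_MOON_MILES
print(f"2014-YB35 passes at roughly {lunar_distances:.1f} times "
      f"the Earth-Moon distance")
```

At nearly twelve lunar distances, the flyby qualifies as a "near miss" only on astronomical scales, which is why there is no need to panic.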
Astronomers: Life elsewhere seems even more likely, but may be more like slime mold than ET Lately, a handful of new discoveries make it seem more likely that we are not alone — that there is life somewhere else in the universe. In the past several days, scientists have reported there are three times as many stars as they previously thought. Another group of researchers discovered a microbe can live on arsenic, expanding our understanding of how life can thrive under the harshest environments. And earlier this year, astronomers for the first time said they’d found a potentially habitable planet. “The evidence is just getting stronger and stronger,” said Carl Pilcher, director of NASA’s Astrobiology Institute, which studies the origins, evolution and possibilities of life in the universe. “I think anybody looking at this evidence is going to say, ‘There’s got to be life out there.'” A caveat: Since much of this research is new, scientists are still debating how solid the conclusions are. Another reason to not get too excited is that the search for life starts small — microscopically small — and then looks to evolution for more. The first signs of life elsewhere are more likely to be closer to slime mold than to ET. It can evolve from there. Scientists have an equation that calculates the odds of civilized life on another planet. But much of it includes factors that are pure guesswork on less-than-astronomical factors, such as the likelihood of the evolution of intelligence and how long civilizations last. Stripped to its simplistic core — with the requirement for intelligence and civilization removed — the calculations hinge on two basic factors: How many places out there can support life? And how hard is it for life to take root? What last week’s findings did was both increase the number of potential homes for life and broaden the definition of what life is. 
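The "stripped to its simplistic core" calculation described above (number of habitable sites times the chance life takes root) can be sketched in a few lines of Python. The function and every number in the example run are illustrative placeholders of mine, not figures endorsed by the article:

```python
def expected_life_sites(n_stars, frac_with_planets,
                        habitable_per_system, p_life):
    """Simplified Drake-style estimate: (count of potentially
    habitable planets) x (probability life takes root on one)."""
    habitable = n_stars * frac_with_planets * habitable_per_system
    return habitable * p_life

# Illustrative run: 3e23 stars, half with planets, one habitable
# planet per hundred planet-bearing systems, and a deliberately
# pessimistic 1-in-a-million chance of life arising.
estimate = expected_life_sites(3e23, 0.5, 0.01, 1e-6)
print(f"{estimate:.1e} expected life-bearing planets")
```

Even with a pessimistic probability, the sheer number of stars dominates the estimate, which is the point the astronomers quoted here keep making.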
That means the probability for alien life is higher than ever before, agree 10 scientists interviewed by The Associated Press. Seth Shostak, senior astronomer at the SETI Institute in California, ticks off the astronomical findings about planet abundance and Earthbound discoveries about life’s hardiness. “All of these have gone in the direction of encouraging life out there and they didn’t have to.” Scientists who looked for life were once dismissed as working on the fringes of science. Now, Shostak said, it’s the other way around. He said that given the mounting evidence, to believe now that Earth is the only place harboring life is essentially like believing in miracles. “And astronomers tend not to believe in miracles.” Astronomers, however, do believe in proof. They don’t have proof of life yet. There’s no green alien or even a bacterium that scientists can point to and say it’s alive and alien. Even that arsenic-munching microbe discovered in Mono Lake in California isn’t truly alien. It was manipulated in the lab. But, says NASA astrobiologist Chris McKay, who has worked on searches for life on Mars and extreme places on Earth, “There are real things we can point to and show that being optimistic about life elsewhere is not silly.” First, there’s the basic question of where such life might exist. Until a few years ago, astronomers thought life was only likely to be found on or around planets circling stars like our sun. So that’s where the search of life focused — on stars like ours. That left out the universe’s most common stars: red dwarfs, which are smaller than our sun and dimmer. Up to 90 percent of the stars in the universe are red dwarf stars. And astronomers assumed planets circling them would be devoid of life. But three years ago, NASA got the top experts in the field together. They crunched numbers and realized that life could exist on planets orbiting red dwarfs. 
The planets would have to be closer to their star and wouldn’t rotate as quickly as Earth. The scientists considered habitability and found conditions near these small stars wouldn’t be similar to Earth’s but would still be acceptable for life. That didn’t just open up billions of new worlds, but many, many times that. Last week, a Yale University astronomer said he estimates there are 300 sextillion stars — triple the previous number. Lisa Kaltenegger of Harvard University says scientists now believe that as many as half the stars in our galaxy have planets that are two to 10 times the size of Earth — “super Earths” which might sustain life. Then the question is how many of those are in the so-called Goldilocks zone — not too hot, not too cold. The discovery of such a planet was announced in April, although some scientists are challenging that. The other half of the equation is: How likely is life? Over the past decade and a half, scientists have found Earth life growing in acid, in Antarctica and in other extreme environments. But nothing topped last week’s news of a lake bacterium that scientists could train to thrive on arsenic instead of phosphorus. Six major elements have long been considered essential for life — carbon, hydrogen, nitrogen, oxygen, phosphorus and sulfur. That discovery changed the definition of life. By making life more likely in extreme places, it increases the number of planets that are potential homes for life, said Kaltenegger, who also works at the Max Planck Institute in Germany. Donald Brownlee, an astronomer at the University of Washington, is less optimistic because he believes what’s likely to be out there is not going to be easy to find — or that meaningful. If it’s out there, he said, it’s likely microbes that can’t be seen easily from great distances. Also, the different geologic and atmospheric forces on planets may keep life from evolving into something complex or intelligent, he said.
If life is going to be found, Mars is the most likely candidate. And any life is probably underground where there is water, astronomers say. Other possibilities include Jupiter’s moon Europa and Saturn’s moons Enceladus and Titan. There’s also a chance that a telescope could spot a planet with an atmosphere that suggests photosynthesis is occurring, Kaltenegger said. And then there’s the possibility of finding alien life on Earth, perhaps in a meteorite, or something with an entirely different set of DNA. And finally, advanced aliens could find us or we could hear their radio transmissions, McKay said. That’s what the SETI Institute is about, listening for intelligent life. That’s where Shostak puts his money behind his optimism. At his public lectures, Shostak bets a cup of coffee for everyone in the audience that scientists will find proof of alien life by about 2026. The odds, he figures, have never been more in his favor. NASA Astrobiology Institute: //astrobiology.nasa.gov/ SETI Institute: //www.seti.org/ Source: AP News
Elattoneura vrijdaghi Fraser, 1954 Type locality: Bambesa, DRC Male is similar to E. tsiamae by (a) head, thorax and sometimes Abd tip with reddish markings with maturity; (b) eyes red in life; (c) anal vein terminates level to distal border of quadrilateral; (d) legs dull yellow to reddish with dark blotches; (f) ventral process of cerci triangular, anterior border at acute angle to posterior border. However, differs by (1) antehumeral stripes wider, rather than narrower, than dark area between them; (2) apical process of paraprocts rounded, rather than pointed. However, note that the holotype of E. vrijdaghi has paraprocts and antehumerals more like those illustrated for E. tsiamae by Dijkstra & Clausnitzer (2014), and thus further study of the complex is needed (Dijkstra et al. 2015). [Adapted from Dijkstra & Clausnitzer 2014 and Dijkstra, Kipping & Mézière 2015] Mostly streams, but also rivers, in open areas in forest. Probably often with blackwater, mostly with a sandy bottom. From 200 to 700 m above sea level. Appendages (lateral view) Thorax (lateral view) Map citation: Clausnitzer, V., K.-D.B. Dijkstra, R. Koch, J.-P. Boudot, W.R.T. Darwall, J. Kipping, B. Samraoui, M.J. Samways, J.P. Simaika & F. Suhling, 2012. Focus on African Freshwaters: hotspots of dragonfly diversity and conservation concern. Frontiers in Ecology and the Environment 10: 129-134. - Fraser, F.C. (1954). New and rare species of Zygoptera from the Belgian Congo. Revue Zoologie Botanique Africaines, 50, 269-276. [PDF file] Citation: Dijkstra, K.-D.B (editor). African Dragonflies and Damselflies Online. http://addo.adu.org.za/ [2018-07-17].
The Biological Solar Panel - Fundamental Research, Department of Energy Projects Plants and algae are nature’s biological solar panels. By capturing light energy from the sun and converting it into dense energy molecules through the process of photosynthesis, these organisms support most of life on our planet. Photosynthesis is a complex system of processes consisting of hundreds of component parts that work together at the cellular level. The two major processes of photosynthesis are the so-called light-dependent and dark reactions. In the first, photosynthetic organisms trap ‘raw’ sunlight energy, which cannot be consumed directly by living things, and convert it into chemical energy. The dark reactions then use that stored energy to capture carbon dioxide from the atmosphere and convert it into compounds that can be used for consumption. The challenge: Integrating knowledge of photosynthetic processes that operate over a wide range of spatial and temporal scales Decades of research have taught us a lot about the photosynthetic components, but scientists still don’t have a full picture of how photosynthesis works as a whole. Part of the difficulty lies in the fact that most research has focused on organisms grown under static laboratory conditions, instead of observing how the photosynthetic components respond dynamically to natural living conditions. Photosynthetic processes occur on time scales ranging from sub-millisecond photochemical reactions to the seasonality of leaf deterioration and renewal. Spatial scales are also vast, spanning from molecules to whole leaves. It is therefore difficult to study photosynthesis within one lab or a single discipline, as the process spans a range of physical, biochemical, and structural areas of scientific expertise. Our approach: Understanding the biological solar panel holistically The MSU-DOE Plant Research Lab aims to study the components and processes in a highly integrated way.
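The net effect of the two stages described above is the familiar overall reaction of oxygenic photosynthesis (a textbook summary, not taken from the Lab's materials):

```latex
6\,\mathrm{CO_2} \;+\; 6\,\mathrm{H_2O}
  \;\xrightarrow{\ \text{light energy}\ }\;
  \mathrm{C_6H_{12}O_6} \;+\; 6\,\mathrm{O_2}
```

The light-dependent reactions supply the energy carriers (ATP and NADPH) that drive the carbon-fixing reactions; the one-line equation hides exactly the hundreds of interacting components that the Lab studies.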
We want to develop models on multiple scales that describe how photosynthesis fundamentally works as a whole. If we can understand the processes, as a whole, it will facilitate our long-term efforts to improve photosynthetic efficiency and increase crop yield, by redesigning different parts of the system to work better. To achieve this goal, we work collaboratively across various disciplines. Our participating researchers have expertise in areas including biophysics, biochemistry, physiology, photobiology, genetics, and cell biology. We currently study photosynthesis from four angles: - We are focusing on chloroplasts, the subcellular compartment in which photosynthesis begins. We want to understand how the chloroplast membranes are created and maintained in living plants. We also want to examine how the chloroplast interacts with other parts of the cell that contribute to photosynthetic processes (Benning, Brandizzi, and Hu labs). - We are exploring how the structural features of the biological solar panel influence the availability of carbon dioxide in the photosynthetic compartments. We also want to look at how photorespiration and Calvin-Benson cycle regulation work together (Hu, Brandizzi, Ducat, He, and Sharkey labs). - We are studying how the Calvin-Benson cycle energy outputs coordinate with changing light intensities in the surrounding environment. We also seek to understand how these outputs match with the light-dependent reactions of the cell (Sharkey, Froehlich, Howe, and Kramer labs). - We are using engineered model plants (Arabidopsis) and cyanobacteria to understand how shifts in the allocation of carbon, the raw material for production of energy-dense compounds, are sensed by these organisms. We also seek to understand how changes in environmental conditions, including various stresses, influence carbon partitioning and the activity of photosynthesis (Ducat, Howe, Kramer, Montgomery, and Sharkey labs). 
With this diversity of perspectives, combined with the unique technologies at our disposal, we are well positioned to understand the biological solar panel in a holistic way. This research is one of three core projects funded by the US Department of Energy, Office of Basic Energy Sciences.
Eugenia Cheng on how extra numbers inserted into the genuine information you are transmitting can prevent errors. It’s hard to transfer a sense of likelihood from large data sets to individual events in real life. Eugenia Cheng’s advice on how to deal with chance, from medical forecasts to elections. Eugenia Cheng on sound waves, Fourier analysis and the mathematics of why one voice can be unbearable, another dulcet. Mathematics can deal with the fatigue of making seemingly endless decisions. Eugenia Cheng on how to escape the “axiom of choice.” Eugenia Cheng uses her adventures with macarons to demonstrate a key part of mathematics—the use of exponentials. Questionnaires and mathematics share a goal: to depict a complex world. Eugenia Cheng on the pitfalls and strategies common to both. A quest to find the ideal mix of juice and water turns into an elegant math problem. In life, the need for cutoff points can confuse us. Math has tools for helping us make better sense of gray areas. Speculative math isn’t useless. Real-world applications often follow, though it may take some time.
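The first item above, on extra numbers that protect a transmission, alludes to check digits and error-detecting codes. A minimal illustration of the idea in Python (a single even-parity bit; the example is an assumption of mine, not drawn from the column):

```python
def add_parity(bits):
    """Append an even-parity bit so the word has an even number of 1s."""
    return bits + [sum(bits) % 2]

def parity_ok(bits):
    """True if no single-bit error is detected (even number of 1s)."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])   # genuine data plus one redundant bit
assert parity_ok(word)

corrupted = word.copy()
corrupted[2] ^= 1                 # flip one bit "in transit"
assert not parity_ok(corrupted)   # the error is detected
```

One redundant bit detects any single flipped bit, though it cannot say which bit flipped; practical codes add more structure so errors can be located and corrected.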
New tool for monitoring nuclear tests: Corresponding author: Stephen J. Arrowsmith of Los Alamos National Laboratory. California’s Hayward Fault System Examined: For Northern California, the Hayward Fault System is considered to pose the greatest risk of producing a major quake in the next 30 years. Seismologists need to understand the structure of the East Bay and the mechanics of its motion in order to anticipate what will happen during an earthquake along the Hayward Fault. Scientists from USGS-Menlo Park created the most detailed 3-D model to date of the upper crust in the East Bay and the geometry of the Hayward Fault. The model reveals the motion of small Hayward Fault earthquakes to be very similar to the overall motion of the fault, with no complexities that could bound or restrict the rupture zones of large earthquakes. Seismic hazard assessments should therefore plan for earthquakes anywhere along the fault. Further, although the Hayward and Calaveras Faults are not connected at the surface, the model revealed a smooth connection between them at depths greater than about 3 miles. Therefore, seismic hazard assessments should assume scenario earthquakes that span parts of both faults. Authors: Jeanne L. Hardebeck, Andrew J. Michael, and Thomas M. Brocher of USGS-Menlo Park, California. Long-term seismic behavior of an active fault: what can we learn from a 12,000-yr-long paleo-seismic record? Daëron and colleagues from the Institut de Physique du Globe de Paris (France) present results of the first paleoseismic study of the Yammoûneh fault, which is the main on-land segment of the Levant fault system (or "Dead Sea fault") in Lebanon, a region tectonically similar to the "Big Bend" in the San Andreas fault. This area offers a long historic record that spans more than 2000 years of activity.
Researchers sought to answer several questions about the frequency and magnitude of historical quakes and to understand the mechanisms at work that govern the faults. They present evidence that the latest event was the great A.D. 1202 earthquake and resolve unanswered questions about the frequency of seismic activity. Large earthquakes on different fault segments appear to cluster temporally within a couple of centuries, followed by millennial spans of relative quiescence. Authors conclude that regional risk assessment needs to prepare for the possibility of a large (M>7) earthquake striking this densely populated region in the coming century. Corresponding author: Mathieu Daëron, currently at Caltech in Pasadena, CA.
Scientists have used gene-editing technology to alter the wing colour of butterflies for the first time, unlocking the genetic code for future biotech companies to create vivid new colours - The same pathway that controls pigmentation also controls wing structure - Researchers used Crispr to eliminate five of the key genes that determine colour - The lack of these genes halted production of melanin, making colours lighter - Researchers say the discovery could help biotech companies unlock the genetic code to create vivid new colours for bioengineered animals in future Scientists have successfully altered the colour and structure of butterfly wings for the first time, opening the door to future bioengineering possibilities. The patterns, colours, and shapes seen in the wings of butterfly species around the world can now be manipulated by researchers using gene-editing techniques, according to research published today. Known as Crispr-Cas9, this method allows scientists to alter the sequence of DNA. Scientists were able to use the technique to unearth the genetic key behind butterfly wings and how they get their colour, pattern and structure. The researchers claim this finding could allow for 'vivid, brilliant colours' to be created by biotech companies in the near future. The Squinting bush brown butterfly (pictured) is a perfect example of the incredible shapes and colours that can emerge when evolution finds the balance between beauty and function. Using Crispr, scientists are now able to manipulate the patterns and colour of the wings. National University of Singapore scientists selected the squinting bush brown butterfly for the genetic study. Researchers believe unlocking the genetics behind the beautiful patterns on the species' wings could allow for the production of extremely vivid colours through a natural process, as opposed to the current method, which uses metals. This would be more environmentally friendly than current techniques.
'Butterfly colour has always been described as either pigmentary or structural, but our work identifies the first candidate genes that may constrain the evolution of both of these forms of generating colour,' said study author Dr Monteiro, of the National University of Singapore's Faculty of Science and Yale-NUS College in Singapore. 'If we understand the developmental genetics of colour, biotech companies of the future might be able to generate vivid, brilliant colours via bioengineering, based on butterfly scales, instead of having to nano-manufacture them using metals, which is currently extremely difficult to do. 'These chitin-based colours would be lasting, biodegradable, and environmentally friendly.' During the research, scientists found that wing colour and structure are tightly linked. When they used Crispr-Cas9 to manipulate a pathway to make the colour of the wings lighter, it had a knock-on impact on the levels of chitin produced. Chitin is a stiff organic molecule which forms the exoskeleton of almost all insects. Butterflies typically have a thin layer of chitin to protect the surface of their wings. 'Our research indicates that the colour and structure of wing scales are intimately related, because pigment molecules also affect the structure of scales,' says Dr Monteiro. 'Some end products of the melanin pathway, which produces butterfly wing pigments, play a role in both scale pigmentation and scale morphology.' The team used Crispr-Cas9 to eliminate five genes in the squinting bush brown butterfly known to control pigmentation. These genes are TH (Tyrosine Hydroxylase), DDC (DOPA decarboxylase), yellow, ebony, and aaNAT (Arylalkylamine N-Acetyltransferase).
This photograph shows the wild-type form of the squinting bush brown butterfly (left) alongside an individual with genetic mutations in its melanin pathway, in particular in the gene yellow, that make it appear paler (right). These genes are involved in a complex cascade of reactions known as the melanin biosynthetic pathway, which transforms the amino acid tyrosine into a form of melanin, which in turn triggers either dark or light pigmentation in the insect. This process produces five different chemicals, including dopa-melanin and dopamine-melanin, which combine to create the unique pattern and colours of the squinting bush brown butterfly. Dopa-melanin is responsible for the black pigmentation, while the brown pigment is created by dopamine-melanin. The researchers looked at how depriving the animal of specific elements of the melanin biosynthetic pathway would alter its form. It was during their tests to uncover more details about this pathway that they stumbled across the close relationship with chitin. According to the researchers, the study showed the 'yellow' mutation prevented the pigment dopa-melanin from being generated and caused an extra sheet of chitin to form horizontally on the upper surface of the wing scale. 'Some butterflies can have vivid hues just by having simple thin films of chitin on their scales that interfere with incoming light to create shades known as structural colours, without producing corresponding pigments,' says Dr Monteiro. 'Light beams reflecting off the top and bottom surfaces of the chitin layer can interfere with each other and accentuate specific colours depending on the thickness of the film, so our results might be interesting in this context.' When the 'yellow' mutation prevented the pigment dopa-melanin from being generated, an extra sheet of chitin formed horizontally on the upper surface of the wing (middle).
The DDC mutation blocked the pigment dopamine-melanin and produced vertical blades of chitin (bottom). The normal form of the butterfly, the wild type (WT), is pictured at the top. Wing colours and patterns play multiple, critical roles in the lives of the insects. They act as camouflage to avoid predators and often play a role in attracting a potential mate. Should the mutations introduced in the lab occur naturally, with the animals becoming paler and suffering changes to their exoskeleton, it could jeopardise the survival of the individual, as well as the species as a whole. WHAT IS CRISPR-CAS9? CRISPR-Cas9 is a tool for making precise edits in DNA, discovered in bacteria. The acronym stands for 'Clustered Regularly Inter-Spaced Palindromic Repeats'. The technique involves a DNA-cutting enzyme and a small tag which tells the enzyme where to cut. The CRISPR/Cas9 technique uses tags which identify the location of the mutation, and an enzyme, which acts as tiny scissors, to cut DNA in a precise place, allowing small portions of a gene to be removed. By editing this tag, scientists are able to target the enzyme to specific regions of DNA and make precise cuts wherever they like. It has been used to 'silence' genes - effectively switching them off. When cellular machinery repairs the DNA break, it removes a small snip of DNA. In this way, researchers can precisely turn off specific genes in the genome. The approach has been used previously to edit the HBB gene responsible for a condition called β-thalassaemia. This photograph shows pigmented wing scales in the eyespots of the Squinting bush brown butterfly changing colour with mutations in the melanin pathway.
Yellow mutant (left), wildtype (center), and DDC mutant (right). 'The morphology of wing scales is highly diverged; however, the genetic and molecular mechanisms underlying the development of butterfly wing scales have just started to be studied, and melanin products are one of the multiple molecules likely playing a role in this process,' adds Mr Matsuoka. 'Further studies using other butterfly species and other cuticular components will help us understand more about the evolution and development of butterfly wing scales.' A report from the Department for Environment, Food and Rural Affairs (Defra) recently revealed butterfly numbers in the UK are in decline due to poor land management. Since 1990, butterfly numbers have dropped by 27 per cent on farmland and by 58 per cent in woods, the government study found. Species in long-term decline on farmland include the gatekeeper, large skipper and small tortoiseshell. Woodland species that are struggling include the brown argus, common blue, peacock and purple hairstreak. The report blames the dwindling numbers of butterflies on the 'lack of woodland management and loss of open spaces in woods.'
PyStarch is a lint-style command line tool for static type checking of Python programs. It also checks that programs conform to certain constraints intended to encourage a more functional programming style. You can think of PyStarch as defining a sub-language of Python that lies halfway between Python and Haskell, combining the simple syntax of Python with the safety and cleanliness of Haskell. Although PyStarch provides warnings to encourage you to use this sub-language (such as warning when variables are reassigned), you can choose to ignore any warnings you want, since your code still runs in the standard Python interpreter.

Does Python need static analysis?

I've heard some Python developers argue that they don't need static analysis because they have lots of unit tests. Static analysis tools essentially run thousands of additional unit tests on your code for free. Unlike manually written unit tests, the static analysis tool's unit tests are generated automatically (requiring no developer time), are bug-free (requiring no debugging or maintenance), and have complete code coverage (so you don't have to run a code coverage tool on them). The only reasonable argument against using static analysis is that it may require a bit more time upfront to structure your code in a way that is amenable to static analysis. But I've generally found that this tends to produce cleaner, more readable code, so it is probably a good idea anyway, unless perhaps you are writing a quick throwaway prototype.

You can't add "None" to an integer, so PyStarch generates a warning if you try it.

    x = 1 + None
    ---
    x Num
    example.py:1 type-error "None" (NoneType vs Num)

The types of function arguments are inferred from the constraints imposed by the way they are used in the body of the function.
    def f(a, b):
        return len(a * b)

    x = f(2, 2)
    ---
    f Function(a: Str, b: Num -> Num)
    x Num
    example.py:3 type-error "Num" (Num vs Str)

Note that PyStarch assumes that when multiplying a string by an integer, the integer is on the right-hand side. An error is generated because the arguments passed to "f" don't match the inferred type signature.

As in Haskell, expressions can take on a "Maybe" type if they might be either None or some non-None value.

    from random import random

    a = 1 if random() > 0.5 else None
    b = 1 if random() > 0.5 else None
    c = a + b if a is not None and b is not None else None
    d = a + b if a is not None or b is not None else None
    ---
    a Maybe(Num)
    b Maybe(Num)
    c Maybe(Num)
    d Maybe(Num)
    random Unknown
    example.py:5 type-error "a" (Maybe(Num) vs Num)
    example.py:5 type-error "b" (Maybe(Num) vs Num)

Notice that the line defining "c" does not generate an error, because it can be deduced that the addition only takes place when both operands are not None; on the line defining "d" the same cannot be deduced, so it generates an error.

Installation and Usage

    sudo pip install meta
    git clone https://github.com/clark800/pystarch.git
    cd pystarch
    python2.7 main.py module-to-analyze.py

This will produce a listing of the types of all the symbols in the module's top scope, followed by a list of all the warnings generated while analyzing the module.
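The None-propagating pattern that the Maybe example illustrates can be factored into a small helper. This is a hypothetical sketch of the coding style being described, not part of PyStarch itself; the name `safe_add` is my own:

```python
from random import random

def safe_add(a, b):
    """Add two Maybe(Num)-style values, propagating None instead of
    raising a TypeError -- the None-guarded pattern described above."""
    return a + b if a is not None and b is not None else None

a = 1 if random() > 0.5 else None
b = 2 if random() > 0.5 else None
print(safe_add(a, b))  # either 3 or None, never a crash
```

Because every addition is guarded by explicit None checks, an analyzer can deduce that the `+` only runs on plain numbers.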
Microsoft started development on the .NET Framework in the late 1990s, originally under the name Next Generation Windows Services (NGWS). By late 2001 the first beta versions of .NET 1.0 were released. The first version of the .NET Framework was released on 13 February 2002, bringing managed code to Windows NT 4.0, 98, 2000, ME and XP. Since the first version, Microsoft has released nine more upgrades for the .NET Framework, seven of which have been released along with a new version of Visual Studio. Two of these upgrades, .NET Framework 2.0 and 4.0, have upgraded the Common Language Runtime (CLR). New versions of the .NET Framework replace older versions when the CLR version is the same. The .NET Framework family also includes two versions for mobile or embedded device use. A reduced version of the framework, the .NET Compact Framework, is available on Windows CE platforms, including Windows Mobile devices such as smartphones. Additionally, the .NET Micro Framework is targeted at severely resource-constrained devices.
|Version||CLR version||Release date||End of support||Development tool||Included in Windows||Included in Windows Server||Replaces|
|1.0||1.0||2002-02-13||2009-07-14||Visual Studio .NET||XP SP1[a]||N/A||N/A|
|1.1||1.1||2003-04-24||2015-06-14||Visual Studio .NET 2003||XP SP2, SP3[b]||2003||1.0|
|2.0||2.0||2005-11-07||2011-07-12||Visual Studio 2005||N/A||2003, 2003 R2, 2008 SP2, 2008 R2 SP1||N/A|
|3.0||2.0||2006-11-06||2011-07-12||Expression Blend[c]||Vista||2008 SP2, 2008 R2 SP1||2.0|
|3.5||2.0||2007-11-19||N/A||Visual Studio 2008||7, 8, 8.1, 10[d]||2008 R2 SP1||2.0, 3.0|
|4.0||4||2010-04-12||2016-01-12||Visual Studio 2010||N/A||N/A||N/A|
|4.5||4||2012-08-15||2016-01-12||Visual Studio 2012||8||2012||4.0|
|4.5.1||4||2013-10-17||2016-01-12||Visual Studio 2013||8.1||2012 R2||4.0, 4.5|
|4.6||4||2015-07-20||N/A||Visual Studio 2015||10 v1507||N/A||4.0-4.5.2|
|4.6.1||4||2015-11-30||N/A||Visual Studio 2015 Update 1||10 v1511||N/A||4.0-4.6|
|4.7||4||2017-04-05||N/A||Visual Studio 2017||10 v1703||N/A||4.0-4.6.2|
|4.7.1||4||2017-10-17||N/A||Visual Studio 2017||10 v1709||2016 v1709||4.0-4.7|
|4.7.2||4||2018-04-30||N/A||Visual Studio 2017||10 v1803||N/A||4.0-4.7.1|
|4.7.3||4||Developing||N/A||Visual Studio 2017||10 v1809 (Planning)||N/A||4.0-4.7.2|
|4.8||4||Developing||N/A||Visual Studio 2019 (Planning)||10 v1903 (Planning)||N/A||4.0-4.7.3|

The first version of the .NET Framework was released on 13 February 2002 for Windows 98, ME, NT 4.0, 2000, and XP. Mainstream support for this version ended on 10 July 2007, and extended support ended on 14 July 2009, with the exception of Windows XP Media Center and Tablet PC editions. On 19 July 2001, the tenth anniversary of the release of Visual Basic, .NET Framework 1.0 Beta 2 was released. .NET Framework 1.0 is supported on Windows 98, ME, NT 4.0, 2000, XP, and Server 2003. Applications utilizing .NET Framework 1.0 will also run on computers with .NET Framework 1.1 installed, which supports additional operating systems. Version 1.1 is the first minor .NET Framework upgrade.
It is available on its own as a redistributable package or in a software development kit, and was published on 3 April 2003. It is also part of the second release of Visual Studio .NET 2003. This is the first version of the .NET Framework to be included as part of the Windows operating system, shipping with Windows Server 2003. Mainstream support for .NET Framework 1.1 ended on 14 October 2008, and extended support ended on 8 October 2013. .NET Framework 1.1 is the last version to support Windows NT 4.0. It retains support for version 1.0 applications, except in rare instances where an application will not run because it checks the version number of a library. Changes in 1.1 include: Version 2.0 was released on 22 January 2006. It was also released along with Visual Studio 2005, Microsoft SQL Server 2005, and BizTalk 2006. A software development kit for this version was released on 29 November 2006. It was the last version to support Windows 98 and Windows Me. .NET Framework 2.0 with Service Pack 2 requires Windows 2000 with SP4 plus the KB835732 or KB891861 update, or Windows XP with SP2 plus Windows Installer 3.1. It is the last version to support Windows 2000, although there have been some unofficial workarounds to use a subset of the functionality from version 3.5 in Windows 2000. Changes in 2.0 include: .NET Framework 2.0 is supported on Windows 98, ME, 2000, XP, Server 2003, Vista, Server 2008, and Server 2008 R2. Applications utilizing .NET Framework 2.0 will also run on computers with .NET Framework 3.0 or 3.5 installed, which support additional operating systems. .NET Framework 3.0, formerly called WinFX, was released on 21 November 2006. It includes a new set of managed code APIs that are an integral part of Windows Vista and Windows Server 2008. It is also available for Windows XP SP2 and Windows Server 2003 as a download. There are no major architectural changes included with this release; .NET Framework 3.0 uses the same CLR as .NET Framework 2.0.
Unlike the previous major .NET releases there was no .NET Compact Framework release made as a counterpart of this version. Version 3.0 of the .NET Framework shipped with Windows Vista. It also shipped with Windows Server 2008 as an optional component (disabled by default). .NET Framework 3.0 consists of four major new components: .NET Framework 3.0 is supported on Windows XP, Server 2003, Vista, Server 2008, and Server 2008 R2. Applications utilizing .NET Framework 3.0 will also run on computers with .NET Framework 3.5 installed, which supports additional operating systems. Version 3.5 of the .NET Framework was released on 19 November 2007. As with .NET Framework 3.0, version 3.5 uses Common Language Runtime (CLR) 2.0, that is, the same version as .NET Framework version 2.0. In addition, .NET Framework 3.5 also installs .NET Framework 2.0 SP1 and 3.0 SP1 (with the later 3.5 SP1 instead installing 2.0 SP2 and 3.0 SP2), which adds some methods and properties to the BCL classes in version 2.0 which are required for version 3.5 features such as Language Integrated Query (LINQ). These changes do not affect applications written for version 2.0, however. As with previous versions, a new .NET Compact Framework 3.5 was released in tandem with this update in order to provide support for additional features on Windows Mobile and Windows Embedded CE devices. The .NET Framework 3.5 Service Pack 1 was released on 11 August 2008. This release adds new functionality and provides performance improvements under certain conditions, especially with WPF where 20-45% improvements are expected. Two new data service components have been added, the ADO.NET Entity Framework and ADO.NET Data Services. Two new assemblies for web development, System.Web.Abstraction and System.Web.Routing, have been added; these are used in the ASP.NET MVC framework and, reportedly, will be used in the future release of ASP.NET Forms applications. 
Service Pack 1 is included with SQL Server 2008 and Visual Studio 2008 Service Pack 1. It also featured a new set of controls called "Visual Basic Power Packs" which brought back Visual Basic controls such as "Line" and "Shape". Version 3.5 SP1 of the .NET Framework shipped with Windows 7. It also shipped with Windows Server 2008 R2 as an optional component (disabled by default). For the .NET Framework 3.5 SP1 there is also a new variant of the .NET Framework, called the ".NET Framework Client Profile", which at 28 MB is significantly smaller than the full framework and only installs components that are the most relevant to desktop applications. However, the Client Profile amounts to this size only if using the online installer on Windows XP SP2 when no other .NET Frameworks are installed or using Windows Update. When using the off-line installer or any other OS, the download size is still 250 MB. Key focuses for this release are: .NET Framework 4.0 is supported on Windows XP (with Service Pack 3), Windows Server 2003, Vista, Server 2008, 7 and Server 2008 R2. Applications utilizing .NET Framework 4.0 will also run on computers with .NET Framework 4.5 or 4.6 installed, which supports additional operating systems. .NET Framework 4.0 is the last version to support Windows XP and Windows Server 2003. Microsoft announced the intention to ship .NET Framework 4 on 29 September 2008. The Public Beta was released on 20 May 2009. On 28 July 2009, a second release of the .NET Framework 4 beta was made available with experimental software transactional memory support. This functionality is not available in the final version of the framework. On 19 October 2009, Microsoft released Beta 2 of the .NET Framework 4. At the same time, Microsoft announced the expected launch date for .NET Framework 4 as 22 March 2010. This launch date was subsequently delayed to 12 April 2010. 
On 18 April 2011, version 4.0.1 was released, supporting some customer-demanded fixes for Windows Workflow Foundation. Its design-time component, which requires Visual Studio 2010 SP1, adds a workflow state machine designer. Version 4.0.3 was released on 4 March 2012. After the release of the .NET Framework 4, Microsoft released a set of enhancements, named Windows Server AppFabric, for application server capabilities in the form of AppFabric Hosting and in-memory distributed caching support. .NET Framework 4.5 was released on 15 August 2012; a set of new or improved features were added into this version. The .NET Framework 4.5 is only supported on Windows Vista or later. The .NET Framework 4.5 uses Common Language Runtime 4.0, with some additional runtime features. .NET Framework 4.5 is supported on Windows Vista, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1 and Server 2012 R2. Applications utilizing .NET Framework 4.5 will also run on computers with .NET Framework 4.6 installed, which supports additional operating systems. The Managed Extensibility Framework or MEF is a library for creating lightweight, extensible applications. It allows application developers to discover and use extensions with no configuration required. It also lets extension developers easily encapsulate code and avoid fragile hard dependencies. MEF not only allows extensions to be reused within applications, but across applications as well. The release of .NET Framework 4.5.1 was announced on 17 October 2013 alongside Visual Studio 2013. This version requires Windows Vista SP2 or later and is included with Windows 8.1 and Windows Server 2012 R2. New features of .NET Framework 4.5.1: The release of .NET Framework 4.5.2 was announced on 5 May 2014. This version requires Windows Vista SP2 or later. For Windows Forms applications, improvements were made for high DPI scenarios.
For ASP.NET, higher reliability HTTP header inspection and modification methods are available as is a new way to schedule background asynchronous worker tasks. .NET Framework 4.6 was announced on 12 November 2014. It was released on 20 July 2015. It supports a new just-in-time compiler (JIT) for 64-bit systems called RyuJIT, which features higher performance and support for SSE2 and AVX2 instruction sets. WPF and Windows Forms both have received updates for high DPI scenarios. Support for TLS 1.1 and TLS 1.2 has been added to WCF. This version requires Windows Vista SP2 or later. The cryptographic API in .NET Framework 4.6 uses the latest version of Windows CNG cryptography API. As a result, NSA Suite B Cryptography is available to .NET Framework. Suite B consists of AES, the SHA-2 family of hashing algorithms, elliptic curve Diffie-Hellman, and elliptic curve DSA. .NET Framework 4.6 is supported on Windows Vista, Server 2008, 7, Server 2008 R2, 8, Server 2012, 8.1, Server 2012 R2, 10 and Server 2016. However, .NET Framework 4.6.1 and 4.6.2 drops support for Windows Vista and Server 2008, and .NET Framework 4.6.2 drops support for Windows 8. On 5 April 2017, Microsoft announced that .NET Framework 4.7 was integrated into Windows 10 Creators Update, promising a standalone installer for other Windows versions. An update for Visual Studio 2017 was released on this date to add support for targeting .NET Framework 4.7. The promised standalone installer for Windows 7 and later was released on 2 May 2017, but it had prerequisites not included with the package. New features in .NET Framework 4.7 include: .NET Framework 4.7.1 was released on 17 October 2017. Amongst the fixes and new features, it corrects a d3dcompiler dependency issue. It also adds compatibility with the .NET Standard 2.0 out of the box. Visual Studio .NET 2002 shipped with the Microsoft .NET Framework SDK version 1.0. Visual Studio .NET 2003 ships with .NET Framework SDK version 1.1. 
The team is updating the System.Security.Cryptography APIs to support the Windows CNG cryptography APIs [...] since it supports modern cryptography algorithms [Suite B Support], which are important for certain categories of apps.
Evolutionary transitions in individuality (ETIs) underlie the watershed events in the history of life on Earth, including the origins of cells, eukaryotes, plants, animals, and fungi. Each of these events constitutes an increase in the level of complexity, as groups of individuals become individuals in their own right. Among the best-studied ETIs is the origin of multicellularity in the green alga Volvox, a model system for the evolution of multicellularity and cellular differentiation. Since its divergence from unicellular ancestors, Volvox has evolved into a highly integrated multicellular organism with cellular specialization, a complex developmental program, and a high degree of coordination among cells. Remarkably, all of these changes were previously thought to have occurred in the last 50-75 million years. Here we estimate divergence times using a multigene data set with multiple fossil calibrations and use these estimates to infer the times of developmental changes relevant to the evolution of multicellularity. Our results show that Volvox diverged from unicellular ancestors at least 200 million years ago. Two key innovations resulting from an early cycle of cooperation, conflict and conflict mediation led to a rapid integration and radiation of multicellular forms in this group. This is the only ETI for which a detailed timeline has been established, but multilevel selection theory predicts that similar changes must have occurred during other ETIs.
GOCE: A seismometer in orbit around the Earth

Most people think of seismometers as ground-based instruments, but today earthquakes can also be detected by satellites. Researchers at the Institut de Recherche en Astrophysique et Planétologie (IRAP-OMP, UPS, CNRS), in collaboration with CNES, the IPGP and the University of Delft, demonstrated this using data from the GOCE mission (Gravity and Ocean Circulation Explorer) of the European Space Agency (ESA).

Right panel: Propagation of seismic waves at the surface of the Earth under the orbit of GOCE (modified from ESA AOES Medialab, 2008)

The GOCE satellite, called "the first seismometer in orbit around the Earth," was able to detect very-low-frequency sound waves generated in the atmosphere by the devastating Tohoku earthquake in Japan (11 March 2011). Indeed, the ground vibrations during an earthquake produce acoustic waves which propagate vertically in the atmosphere. Using the very accurate measurements of the vertical acceleration of the GOCE satellite, which orbits the Earth at an altitude of about 270 kilometers, and deducing the changes in atmospheric density encountered by the satellite, the scientists were able to perform the first in-situ measurement of post-seismic infrasound. These measurements were acquired both when the satellite crossed the wavefront over the Pacific Ocean and when it overtook the wavefront again, half an hour later, over Europe. The atmospheric seismic waves could be distinguished from atmospheric gravity waves because the ratio between the vertical acceleration of the satellite and the perturbation of the air density is higher for seismic waves than for the gravity waves usually generated by the dynamics of the atmosphere. In order to compare their model with the collected data, the researchers also modelled the atmospheric waves generated by the Tohoku earthquake.
The comparison between these models and the propagation time, amplitude and form of the waves observed by GOCE shows good agreement. In addition, the arrival-time differences between models and observations are attributed to lateral variations of the seismic wave velocities, both in the solid Earth and in the atmosphere. The authors believe that this new satellite observable has great potential for the study of atmospheric waves generated by tectonic activity, with synergies for the study of the dynamics of the upper atmosphere.

Reference: Geophysical Research Letters, doi:10.1002/grl.50205, 2013
Contact: Raphael Garcia, IRAP-OMP, mail: firstname.lastname@example.org
Press release in Nature: http://www.nature.com/news/earthquake-detected-from-space-1.12545
Press release in ArsTechnica: http://arstechnica.com/science/2013/02/earthquakes-booms-big-enough-to-be-detected-from-orbit/
In computational complexity theory, the 3SUM problem asks if a given set of real numbers contains three elements that sum to zero. A generalized version, k-SUM, asks the same question on k numbers. 3SUM can be easily solved in O(n²) time, and matching lower bounds are known in some specialized models of computation. It was widely conjectured that any deterministic algorithm for 3SUM requires Ω(n²) time. In 2014, the original 3SUM conjecture was refuted by Allan Grønlund and Seth Pettie, who gave a deterministic algorithm that solves 3SUM in O(n² / (log n / log log n)^(2/3)) time. Additionally, Grønlund and Pettie showed that the 4-linear decision tree complexity of 3SUM is O(n^(3/2) √(log n)). These bounds were subsequently improved; the current best known algorithm for 3SUM runs in O(n² (log log n)^(O(1)) / log² n) time, and the randomized 4-linear decision tree complexity of 3SUM is O(n^(3/2)). It is still conjectured that 3SUM is unsolvable in O(n^(2−ε)) time for any constant ε > 0.

When the elements are integers in the range [−N, …, N], 3SUM can be solved in O(n + N log N) time by representing the input set S as a bit vector, computing the set S + S of all pairwise sums as a discrete convolution using the fast Fourier transform, and finally comparing this set to −S.

Suppose the input array is S[0..n−1]. In integer (word RAM) models of computing, 3SUM can be solved in O(n²) time on average by inserting each number into a hash table, and then, for each pair of indices i and j, checking whether the hash table contains the integer −(S[i] + S[j]). It is also possible to solve the problem in the same time in a comparison-based model of computing or real RAM, for which hashing is not allowed. The algorithm below first sorts the input array and then tests all possible pairs in a careful order that avoids the need to binary search for the pairs in the sorted list, achieving O(n²) worst-case time, as follows.

sort(S);
for i = 0 to n - 3 do
   a = S[i];
   start = i + 1;
   end = n - 1;
   while (start < end) do
      b = S[start];
      c = S[end];
      if (a + b + c == 0) then
         output a, b, c;
         // Continue search for all triplet combinations summing to zero.
         if (b == S[start + 1]) then
            start = start + 1;
         else
            end = end - 1;
      else if (a + b + c > 0) then
         end = end - 1;
      else
         start = start + 1;
      end
   end
end

The following example traces this algorithm on the small sorted array (−25, −10, −7, −3, 2, 4, 8, 10); the scan finds the two zero-sum triples (−10, 2, 8) and (−7, −3, 10).

The correctness of the algorithm can be seen as follows. Suppose we have a solution a + b + c = 0. Since the pointers only move in one direction, we can run the algorithm until the leftmost pointer points to a. Then run the algorithm until one of the remaining pointers points to b or c, whichever occurs first. The algorithm will then run until the last pointer points to the remaining term, giving the affirmative solution.

Instead of looking for numbers whose sum is 0, it is possible to look for numbers whose sum is any constant C in the following way:
- Subtract C/3 from all elements of the input array.
- In the modified array, find 3 elements whose sum is 0.
For example, if A = [1, 2, 3, 4] and you are asked to solve 3SUM for C = 4, subtract 4/3 from every element of A and solve the problem in the usual way, since (a − C/3) + (b − C/3) + (c − C/3) = 0 if and only if a + b + c = C.

3 different arrays
Instead of searching for the 3 numbers in a single array, we can search for them in 3 different arrays. That is, given three arrays X, Y and Z, find three numbers x ∈ X, y ∈ Y, z ∈ Z such that x + y + z = 0. Call the 1-array variant 3SUM×1 and the 3-array variant 3SUM×3. Given a solver for 3SUM×1, the 3SUM×3 problem can be solved in the following way (assuming all elements are integers):
- For every element in X, Y and Z, set: x ← 10x + 1, y ← 10y + 2, z ← 10z − 3.
- Let S be a concatenation of the arrays X, Y and Z.
- Use the 3SUM×1 oracle to find three elements a, b, c of S such that a + b + c = 0.
- Return x = (a − 1)/10, y = (b − 2)/10, z = (c + 3)/10.
By the way we transformed the arrays, it is guaranteed that x ∈ X, y ∈ Y, z ∈ Z: the tag digits 1, 2 and −3 sum to a multiple of 10 only when one element is taken from each array.
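The quadratic sort-and-scan algorithm described above can be written as a short runnable Python function. This is a sketch: the function name is my own, and after reporting a triple I advance both pointers, a slight simplification of the duplicate-handling in the pseudocode.

```python
def three_sum(s):
    """Return all triples (a, b, c) with a + b + c == 0, found by the
    sort + two-pointer scan; O(n log n) sort plus O(n^2) scan."""
    s = sorted(s)
    n = len(s)
    triples = []
    for i in range(n - 2):
        a = s[i]
        start, end = i + 1, n - 1
        while start < end:
            b, c = s[start], s[end]
            total = a + b + c
            if total == 0:
                triples.append((a, b, c))
                # keep scanning for further triples with the same a
                start += 1
                end -= 1
            elif total > 0:
                end -= 1   # sum too large: bring the right pointer in
            else:
                start += 1  # sum too small: bring the left pointer in
    return triples

print(three_sum([-25, -10, -7, -3, 2, 4, 8, 10]))
# [(-10, 2, 8), (-7, -3, 10)]
```

On the example array from the trace above, this reports exactly the two triples (−10, 2, 8) and (−7, −3, 10).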
Instead of looking for arbitrary elements of the array such that S[k] = S[i] + S[j], the convolution 3sum problem (Conv3SUM) looks for elements in specific locations: indices i and j with S[i + j] = S[i] + S[j].

Reduction from Conv3SUM to 3SUM

Given a solver for 3SUM, the Conv3SUM problem can be solved in the following way.
- Define a new array T, such that for every index i: T[i] = 2n·S[i] + i (where n is the number of elements in the array, and the indices run from 0 to n − 1).
- Solve 3SUM on the array T, in the form that asks for a triple with T[i] + T[j] = T[k].
- If in the original array there is a triple with S[i + j] = S[i] + S[j], then T[i] + T[j] = 2n(S[i] + S[j]) + (i + j) = 2n·S[i + j] + (i + j) = T[i + j], so this solution will be found by 3SUM on T.
- Conversely, if in the new array there is a triple with T[i] + T[j] = T[k], then 2n(S[i] + S[j] − S[k]) = k − (i + j); the right-hand side has absolute value less than 2n, so both sides must be 0. Hence k = i + j and S[i] + S[j] = S[i + j], so this is a valid solution for Conv3SUM on S.

Reduction from 3SUM to Conv3SUM

Given a solver for Conv3SUM, the 3SUM problem can be solved in the following way. The reduction uses a hash function. As a first approximation, assume that we have a linear hash function, i.e. a function h such that: h(x) + h(y) = h(x + y). Suppose that all elements are integers in the range 0...N − 1, and that the function h maps each element to an element in the smaller range of indices 0...n − 1. Create a new array T and send each element of S to its hash value in T, i.e., for every x in S: T[h(x)] = x. Initially, suppose that the mappings are unique (i.e. each cell in T accepts only a single element from S). Solve Conv3SUM on T.
- If there is a solution for 3SUM, say x + y = z, then h(x) + h(y) = h(z) and so T[h(x)] + T[h(y)] = T[h(x) + h(y)], so this solution will be found by the Conv3SUM solver on T.
- Conversely, if a Conv3SUM solution is found on T, then it obviously corresponds to a 3SUM solution on S, since T is just a permutation of S.
This idealized solution doesn't work, because any hash function might map several distinct elements of S to the same cell of T. The trick is to create an array T* by selecting a single random element from each cell of T, and run Conv3SUM on T*. If a solution is found, then it is a correct solution for 3SUM on S. If no solution is found, then create a different random T* and try again. Suppose there are at most R elements in each cell of T.
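The index-encoding step of the Conv3SUM-to-3SUM reduction can be checked with a small sketch. The names here are illustrative, and a brute-force scan stands in for the 3SUM oracle:

```python
def conv3sum_via_encoding(S):
    """Find (i, j) with S[i] + S[j] == S[i + j] using the encoding
    T[i] = 2n*S[i] + i.

    A brute-force membership scan plays the role of the 3SUM oracle.  The
    point of the encoding is that T[i] + T[j] can only equal some value T[k]
    when k == i + j and S[i] + S[j] == S[i + j], because the index parts
    differ by less than 2n while the value parts are multiples of 2n.
    i == j is permitted, matching the usual statement of Conv3SUM.
    """
    n = len(S)
    T = [2 * n * S[i] + i for i in range(n)]
    in_T = set(T)                         # T is injective: T[i] mod 2n == i
    for i in range(n):
        for j in range(n):
            if T[i] + T[j] in in_T:       # "oracle" hit, forced to sit at k = i + j
                return (i, j)
    return None
```

Because |k − (i + j)| < 2n, any hit on T is forced to occur at index k = i + j, which is exactly the correctness argument of the reduction.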
Then the probability of finding a solution (if a solution exists) is the probability that the random selection will select the correct element from each of the three relevant cells, which is (1/R)³. By running Conv3SUM O(R³) times, the solution will be found with constant probability, and a further logarithmic number of repetitions drives the failure probability polynomially small.

Unfortunately, we do not have linear perfect hashing, so we have to use an almost-linear hash function, i.e. a function h such that: h(x) + h(y) = h(x + y) or h(x) + h(y) = h(x + y) + 1. This requires duplicating the elements of S when copying them into T, i.e., putting every element x both in T[h(x)] (as before) and in T[h(x) + 1]. So each cell will have 2R elements, and we will have to run Conv3SUM O((2R)³) times.

A problem is called 3SUM-hard if solving it in subquadratic time implies a subquadratic-time algorithm for 3SUM. The concept of 3SUM-hardness was introduced by Gajentaan and Overmars (1995). They proved that a large class of problems in computational geometry are 3SUM-hard, including the following ones. (The authors acknowledge that many of these problems were contributed by other researchers.)
- Given a set of lines in the plane, are there three that meet in a point?
- Given a set of non-intersecting axis-parallel line segments, is there a line that separates them into two non-empty subsets?
- Given a set of infinite strips in the plane, do they fully cover a given rectangle?
- Given a set of triangles in the plane, compute their measure.
- Given a set of triangles in the plane, does their union have a hole?
- A number of visibility and motion planning problems, e.g.:
  - Given a set of horizontal triangles in space, can a particular triangle be seen from a particular point?
  - Given a set of non-intersecting axis-parallel line segment obstacles in the plane, can a given rod be moved by translations and rotations between a start and a finish position without colliding with the obstacles?
By now there are a multitude of other problems that fall into this category. An example is the decision version of X + Y sorting: given sets A and B of n numbers each, are there two distinct pairs (a, b) and (a′, b′) in A × B with equal sums a + b = a′ + b′?

Notes and References
- Ex. 30.1–7, p. 906.
- Visibility Graphs and 3-Sum, lecture notes: http://www.ti.inf.ethz.ch/ew/courses/CG09/materials/v12.pdf
- For a reduction in the other direction, see the variants of the 3-sum problem above.
- Pătrașcu, M. (2010). "Towards polynomial lower bounds for dynamic problems". Proceedings of the 42nd ACM Symposium on Theory of Computing (STOC '10), p. 603. doi:10.1145/1806689.1806772. ISBN 9781450300506.
- Kopelowitz, Tsvi; Pettie, Seth; Porat, Ely (2014). "3SUM Hardness in (Dynamic) Data Structures". arXiv:1407.6756 [cs.DS].
- Demaine, Erik; Erickson, Jeff; O'Rourke, Joseph (20 August 2006). "Problem 41: Sorting X + Y (Pairwise Sums)". The Open Problems Project. Retrieved 23 September 2014.
Book description of C# Graphics Programming: This Wrox Blox teaches you how to add graphics to C# 2008 applications, explaining fundamental graphics techniques such as: drawing shapes with different colors and line styles; filling areas with colors, gradients, and patterns; drawing text that is properly aligned, sized, and clipped exactly where you want it; manipulating images and saving results in bitmap, JPEG, and other types of files. Also covered are instructions for how to greatly increase your graphics capabilities using transformations. Transformations allow you to move, stretch, or rotate graphics. They also let you work in coordinate systems that make sense for your application. You will also learn how to use all of these techniques in printouts. The author describes the sequence of events that produce a printout and shows how to generate and preview printouts. The final sections describe two powerful new graphic tools that were introduced with .NET Framework 3.0: WPF graphics and FlowDocuments. WPF applications can use XAML graphic commands to declaratively draw and fill the same kinds of shapes that a program can draw by using graphics objects. Finally, a discussion of the FlowDocument object shows you how to define items that should be flowed across multiple pages as space permits. This lets you display text, graphics, controls, and other items that automatically flow across page breaks. FlowDocument viewers make displaying these documents easy for you, and simplify the user's reading of the documents. This Wrox Blox also contains 35 example programs written in C# 2008, although most of the code works in previous versions of C# as well. The most notable exceptions are WPF graphics and FlowDocuments, both of which require WPF, provided in .NET Framework 3.0 and later.
posted by anonymous
The elements of which of these groups on the periodic table are most resistant to forming compounds?
A. Group 1
B. Group 9
C. Group 14
D. Group 17
Look on your periodic table at the noble gases, also referred to as the inert gases. If you don't have a periodic table handy, there is one at www.webelements.com
I think group 18. It is not on your list.
b. group 9
Aquarium tracks horseshoe crab activity (Video) By CHRIS BOSAK Hour Staff Writer The horseshoe crabs are coming in to spawn and the Maritime Aquarium at Norwalk is there, tags at the ready. The Aquarium, led by staff member Joe Schnierlein, for the sixth consecutive year is taking part in a horseshoe crab tagging study that will give scientists a better understanding of the ancient creature. "Part of the study is about tracking population and seeing where they go after they spawn (lay eggs)," Schnierlein said. "But also, you've got an animal that has been on this earth for 350 million years. It's lived through God-knows-how-many climate changes, glacial movements and everything that man has thrown at it and it's still here. After all that you have to ask yourself, 'Why not study it?' It's got the right stuff." The study is part of Project Limulus, a long-term study led by Jennifer Mattei, Ph.D., head of the biology department at Sacred Heart University. Project Limulus is being conducted along coastal Connecticut, including the Norwalk coast and islands. Schnierlein, along with volunteer Laina Grillo, arrived at Calf Pasture Beach before sunrise on Sunday to tag horseshoe crabs. The majority of the crabs were spawning with the larger female crabs dragging their mates behind them. They lay their eggs on sandy and rocky shorelines, leaving behind small impressions on the beach. "I've always been interested in this type of thing. It's fun learning about this interesting creature," said Grillo, a former student of Schnierlein's at Brien McMahon High School. "Plus, it's fun hanging out with Joe." Tagging involves poking a hole in the shell of the crab with an awl and pushing a white, circular tag through the hole. The process does not harm the animal. Conversely, the crabs are harmless to humans, despite their appearance, "They look prehistoric and ominous, but they are perfectly harmless," Schnierlein said. 
The tag includes contact information for Mattei and a six-digit number for data collecting purposes. Anyone finding a tagged horseshoe crab is urged to call or e-mail Mattei with the information on the tag. The project previously used yellow tags. Mattei said the information gathered over the years has been vital to the study, but she is most surprised by another aspect of the project. "Our biggest surprise has been the level of volunteerism we have found," Mattei wrote to The Hour via e-mail. "We have over 300 volunteers helping us out and this year we will put out over 14,000 tags." Lending a hand to the project is a natural for The Maritime Aquarium at Norwalk, Schnierlein said. The Aquarium's mission, in part, is to: "Inspire people to appreciate Long Island Sound and protect it for future generations." "We take 180 school groups to the beach each year," Schnierlein said. "It's that outreach ... letting people know." Schnierlein and Grillo tagged about 40 crabs on Sunday morning. Schnierlein has tagged as many as 190 crabs on a single visit in previous years. Schnierlein returned to the sandbar at Calf Pasture Beach later on Sunday and worked with volunteers from Credit Suisse, a global bank with offices in Greenwich. "The numbers are down this year," Schnierlein said. "I expected a lot more here today, to be honest. I thought we'd be crawling with these things." Mattei said she expects Project Limulus to continue for at least 10 more years. The project is funded by the Connecticut Sea Grant. "We have found from the tagging study that 99 percent of the horseshoe crabs tagged in Long Island Sound stay in Long Island Sound," she said. "We had one long-distance traveler appear in Groton that was originally tagged in Maryland in 2005. We have also had a few females re-captured on Long Island beaches, so a few can cross the Sound." For more information about the horseshoe crab study, visit http://www.projectlimulus.org
Vernal pools generally have no currents because they have no regular sources or spills of their waters. The two main abiotic factors — things other than plants and animals — related to vernal pools are how the water gets into pools in the spring of the year and the geological nature of the receptacle in which the pool forms. Vernal pools form in the spring and often vanish before the end of summer. These abiotic factors define pools found in two quite distinct main combinations in two regions of North America. Water from Spring Rains and Snow Melt Vernal pools form with water that simply accumulates in a basin or depression in any kind of soil. The Massachusetts Division of Fisheries and Wildlife, Natural Heritage and Endangered Species Program, identifies a vernal pool as one “where water is contained for more than two months in the spring and summer of most years and where no reproducing fish populations are present.” These pools vary a great deal in size, but the prototypical pool is only 3 feet deep at its peak season. Water Rising through the Ground Especially in the forested eastern parts of the continent, the soil is often saturated with water to a much higher level in the spring than at other times of the year, again due to melting of the winter’s snow and spring rains. Some depressions in the soil may be deep enough to meet this rising groundwater table and form vernal pools. As the soil dries out around the pool and the water table falls again, the pool will shrink and possibly disappear altogether, but it may also come back in a rainy autumn to persist and even freeze over through the winter. Vernal pools that reflect groundwater levels are generally depressions in porous soils. These pools can be distinguished from spring-fed ponds in that the source of the water appears generalized rather than focused on an identifiable eruption, and in their diminishment or vanishing with the water table. 
The soil banks of such a pool may rise quite high above the highest water level, or the basin may be nearly indistinguishable from surrounding terrain when the pool is dry. Hardpan and Rock Vernal pools may also form without access to groundwater, in depressions in soil or rock where accumulating water seeps very little if at all. One of the places such pools have been studied is in California’s Central Valley, where identified deposits of soil and clay have formed mounds and basins. The vernal pools collect the water that falls on the mounded features of the same soil and hold it until it evaporates. Vulnerability to Development Despite attention to wetlands conservation, if an environmental impact survey is conducted at the wrong time of year, it may completely miss the existence of vernal pools, especially in rocky formations. To prevent such errors, conservation groups need to document the existence of pools in land that may be vulnerable to development.
Physicists have simulated the cores of some large rocky exoplanets by pummeling iron with lasers. The resulting measurements give the first clue to how iron might behave inside planets outside the solar system that are several times the mass of Earth, researchers report April 16 in Nature Astronomy. "Until now, there's been no data available on the state of these materials at the center of large exoplanets," says Ray Smith, a physicist at Lawrence Livermore National Laboratory in California. Working at the National Ignition Facility, Smith and his colleagues aimed 176 lasers at a pellet of iron a few micrometers thick wrapped in a gold cylinder. The lasers delivered enough energy over 30 billionths of a second to compress the iron to pressures up to 14 million times Earth's atmospheric pressure at…
Common Physical Forms of Nuclear Fuel
Uranium dioxide (UO2) powder is compacted into cylindrical pellets and sintered at high temperatures to produce ceramic nuclear fuel pellets with a high density and well-defined physical properties and chemical composition. A grinding process is used to achieve a uniform cylindrical geometry with narrow tolerances. Such fuel pellets are then stacked and filled into metallic tubes. The metal used for the tubes depends on the design of the reactor. Stainless steel was used in the past, but most reactors now use a zirconium alloy which, in addition to being highly corrosion-resistant, has low neutron absorption. The tubes containing the fuel pellets are sealed: these tubes are called fuel rods. The finished fuel rods are grouped into fuel assemblies that are used to build up the core of a power reactor.
Cladding is the outer layer of the fuel rods, standing between the coolant and the nuclear fuel. It is made of a corrosion-resistant material with a low absorption cross section for thermal neutrons, usually Zircaloy or steel in modern constructions, or magnesium with a small amount of aluminium and other metals for the now-obsolete Magnox reactors. Cladding prevents radioactive fission fragments from escaping the fuel into the coolant and contaminating it.
CANDU fuel bundles are about half a meter long and 10 cm in diameter. Modern types typically have 37 identical fuel pins radially arranged about the long axis of the bundle, but in the past several different configurations and numbers of pins have been used. The CANFLEX bundle has 43 fuel elements, with two element sizes.
WASHINGTON — So far in 2018, the region faced destructive flooding in Ellicott City and a damaging wind storm in early March. But one mode of severe weather remains conspicuously absent: There have been no tornadoes anywhere near the D.C. area this year. “Over the last 10 years, we’ve had 143 tornadoes in our area. We haven’t had a tornado yet this year. That’s only the second time in the last 10 years that it’s been this late in the season,” said NWS meteorologist Jim Lee. Lee runs the Baltimore-Washington office, which monitors weather between the spine of the Appalachian Mountains in West Virginia to the western shore of the Chesapeake, and from central Virginia to as far north as the Pennsylvania state line. In that massive 35,000-square-mile area, it’s been nearly 11 months since the last tornado touched down. Lee is quick to point out that although the region has averaged about 14 tornadoes per year over the last decade, there is a high variance from year to year. Even so, the last time there was a comparable lull in local tornado activity was back in 2010, when the year’s first tornado didn’t touch down until Aug. 10. What’s more, the most tornado-prone period of the year is passing uneventfully. “June is definitely the peak [tornado month] in the last 10 years,” Lee said, adding that May and July are runners up. “June is a transition month because we still get some good jet stream movement. Starting in July and August, we get practically no wind shear and that’s a big component of tornadoes.” There were 16 tornadoes last year. In early April 2017, seven weak tornadoes touched down in a single day, including two that moved through the District. According to the National Centers for Environmental Information, Virginia averages 18 tornadoes per year and Maryland expects to see 10. Maryland hasn’t seen any twisters in 2018; the latest was an EF-2 that struck Kent Island on the state’s Eastern Shore last July. 
The weather service in Sterling has issued 10 tornado warnings so far this year but, despite showing signs of rotation, none of the storms that prompted the warnings were found to produce tornado damage. Despite the downturn, Lee urged the public to remain weather-aware. "People should always be prepared. The data set that we did the research on earlier this winter shows that we had tornadoes in every month of the year." If a tornado is imminent, the safest place to be is in a basement or an interior room on the lowest floor of a sturdy building. If outdoors, in a mobile home or in a vehicle, the weather service suggests people should move to the closest substantial shelter.
20 March 2015
Arctic winter sea ice shrinks to record low
The Arctic Ocean had less sea ice this winter than any year since records began. The unprecedented low brings the prospect of an ice-free Arctic a step closer. Summer ice has seen a series of record lows in recent years as the Arctic has warmed by almost 2°C, double the rate at mid-latitudes. Ice reforms each winter and, until now, the average extent of winter ice has remained relatively constant, though the loss of permanent ice means it is thinner than before. But last year's winter re-freeze of the waters of the Arctic Ocean was the lowest since satellite observations began in 1979, according to provisional data released by the National Snow and Ice Data Center at the University of Colorado in Boulder. The maximum of 14.5 million square kilometres, recorded on 25 February, beat the previous worst, recorded in 2011, by 1 per cent.
Arctic sea ice extent for 25 February 2015. The orange line shows the 1981 to 2010 median extent for that day. The black cross indicates the geographic North Pole (Image: National Snow and Ice Data Center)
NSIDC researchers are wary of blaming global warming. They point out that there is a lot of natural variability. This winter, an unusual path taken by the jet stream – a high-altitude wind that affects weather at ground level – strongly warmed the Pacific side of the Arctic, reducing ice in the Bering Sea in particular. The winter peak in ice was also lower this year because the spring melt began two weeks earlier than usual. "There is a strong presumption, based on earlier years, that this will result in a very low minimum this summer," says Peter Wadhams, of the University of Cambridge. But Jason Box of the Geological Survey of Denmark and Greenland says that while such a conclusion was tempting, the sea ice winter maximum is not a very useful predictor of the summer minimum. Nonetheless, the Arctic is changing profoundly, says Box.
“We are already in uncharted territory,” he says. “Models are all understating the Arctic response to climate change.” One sign is an epidemic of wildfires in the Arctic tundra. Box’s own unpublished data reveals a doubling in the number of fires over the past 15 years. “Last year was the most powerful fire season on record,” he told a conference in Oslo last week. The fires are generating soot that darkens snow and ice, reducing the reflectivity of the Arctic and accelerating warming further.
Monday, September 05, 2005 Lava provides clues from the past and the future They think their research can help explain what's happening to our warming world today. The extent of some of these buried lava flows is mind-boggling. Fragments left by a series of eruptions 200 million years ago in what's now the Atlantic Ocean stretch across four continents, in places ranging from New England to France and from the Amazon to West Africa. An even larger outburst, 120 million years ago off the Indonesian island of Java in the southwest Pacific, slathered molten rock over more than 1.2 million square miles of ocean floor, enough to cover Alaska or Western Europe with a layer up to 18 miles thick. Along with somewhat smaller -- but still enormous -- volcanic eruptions on dry land, these belches from the planet's fiery interior contributed to a series of mass extinctions of most of the organisms that were then alive. Although the extinctions were devastating to life at the time, scientists think they opened the way for new, more advanced creatures to evolve, including ourselves. Without them, we wouldn't be here. The blasts from the past may have ominous implications for future climate change, however, some scientists say. "The rapid release of carbon dioxide into the atmosphere, happening today, appears to have happened in the past, too," said Paul Wignall, an earth scientist at the University of Leeds in England. "In many ways, these rapid and giant eruptions seem to replicate the effects of fossil fuel burning, and so have provided natural experiments closely similar to human activity," Wignall said in an e-mail message. "The consequence of rapid warming of oceans and atmospheres appears to be mass extinction." Lava is a common type of rock that's been melted by temperatures as high as 2,000 degrees Fahrenheit and flows out from a volcano or a crack in the Earth's surface. 
It rises from a 400-mile-thick layer of hot, gooey material known as magma that lies between the planet's crust and its solid core. The vast expanses of seafloor lava -- technically known as "Large Igneous Provinces" -- are "one of Earth's most fascinating features," said John Mahoney, a geologist at the University of Hawaii in Honolulu. "They provide insights into the causes of major environmental and biological changes in the past," he said in an e-mail interview, and "almost certainly played an important role in bringing about these extinctions." The oceanic lava sheets are mostly invisible from the surface, but Earth's continents also bear traces of huge volcanic eruptions, some of them recent. Yellowstone National Park in Wyoming, for instance, contains a 50-by-40-mile crater left by a series of volcanic eruptions, averaging about 600,000 years apart. The latest explosion came 630,000 years ago, so the park is overdue for another. "Eventually Yellowstone will erupt again," said Don Hyndman, a geologist at the University of Montana. "When it does, I don't want to be living in Bozeman" -- 90 miles away. "The last event blew ash as far as Kansas and Arkansas." It produced enough lava, ash and rock to cover New York state 67 feet deep, he said.
Monster black holes can usually be found at the core of very large galaxies and are rarely seen at the center of a galaxy in a sparsely populated area of the universe. This is the reason why researchers at NASA were shocked when they uncovered a massive black hole weighing about 17 billion suns in the center of NGC 1600. An astronomy team recently looked at a "tidal disruption" that involved a black hole ripping up a star. They measured the spectrum of X-rays that resulted from this large space event. Astronomers from the U.K.'s Keele University and University of Central Lancashire recently learned many things about the much larger-than-usual black hole in a recently discovered 9 billion-year-old galaxy. Astronomers recently discovered two supermassive black holes in the quasar known as Markarian 231, using NASA's Hubble Space Telescope. This suggests that these massive black holes form from violent mergers. The brightest galaxy in the Universe has been discovered by scientists using data from NASA's Wide-field Infrared Survey Explorer (WISE), shining with the infrared light of more than 300 trillion suns, according to a new study. Intense magnetism has been discovered near a supermassive black hole, and scientists hope it can help them better understand these massive inhabitants of the centers of galaxies. Astronomers have discovered the fastest star ever known, dubbed US 708, hurtling through the galaxy after a massive supernova ejected it into space, and now it appears to be moving so fast that it is being flung out of the Milky Way altogether. Apparently proper hygiene is a bit different for growing galaxies, as a long "shower" could be a very bad thing. NASA's Chandra X-ray Observatory and Hubble Space Telescope have revealed a young galaxy cluster that is riddled with holes. Research now reveals that its growth was stunted by its very own black hole after unusual cosmic precipitation halted an important cycle.
Scientists have discovered a monstrous black hole about 13 billion light-years away from Earth, and it is the largest they have ever seen, a new study says. Black holes: we know so little about them (we're not even sure they exist!) and yet one is quite literally the center of our galaxy. Now a pair of telescopes has identified strong evidence of radiation and ultra-fast winds blowing in a nearly spherical fashion, suggesting that black holes are more than just bottomless pits of condensing matter. NASA's Chandra X-ray Observatory recently detected an incredibly powerful flare of x-rays from the supermassive black hole that's at the center of our Milky Way galaxy, raising some serious questions about the behavior of this black hole, and how it influences the immense world that it plays host to. Although the mad rush to snatch up those last-minute Christmas gifts and Black Friday sales may make your bank account seem like a dark and empty abyss, you may have missed out on the true black holes that NASA was showcasing yesterday. Here's a recap. An international team of researchers have brought attention to an unknown object in our Universe, a unique source of light at the edge of a galaxy some 90 million light-years away. Based on observational data, experts have theorized that this could be one of two things: either a giant black hole that was somehow exiled from the center of its own galaxy, or an incredibly massive star that is self-destructing. Neutrinos are like the ghosts of the particle world, carrying no charge and interacting with electrons and protons in only the faintest ways. Now a new study suggests that the great majority of high-energy neutrinos that can be found in the Milky Way Galaxy may be coming from the large black hole at its center, implying that black holes are actually neutrino factories.
Finding: Protein complex isn't always "on." Scientists at Johns Hopkins have created a 3-D model of a complex protein machine, ORC, which helps prepare DNA to be duplicated. Like an image of a criminal suspect, the intricate model of ORC has helped build a “profile” of the activities of this crucial “protein of interest.” But the new information has uncovered another mystery: ORC’s structure reveals that it is not always “on” as was previously thought, and no one knows how it turns on and off. A summary of the study will be published in the journal Nature on March 11. “Even though the ORC protein machinery is crucial to life, we didn’t know much about how it works,” says James Berger, Ph.D., professor of biophysics and biophysical chemistry. “By learning what it looks like, down to the arrangement of each atom, we can get a sense of where it interacts with DNA and how it does its job.” Multicelled organisms grow when their cells divide into two. However, before a cell can divide, it has to make copies of its parts for the new cell. Since DNA's information is sealed inside its double strands, a specialized machine, called the replisome, must unseal the strands before the information can be accessed and copied. A key piece of the replisome is a motor, termed MCM, which unwinds paired DNA strands. MCM is a closed protein ring that must be opened up before it can encircle the long strands of DNA. ORC, the origin recognition complex, solves that problem. It cracks open the MCM's circle so that it can fit around the DNA and unwind it. It was previously known that ORC is a six-piece protein complex, with five of the pieces forming a slightly opened ring and the sixth, Orc6, forming a tail. Mistakes in Orc6 cause assembly problems, which affect the function of the whole machine and contribute to a dwarfism disorder called Meier-Gorlin syndrome. 
To learn more about how the complex works, Franziska Bleichert, Ph.D., a postdoctoral fellow in Berger's laboratory, extracted the protein from fruit fly cells and immobilized it by coaxing it into tiny crystals. She then analyzed its structure by shining high-energy X-rays at the crystals in very focused beams. The resulting data allowed her to reconstruct the precise shape of the proteins, atom by atom, on the scale of billionths of an inch.

Her model reveals exactly where Orc6 connects to the ring of ORC and explains how mistakes in that protein wreak havoc, though why dysfunctional ORC should cause dwarfism is still a mystery.

The 3-D model also showed the existence of an unexpected regulatory mechanism. It was previously thought that ORC was always "on," just not always present in the nucleus where it does its work. The model shows that it can exist in an inactive state, raising the question: How does it turn on and off?

"In hindsight, it's not surprising that there is another level of regulation for ORC," explains Berger. For example, he says, "As soon as an egg cell is fertilized, it has to jump into action to create the embryo through multiple rounds of cell division, which first requires DNA replication. This inactive state might allow egg cells to stockpile ORC inside the nucleus so it's available when needed." His team plans to test this idea soon.

Michael Botchan of the University of California, Berkeley, also contributed to the research. This work was supported by grants from the National Institute of General Medical Sciences (GM071747), the National Cancer Institute (CA R37-30490) and the University of California, Berkeley, Miller Institute for Basic Research in Science.
Senior Communications Specialist Catherine Kolf | newswise Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
Atmospheric Science: An Introductory Survey

This book has been written in response to a need for a text to support several of the introductory courses in atmospheric sciences commonly taught in universities; namely, introductory survey courses at the junior or senior undergraduate level and beginning graduate level, the undergraduate physical meteorology course, and the undergraduate synoptic laboratory. These courses serve to introduce the student to the fundamental physical principles upon which the atmospheric sciences are based and to provide an elementary description and interpretation of the wide range of atmospheric phenomena dealt with in detail in more advanced courses. In planning the book we have assumed that students enrolled in such courses have already had some exposure to calculus and physics at the first-year college level and to chemistry at the high school level.

- Hardback | 467 pages
- 167.4 x 240.8 x 28.2mm | 895.77g
- 04 May 1977
- Elsevier Science Publishing Co Inc
- Academic Press Inc
- San Diego, United States
- Illustrations, charts, maps

Table of contents: Preface. Units and Numerical Values. A Brief Survey of the Atmosphere. Atmospheric Thermodynamics. Extratropical Synoptic-Scale Disturbances. Atmospheric Aerosol and Cloud Microphysical Processes. Clouds and Storms. Radiative Transfer. The Global Energy Balance. Atmospheric Dynamics. The General Circulation. Index.
Research biologists, chemists and theoreticians at the U.S. Naval Research Laboratory (NRL) are on pace to develop the next generation of functional materials that could enable the mapping of the complex neural connections in the brain. The ultimate goal is to better understand how the billions of neurons in the brain communicate with one another during normal brain function, or dysfunction, as a result of injury or disease.

"There is tremendous interest in mapping all the neuron connections in the human brain," said Dr. James Delehanty, research biologist, Center for Biomolecular Science and Engineering. "To do that we need new tools or materials that allow us to see how large groups of neurons communicate with one another while, at the same time, being able to focus in on a single neuron's activity. Our most recent work potentially opens the integration of voltage-sensitive nanomaterials into live cells and tissues in a variety of configurations to achieve real-time imaging capabilities not currently possible."

The basis of neuron communication is the time-dependent modulation of the strength of the electric field that is maintained across the cell's plasma membrane. This is called an action potential.
Among the nanomaterials under consideration for application in neuronal action potential imaging are quantum dots (QDs), crystalline semiconductor nanomaterials possessing a number of advantageous photophysical attributes.

"QDs are very bright and photostable so you can look at them for long times, and they allow for tissue imaging configurations that are not compatible with current materials, for example, organic dyes," Delehanty added. "Equally important, we've shown here that QD brightness tracks, with very high fidelity, the time-resolved electric field strength changes that occur when a neuron undergoes an action potential. Their nanoscale size makes them ideal nanoscale voltage sensing materials for interfacing with neurons and other electrically active cells for voltage sensing."

QDs are small, bright, photostable materials that possess nanosecond fluorescence lifetimes. They can be localized within or on cellular plasma membranes and have low cytotoxicity when interfaced with experimental brain systems. Additionally, QDs possess two-photon action cross-sections orders of magnitude larger than those of organic dyes or fluorescent proteins. Two-photon imaging is the preferred modality for imaging deep (millimeters) into the brain and other tissues of the body.

In their most recent work, the NRL researchers showed that an electric field typical of those found in neuronal membranes results in suppression of the QD photoluminescence (PL) and, for the first time, that QD PL is able to track the action potential profile of a firing neuron with millisecond time resolution. This effect is shown to be connected with electric-field-driven QD ionization and consequent QD PL quenching, in contradiction with the conventional wisdom that suppression of the QD PL is attributable to the quantum confined Stark effect, the shifting and splitting of spectral lines of atoms and molecules due to the presence of an external electric field.
"The inherent superior photostability properties of QDs coupled with their voltage sensitivity could prove advantageous to long-term imaging capabilities that are not currently attainable using traditional organic voltage-sensitive dyes," Delehanty said. "We anticipate that continued research will facilitate the rational design and synthesis of voltage-sensitive QD probes that can be integrated in a variety of imaging configurations for the robust functional imaging and sensing of electrically active cells."

Additional contributors to this study included the Optical Sciences Division and the Materials Science and Technology Division at NRL, Washington, D.C. A full report of the team's findings, entitled "Electric Field Modulation of Semiconductor Quantum Dot Photoluminescence: Insights Into the Design of Robust Voltage-Sensitive Cellular Imaging Probes", was published September 28, 2015 in the American Chemical Society publication Nano Letters. This groundbreaking work was funded by the NRL Nanoscience Institute.

Daniel Parry | EurekAlert!
Earth Day is this Week!

April 22 marks the 40th anniversary of Earth Day. Earth Day offers a perfect opportunity to talk with students and young scientists about environmental, ecological, and energy issues, conservation efforts, and what it means to think "green." The Science Buddies library of science fair project ideas contains a number of projects that offer a launching point for relevant conversations with students of all ages. These projects bring the issues into focus, making them down-to-Earth and "real" as students get hands-on with Earth Day. We'll be posting projects throughout the week in celebration of Earth Day.

First up, grab a bucket, collect some frogs, and evaluate what's really going on beneath the surface of a local pond with this Science Buddies science fair project idea:

- Are your local ponds healthy? You can find out with this splish-splashing fun project: Froggy Forecasting: How Frog Health Predicts Pond Health (Science Buddies difficulty rating: 5 / duration: approximately 1 week)

To find out more about Earth Day 2010, visit the Earth Day Network.
"Nowhere else than in these ecosystems do giant sea spiders and marine pillbugs share the ocean bottom with fish that have antifreeze proteins in their blood," says Rich Aronson, professor of biological sciences at Florida Institute of Technology in Melbourne, Fla. "The shell-cracking crabs, fish, sharks and rays that dominate bottom communities in temperate and tropical zones have been shut out of Antarctica for millions of years because it is simply too cold for them."

But this situation is about to change. "Populations of predatory king crabs are already living in deeper, slightly warmer water," says Aronson. "And increasing ship traffic is introducing exotic crab invaders. When ships dump their ballast water in the Antarctic seas, marine larvae from as far away as the Arctic are injected into the system."

Aronson and his colleagues published their results in the electronic journal PLoS ONE to coincide with the U.S. National Teach-In on Global Warming Solutions on Feb. 5.

Fast-moving, shell-crushing predators, dominant in most places, cannot operate in the icy waters of Antarctica. The only fish there—the ones with the antifreeze proteins—eat small, shrimp-like crustaceans and other soft foods. The main bottom dwelling predators are slow-moving sea stars and giant, floppy ribbon worms.

To understand their history, Aronson and a team of paleontologists collected marine fossils at Seymour Island off the Antarctic Peninsula. Linda Ivany of Syracuse University reconstructed changes in the Antarctic climate from chemical signals preserved in ancient clamshells. As temperatures dropped about 41 million years ago and crabs and fish were frozen out, the slow-moving predators that remained could not keep up with their prey. Snails, once out of danger, gradually lost the spines and other shell armor they had evolved against crushing predators.

Antarctica's coastal waters are warming rapidly.
Temperatures at the sea surface off the western Antarctic Peninsula went up 1°C in the last 50 years, making it one of the fastest-warming regions of the World Ocean. If the crab invasion succeeds, it will devastate Antarctica's spectacular fauna and fundamentally alter its ecological relationships.

"That would be a tragic loss for biodiversity in one of the last truly wild places on earth," says Aronson. "Unless we can get control of ship traffic and greenhouse-gas emissions, climate change will ruin marine communities in Antarctica and make the world a sadder, duller place."

karen rhine | EurekAlert!
NASA revealed details and viewing advice for the upcoming eclipse, which will see the moon pass between the sun and Earth, casting a dark shadow and making visible the solar corona – the sun's normally obscured atmosphere – as well as bright stars and planets.

A total of 14 states from Oregon in the west to South Carolina in the east will experience more than two minutes of darkness in the middle of the day over a span of almost two hours. This will be the first total eclipse in the US since 1979 and the first coast-to-coast eclipse since 1918. The US will be the only country to experience the total eclipse. International visitors, however, are expected to descend on the country for the rare chance to see the sun disappear behind the moon, transforming daylight into twilight.

At least three NASA aircraft, 11 other spacecraft, and more than 50 high-altitude balloons, as well as the astronauts aboard the International Space Station, will capture images. "Never before will a celestial event be viewed by so many and explored from so many vantage points – from space, from the air, and from the ground," said Thomas Zurbuchen, associate administrator of NASA's Science Mission Directorate in Washington.

NASA is warning potential viewers to put safety first and use specialized solar viewing glasses to observe the non-eclipsed or partially eclipsed sun. The space agency says it's safe to look at the total eclipse with the naked eye only during the brief period of totality, which will last about two minutes, depending on location. NASA will broadcast live video of the celestial event, along with coverage of activities held across the country in its honor.
Harvey's intensity and rainfall potential tied to global warming
Updated: August 27, 2017 1:02pm

Global warming is making the oceans hotter, fueling the intensity and flooding potential of storms like Harvey, climate scientists said as the hurricane approached. Driven by higher-than-average temperatures in the Gulf of Mexico, Harvey quickly intensified Thursday and is likely to reach Category 3 hurricane status before it hits the Texas Coast.

Sea surface temperatures in the Gulf on Thursday were up to 2 degrees above normal, the National Oceanic and Atmospheric Administration reported. The average temperature for most of the Gulf was 86 degrees, said Kevin Trenberth, a climate scientist with the National Center for Atmospheric Research in Boulder, Colorado. "That makes it almost the hottest spot on the planet" for sea surface temperatures, Trenberth said. "It's an area that's ripe for vigorous development to occur."

Harvey is projected to make landfall late today or early Saturday. As it sits over the inland Gulf Coast, it could bring rainfall of 10 to 20 inches Friday through Tuesday along and east of Interstate 35, the National Weather Service said. More than 25 inches could fall near and south of Interstate 10, and the Hill Country could get 5 to 10 inches.

As the Earth's climate warms because of human burning of fossil fuels, scientists have seen tropical cyclones become more intense and predict they will continue doing so. Since the 1980s, when high-quality satellite observations became available, scientists have seen an increase in the "intensity, frequency, and duration" of Atlantic hurricanes, along with the number of Category 4 and 5 storms, according to the 2014 National Climate Assessment.

The reason warmer oceans fuel stronger hurricanes is pretty easy to understand, University of Texas climate scientist Kerry Cook said.
"Hurricanes are fueled by the condensation of water in the atmosphere that evaporated from the surface," she said. "If the surface temperature is warmer, it increases the evaporation rate."

Scientists have even figured out how much more evaporation to expect. For every nearly 2 degrees of average sea surface temperature warming, evaporation increases 7 percent, Cook said. The concept relies on a well-known physics principle called the Clausius-Clapeyron relation, she said.

Hurricanes, "which are really a collective of thunderstorms," draw their moisture from a roughly 930-mile radius and can be affected by ocean temperatures up to 650 feet deep, Trenberth said. Tropical cyclones exchange heat between the oceans and the atmosphere. In general, the more heat in the system, the more intense the storm. "In the process, they actually cool off the tropical oceans," Trenberth said. "This is really one of the fundamental roles that hurricanes and typhoons play in the climate system. … They're sort of a relief valve."

Some of the most severe floods ever seen in Texas have been caused by hurricanes. In 1979, Tropical Storm Claudette brought thunderstorms that dumped 42 inches of rain — almost an entire year's worth — over 24 hours in an area south of Houston, Texas A&M University professor and State Climatologist John Nielsen-Gammon said. It was the heaviest 24-hour rainfall ever recorded in the continental U.S., he said.

San Antonio also has suffered deluges tied to hurricanes. In July 2002, a tropical disturbance in the Gulf led to 12.78 inches, the second-highest amount recorded, falling at San Antonio International Airport, he said. The most rain recorded at that station was 15.61 inches in a three-day period that ended Oct. 19, 1998. Moisture from hurricanes in the Atlantic and Pacific contributed to that flooding, according to the city's Office of Emergency Management.
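The Clausius-Clapeyron scaling Cook describes can be sanity-checked in a few lines. The sketch below uses the Bolton (1980) approximation for saturation vapor pressure; the formula choice and the numbers are mine, not from the article, and serve only to illustrate the "more warmth, more evaporation" relationship.

```python
import math

def saturation_vapor_pressure(t_celsius):
    """Bolton (1980) fit for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.67 * t_celsius / (t_celsius + 243.5))

def evaporation_increase_pct(t_celsius, warming_c=1.0):
    """Percent rise in saturation vapor pressure (a proxy for evaporation
    potential) when the surface warms by `warming_c` degrees Celsius."""
    ratio = (saturation_vapor_pressure(t_celsius + warming_c)
             / saturation_vapor_pressure(t_celsius))
    return 100.0 * (ratio - 1.0)

# The article's 86 F Gulf surface temperature is 30 C.
print(round(evaporation_increase_pct(30.0), 1))
```

At Gulf-like temperatures this comes out around 6 percent per degree Celsius, close to the 7 percent figure Cook cites (the exact rate falls slowly as the baseline temperature rises).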
What scientists can't say for sure is whether hurricanes and tropical storms are happening more frequently. Unlike some other extreme weather events, like heat waves, blizzards and rainstorms, hurricanes are relatively rare. In Texas, two named storms make landfall every three years, on average, Nielsen-Gammon said. For Category 3 storms and above, the average rate is one per decade, he said.

That means there's not enough data to say with certainty that the frequency of these storms is going up, Cook said. "When you try to look at observations, it takes a long time to build up enough numbers of hurricanes so that you can say with statistical significance that you have observed an (increase)," she said, adding it might take another 20 years for a pattern to emerge.
When researching or managing threatened species, the status of the species is commonly described in terms of percentage declines. For example, species X may have experienced a 90% decline in distribution, or species Y may have seen an 85% decline in abundance in the last 20 years. Declines in range and population abundance are standard metrics in conservation science. However, focusing only on the rate and magnitude of species declines can miss important aspects of why species are declining, and this missing information could be crucial for developing effective management.

Dr Ben Scheele and Dr Claire Foster from the Australian National University explain work they are involved in that explores threatened species and the idea of a reduced niche breadth.

To conserve species, we need to know why they decline more rapidly or severely in some locations than others. It's more than just measuring the size of the overall decline. Our new research shows that it is common for species to decline much more in some parts of their range than others. This is because the severity of threats can be reduced or amplified by environmental conditions, or because the species' ability to tolerate the threat differs in different environments. We describe these environmentally determined declines as 'reductions in niche breadth'.

A species' niche is the multidimensional environmental space in which a species is able to survive and reproduce. As a species' niche is shaped by its environmental tolerances, interactions with other species and movement barriers, threats that modify one or more of these factors can lead to reductions in the size of that niche (niche breadth).

Figure 1: A conceptualised model of how the abundance of a species can vary over space. The fundamental niche is the environmental space the species can occupy when it is not limited by other species and threats.
Its historical niche is that part of the fundamental niche the species occupied prior to threat emergence, being limited by native competitors, predators and pathogens. Its contemporary niche is where it is currently found following the introduction of some new threat (such as chytrid fungus for frogs, or cats for small mammals). The important thing about this conceptualisation is that we shouldn't constrain our thinking or our management to where a threatened species is found today without considering the implications of threats and management to the other parts of its niche where it no longer occurs. (Modified from Scheele B, CN Foster, SC Banks & DB Lindenmayer (2017). Niche Contractions in Declining Species: Mechanisms and Consequences. http://www.sciencedirect.com/science/article/pii/S0169534717300496)

Species such as the armoured mistfrog have been lost from closed canopy rainforest sites where they were known to historically occur and now persist only in open savanna sites where the chytrid fungus doesn't do so well. Image: Conrad Hoskin

An example helps to illustrate this. The armoured mistfrog, pictured above, is a species that was originally thought to live only in tropical rainforest. With the arrival of chytrid fungus (an introduced pathogen that has been wiping out many different species of frog), it was feared the armoured mistfrog had gone extinct when all known populations disappeared. However, a new population of the frogs was later discovered in open savanna habitat. This site was hot and dry enough that it was poor habitat for the chytrid fungus, and so even though chytrid is present, its impacts on the mistfrogs are greatly reduced. Since the arrival of chytrid, the armoured mistfrog has been unable to survive in rainforest, its pre-decline primary habitat, but it can persist in open savanna sites because they are less suitable for chytrid.
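The niche-reduction idea can be made concrete with a toy model. Everything below is illustrative: the one-dimensional gradient, the growth and threat curves, and all numbers are assumptions of this sketch, not data from the study. A species' net growth is positive over a wide band of the gradient (its fundamental niche); subtracting a threat that bites hardest in one part of the gradient, as chytrid does in cool, wet rainforest, shrinks the band the species can actually occupy.

```python
import numpy as np

# Illustrative 1-D environmental gradient (e.g. temperature in degrees C).
env = np.linspace(0.0, 40.0, 401)

# Hypothetical intrinsic growth: positive between roughly 8 and 32 degrees.
intrinsic_growth = 1.0 - ((env - 20.0) / 12.0) ** 2

# Hypothetical threat (e.g. a pathogen) that peaks in the cooler part of
# the gradient and suppresses growth there.
threat_pressure = 1.2 * np.exp(-(((env - 14.0) / 6.0) ** 2))

def niche_breadth(net_growth):
    """Width of the gradient interval where net growth is positive."""
    suitable = env[net_growth > 0.0]
    return float(suitable.max() - suitable.min()) if suitable.size else 0.0

fundamental = niche_breadth(intrinsic_growth)
contemporary = niche_breadth(intrinsic_growth - threat_pressure)
print(fundamental, contemporary)  # the threat narrows the occupied band
```

In this sketch the species persists only in the warmer part of its fundamental niche once the threat arrives, the same pattern as the armoured mistfrog's shift from rainforest to open savanna.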
Another example is the decline of many small Australian mammal species that have now been lost from open grassy habitats due to predation by introduced feral cats and foxes. Before the arrival of cats and foxes, many small mammal species used a variety of habitats, from open grassy woodland, to dense heath and forest. However, these days, many of these small mammals are found only in dense, structurally complex habitats, where they can more easily escape predation by foxes and cats. This represents a major reduction in these species' niche breadth.

Recognizing that declining species can experience reductions in niche breadth can help to identify where to focus conservation actions. For example, for species which declined many decades ago, recognising that currently occupied habitat (where they are found now) could represent only a small part of the species' potential habitat may open up new areas for conservation actions. In contrast, for species that have declined only recently, it is important to identify whether the species' niche has been reduced, and what has caused this reduction. This understanding is important to avoid conservation efforts being wasted on areas that are no longer suitable for the species.

The threatened alpine tree frog pictured top (image: David Hunter) is now only found in larger alpine ponds such as those pictured above, where high breeding success allows the frog to persist even if the population is afflicted with chytrid disease. Image: Joslin Moore.

In many cases, long-term control or removal of threats is not possible. In these cases, recognising parts of a species' niche where threats are reduced or where the impacts from threats can be tolerated is crucial. For example, threatened alpine tree frogs in the Snowy Mountains are now locally extinct in sphagnum wetlands because of chytrid fungus.
The species is now only found in larger ponds, where high breeding success allows the frog to persist, even though most frogs now die after only one year (compared with the frog’s ‘normal’ life span of four or more years prior to chytrid). As it is not feasible to eradicate chytrid from wetland habitats, a practical conservation option is to create new large ponds to increase the number of populations of this species.

Recognising that a species’ niche is shaped by many different but interacting factors can be useful for identifying novel conservation actions, particularly when direct threat control is not possible. Returning to the example of a small mammal that now persists only in dense scrubby habitats, this habitat may be good for predator evasion, but poor in other ways, such as food availability. In such circumstances, conservation actions to increase food resources may help to increase species abundance. In other areas, it may be possible to artificially create some of the features that help the species to evade predators in dense habitats, allowing the species to re-expand into more open habitats.

In an era of mass biodiversity loss, understanding how threats shape the realized niche of declining species can assist the development of new management responses and identify where to prioritise conservation actions. This ‘niche-reduction hypothesis’ provides a new lens for understanding why species decline in some locations and not others, with important implications for how we research and manage declining species. For further information:

Top image: The threatened alpine tree frog. Image: David Hunter

Most people know that cats kill many birds and mammals, but they also have impacts on less charismatic species. Australian cats are killing about 650 million reptiles per year, according to new research published in the journal Wildlife Research.
You have to be pretty lucky to make a living by combining your passion and interests, and that’s exactly how Dr Daniel White feels about his current state of affairs. Dan began his career studying genes, and has since applied his science to saving species. Here he describes how.

The TSR Hub recognises that outcomes for threatened species will be improved by increasing Indigenous involvement in their management. In response, the Hub is guided by an Indigenous Reference Group and has a number of projects across Australia that are collaborating with Indigenous groups on threatened species research on their country.

A new contagious fungal plant disease, myrtle rust, has entered Australia. It is highly mobile, can reproduce rapidly and is infecting many species across a broad geographic range. Containment and eradication responses have so far been unsuccessful.

Australia is losing large old hollow-bearing trees in our mountain ash forests due to logging, fires and climate change. A team at the Australian National University has been investigating the importance of these trees, the implications of their loss and things we can do to ensure we have enough mountain giants for the future.
Genus: Cirrus (curl of hair)
Altitude: above 6,000 m (above 20,000 ft)

Cirrus fibratus is a type of cirrus cloud. The name cirrus fibratus is derived from Latin, meaning "fibrous". These clouds are similar to cirrus uncinus, commonly known as "mares' tails"; however, fibratus clouds do not have tufts or hooks at the end. The filaments are usually separate from one another.
Numerical study of flow and heat transfer from a torus placed in a uniform flow

Forced convection heat transfer characteristics of a torus (maintained at a constant temperature) immersed in a fluid streaming normal to the plane of the torus are studied numerically. The governing equations, namely continuity, momentum and thermal energy in a toroidal coordinate system, are solved using a finite difference method over ranges of parameters (aspect ratio of torus, 1.4 ≤ Ar ≤ 20; Reynolds number, 20 ≤ Re ≤ 40; Prandtl number, 0.7 ≤ Pr ≤ 10). Over the ranges of parameters considered herein, the flow is assumed to be steady. In particular, numerical results are presented elucidating the influence of Reynolds number, Prandtl number and aspect ratio on the isotherm patterns and on the local and average Nusselt numbers for the constant-temperature (on the surface of the torus) boundary condition. As expected, at large aspect ratio the flow pattern and heat transfer are similar to the case of flow and heat transfer over a single circular cylinder.

Keywords: Heat Transfer, Reynolds Number, Nusselt Number, Prandtl Number, Circular Cylinder
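The large-aspect-ratio limit can be sanity-checked against the single circular cylinder, for which a well-known engineering approximation exists: the Churchill–Bernstein correlation for the average Nusselt number in cross-flow. The sketch below is not from the paper (which solves the governing equations directly); it only indicates the Nusselt-number magnitude a large-aspect-ratio torus should approach at these parameters.

```python
def nusselt_cylinder(re, pr):
    """Churchill-Bernstein correlation: average Nusselt number for forced
    convection over a circular cylinder in cross-flow (valid roughly for
    re * pr > 0.2)."""
    if re * pr <= 0.2:
        raise ValueError("correlation requires re * pr > 0.2")
    core = 0.62 * re**0.5 * pr**(1 / 3) / (1 + (0.4 / pr)**(2 / 3))**0.25
    return 0.3 + core * (1 + (re / 282_000)**(5 / 8))**(4 / 5)

# Upper end of the paper's parameter range: Re = 40, Pr = 0.7.
print(nusselt_cylinder(40, 0.7))   # ~3.4
```

At these low Reynolds numbers the correction factor (Re/282000)^(5/8) is negligible, so the correlation is dominated by the Re^(1/2) Pr^(1/3) laminar term.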
Great apes know when you're mistaken, study finds

New research shows chimpanzees, bonobos and orangutans can identify false beliefs

Great apes can tell when a person is mistaken and help to set them straight. A study published in the journal PLOS ONE on Wednesday found that great apes know when a person is holding a false belief. The experiment was conducted at the Leipzig Zoo in Germany and led by David Buttelmann from the Max Planck Institute for Evolutionary Anthropology and the University of Bern. Chimpanzees, bonobos and orangutans watched while an actor placed an object in one of two boxes. For the test, the person left the room and a second person took the object out of the first box, put it into another box and locked both boxes. When the first person returned and attempted to retrieve the item from the original box, the apes — who had learned how to unlock the boxes — could decide accurately which box to open in order to assist. The apes unlocked the correct box more often than they had in tests where the actor had demonstrated he knew the correct box. It's been established for a while that humankind's nearest primate relatives can understand some things about the psychological states of others. They can read expressions to interpret food preferences, for example. But understanding when someone else has a false belief is a mark of advanced social cognition, and researchers previously thought great apes lacked that capacity. "Finding evidence of belief-tracking in great apes was kind of a surprise to all of us," said Buttelmann. A study published in the journal Science last year established the possibility that great apes had some awareness of false belief. That study tracked the eye movements of apes who knew an object had been moved, but understood a human believed it was in the original location. Buttelmann and his colleagues "were not satisfied" with that methodology, though, he said in an interview with CBC News.
"We thought if apes indeed have an understanding of false belief they should be able to use it." These new results build on those earlier findings and establish that great apes "not only understand others' beliefs, they plan and execute their social interactions according to this understanding," he said. The point of investigating social cognition in apes is not just to get to know our close evolutionary cousins, but to help us understand how our own cognitive abilities evolved — essentially, what makes humans human. "The main idea of why we do those studies, not just because they're fun, but is to find out how theory of mind in humans evolved," said Buttelmann. Theory of mind is the understanding that others have thoughts, beliefs and knowledge that might be different from our own. We can better appreciate our own cognitive evolution if we compare ourselves to animals who share some characteristics with us, he said. The experiment was originally designed for human toddlers 16 to 18 months old, and conducted with toys in the boxes, the study says. The test was applied to the great apes in much the same fashion as with the tiny humans. "Our studies show that there are at least some precursors of theory of mind in great apes," said Buttelmann. From here scientists can explore whether it's because we live in large groups that we evolved to have advanced social cognition, or whether that hinged on our roots as hunters, for example, he said. Another possible distinction? Where great apes may use their social cognition competitively — to mate with a desirable female while a dominant male is seeking food in another area, for example — early humans may have put theirs to work for "pro-social" reasons like sharing knowledge of how to cook a potato, said Buttelmann.
Green technique harnesses solar power to make clean fuel

Published online 16 October 2017

Scientists make an ultrathin material for storing the sun’s energy.

An international research team has synthesized a new type of ultrathin, nanoplatelet-shaped material that can help store the sun’s energy in a supercapacitor and convert it into chemical energy that splits water into oxygen and hydrogen [1]. A prototype shows that the material can generate, store and deliver charge whenever needed. This paves the way for creating multifunctional devices that operate on just sunlight, says Maher El-Kady from Cairo University, Egypt, one of the scientists responsible for the research. The team, which also included researchers from Iran and the USA, prepared the material by depositing layers of common elements such as nickel, cobalt and iron on a substrate made of nickel foam. The prototype was stable, showing only 8.7% decay in capacitance after 5,000 cycles of charging and discharging. After charging this device for 20 seconds at 1.5 volts, it successfully powered a mini-motor and drove a rotor for five minutes. When two such devices were connected in series and charged for 20 seconds at 3 volts, they were able to light up 35 green round light-emitting diodes for more than 30 minutes and run a clock for more than five hours. According to the researchers, it’s also possible to integrate a supercapacitor and a water-splitting system; connecting a single cell to a solar cell turned on a mini-motor for a few seconds and split water.

1. Shabangoli, Y. et al. An integrated electrochemical device based on earth-abundant metals for both energy storage and conversion. Energy Storage Mater. https://doi.org/10.1016/j.ensm.2017.09.010 (2017)
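The reported cycling stability can be translated into a per-cycle figure with simple arithmetic. This is a back-of-envelope illustration assuming a uniform geometric fade per cycle; the paper reports only the aggregate 8.7% figure.

```python
# 8.7% capacitance decay after 5,000 charge/discharge cycles:
retained_after_5000 = 1.0 - 0.087            # 0.913 of initial capacitance

# Equivalent per-cycle retention, assuming uniform geometric fade
per_cycle = retained_after_5000 ** (1 / 5000)
print(per_cycle)   # ~0.99998, i.e. roughly 0.002% capacitance lost per cycle
```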
Carnot cycle

The Carnot cycle is a theoretical thermodynamic cycle proposed by French physicist Sadi Carnot in 1824 and expanded upon by others in the 1830s and 1840s. It provides an upper limit on the efficiency that any classical thermodynamic engine can achieve during the conversion of heat into work, or conversely, the efficiency of a refrigeration system in creating a temperature difference by the application of work to the system. It is not an actual thermodynamic cycle but is a theoretical construct. Every single thermodynamic system exists in a particular state. When a system is taken through a series of different states and finally returned to its initial state, a thermodynamic cycle is said to have occurred. In the process of going through this cycle, the system may perform work on its surroundings, for example by moving a piston, thereby acting as a heat engine. A system undergoing a Carnot cycle is called a Carnot heat engine, although such a "perfect" engine is only a theoretical construct and cannot be built in practice. However, a microscopic Carnot heat engine has been designed and run. Essentially, there are two "heat reservoirs" forming part of the heat engine at temperatures Th and Tc (hot and cold respectively). They have such large thermal capacity that their temperatures are practically unaffected by a single cycle. Since the cycle is theoretically reversible, there is no generation of entropy during the cycle; entropy is conserved. During the cycle, an arbitrary amount of entropy ΔS is extracted from the hot reservoir, and deposited in the cold reservoir. Since there is no volume change in either reservoir, they do no work, and during the cycle, an amount of energy ThΔS is extracted from the hot reservoir and a smaller amount of energy TcΔS is deposited in the cold reservoir. The difference in the two energies (Th − Tc)ΔS is equal to the work done by the engine.
The Carnot cycle when acting as a heat engine consists of the following steps:
- Reversible isothermal expansion of the gas at the "hot" temperature, Th (isothermal heat addition or absorption). During this step (1 to 2 on Figure 1, A to B in Figure 2) the gas is allowed to expand, doing work on the surroundings by pushing up the piston (stage 1 figure, right). Although the pressure drops from points 1 to 2 (figure 1), the temperature of the gas does not change during the process because it is in thermal contact with the hot reservoir at Th, and thus the expansion is isothermal. Heat energy Q1 is absorbed from the high temperature reservoir, resulting in an increase in the entropy of the gas by the amount ΔS1 = Q1/Th.
- Isentropic (reversible adiabatic) expansion of the gas (isentropic work output). For this step (2 to 3 on Figure 1, B to C in Figure 2) the gas in the engine is thermally insulated from both the hot and cold reservoirs. Thus it neither gains nor loses heat, an 'adiabatic' process. The gas continues to expand by reduction of pressure, doing work on the surroundings (raising the piston; stage 2 figure, right), and losing an amount of internal energy equal to the work done. The gas expansion without heat input causes it to cool to the "cold" temperature, Tc. The entropy remains unchanged.
- Reversible isothermal compression of the gas at the "cold" temperature, Tc (isothermal heat rejection) (3 to 4 on Figure 1, C to D on Figure 2). Now the gas in the engine is in thermal contact with the cold reservoir at temperature Tc. The surroundings do work on the gas, pushing the piston down (stage 3 figure, right), causing an amount of heat energy Q2 to leave the system to the low temperature reservoir and the entropy of the system to decrease by the amount ΔS2 = Q2/Tc. (This is the same amount of entropy absorbed in step 1, as can be seen from the Clausius inequality.)
- Isentropic compression of the gas (isentropic work input).
(4 to 1 on Figure 1, D to A on Figure 2) Once again the gas in the engine is thermally insulated from the hot and cold reservoirs, and the engine is assumed to be frictionless, hence reversible. During this step, the surroundings do work on the gas, pushing the piston down further (stage 4 figure, right), increasing its internal energy, compressing it, and causing its temperature to rise back to Th due solely to the work added to the system, but the entropy remains unchanged. At this point the gas is in the same state as at the start of step 1. In this case, Q2/Q1 = Tc/Th. This is true as Q2 and Tc are both lower than Q1 and Th, and are in fact in the same ratio.

The pressure-volume graph

When the Carnot cycle is plotted on a pressure-volume diagram (figure 1), the isothermal stages follow the isotherm lines for the working fluid, the adiabatic stages move between isotherms, and the area bounded by the complete cycle path represents the total work that can be done during one cycle.

Properties and significance

The temperature-entropy diagram

The behaviour of a Carnot engine or refrigerator is best understood by using a temperature-entropy diagram (TS diagram), in which the thermodynamic state is specified by a point on a graph with entropy (S) as the horizontal axis and temperature (T) as the vertical axis (figure 2). For a simple closed system (control mass analysis), any point on the graph will represent a particular state of the system. A thermodynamic process will consist of a curve connecting an initial state (A) and a final state (B). The area under the curve will be Q = ∫ T dS (taken from A to B), which is the amount of thermal energy transferred in the process. If the process moves to greater entropy, the area under the curve will be the amount of heat absorbed by the system in that process. If the process moves towards lesser entropy, it will be the amount of heat removed. For any cyclic process, there will be an upper portion of the cycle and a lower portion.
For a clockwise cycle, the area under the upper portion will be the thermal energy absorbed during the cycle, while the area under the lower portion will be the thermal energy removed during the cycle. The area inside the cycle will then be the difference between the two, but since the internal energy of the system must have returned to its initial value, this difference must be the amount of work done by the system over the cycle. Referring to figure 1, mathematically, for a reversible process we may write the amount of work done over a cyclic process as:

W = ∮ P dV = ∮ (T dS − dU)

Since dU is an exact differential, its integral over any closed loop is zero, and it follows that the area inside the loop on a T-S diagram is equal to the total work performed if the loop is traversed in a clockwise direction, and is equal to the total work done on the system as the loop is traversed in a counterclockwise direction.

The Carnot cycle

Evaluation of the above integral is particularly simple for the Carnot cycle. The amount of energy transferred as work is

W = ∮ T dS = (Th − Tc)(SB − SA)

The total amount of thermal energy transferred from the hot reservoir to the system will be

Qh = Th(SB − SA)

and the total amount of thermal energy transferred from the system to the cold reservoir will be

Qc = Tc(SB − SA)

The efficiency η is defined to be:

η = W/Qh = 1 − Tc/Th

where
- W is the work done by the system (energy exiting the system as work),
- Qc is the heat taken from the system (heat energy leaving the system),
- Qh is the heat put into the system (heat energy entering the system),
- Tc is the absolute temperature of the cold reservoir,
- Th is the absolute temperature of the hot reservoir,
- SB is the maximum system entropy, and
- SA is the minimum system entropy.

This definition of efficiency makes sense for a heat engine, since it is the fraction of the heat energy extracted from the hot reservoir and converted to mechanical work. A Rankine cycle is usually the practical approximation.

Reversed Carnot cycle

The Carnot heat-engine cycle described is a totally reversible cycle.
That is, all the processes that comprise it can be reversed, in which case it becomes the Carnot refrigeration cycle. This time, the cycle remains exactly the same except that the directions of any heat and work interactions are reversed. Heat is absorbed from the low-temperature reservoir, heat is rejected to a high-temperature reservoir, and a work input is required to accomplish all this. The P-V diagram of the reversed Carnot cycle is the same as for the Carnot cycle except that the directions of the processes are reversed.

It can be seen from the above diagram that for any cycle operating between temperatures Th and Tc, none can exceed the efficiency of a Carnot cycle. Carnot's theorem is a formal statement of this fact: No engine operating between two heat reservoirs can be more efficient than a Carnot engine operating between those same reservoirs. Thus, Equation 3 gives the maximum efficiency possible for any engine using the corresponding temperatures. A corollary to Carnot's theorem states that: All reversible engines operating between the same heat reservoirs are equally efficient. Rearranging the right side of the equation gives what may be a more easily understood form, namely that the theoretical maximum efficiency of a heat engine equals the difference in temperature between the hot and cold reservoir divided by the absolute temperature of the hot reservoir:

η = (Th − Tc)/Th

Looking at this formula, an interesting fact becomes apparent: lowering the temperature of the cold reservoir will have more effect on the ceiling efficiency of a heat engine than raising the temperature of the hot reservoir by the same amount. In the real world, this may be difficult to achieve since the cold reservoir is often an existing ambient temperature. In other words, maximum efficiency is achieved if and only if no new entropy is created in the cycle, which would be the case if e.g. friction leads to dissipation of work into heat.
In that case the cycle is not reversible and the Clausius theorem becomes an inequality rather than an equality. Otherwise, since entropy is a state function, the required dumping of heat into the environment to dispose of excess entropy leads to a (minimal) reduction in efficiency. So Equation 3 gives the efficiency of any reversible heat engine. In mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. When work and heat fluctuations are counted, there is an exact equality that relates the average of exponents of the work performed by any heat engine to the heat transfer from the hotter heat bath.

Efficiency of real heat engines

Carnot realized that in reality it is not possible to build a thermodynamically reversible engine, so real heat engines are even less efficient than indicated by Equation 3. In addition, real engines that operate along this cycle are rare. Nevertheless, Equation 3 is extremely useful for determining the maximum efficiency that could ever be expected for a given set of thermal reservoirs. Although Carnot's cycle is an idealisation, the expression of Carnot efficiency is still useful. Consider the average temperatures ⟨TH⟩ and ⟨TC⟩ at which heat is input and output, respectively. Replace TH and TC in Equation (3) by ⟨TH⟩ and ⟨TC⟩ respectively. For the Carnot cycle, or its equivalent, the average value ⟨TH⟩ will equal the highest temperature available, namely TH, and ⟨TC⟩ the lowest, namely TC. For other less efficient cycles, ⟨TH⟩ will be lower than TH, and ⟨TC⟩ will be higher than TC. This can help illustrate, for example, why a reheater or a regenerator can improve the thermal efficiency of steam power plants, and why the thermal efficiency of combined-cycle power plants (which incorporate gas turbines operating at even higher temperatures) exceeds that of conventional steam plants. The first prototype of the diesel engine was based on the Carnot cycle.
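The Carnot bound, and the observation that lowering the cold-reservoir temperature helps more than raising the hot-reservoir temperature by the same amount, are easy to check numerically. A minimal sketch (the 500 K and 300 K reservoir temperatures are illustrative, not from the text):

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum (Carnot) efficiency between reservoirs at absolute
    temperatures t_hot and t_cold, in kelvin: eta = 1 - Tc/Th."""
    if not 0 < t_cold < t_hot:
        raise ValueError("require 0 < t_cold < t_hot")
    return 1.0 - t_cold / t_hot

base = carnot_efficiency(500.0, 300.0)           # 0.40
colder_sink = carnot_efficiency(500.0, 250.0)    # 0.50
hotter_source = carnot_efficiency(550.0, 300.0)  # ~0.45

# A 50 K shift at the cold end beats the same shift at the hot end:
assert colder_sink > hotter_source > base
```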
A ticket to a school play cost dollars, where is a whole number. A group of 9th graders buys tickets costing a total of $, and a group of 10th graders buys tickets costing a total of $. How many values for are possible? This problem is copyrighted by the American Mathematics Competitions.
To determine whether the mutation in a red yeast cell is in Ade1 or Ade2, the knowledge of complementation was used. The idea of complementation is that when two mutant haploid cells mate and produce a diploid, the ability of the diploid to produce functional, non-mutant proteins depends on whether the parent mutations were in the same gene or different genes. If the mutations were in the same gene, the diploid would inherit two dysfunctional alleles, and would therefore also be a mutant phenotype. However, if the mutations were in different genes, then the diploid would have one mutant allele and one functional allele for each of the mutant genes. The functional gene would be able to produce a functional product, and the diploid organism would not show the mutant phenotype. In this experiment, if the mutations of the haploid parents were both in Ade1 or Ade2, the diploid offspring would not have a functional copy of either enzyme, and thus still be red. If one mutation was in Ade1 and the other was in Ade2, the mutations would complement, and the diploid yeast would appear to be wild-type. Through complementation analysis, the unknown mutant gene in a red yeast colony can be determined.
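The complementation rule described above reduces to a single comparison: the diploid shows the wild-type phenotype exactly when the two parental mutations lie in different genes. A minimal sketch (the gene labels only indicate which gene carries each mutation):

```python
def diploid_is_wild_type(gene_mutated_in_parent_a, gene_mutated_in_parent_b):
    """Complementation test: the diploid is wild-type (white) iff the two
    parental mutations are in different genes, so each mutant allele is
    covered by a functional copy inherited from the other parent."""
    return gene_mutated_in_parent_a != gene_mutated_in_parent_b

assert diploid_is_wild_type("Ade1", "Ade2")       # different genes: white diploid
assert not diploid_is_wild_type("Ade2", "Ade2")   # same gene: diploid still red
```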
The discovery was made by a scientific team led by astronomers at the California Institute of Technology (Caltech) that included three members from Vanderbilt. The team used data from NASA’s Kepler mission combined with additional observations of a single star, called KOI-961, to determine that it possesses three planets that range in size from 0.57 to 0.78 times the radius of Earth. This makes them the smallest of the more than 700 exoplanets confirmed to orbit other stars. In their investigation of KOI-961, which is about 130 light years away in the Cygnus constellation, the astronomers found that it is nearly identical to Barnard’s star, which is only six light years away in the constellation Ophiuchus. This similarity allowed them to use information about Barnard’s star, which was discovered in 1916 by Vanderbilt astronomer E.E. Barnard, to determine the mass, size and luminosity of the distant star. These values, in turn, were used to determine the size of the three new exoplanets. “Barnard’s star and KOI-961 are both M dwarfs, which are also known as red dwarfs. This is the smallest category of stars. They are popular targets for exoplanet hunters because their small size makes it easier to detect Earth-sized planets,” said Keivan Stassun, the professor of astronomy who headed the Vanderbilt contingent. The other Vanderbilt scientists involved were Research Assistant Professors Joshua Pepper and Leslie Hebb. From the 1960s through the 1980s, astronomers thought that Barnard’s star also had a planetary system – specifically one or two planets larger than Jupiter. If their existence had been verified, it would have been a scientific first, but the evidence was ultimately discredited. Today, advances in telescope technology and image processing allow astronomers to identify stars with exoplanets with considerable confidence.
Barnard’s star a favorite science fiction destination

Although Barnard’s star is too dim to be seen by the naked eye, its proximity to the Sun and the possibility that it possessed a planetary system made it a favorite destination for science fiction writers. It appears in dozens of science fiction novels, including Hitchhiker’s Guide to the Galaxy, movies like the 1979 film The Alien Encounters, television series including Galactica Discovers Earth and a number of computer and video games. By contrast, KOI-961 is one of thousands of nameless stars that NASA’s Kepler mission has identified as candidates that may possess planetary systems. The Kepler spacecraft contains a specially designed telescope that continuously monitors the brightness of 150,000 stars at a time. It flags stars whose brightness dips periodically because the dimming could be caused by a planet that passes in front of the star as viewed from Earth. Astronomers call this the transit method of planet detection. The Caltech team used the Kepler data on KOI-961 along with follow-up observations from the Palomar Observatory near San Diego and the W. M. Keck Observatory in Hawaii to confirm the existence of its planetary system and to determine the size of its planets.

Vanderbilt astronomers helped determine star’s size

The transit method provides astronomers with the ratio of the size of the planet to that of the star. As a result, they needed to determine the star’s size to calculate the size of the planets. The Kepler telescope gives some crude information about a star’s diameter, but the researchers knew that this data is particularly unreliable for M dwarfs, Stassun said. So the Vanderbilt contingent performed the additional telescope observations and analysis that were required to get an accurate estimate of the star’s size.
To get better estimates of the star’s properties, the astronomers obtained an accurate measure of the star’s color from Vanderbilt’s telescope in southern Arizona and a detailed spectrum of the star from Palomar and Keck. This provided a fingerprint of KOI-961. “When we compared its fingerprint with those of the best known M dwarfs we found that Barnard’s star was the best match,” said Stassun. That was fortunate because Barnard’s star is one of the most studied and best characterized M dwarfs. Specifically, there is an accurate estimate of its size, which is one-fifth that of the Sun. This allowed the researchers to start with a mathematical model of Barnard’s star and alter it to account for the subtle differences between the two stars. When they did, the model produced an even smaller estimate of KOI-961’s size: about one-sixth that of the Sun. Once the size of the star was established, the team used the Kepler data to calculate that the three exoplanets range from the size of Mars to slightly more than three-quarters the size of Earth. They also determined that these planets orbit the star with periods ranging from a half day to two days. Such short periods mean that all three orbit so close to their star that they must be too hot for liquid water to exist and life to evolve, the astronomers calculate.

New system comparable in size to Jupiter and its moons

The diminutive dimensions of this planetary system prompted John Johnson, the principal investigator of the research from NASA's Exoplanet Science Institute at Caltech, to comment, "The really amazing thing about this system is that the closest size comparison is to Jupiter and its moons." (KOI-961 is just 70 percent bigger than Jupiter and its exoplanets are comparable in size and have similar orbital periods to the Galilean moons that circle the Jovian planet.) The fact that Barnard’s star doesn’t have a giant planet doesn’t preclude the possibility that it has smaller planets.
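The chain of reasoning above is short arithmetic: the transit depth gives the planet-to-star radius ratio, and the star’s absolute size then converts that ratio into a planet radius. A sketch with illustrative numbers (the transit depth below is a made-up value chosen to reproduce the reported 0.78-Earth-radius planet; it is not actual Kepler photometry):

```python
import math

R_SUN = 6.957e8    # solar radius, m
R_EARTH = 6.371e6  # Earth radius, m

def planet_radius(transit_depth, star_radius_m):
    """Transit method: fractional dip in brightness = (Rp / Rs)**2,
    so Rp = Rs * sqrt(depth)."""
    return math.sqrt(transit_depth) * star_radius_m

star_radius = R_SUN / 6.0   # KOI-961 is about one-sixth the Sun's size
depth = 1.84e-3             # hypothetical 0.184% brightness dip

rp_earth = planet_radius(depth, star_radius) / R_EARTH
print(round(rp_earth, 2))   # ~0.78
```

This is also why the star’s size had to be pinned down first: an error in the stellar radius propagates directly into every planet radius.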
The discovery of another M dwarf that has small exoplanets increases the likelihood that Barnard’s star may have some as well. If it does, however, the planets must orbit at a much greater distance than those of KOI-961. The Kepler mission requires that a star’s brightness dip three times before the star is tagged as a planet-bearing candidate. As a result, the longer a planet’s orbital period, the more difficult it is to discover. For example, if a planet orbits a star once a year, it would take three years of continuous observations to detect it in this fashion. The Vanderbilt team’s contribution was supported by Vanderbilt’s Initiative in Data-intensive Astrophysics. Visit Research News @ Vanderbilt for more research news from Vanderbilt. David F. Salisbury | Vanderbilt University
<urn:uuid:71d5b39a-acd1-4e03-a047-1e3798849c64>
4
1,959
Content Listing
Science & Tech.
43.300447
95,590,292
The nuclear magnetic resonance gyroscope (NMRG) is based on spin-exchange optical pumping of noble gases to detect and measure the angular velocity of the carrier, but it is challenging to measure the precession signal of the noble-gas nuclei directly. Instead, the primary detection method uses alkali atoms: the precession of the nuclear magnetization modulates the alkali atoms at the Larmor frequency of the nuclei, and the precession signal of the alkali atoms is comparatively easy to detect. The precession frequency of the alkali atoms is read out through the rotation angle of linearly polarized probe light, and a differential detection method is commonly used in NMRGs to measure this rotation angle. The accuracy of the differential detection system therefore affects the sensitivity of the NMRG. To further improve the sensitivity of the NMRG, this paper focuses on signal detection and presents an error analysis as well as an experimental study of the detection of the rotation angle of linearly polarized light. Through theoretical analysis and experimental illustration, we found that the extinction ratio σ2 and the DC bias are the factors that produce detection noise in the differential detection method.
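The differential (balanced) detection the abstract refers to can be sketched numerically. In a generic balanced polarimeter — not the authors' specific apparatus — the probe beam is split by a polarizing beam splitter set at 45°, and a small optical rotation θ is recovered from the normalized difference of the two photodiode signals, since (I1 − I2)/(I1 + I2) = sin 2θ. The sketch below also shows how a DC bias on one channel, one of the error sources the paper analyzes, distorts the recovered angle; all numbers are illustrative.

```python
import math

def balanced_rotation_angle(i1, i2):
    """Recover a small polarization rotation angle (rad) from the two
    photodiode currents of a balanced polarimeter set at 45 degrees.
    For a rotation theta: I1 ∝ cos^2(45° - θ), I2 ∝ sin^2(45° - θ),
    so (I1 - I2)/(I1 + I2) = sin(2θ)."""
    return 0.5 * math.asin((i1 - i2) / (i1 + i2))

# Simulate the detector currents for a known small rotation angle.
theta_true = 1e-3  # 1 mrad of optical rotation
i0 = 1.0           # total detected intensity (arbitrary units)
i1 = i0 * math.cos(math.pi / 4 - theta_true) ** 2
i2 = i0 * math.sin(math.pi / 4 - theta_true) ** 2

theta_est = balanced_rotation_angle(i1, i2)

# A DC bias on one channel (e.g. an electronic offset) shifts the
# estimate away from the true angle -- a detection-noise mechanism
# of the kind the paper studies.
dc_bias = 0.01
theta_biased = balanced_rotation_angle(i1 + dc_bias, i2)

print(theta_est, theta_biased)
```

The same structure makes the role of a finite extinction ratio visible: any leakage of the wrong polarization into a channel acts like an intensity-dependent offset in i1 or i2.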
<urn:uuid:eba2b13d-9ca5-4d3b-b318-82a61bb22265>
2.6875
248
Academic Writing
Science & Tech.
8.101471
95,590,307
More Sheaf Theory We introduced sheaves of functions in the previous chapter as a convenient language for defining manifolds and varieties. However, as we will see, there is much more to this story. In this chapter, we develop sheaf theory in a more systematic fashion. Presheaves and sheaves are somewhat more general notions than what we described earlier. We give the full definitions here, and then explore their formal properties. We define the notion of an exact sequence in the category of sheaves. Exact sequences and the associated cohomology sequences, given in the next chapter, form one of the basic tools used throughout the rest of the book. We also give a brief introduction to Grothendieck’s theory of schemes. A scheme is a massive generalization of an algebraic variety, and quite a bit of sheaf theory is required just to give the definition. Keywords: abelian group, exact sequence, vector bundle, topological space, line bundle
<urn:uuid:f5d5431e-64a7-4a19-9119-d952a22cb057>
2.703125
209
Truncated
Science & Tech.
37.492134
95,590,325
For the first time ever, scientists were able to prove that plants can make decisions based on the perceived level of risk and variable conditions. If that’s not the first step toward sentient plants, capable of thinking and moving of their own accord – Harry Potter’s Whomping Willow, anyone? – I don’t know what is. On a more serious note, however, scientists discovered that pea plants were putting out more roots if placed in pots of soil with higher levels of nutrients, similar to animals’ behavior of devoting more energy and resources to foraging and hunting when food is plentiful. Then, researchers separated each pea plant’s roots into two pots with variable conditions. One pot offered the plant a constant level of nutrients while the second one sustained rising and falling levels. The soil in one pair of pots was consistently of poor quality, while the soil in another pair featured an above-average supply of nutrients. According to the researchers’ hypothesis, the plants would choose to “grow more roots in the variable soil when the constant quality was low, and opt to devote root resources to the constant pot when soil quality was better.” Surprisingly enough, the pea plants’ roots followed this exact prediction. Their adaptive behavior is much like the decision making that takes place in the human brain when we’re faced with risk variables. In general, humans are more likely to gamble or take risks when there is less at stake; when times are good, taking a risk offers little extra gain. “To our knowledge, this is the first demonstration of an adaptive response to risk in an organism without a nervous system,” said Alex Kacelnik, a zoologist and researcher at Oxford University. The authors explained that their study’s purpose was not to prove that plants are somehow intelligent, like other animals or humans. Instead, their focus was on showing that they are rather complex and act on particularly interesting behaviors. 
Theoretically, their findings could be classified as biological adaptations, as the plants have developed processes that help them exploit natural opportunities as efficiently as possible. Published in the journal Current Biology, the study suggests other varying models of behavioral economics could be used to predict this interesting decision making of plants. 
<urn:uuid:c8b00b43-03dc-4ad2-9ae3-d22bd3db4110>
3.65625
553
Personal Blog
Science & Tech.
38.475392
95,590,326
The Tcl Programming Language Tcl (Tool Command Language) is a very powerful but easy to learn dynamic programming language, suitable for a very wide range of uses, including web and desktop applications, networking, administration, testing and many more. Open source and business-friendly, Tcl is a mature yet evolving language that is truly cross platform, easily deployed and highly extensible. For more information on Tcl see http://www.tcl.tk and http://wiki.tcl.tk .
|File||Size||Last updated|
|mingw32-tcl-rpmlintrc||210 Bytes||over 7 years ago|
|mingw32-tcl.spec||5.47 KB||9 months ago|
|tcl8.6.7-src.tar.gz||9.24 MB||9 months ago|
<urn:uuid:9094938d-736c-4b14-8a19-125235bc265c>
2.703125
192
Product Page
Software Dev.
82.393929
95,590,344
A new type of biological camera can trace several different molecules at once in a live animal Doctors and scientists can visualize specific biological processes in living creatures by monitoring radioactive tracer molecules. So far, imaging techniques have largely been limited to seeing one tracer molecule at a time, which is unlikely to provide the full picture of complex functions or diseases. Now Shuichi Enomoto, Shinji Motomura and co-workers at the RIKEN Molecular Imaging Research Program in Kobe and Wako have produced images of three radioactive isotopes at the same time in a live mouse (1). The researchers adapted a gamma-ray imaging device called a semiconductor Compton camera, which was originally developed for gamma-ray astrophysics. “We had been working on research and development of ‘multitracer’ technology,” explains Motomura. “A multitracer contains radioisotopes of various chemical elements, so that many elements and their interactions can be observed by one experiment. Later we proposed realizing multiple molecular imaging with a semiconductor Compton camera.” The Compton camera consists of two detectors made from intermeshed strips of germanium, and can probe a wide range of gamma ray energies. “An extremely pure crystal of germanium can work as a radiation detector with high energy resolution,” explains Motomura. “Two sets of germanium electrodes are arranged in strips at right angles, so that the gamma-ray energy and hit positions can be detected.” To test their modified Compton camera for biological imaging, the researchers chose three common radioactive tracers—isotopes of iodine, strontium and zinc—and injected them into an eight-week-old male mouse. The mouse was anaesthetized and scanned for 12 hours, producing both 2D and 3D images. The three tracers were distinguished by identifying their different emission energy peaks, and could be represented together in images by allocating three different colors: red, green and blue. 
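The physics behind a Compton camera can be made concrete with a small calculation. This is not the RIKEN group's code, and the energies are illustrative: a gamma ray scatters in the first germanium detector, depositing energy E1, and is fully absorbed in the second, depositing E2. The Compton formula then fixes the scattering angle via cos θ = 1 − m_e c² (1/E2 − 1/(E1 + E2)), confining the source direction to a cone; intersecting many such cones from many events builds up the image.

```python
import math

ELECTRON_REST_ENERGY_KEV = 511.0  # m_e c^2 in keV

def compton_cone_angle(e1_kev, e2_kev):
    """Scattering angle (rad) of a gamma ray that deposits e1_kev in the
    scatter detector and e2_kev in the absorber, assuming full absorption.
    From the Compton formula with initial energy E = e1 + e2 and
    scattered energy E' = e2:  cos(theta) = 1 - m_e c^2 (1/E' - 1/E)."""
    e_total = e1_kev + e2_kev
    cos_theta = 1.0 - ELECTRON_REST_ENERGY_KEV * (1.0 / e2_kev - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_theta)

# Example: a 364 keV gamma ray (the main line of iodine-131) that
# deposits 100 keV in the scatterer and 264 keV in the absorber.
theta = compton_cone_angle(100.0, 264.0)
print(math.degrees(theta))
```

Because the total deposited energy identifies which isotope emitted the gamma ray, the same event stream can be sorted into separate images for each tracer — the basis of the multitracer imaging described above.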
All the tracers collected in areas where they would normally be expected: zinc tends to accumulate in the liver or in tumors, while strontium collects in the bones and iodine is taken up into the adrenal and thyroid glands. The researchers observed similar concentrations and distributions of the tracers every 3 hours over the 12-hour scanning period, implying a fast and long-lasting imaging capability. The researchers believe their results show great promise for the Compton camera in biological imaging. At present these germanium-based detectors are very expensive, but there could be strong demand in future, once the researchers improve their equipment to provide higher resolution images in a shorter time. 1. Motomura, S., Kanayama, Y., Haba, H., Watanabe, Y. & Enomoto, S. Multiple molecular simultaneous imaging in a live mouse using semiconductor Compton camera. Journal of Analytical Atomic Spectrometry 23, 1089–1092 (2008). The corresponding author for this highlight is based at the RIKEN Metallomics Imaging Research Unit
<urn:uuid:83d303c0-cadd-4205-b1bd-ccb254d371b4>
3.25
1,276
Truncated
Science & Tech.
31.718105
95,590,350
Author: Paul Jones Publisher: Createspace Independent Publishing Platform Release Date: 2017-03-08 Master Java Programming Today Fast And Easily!! This book contains proven steps and strategies on how to create programs using the Java programming language. It contains details about the programming language that every beginner should be aware of. Through this book, you should be able to learn how to create programs for various purposes. This book also contains useful information regarding the features you can find in Java as well as why Java is a good programming language to use. You will also find sample programs that you can use as guidelines when writing your own programs and creating applications. Here is a preview of what this book will offer: What Is Java? How to Install Java and Set Up the Java Environment Understand the Language Structure What Is a Java Variable and How Can We Use It? How to Set a Simple Operator in Java Apply What You Already Know with Several Assignments and Exercises Concept of Variables and Methods Input, Output, and Import Operations Using Loop Statements in Java Study of Objects and Classes Inheritance in Java File Handling Operations Don't wait any longer, get your copy today! Are you ready to Learn Java? Java is a programming language that has been in use since 1995 and is currently being used by more than 3 billion devices. It is used to build desktop, web and mobile applications and has the ability to run on multiple platforms. If you want to install Java on your computer and learn how to program using it, but don't have the necessary skills to get started, then this is the book for you. Intended for beginners, you will be taken on a step-by-step journey through all aspects of Java, including: The environmental setup Basic Syntax Basic operators Loop controls Decision making Strings and arrays And much more... 
By reading this book you will gain an understanding not only of the basic concepts of programming in general, but also of the various concepts of Java. You will be guided through the process of installing and running Java on your computer in a language that you can understand, delivered by experts who know exactly how to get the message across. Get your copy today and start to understand what Java can do for you. Author: Herbert Schildt Publisher: McGraw Hill Professional Release Date: 2014-05-09 Essential Java Programming Skills--Made Easy! Fully updated for Java Platform, Standard Edition 8 (Java SE 8), Java: A Beginner's Guide, Sixth Edition gets you started programming in Java right away. Bestselling programming author Herb Schildt begins with the basics, such as how to create, compile, and run a Java program. He then moves on to the keywords, syntax, and constructs that form the core of the Java language. This Oracle Press resource also covers some of Java's more advanced features, including multithreaded programming, generics, and Swing. Of course, new Java SE 8 features such as lambda expressions and default interface methods are described. An introduction to JavaFX, Java's newest GUI, concludes this step-by-step tutorial. Designed for Easy Learning: Key Skills & Concepts -- Chapter-opening lists of specific skills covered in the chapter Ask the Expert -- Q&A sections filled with bonus information and helpful tips Try This -- Hands-on exercises that show you how to apply your skills Self Tests -- End-of-chapter quizzes to reinforce your skills Annotated Syntax -- Example code with commentary that describes the programming techniques being illustrated The book's code examples are available FREE for download. 
Author: Yakov Fain Publisher: John Wiley & Sons Release Date: 2015-06-04 Quick and painless Java programming with expert multimedia instruction Java Programming 24-Hour Trainer, 2nd Edition is your complete beginner's guide to the Java programming language, with easy-to-follow lessons and supplemental exercises that help you get up and running quickly. Step-by-step instruction walks you through the basics of object-oriented programming, syntax, interfaces, and more, before building upon your skills to develop games, web apps, networks, and automations. This second edition has been updated to align with Java SE 8 and Java EE 7, and includes new information on GUI basics, lambda expressions, streaming API, WebSockets, and Gradle. Even if you have no programming experience at all, the more than six hours of Java programming screencasts will demonstrate major concepts and procedures in a way that facilitates learning and promotes a better understanding of the development process. This is your quick and painless guide to mastering Java, whether you're starting from scratch or just looking to expand your skill set. Master the building blocks that go into any Java project Make writing code easier with the Eclipse tools Learn to connect Java applications to databases Design and build graphical user interfaces and web applications Learn to develop GUIs with JavaFX If you want to start programming quickly, Java Programming 24-Hour Trainer, 2nd Edition is your ideal solution. Learn Java Programming, The Most Popular Object Oriented Programming Language, Fast, Easily And In A Fun Way, Starting From The Basics And Become An Expert In No Time! This Book Is For You... If You Are New To Java Programming And Want To Start From A Solid Foundation! 
'Java Programming: A Complete Guide For Beginners To Master And Become An Expert In Java Programming Language' is a complete guide for beginners, covering the basic concepts and ideas, with examples and explanations that are simple to understand, follow, and learn from. Learn Java The Easy And Smart Way Java is one of the easiest and most powerful programming languages to master, considering the fact that it is designed with simplicity in mind and can be used to develop almost all kinds of web applications, including mobile games, etc. This makes Java programming very interactive, robust and popular among computer programmers. What Are You Waiting For? Get Your Copy Today! Author: Herbert Schildt Publisher: McGraw Hill Professional Release Date: 2014-04-08 The Definitive Java Programming Guide Fully updated for Java SE 8, Java: The Complete Reference, Ninth Edition explains how to develop, compile, debug, and run Java programs. Bestselling programming author Herb Schildt covers the entire Java language, including its syntax, keywords, and fundamental programming principles, as well as significant portions of the Java API library. JavaBeans, servlets, applets, and Swing are examined and real-world examples demonstrate Java in action. New Java SE 8 features such as lambda expressions, the stream library, and the default interface method are discussed in detail. This Oracle Press resource also offers a solid introduction to JavaFX. 
Coverage includes: Data types, variables, arrays, and operators Control statements Classes, objects, and methods Method overloading and overriding Inheritance Interfaces and packages Exception handling Multithreaded programming Enumerations, autoboxing, and annotations The I/O classes Generics Lambda expressions String handling The Collections Framework Networking Event handling AWT and Swing The Concurrent API The Stream API Regular expressions JavaFX JavaBeans Applets and servlets Much, much more Author: Herbert Schildt Publisher: McGraw Hill Professional Release Date: 2011-08-16 Essential Skills--Made Easy! Learn the fundamentals of Java programming in no time from bestselling programming author Herb Schildt. Fully updated to cover Java Platform, Standard Edition 7 (Java SE 7), Java: A Beginner's Guide, Fifth Edition starts with the basics, such as how to compile and run a Java program, and then discusses the keywords, syntax, and constructs that form the core of the Java language. You'll also find coverage of some of Java's most advanced features, including multithreaded programming and generics. An introduction to Swing concludes the book. Get started programming in Java right away with help from this fast-paced tutorial. Designed for Easy Learning: Key Skills & Concepts--Chapter-opening lists of specific skills covered in the chapter Ask the Expert--Q&A sections filled with bonus information and helpful tips Try This--Hands-on exercises that show you how to apply your skills Self Tests--End-of-chapter questions that test your understanding Annotated Syntax--Example code with commentary that describes the programming techniques being illustrated Learn Java programming today and begin your path towards Java programming mastery! For a limited time only, get to own this Amazon top seller for just $15.38! Regularly priced at $20.99. In this Definitive Java Guide, you're about to discover how to... 
How to program code in Java through learning the core essentials that every Java programmer must know. Learning Java is going to benefit you because it is going to help you in writing programs for the Web as well as being a stepping stone for learning other programming languages. Here is a Preview of What You'll Learn... Essentials of Java programming. Read it, then pick up the language and start applying the concepts to learn better Major facets of Java programming Several mechanics of Java programming: variables, control flow, strings, arrays - and why learning these core principles is important to Java programming success ... And much, much more! Added Benefits of owning this book: Get a better understanding of the Java programming language Learn the basic essentials of Java in order to gain the confidence to tackle more advanced topics Several mechanics of Java programming: variables, control flow, strings, arrays - and why learning these core principles is important to Java programming success By implementing the lessons in this book, not only would you learn one of today's popular computer languages, but it will serve as your guide in accomplishing all your Java goals - whether as a fun hobby or as a starting point into a successful and long term programming career. Take action today and get this book now to reach your Java programming goals. Author: Herbert Schildt Publisher: McGraw Hill Professional Release Date: 2017-10-13 Up-to-Date, Essential Java Programming Skills—Made Easy! Fully updated for Java Platform, Standard Edition 9 (Java SE 9), Java: A Beginner’s Guide, Seventh Edition, gets you started programming in Java right away. Bestselling programming author Herb Schildt begins with the basics, such as how to create, compile, and run a Java program. He then moves on to the keywords, syntax, and constructs that form the core of the Java language. 
The book also covers some of Java’s more advanced features, including multithreaded programming, generics, lambda expressions, Swing, and JavaFX. This practical Oracle Press guide features details on Java SE 9’s innovative new module system, and, as an added bonus, it includes an introduction to JShell, Java’s new interactive programming tool. Designed for Easy Learning: • Key Skills and Concepts—Chapter-opening lists of specific skills covered in the chapter • Ask the Expert—Q&A sections filled with bonus information and helpful tips • Try This—Hands-on exercises that show you how to apply your skills • Self Tests—End-of-chapter quizzes to reinforce your skills • Annotated Syntax—Example code with commentary that describes the programming techniques being illustrated Author: John P. Flynt Release Date: 2006-06-01 Get ready to learn the principles of Java programming through simple game creation! No previous programming experience is required. Using the skills that you develop throughout the book, you will be prepared to work with any technology that is built upon core Java (such as J2EE, J2ME, or open source technologies such as Struts, etc). You will also learn basic programming fundamentals that can apply to many other programming languages. Code examples have been updated from the first edition and new chapters covering GUI programming and Java packages have been added to this edition. Java. Sale price. You will save 66% with this offer. Please hurry up! The Best Guide to Master Java Programming Fast (Java for Beginners, Java for Dummies, how to program, java app, java programming) This book is a quick guide for programming the popular language, Java. James Gosling started the programming language project that became Java in June 1991, for use in a set-top box project he had. 
The new language was named 'Oak', in honor of an oak tree that stood outside Gosling's office; it later went by the name 'Green' and was finally renamed 'Java'. Sun's first release to the public was Java 1.0 in 1995. The motto Write Once, Run Anywhere (WORA), providing no-cost run-times on popular platforms, became the reputation of Java. On November 13, 2006, Sun released the bulk of Java as open source and free software under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the open-sourcing process, releasing all of Java's core code open source and free. The sole exception to this was a small portion of the software that Sun simply did not own. The following chapters will cover basic concepts of Java and show proper syntax for applying these concepts within a Java program. Here is a preview of what you'll learn: Setting Up a Java Environment Environment and Syntax Identifiers, Modifiers and Variables Basic Operators Additional Operators and Loops If and Switch Statements Methods, Class, Objects and Finally Java programs assist in making websites and pages more dynamic. As programs that run within the structure of a webpage, it is important to understand these basic Java concepts in order to properly utilize the program and its unique attributes. Download your copy of "Java" by scrolling up and clicking the "Buy Now With 1-Click" button. Tags: Java, Java Programming, Learn Java, java for dummies, java app, computer programming, computer tricks, step by step, programming for beginners, data analysis, beginner's guide, crash course, database programming, java for dummies, coding, java basics, basic programming, crash course, programming principles, programming computer, ultimate guide, programming for beginners, software development, programming software, software programs, how to program, computer language, computer basics, computing essentials, computer guide, computers books, how to program. 
Java: Learn Java Programming ***Available at $20 for a LIMITED TIME ONLY (Usual Price: $30)*** We highly recommend buying our paperback version for the better reading experience of this Java book. This new book by best-selling author Mr Kotiyana gets you started programming in Java right away and begins with the Java basics, such as how to create, compile, and run a Java program. He then moves on to the keywords, syntax, and constructs that form the core of the Java language. What this book offers... Are you looking for a deeper understanding of Java programming so that you can write code that is clearer, more correct, more robust, and more reusable? Look no further! This Java programming book was written as an answer for anyone to pick up the Java programming language and be productive. How is this book different? You will be able to start from scratch without having any previous exposure to Java programming. By the end of this book, you will have the skills to be a capable programmer, or at least know what is involved with how to read and write Java code. Afterward you should be armed with the knowledge required to feel confident in learning more. You should have general computer skills before you get started. After this you'll know what it takes to at least look at a Java program without your head spinning. Java is a popular general purpose programming language and computing platform. It is fast, reliable, and secure. According to Oracle, the company that owns Java, Java runs on 3 billion devices worldwide. Considering the number of Java developers, devices running Java, and companies adopting it, it's safe to say that Java will be around for many years to come. Like any programming language, the Java language has its own structure, syntax rules, and programming paradigm. The Java language's programming paradigm is based on the concept of Object Oriented Programming, which the language's features support. 
What You Will Learn in This Book:
CHAPTER 1) Introduction
CHAPTER 2) Getting Started & Setting Up the Programming Environment
CHAPTER 3) Basic Java Programming Terms
CHAPTER 4) Basics of a Java Program
CHAPTER 5) Variables, Data Types and Keywords
CHAPTER 6) Functions and Operators
CHAPTER 7) Controlling Execution, Arrays and Loops
CHAPTER 8) Object Oriented Programming
CHAPTER 9) Exception Handling
CHAPTER 10) Algorithms and the Big O Notation
CHAPTER 11) Data Structures in Java
CHAPTER 12) Network Programming in Java
CHAPTER 13) The Complete Software Developer's Career Guide
Click the BUY button and download the book now to start learning Java. Learn it fast and learn it well. Tags: Java, Java book, Java Programming book, Java for Beginners, Java programming for beginners, Java for Dummies, Java Beginners Guide, Java the Complete Reference, java apps, hacking, hacking exposed, java app, computer programming, computer tricks, step by step, programming for beginners, data analysis, beginner's guide, crash course, database programming, coding, java basics, basic programming, programming principles, programming computer, ultimate guide, software development, programming software, software programs, how to program, computer language, computer basics, computing essentials, computer guide, computers books.
Author: James Patterson Release Date: 2016-01-12 This book will help you learn the basics of Java programming. It offers a step-by-step approach filled with many examples and screenshots of actual programming code. This book is written for people who don't have any background in programming. The book begins with the basics, such as how to download and install the Java software development kit and NetBeans, which will help you to easily learn the language. It will then discuss the features, keywords, and formats that build the core of Java as a programming language.
After reading this book, you will have mid-level skills and a basic understanding of Java programming. Bear in mind that reading this book is just the start of your journey towards learning Java. This widely used programming language is more than the elements that define it; it also involves comprehensive libraries and tools that can help you in developing your own programs. Mastering these areas will help you become an expert in Java programming. After reading this book, you will have the fundamental knowledge, skills, and interest to pursue these areas. Do You Want To Start Programming Quickly? Are You Tired of Your Java Code Turning Out Wrong? Want to Become A Programming Master? If you have always wanted to know how to program, then this book is your ideal solution! The book, "Java: Java For Beginners Guide To Learn Java And Java Programming", contains proven steps and strategies on how to learn basic programming in Java, including lesson summaries for easy reference and lessons at the end of each chapter to help you compound your new knowledge. Java is a simple language, object-oriented and incredibly easy to learn, provided you put your mind to it. Once you have learned the fundamental concepts and how to write the code, you will soon be programming like a pro! This book aims to teach you the basics of the Java language in the simplest way possible. Unlike other resources, this book will not feed you too many technicalities that might confuse you along the way. Each discussion is written in simple words, and all exercises in this book were carefully chosen to be simple cases in order to make your Java practice easier. By reading this book you will gain an understanding of the basic concepts of Java programming, including:
Conditional Statements
Looping and Iteration
Arrays
Functions and Methods
Classes and Objects
Solutions to Exercises and Many More...
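Several of the concepts listed above can be tied together in one tiny sketch showing a conditional, a loop over an array, and a method (the class and method names are invented for illustration):

```java
public class BasicsDemo {
    // Sum only the even values in an array, using a for-each loop
    // and an if-statement (conditional).
    static int sumEvens(int[] values) {
        int total = 0;
        for (int v : values) {
            if (v % 2 == 0) {
                total += v;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        int[] nums = {1, 2, 3, 4, 5, 6};
        System.out.println(sumEvens(nums)); // prints 12
    }
}
```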
This book brings you concise, straight-to-the-point, easy-to-follow code examples so you can begin coding in 24 hours or less. Invest in yourself, learn the Java basics, practice Java programming, and you will be a programmer in no time. Begin your journey TODAY, No Prior Programming Experience Is Required! Don't wait! Download "Java: Java For Beginners Guide To Learn Java And Java Programming" today and get started with your new programming career! Author: Barry A. Burd Publisher: John Wiley & Sons Release Date: 2017-06-28 Learn to speak the Java language like the pros. Are you new to programming and have decided that Java is your language of choice? Are you a wannabe programmer looking to learn the hottest lingo around? Look no further! Beginning Programming with Java For Dummies, 5th Edition is the easy-to-follow guide you'll want to keep in your back pocket as you work your way toward Java mastery! In plain English, it quickly and easily shows you what goes into creating a program, how to put the pieces together, ways to deal with standard programming challenges, and so much more. Whether you're just tooling around or embarking on a career, this is the ideal resource you'll turn to again and again as you perfect your understanding of the nuances of this popular programming language. Packed with tons of step-by-step instruction, this is the only guide you need to start programming with Java like a pro. Updated for Java 9, learn the language with samples and the Java toolkit. Familiarize yourself with decisions, conditions, statements, and information overload. Differentiate between loops and arrays, objects and classes, methods, and variables. Find links to additional resources. Once you discover the joys of Java programming, you might just find you're hooked. Sound like fun? Here's the place to start.
Over the weekend of 9-10 July 2005 a team of UK and US scientists, led by Dr. Dick Willingale of the University of Leicester, used NASA's Swift satellite to observe the collision of NASA's Deep Impact spacecraft with comet Tempel 1. Reporting today (Tuesday) at the UK 2006 National Astronomy Meeting in Leicester, Dr. Willingale revealed that the Swift observations show that the comet grew brighter and brighter in X-ray light after the impact, with the X-ray outburst lasting a total of 12 days. "The Swift observations reveal that far more water was liberated and over a longer period than previously claimed," said Dick Willingale.
What happens when we heat the atomic lattice of a magnet all of a sudden? (17.07.2018 | Forschungsverbund Berlin)
Subaru Telescope helps pinpoint origin of ultra-high energy neutrino (16.07.2018 | National Institutes of Natural Sciences)
For the first time ever, scientists have determined the cosmic origin of the highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength...
For the first time a team of researchers has discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications.
Whirlpools are an everyday experience in a bath tub: when the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar...
Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy...
Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Slide Show: The World's 10 Largest Renewable Energy Projects From wind and wave to sun and trash, a look at how existing power plants are providing electricity generated from renewable sources on a massive scale Bonus: World's Largest Landfill Gas Recuperation Plant Puente Hills in Whittier, Calif. Producing power from the gas that seeps out of landfills is a better alternative than simply flaring it. (Though it's debatable whether or not landfill gas constitutes a renewable resource, because yields of combustible gas from landfills decline between 2 and 15 percent per year after a landfill is capped and no more garbage is being added, according to Jeff Pierce, vice president of power plant development company SCS Energy). Landfill gas is about half methane and half carbon dioxide and also contains water vapor, which makes it more difficult to handle than conventional natural gas. The world's largest landfill gas plant sits atop the Puente Hills landfill—the largest in the U.S.—which accepts trash from Los Angeles County. Pierce says that because this active landfill is still growing, production at the 20-year-old Puente Hills landfill gas plant has not yet peaked, and averages about 50 megawatts. Another 50-megawatt landfill gas plant sits atop another gigantic dump in Incheon, South Korea. Currently there are no plans for units larger than either the Puente Hills or Incheon facilities. Jeff Pierce, SCS Energy 10. World's Largest Hydroelectric Dam China's Three Gorges Dam On December 18, 2007, the electricity production capacity of China's Three Gorges Dam reached 14.1 gigawatts, surpassing for the first time the 14-gigawatt generating capacity of the Itaipu Dam on the border of Brazil and Paraguay, making it the largest and most productive dam in the world. By 2011, it will produce 18 gigawatts of electricity, or as much as 18 large nuclear power plants. Just one of the dam's main generators can produce 700 megawatts of electricity.
Its construction cost $26 billion. A still-larger dam, the Grand Inga Dam, has been proposed for completion between 2020 and 2025 in the Democratic Republic of Congo on the Congo River: Its output could reach 39 gigawatts of power. Andrew Hitchcock 9. World's Largest Wave Power Plant: Aguçadoura Wave Farm near Póvoa de Varzim, Portugal The world's first and only commercial wave power plant resembles a 500-foot- (150-meter-) long, 11-foot- (3.5-meter-) wide snake that floats, half-submerged, on the sea surface. Each unit is anchored perpendicular to the beach, and has four segments connected in a line by hinges that house independent hydraulic power plants. As each segment surges up or down with the crest of an oncoming wave, its hydraulic power plant pumps a biodegradable hydraulic fluid through a turbine, which produces up to 0.75 megawatt of electricity per unit. Three of these, constructed at a cost of $13 million, are currently producing a total of 2.25 megawatts at peak off the coast of Portugal, and there are plans to eventually expand the wave farm to 21 megawatts. 8. World's Largest Dry Biomass-Fired Power Plant Oy Alholmens Kraft in Pietarsaari, Finland Like most biomass-fired power plants, the Oy Alholmens Kraft power plant relies on locally sourced bark, branches and peat to fuel its enormous boiler—the largest of its kind in the world at 550 megawatts of heat. Burning all that generates a peak output of 240 megawatts of electricity. (The plant also generates 160 megawatts of steam, which is used directly by nearby industry and for district heating.) Both the peat and the wood by-products burned by this plant are harvested sustainably. In the case of the wood, trees equal in amount to those felled are planted every year and are later harvested at maturity. Peat is also continuously generated by decaying plants in wetlands, and although it is produced slowly, it can be harvested sustainably as long as it's carefully managed.
"We need more than 120 trucks [of biomass] per day," says Stig Nickul, managing director of the plant. "One truck is enough for six to seven minutes." By 2010, Wales will be able to claim a 350-megawatt biomass-fired power plant, but its waste wood feedstock will have to be imported from Canada, making it of questionable renewable value. Alholmens Kraft Oy Ab 7. World's Most Productive Geothermal Field The Geysers in Sonoma and Lake Counties, Calif. Despite having declined from a peak production of 2,000 megawatts in the mid-1980s to the present value of about 1,000 megawatts, The Geysers remains the most productive geothermal field in the world, providing nearly 60 percent of the electricity used in California's North Coast region, which stretches from the Golden Gate Bridge to the Oregon border. (The decline is due to depletion of the aquifer from which the plants draw their steam; newer plant designs re-inject the water in order to eliminate this problem.) The first commercial geothermal power plant in the U.S. was built at The Geysers in 1960; it produced 11 megawatts of power. Individual plants at this location now average about 50 megawatts, but are dwarfed by the largest geothermal power plant currently proposed, which would be built in Sarulla, North Sumatra, Indonesia, by geothermal technology company Ormat and its partners, producing 330 megawatts of electricity at peak. Calpine 6. World's Largest Photovoltaic Power Plant Olmedilla Photovoltaic Park in Olmedilla de Alarcón, Spain The Olmedilla Photovoltaic (PV) Park uses 162,000 flat solar photovoltaic panels to deliver 60 megawatts of electricity on a sunny day. The entire plant was completed in 15 months at a cost of about $530 million at current exchange rates. Olmedilla was built with conventional solar panels, which are made with silicon and tend to be heavy and expensive.
So-called "thin-film" solar panels, although less efficient per square meter, tend to be much cheaper to produce, and they are the technology being tapped to realize the world's largest proposed PV plant, the Rancho Cielo Solar Farm in Belen, N. Mex., which is expected to cost $840 million, cover an area of 700 acres (285 hectares), and produce 600 megawatts of power. Nobesol 5. World's Largest Solar Thermal Plant Solar Energy Generating Systems in Southern California Solar Energy Generating Systems (SEGS) has been the world record holder for largest solar thermal project since its completion in 1990. SEGS consists of nine separate solar thermal power plants spread across the Mojave Desert, which collectively can produce 354 megawatts of power. They were designed, built and operated by Luz International, which subsequently went bankrupt when the tax breaks that made the plant profitable evaporated. The chairman of Luz is back, however, heading up BrightSource, a new solar thermal energy company that has just signed the two largest contracts for solar thermal electricity in the world. These contracts will be serviced by 14 solar thermal plants with a total output of 2,600 megawatts, to be built between now and 2017. These facilities differ substantially from SEGS, which uses long troughs to collect the sun's heat; they will consist of thousands of mirrors that will reflect the sun's energy onto a central heating tower. "The overarching theme of why we moved from trough to tower is that it's much more efficient," says Keely Wachs, director of communications at BrightSource, who notes that the cost of the tower design is also significantly lower, making it cost-competitive with other sources of energy. Gregory Kolb, Sandia National Laboratories 4. World's Largest Tidal Power Turbine SeaGen Turbine in Strangford Lough, Ireland Like wind turbines, but powered by the flow of water instead of the flow of air, tidal power turbines transform tides or deep ocean currents into electricity. The 1.2-megawatt SeaGen tidal power turbine, which consists of a matched pair of turbines, each up to 66 feet (20 meters) in diameter, is currently the only commercial-scale tidal power turbine in the world. This system costs about $5 million per installed megawatt of capacity, or about 30 percent more than offshore wind power, according to the manufacturer. The blades have the ability to turn 180 degrees in order to spin in either incoming or outgoing tidal currents. The turbines can be raised for ease of maintenance. By 2015, the SeaGen turbine will be surpassed by a massive tidal power turbine project in the Wando Hoenggan Waterway off the coast of South Korea, to be built jointly by Lunar Energy and Korean Midland Power Company for $820 million. Generating 300 megawatts of capacity, the 300 one-megawatt, 60-foot- (18-meter-) high turbines will be anchored to the seabed by their own weight. Sea Generation 3. World's Largest Tidal Power Barrage Rance Tidal Barrage in Bretagne, France Many of the world's largest renewable projects have been around for quite some time: Completed in 1967 at a cost of approximately $134 million, the Rance tidal barrage (dam) is the world's first, and remains the world's largest, power plant that produces electricity from tides. The Rance barrage works by blocking the entrance to the estuary of the Rance River, where the average difference between low and high tides is 26 feet (eight meters). The 24 10-megawatt bulb turbines that sit in the barrage beneath the surface can be turned by the water as it flows both into and out of the estuary, allowing the dam to produce electricity almost continuously.
In the future, the U.K. has proposed a tidal power barrage across the Severn Estuary that separates England and Wales. Whereas a number of different barrages have been proposed, the largest would be a 7.4-mile- (12-kilometer-) long dam that could produce 8.6 gigawatts of energy, or 5 percent of the electricity currently used in the U.K. 2. World's Biggest Offshore Wind Farm Lynn and Inner Dowsing Wind Farm Near Skegness, Lincolnshire, England Visible from the beach of Skegness, England, the 54 3.6-megawatt turbines of the Lynn and Inner Dowsing offshore wind farm collectively can produce up to 194 megawatts of electricity at peak. Each turbine is 353 feet (107 meters) in diameter and turns on a hub that is 265 feet (80 meters) above sea level. Every turbine sits on a pylon that was driven into the shallow seabed by the Resolution, a vessel purposely built for the installation of offshore wind farms. (It extends six legs into the seabed to stabilize itself before installation of the pylon on which each turbine sits.) The total cost of the project was nearly $500 million. By the end of 2009, Lynn and Inner Dowsing will have been superseded by the 209-megawatt Horns Rev 2 wind farm sited in the North Sea between 19 and 25 miles (30 and 40 kilometers) west of the westernmost tip of Denmark, which will cost about $670 million. And the 1,000-megawatt London Array in the outer Thames Estuary is projected to be completed in 2012. Centrica Energy 1. World's Biggest On-Shore Wind Farm Horse Hollow Wind Energy Center in Taylor and Nolan Counties, Tex. About 100 miles (160 kilometers) west of Dallas, 47,000 acres (19,000 hectares) of Texas cedar and scrub oak have been given over to the 421 wind turbines that comprise the Horse Hollow Wind Energy Center. The 291 1.5-megawatt turbines built by GE and the 130 2.3-megawatt wind turbines built by Siemens together deliver 735 megawatts of peak power. 
The farm was completed in 2006 and is operated by NextEra Energy, a subsidiary of Florida Power & Light, which operates wind facilities that deliver over four gigawatts of power across the U.S. Horse Hollow won't retain the crown for long, however: By the middle of 2009, E.ON Climate and Renewables will complete the fourth phase of the Roscoe Wind Farm in Texas, which will deliver 781.5 megawatts from 627 turbines. Other giant wind farms that have been announced include the Shepherd's Flat Wind Farm in Oregon (800 megawatts, 303 wind turbines) and a wind farm in Markbygden, Sweden (four gigawatts, 1,101 wind turbines). NextEra Energy Resources
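The peak figures quoted for these wind farms are simple nameplate sums: turbine count times per-turbine rating, added across fleets. A small sketch (class and method names invented for illustration) reproduces the Horse Hollow total from its two turbine fleets:

```java
public class FarmCapacity {
    // Nameplate capacity in megawatts: count x per-turbine rating,
    // summed over two fleets of turbines.
    static double totalMegawatts(int countA, double ratingA,
                                 int countB, double ratingB) {
        return countA * ratingA + countB * ratingB;
    }

    public static void main(String[] args) {
        // Horse Hollow: 291 GE 1.5-MW turbines plus 130 Siemens 2.3-MW turbines.
        double mw = totalMegawatts(291, 1.5, 130, 2.3);
        System.out.printf("%.1f MW%n", mw); // about 735.5 MW, quoted as 735 MW
    }
}
```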
ECHINODERMATA : OPHIURIDA : Ophiocomidae (STARFISH, SEA URCHINS, ETC.)
Description: A large brittle star which lives in coarse gravel and extends a group of long arms up into the water. The arms are banded with dark and pale brown and have long tube feet. The arm spines are flattened and arranged in groups of 11-12 at each joint. There are one long and one small tentacle scale at the edge of each tentacle pore; the long ones tend to cross on the underside of the arm like a pair of swords. Disc 14 mm; arms 10x disc diameter.
Habitat: This species has strict habitat requirements, living only in coarse gravel. It is inconspicuous despite its size, and quickly retracts into the gravel on disturbance. May be found with Echinocardium flavescens.
Distribution: A southern species which occurs on the south coast of England, the west coast of Ireland and in SW Scotland as far north as Oban. Its habitat is rarely sampled adequately and it could be much less rare than is supposed. Further distribution south to the Mediterranean.
Similar Species: Ophiopsila aranea is considered to be difficult to separate from this species but has fewer arm spines per joint (6-8).
Key Identification Features:
Distribution Map from NBN: Interactive map : National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record : World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Ophiopsila annulosa (M Sars, 1859). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZB2470 Accessed on 2018-07-18
Copyright © National Museums of Northern Ireland, 2002-2015
A Canadian company is planning to build a prototype fusion demonstrator that would be a fraction of the cost of a standard fusion reactor, as Hamish Johnston reports For most physicists, there are two possible paths to fusion energy. The first is magnetic-confinement fusion, which involves using magnetic fields to trap a plasma that is then heated until it is hot enough for hydrogen ions to fuse – about 150 million kelvin is what is typically needed. The other path is inertial confinement, whereby a dense target of hydrogen is compressed further by powerful lasers to initiate fusion. While these approaches are technically very different, they have one thing in common – they are being developed in huge and expensive projects. Magnetic confinement is being led by the ITER facility that is currently being built in France at an estimated cost of €16bn, while the National Ignition Facility (NIF) in California is pioneering inertial confinement through the use of 192 giant lasers to blast a pea-sized target. There are some physicists, however, who believe that there is a middle ground towards practical fusion – one that combines magnetic and inertial confinement yet can be achieved at a fraction of the cost of either. One such person is the Canadian physicist Michel Laberge, who co-founded the company General Fusion to commercialize a fusion technique called “magnetized target fusion”, or MTF. In 1990 Laberge received a PhD in plasma physics from the University of British Columbia, where he researched laser–plasma interactions. He then completed a postdoc in the same field at the Ecole Polytechnique in Paris and later at the National Research Council of Canada, where he used femtosecond lasers to study fast chemistry. This was followed by a nine-year spell developing technology for colour printing at the Vancouver-based firm Creo. 
In 2002 Laberge and Creo colleague Doug Richardson left the company to create General Fusion, of which Laberge is now president and chief technology officer, and Richardson is the chief executive. Based in Burnaby, a suburb of Vancouver, the firm wants to build a $40m prototype reactor based on the principles of MTF. So far, the firm has raised more than $33m from a range of investors, including Amazon founder Jeff Bezos, the oil company Cenovus Energy and the Canadian government. Hotter than the Sun Like magnetic confinement, MTF begins with a plasma that is held in place by a magnetic field. This plasma would, however, be 1000 times denser than might be found in a reactor such as ITER – and therefore much less stable. But as long as the plasma sticks around for long enough that it can be compressed, it should be possible to achieve fusion. Several schemes have been proposed for how to do this squeezing. At one end of the spectrum, lasers like those used at NIF – but not necessarily as powerful – could be used. The Shiva Star experiment at the Air Force Research Lab in Albuquerque, New Mexico, for example, uses the electrical energy stored in a huge bank of capacitors to change a magnetic field very rapidly, which squeezes a metal tube on the plasma. Other schemes, including that of General Fusion, involve the mechanical compression of the plasma using pistons. A cycle in the proposed reactor would begin with the creation of a plasma of tritium and deuterium. The plasma is formed in an injector, which wraps it in a magnetic field creating something akin to a swirling smoke ring. The plasma is then transferred along a magnetic vortex to the centre of a rotating sphere of molten lead and lithium. The sphere is surrounded by about 200 pneumatic pistons, which will suddenly all push in on the sphere at exactly the same time. 
This, claims General Fusion, will create an acoustic wave that will travel through the molten metal and compress the plasma so much that it will become hot enough and dense enough for the deuterium and tritium nuclei to fuse together. The goal for the initial plasma is a temperature of 10^6 K, at a density of 10^17 particles/cm^3. By contrast, an ITER plasma is expected to be about 150 million kelvin and have a density of about 10^14 particles/cm^3. The large amount of heat produced by the fusion process would be absorbed by the molten metal and then recovered by passing the metal through a heat exchanger to generate steam that could in turn be used to make electricity. Some of the neutrons created during fusion will be absorbed by the lithium in the molten metal, which will create more tritium. This tritium will then be removed from the molten metal and used in future compression cycles. The entire process would be repeated with the injection of the next plasma. General Fusion's current design for the reactor predicts that about 100 MJ of electrical energy per cycle could be created. Running the system at one cycle per second would generate power at 100 MW – which is about one-fifth the capacity of a small commercial nuclear power plant. The firm claims that a reactor could operate at this power for a year by consuming only 18 kg of deuterium and 60 kg of lithium. According to Laberge, the firm has to overcome two key challenges before the reactor can be a reality. The first is being able to create a plasma that will endure long enough in the reactor that it can be compressed. Currently, the firm can create a plasma that hangs around for about 50 µs, but Laberge says this must be boosted to at least 100 µs. The plasma would be injected from two identical sources on opposing sides of the sphere. Each source would produce a doughnut-shaped toroid of plasma at a temperature of 10^6 K that would be "blown" to the centre of the sphere much like a smoke ring.
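The 100 MW figure quoted above follows directly from the design target of 100 MJ per cycle at one cycle per second, since 1 MJ released per second is 1 MW of average power. A back-of-envelope sketch (class and method names invented for illustration):

```java
public class PulsedPower {
    // Average power in megawatts for a pulsed reactor:
    // (energy per cycle in MJ) x (cycles per second).
    static double averagePowerMW(double energyPerCycleMJ, double cyclesPerSecond) {
        return energyPerCycleMJ * cyclesPerSecond;
    }

    public static void main(String[] args) {
        // General Fusion's stated design point: 100 MJ per cycle at 1 Hz.
        System.out.println(averagePowerMW(100, 1) + " MW"); // 100.0 MW
    }
}
```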
The two doughnuts would collide and combine at the centre of the sphere prior to compression. So far, the firm has built one such plasma injector. The other big challenge, says Laberge, is creating a system of pistons, all of which must strike the sphere simultaneously to within about 10 µs to ensure that the plasma is compressed evenly. If it is unevenly squeezed, some of the plasma could leak out, preventing the target from reaching the correct density and temperature for fusion to occur. The firm currently controls its pistons using a feedback mechanism that measures the positions of the pistons and uses piezoelectric brakes to keep them all moving at the same pace. As a result, General Fusion can control the motion of a single piston to about 10 µs, and Laberge is confident that this figure can be reduced further. A working fusion reactor would have to be built at a new location because the ceiling at the firm’s Burnaby premises is too low. Laberge says that the company has its eye on a disused transformer-testing facility owned by BC Hydro. He does not think it will be difficult for the firm to get a licence to build the prototype facility, which would take about three years to construct. While the reactor could be run in a proof-of-principle mode without tritium, Laberge says that the facility would require a very small amount of tritium – less than that used in a hospital – to achieve fusion. If the firm can solve these problems, Laberge and colleagues plan to embark on a fundraising campaign to raise $40m to build the prototype. Just a stunt? So what do plasma-fusion experts think of General Fusion’s plans for MTF? “I believe that MTF has potential as a viable path to fusion energy,” says Uri Shumlak of the University of Washington, who is familiar with the company’s plans. However, he points out that the science is not as fully developed as magnetic- or inertial-confinement fusion. 
“MTF represents a higher risk but lower cost path that is worth pursuing in my opinion,” he adds. Plasma physicist Michael Brown of Swarthmore College in Pennsylvania agrees that MTF is a realistic path. But he adds that while the firm’s plan to crush the plasma with pistons is in some ways simpler than the NIF approach of using lasers, it is also a big technological challenge because such a carefully timed implosion of liquid metal has never been achieved before. He also warns that forming a target plasma in a liquid-metal vortex is not an easy task. “There are a lot of plasma physics and piston-technology issues for them to work out,” he says. Brown also admits that MTF is “viewed as a stunt” by the majority of the fusion community. He adds that a one-time implosion might generate a burst of neutrons from fusion but a reactor is a different story. “Many fusion scientists view the magnetic-confinement approach, as embodied by ITER, as the next big step to a steady-state fusion reactor,” he says. David Ward, who works on magnetic-confinement fusion at the Culham Centre for Fusion Energy in the UK, says that while General Fusion’s approach seems plausible and that he will be following the firm’s results with interest, he will not be jumping ship from magnetic confinement just yet. “In the past, when people have proposed new approaches as a shortcut to fusion, these have always failed and the lesson we have learned is that we just have to do it properly.”
Engineering of Reactive Species Detoxification Pathways for Increasing Stress Tolerance in Plants

The productivity of plants is greatly affected by environmental stresses such as drought, high or low temperature, high salinity and UV-B irradiation; there is therefore a continuous need for the genetic improvement of stress tolerance in agriculture. Abiotic stresses can disturb the homeostasis between assimilation and oxidative reactions, negatively influencing the photosynthetic yield of higher plants and resulting in oxidative damage. Reactive compounds produced under such conditions significantly increase the cytotoxic effect of environmental stresses. Besides the reactive oxygen species (ROS) generated, reactive aldehydes (such as 4-hydroxy-nonenal and methylglyoxal) can further increase cellular damage, mainly because they penetrate biological membranes more readily and react rapidly with biomolecules such as proteins and DNA. Improving the intracellular scavenging capacity for such toxic compounds has been shown to increase stress tolerance. Plant aldo-keto reductases (AKRs) are important enzymes for this function, since they act on a wide range of reactive aldehydes generated by lipid peroxidation and glycolysis. AKRs can detoxify the lipid peroxidation products (e.g., 4-hydroxynon-2-enal) and glycolysis-derived reactive aldehydes (e.g., methylglyoxal) that contribute significantly to the cellular damage caused by environmental stresses. Moreover, specific members of this NADPH-dependent aldo-keto reductase superfamily are able to catalyze the production of sugar alcohols (such as sorbitol or mannitol). The products of these reactions can act as radical scavengers even at low concentration, and their accumulation as osmolytes can improve osmotic adaptation.

Keywords: aldose reductase; reactive aldehyde; photosynthetic yield; rice cell suspension; increased stress tolerance
An article published in the journal “Nature” describes a high-resolution observation of a pulsar cataloged as PSR B1957+20. A team of astronomers used data collected with the Arecibo radio telescope, obtaining one of the best results in the history of astronomy thanks to the presence of a trail of plasma left by a brown dwarf, the pulsar’s companion in a binary system. According to the astronomers, the lensing effect this trail generates suggests that similar plasma lenses could also be the cause of fast radio bursts. Approximately 6,500 light years from Earth, the pulsar PSR B1957+20 is one of the most massive known and rotates more than 600 times per second. A pulsar is a rapidly rotating neutron star, one of the possible remnants of a star that exploded in a supernova. This pulsar’s companion is a brown dwarf, an object at the boundary between star and planet, with a diameter about a third of the Sun’s, orbiting about 2 million kilometers (1.2 million miles) from the pulsar. It has an orbital period around the pulsar of 9 hours and is tidally locked to it, which means that it always shows the same face to its companion, like the Moon to the Earth. The brown dwarf has a surface temperature estimated at around 6,000° Celsius (10,800° Fahrenheit) on the side facing its companion, due to the strong radiation that hits it. As a consequence, the gas that forms the brown dwarf becomes plasma that expands considerably, forming a sort of trail similar to a comet’s tail. That gas is slowly being lost to space, which means that one day it will run out. That is why pulsars that steal gas from their companions are nicknamed black widows. A curious but very useful characteristic of the plasma emitted by the brown dwarf is that it acts like a magnifying glass, allowing astronomers to see images of the pulsar magnified 70-80 times. The image (courtesy Dr. Mark A. Garlick, Dunlap Institute for Astronomy & Astrophysics, University of Toronto.
All rights reserved) shows an artistic representation of that lens, with the pulsar in the background seen through the plasma cloud surrounding the brown dwarf in the foreground. This situation is useful for studying the pulsar PSR B1957+20, but it also suggested an interesting possibility to the researchers. Robert Main of the University of Toronto, the article’s lead author, explained that many properties observed in fast radio bursts (FRBs) could be explained if they were amplifications caused by plasma lenses, noting the similarities with the pulses coming from the pulsar PSR B1957+20 observed thanks to that type of amplification. So far, astronomers have cataloged a couple of dozen fast radio bursts, radio pulses that last no more than a few thousandths of a second. Their nature has remained mysterious because it is difficult to study so few events of such short duration. If the hypothesis of Robert Main’s team is right, that could change. This study was conducted with data collected using the Arecibo radio telescope before it was damaged by Hurricane Maria in September 2017. The hope is to be able to conduct follow-up observations of the pulsar PSR B1957+20 and obtain more data to test the theory of the connection with fast radio bursts.
Washington, DC— Earth's magnetic field shields us from deadly cosmic radiation, and without it, life as we know it could not exist here. The motion of liquid iron in the planet’s outer core, a phenomenon called a “geodynamo,” generates the field. But how it was first created and then sustained throughout Earth’s history has remained a mystery to scientists. New work published in Nature from a team led by Carnegie’s Alexander Goncharov sheds light on the history of this incredibly important geologic occurrence. Our planet accreted from rocky material that surrounded our Sun in its youth, and over time the most-dense stuff, iron, sank inward, creating the layers that we know exist today—core, mantle, and crust. Currently, the inner core is solid iron, with some other materials that were dragged along down during this layering process. The outer core is a liquid iron alloy, and its motion gives rise to the magnetic field. A better understanding of how heat is conducted by the solid of the inner core and the liquid in the outer core is needed to piece together the processes by which our planet, and our magnetic field, evolved—and, even more importantly, the energy that sustains a continuous magnetic field. But these materials obviously exist under very extreme conditions, both very high temperatures and very intense pressures. This means that their behavior isn’t going to be the same as it is on the surface. “We sensed a pressing need for direct thermal conductivity measurements of core materials under conditions relevant to the core,” Goncharov said. “Because, of course, it is impossible for us to reach anywhere close to Earth’s core and take samples for ourselves.” The team used a tool called a laser-heated diamond anvil cell to mimic planetary core conditions and study how iron conducts heat under them. The diamond anvil cell squeezes tiny samples of material in between two diamonds, creating the extreme pressures of the deep Earth in the lab. 
The laser heats the materials to the necessary core temperatures. Using this kind of lab-based mimicry, the team was able to look at samples of iron across temperatures and pressures that would be found inside planets ranging in size from Mercury to Earth—345,000 to 1.3 million times normal atmospheric pressure and 2,400 to 4,900 degrees Fahrenheit—and study how they propagate heat. They found that the ability of these iron samples to transmit heat matched with the lower end of previous estimates of thermal conductivity in Earth’s core—between 18 and 44 watts per meter per kelvin, in the units scientists use to measure such things. This translates to predictions that the energy necessary to sustain the geodynamo has been available since very early in the history of Earth. “In order to better understand core heat conductivity, we will next need to tackle how the non-iron materials that went along for the ride when iron sunk to the core affect these thermal processes inside of our planet,” Goncharov added. The paper’s other authors are Zuzana Konopkova of DESY Photon Science, Stewart McWilliams of University of Edinburgh, and Natalia Gomez-Perez of Universidad de Los Andes. Caption: An illustration of how the diamond anvil cell is used to mimic and study planetary core conditions, courtesy of Stewart McWilliams. The work was supported by the National Science Foundation, the Army Research Office, the Carnegie Institution for Science, the National Natural Science Foundation of China, the Chinese Academy of Science, the University of Edinburgh, and the British Council Research Links Programme. Portions of the research were carried out at the light source Petra III at DESY, a member of the Helmholtz Association. The Carnegie Institution for Science (carnegiescience.edu) is a private, nonprofit organization headquartered in Washington, D.C., with six research departments throughout the U.S.
Since its founding in 1902, the Carnegie Institution has been a pioneering force in basic scientific research. Carnegie scientists are leaders in plant biology, developmental biology, astronomy, materials science, global ecology, and Earth and planetary science. Source: Carnegie Institution for Science
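For readers who prefer SI units, the core conditions quoted above convert as follows. This is plain arithmetic on the figures given in the article; only standard conversion factors are used.

```python
# Convert the quoted core-condition figures into SI units.
ATM_PA = 101_325.0  # one standard atmosphere in pascals

def atm_to_gpa(n_atm):
    """Pressure given as 'n times normal atmospheric pressure' -> gigapascals."""
    return n_atm * ATM_PA / 1e9

def fahrenheit_to_kelvin(f):
    """Degrees Fahrenheit -> kelvin."""
    return (f - 32.0) * 5.0 / 9.0 + 273.15

p_low, p_high = atm_to_gpa(345_000), atm_to_gpa(1_300_000)   # ~35 to ~132 GPa
t_low, t_high = fahrenheit_to_kelvin(2_400), fahrenheit_to_kelvin(4_900)  # ~1590 to ~2980 K
```

So the experiments span roughly 35-132 GPa and 1,600-3,000 K, which is the range usually quoted for small-planet interiors up to Earth's core-mantle boundary region.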
Researchers from Bayreuth University (Germany) uncover mechanisms that allow bone-forming cells to regenerate a correctly shaped new fin skeleton. Fish, in contrast to humans, have the fascinating ability to fully regenerate amputated organs. The zebrafish (Danio rerio) is a popular ornamental fish. When parts of its tailfin are injured by predators, or are experimentally amputated, the lost tissue is replaced within three weeks. Zebrafish are therefore a favored animal model for studying the cellular and molecular principles of organ regeneration.

Image: Bone formation upon partial amputation of zebrafish tail fins. L: normal regeneration of fin rays; R: irregular bone formation with widely expanded production of the signaling protein Sonic Hedgehog. (Developmental Biology, University of Bayreuth)

Zebrafish fins consist of a skin that is stabilized by a skeleton of bony fin rays, similar to an umbrella supported by metal ribs. Fin rays are formed by bone-producing cells, the osteoblasts. In order to rebuild an amputated fin, a large number of new osteoblasts have to be formed by cell divisions from existing osteoblasts. For this to happen, osteoblasts in the vicinity of the wound have to abandon the production of bone material and revert, or dedifferentiate, to a "rejuvenated" developmental stage. Specialized mature cells thereby become osteoblast precursors, a requirement for beginning several rounds of cell division. Until now, little was known about how changes in the differentiation status of osteoblasts are brought about, and it was unknown how zebrafish manage to regenerate the exact shape of the lost fin skeleton. Prof. Dr. Gerrit Begemann, at the Developmental Biology unit at the University of Bayreuth (Germany), and Ph.D. student Nicola Blum now report progress on both fronts, published in two articles in the "Advance Online Articles" section of the journal "Development".
The new results are likely to inform efforts to reconstitute bone tissue and injured organs in humans. A cell's dilemma of proliferation versus specialization Retinoic acid is required to regulate the addition of bone material in growing fish. During regeneration, mature osteoblasts have to revert to an immature osteoblast precursor, which enables the switch from bone synthesis to cell division. The switch requires retinoic acid levels to drop below a critical concentration. However, upon amputation the tissue beneath the wound initiates a massive bout of retinoic acid synthesis that is required to mobilize cell division in the fin stump. How do mature osteoblasts circumvent this dilemma? The answer was provided by Nicola Blum in the laboratory of Prof. Dr. Gerrit Begemann: Osteoblasts that participate in regeneration transiently produce Cyp26b1, an enzyme that destroys and inactivates retinoic acid. Protected by this process, osteoblasts are able to rewind their developmental clocks, thus turning into precursor cells that contribute to a pool of undifferentiated cells, the blastema. Cells in the blastema pass through a number of cell divisions to provide the building blocks for the regenerated fin. However, these cell divisions are supported by high concentrations of retinoic acid, which poses the next predicament: The reversion to become a mature osteoblast is inhibited by high levels of retinoic acid. Nicola Blum found out that connective tissue in those areas of the blastema from which new mature osteoblasts eventually emerge produces the retinoic acid killer Cyp26b1. This lowers the local concentration of retinoic acid, so that osteoblast precursors are again able to mature and produce new fin rays. Other parts of the blastema, which replenish the supply of cells needed for regeneration to occur, continue to produce retinoic acid. 
"This is an elegant mechanism that ensures a gradient of cells experiencing high and low levels of retinoic acid", Begemann explains, "This allows two processes to run in parallel during regeneration: Proliferation for the production of all cells that replace the lost structure and redifferentiation of osteoblasts where the skeleton re-emerges." A navigation system that routes cells to regenerating fin rays How is the exact shape of the fin skeleton regenerated? In order to form new fin rays, newly formed osteoblasts have to align at the correct positions, in this case in extension of existing fin rays in the stump region. The mechanisms ensuring correct osteoblast alignment had remained unknown so far. In a second study, also published in "Development", Nicola Blum broke down the events required for skeletal pattern regeneration. Osteoblasts are ultimately guided to target regions by a signaling protein called Sonic Hedgehog. This is produced locally in the epidermis, a skin-like layer that covers the fin and the blastema. However, signal production only occurs in locally restricted cells that are free of retinoic acid. Such epidermal cells produce Cyp26a1, an enzyme that is functionally similar to Cyp26b1. By manipulating the levels of retinoic acid metabolism in a way that allows Sonic Hedgehog expression from most regenerating cells, Nicola Blum could show that Sonic Hedgehog acts as a beacon for osteoblast precursor cells. The consequences were dramatic: Instead of aligning with existing fin rays, the cells also invaded the spaces between them, which normally form elastic skin. Eventually bone also formed at inappropriate positions, which in turn sabotaged regeneration and the emergence of the original skeletal pattern. Lastly, it emerged that osteoblasts themselves exert a piloting function for other cell types, particularly mesenchymal cells and blood vessels that also have to be directed to appropriate destinations during the rebuilding process. 
If osteoblast precursors are misguided, these cell types follow and exacerbate the inability to reform a functional fin skeleton. "The re-emergence of the skeletal pattern relies on a navigation system with interacting parts", Begemann summarizes. "Initially, retinoic acid is inactivated where new rays are to form. This allows the local production of a signal that pilots immature osteoblasts to areas where existing fin rays are to be extended. Interestingly, over the course of regeneration other cell types in the blastema are informed by osteoblast precursors to respect the boundaries between emerging fin rays."

Nicola Blum and Gerrit Begemann, Osteoblast de- and redifferentiation is controlled by a dynamic response to retinoic acid during zebrafish fin regeneration. Development 2015, Vol 142, Issue 17; posted ahead of print August 7, 2015.
Nicola Blum and Gerrit Begemann, Retinoic acid signaling spatially restricts osteoblasts and controls ray-interray organization during zebrafish fin regeneration. Development 2015, Vol 142, Issue 17; posted ahead of print August 7, 2015.

Contact: Prof. Dr. Gerrit Begemann, University of Bayreuth, Telephone: +49 (0)921 55 2475
Christian Wißler | Universität Bayreuth
I’m a seismologist. Most of my research involves sitting in front of the computer writing code to manipulate waves on my screen, and then using particular characteristics of those waves to infer properties of the Earth’s interior. The waves are recordings of earthquakes, which send vibrations radiating out through the Earth. The recordings are made by specialised instruments called seismometers, which are sufficiently sensitive that they can theoretically detect vibrations smaller than a billionth of a metre. In fact, they’re so sensitive that we have to cover them with heat-shields so tiny changes in temperature don’t cause their parts to minutely expand or contract, messing with our signals. We don’t put them too near trees because as trees sway in the wind their roots tug at the ground around them, causing soil and rock and seismometers to tilt, ever so slightly. Then there is so-called “cultural noise” caused by pesky humans with their cars and trains and pneumatic drills. In order to minimise all the sources of surface noise, we bury these instruments beneath the surface: around a metre down. This means seismologists (or their paid field assistants) have a side-career as semi-professional hole diggers. There are some high-quality permanent seismic stations that are professionally installed. These are often housed in special heat/noise/everything-proof “vaults” in particularly quiet locations. For instance, the 150+ stations of the Global Seismographic Network (GSN) have provided 24/7 data for decades. The global network was first established in the 1960s and has provided a live-stream of global earthquakes ever since. That’s not actually what the stations were put there for, though - they were (and are) utilised by international bodies responsible for enforcing bans on nuclear weapons testing.
Earthquakes have provided most of their stimulation since the last Chinese and Pakistani tests in 1998, but North Korea have recently helped keep nuclear monitoring relevant. Thanks, Kim Jong-un, for keeping us employed! Okay, but what happens if a seismologist like me wants to study a particular area in detail? Well, if I’m lucky enough to get funding, I gather up a whole load of instruments, put some beers in a cooler, and venture into the ‘field’. I then try to distribute the seismometers over the whole area of interest, ideally within reach of roads (but not too close). Finally, I wait for earthquakes around the world to show up on those sensors and tell me about the Earth underneath my little array of instruments. The process of actually installing the sensors is pretty laborious. First, we have to negotiate with land owners to ask permission to bury complicated-looking sensors in their back gardens - you can imagine how those conversations go. Luckily, Americans are NEVER paranoid about being monitored by the government and they all feel kindly towards liberal university elites… Actually, people are almost always lovely and interested and happy to help. Once we know where the sensors are going, it’s time to start digging! We generally dig about a metre down, pour concrete in the bottom of the hole for the sensor to stand on, and then fashion a complicated contraption of pipes and wires and big plastic bins to keep the whole thing fairly waterproof. We cover it with heatproof foam and bury the whole thing again. At the surface, we keep a box with the digital recorder and power. Since these guys can sit out there for several years, we have to build a solar panel array to keep their battery charged. Then we try to hide the solar panels so no one makes off with them (dooming the station as a byproduct of their larceny). The beers are for after the digging is finished.
So, yeah - seismological fieldwork is basically a series of DIY projects with mild landscaping thrown in. It’s tremendous fun. Also, while there are a few ‘best practice’ techniques for sensor installation, everyone does them a little differently and people are always coming up with new tips and tricks. Field seismology is also an exercise in patience and faith - you bury these little packages of electronics for years at a time, hoping they stay alive through the cold and the damp. We try to go and check up on them every few months, but it’s too expensive and energy-intensive to get them to send permanent streams of data. So it’s not uncommon (but it is depressing) to return to a station after 6 months to find that it died 5 months ago and there is no data! That’s why PhDs take so bloody long… So next time you’re out hiking and you come across a small solar panel, a little mound of earth, and a sign that says “Earthquake monitoring equipment - do not disturb”, tread lightly, and wish us luck! This week the Lamont-Doherty Earth Observatory (where I work) held a symposium marking the 50th anniversary of the theory of Plate Tectonics. This paradigm shift was such an important breakthrough that we even describe other scientific advances in terms that rely upon it: “tectonic shift in thinking…”, “groundbreaking idea…”, “plate-deformingly great”. Okay, I made the last one up, but you get my drift (ahem). Anyway, this week Lamont turned into a walking hall of fame for the geosciences, as giants in the field - at least one of whom used to sit at the desk I now occupy - returned to pay homage to the institution that housed so many breakthroughs. They reminisced about the good old days when computers filled entire rooms, poring over nautical charts with your advisor between 10pm and 2am was a normal day’s work (including whisky after 11pm), and no one wore shirts on the scientific cruises (let alone hard hats or shoes…). 
I particularly enjoyed the story of a Lamont ship that got lost off Bermuda and safely navigated to shore by periodically dropping dynamite off the ship. With a wrist watch to measure the time it took for the ‘bang’ to reflect off the seafloor, they echo-sounded their way up-slope, towards land. People were awesome back then. Tellingly, only one of the decorated Professors Emeriti was a woman (the inimitable Tanya Atwater); in those days women were considered unlucky on ships and in 1963 the first woman to ever board a US scientific vessel was actually a Soviet scientist (Elena Lubimova) who was reluctantly accommodated to help smooth relations following a small diplomatic incident in Cuba… The symposium was convened explicitly to celebrate the breakthroughs in the mid 1960’s and early 1970’s that solidified the theory of Plate Tectonics. At the turn of 1960, almost all Earth scientists were ‘fixists’: unmoved (!) by Wegener’s outmoded theory of Continental Drift. Their major gripe was that no mechanism existed to plough the continents through the hard volcanic rock that they knew underlay the Earth’s oceans. But their adversaries, the ‘mobilists’, had an idea: the continents were not pushing through the oceans. The oceans were moving too! In fact, as North America and Eurasia move apart from each other, they reasoned, the oceanic crust moves with the continents and new ocean floor is created at the seam along huge chains of volcanoes (mid-ocean ridges). This process is known as seafloor spreading. Lamont ships traversing the world’s seas had recently found vast and mysterious mountain chains circling the Earth and bisecting many oceans, but no one really knew what they were. (Hint: them’s the mid-ocean ridges.) One of the key pieces of evidence came from the community of scientists measuring rocks’ magnetic fields. As you probably know, the Earth has a strong magnetic field (hence compasses being a thing), generated within its core. 
But rocks (especially volcanic ones) can host their own magnetic fields, inherited from the Earth’s field at the time that they are created. It turns out everyone was rather keen on magnets in those days because the Cold War was afoot and WWII had just happened, and submarines were a bit of a concern… Detecting subs through their magnetic fields was a hot idea, but it required knowing the background field of the rocks pretty well so you could detect anomalies caused by an infiltrating sub, presumably captained by Sean Connery. As scientists mapped the seafloor, they noticed the magnetic field of the rocks looked like a series of stripes. First a bunch of ‘Normal’ polarity magnets, then a bunch of ‘Reversed’ polarity ones. They figured out that this pattern results from the Earth’s magnetic field periodically reversing (so the magnetic North pole randomly flips South, every few 100,000 years, and then back). As volcanic rocks get churned out along these long chains of volcanoes, they hold onto the field at the time of their eruption and you end up with magnetic ‘stripes’ of Normal or Reversed polarity. >>>>> Excellent video demonstrating seafloor spreading <<<<< Now, according to the theory of seafloor spreading, you should see these stripes on either side of the chains of underwater volcanoes, and they should be symmetrical. This would have definitively proved that seafloor is created at the mid-ocean ridges and moves off on either side like symmetric conveyor belts (see animation above). But magnetised rocks are disobliging little buggers and all the random timings of flips (coupled with unknown rates of volcanic production) was making robust observations of this phenomenon elusive. Until… Eltanin 19. The re-purposed US Navy icebreaker Eltanin was a Research Vessel used by Lamont scientists (among others) from 1962-72, completing 52 research cruises in Antarctic waters and surveying vast swaths of the southern oceans. 
Importantly, the good ship Eltanin carried a magnetometer. During its 19th research cruise, it took a long traverse across the Pacific-Antarctic ridge and measured the magnetic field of the rocks. When then-graduate-student Walter Pitman plotted the data, he saw something that irreversibly (c’mon) changed the face of Earth science. The ‘magic’ Eltanin-19 magnetic profile was perfect. It was so detailed, and so symmetric, it proved that seafloor spreading was real and turned ‘fixists’ into ‘mobilists’ wherever it was published. It also provided the key to all the other oceans: by stretching or squashing other magnetic profiles, you could show they, too, fit Eltanin-19 (and the amount of stretching or squashing told you how slow or fast the spreading was!) Although it took a few more years for the legendary founding director of Lamont (Maurice ‘Doc’ Ewing) to be convinced, the then-younger generation were already sprinting into the future with a series of seminal papers that established the fundamentals of Plate Tectonics. As Neil Opdyke commented yesterday, “Only when we began to ask the correct questions did the answers begin to appear”. The marriage of the ‘correct questions’ with the reams of data collected by Lamont’s global fleet of research vessels proved potent. Graduate students were publishing in Science and Nature for fun. Contorted fixist geological narratives fell away. More and more unexplained phenomena slotted into place. And a new paradigm was born. p.s. Sorry about all the puns. p.p.s. Aside from sharing stories about cantankerous advisors and preposterous field exploits, the venerable alumni had a few tips for the aspiring researchers in the audience: - Respect the data - if the data disagree with your ideas, you are probably wrong. - Don’t have ‘darling’ theories. See above. Be prepared to challenge accepted wisdom. - It’s better to be right than first. (No details were given regarding being left or last.)
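The "stretching or squashing" comparison that matched other ridges' magnetic profiles to Eltanin-19 is, in modern terms, a one-parameter template search: the stripes record the same reversal sequence everywhere, scaled by the local spreading rate. A toy sketch of the idea, using made-up polarity-flip ages rather than a real geomagnetic timescale:

```python
import numpy as np

# Illustrative ages (Myr) of the last few geomagnetic polarity flips.
flip_ages = np.array([0.78, 0.99, 1.07, 1.77, 1.95, 2.58, 3.04, 3.11, 3.22, 3.33])

def polarity(age_myr):
    """+1 for normal polarity, -1 for reversed, as a function of crustal age."""
    return np.where(np.searchsorted(flip_ages, age_myr) % 2 == 0, 1.0, -1.0)

def stripe_profile(distance_km, rate_km_per_myr):
    """Polarity of the seafloor at a given distance from the ridge axis:
    crustal age is just distance divided by the spreading rate."""
    return polarity(distance_km / rate_km_per_myr)

x = np.linspace(0.0, 120.0, 600)            # km from the ridge axis
observed = stripe_profile(x, 25.0)          # an 'unknown' ridge spreading at 25 km/Myr

# Stretch/squash the reference pattern until it best matches the observation;
# the best-fitting scale factor is the spreading rate.
candidate_rates = np.linspace(10.0, 60.0, 201)
scores = [np.mean(stripe_profile(x, r) == observed) for r in candidate_rates]
best_rate = candidate_rates[int(np.argmax(scores))]
```

Run on the synthetic profile, the search recovers the 25 km/Myr rate it was built with, which is exactly the trick Pitman and colleagues applied by eye to real shipboard magnetometer records.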
"When faced with the possibility of an earthquake, up until now only the physical risk of the city has ever been evaluated. This, in other words, means damage to buildings and infrastructure, taking into account the number of people inside," as explained to SINC by Liliana Carreño, researcher at the Polytechnic University of Catalonia (UPC). Her team proposes a new method of carrying out an overall assessment of the seismic risk of an urban area, taking into account social strengths and weaknesses and the city's governance. The system created by Carreño and her team considers values such as "crime rates, whether there are marginalised areas, the number of hospital beds, training of hospital staff, etc., which all constitute factors of fragility and social capacity," explain the researchers. "This methodology greatly improves our ability to assess future losses because it takes into account the social condition of the exposed population, which was previously treated as a mere number," states Carreño. Published in the Bulletin of Earthquake Engineering, the new approach has another added value: it uses a technique based on fuzzy logic theory, which allows qualitative information obtained from expert opinion to be used when the necessary numerical information is lacking.

Translating Opinions to Numbers

"The methods for making a complete risk calculation in a given urban area require great quantities of information that is not always available," highlights the researcher. According to Carreño, seismic risk specialists have always faced complex problems concerning imprecise information. "We can now translate linguistic variables like a lot, a few, slight, severe, scarce and enough into mathematical formalism for their subsequent measurement," outlines the scientist. In order to verify the method's validity, Carreño and her team applied it to the cities of Barcelona and Bogotá (Colombia).
She adds that "the Catalan city is a good model since its seismic risk has been subject to study for more than 20 years." Its results confirmed expected risk levels: medium-high for Bogotá and medium-low for Barcelona. As Carreño concludes, "Barcelona's assessment was carried out with the availability of sound information. But the most important aspect of this model is that it is especially useful when studying an urban space that does not have such an advantage and where information is lacking." Reference: Carreño M.L.; Cardona O.D.; Barbat A.H. "New methodology for urban seismic risk assessment from a holistic perspective." Bull Earthquake Eng 10:547-565. 2012. DOI 10.1007/s10518-011-9302-2 SINC | EurekAlert!
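The "translating opinions to numbers" step can be illustrated with a toy fuzzy-logic sketch. The variable names, membership functions, and numeric scales below are hypothetical, chosen only to show the general technique of mapping linguistic ratings such as "slight" or "severe" onto a number; they are not the authors' actual model.

```python
# Illustrative sketch only: maps linguistic damage ratings to a number via
# triangular fuzzy membership functions, in the spirit of (not identical to)
# the fuzzy-logic approach described in the article.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical fuzzy sets for a "damage level" variable on a 0-1 scale:
# (left foot, peak, right foot) of each triangle.
DAMAGE_SETS = {
    "slight":   (0.0, 0.0, 0.4),
    "moderate": (0.2, 0.5, 0.8),
    "severe":   (0.6, 1.0, 1.0),
}

def defuzzify(ratings):
    """Weighted-peak defuzzification of expert ratings like {'severe': 0.7}."""
    peaks = {name: b for name, (a, b, c) in DAMAGE_SETS.items()}
    num = sum(weight * peaks[name] for name, weight in ratings.items())
    den = sum(ratings.values())
    return num / den if den else 0.0
```

An expert saying "damage is severe" with full confidence then yields 1.0, while an opinion split evenly between "slight" and "severe" lands at 0.5, giving the kind of numeric input a risk model can consume when hard data is missing.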
What is Quantum Gravity and the Curvature of Spacetime and how is it all relevant to one another? Welcome to the forum. As you're just getting started here, allow me to offer a few tips. If you're not working on a classroom or homework problem (top of the main page), go to the appropriate forum for the subject. Looks like you've found your way kind of close to the area you want, but still, forums work best when you know enough about the area to be able to ask a pointed question. Questions like "What is this subject and what does it mean for some other subject?" are better handled by starting with a book so you can have enough knowledge to ask a question. And once you know enough to ask, remember you can often find where your question has already been asked: use the SEARCH function, find a few relevant threads and go through them; then you can also get a feel for how to use the forum from the example of how other questions have been handled. Find and read the sticky threads (on top) and actually read and understand the forum rules. How to pick a book to read: search for a question. Try looking for "quantum book" or "relativity book"; I'm sure you'll see some recommendations and maybe some web site explanations to read as well. Actually, how quantum gravity relates to curvature of space-time seems like an interesting question. Is it really impossible to give some overview of the general approach on how to introduce background independence into quantum theory? Because my main interest was in understanding what is known about black holes, I began reading on the properties of black holes. In the article that I was reading, Quantum Gravity is mentioned. Now I am reading on Quantum Gravity and what occurs when Quantum Gravity meets Quantum Theory; my problem now is this equation and what it means in relation to Quantum Gravity and to the properties of a black hole. 
"In general relativity, mass and energy are treated in a purely classical manner, where ‘classical’ means that physical quantities such as the strengths and directions of various fields and the positions and velocities of particles have definite values. These quantities are represented by tensor fields, sets of (real) numbers associated with each spacetime point. For example, the stress, energy, and momentum Tab(x,t) of the electromagnetic field at some point (x,t), are functions of the three components Ei, Ej, Ek, Bi, Bj, Bk of the electric and magnetic fields E and B at that point. These quantities in turn determine, via Einstein's equations, an aspect of the ‘curvature’ of spacetime, a set of numbers Gab(x,t) which is in turn a function of the spacetime metric. The metric gab(x,t) is a set of numbers associated with each point which gives the distance to neighboring points. At the end of the day, a model of the world according to general relativity consists of a spacetime manifold with a metric, the curvature of which is constrained by the stress-energy-momentum of the matter distribution. All physical quantities — the value of the x-component of the electric field at some point, the scalar curvature of spacetime at some point — have definite values, given by real (as opposed to complex or imaginary) numbers. Thus general relativity is a classical theory in the sense given above." Oh, I did not see your most recent post. This is a reply to your original post. Juan, the first thing to understand is the classical idea of the curvature of space, from back in 1915 before quantum theory entered the picture. It was Einstein's insight that what we experience as gravity is really geometry, and the Einstein equation of 1915 shows how the distribution of matter determines the shape of space around it-----this is the main equation of Gen Rel. A famous physicist later put that equation into words: "Matter tells spacetime how to curve. Spacetime tells matter how to move." 
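For reference, the 1915 field equation being paraphrased can be written in its standard textbook form (this is the usual notation, not a quote from the thread), in units where c = 1:

```latex
G_{ab}(x,t) \;=\; R_{ab} - \tfrac{1}{2}\,R\,g_{ab} \;=\; 8\pi G\, T_{ab}(x,t)
```

The left-hand side is the curvature G_ab built from the metric g_ab; the right-hand side is the stress-energy-momentum T_ab of matter, which is exactly the "matter tells spacetime how to curve" statement in words.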
starting in 1919 the geometrical theory of gravity (Gen Rel) was tested---repeatedly and with increasing precision. It really does predict what will happen more accurately than non-geometric theories----theories in which space is a rigid rectilinear framework and gravity is explained by force vectors. So after 80 years of testing Gen Rel, we pretty much accept that space is a dynamic, changing, active thing-----its shape changes as matter moves around in it. fortunately it doesn't change very much except for very dense massive things, so we don't notice------it is still approximately the foursquare rigid spacetime that Newton imagined, so for practical purposes we still think of it like that. quantum theory comes in when you try to give this dynamic geometry, interacting with matter, a QUANTUM description-----that means using stuff like wave-functions, and having uncertainty built in. it doesn't mean that space has to be divided up into little bits:rofl: (sometimes people think space must be made of quanta) it means that things that you observe and measure about the geometry of space----like SURFACE AREA, and ANGLE, and VOLUME, and even the dimensionality itself-----are no longer fixed definite things but are instead quantum observables (which can incorporate uncertainty) For example, in a quantum model of a region of spacetime, the curvature is allowed to have some uncertainty---and depend on the quantum state of the system. that's all I can provide as an introduction. the main thing is to understand the pre-quantum 1915 business first-----when that is assimilated it is easier to think about shifting to a quantum version of it. If and when someone can do it I suppose they might, but being able to give a correct way to start would mean you knew where you were going. Smolin in his book and Perimeter Institute papers seems to show they have tried several approaches and are still looking very hard. 
Personally I do not think they will succeed in combining the background independence of GR with QM or the Standard Model. Thank you, this clears up a lot. But how is it all relevant to a black hole and its properties? Several people are around here who could reply well to that. I will give someone else a chance. juan_rod:” What is Quantum Gravity and the Curvature of Spacetime and how is it all relevant to one another?” marcus:” a famous physicist later put that equation into words: "Matter tells spacetime how to curve. Spacetime tells matter how to move." “it means that things that you observe and measure about the geometry of space----like SURFACE AREA, and ANGLE, and VOLUME, and even the dimensionality itself-----are no longer fixed definite things but are instead quantum observables (which can incorporate uncertainty) For example, in a quantum model of a region of spacetime, the curvature is allowed to have some uncertainty---and depend on the quantum state of the system. that's all I can provide as an introduction. the main thing is to understand the pre-quantum 1915 business first-----when that is assimilated it is easier to think about shifting to a quantum version of it.” It seems that nature likes to follow something very exceptional. In addition to Marcus' suggestion, and in accordance with him, look also at the elliptical geometry of the S7 E. Cartan "sphere", which admits an absolute parallelism. juan_rod:” But how is it all relevant to a black hole and its properties?” I have no idea. Hence my problem... thanks a lot everyone. Wait, Juan and Anonym, I will try to reply. I thought some other people would like to reply, but they didn't. You ask what BH has to do with classic 1915 Gen Rel and also with QG. I think maybe you know the answer or part of the answer. everything that has been observed about BH so far is simply consistent with classic Gen Rel and has nothing to do with QG, I think you will agree. If not, please say how I am mistaken. 
Astronomers have no observation of BH hawking radiation or BH evaporating. So there is no empirical data about the relation of QG to BH! By contrast there is a large amount of empirical data about stuff observed falling into BH and the minimal stable orbit radius----and the redshift of X ray from iron very close to the hole----and so on. Wonderful empirical data. Stars have been observed orbiting the central BH in our galaxy and so on. All that is wonderful but perfectly classical. So your question about the relevance of QG to BH has a peculiar status. Unless we get some new observation, the question concerns something about which there is no empirical data, but is a rather more speculative question, purely about theory. Well you seem more advanced than I thought your OP indicated. I’d recommend considering what you already know about the fundamentals behind the two ideas: “Quantum Gravity” “Curvature of Spacetime” and how they are relevant to each other, before you worry about how they may be relevant to a Black Hole. “Quantum Gravity” - Fundamentally based on a quantum approach, utilizing QM (its derivatives or equivalents), and uses the Standard Model including the idea of particle exchange of gravitons (yet to be discovered) to account for gravity. “Curvature of Spacetime” – Fundamentally linked to GR, accounting for gravity by curves or warping in 4 dimensions (some claim 5), with our view of 3 dimensions and time, along with gravity, being the result. Note the lack of need or use of particle exchange. Most see these two ideas as completely incompatible. I agree with the view (but not all do) that they cannot be combined. As in, at least one must be shown as wrong (but they both work so well) some day! Hence my opinion that Smolin will not succeed in combining the background independence of GR with QM or the Standard Model. Got to hand it to those with the persistence to keep trying to combine them, IMO they are working on the core of your question. 
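A small aside on the "minimal stable orbit radius" mentioned above: in classical Gen Rel, a non-spinning black hole has its innermost stable circular orbit (ISCO) at r = 6GM/c², i.e. three Schwarzschild radii. A quick numerical sketch (constants rounded; illustrative only, not taken from the thread):

```python
# Sketch: the "minimal stable orbit" of a non-spinning (Schwarzschild) black
# hole in classical general relativity is the innermost stable circular orbit
# (ISCO) at r_isco = 6GM/c^2, three times the Schwarzschild radius 2GM/c^2.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-spinning black hole, in meters."""
    return 2 * G * mass_kg / C**2

def isco_radius(mass_kg):
    """Innermost stable circular orbit radius, in meters."""
    return 3 * schwarzschild_radius(mass_kg)
```

For a one-solar-mass black hole this gives a Schwarzschild radius of roughly 3 km and an ISCO near 9 km; the redshifted iron X-ray line mentioned in the post probes gas orbiting down to about this radius.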
Parampreet Singh in Ashtekar's group at Penn State has proposed a new phenomenon to look for in Gamma-ray Bursts (GRBs) that would have a QG signature. I heard this in a recorded Penn State seminar talk. You see, I have to stretch very hard to reach some contact with empirical observation. He says that some instances of gravitational collapse would release a GRB with a distinctive lightcurve. I don't remember, the lightcurve would have some peculiar feature that astronomers could be told to look for. In QG, gravitational collapse is different because gravity actually turns repulsive at near-Planck density. Some extreme cases might look different. My apologies----I do not know if this work of Singh has even been published. I will try to think of some more cases. If one has no observational check then there is no certitude of talking about something real---it could all be just weaving words about the artifacts of theory. From my basic understanding of QG and QM, I know that quantum leaps from within electrons in the matter surrounding and in front of a BH make it seem as if time curves and/or slows down. I also understand that space-time curvature helps both find and identify a BH. If any of my assertions are wrong please share some insight. Marcus:” Wait, Juan and Anonym” “everything that has been observed about BH so far is simply consistent with classic Gen Rel” Sorry, when I was a student, I attended a seminar given by Y. Zeldovich at the Moscow Steklov Institute. The issue was not a particular problem or a particular solution of some problem. Y. Zeldovich presented an analysis of whether Einstein GR contains an essential singularity. If I understood him correctly, the answer was no. However, I agree to wait. Of course, what happens to RT at this singularity? With regards to this question I am interested to see what the members here think of integrating quantum theory with the principle of general covariance. 
Is it at all possible or would it indicate a fundamental flaw in at least one of the theories? DocN:” what happens to RT at this singularity?” I beg your pardon for my ignorance, but what does RT stand for? I think he means "renormalization theory" or something like that [EDIT: correction, I see from DocN's next post that he may have meant "relativity theory"] Anonym, I was interested by what you said here: I agree that classical Einstein GR with a positive cosmological constant Lambda can have a bounce that begins the expansion phase. It does not absolutely need to have a cosmological singularity there. But I am not sure that I understand what you are reporting from Zeldovich. I was glad to hear that you were attending seminars at the Steklov. This is to be congratulated as a kind of good fortune. Very famous institute. I hope your present location is also stimulating and has plenty of ideas. doesn't relativity theory "collapse" at the singularity just like all physics laws? What is it that you mean exactly by "collapse", and how do all the laws of physics "collapse" at the singularity? a singularity is defined in the context of some particular theory---as a place where that particular theory breaks down (i.e. produces meaningless results)----so a singularity, or breakdown, in the theory helps define the limits of applicability of the theory. A breakdown in Gen Rel would not, AFAIK, imply a breakdown in actual physical reality (however one defines that :-) ) or a necessary breakdown in other physical laws. One way to say this is "singularities do not exist in nature, as far as we know, they are glitches in theories". I think you may very well agree with what I just said. If you don't, please offer us some physical evidence of the existence of the big bang singularity, besides just the fact that one theory, Gen Rel, breaks down there. Assuming you DO agree with what I just said, let me try to answer the question you asked, that was quoted here. 
"Doesn't Gen Rel break down at the cosmological singularity?" Yes, by definition. The cosmological singularity is defined using Gen Rel, as a place where Gen Rel breaks down. So yes. "...just like all other physical laws." How do we know other physical laws break down at the beginning of expansion? Other theories constructed to replace Gen Rel do NOT predict infinite pressure, curvature, density, temperature. I don't know of any empirical evidence yet to say who is right. I would assume that much of physical law would have to be CORRECTED to be applicable at very high (Planck) temperature and density. Perhaps one would need radically new law or perhaps extensive quantum corrections. I simply don't know. But I have no scientific reason to suppose that ALL physical law simply "collapses" and ceases to apply. Marcus:” I think he means "renormalization theory" or something like that [EDIT: correction, I see from DocN next post that he may have meant "relativity theory"]” “I agree that classical Einstein GR with a positive cosmological constant Lambda” “everything that has been observed about BH so far is simply consistent with classic Gen Rel and has nothing to do with QG” Let us define what we are discussing. I suggest: 1. classic Einstein GR means without the cosmological term; 2. GR means "relativity theory"; 3. nothing to do with QG means "renormalization theory" is not relevant. I am an outsider here. I jumped into the discussion since juan_rod originally posted his thread in Quantum Physics. I was sure that you were laughing at me (and I deserve it). I was ready to quit. However, perhaps, I misinterpret you. Now I guess that DocN referred to Ch. 12-14 of the later edition (6) of L.D. Landau, E.M. Lifshitz "Field Theory", which I never read until today. It turns out that E.M. Lifshitz decided to improve L.D. Landau. However, Y. Zeldovich et al are discussed there in detail. I refer to the seminar since I hadn't the reference at hand (by the way, this is the only seminar at Steklov I attended. 
Occasionally I jumped to Moscow and was invited to listen "as a kind of good fortune"). I interpret the negative answer of Y. Zeldovich as a statement that BH may be "gauged" away from GR, in contrast with your statement. Why I am here I will explain in the next post. DocN:” doesn't relativity theory "collapse" at the singularity just like all physics laws?” And what are the substitutions? Hollywood movies? I guess that it is the collapse of the wave packet speaking (the transition from the Quantum world to the Classical world: E. Schrödinger's Cat). BH out, WH remains, similarly to W. Ritz in classical electrodynamics. juan_rod:” My original question consisted of a general and/or limited understanding of Quantum physics and the curvature of space-time. Now you have introduced electrodynamics, how is this relevant to the properties of a BH?” You received the identical answer to your original question from Marcus and me. But nobody knows the ultimate truth. Let me formulate our answer in my own words: The classical A. Einstein local field theory of gravitational interactions is not complete (it can't be wrong; it is based on the universally valid and firmly established experimental result that the inertial mass is identical to the gravitational mass. It has enormous predictive power, it was verified and confirmed by all available experiments. In addition, it is the most beautiful theory ever formulated by the human mind). However, our answer was: Cartan's torsion should be added (see Marcus in the "Einstein was wrong, and should be Cartanized!" thread and F.W. Hehl et al, Rev. Mod. Phys., 48, 393 (1976) for example). They wrote: "Not least among this evidence is the demonstration that the U(4) theory arises as a local gauge theory for the Poincare group in space-time". That I also know with certainty from a completely different consideration (the structure of the tensor products in QM). Compare with A. 
Einstein (1915): today we have experimental evidence that in addition to the long range interactions transparent in the Classical world, two short range interactions are hidden classically and transparent in the Quantum world: the weak and the strong. All four are called the fundamental interactions. It is generally accepted that no other fundamental interactions exist. All fundamental interactions have the same origin: the presence of phases in the QM description of system states. This allows one to formulate a principal postulate of physics: the Principle of Local Gauge Invariance (E. Schrödinger, H. Weyl, Y. Aharonov, D. Bohm and ultimately C.N. Yang and R.L. Mills; for review see L. O'Raifeartaigh, "The Dawning of Gauge Theory", Princeton Univ. Press (1997)). Using that postulate the phenomenological U(2) theory of electroweak interactions was formulated (S. Weinberg et al). It has unquestionable experimental confirmation. In addition, the identical approach allowed the formulation of a preliminary version of the theory of strong interactions (QCD). It also has substantial experimental support. All that I consider as elements of the relativistic quantum field theory. The consistent formulation of that theory is still an open problem. I do not believe that the formulation of QG may be obtained before it; however, every attempt is legitimate. This is the way physical knowledge is acquired. For the described reasons I consider our debate about BH very interesting but, at the present status of the theory, groundless. I have no required background in gravitation (I did not work in that area of scientific research, only read experimental and theoretical papers sporadically). In the past I was deeply impressed by Y. Zeldovich's presentation; he demonstrated time oscillations in the classical world which I associate with quantum behaviour. For all these reasons I did not accept Marcus' statement "everything that has been observed about BH so far is simply consistent with classic Gen Rel". But frankly, I have no idea. 
For sure, the complete classical as well as quantum gravitation theory must be in compliance with all physical knowledge obtained during the last 450 years and not in contradiction with it. If what I said still seems complicated to you, next time try to ask simpler questions.
Researchers First Reveal the Behaviors of Photons in a Birefringent Interferometer The CAS Key Laboratory of Quantum Information, led by academician GUO Guangcan, has achieved significant progress in the theoretical and experimental study of photon interference in a birefringent interferometer. Prof. SHI Baosen, Associate Researcher ZHOU Zhiyuan and their collaborators construct a quantum optical model for a birefringent interferometer and reveal the interference behavior of photons in this interferometer for the first time; the experimental demonstrations agree very well with the theoretical predictions. The main results have been published in the journal “Physical Review Letters” [Phys. Rev. Lett. 120, 263601 (2018)]. The interferometer is an indispensable tool for modern science and technology, which has been widely used in optical research and other scientific fields. The fundamental understanding of photon interference has been debated since Dirac. In Dirac’s viewpoint, a photon can interfere only with itself. Such a viewpoint encounters some problems when one explains two-photon interference generated from the spontaneous parametric down-conversion process. Later, physicists updated Dirac’s viewpoint: a pair of photons interferes only with the pair itself. Once we know how a photon behaves in a certain interference process, we can better apply this behavior for high-precision metrology based on photon interference. Measurements of most physical quantities, including position, angle, optical dispersion, and temperature, often depend on decoding parameters from specific interference fringes or patterns. 
How to obtain stable interference fringes and extract more parameters from a single interference fringe is the long-pursued aim in interference-based precision optical metrology. Prof. SHI Baosen, Associate Researcher ZHOU Zhiyuan and their collaborators first construct the general quantum optical model for a birefringent interferometer, then design a passively stable Mach-Zehnder interferometer (MZI) to demonstrate the theoretical predictions. Two KTP crystals are inserted in the two arms of the MZI, one for phase compensation and the other as a sample under test. They use a self-developed high-brightness telecom-band photon source as an input light source [Opt. Express 23, 28792 (2015)] to study both two-photon and single-photon interference behaviors. They find that in both cases one can observe temperature beating fringes depending on the rotation angle of the sample under test; furthermore, the two-photon interference fringes beat twice as fast as the single-photon interference fringes, which shows super-resolution in phase measurement for the two-photon input case. Through the beating curve, they can determine the thermal dispersion coefficients for both optical axes of the KTP crystal with a single interference fringe. In addition, they also study the influence of polarization decoherence on the interference fringes; they find that polarization decoherence increases with increasing photon bandwidth, which results in a decrease of interference visibility for both the single- and two-photon input cases. The experimental observations are in perfect agreement with the theoretical predictions. Figure 1. Experimental setups for the experiments. Figure 2. Two-photon and single-photon temperature beating curves at different rotation angles. The left column of figures is for the two-photon input case; the right column, the corresponding single-photon input case. 
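The factor-of-two speedup of the two-photon fringes can be reproduced with an idealized numerical sketch. This toy model assumes perfect visibility and ideal cosine fringes; it is not the authors' full quantum optical model of the birefringent interferometer, only an illustration of why an N = 2 photon state shows phase super-resolution.

```python
# Toy model: a single photon through an interferometer gives detection
# probability ~ cos(phi), while an ideal two-photon (N=2) input gives
# coincidence fringes ~ cos(2*phi), oscillating twice as fast.

import math

def single_photon_fringe(phi):
    """Normalized single-photon detection probability vs. phase phi."""
    return 0.5 * (1 + math.cos(phi))

def two_photon_fringe(phi):
    """Normalized two-photon coincidence probability vs. phase phi."""
    return 0.5 * (1 + math.cos(2 * phi))

def fringe_period(fringe, step=1e-4):
    """Estimate the period by locating the first minimum and doubling it."""
    phi = step
    while fringe(phi) > 1e-8:
        phi += step
    return 2 * phi
```

Running `fringe_period` on the two fringes gives periods of about 2π and π respectively, i.e. the two-photon fringes beat twice as fast, which is the super-resolution effect reported in the article.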
Finally, they point out that the single-photon beating phenomenon is not limited to true single-photon input; the same results can be observed with lasers, a fact that would be of great benefit in practical measurement applications. Moreover, the theoretical model can be extended to measuring the wavelength dispersion and the electro-optical coefficient of a birefringent crystal. Therefore, this work is of great importance for understanding the nature of the photon and for precision optical metrology. This work is supported by the National Natural Science Foundation of China (NSFC); the National Key Research and Development Program of China; the Anhui Initiative In Quantum Information Technologies; the China Postdoctoral Science Foundation; and the Fundamental Research Funds for the Central Universities (School of Physical Sciences, USTC)
Kinetics of Some Special Reactions There are certain reactions in which the activation may be carried out by electromagnetic radiation in the visible and ultraviolet region, with wavelengths approximately between 100 and 1000 nm. These reactions are called photochemical reactions. A photon of radiation, referred to as a quantum with energy hν (ν being its frequency), is the primary unit of radiation. When a photon from high-energy electromagnetic radiation such as X- and γ-rays is used, the chemical processes are called radiolytic reactions. Photochemical reactions are governed by two basic principles, viz. the Grotthuss-Draper law and the Einstein law of photochemical equivalence. Keywords: Anionic Polymerization; Cationic Polymerization; Heterolytic Fission; Step Growth Polymerization; Chain Growth Polymerization
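The Einstein law of photochemical equivalence mentioned above says each absorbed photon activates one molecule, so the energy delivered per mole of photons (one "einstein") is E = N_A·h·c/λ. A quick sketch with rounded constants (function names are illustrative, not from the text):

```python
# Sketch of the Einstein law of photochemical equivalence: one absorbed photon
# activates one molecule, so one mole of photons ("an einstein") carries
# E = N_A * h * c / wavelength.

H = 6.626e-34    # Planck constant, J s
C = 2.998e8      # speed of light, m/s
N_A = 6.022e23   # Avogadro number, mol^-1

def photon_energy(wavelength_m):
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / wavelength_m

def einstein_energy_kj_per_mol(wavelength_nm):
    """Energy delivered by one mole of photons, in kJ/mol."""
    return N_A * photon_energy(wavelength_nm * 1e-9) / 1000.0
```

At 300 nm (ultraviolet) this gives roughly 400 kJ/mol, comparable to covalent bond energies, which is why light in the 100 to 1000 nm window cited above can drive photochemical activation.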
The Interaction of Supernova Remnant in the Early Phase with a Circumstellar Shell The observations of SN 1987A in the ultraviolet (IUE), the near infrared (speckle interferometry), and the soft X-ray (Ginga) suggest that circumstellar shells (CSSs) exist. The shells were formed when the fast blue-supergiant wind swept up the red-supergiant wind. Keywords: Rarefaction Wave; Supernova Remnant; Speckle Interferometry; Azimuthal Wave; Numerical Result; Figure
Considering the Specific Impact of Harsh Conditions and Oil Weathering on Diversity, Adaptation, and Activity of Hydrocarbon-Degrading Bacteria in Strategies of Bioremediation of Harsh Oily-Polluted Soils Weathering processes change the properties and composition of spilled oil, which is the main reason for the failure of bioaugmentation strategies. Our purpose was to investigate the metabolic adaptation of hydrocarbon-degrading bacteria under harsh conditions, to be considered in overcoming the limitations of bioaugmentation strategies under such conditions. Polluted soils, exposed for prolonged periods to weathered oil under harsh soil and weather conditions, were used. Two types of enrichment cultures were employed, using 5% and 10% oil or diesel as sole carbon sources while varying the mineral nitrogen sources and C/N ratios. The most effective isolates were obtained based on growth, tolerance to toxicity, and removal efficiency of diesel hydrocarbons. The activities of the newly isolated bacteria, in relation to the microenvironment from which they were isolated and their interaction with the weathered oil, showed an individual, specific ability to adapt when exposed to such factors and to acquire metabolic potentialities. Among 39 isolates, ten identified by 16S rDNA gene similarity, including two notable Pseudomonas isolates and one Citrobacter isolate, showed the particular ability to shift their hydrocarbon-degrading activity from short-chain n-alkanes (n-C12–n-C16) to longer-chain n-alkanes (n-C21–n-C25) and vice versa by alternating nitrogen source compositions and C/N ratios. This is shown for the first time.