Introducing the FSA Module (2:39)
This module of the Reef Resilience Toolkit focuses mainly on the conservation of Reef Fish Spawning Aggregations (FSAs). Although the problems facing global fisheries are diverse, we have chosen to focus on fish spawning aggregations because of their extreme susceptibility to over-fishing, and their importance as part of the resilience model.
Maintaining healthy breeding populations (seed sources) of reef fish is critical for the sustainability and health of coral reef systems. Fish spawning aggregations are an example of Critical Areas that need special protection and management in order to build resilience into a comprehensive reef management program.
While this module emphasizes the conservation of FSAs, many of the recommendations and tools can be applied to other fishery and conservation problems.
Ultimately, managers need to approach the ecosystems they manage from a holistic perspective, requiring them to think about all functional aspects of the ecosystem. Focusing on the conservation of FSAs is only one piece, albeit a critical one, in a broader management context.
A variety of topics are included in this section and follow in sequential order. However, many sections of this toolkit can be used as stand-alone resources. A comprehensive training on the resilience principles is also available in the Resilience Training section of this website, where you will find links to numerous resources developed by global experts on coral reef management. To make contributions to future versions of this toolkit, or to add case studies, please contact us.
In this example we are going to sort the integer values of an array using the insertion sort algorithm.
Insertion sort is similar to bubble sort, but it is more efficient because it performs fewer element comparisons. The algorithm takes each value in turn and compares it with the values before it until it finds one that is not greater, which means all of the preceding values are smaller than the compared value. Insertion sort is a good choice for small arrays and for nearly-sorted data. For large data sets there are more efficient algorithms, such as quick sort, heap sort, or merge sort.
Positive features of insertion sort:
1. It is simple to implement.
2. It is efficient on (quite) small data sets.
3. It is efficient on data sets which are already nearly sorted.
The complexity of insertion sort is O(n) in the best case (an already sorted array) and O(n²) in the worst case.
Insertion sort takes each element from the left and assigns its value to a variable. It then compares that value with the previous values, shifting them to the right until the value can be placed so that all the values before it are smaller. It then takes the next element and repeats the same steps until the end of the array is reached.
How insertion sorting works:
The code of the program :
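Below is a sketch of the program in Python (illustrative; the original tutorial may have used another language). It reproduces the output shown below:

```python
def insertion_sort(values):
    """Sort the list in place using insertion sort and return it."""
    for i in range(1, len(values)):
        key = values[i]      # take the next element from the left
        j = i - 1
        # shift every prior element larger than key one slot right
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key  # insert key after the last smaller value
    return values

data = [12, 9, 4, 99, 120, 1, 3, 10]
print("Values Before the sort:", *data)
insertion_sort(data)
print("Values after the sort:", *data)
```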
Output of the example:
Values Before the sort:
12 9 4 99 120 1 3 10
Values after the sort:
1 3 4 9 10 12 99 120
If you are facing any programming issue, such as compilation errors, or cannot find the code you are looking for, ask your questions and our development team will try to answer them.
This tutorial demonstrates a way of running animation and game logic at a constant speed independent of effective frame rate. It's a pretty simple technique, so I'll try to keep it short and sweet.
Frame-based animation is a system in which the game world is updated one iteration each frame. This is the easiest system to implement, and the way most new programmers do it, but it has several drawbacks. The speed of your game is effectively tied to the frame rate, which means it will run more slowly (chronologically) on older computers, and faster on newer ones. This is generally undesirable.
Variable interval time-based animation is a system in which the game state is updated once per frame, similarly to frame-based animation. The difference from frame-based animation is that the amount of time elapsed since the last frame is used as part of your game state update mechanisms. For example, if an object is to move at a rate of three units per second, you might do something like this:
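For instance (a minimal Python sketch; the function and constant names are illustrative):

```python
UNITS_PER_SECOND = 3.0

def update_position(position, interval):
    # interval = seconds elapsed since the last frame;
    # scaling movement by it keeps speed constant across frame rates
    return position + UNITS_PER_SECOND * interval
```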
The larger interval is, the further the object moves.
As opposed to frame-based animation, this system allows your game to run at a constant speed regardless of frame rate, but it has its own drawbacks. It tends to be rather unstable; different frame rates will produce slightly (or in some cases, dramatically) different results. It's possible to implement a variable interval system that is completely stable, and even ideal if you're up to the challenge, but it presents some very difficult problems to overcome. Take, for instance, an object moving along a curved path:
At each iteration, the object is moved forward, and the direction it's moving is rotated slightly. Notice that the object's path hits the edge of the wall. This works fine with short timesteps, but with longer ones, something like this might happen:
The object has missed the wall entirely because its curve is less rounded, since it's moving longer distances and turning sharper angles to make up for the longer time interval. The behavior of your game has been indirectly affected by a difference in frame rate. This could potentially be remedied by subdividing the timesteps as necessary, or with more sophisticated collision detection code that takes into account the curved path of the object. However, there's a much simpler and easier way:
Fixed interval time-based animation is the system I'll be demonstrating in this tutorial. Using this system gives you most of the benefits of a variable interval system with a lot less headache. If you've already implemented frame-based or variable interval time-based animation, you can switch to fixed interval time-based animation with very little change in your code.
In a nutshell, the idea behind fixed interval time-based animation is to update the state of your game world a variable number of times per frame, but at a fixed timestep interval. If a particularly complex scene is being drawn, the frame rate will bog down, causing more game world update cycles to happen at once in order to catch up. If a very simple scene is being drawn and the frame rate is high, the state of the game world will be updated fewer times (or not at all, in some cases) in one frame.
You may want to cap the number of update cycles at some point to prevent large hiccups in performance from causing mass mayhem in your game. If your game stalls for some reason (due to a background task, large amounts of virtual memory being paged in, etc.), this will allow gameplay to pause momentarily until it can run at a reasonable speed again, rather than catching up by a large amount while the user has no way of interacting with the game. (Note that if your game runs over a network, and needs to remain synchronized with a peer, capping the number of update cycles may not be an option.)
So, enough preamble. On with some example code!
First, we define the constants MAXIMUM_FRAME_RATE, MINIMUM_FRAME_RATE, UPDATE_INTERVAL, and MAX_CYCLES_PER_FRAME. MAXIMUM_FRAME_RATE determines the frequency of update cycles; MINIMUM_FRAME_RATE is used to constrain the number of update cycles per frame. UPDATE_INTERVAL is the amount of time, in seconds, that passes between updates. MAX_CYCLES_PER_FRAME is the maximum number of updates per frame before gameplay starts slowing down to let drawing catch up.
In the runGame() function, two static variables are defined: lastFrameTime and cyclesLeftOver. lastFrameTime is the return value of GetCurrentTime() (more on this in a moment) at the end of the last call to runGame(). cyclesLeftOver stores fractions of update cycles between calls to runGame() to be carried out later, when/if they add up to UPDATE_INTERVAL, and consequently an extra cycle.
The first thing we do in the function is get the current time in seconds with GetCurrentTime(). This is a made-up function name; you'll have to replace it with an appropriate function call in whatever API you're using. I'll list some functions you can use for it in various APIs at the end of the tutorial.
currentTime is used, along with lastFrameTime and cyclesLeftOver, to compute the number of update cycles for this frame. The number of update cycles is then capped to meet the minimum frame rate. Note that updateIterations is actually an interval, not a literal number of cycles. This is more convenient, as we'll see below.
The while loop runs a variable number of times, determined by the value of updateIterations. In your updateGame function, you should perform the necessary actions to update the state of your game world by one iteration. Note that if you're converting your code from a variable-interval system (or want to have the flexibility to do the reverse) and your game state updates require an interval argument, the appropriate value to pass for the interval is the UPDATE_INTERVAL constant.
After the game update loop is finished running, the remaining value of updateIterations (which, after the while loop, will be less than UPDATE_INTERVAL) is saved in cyclesLeftOver, and currentTime is saved in lastFrameTime. At this point, you're ready to draw the game scene.
One other thing to watch for: in this example, I set the initial value of lastFrameTime to 0.0. In some APIs, the function you choose to replace GetCurrentTime() will return the time since program startup, which would be appropriate here; others, however, return the time since the computer last started, or since a fixed reference date. In those cases, you'll want to explicitly set the initial value of lastFrameTime, like so:
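In Python, with `time.time()` standing in for `GetCurrentTime()`, that initialization might look like:

```python
import time

# seed the timer once at startup, before entering the main loop,
# so the first frame doesn't try to "catch up" from the clock's epoch
last_frame_time = time.time()
```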
As promised, here's a list of functions you can use to get the current time, in various APIs. See your API documentation for more information on how to use them:
| API | Function name | Units returned |
|---|---|---|
| Win32 | QueryPerformanceCounter() | Variable; see also QueryPerformanceFrequency() |
| Mac OS X (also in Carbon) | UpTime() | AbsoluteTime (variable; see developer.apple.com) |
Description: Debugging essentially means tracking down an error in your code. Found a "bug" in your code? Then you need to "de-bug" it! This article will introduce you to some basic concepts, such as error checking and built-in functions like var_dump(), that will aid you in tracking down errors in your PHP applications.
There are also many fully-fledged debugging tools for PHP that let you set breakpoints, watch variables, and such like, but we won't be using them in this article.
Bode Titius Rule
In the 18th century, two astronomers, Johann Bode and Johann Titius, reported a numerical
sequence into which the sizes of the planetary orbits fit. At the time the rule was recognized,
there were only six known planets: Mercury, Venus, Earth, Mars, Jupiter and Saturn.
Textbooks usually remark on the Bode-Titius sequence casually, noting that many astronomers dismiss it as
mathematical sleight of hand. In any event, astronomers note the following points,
which we must keep in mind: The rule hypothesized a planet between Mars and Jupiter,
which turned out to be where the asteroid belt is. It also hypothesized a planet out from
Saturn, which turned out to be Uranus, but Neptune and Pluto don't fit the pattern very well.
Pluto has an especially irregular orbit, coming in closer to the Sun than Neptune at one point, and its orbit is
slightly tilted from the plane in which the other orbits closely lie (the ecliptic).
Generating the Bode-Titius Sequence:
1. Orbit: From left to right, make ten columns and number them.
2. Scaling: Under orbit #1 write 0, under #2 write 3. For the rest, double the previous (6, 12, 24...)
3. Add 4: In the next row, add 4 to the number above it for each column.
4. Divide by 10: In the next row, divide the number above by 10.
| Orbit | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Scaling | 0 | 3 | 6 | 12 | 24 | 48 | 96 | 192 | 384 | 768 |
| Add 4 to each | 4 | 7 | 10 | 16 | 28 | 52 | 100 | 196 | 388 | 772 |
| Divide by 10 | 0.4 | 0.7 | 1.0 | 1.6 | 2.8 | 5.2 | 10.0 | 19.6 | 38.8 | 77.2 |
5. Compare: compare the numbers in the last row to the known orbital radii of the planets (in AU's).
The numbers match the spacing of the inner planets very well. The correlation diminishes slightly in the outer
solar system. Pluto represents a great discrepancy.
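The steps above can be sketched in a few lines of Python (`bode_titius` is a hypothetical helper, not part of the original worksheet):

```python
def bode_titius(n_orbits=10):
    """Predicted orbital radii in AU for the first n_orbits orbits."""
    scaling = [0, 3]                          # step 2: start with 0 and 3...
    while len(scaling) < n_orbits:
        scaling.append(scaling[-1] * 2)       # ...then double the previous
    return [(s + 4) / 10 for s in scaling]    # steps 3 and 4: add 4, divide by 10

print(bode_titius())
# [0.4, 0.7, 1.0, 1.6, 2.8, 5.2, 10.0, 19.6, 38.8, 77.2]
```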
Shedding Light on Penguins - Studies by Milwaukee County Zoo staff in Chile provide insights into helping Humboldt penguins survive.
In conjunction with the Milwaukee County Zoo’s long-term field studies of Humboldt penguin ecology in Chile, the Zoo has been coordinating funding for annual population censuses of the wild penguins in Chile since 1994. Annual censuses not only provide information on the size of the wild population, but also allow us to detect trends in population growth or decline, and population movements, particularly in reference to El Nino weather patterns. This information is critical to understanding the behavior and activity of wild Humboldt penguins.
Zoological Society of Milwaukee (ZSM) funding has paid the expenses for Chilean ornithologists to survey the Chilean coast during the penguin molting season to provide accurate estimates of existing penguin populations. The ZSM also has supported the Zoo’s broader study on Humboldt penguins since it started in 1994.
Recent efforts include installing low-cost artificial burrows and studying how to protect the penguins from getting caught in commercial fishing nets.
Adobe Acrobat PDF
Filesize: 6.48 MB
Animal(s) being conserved/studied: Humboldt penguins
Alive issue: Fall 2001
NGC 2366 is a small, irregular dwarf galaxy 10 million light years away in the direction of the constellation Camelopardalis - the giraffe.
You can click on the image to view a larger version.
The clearly visible blue region in the upper-right corner of the image is the star-forming nebula NGC 2363.
Broadly similar to the Milky Way's satellite galaxies, the Large and Small Magellanic Clouds, NGC 2366 may be small in comparison to many of the galaxies we are more accustomed to seeing in Hubble images, though this doesn't stop it from being a very active star factory indeed.
The smattering of active regions indicates that the galaxy is producing a great number of high-mass blue stars (the blue smudges, and of course within NGC 2363).
The image was produced using Hubble's infrared and green filters, so although these regions appear blue in the image, the light they emit is actually a shade of red.
The image spans a distance of roughly 1/5 the diameter of the full moon though the galaxy itself is much too faint to be seen with the naked eye.
The view also captures a much more distant spiral galaxy which can be seen as the orange-brown structure in the upper middle portion of the image.
You can read more about the image here
Hubble has for the first time spotted Aurorae on the distant ice giant Uranus. In the image below you can see the turquoise disk of the planet has a bright ‘blotch’.
An aurora is produced when a stream of charged particles from the solar wind (the material ejected from the Sun) collides with a planet's magnetic field (more properly called its magnetosphere) and excites the particles within the atmosphere, causing them to glow. This glow is what we observe as the aurora.
On Earth, aurorae with a blue or red colour are due to excited nitrogen, whilst green or a reddish-brown hue is due to excited oxygen. The aurorae can dance across the sky in waves of coloured light, and whilst some last for only a few brief minutes, others can remain active for hours depending on the conditions creating them – solar storms, for example, can create very powerful aurorae.
Aurorae have been observed on other planets as well, particularly Jupiter and Saturn; both of which have prominent auroral systems. Those present in Uranus’ atmosphere are considerably fainter and appear to last only for a few short minutes at a time.
These images represent the first observations of Uranus's aurorae; the only previous data were collected directly during the Voyager 2 flyby in 1986.
These new observations should help to reveal more about Uranus’ magnetic field, which we currently know little about.
You can read more here
Since the beginning of the Space Age, man has sent many manned and unmanned missions into space. Very powerful telescopes, built around the world, broaden our vision and understanding of the universe. Spacecraft, whether visiting other worlds or orbiting the Earth send us images and data collected from our outer atmosphere to the outer planets and beyond.
However, all this was only possible thanks to the incredibly rapid development of technology in recent years. Only then, could the essential resources for the construction of the current generation of spaceships be developed.
So, let us talk a little about some of the most important of space exploration’s tools and its greatest discoveries in this series, called Astronomy Tech.
In this first post, let's get to know Cassini-Huygens a bit better. It is a joint mission between NASA, the European Space Agency (ESA) and the Italian Space Agency (ASI), whose primary objective is uncovering the secrets of Saturn, including its rings and moons.
On October 15th, 1997, the Cassini-Huygens spacecraft – composed of NASA's Cassini orbiter and the ESA's Huygens probe – was launched, beginning a long and complex seven-year journey that included gravitational slingshot manoeuvres around Venus, Earth and Jupiter. After arriving at its destination, the mother ship, Cassini, began its main objective of exploring Saturn, whilst the Huygens probe was released and landed on Titan – Saturn's largest moon and the second largest in the Solar System, after Jupiter's moon Ganymede.
The spacecraft's name is a tribute to the Italian astronomer Jean-Dominique Cassini (1625-1712), discoverer of the Saturnian satellites Iapetus, Rhea, Tethys and Dione. In 1675 he discovered what is known today as the 'Cassini Division', the narrow gap separating Saturn's A and B rings. Christiaan Huygens (1629-1695) was a Dutch scientist who first described Saturn's rings and, in 1655, discovered the moon Titan.
The Cassini spacecraft carries a set of 12 instruments on board. Some of them work in ways similar to our own senses, but they are far more sensitive.
Cassini can "see" in wavelengths of light that the human eye cannot, and its instruments can "feel" things about magnetic fields and tiny dust particles that no human hand could detect. This means that Cassini can, for example, 'see the temperature' of the objects it observes.
The magnetic field and particle detectors take direct sensing measurements of the environment around the spacecraft. These instruments measure magnetic fields, mass, electrical charges and densities of atomic particles. They also measure the quantity and composition of dust particles, the saturation of plasma (electrically charged gases), and radio waves.
Exploring the Ringed Planet
The long-awaited return to Saturn – which hadn't been visited by any spacecraft since Voyager 2's flyby in 1981 – happened in July 2004. Since then, Cassini has made great discoveries about the Saturnian system and taken some terrific pictures, like the one below.
A few days after reaching Saturn, Cassini released the Huygens probe to land on Titan. On January 14, 2005, during its descent, six instruments analysed Titan's atmosphere. According to the returned data, Titan has a nitrogen-rich atmosphere. The probe also confirmed that Titan's orange colour is due to the presence of hydrocarbons, formed when sunlight breaks down the abundant methane molecules within the atmosphere.
These results have given scientists a glimpse of what Earth might have been like before life evolved. They now believe Titan possesses many similarities to the Earth, including lakes, rivers, channels, dunes, rain, snow, clouds, mountains and possibly volcanoes.
Isn’t over yet; every day, it sends us vast amounts of data back to astronomers allowing them to resolve and answer questions about Saturn and our own planet.
240 miles above your head a 420 tonne satellite orbits the Earth at 17000mph. It has been there, albeit in various states of construction, for 14 years, and for the last 11 of those it has been continuously occupied.
The International Space Station is a feat of engineering like no other. Not only does it demonstrate our technical ability to construct, launch, and maintain a permanent presence in space, but also our ability to coordinate the work of five different space agencies and their operations all over the planet.
But the journey from its conception has not been an easy one; the ISS was born out of three separate national programmes: NASA's Freedom station, proposed in the early '80s as a response to the Soviet space stations Mir and Salyut, the Russian (formerly Soviet) Mir-2 project designed as a replacement for the aging Mir station, and the European Columbus space station project.
Budgetary constraints brought on by post-Cold War political changes made it increasingly clear that no single national programme was going to create a fully functioning scientific outpost. Instead the suggestion to combine the three programmes into a single international one was put forward and agreed in 1993 by US Vice-President Al Gore and Russian Prime Minister Viktor Chernomyrdin.
The first component, the Russian Zarya cargo block originally intended for the Mir-2 station was launched in 1998, and since then the station has expanded, first with the addition of connecting and services modules such as NASA’s Unity and RKA’s Zvezda, and later with more specialist modules such as ESA’s Columbus laboratory and the Cupola observation module, the largest window in space. In total the ISS consists of fifteen pressurised modules, with one more, Russian research laboratory Nauka still to be added. They comprise laboratories, docks and airlocks, and living areas, and their combined volume is just less than 1,000 cubic metres.
That all of these modules fitted together perfectly is a success story in itself. Many had not been built when the first pieces were launched, and for most their mating in orbit was the first time they were put together. Though there have been a few minor problems, they have always been resolved quickly, and at no point has the station ever had to be evacuated.
The station’s unique conditions have allowed a large variety of experiments to be performed, many of which would be impossible on Earth. Research is being done into how structures such as crystals and organic cells form and develop outside the influence of the Earth’s gravity. NASA is also taking the opportunity to do closer studies on the effects of prolonged exposure to microgravity on astronauts and the possible implications on future manned missions to the Moon, asteroids, or Mars.
Until the end of the shuttle program in August of last year, crew and supplies were transported by a variety of means including the space shuttles, and the Soyuz and Progress spacecraft. The three-person Soyuz craft operated by Roscosmos, the Russian Federal Space Agency, is now the only method of sending new crews to the station.
Each contributory nation retains ownership of and responsibility for the components that it added. This responsibility extends to the disposal of the station when it reaches the end of its operational life, which the current time frame places somewhere in the 2020s, depending on whether and for how long its decommissioning is postponed after the initial 2020 date. Given the huge amount of money that has been invested in the station as well as the later than expected completion date, it is very likely that the ISS’s operational life will be extended some way beyond that deadline. By that point it is also expected that commercial space ventures will play a much larger role in the life and upkeep of the station, so they too may play some role in its end-of-life decisions.
The Orion nebula is the closest region of large scale star formation to Earth sitting just 1340 light years from where you are reading this post.
The nebula is in the process of birthing the next generation of stars, with many still hidden from peering eyes, cocooned within the clouds from which they are forming. That's in the visible spectrum, at least: using infrared observations we can look through the obscuring dust as if it weren't there at all.
This is exactly what astronomers using the Spitzer and Herschel space telescopes have done to produce this gorgeous image:
The rainbow effect is due to the combination of different sets of observations made through different filters. By combining the individual images, the composite can reveal the nebula in stunning detail, with each colour displaying a different wavelength of infrared radiation. Using two telescopes also has advantages: Spitzer is designed to observe at shorter wavelengths than Herschel, so by combining the two sets of data astronomers get a more complete view of what is going on.
In this case the data revealed something very unusual indeed. Several of the young protostars have been flickering wildly, with their brightness fluctuating by as much as 20% in just a few weeks. Based on the cool temperatures of the material involved, the fluctuations had to occur far from the hot regions near the growing star, but material that far out should spend years or even centuries in a slowly decaying orbit before accreting onto the star's surface.
Currently the explanation for how such a process could be so drastically accelerated is still up for debate, though there are several suggestions. The material may not be evenly distributed around the star, with some regions being more densely occupied than others; that may allow some of the denser clumps or filaments to collide with an inner, warmer shell of material, causing the flare-ups. It could also be caused by material piling up at the edge of the inner disk and so casting a shadow on the outer disk.
Other assembly systems
There are a wide range of techniques that facilitate cloning, and they all have their strengths and weaknesses. Here we compare the different techniques to the Plug 'n' Play assembly standard. You can read more about the different techniques by clicking on the menu in the left side.
Standard BioBrick Assembly
The Standard Assembly of BioBricks was first developed by Tom Knight, and has subsequently been modified by other scientists to overcome some of its hurdles. Only two parts, each larger than 200 bp, can be assembled by Standard Assembly in each cycle. Furthermore, the parts should preferably differ in size by at least 500 bp from each other, as well as from the backbone. The system requires one of the parts to be in the destination vector from the beginning. The assembly can be performed with either a suffix insertion (insertion of the added part behind the existing part) or a prefix insertion (insertion of the added part in front of the existing part).
The way it's done
The Standard Assembly makes use of the restriction recognition sites of four restriction enzymes: EcoRI and XbaI, located upstream of the BioBrick, and SpeI and PstI, located downstream. The BioBricks themselves must not contain any of these four restriction recognition sites. The sites are used to cut out the BioBrick to be inserted, producing sticky ends, and to open the receiving plasmid, which likewise produces sticky ends.
The resulting insert and opened vector must be purified so that unwanted fragments are removed. The cut insert and plasmid are then mixed under conditions that allow the sticky ends to hybridize. Subsequently, a ligation is performed to re-form a circular plasmid that can be transformed into E. coli cells.
Difference between Plug'n Play assembly and Standard Assembly
A major disadvantage of Standard Assembly is the need for restriction digestions and ligations, and for site-directed mutagenesis if additional restriction recognition sites are present on the plasmid. The limitation of assembling only two parts at a time also makes Standard Assembly much more time consuming. Furthermore, the scars left by the assembly make it impossible to create fusion proteins with Standard Assembly.
3A (3 antibiotic) assembly is a method for assembling two BioBrick parts at a time. 3A assembly relies on a three-way ligation (between the two parts and the backbone vector); in this it differs from Standard Assembly, which uses a two-way ligation between a part and a part-plus-vector. 3A assembly was designed to make gel purification of the digested parts unnecessary; instead, antibiotic selection is used to eliminate unwanted background. The parts to be assembled can either be in two plasmids or generated by PCR.
The way it's done
The process of 3A assembly resembles the Standard Assembly method. Just like Standard Assembly, 3A uses the restriction recognition sites of EcoRI, XbaI, SpeI and PstI flanking the BioBricks and the destination vector.
The two parts and the destination plasmid are digested so that the resulting sticky ends are compatible with the desired assembly of the vector. The restriction digestion and ligation have to be executed in two separate steps. After transformation into E. coli cells, positive/negative selection is performed.
Difference between Plug 'n' Play assembly and 3A assembly
The 3A method is based on restriction enzyme digestion and ligation, which means that illegal recognition sites must first be eliminated. The method also leaves a scar between the BioBricks. Another disadvantage of the 3A assembly is the requirement for plasmids carrying three different antibiotic markers. Furthermore, the 3A method can only combine two parts at a time.
Gibson Assembly is an isothermal, single-reaction method for assembling multiple overlapping DNA fragments. The method was developed by Daniel G. Gibson at the J. Craig Venter Institute in 2009.
The way it's done
The T5 exonuclease removes bases from the 5' ends of the double-stranded DNA, leaving single-stranded 3' overhangs. These ssDNA overhangs anneal to each other and are used to assemble the DNA fragments; a polymerase then fills the gaps and a ligase seals the nicks, all in a single isothermal reaction.
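At the sequence level, the chew-back-and-anneal step amounts to joining fragments on a shared terminal overlap. A toy sketch, using a made-up 11 bp overlap for brevity (real Gibson overlaps are typically much longer, on the order of 20-40 bp):

```python
def join_on_overlap(frag1: str, frag2: str, min_overlap: int = 10) -> str:
    """Join frag2 onto frag1 using the longest shared terminal overlap
    (3' end of frag1 == 5' end of frag2). Raises if none is found."""
    for n in range(min(len(frag1), len(frag2)), min_overlap - 1, -1):
        if frag1.endswith(frag2[:n]):
            return frag1 + frag2[n:]
    raise ValueError("no terminal overlap of at least %d bp" % min_overlap)

if __name__ == "__main__":
    # Hypothetical fragments sharing the 11 bp overlap TTTAAACCCGG
    a = "ATGCCCGGGTTTAAACCCGG"
    b = "TTTAAACCCGGACGTACGT"
    print(join_on_overlap(a, b))  # ATGCCCGGGTTTAAACCCGGACGTACGT
```

This ignores the biochemistry (strandedness, reverse complements, mispriming) and only illustrates why unique overlaps let many fragments assemble in one pot.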
Difference between Plug'n Play assembly and Gibson Assembly
One disadvantage of the Gibson assembly is that the primers are more expensive, since they must be flanked with roughly 40 extra nucleotides to create the overlaps. Gibson assembly is also less specific than USER cloning, where the strand is nicked at a defined uracil instead of being chewed back from the fragment end.
The Gateway® cloning technology, provided by Invitrogen, is based on a site-specific recombination system of the lambda bacteriophage. The lambda phage can switch between the lytic and the lysogenic life cycle by means of enzymes called clonases. The recombination occurs between the phage DNA and the DNA of E. coli via specific recombination sequences denoted att sites (4). attP sites are found on the phage genome, and attB sites on the host bacterial genome. After recombination, the att sites are hybrid sequences containing sequence from both the phage and the host att sites, and are then called attL and attR (5). Gateway cloning enables the assembly of multiple DNA fragments. By taking advantage of the site-specific clonase enzymes and the att sites, the problems of conventional cloning methods, which use restriction enzymes and ligase, are avoided (6).
The way it's done
The Gateway technology utilizes four different recombination sites: attB, attP, attL and attR (4). attB sites always recombine with attP sites in a reaction mediated by the BP clonase, and attL sites recombine with attR sites mediated by the LR clonase. Furthermore, attB1 reacts only with attP1 and not with attP2, thereby maintaining the orientation of the transferred DNA fragment and the reading frame (5). In the Gateway® cloning system the final plasmid is obtained through two cloning steps. First, PCR fragments flanked with the appropriate attB sites and in the correct orientation have to be constructed, as shown in the figure below; these enter the first recombination, the BP reaction.
Difference between Plug'n Play assembly and Gateway assembly
The Gateway assembly relies on a kit from Invitrogen. Different entry clones and destination vectors are often needed, which requires that a large library be established first. The biggest difference between the two assembly systems, however, is speed: the Gateway assembly takes longer because it relies on two cloning and transformation steps before the final plasmid is obtained. Thus, the Gateway assembly is far more complex than Plug 'n' Play.
In-Fusion assembly is a method for assembling two or more parts, provided by Clontech. The assembly system can be semi-standardized by simple primer design rules, minimizing the time spent planning the assembly reactions.
The way it's done
The PCR fragments are assembled using at least 15 bp of homology on both ends: the forward primer of one PCR fragment must be homologous to the reverse primer of the next, and so forth. The assembly scheme can be seen below; it works with either up-stream or down-stream PCR amplification of the vector and gene.
Afterwards the PCR fragments can be fused into a pre-engineered vector containing an antibiotic resistance gene, through single-stranded regions created by the In-Fusion enzyme.
Difference between Plug'n Play and In-Fusion assembly
Just like the Plug 'n' Play reaction, the In-Fusion reaction is fast and highly efficient. However, the main disadvantage of the In-Fusion assembly system is its cost.
References
1. http://partsregistry.org/Assembly:Standard_assembly (website accessed 21.09.2011).
2. http://openwetware.org/wiki/Synthetic_Biology:BioBricks/3A_assembly (website accessed 21.09.2011).
3. Gibson, D.G. et al., 2009. Enzymatic assembly of DNA molecules up to several hundred kilobases. Nature Methods, vol. 6, no. 5, pp. 343-45.
4. http://tools.invitrogen.com/downloads/gateway-multisite-seminar.html (website accessed 21.09.2011).
5. Hartley, J.L., Temple, G.F. & Brasch, M.A., 2000. DNA Cloning Using In Vitro Site-Specific Recombination. Genome Research, vol. 10, no. 11, pp. 1788-95.
6. Sasaki, Y. et al., 2004. Evidence for high specificity and efficiency of multiple recombination signals in mixed DNA cloning by the Multisite Gateway system. Journal of Biotechnology, vol. 107, no. 3, pp. 233-43.
7. Sleight, S.C., Bartley, B.A., Lieviant, J.A. & Sauro, H.M., 2010. In-Fusion BioBrick assembly and re-engineering. Nucleic Acids Research, vol. 38, no. 8, pp. 2624-36.
Telescopes had been built to look at the stars, and astronomers weren’t going to ignore the closest example — our Sun.
Usually, telescopes are built to see objects that are too faint and far away to be easily visible. They’re constructed with giant mirrors or lenses so they can collect more light than the human eye can see on its own.
Telescopes designed to see the Sun, or “solar telescopes,” have the opposite problem — their target emits too much light. The Sun is extremely bright, and astronomers need to be able to filter out much of the light to study it. This means that the telescope itself doesn’t have to be extremely powerful; instead, the instruments attached to it do the heaviest work.
Solar telescopes are ordinary reflecting telescopes with some important changes. Because the Sun is so bright, solar telescopes don’t need huge mirrors that capture as much light as possible. The mirrors only have to be large enough to provide good resolution. | <urn:uuid:c07a99f3-d567-46ac-8fc0-ce0f5b1447b5> | 4.375 | 206 | Knowledge Article | Science & Tech. | 48.183147 |
Genuine science is always based on reality, never dogma. And there are two issues regarding reality:
- Nature gives consistent answers based on empirical analysis. So those answers will tend to be reliable.
- Human beings are fallible. That means they make mistakes and do not always make precise measurements.
A contradiction? Not really. Look at these two charts:
The light blue areas in the first graph, and the grey areas in the second, are uncertainties that result from there being fewer measurements in earlier time periods than in later ones. There were far fewer tide gauges in the late 19th century than in the late 20th century, and far fewer proxies extending back to the Middle Ages than ones covering only modern times. And in both charts, the most recent data are the most precise: satellite measurements of sea level and direct thermometer readings of temperature.
Scientists take pride in their honesty, so they allow for errors and uncertainty in their data, even while attempting to increase the accuracy and detail of their measurements. Even if the actual sea levels or temperatures centuries ago were not exactly known, we can still give approximate estimates that are better than knowing nothing at all.
Contrast these two charts with this one:
Where is the uncertainty? This chart seems to depict EXACT measurements of sea levels from hundreds of years ago, which is really impossible! But those who are scientifically illiterate (like many members of the British House of Lords, I would guess), would not realize that!
Which explains why I commented on this chart and others here:
How the hell is it that denialists are willing to accuse the makers of the “hockey stick” graphs of faking data, yet they never noticed anything from their own people like THAT?!
Ironically, when you have no uncertainty allowed for in the data, THAT is a sign of fakery! | <urn:uuid:61c95ae1-4cee-4014-acdb-32ebdea76084> | 2.96875 | 387 | Personal Blog | Science & Tech. | 41.783807 |
Adding DNS Record
Resource records define data types in the Domain Name System (DNS). The resource records specified by RFC 1035 are stored internally in binary format for use by DNS software, but they are sent across the network in text format during zone transfers. The following record types are available in Plesk:
- A (Address). Used for storing an IP address (specifically, an IPv4 32-bit address) associated with a domain name.
- NS (Authoritative name server). Specifies a host name (which must have an A record associated with it), where DNS information can be found about the domain name to which the NS record is attached. NS records are the basic infrastructure on which DNS is built; they stitch together distributed zone files into a directed graph that can be efficiently searched. Defined in RFC 1035.
- CNAME (Canonical name for a DNS alias). Note that if a domain name has a CNAME record associated with it, it cannot have any other record types. In addition, CNAME records should not point to domain names that themselves have associated CNAME records, so CNAME provides only one layer of indirection.
- MX (Mail Exchanger). Each MX record specifies a domain name (which must have an A record associated with it) and a priority; a list of mail exchangers is then ordered by priority when delivering mail. MX records provide one level of indirection in mapping the domain part of an email address to a list of host names which are meant to receive mail for that domain name. Critical part of the infrastructure used to support SMTP email.
- PTR (Domain name pointer). Provides a general indirection facility for DNS records. Most often used to provide a way to associate a domain name with an IPv4 address in the IN-ADDR.ARPA domain.
- TXT (Text string). Arbitrary binary data, up to 255 bytes in length.
- AXFR (Asynchronous Full Transfer Zone).
- SRV (service) records are a generalization and expansion of the features provided by MX records. Where MX records work only for mail delivery and provide "failover" via the Priority value, SRV records add support for load balancing (via the Weight value) and port selection (via the Port value). This record type is available only in Plesk for Windows via API RPC v.126.96.36.199 and later.
Note: You can add a DNS record for a specified domain or to the DNS zone template. On creation of a new domain, Plesk automatically generates the zone file for the domain or domain alias based on the server template.
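The priority ordering described for MX records can be illustrated with a short sketch: mail servers attempt delivery to the exchanger with the lowest Priority value first, falling back to higher values. The hostnames and priorities below are hypothetical.

```python
# Each MX record pairs a mail exchanger hostname with a priority value;
# lower numbers are more preferred (tried first) when delivering mail.
mx_records = [
    ("backup.example.com",   20),
    ("mail.example.com",     10),
    ("mail2.example.com",    10),
    ("fallback.example.net", 30),
]

def delivery_order(records):
    """Return hostnames sorted by MX priority, most preferred first.
    Python's sort is stable, so equal priorities keep their input order."""
    return [host for host, prio in sorted(records, key=lambda r: r[1])]

if __name__ == "__main__":
    print(delivery_order(mx_records))
    # ['mail.example.com', 'mail2.example.com', 'backup.example.com', 'fallback.example.net']
```

Real mail transfer agents additionally randomize among equal-priority exchangers for load sharing, which is exactly the gap the SRV Weight value was designed to fill.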
Miscibility is the property of liquids to mix in all proportions, forming a homogeneous solution. In principle, the term applies also to other phases (solids and gases), but the main focus is usually the solubility of one liquid in another. Water and ethanol, for example, are miscible because they mix in all proportions.
By contrast, substances are said to be immiscible if a significant proportion does not form a solution. Otherwise, the substances are considered miscible. For example, butanone is significantly soluble in water, but these two solvents are not miscible because they are not soluble in all proportions.
Organic compounds
In organic compounds, the weight percent of hydrocarbon chain often determines the compound's miscibility with water. For example, among the alcohols, ethanol has two carbon atoms and is miscible with water, whereas octanol with a C8H17 substituent is not. Octanol's immiscibility leads it to be used as a standard for partition equilibria. This is also the case with lipids; the very long carbon chains of lipids cause them almost always to be immiscible with water. Analogous situations occur for other functional groups. Acetic acid (CH3COOH) is miscible with water, whereas valeric acid (C4H9COOH) is not. Simple aldehydes and ketones tend to be miscible with water, because a hydrogen bond can form between the hydrogen atom of a water molecule and the unbonded (lone) pair of electrons on the carbonyl oxygen atom.
Immiscible metals are unable to form alloys. Typically, a mixture will be possible in the molten state, but upon freezing the metals separate into layers. This property allows solid precipitates to be formed by rapidly freezing a molten mixture of immiscible metals. One example of immiscibility in metals is copper and cobalt, where rapid freezing to form solid precipitates has been used to create granular GMR materials.
There are examples of immiscible metals in the liquid state. One with industrial importance is that liquid zinc and liquid silver are immiscible in liquid lead, while silver is miscible in zinc. This leads to the Parkes process, an example of liquid-liquid extraction, whereby lead containing any amount of silver is melted with zinc. The silver migrates to the zinc, which is skimmed off the top of the two phase liquid, and the zinc is boiled away leaving nearly pure silver.
Determination
Miscibility of two materials is often determined optically. When the two miscible liquids are combined, the resulting liquid is clear. If the mixture is cloudy the two materials are immiscible. Care must be taken with this determination. If the indices of refraction of the two materials are similar, an immiscible mixture may be clear and give an incorrect determination that the two liquids are miscible. | <urn:uuid:1d441e9c-4dcf-42a3-b490-ce3f333c30d7> | 3.59375 | 627 | Knowledge Article | Science & Tech. | 31.508279 |
Principles of Statistical Glass Modeling
On this page statistical analysis for the calculation of glass properties is explained (see also overview article at Wikipedia). The topic may be divided as follows:
1) Linear regression,
2) Non-linear regression,
3) Special applications.
1) Linear regression
Linear regression is characterized by linearity in the coefficients. However, the equation does not need to be linear in the variables, as seen in Figure 1 and described in a linear regression tutorial (PDF, 0.4 MB).
Figure 1: Linear regression according to the equation z = a + b·x + c·x² + d·x³ + e·y + f·y² + g·y³.
The equation is linear in the coefficients a, b, c, d, e, f, and g, but not linear in the variables x and y.
It is possible to describe almost all glass properties using linear regression, except liquidus temperatures, phase separation, and specialized chemical durability tests such as vapor hydration. For further information, refer to the linear regression tutorial (PDF, 0.4 MB).
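As a sketch of "linear in the coefficients", a one-variable slice of the Figure 1 equation, y = a + b·x + c·x², can be fitted by ordinary least squares even though it is quadratic in x. The data below are synthetic, and the normal equations are solved with plain Gaussian elimination rather than a statistics package:

```python
# Least-squares fit of y = a + b*x + c*x**2: nonlinear in x,
# but linear in the coefficients (a, b, c), so linear regression applies.
def fit_quadratic(xs, ys):
    X = [[1.0, x, x * x] for x in xs]                 # design matrix rows
    XtX = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(3)]
           for i in range(3)]                          # normal equations X^T X
    Xty = [sum(X[r][i] * ys[r] for r in range(len(X))) for i in range(3)]
    A = [row[:] + [rhs] for row, rhs in zip(XtX, Xty)] # augmented matrix
    for col in range(3):                               # elimination w/ pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    beta = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                                # back substitution
        beta[i] = (A[i][3] - sum(A[i][j] * beta[j]
                                 for j in range(i + 1, 3))) / A[i][i]
    return beta  # [a, b, c]

if __name__ == "__main__":
    xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
    ys = [2.0 + 1.5 * x - 0.25 * x * x for x in xs]   # exact, noise-free data
    print(fit_quadratic(xs, ys))   # ≈ [2.0, 1.5, -0.25]
```

With noise-free data the fit recovers the generating coefficients to floating-point precision; with real property data the same machinery returns the least-squares estimates.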
2) Non-linear regression
Glass liquidus temperatures can be modeled using disconnected peak functions. For simplicity, second-order polynomials are used as peak functions here, but in principle other functions, such as Gauss curves or superimposed step functions ("neurons") as used in neural networks, can be applied as well. The peak functions can be based on the quadratic equation, which may be expressed two-dimensionally as y = ax² + bx + c, with the coefficient a being negative or zero and no limitations on the coefficients b and c. The independent variable x is substituted by the component concentration in a binary glass, and y represents the liquidus temperature in °C. Because a large number of experimental data are available for binary silicate glass systems, cubic equations may be used for modeling in some cases. In multi-component systems the two-dimensional quadratic equation must be extended to as many dimensions as the glass has components. Several disconnected peak functions are fitted to the liquidus surface ("landscape") by non-linear regression, for example using the Solver tool in Microsoft Excel. The results of the individual functions are not added; only the maximum of all functions is counted as the response, thereby enabling sharp eutectic minima:
Figure 2: Principle of disconnected peak functions.
Figures 3 to 5 show the application of disconnected peak functions to liquidus temperature modeling in the ternary system SiO2-Na2O-CaO based on 237 experimental data from 28 investigators, extracted from SciGlass:
Figure 3: Model initiation, peaks distributed evenly in composition space, R² = 0.09.
Figure 4: The peak functions are allowed to move, R² = 0.83.
Figure 5: Final liquidus surface in the system SiO2-Na2O-CaO, R² = 0.985, Error = 15°C.
More information about disconnected peak functions and their application to liquidus temperatures modeling in the six-component system SiO2-Na2O-CaO-Al2O3-MgO-K2O can be found in the proceeding: A. Fluegel: "Modeling of glass liquidus temperatures using disconnected peak functions"; Presentation at ACerS 2007 Glass and Optical Materials Division Meeting, Rochester, NY, USA.
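The max-of-peaks principle can be sketched with two made-up concave-down quadratic peaks; the coefficients are purely illustrative, not fitted values:

```python
# Two disconnected quadratic "peak" functions; the modeled liquidus
# temperature is the MAXIMUM over all peaks, not their sum, which is
# what allows a sharp (non-smooth) eutectic minimum where peaks cross.
def peak1(x):   # hypothetical primary phase field 1
    return 1500.0 - 800.0 * (x - 0.2) ** 2

def peak2(x):   # hypothetical primary phase field 2
    return 1400.0 - 600.0 * (x - 0.8) ** 2

def liquidus(x):
    return max(peak1(x), peak2(x))

if __name__ == "__main__":
    for x in (0.0, 0.2, 0.5, 0.8, 1.0):
        print("x = %.1f  T_liquidus = %7.1f" % (x, liquidus(x)))
```

Summing the peaks instead would smear the crossing into a smooth dip; taking the maximum reproduces the kink that real eutectics show.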
3) Special applications
Statistical analysis may be applied to thermodynamic glass modeling using chemical equilibrium constants as coefficients.
Another special application represents variable database search modeling, where the model is automatically derived from similar glass compositions in a (possibly proprietary) database. For more information please use the contact page. | <urn:uuid:38747492-6129-469e-9766-b2936359a765> | 3.234375 | 793 | Knowledge Article | Science & Tech. | 33.480281 |
There are thousands of endangered species across our planet, feared to be on the road to extinction. These species may be four-legged travellers of the earth, winged birds or marine life. Whatever their habitat, endangered species are at risk either because of human activities in the area or because of a change of climate in their habitat, which could be due to global warming.
Dinosaurs lived millions of years ago; perhaps if they were not extinct, humans and other life forms of today would have found a way to live together with them on planet Earth. But nature's course made sure this did not happen, and now we relive that era only in animated science fiction. Soon many of the species that inhabit the earth today might go extinct, and our next generation will forever wonder whether they could have seen one of those animals alive.
Therefore we have put together a list of the 10 most endangered species of animals. And as said earlier, there are many, many more, but some have already got the world's attention, so now it's time to highlight the less famous ones.
1. The Ivory-Billed Woodpecker
Amongst our list of 10 most endangered species, the Ivory-billed Woodpecker is believed to be the most endangered; in fact, it is so endangered that it may already be extinct. It was found in the southeastern US, and sightings of this large woodpecker were also reported in Cuba. A rescue mission started right away, and rumors persist that a small population may survive in Florida or Arkansas, but no confirmed sighting has been reported.
2. The Amur Leopard
The Amur Leopards that live in the snowy and remote areas of northern China and Korea were already few and rare. Now, due to road building, over-logging, encroaching civilization and illegal hunting, this beautiful species also faces extinction. In fact, a recent survey revealed only 14 - 20 adults confirmed to be living in a forest.
3. The Javan Rhinoceros
If you thought that, of all the rhinoceros species, the Black Rhino (a.k.a. Diceros bicornis) was the endangered one, think again. According to a survey, it is estimated that only 40 - 60 Javan rhinos remain, in the Ujung Kulon National Park on the western part of an island in Indonesia. Facing extinction because of its precious horn, this rare species came under protection; however, it still might not be saved because of its scarce mating population.
4. The Greater Bamboo Lemur
Madagascar is known for its population of lemurs; however, if the illegal hunting and habitat loss in that region continue, it is quite possible that the critically endangered Greater Bamboo Lemur will soon become history, with not a single one left alive.
5. The Northern Right Whale
The Mako Shark, the Bluefin Tuna and the Beluga Sturgeon are some members of marine life considered endangered. Here, however, we are going to talk of the Northern Right Whale, which, ironically, got its name from the very people who have been driving it to extinction: the whalers, who considered it the "right" whale to hunt. The Right Whale yields good quantities of oil, which is why it was hunted down; moreover, it floats when dead, which makes it easy to handle. It is protected now, but it is feared that extinction might not be prevented.
6. The Cross River Gorilla
This species of lowland gorilla is found in West Africa, and it is believed that only a few hundred remain. This atrocity is due to nothing other than illegal hunting, and that too for food. It has been listed as critically endangered, as its population has been decreasing constantly for the last 25 years.
7. Leatherback Sea Turtle
Swimming across the seas and oceans of the whole world, the Leatherback Sea Turtle is the largest turtle, and it comes ashore on subtropical beaches to lay its eggs. People keep hunting for its eggs, and the nesting beaches are being destroyed, which is leading the Leatherback Sea Turtle toward extinction. Pollution of sea and ocean waters is also killing the turtles.
8. The Amur Tiger
The Amur Tiger, like the Amur Leopard, was believed to be near extinction. Found in cold parts of China and Korea, its population fell to a horrid 40 - 50 but has rebounded to around 500 because the tiger has come under the protection of wildlife services. Even though its numbers have increased, it is still not out of danger.
9. The Chinese Giant Salamander
A unique thing about the Giant Salamander is that it is the world's largest amphibian, growing up to 6 feet in length. Although it has the advantage of laying almost 500 eggs in one go, and that too under the protection of the male, it can still go extinct. The sad part is that the cause will not be habitat loss but the fact that it is heavily hunted for its meat. Yes, it is true: it is widely eaten in many parts of China.
10. The Kakapo Parrot
Last, and certainly the species left in the smallest numbers, is the Kakapo parrot. It is the only parrot that remains flightless throughout its life and is also the heaviest of all parrots. Once common throughout the mainland of New Zealand, this endangered species was brought to its present state by predation by rats, cats and dogs. It is now found on only a few islands, and fewer than 150 are left.
You don't need to know the exact definitions of the terms used in calculus if you want to see how they are applied to the real world, because in real life nothing is known exactly (unless it comes in discrete values, like the number of pages in a book, in which case calculus does not apply to it exactly anyway). That is, when we measure a 2 foot length, we might know that it is 2.0 ft or 2.00 ft, but there is always a limit on the precision. Later if somebody comes along with a better measuring tool and declares it to be 2.003 ft, this does not contradict the earlier measurement, it just improves the precision.
In applying the equations of physics, we should keep the above limitations in mind. For example if x = yz, even if this equation is exact (and of course we will never know if we have an equation that describes the real world exactly), we are no better off than saying that x is approximately yz, since we cannot measure exactly.
You have, no doubt, run into Δ meaning "change of." In calculus, d means a very small Δ. So dx is a very small change of x. A mathematician is concerned with the exact definition of dy/dx. In practical applications, we can regard dy/dx as the ratio of two very small quantities and not worry about being exact, because there is no point to it.
The derivative: If y is a function of x, for example y = x³, then the derivative of y with respect to x is dy/dx, the slope of the graph of y vs x, with x on the horizontal axis. For this case, if you want dy/dx at x = 2, you could calculate dy as 2.01³ - 1.99³ for a dx of 0.02. Divide, and you will find dy/dx = 12. (The calculator says 12.0001. The "exact" value is 12, but in practice it doesn't matter.) This brings up the question: how small is very small? Well, that depends on how much precision is needed. Let's try dy = 2.5³ - 1.5³ divided by a dx of 1. We get dy/dx = 12.25, close enough to 12 for some purposes but not for others. Make life easier: instead of going from a little below 2 to a little above 2, go from 2 up to a little above 2: for y = x³ at x = 2, dy/dx = (2.0001³ - 2³)/0.0001 = 12.0006 ≈ 12.
You might as well learn the exponent rule: if y = x³, then dy/dx = 3x², in other words the original exponent times x raised to the power reduced by one. In our example, when x = 2, 3x² = 12. If y = 6x^1.5, then dy/dx = (1.5)(6)x^0.5. It works for any exponent. Note that the 6 is left unchanged. Test this rule with a calculator for a few cases (as was done in the paragraph above), and you will get the hang of it. If you want the reason behind the exponent rule: (a + b)ⁿ = aⁿ + naⁿ⁻¹b + n(n-1)aⁿ⁻²b²/2 + …. All we need are the first two terms if b is really tiny. Now consider y = xⁿ. Then dy/dx = [(x + dx)ⁿ - xⁿ]/dx = [xⁿ + nxⁿ⁻¹dx + … - xⁿ]/dx = nxⁿ⁻¹. If the terms left out bother you, make dx = 10⁻²⁰ or something and experiment with it. In math they are more theoretical, but the outcome is the same: those terms disappear.
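The calculator checks suggested above are easy to automate; this sketch repeats them for the exponent rule using a one-sided difference quotient:

```python
# Numerically approximate dy/dx as a ratio of two small quantities,
# then compare against the exponent rule d/dx (x**n) = n * x**(n-1).
def numeric_derivative(f, x, dx=1e-6):
    return (f(x + dx) - f(x)) / dx

if __name__ == "__main__":
    # y = x**3 at x = 2: the exponent rule gives 3 * 2**2 = 12
    print(numeric_derivative(lambda x: x ** 3, 2.0))        # ≈ 12.000006
    # y = 6 * x**1.5 at x = 4: the rule gives 1.5 * 6 * 4**0.5 = 18
    print(numeric_derivative(lambda x: 6 * x ** 1.5, 4.0))  # ≈ 18
```

Shrinking dx pushes the approximation toward the exact value, exactly as the text argues, until floating-point round-off eventually takes over.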
Two other formulas that you might as well learn: the derivative of the sine is the cosine, and the derivative of the cosine is minus the sine. To be more precise and more general, if kx is in radians and y = Asin(kx), then dy/dx = Akcos(kx). Similarly, if y = Acos(kx), then dy/dx = -Aksin(kx). If kx is not in radians, convert it to radians before finding the derivative.
The second derivative: This is simply the derivative of a derivative. If y = x³, dy/dx = 3x², as we have seen, and the derivative of 3x² is 6x. Rather than the awkward symbol d(dy/dx)/dx, we label the thing d²y/dx². The slope of the y vs x graph is dy/dx, and the change of slope divided by the change of x is d²y/dx². Similarly, if y is a function of t, dy/dt (the slope of the y vs t graph) is the velocity and d²y/dt² is the rate of change of v: the acceleration.
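A central difference gives the second derivative numerically; for y = x³ at x = 2 it should return about 6x = 12:

```python
# Second derivative via the central difference
# (f(x+dx) - 2 f(x) + f(x-dx)) / dx**2.
def second_derivative(f, x, dx=1e-4):
    return (f(x + dx) - 2 * f(x) + f(x - dx)) / dx ** 2

if __name__ == "__main__":
    print(second_derivative(lambda x: x ** 3, 2.0))   # ≈ 12 (= 6 * 2)
```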
Partial derivatives: When y is a function of two or more variables, a partial derivative is the same as a derivative but with the other variables held constant. For example, if y = ax³ + bxt + ct², with a, b and c constant, then the partial derivative of y with respect to x, labeled ∂y/∂x, is 3ax² + bt, and ∂y/∂t is bx + 2ct.
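These partial derivatives can be checked numerically by nudging one variable while literally holding the other constant; the values of a, b, c below are arbitrary examples:

```python
# Partial derivatives of y = a*x**3 + b*x*t + c*t**2, compared against
# the closed forms 3*a*x**2 + b*t and b*x + 2*c*t.
a, b, c = 2.0, -1.0, 0.5          # arbitrary example constants

def y(x, t):
    return a * x ** 3 + b * x * t + c * t ** 2

def partial_x(x, t, dx=1e-6):
    return (y(x + dx, t) - y(x, t)) / dx    # t held constant

def partial_t(x, t, dt=1e-6):
    return (y(x, t + dt) - y(x, t)) / dt    # x held constant

if __name__ == "__main__":
    x0, t0 = 1.5, 2.0
    print(partial_x(x0, t0), 3 * a * x0 ** 2 + b * t0)   # both ≈ 11.5
    print(partial_t(x0, t0), b * x0 + 2 * c * t0)        # both ≈ 0.5
```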
Partials are useful for waves. y = Asin(kx - ωt) is a plane wave traveling in the +x direction with speed ω/k. For waves on a string, sound, light, and others, we find that ∂²y/∂x² is proportional to ∂²y/∂t², and from this we find the speed of the wave.
If y = Asin(kx - ωt), then ∂²y/∂x² = -Ak²sin(kx - ωt) and ∂²y/∂t² = -Aω²sin(kx - ωt), so for example when Newton's 2nd law tells us that a string with tension F and mass/length μ obeys F∂²y/∂x² = μ∂²y/∂t², we easily show that the wave speed is (F/μ)^(1/2). This is a powerful tool.
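The wave-speed argument can be verified numerically: finite differences of y = Asin(kx - ωt) should give a ratio of the time and space second partials equal to ω²/k², the wave speed squared. The values of A, k, ω below are arbitrary:

```python
import math

# Plane wave y = A*sin(k*x - w*t); finite differences should show
# d2y/dt2 / d2y/dx2 = w**2 / k**2 = (wave speed)**2.
A, k, w = 0.01, 3.0, 12.0          # arbitrary amplitude, wavenumber, frequency

def y(x, t):
    return A * math.sin(k * x - w * t)

def second_diff(f, h=1e-4):
    """Central-difference second derivative of f at 0."""
    return (f(h) - 2 * f(0.0) + f(-h)) / h ** 2

if __name__ == "__main__":
    x0, t0 = 0.3, 0.1              # any point away from a node of the wave
    y_xx = second_diff(lambda dx: y(x0 + dx, t0))
    y_tt = second_diff(lambda dt: y(x0, t0 + dt))
    print(y_tt / y_xx, (w / k) ** 2)   # both ≈ 16, the wave speed squared
```

With these numbers the wave speed is ω/k = 4, so a string with F/μ = 16 would carry exactly this wave.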
The integral: This is just a sum. ∫x²dx, for example, is a bunch of x² values, each multiplied by a very small dx, then all added together. To show specifically how to do it, let's integrate the above integral from x = 1 to x = 2 (these are called the limits) on a calculator, using dx = 0.1. The 1 to 2 interval is split into 10 parts: 1 to 1.1, 1.1 to 1.2, …, 1.9 to 2. Using the x² value at the center of each interval (1.05², 1.15², …), calculate the sum of x²dx and you will find that it is 2.3325. (The exact value of ∫₁² x²dx is 7/3 = 2.33333…, so there is a very small error.) On a computer, using a spreadsheet or a programming language, you could divide it into 1000 parts and use dx = 0.001. You will then find that the integral is 2.33333…. The moral of the story is that the smaller the dx, the smaller the error. (By the way, if you tell your math teacher about dx = 0.1 or 0.001, be prepared to call 911. He or she might have a heart attack, or (s)he might inflict bodily injury on you.)
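The midpoint sum described here is a few lines of code:

```python
# Midpoint-rule approximation of the integral of f from a to b,
# splitting [a, b] into n strips of width dx and sampling each center.
def midpoint_integral(f, a, b, n):
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

if __name__ == "__main__":
    print(midpoint_integral(lambda x: x * x, 1.0, 2.0, 10))     # ≈ 2.3325
    print(midpoint_integral(lambda x: x * x, 1.0, 2.0, 1000))   # ≈ 2.33333, near 7/3
```

The n = 10 run reproduces the calculator result in the text, and the n = 1000 run shows the error shrinking as dx does.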
In math-speak, f(x) means function of x: for any x, there is some value of f(x). The sketch below shows a graph of y = some function of x, and when you integrate from A to B, you add up the areas of those vertical strips to get the area bounded by the function on top, the x axis on the bottom, the vertical line x=A on the left, and the vertical line x=B on the right.
To see what the integral is theoretically, let u(x) = the area under the curve starting at some point to the left of A and ending at any x; then clearly, the area that we want, the integral from A to B, is u(B) - u(A). So we need to find u(x). To do this, note that as you add the area terms, your total increases by ydx each time, so du = ydx, or du/dx = y. So this is the opposite of the derivative: if y = x², for example, increase the exponent by one and divide by the new exponent to get x³/3. In general u(x) = x³/3 + C (C = a constant), but when we calculate u(B) - u(A), the constant drops out, so forget about it. The integral of x²dx from x = 1 to x = 2 is 2³/3 - 1³/3 = 7/3.
Here is an important integral:
∫d(cabin)/cabin = natural log cabin.
(In other words, ∫dx/x = ln x.)
Now trade in this drivel by hitting your browser's back button.
Comments, questions: fredrick.gram at tri-c.edu (but remove "at" and spaces and insert @)
Mapping the Atom
Ernest Rutherford had blazed a trail to the heart of the atom. Scientists now knew about the positively charged nucleus containing protons, and the focus of nuclear physics shifted to the behavior and structure of electrons. Throughout the 1920's, Danish physicist Niels Bohr worked with other scientists to develop a new way of looking at electrons.
Niels Bohr was born in Denmark, and while he was at university he did research on the possible arrangements of electrons on the surface of atoms of metals. Bohr studied the arrangements of electrons around the nucleus and needed an explanation of what would prevent the electrons from falling into the nucleus instead of circling around it. He found the answer in the idea of the quantum, a fixed unit of energy.
As a body is heated, it gives out light. As the temperature increases, the light coming from it not only has more energy but its wavelengths grow shorter. This is why a piece of metal, when heated, changes from red to yellow and then to white. Our eyes perceive color based on the wavelengths of light; red has the longest wavelength, followed by yellow, blue and white. Since the 19th century, scientists had known that the light from a glowing material was actually a mixture of wavelengths, and they thought these were given out continuously. What the quantum physicists found was that the radiation was given out in spurts, each called a quantum.
Using this idea of the quantum, Bohr found an explanation of why
electrons don't spiral down and fall into the atom's nucleus.
He said that an electron's angular momentum (a measure of its orbital
motion) must always be equal to an integer (any whole number) times
h (Planck's constant) divided by 2π. Because the energy
determines the location of the orbit, this in turn meant that the electron
could occupy only a fixed set of orbital paths. Unlike a car, the electron
cannot change to a faster or slower lane. When light, heat, or some other
kind of energy hits an atom, it can cause an electron to instantly jump
from one orbit to another farther away. Similarly, an electron can jump
back into an inner orbit, sending out a quantum of energy.
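Bohr's picture can be made concrete with a few lines of Python (an illustration added here, not part of the original essay; the 13.6 eV and 1240 eV·nm constants are standard rounded values for hydrogen):

```python
# Bohr's result for hydrogen: each allowed orbit n has a fixed energy
# E_n = -13.6 eV / n^2, so a jump between orbits emits (or absorbs) one
# quantum of light whose energy is the difference between the two levels.
RYDBERG_EV = 13.6        # ionization energy of hydrogen, in electron-volts
HC_EV_NM = 1240.0        # Planck's constant times c, in eV * nanometres

def level_energy(n):
    """Energy of the nth allowed orbit (negative means bound to the nucleus)."""
    return -RYDBERG_EV / n**2

def photon_wavelength_nm(n_high, n_low):
    """Wavelength of the quantum emitted when an electron drops n_high -> n_low."""
    energy = level_energy(n_high) - level_energy(n_low)   # positive, in eV
    return HC_EV_NM / energy

# The n=3 -> n=2 jump gives hydrogen's red spectral line, near 656 nm.
h_alpha = photon_wavelength_nm(3, 2)
```

The computed wavelength for the 3 → 2 jump lands near 656 nm, the red line that made Bohr's model so convincing for hydrogen.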
While Bohr's model worked for the hydrogen atom, it could not accurately
predict more complicated ones. Two different approaches were tried. In 1923,
the French physicist Louis de Broglie suggested that just as a wave has
particle properties, maybe a particle has wave properties. Experiments
found startling evidence of the wavelike behavior of electrons. In 1925,
the German physicist Werner Heisenberg created a different approach to the
quantum theory. Instead of trying to visualize how electrons moved, he
compiled sets of numbers that represented various properties of electrons,
e.g. spin, energy, and momentum.
Pulsars were discovered by accident in 1967 while Jocelyn Bell
and Antony Hewish were looking for twinkling sources
of radio radiation. The explanation for the radio pulses
proved the existence of neutron stars, incredibly dense
remains of massive collapsed stars.
Read this section to learn about the discovery, and find out
how a dying star can become a pulsar. | <urn:uuid:a2f455d3-3205-4021-b605-d0ce8b01da63> | 3.421875 | 77 | Knowledge Article | Science & Tech. | 36.160526 |
Hammerling's experiment with the single celled green algae, Acetabularia,
showed that the nucleus of a cell contains the genetic information that
directs cellular development.
A. mediterranea has a smooth, disc shaped cap, while A. crenulata
has a branched, flower-like cap. Each Acetabularia cell is composed
of three segments: the "foot" or base which contains the nucleus,
the "stalk," and the "cap."
In his experiments, Hammerling grafted the stalk of one species of Acetabularia
onto the foot of another species. In all cases, the cap that eventually
developed on the grafted cell matched the species of the foot rather than
that of the stalk. In this example, the cap that grows on the
grafted stalk matches that of the base species, A. mediterranea.
This experiment shows that the base is responsible for the type of cap
that grows. The nucleus that contains genetic information
is in the base, so the nucleus directs cellular development.
Figure: Hammerling's Acetabularia. From: Peters, Pamela. Biotechnology:
A Guide to Genetic Engineering. Dubuque, IA: William C. Brown Publishers.
Science Fair Project Encyclopedia
In particle physics, an elementary particle is a particle of which other, larger particles are composed. For example, atoms are made up of smaller particles known as electrons, protons, and neutrons. The proton and neutron, in turn, are composed of more elementary particles known as quarks. One of the outstanding problems of particle physics is to find the most elementary particles - or the so-called fundamental particles - which make up all the other particles found in Nature, and are not themselves made up of smaller particles.
(main article with table of particles: Standard Model)
The Standard Model of particle physics contains 12 species of elementary fermions ("matter particles") and 12 species of elementary bosons ("radiation particles"), plus their corresponding antiparticles and the still undiscovered Higgs boson. However, the Standard Model is widely considered to be a provisional theory rather than a truly fundamental one, since it is fundamentally incompatible with Einstein's general relativity. There are likely other elementary particles not described by the Standard Model, such as the graviton, the particle that would carry the gravitational force, or the sparticles, supersymmetric partners of the ordinary particles.
The 12 fundamental fermions
The 12 fundamental fermionic particles are divided into three families of four particles each. Six of the particles are quarks. The remaining six are leptons, three of which are neutrinos, and the remaining three of which have an electric charge of -1: the electron and its two cousins, the muon and the tauon.
|First family|Second family|Third family|
|up quark (u)|charm quark (c)|top quark (t)|
|down quark (d)|strange quark (s)|bottom quark (b)|
|electron (e-)|muon|tauon|
|electron neutrino|muon neutrino|tau neutrino|
There are also 12 fundamental fermionic antiparticles which correspond to these 12 particles. The positron e+ corresponds to the electron and has an electric charge of +1 and so on:
| First family | Second family | Third family |
| up antiquark | charm antiquark | top antiquark |
| down antiquark | strange antiquark | bottom antiquark |
| positron (e+) | antimuon | antitauon |
| electron antineutrino | muon antineutrino | tau antineutrino |
Quarks and antiquarks have never been detected in isolation. A quark can exist paired with an antiquark, forming a meson: the quark carries a "color" (see color charge) and the antiquark a corresponding "anticolor". The color and anticolor cancel out, yielding a colorless state (i.e. absence of color charge). Or three quarks can exist together, forming a baryon: one quark is "red", another "blue", another "green". These three colors together form white (i.e. absence of color charge). (Cf. RGB color space, complementary color.) Or three antiquarks can exist together, forming an antibaryon: one antiquark is "antired", another "antiblue", another "antigreen". These three anticolors together form antiwhite (i.e. neutral). A more recent discovery is the five-quark baryon state created at Jefferson Lab. It consists of two up quarks, two down quarks, and one anti-strange quark, whose colors cancel out to form white. The result is that colors (and anticolors) cannot be isolated either, but quarks do carry colors, and antiquarks carry anticolors.
Quarks also carry fractional electric charges, but since they are confined within hadrons whose charges are all integral, fractional charges have never been isolated. Note that quarks have electric charges of either +2/3 or -1/3, whereas antiquarks have corresponding electric charges of either -2/3 or +1/3.
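The charge arithmetic above is easy to verify. Here is a small Python sketch (added for illustration) using exact fractions; the quark contents are the standard assignments: proton = uud, neutron = udd, and the π⁺ meson = an up quark plus a down antiquark.

```python
# Verify that the fractional quark charges quoted in the text always
# combine into whole-number charges for the hadrons they build.
from fractions import Fraction

UP, DOWN = Fraction(2, 3), Fraction(-1, 3)    # charges of the u and d quarks
ANTI_UP, ANTI_DOWN = -UP, -DOWN               # antiquarks carry opposite charge

def total_charge(quarks):
    return sum(quarks, Fraction(0))

proton = total_charge([UP, UP, DOWN])         # uud baryon -> +1
neutron = total_charge([UP, DOWN, DOWN])      # udd baryon -> 0
pi_plus = total_charge([UP, ANTI_DOWN])       # u + anti-d meson -> +1
```

Every combination allowed by the color rules (three quarks, or quark plus antiquark) sums to an integer, which is why fractional charges are never seen in isolation.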
Out of the 12 fundamental bosonic particles, eight are gluons. Gluons are the mediators of the strong force, and carry both a color and an anticolor. Although gluons are massless, they are never observed in detectors due to confinement; rather, like single quarks, they produce jets of hadrons.
Out of the remaining four fundamental bosons, three are weak gauge bosons: W+, W-, and Z0; these mediate the weak force. The last fundamental boson is the photon, which mediates the electromagnetic force.
Although the weak and electromagnetic forces appear quite different to us at everyday energies, the two forces are theorized to be unified as a single electroweak force at high energies. The reason for this difference at low energies is thought to be due to the existence of the Higgs boson. Through the process of spontaneous symmetry breaking, the Higgs selects a special direction in electroweak space that causes three electroweak particles to become very heavy (the weak bosons) and one to remain massless (the electromagnetic photon). Although the Higgs mechanism has become an accepted part of the Standard Model, the boson itself has never been observed in detectors. This is thought to be due to the particle's great mass, but its continuing absence is a major cause of concern for particle physicists.
Beyond the Standard Model
One major extension of the standard model involves supersymmetric particles, abbreviated as sparticles, which include the sleptons, squarks, neutralinos and charginos. Each particle in the Standard Model would have a superpartner whose spin differs by 1/2 from the ordinary particle. In addition, the sparticles are heavier than their ordinary counterparts: they are so heavy that existing particle colliders would not be powerful enough to be able to detect them. However, some physicists believe that sparticles will be detected by 2008 in the Large Hadron Collider at CERN.
According to string theorists, each kind of fundamental particle corresponds to a different resonant vibrational pattern of a fundamental string (strings are constantly vibrating in standing wave patterns, similar to the way that quantized orbits of electrons in the Bohr model vibrate in standing wave patterns). All strings are essentially the same, but different particles differ in the way their strings vibrate. More massive particles correspond to more energetic vibrational patterns. But fundamental particles do not contain strings: they are strings.
String theory also predicts the existence of gravitons. Gravitons are practically impossible to detect experimentally, because the gravitational force is so weak compared to the other forces.
Links and References
- Greene, Brian, "Elementary particles". The Elegant Universe, NOVA (PBS)
- particleadventure.org: The Standard Model, *Unsolved Mysteries. Beyond The Standard Model, *What is the World Made of? The Naming of Quarks
- University of California: Particle Data Group
- particleadventure.org: Particle chart
- CERNCourier: Season of Higgs and melodrama
- Pentaquark information page
The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
The launch window would open at 7:08 a.m. EDT for 10 minutes and would provide for a rendezvous in five revolutions. Recovery of SL-3 was planned for 22 September 1973. Two members of the Skylab 3 crew, Jack R. Lousma, left, and Owen K. Garriott, center, inspect a part of the twin-pole solar sail at MSFC (above). At right, Lousma practices erecting the solar sail over a portion of the Orbital Workshop mockup in the MSFC Neutral Buoyancy Tank. Nylon netting was used for this underwater training instead of the aluminized fabric the actual sail was made of.
From Wikipedia, the free encyclopedia
A cell wall is a fairly rigid layer surrounding a cell, located external to the cell membrane, that provides the cell with structural support, protection, and a filtering mechanism. The cell wall also prevents over-expansion when water enters the cell. Cell walls are found in plants, bacteria, archaea, fungi, and algae. Animals and most protists do not have cell walls.
The cell wall is constructed from different materials dependent upon the species. In plants, the cell wall is constructed primarily from a carbohydrate polymer called cellulose, and the cell wall can therefore also function as a carbohydrate store for the cell. In bacteria, peptidoglycan forms the cell wall. Archaea have various chemical compositions, including glycoprotein S-layers, pseudopeptidoglycan, or polysaccharides. Fungi possess cell walls of chitin, and algae typically possess walls constructed of glycoproteins and polysaccharides; however, certain algal species may have a cell wall composed of silicic acid. Often, other accessory molecules are found anchored to the cell wall.
Plant cell walls
The major carbohydrates making up the primary cell wall are cellulose, hemicellulose and pectin. The cellulose microfibrils are linked via hemicellulosic tethers to form the cellulose-hemicellulose network, which is embedded in the pectin matrix. The most common hemicellulose in the primary cell wall is xyloglucan.
The three primary polymers that make up plant cell walls consist of about 35% cellulose, 20 to 35% hemicellulose and 10 to 25% lignin. Lignin fills the spaces in the cell wall between the cellulose, hemicellulose and pectin components.
Plant cells walls also incorporate a number of proteins; the most abundant include hydroxyproline-rich glycoproteins (HRGP), also called the extensins, the arabinogalactan proteins (AGP), the glycine-rich proteins (GRPs), and the proline-rich proteins (PRPs). With the exception of glycine-rich proteins, all the previously mentioned proteins are glycosylated and contain hydroxyproline (Hyp). Each class of glycoprotein is defined by a characteristic, highly repetitive protein sequence. Chimeric proteins contain two or more different domains, each with a sequence from a different class of glycoprotein. Most cell wall proteins are cross-linked to the cell wall and may have structural functions.
The relative composition of carbohydrates, secondary compounds and protein varies between plants and between the cell type and age.
The middle lamella is laid first, formed from the cell plate during cytokinesis, and the primary cell wall is then expanded inside the middle lamella. The actual structure of the cell wall is not clearly defined and several models exist - the covalently linked cross model, the tether model, the diffuse layer model and the stratified layer model. However, the primary cell wall can be defined as composed of cellulose microfibrils aligned at all angles. Microfibrils are held together by hydrogen bonds to provide a high tensile strength. The cells are held together and share the gelatinous membrane called the middle lamella, which contains magnesium and calcium pectates (salts of pectic acid). Cells interact through plasmodesmata, which are inter-connecting channels of cytoplasm that connect to the protoplasts of adjacent cells across the cell wall.
In some plants and cell types, after a maximum size or point in development has been reached, a secondary wall is constructed between the plant cell and primary wall. Unlike the primary wall, the microfibrils are aligned mostly in the same direction, and with each additional layer the orientation changes slightly. Cells with secondary cell walls are rigid. Cell to cell communication is possible through pits in the secondary cell wall that allow plasmodesmata to connect cells through the secondary cell walls.
Algal cell walls
Like plants, algae have cell walls. Algal cell walls contain cellulose and a variety of glycoproteins. The inclusion of additional polysaccharides in algal cell walls is used as a feature for algal taxonomy.
- Mannans form microfibrils in the cell walls of a number of marine green algae, including those from the genera Codium, Dasycladus, and Acetabularia, as well as in the walls of some red algae, like Porphyra and Bangia.
- Alginic acid is a common polysaccharide in the cell walls of brown algae
- Sulfonated polysaccharides occur in the cell walls of most algae; those common in red algae include agarose, carrageenan, porphyran, furcelleran and funoran.
The group of algae known as the diatoms synthesize their cell walls (also known as frustules or valves) from silicic acid (specifically orthosilicic acid, H4SiO4). The acid is polymerised intra-cellularly, then the wall is extruded to protect the cell. Significantly, relative to the organic cell walls produced by other groups, silica frustules require less energy to synthesize (approximately 8%), potentially a major saving on the overall cell energy budget, and possibly an explanation for higher growth rates in diatoms.
Bacterial cell walls
- Further information: Cell envelope
Around the outside of the cell membrane is the bacterial cell wall. Bacterial cell walls are made of peptidoglycan (also called murein), which is made from polysaccharide chains cross-linked by unusual peptides containing D-amino acids. Bacterial cell walls are different from the cell walls of plants and fungi which are made of cellulose and chitin, respectively. The cell wall of bacteria is also distinct from that of Archaea, which do not contain peptidoglycan. The cell wall is essential to the survival of many bacteria and the antibiotic penicillin is able to kill bacteria by inhibiting a step in the synthesis of peptidoglycan.
There are broadly speaking two different types of cell wall in bacteria, called Gram-positive and Gram-negative. The names originate from the reaction of cells to the Gram stain, a test long-employed for the classification of bacterial species.
Gram-positive bacteria possess a thick cell wall containing many layers of peptidoglycan and teichoic acids. In contrast, Gram-negative bacteria have a relatively thin cell wall consisting of a few layers of peptidoglycan surrounded by a second lipid membrane containing lipopolysaccharides and lipoproteins. Most bacteria have the Gram-negative cell wall and only the Firmicutes and Actinobacteria (previously known as the low G+C and high G+C Gram-positive bacteria, respectively) have the alternative Gram-positive arrangement. These differences in structure can produce differences in antibiotic susceptibility, for instance vancomycin can kill only Gram-positive bacteria and is ineffective against Gram-negative pathogens, such as Haemophilus influenzae or Pseudomonas aeruginosa.
Fungal cell walls
Not all species of fungi have cell walls but in those that do, the cell walls are composed of glucosamine and chitin, the same carbohydrate that gives strength to the exoskeletons of insects. They serve a similar purpose to those of plant cells, giving fungal cells rigidity and strength to hold their shape and preventing osmotic lysis. They also limit the entry of molecules that may be toxic to the fungus, such as plant-produced and synthetic fungicides. The composition, properties, and form of the fungal cell wall change during the cell cycle and depend on growth conditions.
The group Oomycetes (also known as water molds) are saprotrophic plant pathogens like fungi, but anomalously possess cellulose cell walls. Until recently they were widely believed to be fungi, but structural and molecular evidence has led to their reclassification as heterokonts, related to autotrophic brown algae and diatoms.
- ^ Sendbusch, P. S. (2003). Cell Walls of Algae. Botany Online
- ^ Raven, J. A. (1983). The transport and function of silicon in plants. Biol. Rev. 58, 179-207
- ^ Furnas, M. J. (1990). In situ growth rates of marine phytoplankton : Approaches to measurement, community and species growth rates. J. Plankton Res. 12, 1117-1151
- ^ van Heijenoort J (2001). "Formation of the glycan chains in the synthesis of bacterial peptidoglycan". Glycobiology 11 (3): 25R – 36R. PMID 11320055.
- ^ a b Koch A (2003). "Bacterial wall as target for attack: past, present, and future research". Clin Microbiol Rev 16 (4): 673 – 87. PMID 14557293.
- ^ Gram, HC (1884). "Über die isolierte Färbung der Schizomyceten in Schnitt- und Trockenpräparaten". Fortschr. Med. 2: 185–189.
- ^ Hugenholtz P (2002). "Exploring prokaryotic diversity in the genomic era". Genome Biol 3 (2): REVIEWS0003. PMID 11864374.
- ^ Walsh F, Amyes S (2004). "Microbiology and drug resistance mechanisms of fully resistant pathogens.". Curr Opin Microbiol 7 (5): 439-44. PMID 15451497.
- ^ Interactions between Plants and Fungi: the Evolution of their Parasitic and Symbiotic Relations, P. v. Sengbusch, accessed 8 December 2006 | <urn:uuid:033041c6-d1c4-4d8b-b769-4d0b44d350ab> | 4.03125 | 2,097 | Knowledge Article | Science & Tech. | 37.27198 |
use of radiation
...are most common. The latter avoid unwanted chemical interactions between the ions of the beam and the substrate. Sputtering results from several interaction mechanisms. Conceptually, the simplest is rebound sputtering, in which an incident ion strikes an atom on the surface, causing it to recoil into the target. The recoiling atom promptly collides with a neighbouring atom in the target,...
A Measure of Success
A Finnigan Instrument Corporation Model 1015 GC/MS/DS. From left to right: a minicomputer from Digital Equipment; part of the quadrupole mass spectrometer; the remainder of the mass spectrometer electronics console and gas chromatograph. Image courtesy of Robert Finnigan.
In 1970 many Americans, dismayed by pollution in the air, water, and soil, participated in the first Earth Day and contributed to the political momentum that led to the creation of the Environmental Protection Agency (EPA). To protect the environment the EPA needed legislation, and to create legislation it needed the ability to precisely identify pollutants and measure their concentrations. Enter an unlikely entrepreneur, ex–cold war engineer Robert E. Finnigan.
Computers played an important role in waging the cold war. In the 1950s, at the Lawrence Livermore National Laboratory scientists and engineers used some of the most advanced digital computers of the time to design nuclear weapons. Finnigan, an Air Force officer with a Ph.D. in electrical engineering, led a Livermore effort to develop a computerized control system for a nuclear reactor intended to power a thermonuclear missile that could prowl the skies for days. By 1962, though his team had successfully run a prototype reactor using a computer, Finnigan was convinced the missile program was doomed.
Finnigan and collaborator Mike Uthe wondered what they would do with their expertise in computer control of advanced nuclear reactors until the Stanford Research Institute (SRI) recruited them to start a controls group. At the time another team at SRI was developing quadrupole mass spectrometers, in which particular combinations of voltages are applied to four parallel rods, allowing selected ions to pass between them and reach a detector. Such an instrument can identify specific substances and measure their quantities. Finnigan and Uthe speculated that such a spectrometer would have broad applications.
Two years later Finnigan and Uthe joined Electronic Associates, Inc. (EAI), a leading U.S. supplier of analog computers, and continued working on quadrupole mass spectrometers. The two wanted to contract the quadrupole research out to their former SRI colleagues, but SRI shut down development, telling its researchers to buy the instruments rather than produce the devices themselves. However, no quadrupole mass spectrometer manufacturers existed, so Finnigan decided to become a commercial manufacturer, with SRI as his first customer.
Jack Jennings and Charles Rosen at SRI decided to simply give Finnigan what he needed to get into commercial production as quickly as possible: know-how and names. From 1964 to 1966 Finnigan and Uthe’s EAI division sold over 500 quadrupole residual gas-analyzer instruments. Researchers working in microelectronics and materials science were major consumers. With orders increasing, Finnigan and Uthe raised the price of their instruments, and their upstart division provided most of EAI’s profit.
While marketed as residual gas analyzers, the instruments that EAI produced from 1964 to 1966 were bona-fide mass spectrometers. They were not, however, adequate for the demanding analyses required in chemical research. Much would be required to transform them into analytical instruments. Finnigan became increasingly interested in the idea of a computer-controlled instrument in which a gas chromatograph (GC) would be used to separate the constituents of complex samples, and a robust quadrupole mass spectrometer (MS) would determine the nature and quantity of these constituents: a computerized GC/MS. With this idea Finnigan was developing a vision of the next phase of the instrumentation revolution in chemistry, a phase associated with the minicomputer revolution.
The first minicomputers appeared in the early 1960s. These were refrigerator-sized digital computers that sold for tens of thousands of dollars. Laboratory scientists first used them to process the data from a single laboratory’s instruments. Palo Alto was home to several laboratories leading the application of GC/MS to biochemistry, including those of Carl Djerassi and Joshua Lederberg, who worked in pharmaceuticals and genetics, respectively. But making sense of the data produced in a single GC/MS run was extremely time-consuming. The measurement could be performed in a few hours, but organizing the data from a single run and then manually interpreting it could take weeks, even months. Both groups were looking at minicomputers as a solution to this glacially slow pace of data handling and interpretation. Their ultimate goal was real-time speed, where data interpretation happens as a measurement takes place.
Visual Studio 2010 and Visual Studio Team Foundation Server 2010 help software development teams successfully deliver complex software solutions. Learn how Visual Studio and Team Foundation Server enable you to enforce best practices for software development and to develop better quality software. This course uses the latest versions of Visual Studio and Team Foundation Server.
Introduction to Visual Studio and Team Foundation Server
This module presents an introduction to Microsoft's Team Foundation Server, a platform for integrating and managing all aspects of the software development process. It provides an overview of the Visual Studio client application and the Team Foundation Server, discussing how and where each feature fits into the software development lifecycle and how it pertains to each project role.
There is also an introduction to project management with Microsoft's Visual Studio. In particular, this module shows how project managers can use Team Foundation Server to create and manage a software development project. It talks briefly about project methodology and about how different methodologies are supported by Team Foundation Server. As well as discussing how to use the Team Explorer tool to manage the team project, it also explores how to integrate other related tools commonly used by project managers today, such as Microsoft Excel and Microsoft Project.
This module also talks about how Team Foundation Server uses process templates to represent the core information about these methodologies and how these process templates can be customised. This module shows how to change the contents of an existing template or add new ones, amending such details as work items, work item queries, reports, document templates and process guidance.
Introduction to Source Control
Source code control, or version control, is the management of revisions to pieces of information that make up a project. Many applications make use of source code control to manage their source code, documentation, project data and any other information that a project requires. Team Foundation Server includes an enterprise-class source control system, entirely separate from Visual SourceSafe (Microsoft's previous source control offering). A Team Foundation Server team project will typically have one or more source control folders associated with it. Each folder is a location where project items can be stored, and folders are created either during project creation or at a later stage as needed. The actual data that the source control system manages is held in a SQL Server database set up during the installation of Team Foundation Server.
Project members can perform various actions on the source control repository, and in this module we will look at the basic mechanisms involved in source control. Members can check data into and out of source control. When checked out, the data is associated with a client-side workspace. The workspace is an area on the client that holds checked-out data. A given client may have multiple workspaces; this allows clients to check out different parts of the repository into different workspaces, allowing clients (for example) to work with multiple versions of the code. Check-ins are atomic, and each check-in creates a new changeset.
A changeset is a snapshot of all the files in the folder at the time this change was committed, and each changeset is given a unique identifier. Users are able to check out the latest version from the repository, or they can check out a specific changeset. Changesets can also be labelled, and users can check out a given labelled version of the repository. Users can also check out a version that was checked in on a specific date. All the above work can be done with the Source Control Explorer, although Team Foundation Server supplies a command-line tool called TF.exe, which is also discussed.
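The check-in behaviour described here can be captured in a toy model. The Python sketch below is illustrative only: the class and method names are invented for the example, and real Team Foundation Server stores its changesets in SQL Server rather than in memory.

```python
# Toy model of atomic check-ins: every check-in produces a new changeset
# with a unique, increasing identifier, and any changeset can be fetched
# again later by its id (illustrative only, not TFS's real API).
class Repository:
    def __init__(self):
        self._changesets = []      # list of (id, snapshot) pairs
        self._files = {}           # current contents: path -> text

    def check_in(self, changes):
        """Apply all changes as one atomic changeset and return its id."""
        self._files.update(changes)
        changeset_id = len(self._changesets) + 1
        self._changesets.append((changeset_id, dict(self._files)))
        return changeset_id

    def get(self, changeset_id=None):
        """Fetch the latest snapshot, or a specific changeset by id."""
        if changeset_id is None:
            return dict(self._files)
        return dict(self._changesets[changeset_id - 1][1])

repo = Repository()
c1 = repo.check_in({"main.cs": "v1"})
c2 = repo.check_in({"main.cs": "v2", "util.cs": "v1"})
```

Fetching changeset 1 returns the old snapshot even after later check-ins, which is the property that makes labels and date-based gets possible.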
Branching, Merging and Shelving
One of the features that Team Foundation Server provides is the ability to branch the source control repository. A branch is essentially a copy of the repository and branching allows a developer to work on new features without changing the current version that can still be worked on by other developers within the team. We will examine how branching is performed and how to manage branches as a user. Changes a developer makes in a branch may need to be added back into the main source control tree or indeed into another branch, a process called merging. Team Foundation Server also introduces a concept that is new to source control, the idea of shelving. The concept behind shelving is that while a developer is working on changing code, they may be asked to work on something else. Rather than save the current changes locally and risk losing them, or check them back into source control when they are not finished, a developer can shelve the changes. Shelves are essentially named branches that belong to a developer. They can be checked out by the person that created the shelveset, or by another user. Shelves are useful in many scenarios during the lifetime of a project and will be looked at in detail. Finally, this module looks at how check in policies can be tailored by writing and installing custom check in policies.
Team Foundation Build
On a large software team, it is often desirable to set up a public build environment where builds of the whole project are executed upon some trigger, such as may happen with continuous integration or with a nightly build. This module investigates how the Team Build component of Team Foundation Server manages public builds. It shows how the Team Foundation Server Team Explorer can be used to set up different build configurations on different build servers, all version controlled using the Team Foundation Server version control repository, and how builds can be executed. This module covers how to schedule builds and how to use the continuous integration features of Visual Studio Team Build. It also explains how builds can be executed in response to a variety of triggers (for example, source code check in), how build types are defined using Windows Workflow and how they can be customised to suit the project's requirements.
Managing Databases with Team Foundation Server
Enterprise developers use databases constantly and these databases can become very large and often unwieldy. Visual Studio provides tools for designing, developing, managing and testing database definitions, data and stored procedures. Database schemas can be kept under source control and used to manage controlled updates of databases. Data can be compared across databases, for example, to check that staging databases have the same data as a master database. Unit tests can be written to check that the functionality provided by your database actually does what it is supposed to. This module will cover each of these areas.
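The data-compare idea (checking that a staging database has the same data as a master) can be illustrated with a small sketch. This is an analogue only: Visual Studio's database tools target SQL Server, while this example uses SQLite, and the table and rows are invented.

```python
# Sketch of a data compare: find rows present in the master table but
# missing from the staging table (SQLite stands in for SQL Server here).
import sqlite3

def load_rows(conn, table):
    return set(conn.execute(f"SELECT * FROM {table}"))

master = sqlite3.connect(":memory:")
staging = sqlite3.connect(":memory:")
for db in (master, staging):
    db.execute("CREATE TABLE customers (id INTEGER, name TEXT)")

master.executemany("INSERT INTO customers VALUES (?, ?)",
                   [(1, "Ann"), (2, "Bob")])
staging.executemany("INSERT INTO customers VALUES (?, ?)",
                    [(1, "Ann")])                     # Bob is missing

missing = load_rows(master, "customers") - load_rows(staging, "customers")
```

The set difference pinpoints exactly which rows a controlled update would need to push to staging.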
Test-Driven Development and Unit Testing
In recent years, unit testing has become recognised as a very important part of the software development lifecycle. A new style of development, Test Driven Development (TDD), has also come to the fore. In this module, we will examine how Visual Studio supports TDD. We will start by explaining unit testing and defining what a unit test actually is. We will then look at writing unit tests in Visual Studio, in particular looking at the attributes that are used when defining a test case and how to write code to determine whether a test has succeeded or failed. We examine how to execute and debug tests and how to view test results. We also examine other areas of the test process such as checking for exceptions and writing code to initialise the test. Visual Studio also allows us to generate tests for already existing code; we will look at that process and at the generated test code. We may also have a need to test private methods. Visual Studio can generate stub code to test private methods. This module will look at the generated code, and at how to use it within the test project. When testing it is important all code is tested, but how does a developer know that all of the code in a project has been exercised? The answer is provided in the next module!
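The test pattern described above has the same shape in any class-based framework. As an illustration, here is the equivalent structure in Python's unittest, standing in for the Visual Studio test framework: a test class, per-test initialisation, pass/fail assertions, and a check for an expected exception. The divide function is invented for the example.

```python
# A unit-test class with setup, an assertion, and an expected exception,
# mirroring the attribute-driven tests described for Visual Studio.
import unittest

def divide(a, b):
    return a / b

class DivideTests(unittest.TestCase):
    def setUp(self):
        # runs before every test, like test-initialise code in Visual Studio
        self.numerator = 10

    def test_returns_quotient(self):
        self.assertEqual(divide(self.numerator, 2), 5)

    def test_zero_divisor_raises(self):
        # checking for exceptions, as the module describes
        with self.assertRaises(ZeroDivisionError):
            divide(self.numerator, 0)

suite = unittest.TestLoader().loadTestsFromTestCase(DivideTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

In Visual Studio the class, setup method and test methods would instead carry attributes such as a test-class and test-method marker, but the execution model is the same.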
Data-Driven Unit Testing, Code Coverage and Impact Analysis
When testing a specific method, you will often want to test the method with different data. For example, you want to test edge conditions (values such as 0, +INF or -INF when dealing with integers). While this can be done by writing multiple tests, it is also possible in Visual Studio to do this using a Data Driven test. In a data driven test, the test data is stored in a database table and this table is specified during the test process, each row in the table representing a different test case. In this module, we will show how to create the database, how to specify the table and the columns within that table to use, and how that data is passed to the test. We will also discuss the thorny issue of how to manage this database.
Visual Studio offers the ability to instrument code to show which code has been executed and which has not during the testing. In this module, we will show how to configure test settings to execute the code coverage engine in order to determine which areas of code need improved testing.
Related to this, Visual Studio has the ability to work out which tests have been affected when code is changed; this is called Test Impact Analysis. This module covers how to use this so that only tests that are affected are rerun, thereby significantly reducing the amount of time spent testing with no loss of test effectiveness.
Dependency Injection and Inversion of Control
Inversion of Control is a mechanism that allows components in a software system to be very loosely coupled. It is achieved by configuring components to use a dependency injection container. This is vital if you want to be able to test code effectively.
In this module, we will look at the latest version of Microsoft's Inversion of Control (IoC) container, Unity. We will examine how to best use the container through configuration and through code.
Doubles and Mocking
Sometimes when testing code you have to be able to replace complex parts of the system under test, such as a database. In this module, we will look at using Moq to help us do this. Moq supports various styles of mocking, from letting us create stubs to letting us return data and creating dynamic mocks to ensure certain calls are made. We will couple the mocking tools with an IoC container to understand how to create good unit tests.
IntelliTrace
Applications go wrong! When you are developing code and something goes wrong, it's very easy to run the code under the debugger and to look for the error.
There are also tools that allow you to do post-mortem debugging, i.e. to help debug applications that crash or hang when actually running on a user's computer.
IntelliTrace provides a missing link. What happens when applications go wrong when being tested by your QA or acceptance test department? Those teams probably don't know how to use (and don't want to learn how to use) the Visual Studio debugger. In addition, while WinDbg and friends are great for post-mortem debugging, it would be nice if we could gather more debugging information in such a controlled environment as our test environment. This is where IntelliTrace comes in; you can run the trace on a test computer and have it gather all the data necessary to run a full debugging session after the event.
Code Analysis
Syntactically correct code that passes all of the compiler's checks may still have issues that need to be resolved. If best practices weren't employed, then code may not perform well, may not be scalable or it may not be secure. Such code is an accident waiting to happen later in the project's life. To avoid these problems, code should be reviewed. Visual Studio has a rule-based code analysis tool built in that allows code to be reviewed early and often during development, thus saving time and resources later on in the project. This module shows how to use code analysis effectively, and explains how it can become an integral part of the build process, how it integrates with Visual Studio, and how it is possible to develop custom code analysis rules that can be deployed to ensure that your coding best practices are followed.
Code Contracts
Visual Studio introduces Code Contracts to the .NET development world. Code contracts let us provide pre and post conditions for any method in our code, as well as class invariants. Code contracts can be checked at runtime, but more interestingly, they can also be checked at compile time. This provides an extra level of assurance for our code.
This module looks at how to install the tooling for code contracts, how to write code contracts to set up pre and post conditions, and how to use code contracts to their full potential.
The Universe is a near endless space filled with galaxies, stars, planets and other objects like asteroids and meteorites. A galaxy can contain potentially billions of stars or suns.
The Earth's sun is part of the Milky Way galaxy.
The Earth's sun is at the center of our Solar System.
The Solar System includes all the planets that revolve around the Sun, and the moons that revolve around those planets.
Our solar system consists of eight planets that revolve around the Sun. Listed from the closest to the farthest from the Sun, they are: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus and Neptune. Pluto, long listed as the ninth planet, was reclassified as a dwarf planet in 2006.
Programs do not have to be written in just one file; your code can be split up into as many files as you want. If a program comprises forty functions, you could put each function into a separate file. This is a bit extreme though. Often functions are grouped by topic and put into separate files. Say you were writing a program that worked out the price of a pizza and displayed the result: you could put the calculation functions into one file, the display functions into another, and have main() in a third one. The command you would use to compile your program would look something like this:
ciaran@pooh:~/book$ gcc -o pizza_program main.c prices.c display.c
Remember: If you define a function in prices.c and you want to call this function in main.c, you must declare the function in main.c.
House Fly Larvae
The house fly can be an aggravating pest found within human homes. In fact, the house fly has a strong relationship with man and will travel with human populations to even the coldest of regions. House fly populations can be harmful to human health: they carry multiple pathogens and have been linked to the spread of a number of diseases.
House fly eggs look like small grains of rice. Eggs hatch within 24 hours, and house fly larvae emerge. House fly larvae, or maggots, appear similar to pale worms. Their sole purpose is to eat and store energy for their upcoming pupation. Larvae feed for approximately five days, after which they find dry, dark locations for pupal development.
House fly larvae can be commonly found on rotting plant or animal material. If an animal dies, maggots will most likely feed on the corpse. These larvae also fall prey to many other species, including reptiles, birds and other insects. Certain wasps are known to lay their eggs inside maggots. When these eggs hatch, young wasps devour the maggot from the inside out.
When entering the pupal stage, white larvae develop hard, dark outer shells. Within a few hours of emerging from the pupal case, a female is capable of breeding, and she can deposit almost a thousand eggs in her lifetime.
|Nov26-12, 09:56 AM||#1|
pressure vessels problem
1. The problem statement, all variables and given/known data
20 m^3 of gas at a pressure of 25 bar is to be stored in a cylindrical pressure vessel 2 m long. Given the following information:
The yield strength of the vessel material is 14,000 psi
If a factor of safety of 5 is to be used, determine:
Whether the vessel should be treated as a thin or thick cylinder.
2. Relevant equations
I've been given the feedback as follows:
For this question you need to apply the thin cylinder theory to determine the thickness t, then depending on the answer for r/t, determine whether the cylinder should be treated as a thick cylinder. If it is a thick cylinder, then the thick cylinder theory must be applied to determine the thickness of vessel required.
3. The attempt at a solution
1 bar = 100,000 Pa
A factor of safety of 5 means that the maximum allowable stress is the yield strength divided by 5.
1 psi = 6894.7 N/m^2
a) We have PV = RT = P·S·L (L = 2 m long.)
So the strength of our vessel should be 25·10^5 Pa ≈ 362.6 psi
From the factor of safety we can find that the maximum strength should be
14/5 · 10^3 psi = 2.8·10^3 psi
So, the vessel should be treated as a thick cylinder.
I'm told that my attempt is incorrect, but I don't know how else to solve it using the feedback I'm given.
|Nov26-12, 10:42 AM||#2|
The length is fixed at 2 m. So, calculate what the diameter needs to be to hold 20 m^3 of gas. From the diameter, calculate what the thickness needs to be and then check the r/t value to determine what set of equations should be used.
|Nov26-12, 10:43 AM||#3|
Did you work out the radius of the cylinder required to hold the compressed gas?
The decision on whether to apply thick or thin cylinder theory depends on the ratio r/t, not on what the ratio of the wall stress to yield might be.
|Nov26-12, 10:44 AM||#4|
Can you explain to me how to do this? What equations do I need to use?
|Nov26-12, 10:46 AM||#5|
The equation to figure out the volume is straight forward. Just look it up for a cylinder. A basic thickness equation can easily be derived (or looked up) for the stress in the hoop direction. The longitudinal stress is always 1/2 of the hoop stress. So, the hoop stress governs.
I'm learning about radian and degree measurements and I'm not so good at word problems... is there anyone out there that can help me with the following?
Assume that the planet Venus travels around the Sun in a circular orbit. The radius of the orbit is 6.7 x 10^7 mi and the time needed to make one revolution is 243 earth days.
a. Determine the angle, in radians, that Venus moves through in 1 earth day.
b. Determine the distance that Venus travels in 1 earth day.
someone please help!!!! | <urn:uuid:b5640e31-f590-4328-b134-cbb5aafd9748> | 3.34375 | 116 | Comment Section | Science & Tech. | 80.264614 |
Extreme Weather In The 1970’s
In the 1970s, climatologists blamed the same sort of extreme weather events on global cooling that are now blamed on global warming. And not only the same events, but the same causes.
We are all familiar with the "ice age" scare of the early 1970's. Science News ran a report at the time, with an interview with C C Wallen, chief of the Special Environmental Applications Division at the World Meteorological Organization.
According to the article,
By contrast (with the Little Ice Age), the weather in the first part of this century has been the warmest and best for world agriculture in over a millennium, and, partly as a result, the world's population has more than doubled. Since 1940, however, the temperature of the Northern Hemisphere has been steadily falling: Having risen about 1.1 degrees C. between 1885 and 1940, according to one estimation, the temperature has already fallen back some 0.6 degrees, and shows no signs of reversal.
This topic has been thoroughly discussed many times previously, so I don’t intend to rehash the same arguments. I am , though, interested in what climatologists at the time thought about the effects of this cooling.
C C Wallen had this to say,
The principal weather change likely to accompany the cooling trend is increased variability – alternating extremes of temperature and precipitation in any given area – which would almost certainly lower average crop yields.
The cause of this increased variability can best be seen by examining upper atmosphere wind patterns that accompany cooler climate. During warm periods a “zonal circulation” predominates, in which the prevailing westerly winds of the temperate zones are swept over long distances by a few powerful high and low pressure centers. The result is a more evenly distributed pattern of weather, varying relatively little from month to month or season to season.
During cooler climatic periods, however, the high-altitude winds are broken up into irregular cells by weaker and more plentiful pressure centers, causing formation of a “meridional circulation” pattern. These small, weak cells may stagnate over vast areas for many months, bringing unseasonably cold weather on one side and unseasonably warm weather on the other. Droughts and floods become more frequent and may alternate season to season, as they did last year in India. Thus, while the hemisphere as a whole is cooler, individual areas may alternately break temperature and precipitation records at both extremes.
In other words, Wallen observed exactly the same sort of extreme weather then that is now blamed on global warming – unusual cold, unusual warmth, floods and droughts. And not only the same events. The same meridional circulation patterns that he observed are happening again now, resulting in cold winters in some places and warm summers in others.
In 300 B.C., a student of Aristotle observed that humans could change regional temperatures by draining marshes and clearing forests. More than 2,000 years later, a Swede quantified carbon’s role in keeping the planet warm.
That Swede, Svante Arrhenius, concluded that burning coal could cause a “noticeable increase” in atmospheric carbon levels across centuries.
Interest in global warming has followed the same general erratic trend as the yearly warming of the planet since NASA scientist James Hansen first testified in 1988 before the U.S. Senate that humans were causing the change. Since then, the Kyoto Protocol (detailed below) and several other climate summits have come and gone without producing substantial changes in the way humans conduct their economic activity and produce and use energy.
Global leaders will get their next chance to act or sit still—while journalists, activists and members of the concerned public get the opportunity to wring their hands—during the 18th United Nations climate summit that begins next week in Qatar.
—Posted by Alexander Reed Kelly.
Reuters at Scientific American:
1957-58 - U.S. scientist Charles Keeling sets up stations to measure carbon dioxide concentrations in the atmosphere at the South Pole and Mauna Loa, Hawaii. The measurements have shown a steady rise.
1988 - The United Nations sets up the Intergovernmental Panel on Climate Change (IPCC) to assess the scientific evidence.
1992 - World leaders agree the U.N. Framework Convention on Climate Change, which sets a non-binding goal of stabilizing greenhouse gas emissions by 2000 at 1990 levels - a target not met overall.
1997 - The Kyoto Protocol is adopted in Japan; developed nations agree to cut their greenhouse gas emissions on average by at least 5 percent below 1990 levels by 2008-12. The United States stays out of the deal.
Chinese Simplified: 雪 Chinese Traditional (Taiwan): 雪 Chinese Traditional (Hong Kong): 雪 Japanese: 雪 Korean: 雪
Check whether there is variation in the fonts used for the following characters.
Assertion: [Exploratory test] If no font-family is applied using styling, the user agent will select different fonts for display of ideographic text when language attribute values vary in the markup.
When no other styling is specified, some user agents automatically choose a different font for the display of Unicode text in Traditional Chinese, Simplified Chinese, Japanese and Korean, depending on the setting of the lang/xml:lang attribute. The test seeks to show whether that is the case for the user agent where this page is displayed.
This behaviour is not specified in the HTML specification.
The test assumes that recognizably different default fonts are assigned in the browser's font preferences for Simplified Chinese, Traditional Chinese, Japanese, and Korean (so that you can tell the difference). Some user agents also allow for a distinction between Traditional Chinese for Taiwan and Hong Kong, in which case different fonts should be applied for each. The same font should be applied to the serif and sans-serif settings, to eliminate noise from the overall page setting. | <urn:uuid:e8f5b550-9656-4aaa-b249-5dcb6a03c58e> | 2.78125 | 265 | Documentation | Software Dev. | 27.926029 |
Desargues Theorem in a Finite Geometry
Department of Mathematics and Computer Studies
York College (CUNY)
Jamaica, New York
Desargues "theorem" is one of the most unexpected and elegant results in projective geometry.
Given two triangles ABC and A'B'C', if AA', BB', and CC' are lines which go through a single point O, then the points where AB and A'B', BC and B'C', and AC and A'C' meet lie on a line (are collinear)!
If the roles of point and line above are interchanged, then we get the dual of Desargues "statement" which amazingly is also the converse of the original statement. Desargues statement is true for any projective plane embedded in a projective 3-space, for the real projective plane, and for finite projective planes which arise by using coordinates from a field (in particular from a Galois Field). However, there are projective planes which fail to obey the Desargues statement. Such planes can occur for either infinite planes or for finite planes. One nice example of a non-desarguesian plane (in the infinite case) is the Moulton Plane:
The basic idea here is that the lines of the Euclidean plane are modified so that if they have positive slope they are "bent" when they cross the x-axis. (It is a bit like what happens to light rays when they enter water from the air - they bend. However, in the Moulton Plane only the lines of positive slope are "bent.")
Here is an example (exercise) which may help enrich your understanding of the real projective plane, Euclidean analytical geometry, and the projective geometry of finite planes. If you work out the details it will help you see how Desargues Theorem holds in finite geometries where the coordinates come from a finite arithmetic (finite field; Galois field).
We will work in Z5 which will denote the integers mod 5, and which form a field. We can think of the elements of this field as the "numbers" 0, 1, 2, 3, and 4. (We will drop the convention of using bars over these numbers or showing them in a different type font to distinguish them from "ordinary" integers.)
Hence, 4 + 3 = 2 since 7 leaves a remainder of 2 when divided by 5 and 4(3) = 2 since 12 divided by 5 leaves a remainder of 2. "1/2" = 3 since 2x3 is congruent to 1 mod 5. (Thus, we can solve the equation 2x=1; x =3 solves it.)
Our points are the ordered triples (x, y, z) where not all the coordinates are zero and (kx, ky, kz) is the same point as (x, y, z) when k is not 0. The only legal values for x, y, and z are 0, 1, 2, 3, or 4. Similarly we use as lines ax +by +cz = 0. (Again a, b, c are not all 0 and there are only 5 choices for each of them: 0, 1, 2, 3, or 4.)
Suppose we are given the line x + y + 2z = 0. There are only a finite number of points (exactly 6, in fact) on this line. Think of the corresponding "affine" (Euclidean) line: x + y = 3, (y = 4x + 3) obtained by setting z = 1. Since -2 is congruent to 3 mod 5, we get the above equation. So we get 5 points by letting x be 0, 1, 2, 3, and 4 in turn. There is also one more point on this line: the point where the line meets the line at infinity (z=0) which is gotten by setting z = 0 in the original 3-variable equation.
The five affine points are (0,3,1), (1,2,1), (2,1,1), (3,0,1), and (4,4,1), and the point at infinity on this line (last coordinate 0) is
(1,4,0). (This "makes sense" because the line has slope 4.)
What is exciting about these finite planes is that one can write down all the points, all the lines, all the points on each line, and all the lines through each point if one has the patience. To test your skill in this 31 point world, you can try verifying Desargues Theorem for this pair of triangles:
A = (1,2,1)
B = (2,1,1)
A' = (2,4,1)
B' = (1,3,1)
C' = (1,4,1)
First, check that AA', BB', CC' go through a single point. Second, check that AB intersect A'B', AC intersect A'C' and AC intersect A'C' at points that all lie on a single line!
To do this you may want to use the "determinant" approach to finding the line through two points and the point where two lines meet. Mathematically, one does the same thing because we are working in a projective geometry and lines can not be parallel. Lines can be thought of as having equations or coordinates, and similarly for points. | <urn:uuid:c74daf87-0cc3-4429-989f-6b7bb8a5e624> | 3.09375 | 1,119 | Academic Writing | Science & Tech. | 66.652398 |
Permanent URL to this publication: http://dx.doi.org/10.5167/uzh-38862
Copeland, S R; Sponheimer, M; Lee-Thorp, J A; de Ruiter, D J; le Roux, P J; Grimes, V; Codron, D; Berger, L R; Richards, M P (2010). Using strontium isotopes to study site accumulation processes. Journal of Taphonomy, 8(2-3):115-127.
PDF - Registered users only
Strontium isotopes (87Sr/86Sr) in tooth enamel reflect the geological substrate on which an animal lived during tooth development. Therefore, strontium isotopes of teeth in fossil cave accumulations are potentially useful in determining whether an animal was native to the vicinity of the site or was brought in by other agents such as predators from farther afield. In this study, we tested the ability of strontium isotopes to help determine the origins of fossil rodents in Gladysvale Cave, South Africa. First, biologically available 87Sr/86Sr ratios were established using modern plants recovered from three geologically distinct areas, the Malmani dolomite, the Hekpoort andesite/basalt, and the Timeball Hill shale, all of which were found to be significantly different. Strontium isotope values were then measured on tooth enamel of rodents from a modern barn owl (Tyto alba) roost in Gladysvale Cave. The results clearly distinguished modern owl roost rodents that came from local dolomite (67%) versus those from other geological zones. We then measured strontium isotope values of enamel from 14 fossil rodent teeth from Gladysvale Cave. The average and range of values for the fossil rodents is similar to that of the modern owl roost rodents. Fifty-seven percent of the fossil rodents probably derived from the local dolomite, while others were brought in from at least 0.8 km away. A pilot study of 87Sr/86Sr ratios of fossil rodent teeth from Swartkrans Member 1 and Sterkfontein Member 4 indicates that 81% and 55% of those rodents, respectively, are from the local dolomite substrate. Overall, this study shows that strontium isotopes can be a useful tool in taphonomic analyses by identifying non-local individuals, and has great potential for elucidating more of the taphonomic history of fossil accumulations in the dolomitic cave sites of South Africa.
Item Type: Journal Article, refereed, original work
Communities & Collections: 05 Vetsuisse Faculty > Veterinary Clinic > Department of Small Animals > Clinic for Zoo Animals, Exotic Pets and Wildlife
DDC: 570 Life sciences; biology
Deposited On: 23 Dec 2010 17:40
Last Modified: 23 Nov 2012 13:52
Publisher: Palaeontological Network Foundation
Carbon dioxide concentrations in the atmosphere have crossed a major threshold: 400 parts per million. Here are five key points on how carbon dioxide is affecting Earth’s atmosphere and the role we're playing in it.
Broadcast meteorologists are a leading source of information about the atmosphere for the public, but many avoid mentioning global warming. New research finds barriers that may keep them from addressing the science of climate change on the air.
More than two days ahead of landfall, it was clear that Hurricane Sandy could bring higher water than New York and New Jersey had seen in decades. But for thousands of people in the area, the threat simply didn’t register. (Part 1 of 2)
The United States faces more varied weather risks than most nations on Earth, but we also have uniquely strong capabilities to confront these risks, thanks to decades of research conducted by government agencies, universities, and the private weather industry.
With its enormous computing capacity and speed, the new NCAR-Wyoming supercomputer will dramatically advance our understanding of Earth, helping to tackle major questions affecting our economy, health, and well-being.
Studies show 63% of hurricane-related deaths occur inland. To help emergency managers prepare, NCAR scientists are pinpointing vulnerable populations using tropical storm winds, census data, and flood maps.
States are having to make tough decisions regarding their water use and their interaction with water. NCAR scientists are involved in collaborative projects in Colorado, Louisiana, and Oklahoma to evaluate the long-term effects of today’s decisions.
The atmosphere has dealt Houston more than a few wild cards over the last few years, including two devastating tropical cyclones and unprecedented drought. While dealing with such weather threats, the nation's fourth largest city is also taking steps to tackle longer-term climate change.
Many facets of everyday life, from boarding a plane to turning on the lights or driving down the highway, are becoming safer and more cost-effective with the help of technologies rooted in atmospheric science.
Days lengthen as spring arrives, but several other signs of the season are showing up earlier and earlier. Some animals and insects aren’t adapting fast enough to this "asynchrony," and there's an increasing disconnect with legal dates that govern hunting and other resource management.
When weather disasters happen, is climate change to blame? The stories, video, and interactives in "Weather on Steroids" explore that question from a number of angles. It turns out that blaming climate change for wild weather's not that simple. Here’s why.
Experts from a variety of disciplines are joining forces to improve how severe-weather warnings are crafted and communicated. The "Weather-Ready Nation" initiative comes on the heels of a year packed with U.S. weather disasters. | <urn:uuid:da81544d-b727-4b5d-a3e4-721498cae771> | 3.109375 | 568 | Content Listing | Science & Tech. | 38.852634 |
The sum of the interior angles of a quadrilateral is equal to 360°. To find the fourth angle of a quadrilateral when the other three angles are known, subtract the number of degrees in the other three angles from 360°.
Example: How many degrees are in the fourth angle of a quadrilateral whose other three angles are 80°, 110° and 95°? Answer: 360° - 80° - 110° - 95° = 75°
If you tried to judge whether the 2008-2009 winter in the Northern Hemisphere was colder or milder than usual based on the conditions where you lived, it would be something like the ancient Indian story of the blind men and the elephant. If you couldn’t see the whole elephant, and you could only touch its leg, or its trunk, or its ears, would you be able to understand the whole creature? Our local weather provides us a similarly limited view of what is happening to the weather or climate on a global scale.
That point is illustrated by this land surface temperature anomaly map spanning the three months of meteorological winter: December 2008-February 2009. Based on data from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA's Terra satellite, the map shows places where the winter was warmer than the 2000-2008 average as red, places where temperatures were near-average as white, and places that were colder than the average as blue.
Across most of Canada and the eastern United States, land surface temperatures were several degrees cooler than the average of roughly the past decade, while temperatures in the Great Plains and the South were above average. Across the Atlantic, western Europe and eastern Russia were cooler than average, but a broad swath of warmer-than-average temperatures stretched from eastern Scandinavia southeastward through western Russia, across central Asia, and down into southern China.
In the Southern Hemisphere, Argentina was experiencing devastating summer drought; the failure of crops and parched vegetation are probably responsible for the elevated land surface temperatures there. Meanwhile, the cold temperatures across northern Australia probably resulted from unusually heavy rains and flooding.
To view animations of monthly anomalies from March 2000 through March 2009, please see Global Maps: Land Surface Temperature.
NASA image created by Jesse Allen, using data provided by the Land Processes Distributed Active Archive Center (LPDAAC). Caption by Rebecca Lindsey.
- Terra - MODIS | <urn:uuid:974958bc-78e7-4153-a06a-88e1e49bb06d> | 3.90625 | 391 | Knowledge Article | Science & Tech. | 23.410559 |
Orography (from the Greek όρος, hill, γραφία, to write) is the study of the formation and relief of mountains, and can more broadly include hills, and any part of a region's elevated terrain. Orography (also known as oreography, orology or oreology) falls within the broader discipline of geomorphology.
Orography has a major impact on global climate, for instance the orography of East Africa substantially determines the strength of the Indian monsoon. In geoscientific models, such as general circulation models, orography defines the lower boundary of the model over land.
When a river's tributaries or settlements by the river are listed in 'orographic sequence', they are in order from the highest (nearest the source of the river) to the lowest or mainstem (nearest the mouth). This method of listing tributaries is similar to the Strahler Stream Order, where the headwater tributaries are listed as category = 1.
Orographic precipitation, also known as relief precipitation, is precipitation generated by a forced upward movement of air upon encountering a physiographic upland (see anabatic wind). This lifting can be caused by two mechanisms:
- The upward deflection of large scale horizontal flow by the orography.
- The anabatic or upward vertical propagation of moist air up an orographic slope caused by daytime heating of the mountain barrier surface.
Upon ascent, the air that is being lifted will expand and cool. This adiabatic cooling of a rising moist air parcel may lower its temperature to its dew point, thus allowing for condensation of the water vapor contained within it, and hence the formation of a cloud. If enough water vapor condenses into cloud droplets, these droplets may become large enough to fall to the ground as precipitation. In parts of the world subjected to relatively consistent winds (for example the trade winds), a wetter climate prevails on the windward side of a mountain than on the leeward (downwind) side as moisture is removed by orographic precipitation. Drier air (see katabatic wind) is left on the descending, generally warming, leeward side where a rain shadow is formed.
Terrain-induced precipitation is a major factor for meteorologists as they forecast the local weather. Orography can play a major role in the type, amount, intensity and duration of precipitation events. Researchers have discovered that barrier width, slope steepness and updraft speed are major contributors for the optimal amount and intensity of orographic precipitation. Computer model simulations for these factors showed that narrow barriers and steeper slopes produced stronger updraft speeds which, in turn, enhanced orographic precipitation.
Orographic precipitation is well known on oceanic islands, such as the Hawaiian Islands or New Zealand, where much of the rainfall received on an island is on the windward side, and the leeward side tends to be quite dry, almost desert-like, by comparison. This phenomenon results in substantial local gradients of average rainfall, with coastal areas receiving on the order of 20 to 30 inches (510 to 760 mm) per year, and interior uplands receiving over 100 inches (2,500 mm) per year. Leeward coastal areas are especially dry—less than 20 in (510 mm) per year at Waikiki—and the tops of moderately high uplands are especially wet—about 475 in (12,100 mm) per year at Wai'ale'ale on Kaua'i.
Another well-known area for orographic precipitation is the Pennines in the north of England. The west side of the Pennines receives more rain than the east because the clouds, generally arriving from the west, are forced up and over the hills, causing the rain to fall preferentially on the western slopes. This is particularly noticeable between Manchester (west) and Leeds (east): Leeds receives less rain because it lies in a rain shadow extending some 12 miles from the Pennines.
- In calculus and related areas, a linear function is a polynomial function of degree zero or one.
- In linear algebra, a linear function is a linear map.
As a kind of polynomial function
In calculus, analytic geometry, and related areas, a linear function is defined by a polynomial of degree zero or one. When there is only one independent variable, these functions are of the form
- f(x) = ax + b,
where a and b are constants.
For a function f(x1, ..., xk) of two or more independent variables, the general formula is
- f(x1, ..., xk) = b + a1x1 + ... + akxk,
and the graph is a hyperplane.
A constant function is also considered linear in this context, as it is given by a polynomial of degree zero. Its graph, when there is only one independent variable, is a horizontal line.
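The general formula above can be evaluated mechanically. Here is a minimal Python sketch (the function names are illustrative, not from the text); an empty coefficient list covers the degree-zero (constant) case:

```python
def linear_function(b, coeffs):
    """Build f(x1, ..., xk) = b + a1*x1 + ... + ak*xk as a callable.

    An empty coefficient list gives a polynomial of degree zero,
    i.e. a constant function.
    """
    def f(*xs):
        if len(xs) != len(coeffs):
            raise ValueError("expected %d arguments, got %d"
                             % (len(coeffs), len(xs)))
        return b + sum(a * x for a, x in zip(coeffs, xs))
    return f

f = linear_function(1.0, [2.0, -3.0])  # f(x, y) = 1 + 2x - 3y
g = linear_function(5.0, [])           # constant function g() = 5
```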
In this context, the other meaning (a linear map) may be referred to as a homogeneous linear function or a linear form. In the context of linear algebra, this meaning (polynomial functions of degree 0 or 1) is a special kind of affine map.
As a linear map
- Homogeneous function
- Nonlinear system
- Piecewise linear function
- Linear interpolation
- Discontinuous linear map
- "The term linear function, which is not used here, means a linear form in some textbooks and an affine function in others." Vaserstein 2006, p. 50-1
- Stewart 2012, p. 23
- Shores 2007, p. 71
- Gelfand 1961
- Izrail Moiseevich Gelfand (1961), Lectures on Linear Algebra, Interscience Publishers, Inc., New York. Reprinted by Dover, 1989. ISBN 0-486-66082-6
- Thomas S. Shores (2007), Applied Linear Algebra and Matrix Analysis, Undergraduate Texts in Mathematics, Springer. ISBN 0-387-33195-6
- James Stewart (2012), Calculus: Early Transcendentals, edition 7E, Brooks/Cole. ISBN 978-0-538-49790-9
- Leonid N. Vaserstein (2006), "Linear Programming", in Leslie Hogben, ed., Handbook of Linear Algebra, Discrete Mathematics and Its Applications, Chapman and Hall/CRC, chap. 50. ISBN 1-584-88510-6 | <urn:uuid:ea39cc43-5c1e-4ccf-ad71-6383920d8d0a> | 3.375 | 518 | Knowledge Article | Science & Tech. | 63.605305 |
There are three classes of meteorites: stony, iron, and stony-iron. A meteorite is typically denser than an ordinary rock of the same size and will be attracted to a magnet. The condition of a meteorite can range from fresh to very weathered. Fresh meteorites have fusion crust, an aerodynamic shape and possibly thumbprints (regmaglypts). Weathered meteorites may be more difficult to recognize due to the deterioration of their meteoritic properties.
Note: This website only addresses identification of common meteorites.
99% of all meteorites are attracted to a strong magnet (as are metal artifacts, slag, and iron ore). If the object is small, hang it, or the magnet, from a string.
This is used as a preliminary test and is recommended to new collectors. If your specimen does not pass this test, it is probably NOT a meteorite!
Note: Samples passing this test are not necessarily meteorites.
Test Specimen - No crust, magnetic
Iron ore is the most common meteor-wrong.
Magnetite especially is very magnetic (hence its name) and hematite may or may not be mildly magnetic.
Both these minerals may possibly be distinguished from meteoritic material by a characteristic known as 'streak'. You can test the streak very simply. A common ceramic tile, such as a bathroom or kitchen tile, has a smooth glazed side and an unfinished dull side which is stuck to the floor/wall when installed. Take the sample which you think is a meteorite and scratch it quite vigorously on the unglazed side of the tile.
If it leaves a black/gray streak (like a soft leaded pencil), the sample is likely magnetite; if it leaves a vivid red to brown streak, it is likely hematite. A stone meteorite, unless it is very heavily weathered, will not normally leave a streak on the tile.
You say that you don't have a ceramic tile? You can use the bottom of a ceramic coffee cup, or the inside of your toilet tank cover (the heavy rectangular lid on top of the tank). It is very heavy, so be careful.
Note:Samples passing this test are not necessarily meteorites. Also, I have had some heavily weathered stone meteorites that leave a slight gray streak so be sure you streak a sample representative of the interior.
hematite streaks red
magnetite streaks black
Here is the test you wanted to avoid. If your stone specimen passed the magnet test, it is time to make a small window to see inside. Yes, I know you fear damaging your suspect meteorite, but this is necessary and will not decrease the value. The goal is to use a file to grind flat a corner or appropriate area on the stone. If the specimen is small and you have a bench vise or Vise-Grip, wrap the specimen in a cloth and secure it so you can file a window. Filing will take some effort (if this is beyond your abilities, see professional testing). Look at the cut surface from several different angles; if you can see shiny metal flakes scattered throughout the stone, it may well be a meteorite. If the interior is plain, then it is probably a meteor-wrong.
Note: Samples passing this test might be meteorites.
Test Specimen in vise
Test Specimen - shiny metal flakes visible in 'window' ( a meteorite!)
All iron meteorites contain nickel. Most stone meteorites contain nickel.
Thus, a chemical test for nickel in normal meteoritic proportions is definitive for meteorites in most cases.
Caution should be taken when acid is used in the test. Helpful info on using this test.
Note: Samples passing this test might be meteorites. Warning: (1) slag can test positive, and (2) R. Korotev relates, "if the pink color fades away after 5 minutes, then the metal contains Ni, but not enough to be of meteoritic origin."
DENSITY & SPECIFIC GRAVITY
If you are able to calculate the density of your suspect meteorite, that information may help you to identify it.
Washington University in St. Louis (WUSTL) has a fine page covering Density and SG.
(See Jim Woodell Video below)
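The suspension method referred to on the WUSTL page can be reduced to one formula. This is a sketch under the usual Archimedes assumptions (fresh water, sample fully submerged); the weights below are hypothetical:

```python
def specific_gravity(weight_air_g, weight_in_water_g):
    """Specific gravity by the suspension (Archimedes) method.

    Weigh the sample in air, then suspended in water; the apparent
    weight loss equals the weight of the displaced water, so
    SG = W_air / (W_air - W_water).
    """
    loss = weight_air_g - weight_in_water_g
    if loss <= 0:
        raise ValueError("sample must weigh less when suspended in water")
    return weight_air_g / loss

# Hypothetical 68 g stone that weighs 48 g suspended in water:
sg = specific_gravity(68.0, 48.0)  # 3.4, within the range of many stony meteorites
```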
I'm sure you have probably looked at meteorite photos, so here is an interactive exercise that puts all the above together: Cascadia's Interactive Meteorite Identification. (Note: to find out what the samples actually are, you have to go through ALL tests, including completely rotating the item and putting a check in the 'Appearance' box at each of its four rotations.)
For more photos of meteor-wrongs see the WUSTL Photo Gallery
McCartney Taylor reviews some of the above steps with you! Stone Meteorite Identification
Ruben Garcia explains visual clues for identifying stony meteorites.
Reconstructing plant invasions using historical aerial imagery and pollen core analysis: Typha in the Laurentian Great Lakes
Article first published online: 15 JUN 2012
© 2012 Blackwell Publishing Ltd
Diversity and Distributions
Volume 19, Issue 1, pages 14–28, January 2013
How to Cite
Lishawa, S. C., Treering, David. J., Vail, L. M., McKenna, O., Grimm, E. C. and Tuchman, N. C. (2013), Reconstructing plant invasions using historical aerial imagery and pollen core analysis: Typha in the Laurentian Great Lakes. Diversity and Distributions, 19: 14–28. doi: 10.1111/j.1472-4642.2012.00929.x
- Issue published online: 14 DEC 2012
- Article first published online: 15 JUN 2012
- Aerial imagery;
- biological invasions;
- invasive species;
- long-term effects;
- pollen core analyses;
Determining the spatial-temporal spread of an invasive plant is vital for understanding long-term impacts. However, invasions have rarely been directly documented given the resources required and the need for substantial foresight. One method widely used is historical photography interpretation, but this can be hard to verify. We attempt to improve this method by linking historical aerial photos to a paleobotanical analysis of pollen cores.
Laurentian Great Lakes coastal wetlands, United States of America.
We chose invasive cattail (Typha) as our model species because it is identifiable from aerial imagery, has persistent, identifiable pollen, and has ecological impacts that appear to be time-dependent. We used Geographic Information Systems, aerial photo-interpretation and field verification to post-dict the invasion history of Typha in several wetland ecosystems. Using ²¹⁰Pb and ¹³⁷Cs sediment dating and pollen classification, we correlated the temporal dominance of Typha to our estimates of per cent coverage at one site. The pollen record was then used to estimate the Typha invasion dynamics for dates earlier than those for which aerial photos were available.
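The ²¹⁰Pb dating step can be illustrated with the simplest age model. The paper does not state which model it used (the constant-rate-of-supply model is a common alternative), so the constant-initial-concentration (CIC) sketch below is only illustrative:

```python
import math

PB210_HALF_LIFE_YR = 22.3                       # half-life of 210Pb, years
DECAY_CONST = math.log(2) / PB210_HALF_LIFE_YR

def cic_age_yr(surface_activity, activity_at_depth):
    """Sediment age under the constant-initial-concentration (CIC)
    210Pb model: unsupported activity decays exponentially with time,
    so age = ln(A0 / Az) / lambda.
    """
    if not 0 < activity_at_depth <= surface_activity:
        raise ValueError("activity must be positive and decline with depth")
    return math.log(surface_activity / activity_at_depth) / DECAY_CONST

# A layer retaining one quarter of the surface activity is two
# half-lives (about 44.6 years) old:
age = cic_age_yr(100.0, 25.0)
```

The independent ¹³⁷Cs peak (from mid-1960s fallout) is typically used to validate such a ²¹⁰Pb chronology.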
Typha spread through time in all study wetlands. Typha pollen dominance increased through time, corresponding with increased spatial dominance. Hybrid cattail, T. × glauca, increased in pollen abundance relative to T. angustifolia pollen through time.
This study illustrates the value of generating historical invasion maps with publicly available aerial imagery and linking these maps with paleobotanical data to study recent (< 100 years) invasions. We determined rates of Typha expansion in two coastal wetland types, validated our mapping methods and modelled the relationship between pollen abundance and wetland coverage, enhancing the temporal precision and breadth of analyses. Our methodology should be replicable with similar invasive plant species. The combination of pollen records and historical photography promises to be a valuable additional tool for determining invasion dynamics.
Nocturnal light and lunar cycle effects on diel migration of micronekton
Limnol. Oceanogr., 54(5), 2009, 1789-1800 | DOI: 10.4319/lo.2009.54.5.1789
ABSTRACT: The roles of nocturnal light and lunar phase in the diel migration of micronekton from a nearshore scattering layer were examined. Migration patterns were measured over six complete lunar cycles using moored upward-looking echosounders while nocturnal surface irradiance was recorded. We hypothesized that animals would remain at a constant isolume at night despite changes in nocturnal illumination between nights. The scattering layer migrated closer to the surface during dark nights than during well-lit ones. However, this movement was not enough to compensate for observed changes in light, and at night animals often remained at light levels higher than those they experience at depth during the day. Light and lunar cycle were not completely coupled, allowing separation of the light and lunar phases. Contrary to the initial hypothesis, lunar phase accounted for substantially more of the variability in layer migration than surface irradiance, showing strong effects on the scattering layer's depth and on animal density within the layer. Changes in layer depth and animal density were amplified slightly by variations in light level but were minimized by the seafloor in shallow areas. The horizontal component of the scattering layer's migration was also affected by lunar phase, with animals remaining further offshore in deeper waters on nights near and during the full moon, even when these were not the nights with the highest light levels. These results suggest that moonlight may be a cue for an endogenous lunar rhythm in the process of diel migration rather than a direct cause.
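The isolume hypothesis the abstract tests can be stated quantitatively with the standard Beer-Lambert attenuation law. This sketch is not the authors' analysis; the attenuation coefficient and light levels below are assumed values chosen only to show the predicted depth shift:

```python
import math

def isolume_depth_m(surface_irradiance, isolume, k_per_m):
    """Depth of a chosen isolume under Beer-Lambert attenuation:
    I(z) = I0 * exp(-k z)  =>  z = ln(I0 / I_iso) / k.
    """
    if isolume >= surface_irradiance:
        return 0.0  # the target light level already occurs at the surface
    return math.log(surface_irradiance / isolume) / k_per_m

# If animals held a fixed isolume, a night four times brighter
# (e.g. near full moon) should deepen the layer by ln(4)/k metres:
k = 0.1                                    # assumed attenuation, per metre
dark_depth = isolume_depth_m(1.0, 1e-4, k)
bright_depth = isolume_depth_m(4.0, 1e-4, k)
deepening = bright_depth - dark_depth      # ~13.9 m under these assumptions
```

The study's finding that observed depth changes fell short of this prediction is what argues against strict isolume-following.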
Common Lisp the Language, 2nd Edition
Every object of type character has three attributes: code, bits, and font. The code attribute is intended to distinguish among the printed glyphs and formatting functions for characters; it is a numerical encoding of the character proper. The bits attribute allows extra flags to be associated with a character. The font attribute permits a specification of the style of the glyphs (such as italics). Each of these attributes may be understood to be a non-negative integer.
The font attribute may be notated in unsigned decimal notation between the # and the \. For example, #3\a means the letter a in font 3. This might mean the same thing as #\α if font 3 were used to represent Greek letters. Note that not all Common Lisp implementations provide for non-zero font attributes; see char-font-limit.
The bits attribute may be notated by preceding the name of the character by the names or initials of the bits, separated by hyphens. The character itself may be written instead of the name, preceded if necessary by \. For example:
#\Control-Meta-Return #\Meta-Control-Q #\Hyper-Space #\Meta-\a #\Control-A #\Meta-Hyper-\: #\C-M-Return #\Hyper-\
Note that not all Common Lisp implementations provide for non-zero
bits attributes; see char-bits-limit.
X3J13 voted in March 1989 (CHARACTER-PROPOSAL) to replace the notion of bits and font attributes with that of implementation-defined attributes. | <urn:uuid:564cb2df-32c9-4c77-aac5-bf3c8375a07b> | 3.46875 | 335 | Documentation | Software Dev. | 42.868659 |
A fission reactor consists basically of a mass of fissionable material usually encased in shielding and provided with devices to regulate the rate of fission and an exchange system to extract the heat energy produced. A reactor is so constructed that fission of atomic nuclei produces a self-sustaining nuclear chain reaction, in which the neutrons produced are able to split other nuclei. A chain reaction can be produced in a reactor by using uranium or plutonium in which the concentration of fissionable isotopes has been artificially increased. Even though the neutrons move at high velocities, the enriched fissionable isotope captures enough neutrons to make possible a self-sustaining chain reaction. In this type of reactor the neutrons carrying on the chain reaction are fast neutrons.
A chain reaction can also be accomplished in a reactor by employing a substance called a moderator to retard the neutrons so that they may be more easily captured by the fissionable atoms. The neutrons carrying on the chain reaction in this type of reactor are slow (or thermal) neutrons. Substances that can be used as moderators include graphite, beryllium, and heavy water (deuterium oxide). The moderator surrounds or is mixed with the fissionable fuel elements in the core of the reactor.

Types of Fission Reactors
A nuclear reactor is sometimes called an atomic pile because a reactor using graphite as a moderator consists of a pile of graphite blocks with rods of uranium fuel inserted into it. Reactors in which the uranium rods are immersed in a bath of heavy water are often referred to as "swimming-pool" reactors. Reactors of these types, in which discrete fuel elements are surrounded by a moderator, are called heterogeneous reactors. If the fissionable fuel elements are intimately mixed with a moderator, the system is called a homogeneous reactor (e.g., a reactor having a core of a liquid uranium compound dissolved in heavy water).
The breeder reactor is a special type used to produce more fissionable atoms than it consumes. It must first be primed with certain isotopes of uranium or plutonium that release more neutrons than are needed to continue the chain reaction at a constant rate. In an ordinary reactor, any surplus neutrons are absorbed in nonfissionable control rods made of a substance, such as boron or cadmium, that readily absorbs neutrons. In a breeder reactor, however, these surplus neutrons are used to transmute certain nonfissionable atoms into fissionable atoms. Thorium (Th-232) can be converted by neutron bombardment into fissionable U-233. Similarly, U-238, the most common isotope of uranium, can be converted by neutron bombardment into fissionable plutonium-239.

Production of Heat and Nuclear Materials
The transmutation of nonfissionable materials to fissionable materials in nuclear reactors has made possible the large-scale production of atomic energy. The excess nuclear fuel produced can be extracted and used in other reactors or in nuclear weapons. The heat energy released by fission in a reactor heats a liquid or gas coolant that circulates in and out of the reactor core, usually becoming radioactive. Outside the core, the coolant circulates through a heat exchanger where the heat is transferred to another medium. This second medium, nonradioactive since it has not circulated in the reactor core, carries the heat away from the reactor. This heat energy can be dissipated or it can be used to drive conventional heat engines that generate usable power. Submarines and surface ships propelled by nuclear reactors and nuclear-powered electric generating stations are in operation. However, nuclear accidents in 1979 at Three Mile Island and in 1986 at Chernobyl have raised concern over the safety of reactors. Another concern over fission reactors is the storage of hazardous radioactive waste. In the United States, where nuclear fission now is neither politically acceptable nor economically attractive, no new plants have been ordered since 1978, but nuclear fission is used extensively for power generation in France, Japan, and a few other nations.
Fusion reactors are being studied as an alternative to fission reactors. The design of nuclear fusion reactors, which are still in the experimental stage, differs considerably from that of fission reactors. In a fusion reactor, the principal problem is the containment of the plasma fuel, which must be at a temperature of millions of degrees in order to initiate the reaction. Magnetic fields have been used in several ways to hold the plasmas in a "magnetic bottle." If development should reach a practical stage of application, it is expected that fusion reactors would have many advantages over fission reactors. Fusion reactors, for instance, would produce less hazardous radioactive waste. Because their fuel, deuterium (an isotope of hydrogen readily separated from water), is far less expensive to obtain than enriched uranium, fusion reactors also would be far more economical to operate.
See G. I. Bell, Nuclear Reactor Theory (1970); R. J. Watts, Elementary Primer of Diffusion Theory and the Chain Reaction (1982).
Device that can initiate and control a self-sustaining series of nuclear-fission reactions. Neutrons released in one fission reaction may strike other heavy nuclei, causing them to fission. The rate of this chain reaction is controlled by introducing materials, usually in the form of rods, that readily absorb neutrons. Typically, control rods made of cadmium or boron are gradually inserted into the core if the series of fissions begins to proceed at too great a rate, which could lead to meltdown of the core. The heat released by fission is removed from the reactor core by a coolant circulated through the core. Some of the thermal energy in the coolant is used to heat water and convert it to high-pressure steam. This steam drives a turbine, and the turbine's mechanical energy is then converted into electricity by means of a generator. Besides providing a valuable source of electric power for commercial use, nuclear reactors also serve to propel certain types of military surface vessels, submarines, and some unmanned spacecraft. Another major application of reactors is the production of radioactive isotopes that are used extensively in scientific research, medical therapy, and industry.
A nuclear reactor is a device in which nuclear chain reactions are initiated, controlled, and sustained at a steady rate, as opposed to a nuclear bomb, in which the chain reaction occurs in a fraction of a second and is uncontrolled causing an explosion.
The most significant use of nuclear reactors is as an energy source for the generation of electrical power (see Nuclear power) and for the power in some ships (see Nuclear marine propulsion). This is usually accomplished by methods that involve using heat from the nuclear reaction to power steam turbines. There are also other less common uses as discussed below.
The physics of operating a nuclear reactor are explained in Nuclear reactor physics.
Just as many conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear power plants convert the thermal energy released from nuclear fission.
The nuclear chain reaction can be controlled by using neutron poisons and neutron moderators to change the portion of neutrons that will go on to cause more fissions. Increasing or decreasing the rate of fission will also increase or decrease the energy output of the reactor.
Control rods that are made of a nuclear poison are used to absorb neutrons. Absorbing more neutrons in a control rod means that there are fewer neutrons available to cause fission, so pushing the control rod deeper into the reactor will reduce its power output, and extracting the control rod will increase it.
In some reactors, the coolant also acts as a neutron moderator. A moderator increases the power of the reactor by causing the fast neutrons that are released from fission to lose energy and become thermal neutrons. Thermal neutrons are more likely than fast neutrons to cause fission, so more neutron moderation means more power output from the reactors. If the coolant is a moderator, then temperature changes can affect the density of the coolant/moderator and therefore change power output. A higher temperature coolant would be less dense, and therefore a less effective moderator.
In other reactors the coolant acts as a poison by absorbing neutrons in the same way that the control rods do. In these reactors power output can be increased by heating the coolant, which makes it a less dense poison.
Nuclear reactors generally have automatic and manual systems to insert large amounts of poison into the reactor to shut the fission reaction down if unsafe conditions are detected.
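The effect of control rods and poisons described above can be caricatured with a toy generation-by-generation model. This deliberately ignores delayed neutrons and real point-kinetics, and the numbers are illustrative only:

```python
def neutron_population(n0, k_eff, generations):
    """Toy neutron count: each generation multiplies by the effective
    multiplication factor k_eff.

      k_eff < 1  subcritical   (chain reaction dies away)
      k_eff = 1  critical      (steady power)
      k_eff > 1  supercritical (power grows)

    Inserting a control rod absorbs neutrons and lowers k_eff;
    withdrawing it raises k_eff.
    """
    n = float(n0)
    for _ in range(generations):
        n *= k_eff
    return n

steady = neutron_population(1e6, 1.00, 100)    # unchanged: critical
scrammed = neutron_population(1e6, 0.95, 100)  # falls by more than 99%
```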
The energy released in the fission process generates heat, some of which can be converted into usable energy. A common method of harnessing this thermal energy is to use it to boil water to produce pressurized steam which will then drive a steam turbine that generates electricity.
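Only part of the thermal energy described above becomes electricity; the steam cycle's efficiency sets the ratio. A sketch with an assumed efficiency typical of light-water plants (the figure is a round illustrative value, not from the text):

```python
def electrical_output_mw(thermal_power_mw, efficiency=0.33):
    """Electrical power for an assumed steam-cycle efficiency
    (roughly a third of the heat becomes electricity in a typical
    light-water plant; the rest is rejected as waste heat)."""
    if not 0.0 < efficiency < 1.0:
        raise ValueError("efficiency must be a fraction between 0 and 1")
    return thermal_power_mw * efficiency

# A hypothetical 3000 MW(thermal) core delivers ~990 MW(electric):
p_electric = electrical_output_mw(3000.0)
```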
The key components common to most types of nuclear power plants are the nuclear fuel, a moderator (in thermal reactors), control rods, a coolant, the reactor pressure vessel, a containment structure, and the steam turbine and generator.
Nuclear power plants typically employ just under a thousand people per reactor (including security guards and engineers associated with the plant but working elsewhere).
See especially http://www.nucleartourist.com, "The Nuclear Tourist", section on "Operation of a nuclear power plant", sub-section on "Operations".
The term "Gen IV" was coined by the DOE in 2000 for the new plant types then under development. In 2003, the French CEA was the first to refer to Gen II types, in Nucleonics Week: "Etienne Pochon, CEA director of nuclear industry support, outlined EPR's improved performance and enhanced safety features compared to the advanced Generation II designs on which it was based." Gen III was likewise first mentioned in 2000, in conjunction with the launch of the GIF plans.
Under 1% of the uranium found in nature is the easily fissionable U-235 isotope and as a result most reactor designs require enriched fuel. Enrichment involves increasing the percentage of U-235 and is usually done by means of gaseous diffusion or gas centrifuge. The enriched result is then converted into uranium dioxide powder, which is pressed and fired into pellet form. These pellets are stacked into tubes which are then sealed and called fuel rods. Many of these fuel rods are used in each nuclear reactor.
Most BWR and PWR commercial reactors use uranium enriched to about 4% U-235, and some commercial reactors with a high neutron economy do not require the fuel to be enriched at all (that is, they can use natural uranium). According to the International Atomic Energy Agency there are at least 100 research reactors in the world fueled by highly enriched (weapons-grade/90% enrichment uranium). Theft risk of this fuel (potentially used in the production of a nuclear weapon) has led to campaigns advocating conversion of this type of reactor to low-enrichment uranium (which poses less threat of proliferation).
Fissionable U-235 and non-fissionable U-238 are both used in the fission process. U-235 is fissionable by thermal (i.e., slow-moving) neutrons. A thermal neutron is one which is moving at about the same speed as the atoms around it. Since all atoms vibrate proportionally to their absolute temperature, a thermal neutron has the best opportunity to fission U-235 when it is moving at this same vibrational speed. On the other hand, U-238 is more likely to capture a neutron when the neutron is moving very fast, producing U-239. This U-239 atom soon decays (via neptunium-239) into plutonium-239, which is another fuel. Pu-239 is a viable fuel and must be accounted for even when a highly enriched uranium fuel is used. Plutonium fissions will dominate the U-235 fissions in some reactors, especially after the initial loading of U-235 is spent. Plutonium is fissionable with both fast and thermal neutrons, which makes it ideal for either nuclear reactors or nuclear bombs.
Most reactor designs in existence are thermal reactors and typically use water as a neutron moderator (moderator means that it slows down the neutron to a thermal speed) and as a coolant. But in a fast breeder reactor, some other kind of coolant is used which will not moderate or slow the neutrons down much. This enables fast neutrons to dominate, which can effectively be used to constantly replenish the fuel supply. By merely placing cheap unenriched uranium into such a core, the non-fissionable U-238 will be turned into Pu-239, "breeding" fuel.
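Why light nuclei make the best moderators can be quantified with the standard average logarithmic energy decrement. This is a textbook formula, not something from this article, and the energy bounds are the usual illustrative values:

```python
import math

def avg_log_energy_decrement(mass_number):
    """Mean logarithmic energy loss per elastic collision (xi) for a
    neutron scattering off a nucleus of mass number A."""
    a = mass_number
    if a == 1:
        return 1.0  # hydrogen: the general formula is singular here
    return 1.0 + (a - 1) ** 2 / (2.0 * a) * math.log((a - 1) / (a + 1))

def collisions_to_thermalize(mass_number, e_fission_ev=2.0e6,
                             e_thermal_ev=0.025):
    """Average number of elastic collisions needed to slow a ~2 MeV
    fission neutron down to thermal energy (~0.025 eV)."""
    return (math.log(e_fission_ev / e_thermal_ev)
            / avg_log_energy_decrement(mass_number))

n_hydrogen = collisions_to_thermalize(1)   # ~18 collisions (light water)
n_carbon = collisions_to_thermalize(12)    # ~115 collisions (graphite)
```

The contrast between hydrogen and carbon shows why water thermalizes neutrons in far fewer collisions than graphite.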
At the end of the operating cycle, the fuel in some of the assemblies is "spent" and is discharged and replaced with new (fresh) fuel assemblies, although in practice it is the buildup of reaction poisons in nuclear fuel that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the buildup of long-lived neutron absorbing fission byproducts impedes the chain reaction. The fraction of the reactor's fuel core replaced during refueling is typically one-fourth for a boiling-water reactor and one-third for a pressurized-water reactor.
Not all reactors need to be shut down for refueling; for example, pebble bed reactors, RBMK reactors, molten salt reactors, Magnox, AGR and CANDU reactors allow fuel to be shifted through the reactor while it is running. In a CANDU reactor, this also allows individual fuel elements to be situated within the reactor core that are best suited to the amount of U-235 in the fuel element.
The amount of energy extracted from nuclear fuel is called its "burn up," which is expressed in terms of the heat energy produced per initial unit of fuel weight. Burn up is commonly expressed as megawatt days thermal per metric ton of initial heavy metal.
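The burn-up figure defined above is a simple ratio. A sketch with hypothetical but representative core numbers (the 45,000 MWd/t result is in the range of typical light-water-reactor discharge burnups):

```python
def burnup_mwd_per_t(thermal_power_mw, full_power_days, heavy_metal_tonnes):
    """Burn up: total thermal energy produced (megawatt-days) divided
    by the initial heavy-metal (uranium) mass of the fuel, in MWd/t."""
    if heavy_metal_tonnes <= 0:
        raise ValueError("heavy-metal mass must be positive")
    return thermal_power_mw * full_power_days / heavy_metal_tonnes

# Hypothetical core: 3000 MW(thermal) for 1200 effective full-power
# days on 80 t of uranium gives 45,000 MWd/t.
bu = burnup_mwd_per_t(3000.0, 1200.0, 80.0)
```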
Soon after the Chicago Pile, the U.S. military developed nuclear reactors for the Manhattan Project starting in 1943. The primary purpose for these reactors was the mass production of plutonium (primarily at the Hanford Site) for nuclear weapons. Fermi and Leo Szilard applied for a patent on reactors on 19 December 1944. Its issuance was delayed for 10 years because of wartime secrecy.
"World's first nuclear power plant" is the claim made by signs at the site of the EBR-I, which is now a museum near Arco, Idaho. This experimental LMFBR operated by the U.S. Atomic Energy Commission produced 0.8 kW in a test on December 20, 1951 and 100 kW (electrical) the following day, having a design output of 200 kW (electrical).
Besides the military uses of nuclear reactors, there were political reasons to pursue civilian use of atomic energy. U.S. President Dwight Eisenhower made his famous Atoms for Peace speech to the UN General Assembly on December 8, 1953. This diplomacy led to the dissemination of reactor technology to U.S. institutions and worldwide.
After World War II, the U.S. military sought other uses for nuclear reactor technology. Research by the Army and the Air Force never came to fruition; however, the U.S. Navy succeeded when they steamed the USS Nautilus (SSN-571) on nuclear power January 17, 1955.
The first portable nuclear reactor, the Alco PM-2A, was used to generate electrical power (2 MW) for Camp Century from 1960.
Natural nuclear fission reactors, such as those that operated at Oklo in Gabon about 1.7 billion years ago, can no longer form on Earth: radioactive decay over that immense time span has reduced the proportion of U-235 in naturally occurring uranium to below the amount required to sustain a chain reaction.
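The decay argument above can be checked numerically. The sketch below assumes the present-day natural abundance of U-235 (~0.72%) and the standard half-lives, and simply undoes the decay of both isotopes:

```python
U235_HALF_LIFE_YR = 7.04e8
U238_HALF_LIFE_YR = 4.468e9

def u235_fraction(years_ago, f235_now=0.0072, f238_now=0.9928):
    """Fraction of U-235 in natural uranium `years_ago` years in the
    past, found by undoing the exponential decay of both isotopes."""
    n235 = f235_now * 2.0 ** (years_ago / U235_HALF_LIFE_YR)
    n238 = f238_now * 2.0 ** (years_ago / U238_HALF_LIFE_YR)
    return n235 / (n235 + n238)

today = u235_fraction(0.0)      # ~0.72%: too dilute for a water-moderated
                                # chain reaction without enrichment
ancient = u235_fraction(2.0e9)  # ~3.7%: comparable to modern reactor fuel
```

Because U-235 decays about six times faster than U-238, natural uranium two billion years ago was effectively "enriched" by geology alone.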
The natural nuclear reactors formed when a uranium-rich mineral deposit became inundated with groundwater that acted as a neutron moderator, and a strong chain reaction took place. The water moderator would boil away as the reaction increased, slowing it back down again and preventing a meltdown. The fission reaction was sustained for hundreds of thousands of years.
These natural reactors are extensively studied by scientists interested in geologic radioactive waste disposal. They offer a case study of how radioactive isotopes migrate through the earth's crust. This is a significant area of controversy as opponents of geologic waste disposal fear that isotopes from stored waste could end up in water supplies or be carried into the environment.
A galaxy is a massive, gravitationally bound system consisting of stars, stellar remnants, an interstellar medium of gas and dust, and dark matter, an important but poorly understood component. The word galaxy is derived from the Greek galaxias (γαλαξίας), literally "milky", a reference to the Milky Way. Examples of galaxies range from dwarfs with as few as ten million (10^7) stars to giants with a hundred trillion (10^14) stars, each orbiting its galaxy's center of mass.
Gray County, Kansas USA
Carbonaceous chondrite (CM2)
A single stone of unknown weight was found by a rancher on a farm 3.4 miles north of Cimarron and sent to the University of Kansas in the early 1950s. A piece was later acquired by a meteorite collector and samples given to NAU in 1998 and pieces to AMNH in 1992 and 1993. Classification and mineralogy (M Zolensky, JSC; and T. Bunch, NAU): olivine ranges from Fa1 to Fa64, with a peak at Fa1-2, average Fa1.2, PMD 11%. Low Ca-pyroxene ranges from Fs2Wo5 to Fs5Wo4, also present are diopside, enstatite-diopside, pigeonite, and chromite. Porphyritic olivine, barred olivine and granular olivine crystals are most abundant, maximum chondrule diameter is 2 mm. Chondrules are sparse, matrix and chondrule rims comprise ~85 vol. % of the meteorite. The percentage of matrix is similar to that of Bells and Nogoya, but the composition of these is lower in S and Mg, and higher in Si; this could be due to terrestrial weathering. Specimens: type specimen 21 g AMNH; 7.1 g, NAU.
Algeria or Morocco
Find: April 2006
Carbonaceous chondrite (CM2, anomalous)
History: Purchased by F. Kuntz in April 2006 in Erfoud, Morocco, and subsequently acquired for the DuPont Collection at PSF.
Physical characteristics: Two pieces from a very fresh, broken, black, porous stone (total weight 12.7 g) with shiny fusion crust on one side.
Petrography: (A. Irving and S. Kuehner, UWS; T. Bunch, NAU) Sparse mineral grains, carbon-rich objects, dust-armored chondrules and rare refractory inclusions occur in a heterogeneous, very fine grained, porous matrix composed mainly of bladed phyllosilicates with some pentlandite and calcite (clearly visible under incident UV light). Olivine grains (up to 2 mm across) are commonly armored by fine, polycrystalline "dust" and contain inclusions of Ni-rich troilite, chromite, millerite, kamacite and taenite. Both pentlandite and magnesian olivine occur as separate smaller, angular grains. The carbon-rich objects (up to 50 µm across) consist of either pure graphite or a chlorine-rich organic phase. Chondrules consist of PO and POP olivine, many having fine-grained polycrystalline "dust" rims. One small refractory inclusion is composed of Mg-Al spinel with inclusions of perovskite.
Mineral compositions and geochemistry: Larger zoned olivine grains (e.g., Fa17.9-33.9, Fa39.9–66); smaller homogeneous olivine grains (e.g. Fa1.5, Fa19.9); pentlandite (Ni = 26.2 wt. %). Matrix phyllosilicate material could not be analyzed quantitatively, but has very consistent proportions of Mg, Fe, Si and S. The chlorine-rich organic phase contains ~17 wt.% Cl and ~32 wt. % C, but no detectable N and minor O. Oxygen isotopes: (D. Rumble, CIW) Replicate analyses by laser fluorination gave, respectively, δ17O = 0.494, 1.166; δ18O = 6.224, 7.049; Δ17O = -2.780, -2.542 (all ‰).
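As a consistency check, oxygen three-isotope values like these can be tested against the standard linear relation Δ17O ≈ δ17O − 0.52·δ18O. Both the 0.52 slope and the pairing of the smaller replicate values with δ17O (typical for CM chondrites) are assumptions of this sketch, not statements from the report.

```python
def cap_delta17(d17O, d18O, slope=0.52):
    """Mass-fractionation deviation: Delta17O = d17O - slope * d18O (per mil).
    The 0.52 slope is the commonly assumed terrestrial fractionation line."""
    return d17O - slope * d18O

# Reported replicate analyses (per mil), smaller value taken as delta-17O.
replicates = [(0.494, 6.224, -2.780), (1.166, 7.049, -2.542)]
for d17O, d18O, reported in replicates:
    calc = cap_delta17(d17O, d18O)
    print(f"calculated {calc:+.3f} vs reported {reported:+.3f}")  # agree to ~0.05 per mil
```

With this pairing the calculated and reported Δ17O values agree to within about 0.05‰, consistent with a CM-like composition well below the terrestrial fractionation line.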
Classification: Carbonaceous chondrite (CM2, anomalous); minimal weathering. The presence of chlorine-rich carbon compounds, which may be enigmatic chlorinated hydrocarbons, makes this specimen potentially unique among CM chondrites.
Specimens: A total of 2.7 g of sample is on deposit at UWS and the remainder of the mass (10 g) is at PSF. | <urn:uuid:80acb093-96f4-4033-a332-40732e077e70> | 2.734375 | 911 | Knowledge Article | Science & Tech. | 54.705157 |
Family: Uncertain (Hyloidea)
Country distribution from AmphibiaWeb's database: Brazil
View distribution map using BerkeleyMapper.
IUCN (Red List) status: Data Deficient (DD).
For Red List information on this species, see the IUCN species account.
From the IUCN Red List Species Account:
This species is known only from the municipality of Santa Teresa, in the state of Espírito Santo, south-eastern Brazil, at 650-675m asl. The limits of its distribution are not known, and it might occur more widely.
Habitat and Ecology
It is found in forest, including secondary forest (but not in open areas) where it lives in leaf-litter on the forest floor. Assuming that it breeds in the same way as Zachaenus parvulus, the egg clutch is deposited on the ground under leaves, and the terrestrial larvae live and develop in a hole under the leaf-litter.
It is a hard species to find, and so its population status is unknown. However, it has been found in the type locality since the initial collection.
The area where this species is found is quite well protected (as a biological reserve), but habitat loss is taking place nearby (where it might occur), due to agricultural development, creation of wood plantations, logging, human settlement and tourism.
It occurs in the Reserva Biológica Augusto Ruschi.
Oswaldo Luiz Peixoto, Débora Silvano 2004. Zachaenus carvalhoi. In: IUCN 2012 | <urn:uuid:efb865d8-eb3c-4112-9aba-e65218bd19d1> | 2.765625 | 340 | Knowledge Article | Science & Tech. | 43.003692 |
The research findings are detailed in the Aug. 8 issue of Science magazine, in the article “Brown Carbon Spheres in East Asian Outflow and Their Optical Properties,” co-authored by Crozier, Anderson and Duncan Alexander, a former postdoctoral fellow at ASU in the area of electron microscopy, and the paper’s lead author. So-called brown carbons – a nanoscale atmospheric aerosol species – are largely being ignored in broad-ranging climate computer models, Crozier and Anderson say.
Studies of the greenhouse effect that contribute directly to climate change have focused on carbon dioxide and other greenhouse gases. But there are other components in the atmosphere that can contribute to warming – or cooling – including carbonaceous and sulfate particles from combustion of fossil fuels and biomass, salts from oceans and dust from deserts. Brown carbons from combustion processes are the least understood of these aerosol components.
…The ASU researchers say the effect of brown carbon is complex because it both cools the Earth’s surface and warms the atmosphere. “Because of the large uncertainty we have in the radiative forcing of aerosols, there is a corresponding large uncertainty in the degree of radiative forcing overall,” Crozier says. “This introduces a large uncertainty in the degree of warming predicted by climate change models.”
A key to understanding the situation is the light-scattering and light-absorbing properties – called optical properties – of aerosols, Crozier and Anderson say.
….It’s typical for climate modelers to approximate atmospheric carbon aerosols as either non-absorbing or strongly absorbing. “Our measurements show this approximation is too simple,” Crozier says. “We show that many of the carbons in our sample have optical properties that are different from those usually assumed in climate models.”.... | <urn:uuid:08fa9412-6bf9-496d-834b-14a4d303d537> | 3.390625 | 382 | Personal Blog | Science & Tech. | 29.467381 |
http://SpaceWeather.com is the link for solar flare and aurora alerts, also has information on meteor showers for scatter.
Ham radio has a strong historical connection to radio astronomy and space science. Radio astronomy is the study of radio emissions from objects outside of the Earth's atmosphere, such as the Sun, solar system planets, stars, interstellar molecular clouds, supernovas, black holes, galaxies, quasars, and other mysterious objects. The first serious radio astronomy studies were carried out by an American ham, Grote Reber. Some excellent links to amateur radio astronomy are:
Solar activity and its effects on the ionosphere play a dominant role in the propagation of long-distance high-frequency (HF) amateur radio and other communication signals. This is another area where amateurs are very active. Most people are familiar with the 11-year sunspot cycle, which nears its maximum in the early part of 2000. Sunspot numbers, solar flux, and the A and K propagation indices are indicators of the quality of long-distance radio propagation and are broadcast by WWV 18 minutes after each hour (2.5, 5, 10, and 15 MHz).
Other sources of solar activity information can be found at:
Another activity with recent strong amateur interest is SETI, or the Search for Extraterrestrial Intelligence. Professional interest in this area started with radio astronomers such as Frank Drake, and several programs of serious listening for radio transmissions from other civilizations have been carried out since the 1960s. The main work in this field is organized by the SETI Institute, http://www.seti-inst.edu, with headquarters in Mountain View, California. Since this is the ultimate SWL activity, a large number of amateurs have organized the SETI League, http://seti1.setileague.org/homepg.htm. The amateurs generally use 9-12 ft parabolic dishes from first-generation satellite television systems and can't hear as well or as far, but are pioneering low-cost low-noise communications which have other applications. Who knows? -- our first contact with an outside civilization may be through a nearby robot exploration probe similar to the many we have sent out through our solar system!
Other ham radio space related activites such as meteor scatter, ham radio balloons and satellites, etc. can be found at:
Amateur astronomy and general astronomy information can be found at:
How many solutions does this equation x1+x2+x3 = 11, given x1,x2,x3 >0 have?
This is solved using combinations.
How can this be solved?
But it is not matching with the answer given.
By using that theorem, it would take two from the 10 available gaps, 10C2.
But I have the answer as 13C11.
I think they are considering it as X1,X2,X3 >=0. So we can have 13 gaps and 13C2 which is equal to 13C11.
I don't know whether my understanding is wrong; please correct me.
How many solutions does this equation have: x1 + x2 + x3 = 11, given x1, x2, x3 > 0?

Consider an eleven-inch board marked in one-inch intervals.

It can be divided into three nonzero pieces by choosing any two of the ten inch-marks and cutting the board there.

Therefore, there are: 10C2 = 45 solutions.
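Both counts discussed in this thread can be confirmed by brute force and compared against the binomial coefficients (a quick Python sketch using the standard-library `math.comb`):

```python
from itertools import product
from math import comb

# Brute-force count of integer solutions to x1 + x2 + x3 = 11.
positive = sum(1 for x in product(range(1, 12), repeat=3) if sum(x) == 11)
nonneg   = sum(1 for x in product(range(0, 12), repeat=3) if sum(x) == 11)

print(positive, comb(10, 2))               # x_i > 0: "cut 2 of the 10 gaps" -> 45
print(nonneg, comb(13, 2), comb(13, 11))   # x_i >= 0: C(13,2) = C(13,11) -> 78
```

So the 10C2 = 45 answer is for strictly positive x_i, while 13C11 = 13C2 = 78 corresponds to allowing zeros, exactly as suggested above.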
What is Fog Lift?
context - when walking through a swamp the area is foggy,
later in the day when the sun is higher the fog has "lifted"
What really happened?
Fog doesn't really lift. As the sun warms the surface
the warmed ground warms the air, raising the air temperature
above the temperature at which water vapor condenses into a
cloud (fog is just a cloud near the Earth's surface). The sun
can penetrate the early morning fog enough to warm the surface
and create the appearance that the fog is lifting.
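The "temperature at which water vapor condenses" in the answer is the dew point. A small sketch using the Magnus approximation (the coefficients 17.625 and 243.04 °C are standard assumed values, not part of the original answer):

```python
import math

def dew_point_c(temp_c, rel_humidity):
    """Approximate dew point (deg C) via the Magnus formula.
    rel_humidity is a fraction in (0, 1]."""
    a, b = 17.625, 243.04  # assumed standard Magnus coefficients
    gamma = math.log(rel_humidity) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Saturated swamp air at dawn: air temperature equals the dew point, so fog forms.
# A few degrees of solar warming lifts the air above the dew point and the fog evaporates.
print(f"dew point of saturated 12 C air: {dew_point_c(12.0, 1.00):.1f} C")
print(f"dew point of 20 C air at 50% RH: {dew_point_c(20.0, 0.50):.1f} C")
```

At 100% relative humidity the dew point equals the air temperature; as the sun warms the air, the temperature rises above the dew point and the fog appears to "lift."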
Argonne National Laboratory
Update: June 2012 | <urn:uuid:f8826430-a8f0-4904-bbe2-256bc334a8b3> | 3.5 | 141 | Knowledge Article | Science & Tech. | 54.877667 |
This section illustrates how to use macros in C.
Macros are identifiers that represent statements or expressions. To associate meaningful identifiers with constants, keywords, and statements or expressions, the #define directive is used. As you can see in the given example, we have defined a macro, SQUARE(x) x*x. Here the macro determines the square of the given number.
Macro Declaration: #define name text
Here is the code (a minimal example using the SQUARE macro described above):

#include <stdio.h>

#define SQUARE(x) x*x

int main() {
    int num = 5;
    printf("Square of %d is %d\n", num, SQUARE(num));
    return 0;
}

Output will be displayed as:

Square of 5 is 25
Superstorm! Part 5
The weather can be hazardous to your health. The high winds, heavy rains, and lightning produced by big storms are all potential killers.
Space weather can be hazardous to your health, too. Big storms on the Sun shoot particles that could injure or kill an unprotected astronaut. These storms are known as solar flares. They're giant explosions above the Sun's surface. They produce outbursts of energy and protons -- the positively charged particles in the nucleus of an atom.
The protons race into space at up to a third of the speed of light. When they hit the human body, they damage cells. High doses can cause cancer or other long-term problems. And the highest doses can kill within days or even hours.
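The arrival time follows from a one-line estimate (the Sun–Earth distance of 1 AU and the speed of light are standard assumed values):

```python
AU = 1.496e11   # Sun-Earth distance, meters
C  = 2.998e8    # speed of light, m/s

# Protons at a third of light speed cover 1 AU in about 25 minutes.
travel_s = AU / (C / 3)
print(f"flare protons arrive in about {travel_s / 60:.0f} minutes")
```

Light itself takes about 8.3 minutes, so the particle radiation follows the visible flare by only a quarter of an hour or so — very little warning time for an unshielded crew.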
Astronauts in Earth orbit are shielded from solar flares by their spacecraft and Earth's magnetic field. And if necessary, they can return to Earth within hours.
But astronauts on the Moon or on the way to Mars would be in greater peril -- outside Earth's protective magnetic field, and unable to get back home quickly.
Fortunately, solar flares that are strong enough to do serious damage aren't all that common.
Still, astronauts who venture out of Earth orbit will need protection. On the Moon, they can cover their shelters with moondust. And on the way to Mars, they may have emergency shelters that are surrounded by water. So while water may be a deadly part of storms here on Earth, it just might save people from storms in space.
Script by Damond Benningfield, Copyright 2009
For more skywatching tips, astronomy news, and much more, read StarDate magazine. | <urn:uuid:b016de1b-be0d-4d81-87ef-3d2d7220ca85> | 3.390625 | 338 | Truncated | Science & Tech. | 63.223261 |
NEMATOMORPHA HAVE:
- Long, thin and cylindrical bodies
- A straight gut, which is degenerate in adults
- The cuticle and body organisation of adults similar to that of Nematoda
- Adults that are free-living but non-feeding
- Separate sexes and internal fertilisation
- Larval stages that are parasitic on arthropods
- Freshwater, terrestrial and marine habitats
Greek: nematos = thread, morphe = form
About 250 Nematomorpha species have
been described so far. Their common name comes from the superstition that the worms are born from horse hair falling into water.
They range in size from 10 cm to over 100 cm in length,
but are always less than 3 mm in diameter, and are parasites of insects (often
beetles or grasshoppers) and other arthropods. They are found world wide.
Locomotion is achieved by the
same method as the Nematoda. The main difference
between Nematomorpha adults and Nematoda adults is the degenerate gut in the Nematomorpha.
The role of the adults is not feeding, but reproduction and
dispersal, and they have a featureless body, as the name, hair worm, suggests.
They are free living, usually in freshwater or damp soil where the smaller male
swims or wriggles towards the relatively inactive, larger female to mate.
The main feeding is done in the juvenile and larval stages (the drawing on the right shows the larval stage). These resemble
the adult Kinorhynchans, some species of Priapidula, and Loriciferans. The larvae are equipped with
eversible stylets that may be used in penetrating the host's body.
The female lays her eggs in long strings in water. If the host is a terrestrial insect it is stimulated by some unknown mechanism to find water when the parasitic larval hairworm is ready to emerge as an adult. | <urn:uuid:34ee520a-4ed5-47af-b141-2157aae9761c> | 3.6875 | 401 | Knowledge Article | Science & Tech. | 35.686304 |
IMPORTANT NOTE: THESE NOTES ARE STILL UNDER DEVELOPMENT. PLEASE WAIT UNTIL AFTER THE LECTURE WITH HANDING ANYTHING IN, OR TREATING THE NOTES AS READY TO READ.
We've spent quite a bit of time talking about categories, and special entities in them - morphisms and objects, and special kinds of them, and properties we can find.
And one of the main messages visible so far is that as soon as we have an algebraic structure, and homomorphisms, this forms a category. More importantly, many algebraic structures, and algebraic theories, can be captured by studying the structure of the category they form.
So obviously, in order to understand Category Theory, one key will be to understand homomorphisms between categories.
1.1 Homomorphisms of categories
A category is a graph, so a homomorphism of categories should be a homomorphism of the underlying graphs that respects the extra structure. Thus, we are led to the definition:
Definition A functor from a category C to a category D is a graph homomorphism F0,F1 between the underlying graphs such that for every object A:
- F1(idA) = idF0(A)
and for every pair of composable morphisms f, g:
- F1(gf) = F1(g)F1(f)
Note: We shall consistently use F in place of F0 and F1. The context should be able to tell you whether you are mapping an object or a morphism at any given moment.
1.1.1 Examples and non-examples
- Monoid homomorphisms
- Monotone functions between posets
- Pick a basis for every vector space, and send each linear map to the matrix representing that morphism in the chosen bases.
1.2 Interpreting functors in Haskell
One example of particular interest to us is the category Hask. A functor in Hask is something that takes a type, and returns a new type. Not only that, we also require that it takes arrows and returns new arrows. So let's pick all this apart for a minute or two. Taking a type and returning a type means that you are really building a polymorphic type class: you have a family of types parametrized by some type variable. For each type a you get a type F a, and fmap lifts each function a -> b to a function F a -> F b.
The rules we expect a Functor to obey seem obvious: translating from the categorical intuition we arrive at the rules
- fmap id = id
- fmap (g . f) = fmap g . fmap f
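These laws can be spot-checked concretely for the list functor — in Python terms, fmap over lists is just mapping a function across the elements (a sketch for intuition, not Haskell):

```python
def fmap(f, xs):
    # The list functor: apply f to every element, preserving structure.
    return [f(x) for x in xs]

identity = lambda x: x
f = lambda x: x + 1
g = lambda x: x * 2
xs = [1, 2, 3]

# Law 1: fmap id = id
assert fmap(identity, xs) == xs
# Law 2: fmap (g . f) = fmap g . fmap f
assert fmap(lambda x: g(f(x)), xs) == fmap(g, fmap(f, xs))
print("functor laws hold on this sample")
```

Of course a finite check is not a proof, but it shows what the laws demand: mapping must touch only the elements, never the shape of the container.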
data Boring a = Boring

instance Functor Boring where
  fmap f = const Boring
2 Natural transformations
3 The category of categories
- For now, I wanna introduce functors as morphisms of categories, then introduce the category of categories, and the functor categories, and then talk about functors as containers and the HAskell way of dealing with them. | <urn:uuid:773ae548-5cb4-4cf1-8113-7956fc751e1a> | 3.453125 | 596 | Audio Transcript | Software Dev. | 47.635123 |
Date Added: February 16, 2010 09:19:15 AM
Category: Regional: Europe: United Kingdom: Computers & Internet
PHP is a scripting language designed for web development; it can be embedded into HTML. PHP runs on a web server, which takes PHP code as input and produces web pages as output. The language is also used for command-line scripting and client-side GUI applications. PHP has been deployed on many web servers, operating systems and platforms, and works with many database management systems. The complete source code is available to users for free, so users can build on and customize the language according to their requirements.
PHP was created by Rasmus Lerdorf in 1995. PHP's main implementation is now produced by The PHP Group and is released under the PHP License. The Free Software Foundation considers it free software.
PHP was originally designed to create dynamic web pages. It is a server-side scripting language, similar to other server-side technologies such as Microsoft's ASP.NET, Sun Microsystems' JavaServer Pages and mod_perl. PHP frameworks provide building blocks and a design structure to promote rapid application development (RAD). Some of these frameworks include CakePHP, PRADO, Symfony and Zend Framework.
PHP acts as a filter: it takes input from a file or stream containing text and instructions and outputs another stream of data, most commonly HTML. The most popular deployment architecture for web applications is LAMP, in which PHP is bundled alongside Linux, Apache and MySQL (the P in LAMP may also refer to Perl or Python).
PHP also has extension interfaces to a number of systems, such as IRC and the Windows API, and PHP extensions are used in creating Macromedia Flash movies. Object-oriented features were integrated in Version 3, though with limited functionality. PHP 5 has robust object capabilities such as interfaces, exceptions, destructors and abstraction, which are a great help in the development of a website.
PHP gained wide-spread popularity with version 4 and is considered one of the top languages for server-side scripting. The language is easy to learn: PHP arrays and variables can hold values of any type, variables need not be declared, and the syntax is remarkably simple.
Branch of biology that deals with the reproduction, development and growth, anatomy, physiology, and behavior of animals.
Improving strategies to assess competitive effects of barred owls on northern spotted owls in the Pacific Northwest [ More info] A scientific study has determined that survey methods designed for spotted owls do not always detect barred owls that are actually present in spotted owl habitat.
Integrated Science: Florida Manatees and Everglades Hydrology [ More info] Description of research and monitoring work proposed for 2008 to combine hydrologic models with manatee distribution and movement models near and within Everglades National Park.
Interagency Grizzly Bear Study Team (IGBST) [ More info] Description of research program for immediate and long-term management of grizzly bears (Ursus arctos horribilis) inhabiting the Greater Yellowstone Ecosystem. Includes links to reports in PDF format and cooperating organizations.
Invasive crayfish in the Pacific Northwest [ More info] These organisms have negative effects on local ecosystems, but we don't yet know how extensively they have spread. Here is a key to help people identify them.
Investigating White-Nose Syndrome in Bats [ More info] Explains the nature and history of this fungal infection that is killing large numbers of bats in the northeast US.
Lead Poisoning in Wild Birds [ More info] Even though lead usage has declined due to environmental awareness and regulation, several human sources of lead continue to affect birds. Hunting ammunition and fishing gear are ingested by the birds, with toxic effects.
Managing habitat for grassland birds: a guide for Wisconsin [ More info] Guide to identification, selection, and management of grassland habitats in Wisconsin to conserve the populations of grassland birds. Includes glossary, references, bird lists, graphs, and maps.
Maps of distribution and abundance of selected species of birds on uncultivated native upland grasslands and shrubsteppe in the northern Great Plains [ More info] Links to maps of breeding distributions of bird species on grasslands and shrublands in the northern Great Plains. Maps can also be downloaded from *.zip files in HTML format.
Mayflies of the United States [ More info] Links to checklists and species maps showing the distribution of mayflies in the United States with links to other information websites on mayflies and reference list.
Miscellaneous fungal diseases [ More info] Chapter of Field Manual of Wildlife Diseases on miscellaneous fungi with information on fungal skin and subcutaneous lesions or mycosis primarily in birds.
Galileo's observations of the phases of Venus do not prove that
the Copernican model is correct. But they do show that the Ptolemaic
model is wrong, for in that model Venus does not go through the full
range of phases.
As to whether a geocentric or heliocentric model is correct -
Tycho Brahe proposed a compromise model in which the sun orbits
the earth & all the other planets orbit the sun. It's generally
recognized that this is _kinematically_ equivalent to the Copernican
model. What doesn't seem to be so widely recognized - though I think
most general relativists will agree if they think about it - is that a
SEMI-Tychonic model is valid. I emphasize SEMI because two important
changes have to be made in Tycho's model.
1) Tycho wanted the earth non-rotating but that won't work. If
everything rotated around the earth once in 24 hours, objects beyond
Neptune would have to be moving faster than light.
2) The "fixed stars" have to be "fixed" with respect to the
sun, rather than the earth, so that there will be stellar parallaxes.
But the major point - that the earth is stationary at the center
of the planetary system - remains.
This semi-Tychonic model can be worked out consistently in
general relativity. Though I think there is nothing really new about
this claim, I have never actually seen it treated in the literature, &
some relativists (e.g., Fock) have disagreed with the idea that one can
have a valid geocentric model. I have a brief paper on this which (as
coincidence would have it) I hope to be able to give at the Ohio Section
meeting of the American Physical Society in May.
George L. Murphy | <urn:uuid:34fa68b7-c943-4742-bee1-463e4db0ebe8> | 2.734375 | 404 | Comment Section | Science & Tech. | 52.087722 |
Once you realize that you need to be concerned with "how confident am I in this measurement?" - precision, in other words, you have 2 problems:
Thankfully, (luckily, perhaps) when you reported your measurement you didn't say "exactly 25.45 centimeters"! If you had, you would have made yourself an example of what physicists call an "idiot". Why? First of all, 25.45 cm is a measurement - made by a real measuring instrument, in this case a meter stick. A typical meter stick has marks at 1 millimeter intervals - your 25.45 cm measurement is pictured at left. The arrow appears to be half way between the 25.4 cm mark and the 25.5 cm mark - therefore, you were justified in calling the length 25.45 cm.
Notice that the last digit of the measurement is an estimate, though. Certainly, there is nothing wrong with this. In fact, it is clearly more reasonable to call the length 25.45 cm than to call it 25.4 cm or 25.5 cm.
However, how could you justify saying that the measurement was exactly 25.45 cm? You could certainly judge the measurement to the nearest 0.25 mm, and a strong case could be made, probably, for an estimate to the nearest 0.1 mm in this case. But exactly? That's just an unjustified guess! A careful estimate is not the same as a wild guess!
In the same way, you would not be justified in calling your measurement 25.452 cm. Doing this would mean that you claim to be able to divide a single millimeter into 100 equal parts - accurately - by eye! (You are claiming that the measurement is 52/100 of the distance between 25.4 cm and 25.5 cm.) If you make a claim like this, you had better be ready to back it up!
Apparently, the precision of a measurement is limited by the markings on the measuring instrument. The best that a competent physicist can be expected to do is estimate one digit between the finest markings on the measuring scale. (Physicists are expected to be able to do that, by the way.)
This limitation on the precision of a measurement is commonly called scale error. Actually, this is unfortunate, since "error" carries the connotation of "mistake" - but
Scale errors are not mistakes.
For this reason, I prefer the term scale uncertainty, although "scale error" is more commonly used. The scale uncertainty is ultimate limit to the precision of a measurement, but it might not be the largest factor affecting the precision of a measurement.
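One common reporting convention — an assumption of this sketch, since the text itself only requires estimating one digit between the finest markings — takes the scale uncertainty to be half the finest division of the instrument:

```python
def report(reading, finest_division):
    """Format a measurement with its scale uncertainty, taken here
    as half the finest division of the instrument (one convention)."""
    u = finest_division / 2
    return f"{reading} +/- {u} cm"

# Meter stick marked in millimeters (0.1 cm divisions):
print(report(25.45, 0.1))  # -> 25.45 +/- 0.05 cm
```

Note that the stated reading keeps exactly one estimated digit (the hundredths place), matching the uncertainty: writing 25.452 cm here would claim more precision than the scale can support.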
At this point, you should realize that: | <urn:uuid:1d11ad96-de10-4513-801a-6d11aee08da2> | 3.09375 | 548 | Knowledge Article | Science & Tech. | 70.388226 |
Early Analog Computers (Super-Brains) (Jun, 1932)
Calculations in Higher Mathematics Performed by Complex Machinery
• FOR thousands of years after arithmetic and geometry had been worked out, these forms of mathematics were sufficient for most purposes of even learned men. However, when science became complex, and especially in the development of modern astronomy, it was apparent that new methods of calculation were needed. Two hundred and fifty years ago, Sir Isaac Newton and Wilhelm Leibnitz, working independently, devised methods of procedure which have been refined into what is now called, for short, calculus. Without this, modern science and engineering could never have reached their present development.
To explain the difference between arithmetic and the calculus, an example may be given. A body falling, near the earth, has its speed accelerated 32.16 feet per second, each second; the distance through which it will fall in a given time is equal to the square of the number of seconds, multiplied by half the acceleration. At any point near the surface of the earth, the acceleration of gravity is fairly uniform, over the distance through which a body can fall.
But let us suppose a body falling upon the earth from a height of 7,920 miles. Where it starts, the acceleration of gravity is only one-ninth as great, or about 3.57 feet per second per second. Not only the speed of the body, but the rate at which it changes, will increase steadily. We cannot therefore, by simple multiplication, compute the time which it will take to fall, or the velocity with which it will arrive.
It is evident, therefore, that for problems such as would arise in the design and operation of a space-flying rocket, we must employ higher mathematics—the calculus.
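A numerical sketch makes the difficulty concrete. The integration below is an illustration only (SI units, round Earth values, and a simple semi-implicit Euler step are choices of this sketch): it starts a body from rest 7,920 miles up — three Earth radii from the center, where gravity is one-ninth its surface value — and steps it down under inverse-square gravity.

```python
G_SURFACE = 9.81      # m/s^2 at the surface
R_EARTH   = 6.371e6   # m

def fall_time_and_speed(r0, dt=0.5):
    """Integrate a body falling from rest at radius r0 under
    a = g * (R/r)^2, using a semi-implicit Euler step."""
    r, v, t = r0, 0.0, 0.0
    while r > R_EARTH:
        a = G_SURFACE * (R_EARTH / r) ** 2  # gravity strengthens as r shrinks
        v += a * dt
        r -= v * dt
        t += dt
    return t, v

t, v = fall_time_and_speed(3 * R_EARTH)  # 7,920 miles up = two Earth radii of altitude
print(f"fall takes ~{t / 3600:.2f} h, impact at ~{v / 1000:.1f} km/s")
```

The answer — roughly an hour and a quarter, impacting near 9 km/s — cannot be obtained by the simple constant-acceleration rule; it requires exactly the kind of step-by-step integration of a varying quantity that the calculus (and the differential analyzer described below) was built to handle.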
This is a work of great difficulty, requiring a great deal of mental labor. Various machines, such as the familiar adding machine, have been made to shorten arithmetical tasks, and others, more complicated, for specific tasks in higher mathematics. That illustrated above is the most complex yet devised.
The Differential Analyzer
In the calculus, we deal with quantities which are continually varying. In the complicated machine shown, we have mechanical movements, operating at varying rates of speed; and acting through various levers on a pen to show the effect of all the combined factors of the problem in the final result.
To describe the functioning of the “differential analyzer” (for this is the name of the apparatus, designed by Dr. Vannevar Bush, of the Massachusetts Institute of Technology) would require considerable explanation of the calculus. However, it is based, among other things, on the fact that the change, or rate of change, of a value entering into the computations may be represented by a curve on a sheet of paper; and by having an operator to keep a pointer on each of the curves submitted to the apparatus, all factors of the problem are submitted for analysis.
The results given by the machine are not absolutely exact; what is sought for is a degree of exactitude corresponding with the technique involved in a practical problem. Few readings and measurements, on ordinary apparatus, are more accurate than one-tenth of one per cent.
Mechanical problems were encountered in the construction of the machine; particularly that of taking up backlash in the driving apparatus. It was checked by setting the machine to solve a problem the mathematical answer of which is known to be a circle; and compensating this.
The sketch at the head of this article shows one of a number of units which are incorporated in this device, for the purpose of moving parts in exact correspondence to an applied signal. The operation of tightening on a revolving drum a friction band which is wrapped around another drum, revolving in the opposite direction, has the effect of multiplying the force of the pull; somewhat as variation of input voltage on the grid of a vacuum tube produces a corresponding, but GREATER voltage change in the plate coupling device. The analogy extends to the fact that the mechanical amplifier, like the tube amplifier, can oscillate; and it does so when its amplification factor is too high—because of output energy being fed back into the input. This was overcome by the use of a flywheel, loose on the shaft and revolving with it only through friction, to damp out oscillations.
The whole machine, as constructed, has eighteen shafts, through which varying factors may be regulated. It fills a good-sized room. Yet, as its constructor says: “It is not yet completed; it is questionable whether it will ever be complete, for it can always be extended by the addition of units to cover greater order or complexity of equations.”
Another machine, illustrated below on the same page, has for its specific purpose the separation of a complex curve into its components. As those who are interested in radio theory know, a ‘modulated’ wave may be considered as the sum of a number of waves. This machine, with its brass cylinders tracing an irregular curve, is intended to show the simpler curves which lie beneath.
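The job this second machine performed mechanically, separating a compound wave into its component waves, is what a discrete Fourier transform does today. A minimal sketch, using an invented two-component signal:

```python
import numpy as np

# Separate a compound wave into components, as the machine did mechanically.
# The two-component signal below is an invented example.
t = np.arange(1024) / 1024.0
signal = 2.0 * np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

# Normalized amplitude spectrum: bin k holds the strength of k cycles/record.
spectrum = np.abs(np.fft.rfft(signal)) / (len(signal) / 2)
peaks = np.nonzero(spectrum > 0.1)[0]
print(peaks.tolist())   # -> [5, 60]
```

The transform recovers both hidden components, with their amplitudes (2.0 and 0.5) read directly off the spectrum.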
For instance, the heat received from the sun fluctuates slightly from day to day; and the Smithsonian Institution has long been conducting a survey based on continual readings, taken in different parts of the world. The purpose of this machine, invented for purposes of the solar survey, is to render it possible, by a knowledge of the different cycles affecting solar variation, to plot the heat to be received, and therefore the weather, for a number of years in the future.
We’ve all seen lectures go awry when plastic transparencies slide off projectors, but L. Mahadevan was probably the first to seriously analyze a plastic sheet’s fall from grace. It is even safer to assume he was the first to use it as a model for a flying carpet. Now, due to Mahadevan’s curiosity and an enterprising grad student, scientists have created an electrically powered sheet that propels itself through the air.
In 2007 Mahadevan, a mathematician at Harvard University, turned his analysis into a proposal for coaxing a flexible sheet to fly [pdf] just above the ground. His study concluded that a thin sheet rapidly vibrating in a wavelike motion, much like a ray swimming near the seafloor, would stay aloft.
Mahadevan never built his flying carpet—he moved on to analyzing how wet paper curls and lilies bloom. But in 2008 Princeton graduate student Noah Jafferis came across Mahadevan’s paper and put the idea into practice. What Jafferis produced last fall isn’t exactly a flying carpet. It is more like a 4-by-1.5-inch plastic transport, but it’s still the first object of its kind to achieve propulsion through the air.
Jafferis constructed his carpet from two sheets of coated plastic that are fastened together and divided into four sections each. When a voltage is applied, parts of the surface contract while others expand, causing the sheet to bend in the shape of a moving water wave. The wave pushes air trapped between the sheet and the ground in one direction, while propelling the sheet the opposite way. “So long as the sheet is moving forward, it will stay aloft,” Jafferis says.
For now, his craft gets its power through four-inch wires attached to an external battery, seriously limiting its range. Jafferis hopes to untether it by installing a power source on the craft itself. Unfortunately, his long-term plans have nothing to do with Arabian Nights. “We’d need a surface 50 feet wide to carry a person,” he says, “and that would get just a millimeter or two off the ground.” Instead, he has conjured up the idea of a carpet that could fly above the dusty surface of Mars.
Tree Rings Reveal Sunspot Record
a somewhat better article concerning sunspot activity with graphs.
be well, be love.
Sunspots hit new highs
27 October 2004
The Sun is more active at present than it has been for over 8,000 years, according to a new method for determining the level of sunspot activity in the past. Sami Solanki of the Max Planck Institute in Katlenburg-Lindau and colleagues in Finland, Germany and Switzerland have developed a technique that relates the number of sunspots to the concentration of carbon-14 in tree rings. However, the team insists that this high level of solar activity is unlikely to be the main cause of global warming (Nature 431 1084).
This page shows you how to exit a loop in emacs lisp.
In many languages, there's “break” or “exit” keywords that you can use to exit a loop. In functional programing, usually you don't use loop/iteration, but sometimes a loop is just what you need.
In elisp, to exit a loop, you can use a while loop and check a flag (set a variable to true/false), or use the built-in catch and throw mechanism.
Here's a sample of setting flag:
(let (myList foundFlag-p i)
  (setq myList [0 1 2 3 4 5])
  (setq foundFlag-p nil)
  (setq i 0)
  (while (and (not foundFlag-p) (< i (length myList)))
    ;; if found, set foundFlag-p
    (when (equal (elt myList i) 3)
      (setq foundFlag-p t))
    (message "value: %s" i)
    (setq i (1+ i))))
Here's an actual example using a flag:
(defun get-new-fpath (ξfPath moveFromToList)
  "Return a new file full path for ξfPath. moveFromToList is an alist."
  (let ((ξfoundResult nil)
        (ξi 0)
        (ξlen (length moveFromToList)))
    ;; compare to each moved dir.
    (while (and (not ξfoundResult) (< ξi ξlen))
      (when (string-match
             (concat "\\`" (regexp-quote (car (elt moveFromToList ξi))))
             ξfPath)
        (let ((fromDir (car (elt moveFromToList ξi)))
              (toDir (cdr (elt moveFromToList ξi))))
          (setq ξfoundResult (concat toDir (substract-path ξfPath fromDir)))))
      (setq ξi (1+ ξi)))
    (if ξfoundResult ξfoundResult ξfPath)))
Here's a pseudo-code example of catch/throw:
(let (myList)
  (setq myList [0 1 2 3 4 5])
  ;; map lambda onto a list. If value 3 is found, exit map.
  (catch 'myTagName
    (mapc
     (lambda (x)
       (message "%s" x)
       (when (equal x 3)
         (throw 'myTagName "VALUE of catch if throw is called")))
     myList)
    ;; return value of catch if throw didn't occur
    "normal return VALUE of catch here"))
(throw ‹tag› ‹value›) is basically like “goto”. It will jump to the nearest outer
(catch ‹tag› …) with matching
‹tag›, and also pass the
‹value› to it.
If (catch ‹tag› … ‹value›) didn't get any throw, it'll return ‹value›; else it'll return the value passed to the throw.
Here's an example using catch and throw:
(defun xahsite-url-is-xah-website-p (myURL)
  "Returns t if MYURL contains a xah domain name, else nil.
See: `xahsite-domain-names'."
  (catch 'myloop
    (mapc
     (lambda (x)
       (when (string-match-p
              (format "\\`http://\\(www\\.\\)*%s\.*/*" (regexp-quote x))
              myURL)
         (throw 'myloop t)))
     (xahsite-domain-names))
    nil))
(info "(elisp) Catch and Throw")
An exoplanet has been discovered by Kepler using a strange quirk of relativity.
The odds are miserable and massively against the detection of techno-aliens in the newly discovered Kepler-62 system. But like lottery players, we're going to try anyway.
The recent Kepler-62 discovery is like "exoplanetary gold" for SETI scientists who are on the hunt for extraterrestrial intelligences.
NASA has selected a $200 million mission to carry out a full-sky survey for exoplanets orbiting nearby stars. The space observatory, called the Transiting Exoplanet Survey Satellite (TESS), is scheduled for a 2017 launch.
The Kepler space telescope's prime objective is to hunt for small worlds orbiting distant stars, but that doesn't mean it's not going to detect some extreme relativistic phenomena along the way.
Although there appears to be a mysterious dearth of exoplanets smaller than Earth, data from NASA's Kepler space telescope suggest that nearly a quarter of all sun-like stars in our galaxy play host to worlds 1-3 times the size of our planet.
New research from the planet-hunting Kepler space telescope shows Earth-sized planets may be widespread in the Milky Way.
A planet about 1.6 times the radius of Earth has been found closely circling the sun-like star Kepler-21, one of the 100,000 stars under scrutiny by NASA's Kepler Space Telescope.
Mechanical engineer Meenakshi Reddy of Sri Venkateswara College of Engineering and Technology, in Chittoor, Andhra Pradesh, and colleagues explain how certain materials, known as phase change materials (PCM), can store a large amount of heat in the form of latent heat in a small volume.
Heated in the sun, the mixture of paraffin wax (which melts at about 37 Celsius) and stearic acid (a fat commonly used to make soap) becomes entirely liquid. However, as it solidifies it slowly releases the stored heat. The process is akin to the phase-change heating that occurs in hand-warmers that contain a PCM, but in this case the material does not need to be boiled in a pan or heated in a microwave oven to absorb latent heat.
The team has now tested spherical capsules just 38 millimetres in diameter containing a blend of paraffin and stearic acid, which can be floated on the top of water in a tank. Stearic acid is a lot cheaper on the Indian market than paraffin and more readily available. The team found that costs could be held down without reducing the overall heating efficiency of the capsules by lowering the proportion of paraffin wax.
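A rough back-of-envelope for how much heat one such capsule can store (sensible heat plus latent heat of fusion); every property value below is an illustrative assumption, not a figure from the paper:

```python
# Rough estimate of heat stored by a paraffin-based PCM capsule.
# All property values are illustrative assumptions, not from the paper:
mass = 0.020            # kg of PCM in one 38 mm capsule (assumed)
c_solid = 2.0           # specific heat of solid PCM, kJ/(kg*K) (assumed)
c_liquid = 2.2          # specific heat of liquid PCM, kJ/(kg*K) (assumed)
latent = 200.0          # latent heat of fusion, kJ/kg (assumed)
t_start, t_melt, t_end = 25.0, 37.0, 55.0   # deg C

q = mass * (c_solid * (t_melt - t_start)    # sensible heat while solid
            + latent                         # latent heat at the melt point
            + c_liquid * (t_end - t_melt))   # sensible heat while liquid
print(f"{q:.1f} kJ stored per capsule")
```

Note that the latent term dominates, which is the whole point of using a PCM rather than plain water.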
"Solar energy based thermal energy storage system using phase change materials" in Int. J. Renewable Energy Technology, 2012, 3, 11-23
This work has been presented at conferences since 2010: Dr. N. Nallusamy, ‘Solar energy based thermal energy storage system using phase change materials’, Proceedings of the International Conference on ‘Advances in Energy Conversion Technologies’ ICAET2010, MIT, Manipal, India, January 7–10, 2010.
An investigation involving adding and subtracting sets of consecutive numbers. Lots to find out, lots to explore.
A circle is inscribed in an equilateral triangle. Smaller circles
touch it and the sides of the triangle, the process continuing
indefinitely. What is the sum of the areas of all the circles?
Rectangle PQRS has X and Y on the edges. Triangles PQY, YRX and XSP
have equal areas. Prove X and Y divide the sides of PQRS in the
Let AB have length 3r. The distance moved by A is then the
circumference of a semicircle radius 3r (3$\pi$r). C moves along a
circle of radius 2r (2$\pi$r), followed by a semicircle of radius r
($\pi$r). The total distance moved by C is therefore also 3$\pi$r.
This problem is taken from the UKMT Mathematical Challenges.
Can you fill in the empty boxes in the grid with the right shape?
Someone at the top of a hill sends a message in semaphore to a
friend in the valley. A person in the valley behind also sees the
same message. What is it?
What is the missing symbol? Can you decode this in a similar way?
The Vikings communicated in writing by making simple scratches on
wood or stones called runes. Can you work out how their code works
using the table of the alphabet?
I was looking at the number plate of a car parked outside. Using my special code S208VBJ adds to 65. Can you crack my code and use it to find out what both of these number plates add up to?
Semaphore is a way to signal the alphabet using two flags. You
might want to send a message that contains more than just letters.
How many other symbols could you send using this code?
Ruth from Swanborne House School describes some unusual shapes very clearly.
Go to last month's problems to see more solutions.
When you think of spies and secret agents, you probably wouldn’t think of mathematics. Some of the most famous code breakers in history have been mathematicians.
A game for 2 people that everybody knows. You can play with a
friend or online. If you play correctly you never lose!
Dec. 6, 2011: NASA’s Hubble Space Telescope presents a festive holiday greeting that’s out of this world. The bipolar star-forming region nearly 2,000 light-years from us, called Sharpless 2-106, looks like a soaring, celestial snow angel. The outstretched “wings” of the nebula record the contrasting imprint of heat and motion against the backdrop of a colder medium. Source: NASA, ESA, and the Hubble Heritage Team (STScI/AURA)
Two weeks ago, Fox News published a slide deck entitled, EyePoppers: The best science photos of the week. We have been taking the time to examine many of them to demonstrate varying degrees of evolutionary propaganda. Thus far we have successfully confronted each and have observed them wilt under the scrutinization of logic and simple scientific tests and observations.
Today’s slide and caption however, require a keener eye than most. In order to have already assessed what direction I am heading with today’s slide and caption, you would have to be a full time student of Across the Fruited Plain.
Dec. 6, 2011: NASA’s Hubble Space Telescope presents a festive holiday greeting that’s out of this world. The bipolar star-forming region nearly 2,000 light-years from us, called Sharpless 2-106, looks like a soaring, celestial snow angel. The outstretched “wings” of the nebula record the contrasting imprint of heat and motion against the backdrop of a colder medium.
One of the major black-eyes on evolution theory is the fact we see stars die, (novas & supernovas approximately every 30 years), but we have never observed a star forming or being “born.”
Now if evolution theory were true, and the universe got its start from the Big Bang, then one of the areas of evolution that must have occurred aside from:
is called Stellar and Planetary Evolution. (Each one defined here)
The Big Bang Models demand that planets and stars are still evolving. If that is true, we ought to be observing star births and they ought to at least equal star deaths. The glaringly unmistakable error of astronomy however is two-fold:
1.) Not Enough Dead Stars
We do not have enough dead stars to equal 16 billion years' worth of star explosions: 16 billion divided by 30 equals just over 500,000,000 (approximately 533,333,333).
Now, if we were supposed to see 350 dead stars and are only seeing 300, then it can more easily be argued that we’re just not seeing the other 50 out there somewhere. However, when the evolution timeline prepares us to observe over FIVE HUNDRED MILLION nova or supernova rings and we only see THREE HUNDRED, that is a major discrepancy. Three hundred rings however, is precisely consistent with what you would expect to see after only a few thousand years as the Bible maintains.
2.) No Star Birth Ever Observed
The reason that I mentioned that this slide example was trickier than most is because they actually tip their hand on the slide, rather than on the caption. Notice that the top of the slide says: “Star-forming Region S106.”
This is to intentionally deceive you into believing:
A. Stellar Evolution has been vindicated and proven
B. That star births are being observed
What you will find consistent with every false report of a star birth is the fact that it is getting brighter. They point their telescopes out into space and if they observe a spot getting brighter, they automatically and enthusiastically assume that a star is being born. That however is hardly conclusive evidence. Seeing a spot getting brighter can just as equally mean that a dust cloud is clearing and revealing the star that is already behind it.
I would classify that as a star discovery, not a star forming!
God’s word says he made the stars and you know what??
I’m with Yahweh on this one.
“God made two great lights—the greater light to govern the day and the lesser light to govern the night. He also made the stars.” Genesis 1:16
"The Midwest and the upper Midwest were the epicenters for this vast warmth," Deke Arndt of NOAA's Climatic Data Center said in an online video. That meant farming started earlier in the year, and so did pests and weeds, bringing higher costs earlier in the growing season, Arndt said.
"This warmth is an example of what we would expect to see more often in a warming world," Arndt said.
NASA: Many have noted that the winter has been particularly cold and snowy in some parts of the United States and elsewhere. Does this mean that climate change isn't happening?
Gavin Schmidt: No, it doesn't, though you can't dismiss people's concerns and questions about the fact that local temperatures have been cool. Just remember that there's always going to be variability. That's weather. As a result, some areas will still have occasionally cool temperatures -- even record-breaking cool -- as average temperatures are expected to continue to rise globally.
NASA: So what's happening in the United States may be quite different than what's happening in other areas of the world?
Gavin Schmidt: Yes, especially for short time periods. Keep in mind that that the contiguous United States represents just 1.5 percent of Earth's surface.
As Chris Horner says "US is meaningful, or it isn't. Not US is meaningful if it cooperates."
Examining Quantum-Degenerate Bose Gases
Albert Einstein and Satyendra Nath Bose predicted that when a gas of weakly interacting bosons - a Bose gas - was cooled to a low enough temperature, a new state of matter, a so-called Bose-Einstein condensate, would form. It took 70 years to demonstrate the transition to this state in a gas of alkali-metal atoms in 1995, garnering a Nobel prize.
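For context, the temperature below which an ideal Bose gas condenses is the standard textbook estimate (not stated in this article; here n is the number density and m the atomic mass):

```latex
T_c \;=\; \frac{2\pi\hbar^{2}}{m\,k_{B}}
          \left(\frac{n}{\zeta(3/2)}\right)^{2/3},
\qquad \zeta(3/2)\approx 2.612
```

Reaching the microkelvin regime quoted below is what puts dilute alkali gases past this threshold despite their very low densities.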
A subset of Bose-Einstein condensates called spinor condensates, made of atoms that possess a spin degree of freedom, has come into focus more recently. At low enough temperatures, these systems exhibit spontaneous symmetry breaking, magnetic order, and intricate spin textures governed by the interplay of magnetism and superfluidity.
Image Credit: ©2011 American Physical Society
To create the image above, researchers cooled a rubidium gas to a temperature of 1.5 µK and then let it equilibrate. In the image, the magnetization of the gas is shown as the gas equilibrates. The increasing color brightness from left to right represents the growing strength of the gas’s magnetization. The different colors indicate the orientation of the magnetization. Initially, the magnetization domains point in many directions, as indicated by the variety of colors in the far left bar. Over time, large regions of the gas begin to point in the same direction so the variety of color decreases and red and pink areas predominate.
"Long-time-scale dynamics of spin textures in a degenerate F=1 87Rb spinor Bose gas," Phys. Rev. A. 84, 063625 (2011)
J. Guzman (1), G.-B. Jo (1), A. N. Wenz (1,2), K. W. Murch (1), C. K. Thomas (1), D. M. Stamper-Kurn (1,3)
(1) Department of Physics, University of California, Berkeley, California 94720, USA
(2) Physikalisches Institut, Ruprecht-Karls-Universität Heidelberg, D-69120 Heidelberg, Germany
(3) Materials Sciences Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA
This research and work was supported in part by the NSF and by a grant from the Army Research Office with funding from the Defense Advanced Research Projects Agency Optical Lattice Emulator program.
Check out some similar questions!
ask physics questions for free online? [ 6 Answers ]
A mouse is initially at rest on a horizontal turntable mounted on a frictionless vertical axle. If the mouse begins to walk clockwise around the perimeter of the table, what happens to the turntable? Explain using Newton's Laws.
ask physics questions for free online [ 1 Answers ]
A uniform capillary tube, closed at one end, contained air which is trapped by a thread of mercury 85mm long. When the tube was held horizontally, the length of the air column was 50mm. When it was held vertically with the closed end downwards, the length was 45mm. Find the atmospheric pressure in...
physics online answers [ 3 Answers ]
a train travels 180 km with a uniform speed of 60 km/h, the next 100 km with a speed of 50 km/h, and the last 80 km with a speed of 40 km/h. Calculate the average speed of the train.
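One way to check the train question's arithmetic (average speed is total distance over total time, leg by leg):

```python
# Average speed = total distance / total time, leg by leg.
legs = [(180, 60), (100, 50), (80, 40)]   # (distance km, speed km/h)
total_km = sum(d for d, _ in legs)
total_h = sum(d / v for d, v in legs)     # 3 h + 2 h + 2 h
avg = total_km / total_h
print(f"{avg:.2f} km/h")                  # -> 51.43 km/h
```

Note the answer is below the plain average of the three speeds (50 km/h would be wrong too): slow legs eat more time, so the time-weighted average governs.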
Answers for Physics Questions Online [ 1 Answers ]
a man of mass 80 kg is standing on a compression balance which is fixed in a lift. Find the reading on the balance when: 1. the lift is moving upwards with uniform velocity
physics online solver [ 3 Answers ]
a dictionary is pulled to the right at constant velocity by a 10 N force acting 30 degrees above the horizontal. The coefficient of kinetic friction between the book and the horizontal surface is 0.5. What is the weight of the book?
"Numbers are fun." So insisted my seventh grade teacher, but my stubborn thirteen-year-old self refused to believe him. Numbers may be fun, but mathematics was hard. I struggled through maths classes when I was in school, painfully working my way through algebra, then geometry, trigonometry, and calculus. Though I eventually got over my contempt for mathematics, there was a time in which I thought that only humans could be so sadistic as to inflict the pain of mathematics on their young.
But, while other species may not spend their time fretting over the quadratic equation or the transitive property of equality, mathematical ability is widespread in the animal kingdom.
Take the domestic chicken (Gallus gallus), a bird that many think of as having more to do with barbecue sauce than with arithmetic. If a chicken sits in front of two small opaque screens, and one ball disappears behind the first screen, followed by four balls disappearing behind a second screen, the chicken walks towards the screen that hides four balls, since four balls are better than one ball. The feat is made more impressive when you consider that the chicken in question is only three days old. And it can do a lot more than add up.
If one ball disappears behind the first screen, and four balls disappear behind the second, just as before, but then two of the four balls behind the second screen are visibly moved over to the first screen, the chicken is now faced with two tasks. It must add two to one, and know that there are now three balls behind the first screen. It must also subtract two from four, and realise that there are only two balls left behind the second screen. The young chicken must overcome its initial impulse to approach the second screen, which initially hid four balls, and instead approach the first screen, now hiding three balls. If this sounds complicated for the three-day-old bird, think again. Infant chickens correctly approached the screen hiding more balls nearly 80% of the time.
Chimpanzees perform even better in their maths tests, succeeding in this sort of task 90% of the time. In one experiment, researchers placed a chimpanzee in front of two sets of bowls that contained chocolate pieces. Each set had two bowls, and to receive their treats, the chimps had to select the set that had the largest combined number of chocolate pieces, in other words adding together the number of pieces in each individual bowl. They succeeded even on trials where one of the bowls in the "incorrect" set contained more chocolates than either individual bowl in the "correct" set.
In fact, decades of research have provided evidence for the numerical abilities of a number of species, including gorillas, rhesus, capuchin, and squirrel monkeys, lemurs, dolphins, elephants, birds, salamanders and fish. Recently, researchers from Oakland University in Michigan added black bears to the list of the numerically skilled. But the real maths wizards of the animal kingdom are the ants of the Tunisian desert (Cataglyphis fortis). They count both arithmetic and geometry as parts of their mathematical toolkit.
When a desert ant leaves its nest in search of food, it has an important task: find its way back home. In almost any other part of the world, the ant can use one of two tricks for finding its way home, visual landmarks or scent trails. The windswept saltpans of Tunisia make it impossible to leave a scent trail, though. And the relatively featureless landscape doesn't provide much in the way of visual landmarks, other than perhaps the odd rock or weed. So evolution endowed the desert ant with a secret weapon: geometry. Armed with its mathematical know-how, the desert ant is able to “path integrate”. This means, according to ant navigation researchers Martin Muller and Rudiger Wehner, that it "is able to continuously compute its present location from its past trajectory and, as a consequence, to return to the starting point by choosing the direct route rather than retracing its outbound trajectory."
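The ant's bookkeeping can be sketched as a running vector sum: add up every outbound step, and the home vector is simply the negative of the total. The short walk below is an invented example, not data from the ants:

```python
import math

# Dead reckoning: sum the outbound steps as vectors; the home vector is the
# negative of the total.  The walk below is an invented example.
steps = [(1.0, 0.0), (2.0, 90.0), (1.5, 45.0)]   # (distance, heading in degrees)
x = sum(d * math.cos(math.radians(h)) for d, h in steps)
y = sum(d * math.sin(math.radians(h)) for d, h in steps)

home_distance = math.hypot(x, y)
home_heading = math.degrees(math.atan2(-y, -x)) % 360   # bearing back to the nest
print(f"go {home_distance:.2f} units at {home_heading:.1f} degrees to reach home")
```

However winding the outbound path, only the running total matters, which is why the ant can take the direct route home.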
The comparison test provides a way to use the convergence of a series we know to help us determine the convergence of a new series. Suppose we have two series A = Σan and B = Σbn, where 0 ≤ an < bn. Then if B converges, so does A. Also, if A diverges, then so does B. So if we suspect that a series A converges, we can try to find a similar series B where the terms are all bigger than the terms of A and where B is known to converge, thus proving that A converges. Conversely, if we have a series B that we suspect diverges, we can try to find a similar series A where the terms are all smaller than the terms of B and where A is known to diverge, thus proving that B diverges.
Try the following:
- The applet shows the series . This is similar to a p-series, so the applet also shows a p-series as B. The blue dots are terms of A and the blue/purple rectangles are the terms of the underlying sequence an. The red dots represent B and the red/pink rectangles are the terms bn. Note that all of the an are less than the corresponding bn and that all are positive, so we can apply the comparison test. Since we know that a p-series with p > 1 converges, B converges, and hence so does A. The table on the left shows terms of A and B and supports the convergence of both series.
- Select the second example from the drop down menu, showing the series . This is similar to a harmonic series, which is shown as A. Note that all of the bn are greater than the corresponding an and that all are positive, so we can apply the comparison test. Since we know that the harmonic series diverges, then so must B. The table of values isn't quite clear on whether B converges or diverges, so the comparison test is useful here to determine what happens to B in the long run.
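A quick numerical illustration of the test with a hypothetical pair (the applet's own formulas did not survive in this page): an = 1/(n² + 1) is dominated term-by-term by the convergent p-series bn = 1/n²:

```python
# Comparison test, numerically: a_n = 1/(n^2 + 1) is dominated by b_n = 1/n^2
# (a hypothetical pair; the applet's own series are not reproduced here).
def partial_sum(term, n):
    return sum(term(k) for k in range(1, n + 1))

a = lambda k: 1.0 / (k * k + 1)
b = lambda k: 1.0 / (k * k)

for n in (10, 100, 1000, 10000):
    print(n, round(partial_sum(a, n), 6), round(partial_sum(b, n), 6))
# The b column settles toward pi^2/6 ~ 1.6449 and the a column stays below it,
# so by the comparison test sum(a_n) converges as well.
```

The table this produces plays the same role as the applet's table: the dominated partial sums are squeezed below a convergent ceiling.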
This work by Thomas S. Downey is licensed under a Creative Commons Attribution 3.0 License.
Ask A Scientist/Astronomer
The Universe is a big place - we know you have questions. We can find the answers.
Submit your questions about space, physics, environment, biology, engineering and more. Our Scientists and Astronomers will post the answers.
Email your questions to: AskaScientist@chabotspace.org
We will post the answers here. Keep an eye on our home page, your answer could become a Did You Know fact.
Questions and Answers:
7/21/11: Perseid Meteor Shower
Q: When are the Perseid meteors visible this year?
A: The annual Perseid Meteor Shower takes place for a couple of weeks surrounding their peak activity date. This year, the Perseids reach their peak on the night of August 12th leading into the morning of August 13th. As with most meteor showers, the Perseids are best viewed after midnight, ideally around 3:00 AM.
6/23/11: Meteorite or Meteor-wrong?
Q: How do I determine if the rock I have is or isn't a meteorite?
A: Here are a couple of links to sites with information and experts. They may be able to help you. 1) http://www.meteorflash.com/ (Mare Meteoritics, Mike Martinez, former Chabot meteorite curator); 2) http://www.meteorite.com/Meteorite_Identification.htm (Meteorite.com, Meteorite Identification).
Q: When pumping water up high, does it make any difference in power consumption if it is pumped right up in one stage, or if it is done in stages with intermediate pools?
A: From the standpoint of how much energy moving the water up (against gravity) takes, it does NOT matter if you do it all at once or in stages. Raising a unit of water vertically by a certain distance requires a specific cost of energy, equal to the weight of the water times the vertical distance. Any difference in energy consumption between two different schemes of getting the water uphill will be in the efficiency of the method—that is, how much or how little energy is WASTED in the process because of non-ideal efficiency (which is usually the case).
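As a sanity check on the lossless case, here is a minimal sketch (density, volume, and heights are assumed for illustration) showing that the ideal energy cost E = mgh is the same whether the lift is done in one stage or two:

```python
# Ideal (lossless) lift energy E = m*g*h depends only on the total height,
# not on the number of stages.  Density, volume and heights are assumed.
RHO, G = 1000.0, 9.81        # kg/m^3, m/s^2
mass = RHO * 2.0             # 2 m^3 of water

one_stage = mass * G * 30.0                 # pump straight up 30 m
staged = mass * G * 10.0 + mass * G * 20.0  # 10 m to a pool, then 20 m more
print(one_stage, staged)     # identical up to floating-point rounding
```

Any real difference between the schemes comes entirely from pump efficiency and friction losses, not from the gravitational term.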
9/15/10: Planetarium Software
Q: What is a good piece of software or online resource for displaying the appearance of the sky from a given location at a specific time and date?
A: As freeware goes, my favorite is "Home Planet," which can be downloaded from www.fourmilab.ch/homeplanet/. Set your location, time, and date, and then select View Sky.
8/24/10: Orion's Belt:
Q: What month can you begin to see Orion's Belt?
A: If you're an early riser, you can begin to see Orion, and his Belt, rising in the east as early as August. Here at the end of August you can see the Belt low in the eastern sky at 5:00 AM DST. As we move into Autumn, Orion will rise earlier and earlier, and by mid to late December it is rising after 7:00 PM and high in the south at midnight.
8/18/10: Light in the Eastern Sky:
Q: Every night for about the past two weeks there is a very bright light that appears at around 11:00 pm. I live in Concord, CA and the object is visible in the eastern sky. It seems way too close and bright to be a star. It seems relatively stationary but appears to very slowly rise higher in the sky. Do you have any idea what we are looking at? My son was the first to see it about two weeks ago. He was certain it was a UFO but we have seen it just about every night since. The first night he saw it, he swore it was doing figure eights in the sky but I am highly doubtful.
A: Very likely what you are seeing is the planet Jupiter. At around 11:00 PM, Jupiter is slightly south from the eastern point, and about 18 degrees (roughly two fist-widths) above the horizon. Jupiter is getting close to opposition - the point when we are closest to it - and so is at its brightest right now. A small telescope, or maybe even a good pair of binoculars, will reveal Jupiter as a bright disk with a string of up to four starlike dots - its four large moons, the Galileans--that change their positions from night to night. As for figure-eights, sometimes a star or planet can appear to move about a bit; part of this may be atmospheric turbulence bending the object's light (the typical "twinkling" of a star - though planets are not observed to twinkle as much), and part may be due to an optical illusion we can experience when we see a dot that is not close to other reference points (like the horizon).
Date: Around 1993
Does Centrifugal force really exist? If not, then why do so many
people use it to explain everyday occurrences? For example, when you swing a
bucket of water around, what keeps the water in? Most people would say "centrifugal force."
Yes, this idea does have a sound meaning and valid existence. It
does not represent a "real" force in the sense that Newton uses, which is:
something that gives rise to accelerations in a non-rotating, non-accelerated
reference frame -- a frame in which objects at rest tend to stay at rest
unless acted on by a force (these frames are called inertial frames).
However, in a rotating reference frame (or coordinate system) such as a
merry-go-round, objects that are at rest tend to slide, and objects with no
"real" force acting on them do not move in straight lines. Have you ever
tipped over a glass full
of liquid in a turning car? From the point of view of the rotating coordinate
system what tipped the glass over is the "centrifugal force". From the point
of view of the inertial frame of reference outside the car, the glass was
still trying to go forward in a straight line when the car turned and the
force acting on the bottom of the cup flipped it over. Both perspectives are
valid and you can calculate the results from both perspectives as dictated by
convenience, but there is no "real" force of this type. It is however very
useful to think this way.
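The two viewpoints can be made concrete with a little arithmetic. The sketch below (my numbers, purely illustrative -- not part of the original answer) computes the apparent outward acceleration a glass feels in a turning car's rotating frame:

```python
import math

# Illustrative numbers (mine, not from the answer): a car going 15 m/s
# (~34 mph) around a curve of radius 50 m.
v = 15.0   # speed along the curve, m/s
r = 50.0   # radius of the turn, m

# In the car's rotating frame, a glass on the dashboard feels an
# outward "centrifugal" acceleration of v^2 / r ...
a_centrifugal = v ** 2 / r

# ... which is the same as omega^2 * r with angular speed omega = v / r.
omega = v / r
assert math.isclose(a_centrifugal, omega ** 2 * r)

print(f"apparent outward acceleration: {a_centrifugal:.1f} m/s^2")
# 4.5 m/s^2 is about 0.46 g -- from the inertial frame outside the car,
# it is simply the glass trying to keep moving in a straight line.
```

Same number, two stories: the rotating frame calls it a centrifugal force; the inertial frame calls it inertia plus a turning floor.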
Update: June 2012
Trifluoroacetic acid (TFA) is the chemical compound with the formula CF3CO2H. It is a strong carboxylic acid due to the influence of the three very electronegative fluorine atoms. Relative to acetic acid, TFA is almost 100,000-fold more acidic. TFA is widely used in organic chemistry.
TFA is a reagent used frequently in organic synthesis because of a combination of convenient properties: volatility, solubility in organic solvents, and its strength. It is also less oxidizing than sulfuric acid but more readily available in anhydrous form than hydrochloric acid. One complication to its use is that TFA forms an azeotrope with water with a boiling point of 105 °C.
It is also used as an ion-pairing agent in liquid chromatography for separation of organic compounds, particularly peptides and small proteins. It is a versatile solvent for NMR spectroscopy (for materials stable in acid).
The derived acid anhydride, [CF3C(O)]2O, is a common reagent for introducing the trifluoroacetyl group.
Electrofluorination of acetic acid with the Simons method is the best way to obtain trifluoroacetic acid. The anodic reaction of the electrolysis of a mixture of hydrogen fluoride and acetic acid below the voltage at which elemental fluorine (F2) develops is a mild reaction which leaves the carboxylic group
An article in the new Nature (probably behind the paywall) called “Quantum computation: The dreamweaver’s abacus” discusses the first experiments to successfully demonstrate the existence of quasi-particles of quarter charge that have the properties needed to build qubits for quantum computing. Wait? Quasi-particles? Qubits? What?
Well, I can’t even start telling about quantum computing, that is because I don’t have much of a clue about it, only small hints that I should shed some light on eventually. But I investigated the other part. Quasiparticles are an interesting concept of physics, with the most prominent appearance in solid-state physics where you describe the movement of the atoms as quasi-particles (called phonons, imagine a sound wave running through a medium as a line of balls connected by springs. The state of exciation, or the mode as it is called, is seen as if it were a particle. You can just calculate an electron and a phonon hitting each other!). Now, the interesting concept used here is about quasi-particles in two dimensions.
Our world is three-dimensional (at least), so why are we even talking about two dimensions outside of a theorist’s mindset? Well, just imagine a very fine layer of metal - and you know that technology is able to produce these - then electrons moving through this layer will have no choice but to visit a two-dimensional world.
Now, what’s with the quarter charge? Isn’t the nice thing about elementary charges that they are, well, indivisible? As long as we are talking free particles, this is true. A quark carries a third of an elementary charge, but it is confined in, let’s say, a proton, and that one carries one elementary charge (called e). Now we are in the quasi-particle regime, so it’s not actually little balls flying around we’re discussing. In the quantum world, one of the most important concepts is that everything comes in steps - in quanta. Energy is quantized, that’s about the first thing you learn in Quantum Mechanics. You know that electrons can exist in discrete orbits around the core of the atom. Why? Quantized energy and momentum. If we take an everyday phenomenon and move to a quantum scale, you are bound to find quantized stuff - like with the Hall effect. Take a metal plate, send current through the plate, apply a magnetic field perpendicular to the plate, and you will move the electrons and create a voltage and thus an electric field directed perpendicular to current flow and magnetic field. By measuring the effect of this electric field described by the Hall resistance, you can determine the strength of the magnetic field. Now move to the microscale, to the two-dimensional system described above, and you will have the Quantum Hall effect - the Hall resistance will move only in steps (at low temperature). Again, it is because energy is quantized. Electrons in one energy state cannot continuously move between states but have to get enough energy in one collision to jump to another level - like excitation in an atom. At low temperature, this rarely occurs, so you can see the steps in resistance.
Now comes the crazy part, and I’m also rather lost here now. But normally, energy levels are such that higher states are a multiple of a ground state energy, integer multiples for the atom. This is true here, and called the integer Quantum Hall effect. But there’s also a fractional Quantum Hall effect, having the property of small fractions like 1/3 or 3/5 - and there the theorists come running, introduce a concept of quasi-particles, and assign fractional charges to them. Bear with me - I can’t really explain this correctly yet as I’m trying to grasp the concept myself.
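To make the "steps" concrete: on a plateau the Hall resistance takes the value R = h/(νe²), where ν is the filling factor -- an integer for the integer effect, a fraction like 1/3 or 3/5 for the fractional one. A small sketch (mine, not from the Nature article):

```python
from fractions import Fraction

# Sketch (mine, not from the article): plateau values of the Hall
# resistance, R = h / (nu * e^2). The constant h/e^2 is the
# von Klitzing constant, about 25812.807 ohms.
R_K = 25812.807  # h / e^2 in ohms

def hall_resistance(nu: Fraction) -> float:
    """Hall resistance on the plateau with filling factor nu."""
    return R_K / float(nu)

# Integer quantum Hall effect: nu = 1, 2, 3, ...
for nu in (1, 2, 3):
    print(f"nu = {nu}: {hall_resistance(Fraction(nu)):.1f} ohm")

# Fractional quantum Hall effect: nu = 1/3, 3/5, ...
for nu in (Fraction(1, 3), Fraction(3, 5)):
    print(f"nu = {nu}: {hall_resistance(nu):.1f} ohm")
```

The measured resistance jumps between these values in steps rather than varying smoothly -- that is the quantization described above.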
So, let’s hurry over this and just accept that there are quasi-particles of fractional charge in these events. Now what’s new? First of all, these particles, called anyons, are neither fermion nor boson. Normally, if you have two identical particles and let them swap places, this will not change anything. Fermions will multiply their wave function by -1, but you take the square of the wave function for observation anyway… For these new anyon things - and mind you, they can only occur in two dimensions; our 3D world only allows fermions and bosons - the way they are swapped matters. The way their wave function will change is different! What does this mean for computing? If they have the additional trait that the order of several swaps matters (it usually doesn’t - that’s Abelian behavior), you can use that for computing…
Now, you need quarter-charged quasi-particles, and these have been experimentally discovered for the first time.
For a much more thorough discussion of anyons, see this post by someone who actually knows what he's talking about.
The number of heads obtained in 100 tosses of a coin (even if the coin
is moderately biased): Count the number of heads in each of many 100-toss
trials; make a histogram of the number of heads obtained in each trial.
Do it long enough, the histogram will be approximately the normal curve,
except (for a fair coin) shifted right 50 units, and stretched horizontally
by a factor of 5. Also, the histogram will be a bit "jaggy" since one can
obtain 50 or 51 or 52 heads in a 100-toss trial, but not 51.4 heads:
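That jagginess, and the bell shape, are easy to see in a quick simulation (my sketch, not part of the original notes):

```python
import random
from collections import Counter

random.seed(42)   # reproducible illustration (my choice, not from the notes)

# Many 100-toss trials of a fair coin; tally the heads in each trial.
trials = 5000
counts = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(trials)]

mean = sum(counts) / trials
sd = (sum((c - mean) ** 2 for c in counts) / trials) ** 0.5
print(round(mean, 1), round(sd, 1))   # close to 50 and 5, as claimed above

# A crude text histogram of the "jaggy" bell:
hist = Counter(counts)
for heads in range(40, 61):
    print(f"{heads:2d} {'#' * (hist[heads] // 25)}")
```

The histogram is tallest near 50 heads and falls off on both sides, with only whole-number bins -- the shifted, stretched, jaggy normal curve described above.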
The total number of spots seen in 50 rolls of a die (similar to above).
Choose 200 people randomly from the Philadelphia population. Repeat 99
more times to get 100 different samples of 200 people each. Find
the average weights of the 200 people in each sample, to get 100 different
averages. Make a histogram of those averages. The curve will
be approximately the normal curve, though shifted right (I'd guess about
140 pounds worth) and stretched horizontally by about, say, 5 lbs
[averages tend to cancel out extremes of variation].
This last fact (#4) is why the normal curve is so important to statisticians.
Many, many calculations about chance can be approximated very well with
the normal curve. This is particularly important when looking at
a sample from a population (of people, cats, or computer chips).
For example, if a random sample of 500 Philadelphia voters shows 74% will
vote for Candidate A, there is a chance that this sample is completely
non-representative of the overall Philadelphia voting population.
In fact, if the sample was really random (everyone in Philly having an equal
chance of being picked for the sample), the chance of the sample not representing
the population reasonably well is low... and the punchline:
the normal curve is a tool a statistician can use to tell how
far the sample is likely to be off from the overall population, i.e. how
big a "margin of error" there is likely to be in his/her poll.
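For the poll example above, the punchline works out like this (a sketch of the standard normal-curve calculation; the 1.96 multiplier is the usual 95%-confidence cutoff, my addition rather than a number stated in the text):

```python
import math

# 500 voters sampled, 74% for Candidate A (the poll example above).
n = 500
p_hat = 0.74

se = math.sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion
margin = 1.96 * se                        # 95% margin of error

print(f"74% plus or minus {100 * margin:.1f} percentage points")
# prints: 74% plus or minus 3.8 percentage points
```

So the sample can be off, but the normal curve tells us by how much it is likely to be off.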
Another example: I test 200 tires from a production run, by wearing
them out, to see how many miles they last. I select those 200 at
random from the entire production run. I can't test the entire production
run (because I can't sell tested, i.e. worn-out tires). Again, my
sample may be unrepresentative, but the normal curve will give me a way
to estimate the likely margin of error.
Metallic robots constructed by ingenious humans can survive on Mars. But what about future human astronauts? What fate awaits them on a bold and likely years-long expedition to the endlessly extreme and drastically harsh environment on the surface of the radiation-drenched Red Planet? How much shielding would people need? Thankfully, recent evidence suggests that radiation levels at the Martian surface appear to be similar to what's experienced by astronauts in low-Earth orbit — a very promising sign for future explorers.
NASA's plucky Mars Exploration Rover Opportunity has thrived for nearly a decade traversing the plains of Meridiani Planum despite the continuous bombardment of sterilizing cosmic and solar radiation from charged particles thanks to her radiation hardened innards. But as to whether or not humans could survive under similar conditions has remained an open question.
And indeed, answering this quandary has been one of the key quests ahead for NASA's SUV sized Curiosity Mars rover – now 100 Sols, or Martian days, into her two year long primary mission phase.
Preliminary data looks promising.
Curiosity survived the eight month interplanetary journey and the unprecedented sky crane rocket powered descent maneuver to touch down safely inside Gale Crater beside the towering layered foothills of 3 mi. (5.5 km) high Mount Sharp on Aug. 6, 2012.
Now she is tasked with assessing whether Mars and Gale Crater ever offered a habitable environment for microbial life forms – past or present. Characterizing the naturally occurring radiation levels stemming from galactic cosmic rays and the sun will address the habitability question for both microbes and astronauts. Radiation can destroy near-surface organic molecules.
Researchers are using Curiosity's state-of-the-art Radiation Assessment Detector (RAD) instrument to monitor high-energy radiation on a daily basis and help determine the potential for real life health risks posed to future human explorers on the Martian surface.
"The atmosphere provides a level of shielding, and so charged-particle radiation is less when the atmosphere is thicker," said RAD Principal Investigator Don Hassler of the Southwest Research Institute in Boulder, Colo. See the data graphs herein.
"Absolutely, the astronauts can live in this environment. It's not so different from what astronauts might experience on the International Space Station. The real question is if you add up the total contribution to the astronaut's total dose on a Mars mission can you stay within your career limits as you accumulate those numbers. Over time we will get those numbers," Hassler explained.
The initial RAD data from the first two months on the surface was revealed at a media briefing for reporters on Thursday, Nov. 15 and shows that radiation is somewhat lower on Mars surface compared to the space environment due to shielding from the thin Martian atmosphere.
RAD hasn't detected any large solar flares yet from the surface. "That will be very important," said Hassler.
"If there was a massive solar flare that could have an acute effect which could cause vomiting and potentially jeopardize the mission of a spacesuited astronaut."
"Overall, Mars' atmosphere reduces the radiation dose compared to what we saw during the cruise to Mars by a factor of about two."
RAD was operating and already taking radiation measurements during the spacecraft's interplanetary cruise to compare with the new data points now being collected on the floor of Gale Crater.
Mars atmospheric pressure is a bit less than 1% of Earth's. It varies somewhat in relation to atmospheric cycles dependent on temperature and the freeze-thaw cycle of the polar ice caps and the resulting daily thermal tides.
"We see a daily variation in the radiation dose measured on the surface which is anti-correlated with the pressure of the atmosphere. Mars atmosphere is acting as a shield for the radiation. As the atmosphere gets thicker that provides more of a shield. Therefore we see a dip in the radiation dose by about 3 to 5%, every day," said Hassler.
There are also seasonal changes in radiation levels as Mars moves through space.
The RAD team is still refining the radiation data points.
"There's calibrations and characterizations that we're finalizing to get those numbers precise. We're working on that. And we're hoping to release that at the AGU [American Geophysical Union] meeting in December."
Radiation is a life-limiting factor for habitability. RAD is the first science instrument to directly measure radiation from the surface of a planet other than Earth.
"Curiosity is finding that the radiation environment on Mars is sensitive to Mars weather and climate," Hassler concluded.
Unlike Earth, Mars lost its magnetic field some 3.5 billion years ago – and therefore most of its shielding capability against harsh levels of energetic particle radiation from space.
Much more data will need to be collected by RAD before any final conclusions on living on Mars, and for how long and in which type habitats, can be drawn.
Learn more about Curiosity and NASA missions at my upcoming free public presentations:
- On Dec. 6 held at Brookdale Community College, Monmouth Museum, Lincroft, NJ at 8 PM – hosted by STAR astronomy
- And on Dec 11 held at Princeton University and the Amateur Astronomers Association of Princeton (AAAP) in Princeton, NJ at 8 PM.
Image No. 1 caption: Longer-Term Radiation Variations at Gale Crater. This graphic shows the variation of radiation dose measured by the Radiation Assessment Detector on NASA's Curiosity rover over about 50 sols, or Martian days, on Mars. (On Earth, Sol 10 was Sept. 15 and Sol 60 was Oct. 6, 2012.) The dose rate of charged particles was measured using silicon detectors and is shown in black. The total dose rate (from both charged particles and neutral particles) was measured using a plastic scintillator and is shown in red. Credit: NASA/JPL-Caltech/ SwRI
Image No. 2 caption: Daily Cycles of Radiation and Pressure at Gale Crater. This graphic shows the daily variations in Martian radiation and atmospheric pressure as measured by NASA's Curiosity rover. As pressure increases, the total radiation dose decreases. When the atmosphere is thicker, it provides a better barrier with more effective shielding for radiation from outside of Mars. At each of the pressure maximums, the radiation level drops between 3 to 5 percent. The radiation level goes up at the end of the graph due to a longer-term trend that scientists are still studying. Credit: NASA/JPL-Caltech/SwRI
This article originally appeared at Universe Today.
Top image: The never-was Soviet Mars colony of 2061 by Green Forest.
At a wedding banquet, guests are seated at circular tables for four. In how many ways can the guests be seated?
We have learned that the number of permutations of $n$ objects on a straight line is $n!$. That is, if we seat the four guests Anna, Barbie, Christian, and Dorcas on chairs on a straight line, they can be seated in $4! = 24$ ways (see complete list).
However, circular arrangement is slightly different. Take the arrangement of guests A, B, C, D as shown in the first figure. The four possible seating arrangements are just a single permutation: in each table, the persons on the left and on the right of each guest are still the same persons. For example, in any of the tables, B is at the left-hand side of A and D is at the right-hand side of A. In effect, the four linear permutations ABCD, BCDA, CDAB, and DABC are counted as one in circular permutation. This means that the number of linear permutations of 4 persons is four times its number of circular permutations. Since the number of all possible permutations of four objects is 4!, the number of circular permutations of four objects is $\frac{4!}{4} = 3! = 6$.
This is also similar with circular permutations of three objects. Since the number of permutations of three objects is $3! = 6$, the number of circular permutations of three objects is $\frac{3!}{3} = 2! = 2$.
The observation above can be generalized. Notice that the circular permutations in both figures are just the rotations of the guests about the table. Rotating the guests without swapping their positions pairwise does not change the permutation, since the persons to the right and left of each person will still be the same after the rotation. This means that if there are 8 persons seated, they can be rotated 7 times, giving us 8 possible seating arrangements (including the initial position). So, we have $\frac{8!}{8} = 7!$ circular permutations of 8 guests.
In general, given $n$ objects, there are $\frac{n!}{n}$ circular permutations.
Simplifying the equation above, we have $\frac{n!}{n} = \frac{n(n-1)!}{n} = (n-1)!$.
Therefore, the number of circular permutations of $n$ objects is $(n-1)!$.
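The count is easy to check by brute force (a sketch of mine, not part of the original post): fix one guest in a seat and permute the rest, or group all linear permutations by rotation.

```python
from itertools import permutations

guests = ("A", "B", "C", "D")

# Fix guest A in one seat and permute the other three: each circular
# arrangement is counted exactly once, giving (n-1)! of them.
circular = [("A",) + rest for rest in permutations(guests[1:])]
print(len(circular))   # (4-1)! = 3! = 6

# Cross-check: group all 4! linear permutations by rotation.
def canonical(p):
    i = p.index("A")        # rotate the table so A sits first
    return p[i:] + p[:i]

distinct = {canonical(p) for p in permutations(guests)}
assert len(distinct) == 6   # ABCD, BCDA, CDAB, DABC all collapse to one
```

Both counts agree: the four guests can be seated around the table in 6 ways.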
This post was brought to you by Amanda Green.
Acid Rain Lesson Plan
Activity 2 – Understanding the difference between an acid and a base
Time: 1-2 hours
At the end of this lesson the student will be able to:
- Explain the difference between an acid and a base.
- Understand that an acid can be made more neutral by adding something which is basic or interacting with a natural buffering agent in nature.
- Become familiar with the pH scale.
You will need:
- Lemon juice
- Tomato juice (pure)
- Distilled water
- Salt water (3 tbs./1 cup distilled water)
- Milk of Magnesia
- Blank Chart (Figure 4) - one per student or group
- Enough wide-range (0-14 pH) litmus paper to give each group twenty-one 1-1/2 inch strips
- Eight 6-8 oz. cups (Because the students will need to use the comparison chart included with the litmus container, you may wish to obtain enough dispensers for each group to have one.)
Instructions to Teacher
- Refer to "Sources of Acid Pollution and the pH Scale," Figure 1.
- Distribute litmus paper and Figure 4 chart to students.
- Direct supervision is necessary when working with these materials.
- Put one of the seven samples in a cup and pass these among the students or groups for testing. The students should test each sample three times and arrive at an average pH using the following formula:
Example: Test 1 = pH 2, Test 2 = pH 3, Test 3 = pH 2, Total (2+3+2) = 7
Formula: Average pH = Total / 3 (pH = 7/3, or pH ≈ 2.3)
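For classrooms with computers, the averaging step can be sketched in a few lines (my illustration, not part of the lesson plan; the readings are the worked example's, not real measurements):

```python
# Sketch of the averaging step above.
def average_ph(readings):
    """Average of repeated litmus readings, per the formula above."""
    return sum(readings) / len(readings)

print(round(average_ph([2, 3, 2]), 1))   # 7 / 3 rounds to 2.3
```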
Instructions to Students
- Using the wide-range litmus paper, test each different sample three times.
- Using these three tests, calculate the average pH of the sample.
- Record your results on the data sheet, Figure 4. Include the three pH test figures, the sum of these, and the average pH.
- Repeat this for each of the seven samples.
- With the help of your teacher, add two Alka-Seltzer tablets to a cup of vinegar. Test this solution for pH.
Questions to Students
- How do your results of this test compare with the answers of the rest of the group?
- What is an acid? A base? Look for answers in reference books such as encyclopedias, science books, etc.
- Which samples are acidic? Neutral? Basic (alkaline)?
- Did the pH of the vinegar change when you added the Alka-Seltzer tablets? Why? Hint: Make up a cup of Alka-Seltzer and test it for pH. What does this tell you about the pH change in the vinegar - Alka-Seltzer solution?
Mendocino Ridge 4,600ft gas plume discovery off the California coast.
Okeanos Explorer, "America's Ship for Ocean Exploration", is equipped with the latest in technology systems, including multibeam sonar. This technology involves sending beams of sonar to the ocean floor and measuring the amount of time it takes for those beams to bounce back to the ship. In doing so, the sonar creates a 3-D "sound picture", or map, of the seafloor.
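The "time to bounce back" step is simple to sketch (my illustration with a nominal sound speed; NOAA's actual processing also corrects for the sound-speed profile and beam angle):

```python
# Depth from a sonar ping's round-trip time: depth = speed * time / 2.
SOUND_SPEED = 1500.0   # nominal speed of sound in seawater, m/s

def depth_from_echo(two_way_time_s: float) -> float:
    """Seafloor depth in meters from a sonar ping's round-trip time."""
    return SOUND_SPEED * two_way_time_s / 2

print(depth_from_echo(2.0))   # a 2 s echo implies a 1500.0 m deep seafloor

# The 4,600 ft (~1,402 m) plume height corresponds to a round-trip
# difference of about 1.87 s between its top and its base:
print(round(2 * 1402 / SOUND_SPEED, 2))
```

Sweeping many such beams side to side is what builds the 3-D "sound picture" of the seafloor -- and anything, like a gas plume, suspended above it.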
While the ship was testing its sonar off the coast of California, the sound waves bounced off gas in the water column, creating a remarkable image of a gas plume that rose 4,600 feet from the seafloor. A landslide area at the base of the plume has led some scientists to believe that the plume might be methane, released by the landslide from methane hydrates.
Okeanos Explorer is preparing to explore the waters north of Indonesia in the summer of 2010, in collaboration with Indonesia. The ship will continue to explore the western Pacific in 2011.
Video Credit: NOAA Office of Ocean Exploration and Research.
On 23 March 1950, the World Meteorological Organization (WMO) was formed for meteorology (weather and climate), operational hydrology and related geophysical sciences. It has 188 members.
World Meteorological Day is celebrated worldwide by the meteorological community every year on March 23 to commemorate the establishment of WMO. Each year a different theme is chosen for the occasion. WMO was designated a specialized agency of the United Nations System in 1951.
A good definition for a ray of light is provided by Wiki. It is a theoretical construct used to describe the propagation of light in ray optics (simply, an idealized path that light is assumed to take). You could draw an infinite number of rays from a point source of light. We require rays (or at least wave-fronts) for drawing diagrams of reflection, refraction, etc.
According to Wiki,
It is a line or curve perpendicular to the wavefront of light.
If we take Huygens' principle into account (every point on a given wavefront may be considered a source of secondary wavelets, which spread out in the medium at $c$; the new wavefront is the forward envelope of the secondary wavelets at that instant), we can do quite a lot here.
The number of photons in a ray of light over any given period of time (i.e. the rate of emission) is finite. But that number of photons also depends on our assumption (how we've chosen the ray). Also, the source is always emitting photons until it's switched off...
Photon is just the quantum of electromagnetic radiation or the carrier of EM energy (or force) having zero rest mass, exhibits wave-particle duality and which has the anti-particle only as itself. While calculating the frequency (or wavelength) of the light (or photon), you aren't considering a ray of light 'cause you don't require the use of it.
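The finite emission rate mentioned above follows from each photon carrying energy E = hf. A small sketch (my numbers -- the laser-pointer figures are illustrative, not from the answer):

```python
# Each photon carries energy E = h * f, so a source of finite power
# emits photons at a finite rate: rate = power / (h * f).
H = 6.626e-34   # Planck's constant, J*s

def photons_per_second(power_w: float, frequency_hz: float) -> float:
    """Emission rate of a source with the given power and frequency."""
    return power_w / (H * frequency_hz)

# A 1 mW green laser pointer (f ~ 5.6e14 Hz): roughly 2.7e15 photons/s.
rate = photons_per_second(1e-3, 5.6e14)
print(f"{rate:.2e}")
```

A huge number, but finite -- the energy of a ray only diverges in the idealized limits discussed below.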
I still can't understand how the energy becomes infinite. It becomes infinite only when the frequency is $\infty$ or the source is infinitely energetic (which is too idealized here).
Back in the 1960s and early 1970s, there was quite a lot of research published on the circadian rhythms in earthworms, mostly by Miriam Bennett. As far as I can tell, nobody’s followed up on that work since. I know, from a trusted source, that earthworms will not run in running-wheels, believe it or not! The wheels were modified to contain a groove down the middle (so that the worm can go only in one direction and not off the wheel), the groove was covered with filter paper (to prevent the worm from escaping the groove) and the paper was kept moist with some kind of automated sprinkler system. Still, the earthworms pretty much stood still and the experiments were abandoned.
Dr. Bennett measured locomotion rhythms in other ways, as well as rhythms of oxygen consumption, light-avoidance behavior, etc. With one of my students, some years ago, I tried to use earthworms as well – we placed groups of worms in different lighting conditions (they were inside some soil, but not deep enough for them to completely avoid light) – the data were messy and inconclusive, except that worms kept in constant light all laid egg-cases and all died (evolutionary trade-off between longevity and fecundity, or just a last-ditch effort at reproduction before imminent death?). Worms in (short-day and long-day) LD cycles and in constant dark did not lay eggs and more-or-less survived a few days.
I intended to write a long post reviewing the earthworm clock literature, but that was before I got a job….perhaps one day. But the news today is that there is a new paper that suggests that clocks may have something to do with a behavior all of us have seen before: earthworms coming out to the surface during or after a rain.
In the paper, Role of diurnal rhythm of oxygen consumption in emergence from soil at night after heavy rain by earthworms, Shu-Chun Chuang and Jiun Hong Chen from the Institute of Zoology at National Taiwan University, compared responses of two different species of earthworms, one of which surfaces during rain and the other does not. They say:
Two species of earthworms were used to unravel why some earthworm species crawl out of the soil at night after heavy rain. Specimens of Amynthas gracilis, which show this behavior, were found to have poor tolerance to water immersion and a diurnal rhythm of oxygen consumption, using more oxygen at night than during the day. The other species, Pontoscolex corethrurus, survived longer under water and was never observed to crawl out of the soil after heavy rain; its oxygen consumption was not only lower than that of A. gracilis but also lacked a diurnal rhythm. Accordingly, we suggest that earthworms have at least two types of physical strategies to deal with water immersion and attendant oxygen depletion of the soil. The first is represented by A. gracilis; they crawl out of the waterlogged soil, especially at night when their oxygen consumption increases. The other strategy, shown by P. corethrurus, allows the earthworms to survive at a lower concentration of oxygen due to lower consumption; these worms can therefore remain longer in oxygen-poor conditions, and never crawl out of the soil after heavy rain.
So, one species has low oxygen consumption AND no rhythm of it. It survives fine, for a long time, when the soil is saturated with water. The other species has greater oxygen consumption and is thus more sensitive to depletion of oxygen when the ground is saturated with water. Furthermore, they also exhibit a daily rhythm of oxygen consumption – they consume more oxygen during the night than during the day. Thus, if it rains during the day, they may or may not surface, but if it rains as night they have to resurface pretty quickly.
Aydin Orstan describes the work in more detail on his blog Snail’s Tales, and he gets the hat-tip for alerting me to this paper.
Chuang, S., Chen, J.H. (2008). Role of diurnal rhythm of oxygen consumption in emergence from soil at night after heavy rain by earthworms. Invertebrate Biology, 127(1), 80-86. DOI: 10.1111/j.1744-7410.2007.00117.x
Today, a woman doesn't quite get the Nobel Prize.
The University of Houston's College of Engineering
presents this series about the machines that make
our civilization run, and the people whose
ingenuity created them.
In 1913, Swiss chemist
Alfred Werner won the Nobel Prize for explaining
something Louis Pasteur had pointed out. Pasteur
had found two crystalline salts with exactly the
same chemical makeup. One bent polarized light to
the left, the other bent it to the right. One salt
seemed to be left-handed, the other, right-handed.
In 1897, Werner claimed that the molecular
arrangements in such molecules had to be mirror
images of each other. He also claimed that a huge
class of molecules had mirror images like that.
Other chemists laughed at him -- said he was wrong.
Finally, in 1911, an American student did a
terribly complex sequence of processes that
produced right and left-handed cobalt-based salts
for Werner. When other chemists saw that, their
resistance broke down. Two years later, Werner had
the Nobel Prize.
So far, this makes a fairly conventional story of
scientific discovery. But chemist Ivan Bernal has
found the oddest wrinkle in it. He picks up the
tale just after Werner first made his claims.
Around 1898, a remarkable young woman named Edith
Humphrey came from England to do her doctorate with Werner.
An English woman doing doctoral work in chemistry
in a foreign university was unheard-of a century
ago. But Humphrey was no ordinary woman. She became
Werner's first woman Ph.D.
She did her dissertation on the same cobalt salt
crystals that the American would synthesize ten years
later. But she did it without all that fancy
processing. In the course of her work she prepared
many crystals and left them with Werner, carefully
marked, in a box. And there they sat for 86 years.
Then Bernal heard about them and predicted they
would have the necessary left/right optical
property. Sure enough, Humphrey's crystals showed
exactly the same behavior Pasteur had seen. Werner
had his validation right there, and he'd missed it.
Meanwhile Werner had sent that American all around
the mulberry bush recreating Humphrey's crystals.
Bernal points out that his work was completely
unnecessary because her crystals already had
sufficient purity. Worse yet, Werner idolized
Pasteur and was completely aware of his use of
polarized light. Yet he'd never thought to shine
polarized light through Edith Humphrey's crystals.
If he had, the matter would've been settled a decade earlier.
Edith Humphrey went back to England and lived to
the age of 102. She set up a research laboratory
for a British dye and fabric company. She was its
chief chemist. If she or Werner had only thought to
test her crystals, she might've had part of a Nobel
Prize as well. But she died just before Bernal
figured that out. She died without ever knowing --
just how close she had come.
I'm John Lienhard, at the University of Houston,
where we're interested in the way inventive minds work.
Bernal, I., A Sketch of the Life of Edith Humphrey, A
Pioneer Inorganic Chemist Who Barely Missed Proving
Werner's Theory of Coordination Chemistry a Decade
Before It Was Eventually Demonstrated Correct.
Chemical Intelligencer, January 1999, pp.
I am grateful to Ivan Bernal, UH Chemistry
Department, for suggesting this topic and for
providing considerable counsel.
For more on chirality and mirror imaging, see
Episodes 604 and 1184.
The Engines of Our Ingenuity is
Copyright © 1988-1997 by John H. Lienhard.
Space Debris is also known as Space
Junk. Space Debris consists of millions of pieces of man-made
material orbiting the Earth.
- ISS adjusts orbit to avoid Space Debris
The International Space Station moved into a slightly higher orbit
on Friday, 13 January 2012 to avoid a close call with debris from
a 2009 satellite collision. Thrusters on the ISS's Zvezda module
fired for nearly a minute at 11:10 am EST (1610 GMT) Friday,
raising the station's orbit by 305 meters. The maneuver was
approved after the US Strategic Command detected a piece of debris
about 10 centimeters in diameter projected to come as close as one
kilometer to the station. The debris was a fragment of the Iridium
33 satellite, which collided with a defunct Russian satellite in
2009. The maneuver was the 13th debris avoidance maneuver in the
station's history; the maneuver also took the place of a
previously-planned reboost of the station next week.
Where did all the Space Debris come from?
Space debris consists of natural
(meteoroid) and artificial (man-made) particles. Meteoroids are in
orbit about the sun, while most artificial debris is in orbit
about the Earth.
Orbital debris is any man-made object in orbit about the Earth
which no longer serves a useful function. Such debris includes
nonfunctional spacecraft, abandoned launch vehicle stages,
mission-related debris and fragmentation debris.
There are more than 20,000 pieces of debris larger than a softball
orbiting the Earth. They travel at speeds up to 17,500 mph, fast
enough for a relatively small piece of orbital debris to damage a
satellite or a spacecraft. There are 500,000 pieces of debris the
size of a marble or larger. There are many millions of pieces of
debris that are so small they can't be tracked.
Today, telescopes and radar are
monitoring more than 12,000 pieces of junk down to 10 cm in size.
Many millions of pieces are too small to be recorded, such as
flecks of paint and dust.
Even tiny paint flecks can damage a spacecraft when traveling at
these velocities. In fact a number of space shuttle windows have
been replaced because of damage caused by material that was
analyzed and shown to be paint flecks.
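To get a feel for why even tiny fragments matter at these speeds, here is a back-of-the-envelope kinetic-energy estimate. It is only a sketch: the 7.8 km/s closing speed is an assumption, roughly the 17,500 mph quoted above, and real impact speeds vary widely.

```python
# Rough kinetic-energy estimate for orbital-debris impacts.
# Assumed closing speed: ~7.8 km/s (about 17,500 mph); real
# collision geometries give anything from near zero up to ~15 km/s.

def impact_energy_joules(mass_kg, speed_m_s=7800.0):
    """Kinetic energy E = 1/2 * m * v^2, in joules."""
    return 0.5 * mass_kg * speed_m_s ** 2

print(f"1 g paint fleck: {impact_energy_joules(0.001):,.0f} J")
print(f"5 g marble:      {impact_energy_joules(0.005):,.0f} J")
```

A one-gram fleck carries about 30 kJ, roughly the kinetic energy of a 1,000 kg car moving at 28 km/h, which is why even paint flecks pit shuttle windows.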
In 1958 the United States launched Vanguard I. It became one of
the longest surviving pieces of space junk. As of January 2012 it
remains the oldest piece of junk still in orbit.
In 1996, a French satellite was hit and damaged by debris from a
French rocket that had exploded a decade earlier.
On 10 February, 2009, a defunct Russian satellite collided with
and destroyed a functioning U.S. Iridium commercial satellite. The
collision added more than 2,000 pieces of trackable debris to the
inventory of space junk.
China's 2007 anti-satellite test, which used a missile to destroy
an old weather satellite, added more than 3,000 pieces to the
- Space Junk threatens future space travel.
- When the Hubble Space Telescope's solar panels were brought back
to Earth in 2002, they were peppered with impact craters up to 8 mm
across.
Further reading: Heiner Klinkrad, Space Debris: Models and Risk
Analysis (Springer Praxis Books / Astronautical Engineering).
Copyright © 2000-2013 Vic Stathopoulos. All rights reserved.
Updated: Saturday 23rd, February, 2013
Understanding how plants regulate element composition of tissues is critical for agriculture, the environment, and human health. Sustainably meeting the increasing food and biofuel demands of the planet will require growing crops with fewer inputs such as the primary macronutrients phosphorus (P) and potassium (K). P in fertilizer is non-renewable, too expensive for subsistence farmers, and inefficiently utilized by crops, leading to runoff and severe downstream ecological consequences. Plants comprise the major portion of the human diet, and improving their elemental nutrient content can greatly affect human health. However, efforts directed at a single element can have unforeseen deleterious effects. For example, limiting iron (Fe) or P can lead to increased accumulation of the toxic elements cadmium (Cd) and arsenic (As).
The Baxter lab is interested in understanding how plants regulate the mobilization, uptake, translocation, and storage of elements in different environments. We are focusing our efforts on the seeds of corn and soybeans, the two most commonly grown crops in the United States. The seeds are important not only as the component of the plant that gets used for food, but also as a summary tissue of many physiological processes that are important for plant growth. We also study model systems to understand basic processes and apply this knowledge to the crop plants.
The focus of our attempts to study this question is ionomics. We analyze the elemental content of 500-1000 samples per week using ICP-MS. We use this high throughput to study structured genetic populations grown in different environments. When we combine this data with cutting-edge statistical and bioinformatics methodologies, we are able to identify genes and Gene X Environment interactions that influence elemental accumulation.
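As a rough illustration of what a Gene X Environment (GxE) test looks like on ionomic data, here is a minimal sketch. All numbers and names are hypothetical; a real analysis would use many replicates per cell and a proper two-way ANOVA rather than a bare difference of means.

```python
# Hypothetical mean seed-phosphorus values for two genotypes
# grown in two environments (units arbitrary, data invented).
means = {
    ("G1", "low_P"): 2.1, ("G1", "high_P"): 3.0,
    ("G2", "low_P"): 1.2, ("G2", "high_P"): 3.1,
}

def interaction_effect(m):
    """Difference-of-differences: zero would mean the genotype and
    environment effects are purely additive (no GxE interaction)."""
    g1_response = m[("G1", "high_P")] - m[("G1", "low_P")]
    g2_response = m[("G2", "high_P")] - m[("G2", "low_P")]
    return g2_response - g1_response

print(f"GxE interaction effect: {interaction_effect(means):+.2f}")
```

A clearly nonzero difference-of-differences says the two genotypes respond differently to the environment, which is the signature such screens look for.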
One of the most exciting developments in programming over the last few years is the emergence of Ruby: a free, highly portable, open-source scripting language created by Yukihiro Matsumoto in 1995. It is already extremely popular in Japan and is coming on strong in the U.S. Web development community.
Ruby is an interpreted, object-oriented language with functionality similar to that of Java, Perl and Python. In addition, the Ruby-based Web-application framework, Ruby on Rails, enables developers to build powerful Web applications.
Ruby and Ruby on Rails are known for their simplicity. Some developers have experienced many-fold increases in productivity over .NET and Java/J2EE when building similar Web applications.
Our New Ruby Resource Center includes links to Ruby and Ruby on Rails resources, downloads, tutorials, documentation, books, e-books, journals, articles, blogs and more. | <urn:uuid:66e5d6f1-eee9-44bf-8041-fd1c40d181c8> | 2.78125 | 185 | Content Listing | Software Dev. | 31.562845 |
What Happened to MVC?
Remember MVC? Model-View-Controller? It was this great idea that said
if you separated these three aspects of an application it would
be much easier to write and manage your code. It disappeared from
sight a few years ago. Why?
All these frameworks, Tapestry, JSP, Struts, Spring for web
applications have sprung up and none of them are MVC. Why?
Let's review MVC and see what we can figure out.
Model: This is the set of objects that define the stuff you're working
with, be it customer information for a business or strings of DNA for
Genomic research such as I work with. Much of the model will be
persistent objects of some sort. A typical public method would be
Set findMatchingCustomers(CustomerFilter f)
This would be called by a controller in a web application or by any
application that wanted to do something with customers.
View: This displays things, specifically strings and images (what else
is there to display?). It makes no decisions about the data it
displays. It doesn't know anything about the data it displays. You
should be able to completely change the code for the model and not
even recompile the view code. About the only decision the view code is
allowed to make is how to handle longer-than-expected strings and
different sized images.
Controller: These objects receive events from the GUI (mouse clicks
and user-input strings), make requests of the model, analyze the
results, and decide on which view object to call and what strings and
images to pass it.
It is essential to recognize that these three sets of objects are
completely separate -- they live in different files, the Model and
View are compiled independently of each other. The Model code is
usable by lots of other applications.
The Model and Controller are both computational kinds of objects. They
are concerned with logical operations on data objects -- exactly what
OOPLs (such as Java) are very good at. By contrast, the View is
concerned with how things will look to the human being -- where the
strings and images will be positioned, what will be mouse-sensitive,
etc. Java is a lousy language for specifying GUI layout.
HTML is a giant step up from Java, although I still find it clumsy.
If the web were the place I think it should be, this wouldn't be an
issue. Not anymore than the lousy instruction set of the x86 is a
problem for high-level language programmers. I've written gobs of C
programs and never had to worry about which register a variable was
going to be stored in.
HTML should be completely hidden from sight in the same way.
Unfortunately, the web is not that nice place and HTML hangs out of
our applications like dirty skivvies on a 10-year-old kid. And I know
about dirty skivvies. I was a Boy Scout.
Oh, we're the boys from 33,
we aren't so very neat.
We never wash our underwear
or clean our dirty feet.
We're always filling our water buckets,
and water pistols too.
Oh we're the boys from 33.
Who in the heck are you!
So we're stuck with HTML for the time being. Fine. We can deal with
it. We have to generate the HTML anyway.
So, how should we generate the HTML? Java is lousy at it, so people
have written frameworks such as JSPs, Tapestry, Struts, Spring,
etc. These are all clearly much better than Java, but still kinda clumsy.
Tapestry's not MVC?
But it says it is, right there on page 10 of Shipp's book!
Well, there's mvc and then there's MVC.
If someone points to a line of code and says "That line is part of the
Controller" and then points to the
next line and says "That's part of the View" then they're talking
In other words, they're lying through their teeth
(possibly to themselves too). They're pretending their product fits
some socially accepted norm, when it doesn't.
Here's an example of some typical code from Shipp's Tapestry book:
image='ognl:getAssets("digit" + visit.game.incorrectGuessesLeft)'
It displays an image on a page. That's View code. But what's all that
OGNL stuff? Why it's calling Model code! Oh, dear.
Can I change the name of a method in the model (say, getAssets()?)
without changing this code?
This does not mean that Tapestry isn't an effective framework for
building web applications, but it does predict a set of expected
difficulities. Adding in the fact that Tapestry is not a complied
language with declared types, we can predict:
Minor changes to the Model code will ripple through to the end users,
who will discover that little-used operations (which lack complete
test coverage) that used to work now either fail or display incorrect
results.
We expect that the Controller code for deciding which page to display
next will be complex and have numerous subtle variations that relate
to the now ad-hoc nature of page selection.
We expect that when the user is in the middle of a transaction,
reissuing a previous request (because the user pushed the BACK button)
will prove awkward and often produce highly unsatisfactory results,
because it's hard to do.
If we look at radically different display paradigms, such as Swing, we
don't see these problems. Changes to the model are confined to the
model and the controller. The basic controller code for selecting the
next page to display is very simple, such as:
Set goodProspects = findMatchingCustomers(richCustomerFilter);
if (goodProspects.size() < 1000) ...
The final problem doesn't exist in Swing applications because there's
no independent form of navigation such as the browser's BACK button,
so that's not a fair comparison. Spring is able to deal with this
issue by adding information about the intended flow of the
application. (Unfortunately, it requires some redundant information.)
Are there better ways of writing browser-based applications than
Tapestry and Spring? I think yes, extremely yes.
I predict that someone is going to make a really big pile of money by
designing an MVC framework for web applications.
No, let me go one further. Somebody is going to design a window system
independent MVC framework that will behave identically in browsers,
in Swing, SWT, etc.
That person is going to be rich, Rich, RICH!!!
But that's not my focus here. My only objective here is to clarify the
notion of MVC and point out some of the hurdles that have to be
crossed by the monolithic frameworks we use today. In otherwords, many
of the problems that we see with these frameworks are exactly what MVC
would predict.
Astronomy for Kids
What is a black hole?
Black holes are one of the most mysterious and powerful forces in the universe. A black hole is where gravity has become so strong that nothing around it can escape, not even light. The mass of a black hole is so compact, or dense, that the force of gravity is too strong for even light to escape.
Can we see them?
Black holes are truly invisible. We can't actually see black holes because they don't reflect light. Scientists know they exist by observing light and objects around black holes. Strange things involving quantum physics and space-time happen around black holes. This makes them a popular subject of science fiction stories even though they are very real.
How are they formed?
Black holes are formed when giant stars explode at the end of their lifecycle. This explosion is called a supernova. If the star has enough mass, it will collapse on itself down to a very small size. Due to its small size and enormous mass, the gravity will be so strong it will absorb light and become a black hole. Black holes can grow incredibly huge as they continue to absorb light and mass around them. They can even absorb other stars. Many scientists think that there are super-massive black holes at the center of galaxies.
There is a special boundary around a black hole called an event horizon. It is at this point that everything, even light, must go toward the black hole. There is no escape once you've crossed the event horizon!
Black hole absorbing light
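For readers who want a number: the radius of the event horizon comes from the standard Schwarzschild formula r = 2GM/c². The sketch below uses rounded physical constants.

```python
# Schwarzschild radius r = 2*G*M/c**2: the event-horizon radius
# of a simple (non-rotating) black hole of mass M.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
SUN_MASS = 1.989e30  # mass of the Sun, kg

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C ** 2

r_sun = schwarzschild_radius_m(SUN_MASS)
print(f"A black hole with the Sun's mass: radius ~{r_sun:.0f} m")
```

That is only about 3 km; squeezing a whole star inside that radius is what "compact, or dense" means above. A black hole of a million solar masses has a radius a million times larger.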
Who discovered the black hole?
The idea of the black hole was first proposed by two different scientists in the 18th century: John Michell and Pierre-Simon Laplace. In 1967 a physicist named John Archibald Wheeler came up with the name "black hole" and we've called these amazing objects that ever since.
Fun Facts about black holes
- Black holes can have the mass of several million suns.
- They don't live forever, but slowly evaporate returning their energy to the universe.
- The center of a black hole, where all its mass resides, is a point called a singularity.
- Black holes differ from each other in mass and their spin. Other than that, they are all very similar.
- The black holes we know about tend to fit into two size categories: "stellar" size are around the mass of one star while "supermassive" are the mass of several millions of stars. The big ones are located at the centers of large galaxies.
Twenty-First Symposium on NAVAL HYDRODYNAMICS
can absorb waves efficiently in some cases, but are not always effective in general. Even when they are effective, they have to be placed sufficiently far away from the body. This means that the computational domain is usually big, which will require a large number of elements. On the other hand, when the required memory exceeds the physical memory of the computer, only a small percentage of CPU will be used. This makes the calculation extremely inefficient. Wu and Eatock Taylor therefore adopted domain decomposition. The required memory will then depend on the sizes of the subdomains which can always be subdivided if necessary. The continuity across the subdomain is achieved through iteration.
In this work, we shall use the three dimensional finite element method to consider the problem of a vertical cylinder in a wave tank. The methodology is first verified using the analytical solution for a two dimensional wave maker. The case may seem simple enough, but it is found that care is needed in dealing with cross waves. The computer code is also verified by the linearized analytical solution for three dimensional standing waves. The relative merits of various domain decomposition schemes are discussed. Results for the vertical cylinders in the tank are provided.
MATHEMATICAL FORMULATION AND NUMERICAL TECHNIQUE
We consider the problem of a vertical cylinder in a wave tank as shown in figure 1. (x,y,z) denotes a Cartesian co-ordinate system with x axis pointing in the longitudinal direction of the tank and z upwards. The origin of the
Fig. 1 The layout of the wave tank
system is located on the mean position of the free surface and the centre of the cylinder. B, L and d in the figure indicate the width, length and depth of the tank, respectively.
Based on the usual assumptions of ideal flow, the velocity potential satisfies Laplace's equation:

$$\nabla^2 \phi = 0$$

in the fluid domain Ω. The condition on the piston wave maker can be written as

$$\frac{\partial \phi}{\partial n} = U(t)$$

where U(t) is the velocity of the wave maker. On the fixed boundary the condition is:

$$\frac{\partial \phi}{\partial n} = 0$$

where n is the normal of the surface pointing out of the fluid domain. On the free surface z=η(x,y,t), the kinematic and dynamic conditions can be written as

$$\frac{\partial \eta}{\partial t} + \frac{\partial \phi}{\partial x}\frac{\partial \eta}{\partial x} + \frac{\partial \phi}{\partial y}\frac{\partial \eta}{\partial y} = \frac{\partial \phi}{\partial z}, \qquad \frac{\partial \phi}{\partial t} + \frac{1}{2}\nabla\phi\cdot\nabla\phi + g\eta = 0$$
where g is the gravitational acceleration. These are then combined with the initial conditions which usually assume that the wave elevation and the potential on the free surface are zero.
When the wave generated by the wavemaker encounters the cylinder, it will be diffracted. The reflected wave will travel back towards the wave maker. The transmitted wave, on the other hand, will propagate towards the other end of the tank. As the time step increases, the waves reflected by the wavemaker and the far end of the tank will arrive at the cylinder. This will distort the wave loading on the cylinder. Several approaches have been proposed to absorb the reflected wave (e.g. ) and the transmitted wave (e.g. ). They are effective in some cases but they all have their limitations. Here we do not intend to investigate the effectiveness of various wave absorption schemes at the far end. Instead we simply use a relatively long tank and the computation is stopped before the reflected waves reach the cylinder.
Name: G High
Why did the dinosaurs die out?
The current theory involves the impact of a huge asteroid from
space, which resulted in a large dust cloud going into the atmosphere when
this supposed asteroid struck the earth ages ago. The theory is that
because of the large, thick cloud of dust, the sunlight intensity
was cut considerably; this reduction in sunlight had the effect of
reduced vegetation which served as food for the dinosaurs, thereby
partially causing their demise. A secondary contributory factor
mentioned is a cooling of the earth over a substantial period.
Please let me know if this answers your question satisfactorily.
Thanks for using NEWTON!
Update: June 2012 | <urn:uuid:68ab38e2-d768-4379-b544-674b74e2e78d> | 3.625 | 156 | Q&A Forum | Science & Tech. | 40.900671 |
(Lansing State Journal, December 4, 1992)
Question submitted by: J.D. Jackson of DeWitt
Let's consider a simple cooking process: boiling water. The energy of water molecules increases as they are heated. When water boils,the more energetic water molecules escape from the liquid as steam.
At normal atmospheric pressure, the boiling point of water is 100 degrees Celsius (212 degrees Fahrenheit). If you put a tight lid on a pot, steam will accumulate under the lid, so the pressure on the top of the water becomes greater than atmospheric pressure. For a water molecule to escape, it must now have more energy than if it had to escape from an uncovered pot.
At high altitudes, we encounter the opposite case; the air is thinner and the atmospheric pressure is less than at sea level. Water molecules can escape as vapor at much lower energies, so water will boil at a lower temperature. The higher the altitude, the lower the boiling point.
In Denver (about 1,600 m above sea level), the boiling point of water can be as low as 95 degrees Celsius (203 degrees Fahrenheit). At the top of Mt. Everest (about 8,850 m above sea level), water boils at roughly 75 degrees Celsius (167 degrees Fahrenheit).
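The figures above can be roughly reproduced from first principles by combining the barometric formula with the Clausius-Clapeyron relation. This is an approximate sketch: the constants are rounded, the altitudes assumed are 1,600 m for Denver and 8,850 m for Everest, and real values shift with the weather.

```python
import math

P0 = 101325.0    # sea-level pressure, Pa
T0 = 373.15      # boiling point of water at P0, K
L  = 40660.0     # latent heat of vaporization of water, J/mol
R  = 8.314       # gas constant, J/(mol*K)
M_AIR = 0.02896  # molar mass of air, kg/mol
GRAV  = 9.81     # gravitational acceleration, m/s^2
T_ATM = 288.0    # assumed mean air temperature, K

def boiling_point_c(altitude_m):
    # Barometric formula: pressure falls exponentially with altitude.
    pressure = P0 * math.exp(-M_AIR * GRAV * altitude_m / (R * T_ATM))
    # Clausius-Clapeyron: solve for the temperature at which water's
    # vapor pressure equals the ambient pressure.
    inv_t = 1.0 / T0 - (R / L) * math.log(pressure / P0)
    return 1.0 / inv_t - 273.15

for place, alt in [("Sea level", 0), ("Denver", 1600), ("Mt. Everest", 8850)]:
    print(f"{place:12s} ({alt:5d} m): ~{boiling_point_c(alt):.0f} deg C")
```

This gives roughly 100, 95 and 72 degrees Celsius, in line with the column's figures.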
Cooking at high altitudes requires either higher cooking temperatures or longer cooking times due to the lower atmospheric pressure. In baking, increasing the temperature of the oven counteracts the lower internal temperature of the food. In cooking on top of the stove, the lower atmospheric pressure means liquids boil at lower temperatures, requiring longer cooking times.
Mysterious Signal From Outer Space
Posted 31 January 2012 - 12:04 PM
A mysterious signal coming from a region of space between the constellations Pisces and Aries has been picked up on three different occasions by the Arecibo radio telescope in Puerto Rico. The signal is very puzzling and does not resemble any known astronomical phenomenon. Researchers who have studied its frequency pattern do not believe it is natural interference or noise.
Was the signal transmitted deliberately by an extraterrestrial civilization on a distant planet? Scientists remain cautious but we cannot dismiss the possibility.
Astronomers believe there are about 10,000 intelligent civilizations in our galaxy alone. Let's not forget that there are hundreds of billions of galaxies in the Universe, which means the Universe may actually be teeming with life.
Some years ago, in 2008, astronomers announced they picked up a mysterious signal from outer space. It was not the last time they heard the mysterious sound.
SETI and other astronomers were excited about the news, but they were also worried the signal may never be completely decoded.
"We probably won't be able to decode it. We'll know something's out there, but we won't know much about their civilization, " said Dan Wertheimer of the UC Berkeley SETI Project.
SHGb02+14a, as the signal has been named has been heard on three occasions adding up to about a minute. This is not long enough firmly to establish its source, but its frequency of 1420 megahertz has interested scientists, as it is a main frequency at which hydrogen, the most common element in the Universe, absorbs and emits energy.
Scientists have various opinions about the nature of SHGb02+14a.
Eric Korpela of Berkeley, who has analyzed the signal, said: "We are looking for something that screams out artificial. This doesn't, but it could be because it is distant."
Dr. Korpela points out that interference with the Arecibo telescope could also make the signal look like it is always coming from the same point. "Perhaps there is an object on the ground near the telescope emitting at about this frequency." David Anderson, director of Seti@home, said: "It is unlikely to be real, but we will definitely be re-observing it."
Jocelyn Bell Burnell, of the University of Bath, said that the signal could be a previously unknown astronomical phenomenon, such as a pulsar she detected in 1967. "It may be a natural phenomenon of a previously undreamt-of kind like I stumbled over," she said.
Woodruff Sullivan, of the University of Washington in Seattle, said the research suggests that a message from an advanced alien civilization could already be lurking undetected in the solar system.
"This scenario is reminiscent of Arthur C. Clarke's 2001: A Space Odyssey, in which a monolith discovered on the Moon has been left by extra-terrestrials. If archaeologists were to find such an object, it would hardly be the first time that science fiction had become science fact," said Sullivan
Of course, even if astronomers somehow manage to decode the signal, they will face another problem - What should we reply to an alien civilization? How can we communicate with these beings?
We still don't know if someone has been trying to contact us, but the signal coming from a galaxy far away remains intriguing, and we hope we may one day find out whether it is of artificial origin or an unknown natural phenomenon.
I doubt this means the Reapers are arriving, but it's interesting regardless.
It's over... 1 MILLION!
Posted 31 January 2012 - 01:18 PM
Seems that it's from around 2003 though.
Edited by Piglet, 31 January 2012 - 01:28 PM.
Posted 31 January 2012 - 01:31 PM
pfft, how did they come up with that figure? We've yet to actually find life outside of earth, let alone sentient beings.
This source doesn't look legit either. PHAIL
Posted 31 January 2012 - 01:34 PM
Leapin' Lizards; May 2009; Scientific American Magazine; by Stuart Fox; 2 Page(s)
For almost a century, scientists struggled to explain how the extinct reptiles called pterosaurs managed to get off the ground. In regard to the smaller pterosaurs, bird models sufficed; flapping from standstill or a running start could work. But for the larger pterosaurs, some of which had a 26-foot wingspan and weighed 200 pounds, scientists could not find a bird model that explained takeoff.
That is because they did not take off like birds, thinks Michael Habib, who studies functional anatomy and evolution at Johns Hopkins University. After analyzing the biomechanics of the creatures, Habib proposes that pterosaurs took flight by using all four limbs to make a standing jump into the sky, not by running on their two hind limbs or jumping off a height, as more widely assumed. | <urn:uuid:60afad16-a8fe-4e47-9541-8833dee4005f> | 3.984375 | 189 | Truncated | Science & Tech. | 42.478 |
From the SOLID bunch, I guess this is a fairly easy principle to understand. One of the basics of OOP is that we must code to interfaces and not to classes. Assuming that you have done that, just make sure that your interfaces are not getting FAT. FAT is a common word used to explain this principle. By FAT we mean an interface with loads of methods in it, where not all of them are required by the classes that implement these interfaces. One direct issue with FAT…Continue
Added by Rohit Pant on May 31, 2012 at 1:00am — No Comments
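The FAT-interface point from the post above can be sketched in a few lines. Python is used here for brevity; all names are invented for the example.

```python
from abc import ABC, abstractmethod

# Interface segregation: instead of one FAT interface that forces
# every implementor to provide eat(), split it into small,
# role-specific interfaces. All names are illustrative.

class Workable(ABC):
    @abstractmethod
    def work(self): ...

class Feedable(ABC):
    @abstractmethod
    def eat(self): ...

class Human(Workable, Feedable):
    def work(self): return "coding"
    def eat(self):  return "lunch"

class Robot(Workable):   # no meaningless eat() forced on it
    def work(self): return "welding"

print(Human().work(), Robot().work())
```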
Barbara Jane Liskov was the first American woman computer scientist to earn a PhD in computer science. She is also the one who came up with the Liskov substitution principle, the 'L' in SOLID. She coined this term in 1987. First let's see what it means in her own words. "If for each object o1 of type S there is an object o2 of type T such that for all programs P defined in terms…Continue
Added by Rohit Pant on February 19, 2012 at 12:00am — No Comments
Open Close Principle
'O' in SOLID is the Open Close Principle. Termed a long time back, it is of course still very effective today. It was coined around 1988 by Bertrand Meyer in his …Continue
Added by Rohit Pant on February 11, 2012 at 6:30am — No Comments
Added by Rohit Pant on December 3, 2011 at 2:00am — No Comments
Time for Singleton.
“Ensure a class only has one instance, and provide a global point of access to it”
Why Singleton ?
Well, why not? This is a very common pattern and as its intent suggests it is all about making only one instance per class. There might be a strong…Continue
Added by Rohit Pant on September 1, 2011 at 12:30am — No Comments
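The intent quoted in the Singleton post above fits in a few lines. Python is used here for brevity; a production version would also need to consider thread safety.

```python
# Classic Singleton: ensure a class has only one instance and
# provide a global point of access to it.

class Singleton:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
        return cls._instance

a = Singleton()
b = Singleton()
print(a is b)   # -> True
```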
Time for another pattern. Abstract Factory (AF). Let's start with the GoF intent.
"Provide an interface for creating families of related or dependent objects without specifying their concrete classes"
There are many ways of arriving at this pattern when it…Continue
Added by Rohit Pant on August 14, 2011 at 10:00pm — No Comments
It's time for the Decorator pattern today. It's an easy pattern and, as the name suggests, it surely decorates core objects as per the business requirements. Let's start.
“Attach additional responsibilities to an object dynamically. Decorators provide a flexible alternative to sub classing for extending…Continue
Added by Rohit Pant on July 31, 2011 at 1:30pm — No Comments
It's time for Composite pattern.
“Compose objects into tree structures to represent part-whole hierarchies. Composite lets clients treat individual objects and compositions of objects uniformly” .
This is definitely a better definition than Bridge, isn't it ? Gives a clear hint that there might be times when we need a tree structure to come to our rescue…Continue
Added by Rohit Pant on July 9, 2011 at 6:30am — No Comments
GoF intent: “Decouple an abstraction from its implementation so that the two can vary independently”. This is one definition which is magical. If I see the words used for constructing the definition, I feel that I know these words. Words like Decouple, Abstraction, Implementation - we come across them all the time. But if I read the sentence I don't understand it. How is that possible? It's pure magic. Let us try to understand this pure…Continue
Added by Rohit Pant on July 2, 2011 at 11:30pm — No Comments
GoF define Façade's intent as “Provide a unified interface to a set of interfaces in a subsystem. Façade defines a higher-level interface that makes the subsystem easier to use”.
A good way to remember Façade is to imagine a sub system that is badly designed or complex or legacy system. In all the cases there might be a need to reuse the system because of some reason or the other. You know sometimes we do get to work in a project which needs to interact…Continue
Added by Rohit Pant on June 11, 2011 at 10:30pm — No Comments
There is no real reason to pick these three patterns for the first blog on series of pattern related blogs. It's just that we had to make a start, somewhere ! But these three patterns are very interesting and to some extent very similar. Consider a diagram below
Client → Adapter → Original Class
Client → Façade → Original…Continue
Added by Rohit Pant on June 10, 2011 at 3:00am — No Comments
There is a possibility of making one of the systems that we developed for one of the client to be available to more than one of his clients :) The requirement is straight forward - one installation should be accessible to multiple organizations/set of people or whatever. I think it's a common scenario these days but something I had actually not done in past. At i2 (where I used to work earlier) there was no such concept at least in the product lines…Continue
Added by Rohit Pant on May 10, 2011 at 4:00am — No Comments
I recently hosted a podcast on AWS on Bangalore Software Radio. You can get it here. But this podcast got me thinking about many other things that I see changing around me and how they clearly depend on each other. For instance – top class hardware is now a commodity. What about software ? Some time back hardware was expensive. So after spending million of dollars on hardware one would not mind spending a…Continue
Added by Rohit Pant on April 11, 2011 at 1:06am — No Comments
Very recently I started Bangalore Software Radio and did our first recording. It was a tech news podcast and we could talk a bit about Twitter, HTTPS, Security, Google and Duck Duck Go !
In future we are planning to do more newscasts and more detail technical podcasts so watch this space !
Added by Rohit Pant on March 29, 2011 at 9:00am — No Comments
Added by Rohit Pant on March 7, 2011 at 3:00am — No Comments
Added by Rohit Pant on February 14, 2011 at 11:30pm — No Comments
Some of the engineers are just too crazy about cricket. Earlier at Stragure they had made India Vs SA android app over the weekend. Before I knew in last two days, they made one more small app for world cup. Today I saw them recording a video about the same and lo before I could blink they uploaded it on youtube.
By the way Ind Vs SA series android app did well in it's…Continue
Added by Rohit Pant on February 14, 2011 at 10:00pm — No Comments
Added by Rohit Pant on February 6, 2011 at 7:30am — No Comments
Around 2 weeks back when we decided to write some server side code which would finally come down to mobile (android to start with), we did not know what to expect. We forgot to put Google analytics and therefore, could not do deep slice and dice on hits. Finally reading server logs was not too much fun and we decided to put Google analytics in place and got some…Continue
Added by Rohit Pant on January 21, 2011 at 3:53am — No Comments
Hey - guess what , yesterday Friday afternoon a bunch of us sat together and thought 'let's do something for getting the live cricket scores on mobiles'.
So this is what we did -
1) We wrote a small server side Java code which runs inside a servlet container to handle the load, hopefully :)
2) We wrote a small android code which runs on the device
3) We don't want the stuff to go down under load , so we load balanced it and for the time being…Continue
Added by Rohit Pant on January 8, 2011 at 10:01am — No Comments | <urn:uuid:67f2840b-23fa-4e1a-b557-65002209cb35> | 3.328125 | 1,671 | Content Listing | Software Dev. | 55.826942 |
If Water Vapor Were a “Greenhouse Gas” Droughts Would Cause Cold Snaps Not Heatwaves
Here we go again. Multiple news outlets have been asserting of late that manmade global warming is causing the current Australian drought and heatwave.
Here we go again. Multiple news outlets have been asserting of late that manmade global warming is causing the current Australian drought and heatwave. During the summer of 2012 it was the drought and heatwaves of the Central Plains of the United States that were said to be proof of manmade global warming; in 2011 it was the drought and heatwaves in Texas; in 2010 it was the drought and heatwaves in Russia. Just two months ago the World Bank released a report entitled, “Why a 4°C Warmer World Must be Avoided, Turn Down the Heat,” which has since been cited in dozens of news outlets bolstering the mass hysteria currently sweeping the globe over impending catastrophic manmade global warming. Attributing droughts and heatwaves to manmade global warming they wrote, “an exceptional number of extreme heat waves occurred in the last decade; major food crop growing areas are increasingly affected by drought” and “Increasing vulnerability to heat and drought stress will likely lead to increased mortality and species extinction.” Regardless of how alarming these reports may be and how frequently they are cited in the news they all betray an unfortunate reality; those who fret over impending catastrophic manmade global warming don’t even understand the scientific hypothesis upon which it is based—anthropogenic humidity.
To review, the hypothesis in question asserts that at current emission levels carbon dioxide will not by itself cause significant “greenhouse warming” but will induce the oceans to evaporate more and more water into water vapor, which is said to be the most powerful, heat trapping, “greenhouse gas.” This “anthropogenic humidity”, in turn, is anticipated through “positive feedback” to cause the catastrophic global warming that is presumed to loom over the horizon. The problem with this hypothesis is that the heatwaves mentioned in the above news reports were not caused by humidity, anthropogenic or otherwise; rather they were caused by its absence; they were caused by droughts, which is a dearth of humidity. If the “greenhouse effect” hypothesis were true, if “greenhouse gases” cause atmospheric warming then droughts would cause cold snaps not heat waves!
You see, the IPCC asserts that the “greenhouse effect” causes ~33 °C of atmospheric warming and other sources assert that water vapor is responsible for at least 20 °C of this warming. Therefore, when nature creates a drought and takes at least half of the water vapor out of the air in a particular region the temperature in that region should drop at least 10 °C (18 °F), but the opposite occurs—heatwaves ensue. Let’s take a look at a several recent and historical examples of this natural phenomenon.
2013- Australia: The current heat wave occurring in Australia has been preceded by 5 months of unusually low rainfall. “Severe rainfall deficiencies persist across most of South Australia and in southern Queensland. This follows below average rainfall across eastern Queensland, central and northwestern New South Wales in December, and persistent dry conditions over southeast Australia since August.”.
2012- Central Plains of the United States: Last years heatwaves afflicting the Central Plains of the United States were brought about by a concomitant drought.
2011- Texas: The Texas heatwave of 2011 was also brought on by a drought.
2010- Russia: The record breaking Russian heatwave of 2010 resulted from the worst drought in 40 years.
1923 & 1924 Marble Bar in Australia:“The town is far enough inland that, during the summer months, the only mechanisms likely to prevent the air from reaching such a temperature involve a southward excursion of humid air associated with the monsoon trough, or heavy cloud, and/or rain, in the immediate area.” Said humidity, clouds and rain were very low during these years.
1936- North American Dust Bowl:“The phenomenon was caused by severe drought . . .”
1976- Great Britain:“. . . from June 22 until August 26, a period of nine weeks, the weather was consistently dry, sunny and hot. It should also be remembered that summer 1976 marked the culmination of a prolonged drought which had begun in April 1975.”
Even though it is typically reported that a heat wave will bring on a drought the opposite is always the case—heat waves are invariably preceded by lower than normal precipitation just as deserts, which are places of permanent drought, are invariably several degrees warmer on average than their more humid counterparts along the same latitude. If droughts where, in fact, caused by the heat trapped by water vapor via a “greenhouse effect” then as soon as the water vapor was wrung out of the air the temperature would plummet since the “greenhouse effect” hypothesis asserts that without the presence of “greenhouse gases” the atmosphere loses its capacity to prevent heat from escaping into space.
You will not find in any history book an incident of a heatwave being brought on by too much rain—by too much humidity—and this is no small technicality. Take Atlanta for example in the summer of 2011; when the temperature reached 105 °F after a month of no rain a calamity was declared, while in arid Phoenix when temperatures routinely reach 110 °F in July these are seen as typical balmy summer days. So again, if the “greenhouse effect” hypothesis were true droughts and heatwaves could not co-exist, because the dry air in drought stricken regions would be deprived of the atmosphere’s most powerful “greenhouse gas” and temperatures would plummet. Since temperatures in drought stricken regions sore instead of plummet we must conclude that water vapor is not actually a “greenhouse gas” because it causes atmospheric cooling rather than atmospheric warming and that the planet and humanity have nothing to fear from “anthropogenic humidity”. | <urn:uuid:395c49c8-e208-4457-9d99-71edf236e24e> | 2.859375 | 1,273 | Personal Blog | Science & Tech. | 36.767159 |
The Ocean Biome
The ocean holds the largest of all biomes on Earth. It covers 70% of the planet’s surface.
Life in the ocean is diverse. The smallest creatures that call the ocean home are microscopic and made of a single cell. The largest creatures are blue whales, which can be as much as 34 meters (110 feet) long. There are many different ways to live in the ocean too. Some animals travel thousands of miles through ocean water while others stay in the same place on the ocean floor for their entire lives. Some burrow beneath the sand while others float near the water surface.
The ocean is not uniform, nor is the marine life within it. While life in the ocean is often described as one biome, there are actually many specific ecosystems within the ocean that are characterized by physical conditions such as water temperature, the amount of sunlight that penetrates through the water, and the amount of nutrients.
Sunlight penetrates the top layer of ocean water, as much as 200 meters (656 feet) deep. This allows phytoplankton, algae, and plants like seagrass to make their own food through the process of photosynthesis. Almost all marine life (about 90%) lives within this top, sunlit layer of the ocean. Photosynthesizing organisms are the start of most marine food chains except for those in the deep ocean where there is no sunlight.
The temperature of ocean water varies depending on its location. Closer to the Earth’s polar regions, ocean water is colder. Closer to the equator, ocean water is warmer. Water that is deep in the ocean is colder than water that is near the ocean surface. Many animals can only survive at certain temperatures. Other animals in the ocean are able to survive at a range of temperatures and can live in more places in the world’s ocean basins.
The following links give a broad overview of four different environments where life flourishes in the ocean. Within these areas, a variety of specific ecosystems exist such as coral reefs, kelp forests, and hydrothermal vents.
- Life in the Intertidal Zone - Where the ocean meets the land, hardy animals, plants, and algae make their homes between the low tide and high tide levels.
- Life in the Open Ocean – The largest area of the marine ecosystem, the open ocean is home to swimming fish, drifting plankton, and other creatures.
- Life in the Shallow Ocean – In the sunlit parts of the shallow ocean, many animals, plants, and algae thrive on the seafloor as fish zoom above.
- Life in the Deep Ocean – This cold, dark world is home to some very unusual extreme environments where animals and bacteria have found ways to survive. | <urn:uuid:68fae255-8d8a-4ebc-ae04-b1f59496a2ac> | 4.0625 | 569 | Knowledge Article | Science & Tech. | 47.403557 |
Geometric Structures: Finite Arithmetics
York College (CUNY)
Jamaica, New York 11451
There are many points of view for discussing numbers. These include the use of numbers in counting and measuring. Another important point of view is the role numbers play in thinking about the solutions of equations.
If one is given the equation x + 8 = 2, then this equation has no solution if x is confined to be a counting number. However, x + 3 = 45 does have a solution among the counting numbers, namely x = 42. If an equation does not have a solution for a particular collection of numbers Y, one can ask whether or not Y can be extended to a new collection of numbers Y', which in some sense contains Y and in which the equation does have a solution.
For the equation x + 8 = 2, one can extend the counting numbers to a bigger collection of numbers, usually denoted Z and called the integers, where the equation does have a solution, namely x = -6. Furthermore, there is a natural way in which the integers contain the counting numbers as a subset.
Now consider the equations 4x = 12 and 4x = 11, where x is restricted to Z. The first equation has the solution x = 3, while the second has no solution in Z. Yet, once again, one can extend Z to a new number system where 4x = 11 does have a solution. This number system is usually denoted by Q, the rational numbers. One perspective on rational numbers is that they have the form a/b where a and b are members of Z, with the restriction that b cannot be 0. In a very formal setting one thinks of Q as a "new system" with its own addition and multiplication, containing a subset of numbers which behave exactly the way the elements of Z behave. Although this approach is rather "dry," it is necessary for mathematical precision. Here, I take a more informal approach.
The rational numbers, too, are not rich enough to enable one to solve all the equations that one might want to. Thus, the simple equation x^2 = 2 has no solution when restricted to Q. This result is quite astonishing and surprised Greek geometers when they realized that an isosceles right triangle with legs of length 1 had a hypotenuse whose length was not rational. The numbers that make it possible to solve polynomial equations with integer coefficients are known as the algebraic numbers. Yet, they are not rich enough to capture all the numbers that arise naturally in Euclidean geometry. A circle of radius 1 has a circumference of length 2π. It turns out that π is neither a rational number nor an algebraic number, though neither of these facts is easy to show. Numbers such as π belong to an extension of the rational number system known as the real numbers. These numbers can be thought of as the "limits" of infinite sequences of rational numbers. In fact, there are a variety of approaches to constructing the real numbers. Those numbers which are real but are not algebraic are known as transcendental numbers. In an astonishing sense, made precise by Georg Cantor, there are many "more" transcendental numbers than algebraic real numbers.
Although the real numbers are a very rich collection of numbers with many nifty properties, they still do not enable one to solve some very simple equations. For example, x^2 + 1 = 0 has no real number solution. The way to deal with this problem is again to extend an existing number system, creating a new number system which "contains" the real numbers and for which x^2 + 1 = 0 does have a solution. This system is known as the complex numbers. The complex numbers have the form a + bi where a and b are real numbers and i is a special symbol with the property that i^2 = -1. To multiply two complex numbers, one multiplies them as if they were polynomials and replaces any i^2 that appears with -1. We will see that this "trick" enables one to construct many other interesting number systems in a similar way.
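The "multiply as polynomials and replace i^2 with -1" trick can be checked directly. The sketch below (the function name cmul is ours, not standard) compares it with Python's built-in complex type:

```python
def cmul(z, w):
    """Multiply z = (a, b), standing for a + bi, by w = (c, d), standing for
    c + di, by expanding the product as a polynomial and replacing i^2 with -1."""
    a, b = z
    c, d = w
    # (a + bi)(c + di) = ac + adi + bci + bd*i^2 = (ac - bd) + (ad + bc)i
    return (a * c - b * d, a * d + b * c)

# (1 + 2i)(3 + 4i) = 3 + 4i + 6i + 8*i^2 = -5 + 10i
print(cmul((1, 2), (3, 4)))           # -> (-5, 10)
print(complex(1, 2) * complex(3, 4))  # Python's built-in agrees: (-5+10j)
```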
The real numbers obey a lot of nice "rules" such as commutativity and associativity of both addition and multiplication. Addition and multiplication are linked by a distributive law: a(b + c) = ab + ac. The real numbers, in addition to their algebraic properties, such as the ones just listed, are also ordered. This means that for any pair of real numbers, either they are in fact the same real number or one is "bigger" than the other. Though the complex numbers share the nice algebraic properties of the real numbers, they are not ordered. The traditional name for an algebraic system with two operations + and × which obeys the algebraic rules shared by the rational, real, and complex numbers is a field. There are a variety of ways to give a formal definition of a field. For those familiar with group theory, the additive structure of a field is a group which obeys the commutative law (i.e. an Abelian group), and the non-zero elements also form a commutative group under multiplication. The two operations are connected by the distributive law: a(b + c) = ab + ac.
An interesting question is whether or not there are number systems which have the same algebraic properties as the rational numbers, the real numbers, and the complex numbers. In addition to the properties mentioned above, these systems have special numbers denoted 0 and 1, and the equation x + a = 0 has the solution x = -a, while ax = 1 (a not zero) has the solution x = 1/a. Thus, every element in the number system has an additive inverse and every element other than 0 has a multiplicative inverse.
The concepts needed to show that there are indeed finite arithmetics with the same algebraic properties as the real or rational numbers are surprisingly recent. The first tools, the theory of congruences, were developed by Euler, Legendre, and Gauss. Rather than do this in general we will consider a specific example. Consider the prime number 5. (The primes are those positive integers 2 or more which have only 1 and themselves as divisors.) When an integer n is divided by 5, whether it is positive or negative, it can be written in the form n = 5q + r where q is an integer and r is an integer satisfying 0 ≤ r < 5. Thus, any integer, when divided by 5, can be thought of as being associated with either 0, 1, 2, 3, or 4. The standard way to express this uses the idea of congruence: when two numbers a and b have the same remainder when divided by m, we write a ≡ b (mod m), read "a is congruent to b mod m." (The notation of using three parallel bars is due to Gauss.)
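The claim that every integer n, positive or negative, can be written as n = 5q + r with 0 ≤ r < 5 can be checked by machine; in Python, divmod and % already follow this convention for negative n. A sketch, with helper names of our choosing:

```python
def congruent(a, b, m):
    """True when a and b leave the same remainder on division by m."""
    return a % m == b % m

# every integer falls into one of the five classes 0..4 mod 5
for n in (-7, 3, 12, 63):
    q, r = divmod(n, 5)          # n = 5q + r with 0 <= r < 5
    assert n == 5 * q + r and 0 <= r < 5
    print(f"{n} = 5*{q} + {r}")

print(congruent(7, 2, 5))   # -> True, since 7 and 2 both leave remainder 2
print(congruent(63, 3, 5))  # -> True, since 63 = 5*12 + 3
```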
Any integer is congruent to 0, 1, 2, 3, or 4 modulo 5. The next step is to treat the classes into which the integers are partitioned by their remainder mod 5 as if they were numbers! These numbers will be denoted by the same digits in boldface: 0, 1, 2, 3, and 4. To find the sum or product of two of these numbers we merely take any integers that leave those remainders mod 5, perform the usual addition or multiplication on them, and take as the answer the remainder of the result mod 5. Thus, to find the product of 2 and 4 we can take the product of 7 and 9 (which leave remainders of 2 and 4 when divided by 5). Since 7(9) is 63, and 63 divided by 5 leaves the remainder 3, the product of 2 and 4 is 3. For this approach to be valid we have to prove that had we used, for example, any integer other than 7 whose remainder is 2 when divided by 5 to represent 2, we would get the same answer. It is not difficult to provide these proofs. One can easily construct the 5×5 addition and multiplication tables for the "bold" numbers; the resulting arithmetic is known as Z5!
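Building the tables and checking the well-definedness claim is a brute-force exercise; the following sketch verifies that the choice of representatives never changes the answer:

```python
p = 5
# the 5x5 addition and multiplication tables for Z5
add = [[(a + b) % p for b in range(p)] for a in range(p)]
mul = [[(a * b) % p for b in range(p)] for a in range(p)]

print(mul[2][4])   # -> 3, the product of bold 2 and bold 4 in Z5

# well-definedness: 7 represents 2 and 9 represents 4 mod 5,
# and 7 * 9 = 63 leaves the same remainder as 2 * 4 = 8
assert (7 * 9) % p == (2 * 4) % p == 3

# exhaustive check: shifting either factor by any multiple of 5 never matters
for a in range(p):
    for b in range(p):
        for i in range(3):
            for j in range(3):
                assert ((a + i * p) * (b + j * p)) % p == mul[a][b]
```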
I mentioned above that for the reals and rational numbers there were additive inverses and multiplicative inverses for non-zero values. Is that true in Z5? It turns out the answer is yes. For example, since 3 + 2 = 0 we see that 2 is the additive inverse of 3 and 3 is the additive inverse of 2. Also, since 2 times 3 equals 1, it follows that 2 is the multiplicative inverse of 3 and 3 is the multiplicative inverse of 2. In fact, the numbers in Z5 obey all of the algebraic properties of the real numbers, the complex numbers, or the rational numbers, although, like the complex numbers, they can not be ordered.
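Inverses in Z5 can be found by exhaustive search. The sketch below (helper names ours) confirms the inverses mentioned above and shows that every nonzero element is invertible:

```python
p = 5

def add_inverse(a):
    """The x in Z5 with a + x = 0."""
    return next(x for x in range(p) if (a + x) % p == 0)

def mul_inverse(a):
    """The x in Z5 with a * x = 1 (a must be nonzero)."""
    return next(x for x in range(p) if (a * x) % p == 1)

print(add_inverse(3))  # -> 2, since 3 + 2 = 5, congruent to 0
print(mul_inverse(3))  # -> 2, since 3 * 2 = 6, congruent to 1
# every nonzero element of Z5 has a multiplicative inverse --
# this is where the primality of 5 matters
print([(a, mul_inverse(a)) for a in range(1, p)])
# -> [(1, 1), (2, 3), (3, 2), (4, 4)]
```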
It turns out that there is a different finite arithmetic for each prime p, and the arithmetic associated with the prime p is called Zp. Are there other finite arithmetics that are also algebraically like the rationals or reals? The surprising answer is yes; in fact, there are infinitely many additional such fields. The general result is that for each prime power p^k there is a finite field. When k = 1 we have seen how to construct these. How can one get the others?
Here is a way to construct a finite field of 4 elements. Start with the finite field of two elements, Z2, and look at the polynomial x^2 + x + 1. This polynomial has no root in Z2, which can be checked by substituting each of the two elements of the field into the polynomial and verifying that neither satisfies it. So now we proceed just the way we did to construct the complex numbers over the reals. For convenience we will no longer use bold numerals to indicate the numbers in Z2. Define λ^2 = λ + 1. Recall that -1 = 1 in Z2. The elements of our new field will have the form a + bλ where a and b are selected from Z2. Since a and b can each take on only two values, we have exactly 4 elements in our new arithmetic: 0, 1, λ, and 1 + λ. It is not difficult to check that each of these 4 numbers has an additive inverse; in fact, adding each element to itself gives 0. For multiplication in this 4-number arithmetic, to compute, for example, λ(1 + λ) we get λ + λ^2 = λ + (λ + 1) (using the definition above) = 1, since λ + λ = 0. Thus, λ and λ + 1 are multiplicative inverses of each other. It is good practice to make up the 4×4 addition and multiplication tables for this 4-element arithmetic.
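The four-element arithmetic can be modeled as pairs (a, b) over Z2 standing for a + bλ, with λ^2 rewritten as λ + 1 during multiplication. A sketch (the pair representation and function names are ours):

```python
# elements of GF(4) are pairs (a, b) over Z2, representing a + b*lambda
def gf4_add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def gf4_mul(x, y):
    a, b = x
    c, d = y
    # (a + bL)(c + dL) = ac + (ad + bc)L + bd*L^2, and L^2 = L + 1,
    # so the product is (ac + bd) + (ad + bc + bd)L, coefficients mod 2
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

ZERO, ONE, LAM, LAM1 = (0, 0), (1, 0), (0, 1), (1, 1)
print(gf4_mul(LAM, LAM1))   # -> (1, 0): lambda * (1 + lambda) = 1
print(gf4_add(LAM1, LAM1))  # -> (0, 0): every element is its own additive inverse
```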
Recall that because we are working in Z2, 1 + 1 = 0. (Strictly speaking we should be using boldface, but we have dropped it for convenience.)
A polynomial (of degree at least 2) is called irreducible over a field (see above) if it has coefficients taken from the field but cannot be factored into lower-degree polynomials with coefficients from that field. In particular, such a polynomial has no roots in the "base" field. It turns out that for every finite arithmetic Zp and every degree k ≥ 2 there is a kth-degree polynomial which is irreducible over Zp. Using this fact one can construct a finite arithmetic (finite field) with p^k elements as explained above. These fields are traditionally known as the Galois fields, honoring Évariste Galois, who was a pioneer in their study.
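For degree 2 (or 3), irreducibility over Zp can be tested simply by checking for roots, since any factorization would have to include a linear factor. The following sketch does this by substitution:

```python
def has_root(coeffs, p):
    """coeffs lists the polynomial low degree first, e.g. [1, 1, 1] is 1 + x + x^2.
    Returns True when some element of Zp is a root."""
    return any(sum(c * x**i for i, c in enumerate(coeffs)) % p == 0
               for x in range(p))

# x^2 + x + 1 has no root in Z2, so it is irreducible there and
# can be used to build the 4-element field
print(has_root([1, 1, 1], 2))   # -> False

# over Z5, x^2 + 2 has no root either, so it yields a field of 25 elements
print(has_root([2, 0, 1], 5))   # -> False
print(has_root([1, 0, 1], 2))   # -> True: x^2 + 1 = (x + 1)^2 over Z2
```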
Finite arithmetics find applications in the theory of error-correcting codes and in the theory of finite geometries. For example, one can imitate the approach used in analytical geometry to construct the Euclidean plane: use as points the ordered pairs (x, y) of real numbers and as lines the equations of first degree ax + by + c = 0, where not both a and b are 0. Thus, suppose one uses elements from the field Z5, defining points to be ordered pairs (x, y) with coordinates taken from Z5, and lines to have the form ax + by + c = 0 where a, b, and c are again taken from Z5 with a and b not both 0. Since Z5 has 5 elements, it is easy to check that one gets exactly 25 points and exactly 30 lines. The geometry one gets obeys the very nice rules that:
a. Given two points there is exactly one line which contains them.
b. Given a point P not on a line l, there is a unique line through P which is parallel to l.
One can use the concept of slope in this finite geometry to tell when lines are parallel, in much the same way that one uses it in analytical geometry. For example, in the 25-point geometry mentioned above, if one wants the line through (2, 4) parallel to the line y = 1x + 4, the answer is y = 1x + 2. Also, one can find exactly which 5 points lie on the line y = 1x + 4. These points are (0, 4), (1, 0), (2, 1), (3, 2), (4, 3). Remember that, just as in analytical geometry, there is a collection of lines with undefined slope. Since for each of the 5 possible slopes there are 5 lines with that slope, and there are 5 lines with undefined slope, there are exactly 30 lines in this plane.
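The point count, the line count, the points on y = 1x + 4, and the parallel line through (2, 4) can all be reproduced by enumeration (a sketch):

```python
p = 5
points = [(x, y) for x in range(p) for y in range(p)]
print(len(points))   # -> 25

# points on the line y = 1x + 4 over Z5
line = [(x, (x + 4) % p) for x in range(p)]
print(line)          # -> [(0, 4), (1, 0), (2, 1), (3, 2), (4, 3)]

# the line of slope 1 through (2, 4): y = x + c with 4 = 2 + c, so c = 2
c = (4 - 2) % p
print(c)             # -> 2

# line count: p lines for each of the p slopes, plus p lines of undefined slope
print(p * p + p)     # -> 30
```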
One can take this 25-point plane and construct a 31-point plane by adding one new line and putting one additional point on each line of the old plane. This geometry is a projective geometry and has 31 points and 31 lines. One can use as coordinates for this plane triples of elements from the field Z5, excluding the triple of all 0's, where two triples that are nonzero multiples of each other represent the same point.
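The 31-point count agrees with the coordinate description: the nonzero triples over Z5 fall into classes of 4 (each triple has 4 nonzero multiples), giving (5^3 - 1)/(5 - 1) = 31 points. A sketch that enumerates one representative per class:

```python
from itertools import product

p = 5
# projective points: nonzero triples over Z5, identified up to nonzero scaling.
# Pick the representative whose first nonzero coordinate is 1.
reps = set()
for v in product(range(p), repeat=3):
    if v == (0, 0, 0):
        continue
    lead = next(c for c in v if c != 0)
    inv = next(x for x in range(p) if (lead * x) % p == 1)
    reps.add(tuple((inv * c) % p for c in v))

print(len(reps))                 # -> 31, in agreement with the count above
print((p**3 - 1) // (p - 1))     # -> 31
```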