What is the concept behind the statement that for the first fundamental, f = c/λ, λ should be equal to 2L? I have read the page http://en.wikipedia.org/wiki/Fundamental_frequency but I am still confused about why 2L.
The image above (shamelessly cribbed from Google images!) shows various features of a sine wave. The wavelength is the distance between two crests. Note that there are two nodes every wavelength, so the distance between nodes is $\lambda/2$.
Suppose you're plucking a guitar string. A guitar string is fixed at both ends, and because the fixed ends can't move there must be a node at each end. The fundamental frequency is the one with the fewest nodes, so it's the one with only two nodes, one at each end of the string. This means that if the string length is $L$, the distance $L$ must be equal to $\lambda/2$ so $\lambda = 2L$.
However, we've concluded that the fundamental has a wavelength of $2L$ only because the guitar string has a node at each end, and this is not true for all instruments. For example, an organ pipe is closed at one end and open at the other. This means it has a node at the closed end but an antinode (a crest) at the open end. The fundamental of an organ pipe therefore fits only a quarter wavelength into the pipe, since $\lambda/4$ is the minimum distance between a node and an antinode. So if $L$ is the length of the organ pipe, the wavelength of the fundamental is $4L$, not $2L$.
$L$ is the length of the tube. The fundamental frequency looks like $\sin (\pi x / L)$, one upper wave of a sine (or the same with cosine if it's the other kind of the wave). However, the function $\sin (\pi x / L)$ has periodicity $\Delta x = 2L$, and the periodicity of the wave is what we call the wavelength, so $\lambda = 2L$.
The number 2 just means that there are 2 half-waves in a period – and one half-wave is exactly the minimum that can be squeezed in between the ends of the tube. That's how the fundamental frequency is defined.
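As a quick check on the two cases above, here is a minimal sketch in Python. The wave speeds in the examples are assumed values for illustration only; a real string's wave speed depends on its tension and mass per unit length.

```python
def fundamental_wavelength(length_m, fixed_both_ends=True):
    """Wavelength of the fundamental mode.

    A string fixed at both ends fits half a wavelength between its two
    end nodes, so lambda = 2L.  A pipe closed at one end fits only a
    quarter wavelength (node at the closed end, antinode at the open
    end), so lambda = 4L.
    """
    return 2.0 * length_m if fixed_both_ends else 4.0 * length_m

def fundamental_frequency(length_m, wave_speed, fixed_both_ends=True):
    # f = v / lambda
    return wave_speed / fundamental_wavelength(length_m, fixed_both_ends)

# 0.65 m guitar string with an assumed wave speed of 422.5 m/s:
print(fundamental_frequency(0.65, 422.5))        # ~325 Hz
# 1 m organ pipe closed at one end, speed of sound 343 m/s:
print(fundamental_frequency(1.0, 343.0, False))  # ~85.75 Hz
```

Note how halving the effective fraction of a wavelength in the tube (half-wave vs quarter-wave) halves the fundamental frequency for the same length and wave speed.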
A composite of all the major global temperature records via Skeptical Science.
The last decade was easily the hottest on record. We’ve known that sulfate aerosols (from volcanoes and/or Chinese coal) and “the deepest solar minimum in nearly a century” masked the rate of warming somewhat.
Even so, NASA’s Goddard Institute for Space Studies (GISS), which probably has the best of the long temperature datasets, reported the 12-month running mean global temperature reached a new record in 2010. As a NASA analysis found: “We conclude that global temperature continued to rise rapidly in the past decade” and “there has been no reduction in the global warming trend of 0.15-0.20°C/decade that began in the late 1970s.”
But other datasets appeared to show a slight slowing in the rate of warming, though even that may have been due to flawed data, as in the case of the UK’s Hadley Center.
Scientists have long known that the overwhelming majority of human-caused warming was expected to go into the oceans (see figure below). And many have suspected that deep ocean warming has also been masking surface warming.
Now a new study led by the National Center for Atmospheric Research (NCAR) finds that may indeed be the case:
The planet’s deep oceans at times may absorb enough heat to flatten the rate of global warming for periods of as long as a decade even in the midst of longer-term warming….
The study, based on computer simulations of global climate, points to ocean layers deeper than 1,000 feet (300 meters) as the main location of the “missing heat” during periods such as the past decade when global air temperatures showed little trend. The findings also suggest that several more intervals like this can be expected over the next century, even as the trend toward overall warming continues….
“This study suggests the missing energy has indeed been buried in the ocean,” [coauthor Kevin] Trenberth says. “The heat has not disappeared, and so it cannot be ignored. It must have consequences.”
These potential consequences include accelerated warming in the coming decade and melting of the West Antarctic Ice Sheet. Let’s take these two in order.
How the Owl Tracks Its Prey
Experiments with trained barn owls reveal how their acute sense of hearing enables them to catch prey in the dark
The structure of owls' ears enables them to rapidly locate the origin of sounds that are literally as quiet as a mouse. Author Masakazu Konishi describes the clever and elegant experimentation that uncovered owls' binaural hearing, accompanied by beautiful infrared photography of owl flight. This Classic article was first published in the July–August 1973 issue, and is reprinted as part of American Scientist’s centennial year celebration. The author is widely recognized for his neuroethological research on the auditory systems behind prey capture in owls and singing in songbirds.
The team used the device to capture a single cell of a “candidate division” called OP11 from an anaerobic sulfide and sulfur-rich spring in southwestern Oklahoma called Zodletone Spring. There was nothing particularly special about OP11 a priori, Blainey says, because nothing was known about it; the organism was sequenced to fill in a phylogenetic gap.
Yet the resulting sequence provided more than just As, Cs, Gs, and Ts, according to Blainey; it provided glimpses of how OP11 lives — oxidizing organic molecules for energy, just as humans do, and breaking down complex polymers, which humans cannot. “We are actually quite excited about the data we got back—not in the sense that it encodes genes for turning lead into gold, but that we got information about enough of the right genes to start outlining the nature of the OP11 cell we isolated.”

Single cell SNPs
The other major issue with MDA is amplification bias; researchers using MDA consistently observe uneven amplification across the genome. According to Blainey, genome completeness in single-cell genome projects can run the gamut from 0 to 95%; Woyke says her team typically recovers between 20% and 80%.
Such uneven reads tend to confound genome assemblers, which are designed to anticipate relatively even sequence coverage. “This is absolutely not true at all for single cell data and really fouls up the assemblies,” notes Blainey.
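To make those completeness numbers concrete, here is a toy sketch with assumed, illustrative depth profiles (real projects compute this from mapped reads, and "completeness" here is simplified to the fraction of positions covered at least once): the same sequencing effort spread evenly covers the whole genome, while MDA-style bias leaves most of it at zero depth.

```python
# Toy illustration of MDA amplification bias: 25 reads spread evenly
# vs. 25 reads piled up on a few loci.
even_depth   = [3, 2, 3, 2, 3, 2, 3, 2, 3, 2]   # even coverage
biased_depth = [20, 5, 0, 0, 0, 0, 0, 0, 0, 0]  # heavily biased coverage

def completeness(depths):
    """Fraction of positions covered by at least one read."""
    return sum(d > 0 for d in depths) / len(depths)

print(completeness(even_depth))    # 1.0 -> every base covered
print(completeness(biased_depth))  # 0.2 -> most of the genome missing
```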
Patrick Chain, team lead for metagenomics and next-gen sequencing applications at the Los Alamos National Laboratory, along with LANL colleague Cliff Han, has developed a process to normalize genome coverage by inducing artificial polyploidy—that is, multiple genome copies — by using cell division inhibitors to produce cells with 2, 4, 8, or more genomes per cell. The result, he says, is “better genome coverage and a more normalized distribution of genome coverage after sequencing.”
Meanwhile a group led by Sunney Xie, professor of chemistry and chemical biology at Harvard University, has developed an alternative approach to genome amplification. Called MALBAC, the technique employs a linear thermal cycling amplification step prior to PCR amplification, rather than MDA.
“That solves the bias problem,” Xie says — so much so, in fact, that his team has amplified the DNA from a single human cell with 93% coverage at 30x sequencing depth. The team could even call a single SNP variant based on their data. “If one base is different, we can call it.”

Metagenomics meets single cell genomics
Single-cell genomics and metagenomics represent two sides of the same coin — understanding the biology of organisms that cannot be grown in the lab. Metagenomics techniques can reveal the genetic potential of a community, but not the players. Single-cell approaches can close that gap, for instance by providing scaffolds for assembling metagenomics data or reference genomes for variation studies. As a result, for many researchers the two technologies are complementary.
According to Woyke, JGI received 12 single cell genomics proposals during a recent call for proposals, 10 of which combine metagenomics and single cell approaches and account for more than 400 single-cell genomes in total.
“It's not like you should only do one or the other, they inform each other,” says Philip Hugenholtz, director of the Australian Center for Ecogenomics in Queensland.
In one 2009 study (3), for instance, Stepanauskas and his team (including Woyke) isolated two individual “uncultured, proteorhodopsin-containing marine flavobacteria” from the Gulf of Maine, collecting 1.9 Mb and 1.5 Mb of genomic DNA representing an estimated 91% and 78% genome recovery, respectively.
The team used those genome assemblies as scaffolds to “recruit” individual reads from Venter's GOS to map where in the world's oceans those organisms reside. “In theory we could do the same with single-cell sequencing alone,” Stepanauskas concedes. “But it would be much more expensive.”

Single cell viromics
Of course, to get a really complete picture of a microbial community, researchers must study more than just its microbes; there also are the viruses that prey upon them.
It has been estimated there are 10 viruses for every bacterial species, and as Lasken points out, “we don't even know how many bacterial species there are.”
Recently Lasken, along with former JCVI colleague Shannon Williamson, published a proof-of-principle study addressing “single virus genomics” (4). The team mixed two known bacteriophages (T4 and lambda), sorted them into individual “cooled agarose beads” on a microscope slide, and used those as templates for MDA and sequencing. The team showed they could read a single lambda phage at 437x coverage, capturing all but the first five bases of the virus’ genome.
Now, the Lasken team is trying to adapt the technology to environmental samples. But it won't be easy. Viruses represent an even bigger challenge than microbes at the single-particle level, as viruses by definition cannot replicate on their own. Researchers trying to interpret viral genomes must therefore figure out not only what their genomes encode, but also which organisms they infect.
Still, says Lasken, “It is fair to say that single viral particle sequencing would solve a very difficult problem of how to get access to this enormous number of viruses in the environment.”
As with most everything in biology, such sequences will be only the beginning. After all, even well-studied organisms have their secrets. Says Hugenholtz, “There are still genes in E. coli that they haven't worked out what the function is, despite having the genome for almost 20 years.”
limit
limit, mathematical concept based on the idea of closeness, used primarily to assign values to certain functions at points where no values are defined, in such a way as to be consistent with nearby values. For example, the function (x² − 1)/(x − 1) is not defined when x is 1, because division by zero is not a valid mathematical operation. For any other value of x, the numerator can be factored and divided by (x − 1), giving x + 1. Thus, this quotient is equal to x + 1 for all values of x except 1, where it has no value. However, 2 can be assigned to the function (x² − 1)/(x − 1) not as its value when x equals 1 but as its limit when x approaches 1. See analysis: Continuity of functions.
One way of defining the limit of a function f(x) at a point x0, written as $\lim_{x \to x_0} f(x)$, is the following: if there is a continuous (unbroken) function g(x) such that g(x) = f(x) in some interval around x0, except possibly at x0 itself, then $\lim_{x \to x_0} f(x) = g(x_0)$.
The following more-basic definition of limit, independent of the concept of continuity, can also be given: $\lim_{x \to x_0} f(x) = L$ if, for any desired degree of closeness ε, one can find an interval around x0 so that all values of f(x) calculated there differ from L by an amount less than ε (i.e., there is a δ > 0 such that if |x − x0| < δ, then |f(x) − L| < ε). This last definition can be used to determine whether or not a given number is in fact a limit. The calculation of limits, especially of quotients, usually involves manipulations of the function so that it can be written in a form in which the limit is more obvious, as in the above example of (x² − 1)/(x − 1).
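The definition can be explored numerically. The sketch below (illustrative only) evaluates the example function at points ever closer to x0 = 1 from both sides and shows the values closing in on the limit L = 2:

```python
def f(x):
    return (x**2 - 1) / (x - 1)   # undefined at exactly x = 1

# Approach x0 = 1 from both sides; f(x) approaches L = 2.
for h in (0.1, 0.01, 0.001, 1e-6):
    print(f(1 + h), f(1 - h))     # pairs straddling 2, tightening as h shrinks
```

Since f(x) simplifies to x + 1 away from x = 1, the printed pairs are 2 + h and 2 − h (up to floating-point rounding), which is exactly the ε–δ picture: any tolerance ε is met once h < ε.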
Limits are the method by which the derivative, or rate of change, of a function is calculated, and they are used throughout analysis as a way of making approximations into exact quantities, as when the area inside a curved region is defined to be the limit of approximations by rectangles.
The general form of a quadratic equation is ax² + bx + c = 0, where a, b, c are constants (generally integers).
A quadratic equation with real coefficients can have none, one or two distinct real roots.
To find them, use the Quadratic Formula:

x = (−b ± √(b² − 4ac)) / (2a)

or factorise the equation.
Suppose, for example, we want to factorise x² + 6x + 8. We require two numbers that multiply together to give 8 and add together to give 6.
So the numbers are 2 and 4, giving x² + 6x + 8 = (x + 2)(x + 4).
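The quadratic formula is easily sketched in code. This minimal version handles only real roots and returns them as a tuple:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    if a == 0:
        raise ValueError("not a quadratic equation")
    disc = b * b - 4 * a * c      # the discriminant decides the root count
    if disc < 0:
        return ()                 # no real roots
    if disc == 0:
        return (-b / (2 * a),)    # one repeated root
    r = math.sqrt(disc)
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(solve_quadratic(1, 6, 8))   # (-2.0, -4.0), matching the factorisation
```

The roots −2 and −4 agree with the factorised form (x + 2)(x + 4) = 0.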
Zermelo Fraenkel set theory
The standard set theory in which most mathematics is formalized. Its axioms include the pairing axiom, the power set axiom, the axiom of infinity, the axiom of extensionality, the axiom of replacement, the separation axiom, the union axiom, and the foundation axiom. Abbreviated ZF. When the axiom of choice is assumed, this theory is abbreviated ZFC.

zero ring
The ring with only one element, its additive identity.

zeta function
The function ζ(s) given by $\zeta(s) = \sum_{n=1}^{\infty} 1/n^s$. This function gives series representations of many significant numbers, e.g., ζ(2) = π²/6 and ζ(4) = π⁴/90. Cf. Riemann Hypothesis.

ZF
See Zermelo Fraenkel set theory.

ZFC
Zermelo Fraenkel set theory with the axiom of choice.
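As a quick numerical illustration of the zeta series (partial sums only, valid for s > 1), the sums converge to the values quoted above:

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of the zeta series sum_{n>=1} 1/n^s (requires s > 1)."""
    return sum(1.0 / n**s for n in range(1, terms + 1))

print(zeta_partial(2), math.pi**2 / 6)   # both close to 1.6449...
print(zeta_partial(4), math.pi**4 / 90)  # both close to 1.0823...
```

The tail of the s = 2 series shrinks like 1/N, so 100,000 terms agree with π²/6 to about five digits; for s = 4 convergence is much faster.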
Explanation Of Orbital Elements
The quantities given in orbital elements published in the Minor Planet Circulars and Minor Planet Electronic Circulars have the following meanings (for a detailed explanation of the physical meaning of these quantities, and of any relationships between different quantities, you are referred to standard textbooks on celestial mechanics):
- Epoch: The epoch of osculation of the orbital elements.
- M: Mean anomaly at the epoch.
- T: Date of perihelion passage.
- n: Mean daily motion (in degrees/day).
- a: Semimajor axis (in AU).
- 1/a: Reciprocal semimajor axis (in 1/AU).
- q: Perihelion distance (in AU).
- e: Orbital eccentricity.
- P: Orbital period (in years).
- Peri.: The J2000.0 argument of perihelion (in degrees).
- Node: The J2000.0 longitude of the ascending node (in degrees).
- Incl.: The J2000.0 inclination (in degrees).
- P and Q vectors: The vectors P and Q are an alternate form of representing the angular elements Peri., Node and Incl. For an explanation of how to convert between the two sets of quantities you are referred to standard celestial mechanics textbooks.
- U: Uncertainty parameter.
Not all of these quantities will be given with every orbit, but enough information will always be given to describe an orbit completely.
The following two quantities are not orbital elements but are generally given with them.
- H: Absolute visual magnitude. A table converting H to a diameter is available.
- G: Slope parameter. For an explanation of the H,G magnitude system refer to Application of Photometric Models to Asteroids, Bowell et al., in Asteroids II, 524-556 (published by the University of Arizona Press, ISBN 0-8165-1123-3) and the
Posted: Tue Aug 19, 2008 12:08 pm. Post subject: Nanopositioning breaks new record
One of the main obstacles facing nanotechnology today is the lack of effective devices for building and characterizing nanoscale structures. For the field to progress, scientists need to be able to control the position of a mechanical system with sub-nanometric accuracy over a moving range of several millimetres. A French team has now taken an important step forward in this direction with the development of a 2D nano-positioning system that can do just this. What's more, the new instrument, based on an interferometric sensor and an optoelectronics board, can be used in a standard atomic force microscope or lithography set-up.
While many modern instruments can move over the millimetre range with nanoscale resolution, their repeatability and accuracy are still no better than tens of nanometres. These devices are mainly limited by mechanical defects in the translation stage. Indeed, the best positioning device made to date has an accuracy of around 100 nm. This problem could be a serious technological drawback in the near future, especially in photolithography techniques as devices become ever smaller.
Luc Chassagne of the University of Versailles Saint-Quentin and colleagues have made a long-range displacement nanopositioning device that allows a sample holder to move over a long range of millimetres with nanometre scale accuracy – something that has been difficult to achieve until now. The instrument uses an optoelectronic system board that controls the position of a moving mirror with sub-nanometric accuracy. The information on the position of the mirror is determined by comparing the phases of optical beams coming in and out of an interferometer. Finally, there are two translation stages that work in both X and Y directions.
The first stage consists of two linear motors that can move over 50 mm and have a resolution of 10 nm. The second is composed of 2D piezoelectric actuators that can move over 15 µm and has sub-nanometric resolution. This second stage compensates for the defects of the motors in real time and coupling the two ensures accuracy, explains Chassagne.
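The article does not spell out the exact optical geometry, but as a sketch: in a Michelson-type interferometer a mirror displacement d lengthens the round-trip optical path by 2d, shifting the fringe phase by Δφ = 4πd/λ. Assuming a HeNe laser wavelength of 633 nm, a phase resolution of a thousandth of a fringe already corresponds to sub-nanometre sensitivity:

```python
import math

WAVELENGTH = 633e-9  # HeNe laser wavelength in metres (an assumed value)

def displacement_from_phase(delta_phi_rad, wavelength=WAVELENGTH):
    """Mirror displacement from a measured phase shift.

    Moving the mirror by d lengthens the round-trip path by 2d,
    so the fringe phase shifts by delta_phi = 4*pi*d / lambda.
    """
    return delta_phi_rad * wavelength / (4.0 * math.pi)

# A phase resolution of 2*pi/1000 (a thousandth of a fringe):
print(displacement_from_phase(2 * math.pi / 1000))  # ~3.2e-10 m, i.e. ~0.32 nm
```

One full fringe (Δφ = 2π) corresponds to λ/2 of mirror travel, which is why interpolating within a fringe is what buys the sub-nanometre resolution.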
The system will be useful for all sorts of nanofabrication processes, be they top-down or bottom-up, and any other applications related to nanotechnology, say the researchers.
Amphibians are integral links in the flow of energy in natural ecosystems. They are predators of many animals, some of which are pests, and they are prey for other animals in the food web. Amphibians are declining worldwide. The primary cause is habitat loss, but commercialization for food and skins, disease, introduced species, environmental pollution, and global climate change also cause population decline and loss. Park records list the presence of 49 amphibians, including toads, frogs and salamanders. A survey for herps (reptiles and amphibians) was recently completed for the Jamestown environs. That survey found eleven species of frogs, seven species of salamanders, seven species of turtles, three species of lizards and eight species of snakes. A parkwide survey of herps is presently being conducted.
A telescope to peer deep into the heart of our planet
Designed to track North America’s geological evolution, EarthScope is the largest science project on the planet. This earth-sciences observatory records data over 3.8 million square miles. Since 2003, its more than 4,000 instruments have amassed 67 terabytes of data—that’s equivalent to more than a quarter of the data in the Library of Congress—and add another terabyte every six to eight weeks.
Researchers are using EarthScope, which consists of many kinds of experiments, to examine all facets of North America’s geological composition. Across the continental U.S. and Puerto Rico, 1,100 permanent GPS units track deformations in the land’s surface caused by tectonic shifts below. Seismic sensors next to the active San Andreas Fault in California record its tiniest slips, while rock samples pulled from a drill site that extends two miles into the fault reveal the grinding and strain on the rocks that occur when the two sides of the fault slide past each other during an earthquake. And over the course of 10 years, small crews have hauled a moveable array of 400 seismographs across the country using backhoes and sweat. By the time the stations reach the East Coast next year, they will have collected data from almost 2,000 locations.
What's In It For You
Collectively, EarthScope’s measurements could help explain the forces behind geological events such as earthquakes and volcanic eruptions, leading to better detection. So far, data from the project has shown that rocks in the San Andreas Fault are weaker than those outside it and that the plume of magma under Yellowstone’s supervolcano is even bigger than previously suspected.
They give birth astride a grave, the light gleams an instant, then it's night once more.
Samuel Beckett, Waiting for Godot

The supernova that was spotted in the Large Magellanic Cloud in 1987 reached 3rd magnitude and was the brightest to grace our skies in 383 years.
Courtesy David Malin, Ray Sharples, and the Anglo-Australian Observatory.
The last person to see and chronicle a supernova outburst in our galaxy was Johannes Kepler. That was in 1604, when the star now named after him rivaled Venus in brightness. By some measures we're overdue for another brilliant supernova, yet the next star to explode in our galaxy is more likely to be a visual pipsqueak compared to Kepler's Star. Yet even a dim supernova is unlikely to be overlooked; its birth will be trumpeted by physicists' subterranean particle detectors rather than by astronomers' telescopes.
What's seen of Supernova 1987A today are the faded star and surrounding rings of gas that it has lit up. This three-color Hubble Space Telescope composite is from several images taken in 1994, 1996, and 1997.
Courtesy the Hubble Heritage Team (NASA/STScI/AURA).
Supernovae are stars that brighten by a dozen magnitudes or so and at their peak are some 10,000 times more luminous than ordinary novae. (The physical processes operating during these two explosions are completely different: supernovae blow themselves to smithereens; ordinary novae don't.) The enormous luminosity of supernovae at their brightest makes it possible to readily spot them in distant galaxies; for weeks they can match the light output from all the other stars in a hefty system like our own Milky Way. Indeed, the identification of supernovae as a unique phenomenon had to wait until the 1920s, when galaxies themselves were recognized as independent star systems.
"Behold, directly overhead, a certain strange star was suddenly seen, flashing its light with a radiant gleam.... Astonished and stupefied, I stood still.... When I had satisfied myself that no star of that kind had ever shone forth before...I began to doubt the faith of my own eyes.... Having confirmed that my vision was not deceiving me...and marveling that the sky had brought forth a certain new phenomenon to be compared with the other stars, I immediately got ready my instrument."
Tycho Brahe reflecting on the supernova of 1572
In his book De nova stella Tycho included this sketch of Cassiopeia with the supernova of 1572 at the top, near a star now called Kappa.
S&T photo by Craig Michael Utter.
No supernova outburst has been studied in our Milky Way for nearly 400 years. Therefore, everything we know about the workings of these stars has come from observing them in other galaxies. At such immense distances even these celestial powerhouses are dim, and their secrets have to be teased from the paltry number of photons that strike our detectors. Furthermore, catching these critters in the act has been a matter of chance, even after systematic searches began in the 1930s. The idea is simple: look at enough galaxies and, sooner or later, you'll find a supernova.
A recent theoretical model (curves) tracks the light variations of SN 1969L. Note that this 'plateau' Type II supernova's ultraviolet (U) light rises and fades faster than its blue (B) and yellow (V) light. Such a star halfway to the center of the galaxy could shine as brightly as Venus but would probably be dimmed by interstellar dust. The theoretical curves are for a star having 15 times the Sun's mass and 240 times the Sun's diameter.
Courtesy Sergei Blinnikov.
Even so, these distant supernovae are detected days or weeks after their outbursts begin. What hasn't been observed are the earliest stages of a supernova's development. Particularly interesting, for example, will be observations to assess the chemical composition and physical state of the fastest-moving pieces of the star's blown-off atmosphere. Such information should provide insight about the end point in the evolution of a very massive star as well as clues about how heavy elements are injected into the interstellar medium.
Wonderfully detailed mathematical models of supernova explosions have been built on theorists' computers (see the diagram above, right). They tell us what to expect, but wouldn't it be nice to have a sanity check? This may now be possible thanks to a group of neutrino (symbol ν, the lowercase Greek letter "nu") observatories that can warn us when a star blows up in our cosmic backyard (the Milky Way and its nearby attendants in the Local Group), even before its light begins to turn on! Although this new tool is wielded by high-energy physicists, its impact will likely rest with amateur astronomers and other small-telescope users.
Dimples on Golf Balls
A boundary layer is the layer of fluid (gas or liquid) next to a moving surface (wall) where the fluid transitions from zero velocity at the wall to the free stream velocity away from the wall. By inducing a turbulent boundary layer at an otherwise laminar boundary layer speed (determined by the Reynolds number), the detachment (separation) of the boundary layer from the wall is delayed, thus reducing drag.
Drag force is considered an unavoidable nuisance, always opposing the direction of travel through a fluid. For example small drag decreases in passenger airplanes translate into substantial fuel cost savings, but for golfers...priceless.
If the laminar boundary layer is unlikely to separate, e.g. low speed flow over a wing at low incidence, then inducing a turbulent boundary layer will not reduce drag and will actually increase it.
Flow around a sphere of golf-ball size moving through the air at golf-ball speed will always separate, with or without spin. The combination of the size and speed of a golf ball doesn't allow a turbulent boundary layer to develop naturally. Thus without dimples the laminar boundary layer will separate earlier, inducing more drag than the dimpled, and therefore turbulent, version.
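A rough Reynolds number estimate makes this concrete. The values below are nominal assumptions (air density 1.2 kg/m³, dynamic viscosity 1.8e-5 Pa·s, ball diameter 42.7 mm, drive speed 70 m/s), and the smooth-sphere transition figure of roughly 5e5 is a commonly quoted ballpark:

```python
def reynolds(density, speed, length, viscosity):
    """Re = rho * v * L / mu for flow past a body of characteristic size L."""
    return density * speed * length / viscosity

Re = reynolds(1.2, 70.0, 0.0427, 1.8e-5)
print(f"Re = {Re:.3g}")  # ~2e5, below the ~5e5 natural transition
                         # for a smooth sphere, hence the dimples
```

Because even a hard-driven golf ball sits well below the smooth-sphere transition Reynolds number, only surface roughness (the dimples) can trip the boundary layer into turbulence and delay separation.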
Documentation and Resources
The Clojure website contains many articles, most of which are listed on their Documentation page. There’s also has a great cheatsheet there (which includes a link at the bottom to enhanced cheatsheets with tooltips).
The clojure-doc.org site contains a number of tutorials and other guides.
The Confluence wiki is an enterprise wiki + communication medium for core developers. It contains a number of useful and up-to-date pages.
A well-known (though now somewhat dated) tutorial is Clojure - Functional Programming for the JVM.
The Learn Clojure site contains a number of pointers to various docs, books, and other community resources.
I also found this curated reading list of useful articles.
Clojure (like Python) supports docstrings. These are strings embedded in the source code of functions for the purpose of documenting them. You can interactively access the docstring (at the repl) for any function like so: (doc some-function), e.g. (doc map).
To see a listing of what's available in a given library, use: (dir library-name).
You can also search the docstrings using (find-doc #"search string"). Note that the string inside #"" is a regex.
You can see the source code for any Clojure function right at the repl, for example: (source max).
When searching for documentation on a given library, consider reading the source. Look in ~/.m2/repository for the jars. To open one up, you might do this:
mkdir ~/temp/foo
cp ~/.m2/repository/path/to/bar-1.0.0.jar ~/temp/foo
cd ~/temp/foo
jar xvf bar-1.0.0.jar
To generate docs from library source code, see Generating Docs from Libraries.
For example code, you can always read the source code files of Clojure itself (download and look in clojure-i.j.k/src/clj/clojure). You might also look at contrib and 3rd-party libraries (see available libraries).
- The Clojure section at http://pleac.sourceforge.net/
It will sometimes be useful to look up documentation on various Java standard classes. You could download and install the full Java API docs onto your own system (and will probably want to at some point), but for now you can access those docs at: http://docs.oracle.com/javase/7/docs/api/.
4clojure is a site that has you work through successively more difficult programming problems, solving them with Clojure.
Here’s a collection of talks given by Rich Hickey (creator of Clojure). A number of other Clojure talks are available at InfoQ as well.
There’s also a blip.tv page containing various talks.
A discussion between Rich Hickey and Brian Beckman.
My current favorite Clojure book is Clojure Programming. There are a number of others though (in no particular order):
Binary stars play a very significant role in astrophysics: the only way you can directly obtain the mass and size of a star is if you can observe it orbiting with another star - then Kepler's laws of orbital motion and Newton's laws of gravitation can be applied. When they eclipse, occult or transit one another, we can often derive the ratio of luminosities and radii of the two stars, the ratio of the radii to the orbital radius, and information about the orbital inclination and eccentricity - just from the light curve that shows the eclipses, such as the one below. Add radial velocity data from spectroscopy and you pin down the absolute values of masses, and stellar and orbital radii.
Nature provides us with a rich zoo of eclipsing binaries, which are not easy to classify. You can classify them empirically by the shape of their light curves - though there are no neat boxes in which to put the different light curves. You can classify them astrophysically by the relation of the two stars to each other: all the way from two completely distinct spherical stars orbiting each other around their common centre of gravity, to a pair so close they touch and even share an atmosphere. For more on classification see the official GCVS classification guide.
The "Phase plot" above shows the light curve of RR Centauri. (The vertical axis is brightness, not magnitude, relative to its maximum brightness. The horizontal axis is its phase over a full orbit, with the primary minimum taken as phase 0.) There is much to be learned from phase plots like this. Particularly in the Southern Hemisphere, catalogue data on eclipsing binaries can be decades old, with little or no new data since discovery - so new data can give rich pickings.
It is possible to analyse these light curves using software, to determine the shapes and relative distances of the two stars. Here's a model of RR Cen derived from the above light curve (images credit: R.E. Wilson). The open circle is the Sun to the same scale. Note these two stars are actually in contact with each other. It's not hard to relate the nine positions shown to particular phases in the above phase plot, and explain things like why only one minimum has a flat bottom.
The basic data an observer needs to obtain on an eclipsing binary is a time series of time and magnitude data. Time needs to be expressed as Heliocentric Julian Date to avoid light-time effects from the orbiting Earth.
An important goal of eclipsing binary studies is to find times of minimum - the midpoint of the eclipses. The phase plot of RR Cen above shows one minimum on the left (partial eclipse of the larger star) and on the right a total eclipse of the smaller star. A widely used program for finding times of minimum, which also produces phase plots from your files of HJD - mag observational data, is Tonny Vanmunster's PERANSO - available from www.peranso.com. Others can be found on the AAVSO website www.aavso.org. A typical research project on an eclipser, such as VSS's Equatorial Eclipsing Binaries Project, will gather weeks or months of time series data on a star, then analyse it in PERANSO to find not only times of minima but the star's light elements. Light elements consist of the HJD of one observed time of minimum or epoch, usually written E0, and the orbital period P in days. Then any future eclipse HJDmin can be predicted using the equation HJDmin = E0 + P x E, where E is the number of epochs (times of minimum) elapsed since E0. For RR Cen this is 2452500.523 + 0.6056922 x E (from the Krakow database http://www.as.up.krakow.pl/o-c/ - see below).
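Turning light elements into predictions takes only a couple of lines of code. The Python sketch below uses the RR Cen elements quoted above; the function name and the "observed" timing are my own illustrations, not real measurements:

```python
def predicted_minimum(e0, period, epoch):
    """HJD of an eclipse minimum from linear light elements: HJDmin = E0 + P x E."""
    return e0 + period * epoch

# Light elements for RR Cen quoted above (Krakow database)
E0, P = 2452500.523, 0.6056922

hjd = predicted_minimum(E0, P, 100)   # 100 orbital cycles after the reference epoch
print(hjd)                            # ~2452561.09222

observed = 2452561.0950               # hypothetical observed timing of that minimum
print(observed - hjd)                 # (O - C) in days, as plotted in an O-C diagram
```

A positive (O - C) means the eclipse arrived later than the linear ephemeris predicts, exactly the behaviour seen in the RR Cen diagram described below.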
Frequently one finds that over years or decades the predicted time of minimum diverges from the observed time. This departure is plotted in an (O-C) diagram, (for Observed minus Calculated). Here it is for RR Cen, from the Krakow database.
This covers over 60 years of data, and shows observed epochs getting steadily later than predicted. From the shape of these diagrams much can be learned about mass transfer from one star to the other, or mass loss from the system.
The Eclipsing Binary work of VSS is aiming to fill in a lot of gaps in our knowledge of southern and equatorial eclipsing binaries. While observers with any type of equipment can contribute to Eclipsing Binary work, CCD-equipped telescopes are particularly valuable, as you can obtain a precise and lengthy time-series of data over a night that might capture in detail an entire eclipse, or indeed the full orbit of the binary. DSLR cameras are rapidly becoming the instruments of choice for bright eclipsing stars. VSS eclipsing binary work needs observers with any of the above equipment setups, and it also needs armchair analysts. If eclipsers interest you, please contact me and I'll give you a heap of work to do! At present there are two projects: QZ Carinae (leader: Stan Walker) and Equatorial Eclipsers (leader: myself).
The World Wide Web has many useful resources on eclipsing binaries. A particularly important one is the Krakow (Mount Suhora Observatory) Database of linear elements for eclipsing binaries, a statistics-of-minima database to which you can add your results, and an atlas of O-C diagrams. See http://www.as.up.krakow.pl/ephem/. Another is the Czech Astronomical Society's "O-C Gateway" with tables of times-of-minimum data for eclipsing binaries from which O-C diagrams can be displayed. Again, you can add your own data. See http://var.astro.cz/ocgate/.
VSS work on Eclipsing Binaries is very much aided by the generous assistance of Dr Bob Nelson of Canada. Bob is an internationally recognised expert on eclipsers, which he has studied over a long career. His website, http://members.shaw.ca/bob.nelson/, has many useful programs for downloading, such as Minima (finds times of minimum using 6 methods), Period Search (finds periods based on portions of light curves), EB_Min (tells you what stars will have observable minima from your location on a given night) and WDwint (a Windows front-end to the Wilson-Devinney star modelling package, which is included).
The WD program with Bob's front-end is a standard tool used by eclipsing binary researchers. Once you have a phase plot for an eclipser you can use WD to find out a whole range of geometrical and astrophysical information about the two stars, as described at the beginning of this article. Warning - it's a test-and-compare iterative process which can take a long time to get good results. Also, radial-velocity data from spectroscopy is often needed. Then you can also use BinaryMaker 3 (www.binarymaker.com) to derive pictorial models of the rotating star system, like the one for RR Cen above.
Bob has also written a must-read three-part article on observing eclipsers, for the VSS Newsletters (2009 May, August and November issues).
Plainly there is a very great deal the amateur astronomer can do with eclipsing binaries. Join a VSS eclipsing binary project and contribute to our knowledge of this important area of astrophysics. | <urn:uuid:703be652-8338-4254-ad0f-aa663e708508> | 4.03125 | 1,585 | Knowledge Article | Science & Tech. | 50.948718 |
Scientific name: Jordanita globulariae
June - July. Wiltshire, Gloucestershire, Hampshire, Sussex and Kent. Small shiny green moth found on chalk downland and often on flowers like knapweed. Similar to Forester and Cistus Forester.
One of three similar species, this iridescent green moth can be difficult to distinguish from related species. Generally larger than the Cistus Forester (which is usually found near Common Rock-rose). Can be distinguished from the Forester on antennal characters, those of the male Scarce Forester being more pointed, whereas those of the Forester are rounded and broader.
The male flies in sunshine, although the female is generally more lethargic, and visits flowers such as knapweeds and Salad Burnet. In duller weather the moth sits around on flowers and other vegetation. The male occasionally flies at night.
Size and Family
- Family – Burnets and Foresters (Zygaenids)
- Small Sized
- UK BAP: Not listed
- Scarce (Nationally Scarce A)
Particular Caterpillar Food Plants
Common Knapweed and Greater Knapweed, at least initially mining the leaves.
- Countries – England
- Restricted to two areas of chalk downland. One centred on Wiltshire, with populations in Hampshire and formerly in Gloucestershire. The other is in Sussex, with an outlying population near Dover, Kent.
Found on permanent chalk grassland, usually in areas of longer turf. | <urn:uuid:223eec67-0bac-4531-bd9e-667e093c41b9> | 3.1875 | 325 | Knowledge Article | Science & Tech. | 35.194696 |
8.2.1 REPL commands
- M-x slime-repl-return
Evaluate the current input in Lisp if it is complete. If incomplete,
open a new line and indent. If a prefix argument is given then the
input is evaluated without checking for completeness.
- M-x slime-repl-closing-return
Close any unmatched parenthesis and then evaluate the current input in
Lisp. Also bound to M-RET.
- M-x slime-indent-and-complete-symbol
Indent the current line and perform symbol completion.
- M-x slime-repl-newline-and-indent
Open and indent a new line.
- M-x slime-repl-bol
Go to the beginning of the line, but stop at the REPL prompt.
- C-c C-c
- M-x slime-interrupt
Interrupt the Lisp process with SIGINT.
- C-c M-o
- M-x slime-repl-clear-buffer
Clear the entire buffer, leaving only a prompt.
- C-c C-o
- M-x slime-repl-clear-output
Remove the output and result of the previous expression from the buffer.
Comprehensive Description
Biology
Found usually in open rocky areas, but retreats to crevices or holes, or hides among the spines of sea urchins, when threatened. Territorial. Sits on exposed rock surfaces near a hole or crevice. Feeds mainly on small crustaceans. Oviparous (Ref. 56079). Males guard the eggs, which are found attached to the walls of the brood chamber (Ref. 56079). The female deposits oblong eggs in empty shells; the male guards them. Often found with Lythrypnus zebra.
Definition of the core floating point types and basic manipulation of them.
The Double type. This is expected to be an identical declaration to the one found in GHC.Prim. We avoid simply using GHC's type because we need to define our own class instances.
The Float type.
Coercion to floating point types.
Convert to a floating point type. Conversions from integers and real types are provided, as well as conversions between floating point types. Conversions between floating point types preserve infinities, negative zeros and NaNs. | <urn:uuid:fe0ce2bd-df64-4628-b93f-bf69d58f7841> | 3.078125 | 112 | Documentation | Software Dev. | 52.142431 |
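These conversion guarantees are really IEEE-754 behaviour, so they can be illustrated in any language with access to the bit-level formats. The Python sketch below (names are mine) round-trips a double-precision value through single precision and checks that infinity, negative zero and NaN all survive:

```python
import math
import struct

def through_float32(x: float) -> float:
    """Round-trip a Python float (IEEE-754 double) through single precision,
    mimicking a Double -> Float -> Double conversion."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

print(math.isinf(through_float32(float("inf"))))   # True: infinity is preserved
print(math.copysign(1.0, through_float32(-0.0)))   # -1.0: the sign of -0.0 survives
print(math.isnan(through_float32(float("nan"))))   # True: NaN is preserved
```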
OSO-6 operated from August 1969 until January 1972. The orbital period was
~ 95 minutes, with the orbital day lasting ~ 60 minutes of each orbit. The
spin rate was 0.5 rps.
The hard X-ray detector (27-189 keV) was a 5.1 cm2 NaI(Tl) scintillator,
collimated to 17 deg x 23 deg FWHM. The system had 4 energy channels (separated
27-49-75-118-189 keV). The detector spun with the spacecraft on a plane
containing the Sun direction within +/- 3.5 degrees. Data were read with
alternate 70 ms and 30 ms integrations for 5 intervals every 320 ms.
Also on board was a NRL experiment meant primarily to monitor solar flares.
This X-ray detector operated from August 1969 until January 1972. The NaI(Tl)
scintillator had a frontal area of 1.3 cm2 and was 2.54 cm thick. The detector
operated during daylight periods only (~ 70% of each 99.8 minute orbit). It
had 6 energy channels covering 23-82 keV, and an integral channel for >82 keV
(out to about 500 keV). Spectra were accumulated for 2.56 s.
Intended primarily to study bursts and flares from the Sun, the instrument
was also used to search for hard X-ray coincidences with known gamma-ray
bursts (primarily those seen by the Vela satellites). Three such
coincidences were observed. The NRL instrument, when combined with data
from the OGO-5 satellite, confirmed 5 hard X-ray bursts (they detected 12 | <urn:uuid:72f9379a-d1e6-41d4-933b-1c4bef03a707> | 2.90625 | 355 | Knowledge Article | Science & Tech. | 80.62204 |
I can't get the right value out for ω, the angular velocity, which is 7.3×10⁻⁵ rad/sec.
An aircraft travels in a straight line from west to east along the equator at a
constant speed of 100 m s⁻¹. Taking the Earth to be a sphere of radius
6400 km, evaluate the angular velocity.
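A quick numerical check of the two candidate answers (my own sketch): v/R gives the plane's angular velocity over the ground, while 2π radians per day — which matches the quoted 7.3×10⁻⁵ rad/s — is the Earth's own rotation rate:

```python
import math

v = 100.0          # aircraft speed over the ground, m/s
R = 6.4e6          # Earth's radius, m

omega_ground = v / R                 # plane's angular velocity relative to the ground
omega_earth = 2 * math.pi / 86400    # Earth's rotation rate: one turn per day

print(omega_ground)   # 1.5625e-05 rad/s
print(omega_earth)    # ~7.27e-05 rad/s, i.e. the quoted 7.3e-5
```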
LOUISE Emmerson has spent the past five years in a Hobart laboratory preoccupied with understanding the lives of Antarctic Adelie penguins. But it was only a few days ago that she finally saw the real thing waddle into view across the ice. It was love.
Perched quietly amid the Antarctic equivalent of chattering suburbia, a crowded rookery just a fast boat ride from Australia’s Casey research station, she watches the parent penguins return from the sea with bellies full of food for their rapidly growing, voraciously consuming young. “Great big balls of fluff,” whispers Dr Emmerson. “Don’t you just want to cuddle them?”
Her task here is rather less appealing, but no less doting. She has come to collect their crap. With Australian Antarctic Division project leader Simon Jarman and colleague Mike Double, she will spend the next two weeks living in a remote field hut, playing Pictionary by night, and venturing out into the frigid icescape each day to scrape penguin poo from the snow and rocks, photograph it, catalogue it, and deposit it in tubes to be shipped back to a Hobart laboratory. The DNA within the samples will be used to gain new insight into the Adelie’s diet, foraging habits and breeding patterns.
It really is an interesting article. Be sure to check out the “New Ice Age” blog and multimedia sites linked at the end of the article. | <urn:uuid:4c92f4ad-5db1-41fb-88b5-e460e41a0dd3> | 2.734375 | 315 | Personal Blog | Science & Tech. | 51.597917 |
Previously the enthalpies of formation and combustion have been discussed. However, there are numerous others. In the table below, some of these have been outlined, with a definition given.
|Atomisation, ΔHθa|The standard enthalpy change when 1 mol of gaseous atoms is formed. Depending on the standard state of the element, this could also be called the enthalpy of sublimation if going from solid to gas.|
|Bond dissociation, ΔHθdiss|The standard enthalpy change when 1 mol of a gaseous covalently bonded molecule is broken to form 2 radicals: the enthalpy change for X–Y (g) → X· (g) + Y· (g).|
|Electron affinity, ΔHθea|The standard molar enthalpy change when an electron is added to an atom in the gas phase. For example: Cl (g) + e⁻ → Cl⁻ (g)|
|Lattice enthalpy, ΔHθL|The standard enthalpy change accompanying dissociation: the separation of 1 mol of solid ionic lattice into gaseous ions; or formation: the creation of 1 mol of solid from gaseous ions. Note that formation enthalpies are always negative and dissociation enthalpies always positive.|
|Hydration, ΔHθhyd|The standard molar enthalpy change for the process X⁺/⁻ (g) → X⁺/⁻ (aq).|
|Solution, ΔHθsol|The standard enthalpy change when 1 mol of ionic solid dissolves in enough water that none of the dissolved ions interact with each other.|
Born-Haber cycles expand Hess's Law by breaking up the lattice formation enthalpy into numerous other steps. Given the data for the various enthalpies, it shouldn't be too difficult to deduce the various reactions. Below is an example.
The enthalpy we are finding has been highlighted, it is the lattice enthalpy of formation of NaCl. It has the reaction:
Na⁺ (g) + Cl⁻ (g) → NaCl (s)
Once the cycle has been constructed, calculating the enthalpy is a simple case of following the alternative route for the reaction. Remember that if you go in the opposite direction to an arrow, the sign of the enthalpy is reversed (i.e. +20 becomes −20 or vice versa).
364 - (½ x 242) - 494 - 109 - 411
= -771 kJ mol-1
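The same route-following can be written out as a short calculation (a sketch only; the values are those used in the cycle and the variable names are mine):

```python
# Following the alternative route around the Born-Haber cycle for NaCl
# (values in kJ/mol, signs as marked on the cycle arrows).
dH_formation  = -411       # Na(s) + 1/2 Cl2(g) -> NaCl(s)
dH_atomise_Na = +109       # Na(s) -> Na(g)
dH_ionise_Na  = +494       # Na(g) -> Na+(g) + e-
dH_atomise_Cl = 242 / 2    # 1/2 Cl2(g) -> Cl(g): half the Cl-Cl bond enthalpy
dH_ea_Cl      = -364       # Cl(g) + e- -> Cl-(g)

# Reversing every step except formation leaves the lattice formation
# enthalpy for Na+(g) + Cl-(g) -> NaCl(s):
lattice_formation = (dH_formation - dH_atomise_Na - dH_ionise_Na
                     - dH_atomise_Cl - dH_ea_Cl)
print(lattice_formation)   # -771.0
```

The negative result is consistent with the rule noted in the table above: lattice formation enthalpies are always negative.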
It is possible to calculate the enthalpy of any reaction using the same principle of making several steps of different enthalpies, for example, the enthalpy of solution. Be careful of state symbols and the number of moles involved though!
Mean Bond Enthalpy
There is a certain amount of energy required for breaking bonds and given out in forming them. Therefore it should be a simple case of using the amount of energy in each bond to calculate the enthalpy change of a reaction. However, this is not possible because the bond enthalpy for the same bond will vary in different compounds and conditions.
However, in the absence of sufficient enthalpies, an average of the energy in each bond can be used as an estimate. This is called the mean bond enthalpy. The example below shows how they can be used. | <urn:uuid:db1a1401-888d-478d-b988-46dd6839dae0> | 3.71875 | 718 | Knowledge Article | Science & Tech. | 49.741935 |
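As an illustration of the kind of estimate involved, here is the combustion of methane (CH4 + 2 O2 → CO2 + 2 H2O) worked through with typical textbook mean bond enthalpies; the exact numbers vary between data books, so treat them as averages, not exact values:

```python
# Mean bond enthalpies in kJ/mol (typical textbook averages)
bond = {"C-H": 413, "O=O": 498, "C=O": 805, "O-H": 464}

# CH4 + 2 O2 -> CO2 + 2 H2O
broken = 4 * bond["C-H"] + 2 * bond["O=O"]   # bonds broken: energy in (+)
formed = 2 * bond["C=O"] + 4 * bond["O-H"]   # bonds formed: energy out (-)

dH = broken - formed
print(dH)   # -818 kJ/mol
```

The estimate of −818 kJ/mol is close to, but not equal to, the measured enthalpy of combustion of methane (about −890 kJ/mol), showing why mean bond enthalpies only ever give an approximation.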
To follow up on this quote, this whole story is fascinating.
New data collected by the Hubble Space Telescope proves, NASA says, that in 4 billion years the Milky Way and Andromeda will collide or pass each other by so closely that the gravitational force each exerts on the other will cause them to slow down to the point of merging. The merger will be completed 6 billion years from now.
“The clear finding is, we are going to merge with Andromeda,” van der Marel said. “In the past, it was just a possibility, but now it is a known fact that this will happen.”
There is a 9% chance that M-33, a satellite galaxy of Andromeda, will hit the Milky Way first in what van der Marel called a “one-two punch,” causing it to become a satellite of the new galaxy that is formed.
When Andromeda gets here, the sun will likely be pushed out much farther into the universe. By that time though, Earth will have become too hot to be inhabited by humans anyway.
Our sun will not be directly hit when the initial collision happens in 4 billion years. But in 6 billion years, when the merger is complete, our sun will die. | <urn:uuid:bcd031b0-0867-4849-9e9d-749505d39825> | 3.296875 | 256 | Personal Blog | Science & Tech. | 59.733462 |
Where did you think I went?
The sun emitted a mid-level solar flare on Thursday from a sunspot known as AR 1520, according to a NASA report. The solar flare began at 1:13 a.m. EDT and peaked at 1:58 a.m. The space agency notes that solar flares are “gigantic bursts of radiation” that are harmless to humans because they can’t pass through Earth’s atmosphere.
However, strong solar flares, like an X-class flare, can disrupt the atmosphere and cause radio blackouts. An X-class solar flare erupted from the sun on July 12, according to a Space.com report. The X-class solar flare erupted from the same region as last Thursday’s mid-level solar flare.
U.S. News & World Report notes that the X-class solar flare was strong enough to black out NOAA radio. It was the sixth X-class flare of 2012. Citing NASA scientist Phillip Chamberlin, the article says that strong solar flares will continue through the beginning of 2014 as the sun enters the most active period of its 11-year solar cycle.
Radio blackouts can also occur with mid-level solar flares. | <urn:uuid:a6a5ef95-9875-4164-b982-742a41771192> | 3.3125 | 252 | Personal Blog | Science & Tech. | 75.13877 |
A man comes face to face with an alien for the first time in this illustration from H G Wells's The War of the Worlds. Photograph: Bettmann/Corbis
The more we consider the possible consequences of contact with an alien intelligence, the better prepared we will be.

"No one would have believed in the last years of the nineteenth century that this world was being watched keenly and closely by intelligences greater than man's"
So starts H G Wells's 1898 novel The War of the Worlds, which continues with a military invasion by Martians. While contact with aliens may be a common theme in science fiction, could it also be a serious topic in science?
Indeed it could. Ever since 1960 with the first serious search for radio transmissions from other civilisations (known as SETI, the Search for Extraterrestrial Intelligence), scientists have been thinking about what would happen if evidence for ET were found. Examples of their efforts include the 2010 Royal Society conference on "The detection of extra-terrestrial life and the consequences for science and society".
Last week the Guardian reported on a recent paper led by Seth Baum of Pennsylvania State University on this topic, categorising some of the possible consequences – ranging from beneficial through neutral to harmful.
So what's the point? We have never seen these Little Green Men, so why expend effort thinking about what might happen?
Scientists have already had to face this problem in real life. An early occasion was in 1967 when astronomers at Cambridge University using a new radio telescope detected regular blips coming from deep space. They were puzzled because no known source should do that. One possible explanation was ET, and the director of the group, Nobel prizewinner Sir Martin Ryle, suggested that they should keep quiet about their discovery and dismantle the telescope, because if it was ET then sooner or later someone on Earth would start signalling back, alerting a possibly evil-minded alien intelligence to our existence.
Fortunately, they soon concluded that it was a natural source – they had in fact discovered pulsars. But there is a continuing controversy in the SETI community about whether it is wise to try and contact ET by sending out messages. For example, the main SETI searchers have agreed a protocol for how to spread the news if and when they discover ET, but have not yet been able to agree a common position on the wisdom of sending out messages.
The main problem is the nature of ETs. What are they like? To be able to influence us, they must be more advanced than us, so will they be wise and benevolent, since otherwise they would have destroyed themselves by now? Or perhaps as a result of a Hobbesian all-against-all struggle the only ET now out there has become dominant by destroying any potential competitors. But even if they were evil would they be able to get at us given the vast distances between the stars?
And it goes wider. Is it wise even to use our radio telescopes to try and detect ET? In 1962 the famous astronomer Fred Hoyle and John Elliot dramatised the risk in the TV series "A for Andromeda", starring Julie Christie. A message from ET was detected which turned out to contain instructions for building a computer. After this was assembled it set about destroying the human race, before being thwarted by the scientist hero.
Considering dangers like that, and applying the precautionary principle, should we shut down all our SETI searches?
Can we tell anything about ET that would guide us, first of all in deciding whether to search at all, then in matching our searches to its nature, and finally in whether to send out signals? My own position, as I argued in a paper presented at the Royal Society Kavli Centre last year, is that our total ignorance about the nature of ET means that we cannot say whether listening or talking is good or bad.
For example, sending a message may cause an evil ET to come and destroy us. Alternatively it may preserve us from destruction by an ET that has become aware of us from seeing our cities and is worried by the aggressive nature of new civilisations, but would be reassured by the peaceful content of a message.
We cannot tell which of the many possible benefits and dangers are more likely, and so we SETI folk can go about our business without reproach. But the more thinking, such as the Baum paper, we do about possible outcomes, the better prepared we may be for the actual outcome after the day of discovery, if and when it ever comes.
It may be good to do this, but is it worth spending real money on? Well, in fact very little money is spent on SETI. There are probably about the equivalent of 20 full-time people worldwide working on SETI, most funded from private sources, together with a little money from individual universities, supplemented with a very small amount from governments. And like all high-tech work it has spinoffs, most noticeably the Berkeley BOINC distributed computing system, which started as SETI@home but is now used widely from biotechnology to meteorology. SETI is used as part of university teaching in the sciences, and it provokes thinking in allied sciences from sociology to linguistics.
Regardless of the chances of success, SETI is of real value. But here in the UK, practically no private or government money goes into it. With around 0.5% of the government funds that now go into astronomy (the 1-in-200 effort, I call it) the UK could make a big splash in the SETI world.

Alan Penny is an honorary reader in the School of Physics and Astronomy at the University of St Andrews, using the Lofar telescope to search for low-frequency radio signals from ET
Glaucus atlanticus, also known as Blue Dragon or Sea Swallow, is a relative of the snails and slugs.

The beauty and diversity of terrestrial and aquatic organisms never ceases to amaze me. A few days ago I posted a blog about a strange-looking sea creature, Glaucus atlanticus. Although small, the beautiful Blue Dragon looks almost intimidating and somewhat sinister. It's like the Darth Vader of the nudibranchs.
Today we have a blog about a related sea creature, but with a look completely opposed to G. atlanticus. Its scientific name is Chromodoris splendida, although it’s best known as the Splendid chromodoris.
This is also a nudibranch. Although related, both sea slugs could not be more dissimilar. C. splendida is extraordinarily colorful. Its charming appearance is evocative of the popular Japanese cultural expression known as Kawaii. C. splendida is cute, delicate, “groovy”, and undeniably adorable.
Chromodoris splendida, a stunningly beautiful sea slug, is endemic to New South Wales, Australia.
This nudibranch is small, about 2 to 5 cm long. It has a milk-white mantle with a thin gold rim. The mantle is covered with large red polka dots; in the frontal section it bears two purple-red sensory prominences called rhinophores, and in the posterior region a gill rosette composed of 9 to 12 white plume-like gills with a purple-red trim.
Chromodoris splendida is a stunningly beautiful sea slug endemic to New South Wales, Australia. Posterior view, showing detail of the gill system.
This enchanting little slug is endemic to eastern Australia, being present only in coastal and marine areas of New South Wales and southern Queensland. It was first described in 1864 by the English explorer and naturalist George F. Angas, an accomplished watercolor painter who was appointed Secretary of the Australian Museum in Sydney and made a significant contribution to the knowledge of the nudibranchs of Australia.
For most snails and bivalves (double-shelled mollusks), the hard shell is the main defence against predators. Nudibranchs, however, lack a shell and have evolved various strategies to avoid becoming prey of fish and birds. In some cases they use camouflage to blend with their surroundings; others, such as Chromodoris splendida, use distinct, brilliant coloration to warn potential predators of a distasteful flavor. The pretty red dots of the Splendid chromodoris are, in fact, an anti-predator adaptation known as aposematism (warning colouration). It advertises that the species contains foul-tasting toxins and noxious acid secretions. Predators recognize the benefit of not attempting to eat such an unpleasant-tasting morsel.
Chromodoris splendida. The bright red dots warn potential predators about a distasteful flavor. | <urn:uuid:8b37696e-f7ae-497f-943c-cb2596615544> | 3.1875 | 624 | Personal Blog | Science & Tech. | 40.786489 |
Protein Data Bank

The Protein Data Bank (PDB) is a repository for 3-D structural data of proteins and nucleic acids. This data, typically obtained by X-ray crystallography or NMR spectroscopy, is submitted by biologists from around the world, is released into the public domain, and can be accessed for free.
The structural data can be used to visualize the biomolecules with appropriate software, such as rasmol, chime or a VRML plugin. The PDB website also contains resources for education, structural genomics, and related software.
As of 2002, the database contained about 18,000 structures and took in about 2,000-3,000 new ones per year. Data is stored in the mmCIF format specifically developed for the purpose. Note that the database stores information about the exact location of all atoms in a large biomolecule; if one is only interested in sequence data, i.e. the list of amino acids making up a particular protein or the list of nucleotides making up a particular nucleic acid, the much larger databases from Swiss-Prot and the International Nucleotide Sequence Database Collaboration should be used.
Each structure published in PDB receives a four character alphanumeric identifier, its PDB ID. This should not be used as an identifier for biomolecules, since often several structures for the same molecule (in different environments or conformations) are contained in PDB with different PDB IDs.
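As a small illustration of that four-character convention, here is a sketch of an ID check. The pattern is my own reading of the classic convention, in which IDs begin with a digit (e.g. 1ABC); it is not an official specification:

```python
import re

# Classic PDB IDs: one digit followed by three alphanumerics, e.g. "1abc".
PDB_ID = re.compile(r"^[0-9][A-Za-z0-9]{3}$")

def looks_like_pdb_id(s: str) -> bool:
    """Return True if s has the shape of a classic four-character PDB ID."""
    return bool(PDB_ID.match(s))

print(looks_like_pdb_id("1abc"))   # True
print(looks_like_pdb_id("abcd"))   # False: classic IDs start with a digit
```

Note that, as the text says, an ID identifies one deposited structure, not a molecule; the same protein can appear under many different IDs.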
If a biologist submits structure data for a protein or nucleic acid, PDB staff review and annotate it. The data is then automatically checked for plausibility. The source code for this validation software has been released for free. The main database accepts only experimentally derived structures, not theoretically predicted ones.
Various funding agencies and scientific journals now require scientists to submit their structure data to PDB.
Founded in 1971 by Brookhaven National Laboratory, the Protein Data Bank was transferred in 1998 to the Research Collaboratoy for Structural Bioinformatics (RCSB), which is composed of Rutgers University, the University of Wisconsin, Madison, NIST and the San Diego Supercomputer Center. Funding comes from the National Science Foundation, Department of Energy, National Library of Medicine and the National Institute of General Medical Sciences. The European Bioinformatics Institute in the UK and the Institute for Protein Research in Japan also collect, process and submit data files. | <urn:uuid:e45400f0-6ec7-4236-83c0-0db033d24905> | 3.140625 | 499 | Knowledge Article | Science & Tech. | 25.643865 |
If one were able to move information or matter from one point to another faster than light, then according to special relativity, there would be some inertial frame of reference in which the signal or object was moving backwards in time.
A 2008 quantum physics experiment performed in Geneva, Switzerland has determined that the "speed" of the quantum non-local connection (what Einstein called spooky action at a distance) has a minimum lower bound of 10,000 times the speed of light. However, modern quantum physics cannot expect to determine the maximum given that we do not know the sufficient causal condition of the system we are proposing.
Where is the mistake? Because if there is no mistake here, then time travelling is possible and almost reachable with today's technology.
looks like not | <urn:uuid:018f3c3f-e9c7-4ade-8186-797e93f2c74d> | 2.796875 | 159 | Comment Section | Science & Tech. | 25.519118 |
Thu Sep 24 06:22:22 BST 2009 by DonRoberto
I'd be interested in seeing the methodology used to determine these results.
Did the two spacecraft only take data when the Earth was out of view? Did their surveys take reflected earthlight into account?
These are exciting results --- I'd just like to know a little more about how the conclusions were reached, and what alternate explanations --- if any --- are possible.
Thu Sep 24 07:25:40 BST 2009 by Randall Bart
The Earth doesn't go out of view. On the moon, the Earth just hangs in the sky and doesn't move much.
Thu Sep 24 08:23:49 BST 2009 by Nate
Maybe you haven't heard of the "dark" side of the Moon?
The question is valid. Light reflected from the Earth should be devoid of some fraction of its light at the wavelengths in question (due to absorption by clouds and ocean). Some amount of that light would be re-reflected from the Moon to the instruments on the spacecraft. If you only consider the data collected when the Earth is out of view of the instruments, then it shouldn't be a problem.
In this tutorial, you will see the use of the embed tag in HTML5. It is used for including an external resource in your HTML document. With the help of the embed tag you can add video, audio, images, etc. It is a void (singleton) tag: you cannot write anything inside it.
The embed tag has the following attributes:
|Attribute|Value|Description|
|src|URL|Address of the resource file.|
|height|pixels|Height of the media.|
|type|media type|Type of the media.|
|width|pixels|Width of the media.|
Declaration syntax of the embed tag in HTML5:
<embed src="" type="">
A complete example:
<!doctype html>
<html>
<head><title>embed Tag</title></head>
<body>
<p><b>Example of embed tag in HTML5.</b></p>
<embed src="Pick.jpg" height="300" width="300" />
</body>
</html>
The embed tag is new in HTML5; it is not available in HTML 4.01.
| <urn:uuid:dd96c375-f3cb-4079-9579-f632e42ec2da> | 2.875 | 233 | Tutorial | Software Dev. | 78.750818 |
NASA's billion-dollar budget for space science survived relatively unscathed for the current year, and officials are hopeful that the same will be true for fiscal 1988. But flight time, not money, is the biggest immediate problem for scientists, acknowledged John Holtz, assistant director of NASA's Astrophysics Division.
"We need these alternatives to support the PIs [principal investigators] due to the lack of flight opportunities in the wake of the down time and extensions in the [shuttle] manifest," Holtz said. The astronomy and physics programs received the majority of the funds for space science, $528 million in fiscal 1987, with planetary exploration receiving $374 million and the life sciences $70 million.
The space program relies heavily on the work of university-based researchers and their graduate students, who design many of the experiments and later help to analyze the data. In many cases, the projects serve as material for dissertations and theses. Since the next shuttle flight is, at best, still 13 months away, and the first scientific payload—the Hubble Space Telescope—is not scheduled to be launched until November 1988, a lot of graduate students will be out of luck if they wait for the next available payload on a future shuttle mission.
Fill 'er Up
"With astronomical instruments," he said, "you either have to collect more light in order to get brighter images, which means bigger optics, or you have to have large structures in order to use interferometry and achieve nice spatial resolutions. Either way, the astronomical instruments tend to get big. Once the shuttle bay was established as a unit of measure for the space program, there was a tremendous temptation to fill the thing up."
The Challenger accident was a particularly heavy blow to the large payloads needed for much of the planetary exploration program. There are slots for planetary missions listed on the shuttle manifest in 1989, 1990 and 1992 (with which NASA hopes to launch the Galileo mission to Jupiter, the Magellan mission to Venus and the Ulysses mission to study the solar poles), but obstacles remain that could delay indefinitely any or all of these missions.
Without any shuttle bays to fill, NASA has begun trimming its payloads. First on the list is the Cosmic Background Explorer (COBE), a 10,000-pound payload to be placed into polar orbit to search for near-infrared radiation emanating from the creation of the universe. COBE will shrink by one-half in an attempt to get into space in 1989 atop an expendable Delta launcher that NASA is persuading the Air Force to surrender. Structural changes will have to be made, but no experiments will be dropped.
Next on the list may be the Roentgen Satellite (ROSAT), a cooperative program with West Germany and the United Kingdom to make a full-sky survey in the X-ray portion of the spectrum. ROSAT is scheduled for a 1994 flight aboard the shuttle, but NASA is shopping around for an available Atlas-Centaur booster to be launched in late 1989 or early 1990. Although NASA would like to please the Europeans with an early flight, a British camera would probably have to be bumped to meet the new payload limitation.
The problems with the shuttle will also give a boost to balloon launches, which have been plagued in the past few years by faulty materials. An improved version of the polyethylene material used to carry the payloads is expected to increase the success rate of the launches (only 26 of 36 were successful in 1986) as well as permit heavier payloads, up to 5,000 pounds. NASA officials hope to conduct 49 balloon launches this year.
NASA is even using its research aircraft to fill the gap left by the shuttle. The Kuiper Airborne Observatory, a C-141 military transport equipped with a 91-centimeter telescope, is scheduled to make 72 research flights this year to permit 29 groups of scientists to make measurements in the near-infrared and sub-millimeter wavelength spectral regions.
The aircraft program has been yielding a scientific paper every two flights, noted Nancy Boggess, who manages infrared scientific programs at NASA. Last year, for example, despite the cancellation of an Astro payload (which would have conducted several ultraviolet experiments with equipment extended from the bay of the shuttle), the aircraft flew to New Zealand for infrared observations of Comet Halley and, for the first time, detected measurable amounts of water in a comet. | <urn:uuid:94479c05-cbb3-4939-8f49-2497c8569240> | 3 | 900 | Truncated | Science & Tech. | 37.638692 |
U.S. Water News Online
LOS ANGELES, Calif. -- NASA researchers have the strongest
evidence yet that one of Jupiter's most mysterious moons hides a
fermenting ocean of water underneath its icy coat. This evidence
comes from magnetic readings by NASA's Galileo spacecraft, reported
in the Friday, Aug. 25, edition of the journal Science.
Europa, the fourth largest satellite of Jupiter, has long been
suspected of harboring vast quantities of water. Since life as we
know it requires water, this makes the moon a prime target for the
search of exobiology -- life beyond Earth.
"The direction that a magnetic compass on Europa would point to
flips around in a way that's best explained by the presence of a
layer of electrically conducting liquid, such as saltwater, beneath
the ice," explained Dr. Margaret Kivelson, one of five co- authors at
the University of California, Los Angeles (UCLA).
Kivelson announced that conclusion when she first received
telltale readings from the Galileo magnetometer after the veteran
spacecraft flew near Europa in January. Her team details its theory
about the liquid layer in the formal report.
"We have good reason to believe the surface layers of Europa are
made up of water that is either frozen or liquid," Kivelson said,
pointing out that earlier gravity measurements show a low density,
such as water's, for the moon's outer portions. "But ice is not a
good conductor, and therefore we infer that the conductor may be a
Galileo has flown near Europa frequently since the spacecraft
began orbiting Jupiter and its moons in December 1995. Pictures from
those flybys show patterns that scientists see as evidence of a
hidden ocean. In some, rafts of ice appear to have shifted position
by floating on fluid below. In others, fluid appears to have risen to
the surface and frozen.
However, those features could be explained by a past ocean that
has subsequently frozen solid, said Galileo's project scientist, Dr.
Torrence Johnson of NASA's Jet Propulsion Laboratory, Pasadena, CA.
"This magnetometer data is the only indication we have that there's
an ocean there now, rather than in the geological past," Johnson said.
Johnson said the case for liquid water on Europa is still not
clinched. "The evidence is still indirect and requires several steps
of inference to get to the conclusion there is really a salty ocean,"
he said. "A definitive answer could come from precise measurements of
gravity and altitude to check for effects of tides."
NASA is planning a Europa Orbiter mission to carry instruments
capable of providing that information. Magnetic evidence for an ocean
is possible because Europa orbits within the magnetic field of
Jupiter. That field induces electric current to flow through a
conductive layer near Europa's surface, and the current creates a
secondary magnetic field at Europa, the new report explains.
Key evidence that the magnetic readings near Europa result from
this type of secondary effect, implying a saltwater layer, relies on
timing. The direction of Jupiter's magnetic field at Europa reverses
predictably as the moon's position within the field changes. During
Galileo's flyby in January, the direction of Jupiter's field at
Europa was the opposite of what it had been during passes in 1996 and
1998. Kivelson's team predicted how that would change the direction
of Europa's magnetic polarity if Europa has a saltwater layer, and
Galileo's measurements matched their prediction.
"It makes a very strong case that the source of the magnetic
signature is a conducting layer near the surface," Kivelson said.
Galileo's magnetometer is also expected to play an important role
this fall and winter in joint studies of Jupiter while NASA's
Saturn-bound Cassini spacecraft passes near Jupiter. Galileo will be
inside Jupiter's magnetic field while Cassini is just outside it, in
the solar wind of particles streaming away from the Sun. Scientists
plan to take advantage of that positioning to learn more about how
the solar wind affects the magnetic field.
Galileo completed its original mission nearly three years ago, but
has been given a three-year extension and has survived three times
the amount of radiation it was designed to endure.
| <urn:uuid:71a7490d-35d8-4275-ad0d-cf1ab7f25318> | 3.375 | 966 | Truncated | Science & Tech. | 40.66188 |
Lenticular, or lee wave, clouds form downwind of an obstacle in the path of a strong air current. In the Boulder, Colorado, area, the obstacle is the Front Range of the Rocky Mountains, seen through clouds at the bottom of the picture.
Courtesy of UCAR Digital Image Library
Lenticular clouds form on the downwind side of mountains. Wind blows most types of clouds across the sky, but lenticular clouds seem to stay in one place. Air moves up and over a mountain; the lenticular cloud forms where the rising air cools past the mountaintop, and the cloud evaporates again on the side farther away from the mountains.
As the photo on this page shows, lenticular clouds are lens-shaped and look like flying saucers.
You might also be interested in:
Wind is moving air. Warm air rises, and cool air comes in to take its place. This movement creates different pressures in the atmosphere which creates the winds around the globe. Since the Earth spins,...more
One process which transfers water from the ground back to the atmosphere is evaporation. Evaporation is when water passes from a liquid phase to a gas phase. Rates of evaporation of water depend on things...more
Altocumulus clouds are part of the Middle Cloud group (2000-7000m up). They are grayish-white with one part of the cloud darker than the other. Altocumulus clouds usually form in groups and are about...more
Altostratus belong to the Middle Cloud group (2000-7000m up). An altostratus cloud usually covers the whole sky and has a gray or blue-gray appearance. The sun or moon may shine through an altostratus...more
Cirrocumulus clouds belong to the High Cloud group (5000-13000m). They are small rounded puffs that usually appear in long rows. Cirrocumulus are usually white, but sometimes appear gray. Cirrocumulus...more
Cirrostratus clouds belong to the High Cloud (5000-13000m) group. They are sheetlike thin clouds that usually cover the entire sky. The sun or moon can shine through cirrostratus clouds . Sometimes, the...more
Cirrus clouds are the most common of the High Cloud (5000-13000m) group. They are composed entirely of ice and consist of long, thin, wispy streamers. They are commonly known as "mare's tails" because...more | <urn:uuid:1755d6a9-437e-4fbd-8a7c-f38c5d9bef1a> | 3.75 | 583 | Content Listing | Science & Tech. | 62.967671 |
In WBEM, a class is a collection of objects that represents the most basic unit of management. For example, in Solaris WBEM Services, the three main functional classes include CIMClass, CIMProperty, and CIMInstance.
Abstractly, classes are used to create managed objects. Class characteristics are inherited by the child objects, or instances, that are created from a class. For example, using CIMClass, you can create an instance, CIMClass (Solaris_Computer_System).
This instance of CIMClass answers the question, "What is the computer system?" The value of the instance is Solaris_Computer_System. All instances of the same class type are created from the same class template. In the example, the name of the computer system provides a template to create managed objects of the type Computer_System.
Classes can be static or dynamic. Instances of static classes are stored by the CIM Object Manager and can be retrieved from the CIM Repository when a request is made. Instances of dynamic classes--classes containing data that changes regularly, such as system usage--are created by provider applications as the data changes.
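The template/instance relationship described above can be sketched in plain Python. This is only an illustration of the concept: the class and property names below are hypothetical, not the actual Solaris WBEM API.

```python
# Conceptual sketch of the CIM class/instance idea (NOT the real WBEM API).
# A class is a template; all instances of the same class type are created
# from that template and share its properties.
class CIMClassTemplate:
    def __init__(self, name, properties):
        self.name = name
        self.properties = properties  # property names shared by all instances

    def new_instance(self, **values):
        # Build a managed-object instance from the class template.
        return {prop: values.get(prop) for prop in self.properties}

computer_system = CIMClassTemplate("Solaris_Computer_System", ["hostname"])
instance = computer_system.new_instance(hostname="build-server-1")
print(instance)
```

In these terms, a static class would have its instances stored in and retrieved from the repository, while a dynamic class would have a provider regenerate instances like this each time a request is made.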
For extensions to CIM, custom classes can be developed to support managed objects that are specific to their managed environment. The CIM Object Manager API provides new classes to extend CIM for the Solaris operating environment. | <urn:uuid:09e59212-4ec4-4c69-95f4-82a8160fad39> | 3.109375 | 285 | Documentation | Software Dev. | 29.661429 |
Title: Nuclear Magnetic Resonance (NMR) Data: Chemical Shifts for Carbon-13, Part 5: Organometallic Compounds
Author: Mikhova, Bozhana
Editor: Gupta, R.R.; Lechner, M.D.
Source: Landolt-Börnstein, New Series
Keyword: Carbon-13; chemical shift; magnetic properties of nuclei; nuclear magnetic resonance data
ISBN: 978-3-540-74188-6 (print)
ISBN: 978-3-540-74189-3 (electronic)
RefComment: VIII, 241 p., Hardcover
RefComment: Written for Scientists and engineers in the fields of physics, chemistry and physical chemistry who intend to use NMR to study the structure and the binding of molecules
Abstract: Nuclear Magnetic Resonance (NMR) is based on the fact that certain nuclei exhibit a magnetic moment, oriented by a magnetic field, and absorb characteristic frequencies in the radiofrequency part of the spectrum. The spectral lines of the nuclei are highly influenced by the chemical environment i.e. the structure and interaction of the molecules. NMR is now the leading technique and a powerful tool for the investigation of the structure and interaction of molecules. The present Landolt-Börnstein volume III/35 Nuclear Magnetic Resonance (NMR) Data is therefore of major interest to all scientists and engineers who intend to use NMR to study the structure and the binding of molecules. Volume III/35 "NMR-Data" is divided into several subvolumes and parts. Subvolume III/35A contains the nuclei 11B and 31P, subvolume III/35B contains the nuclei 19F and 15N, subvolume III/35C contains the nucleus 1H, subvolume III/35D contains the nucleus 13C, subvolume III/35E contains the nucleus 17O, and subvolume III/35G contains the nucleus 77Se. More nuclei will be presented later. | <urn:uuid:448def0c-a811-4c8d-8afd-06c6e65d793d> | 2.71875 | 418 | Truncated | Science & Tech. | 46.160646 |
What is LROC?
The Lunar Reconnaissance Orbiter Camera (LROC) is designed to address two of the prime LRO measurement requirements: 1) Assess meter scale features to facilitate selection of future landing sites on the Moon. 2) Acquire images of the poles every orbit to characterize the polar illumination environment (100 meter scale), identifying regions of permanent shadow and permanent or near-permanent illumination over a full lunar year. In addition to these two main objectives, the LROC team is conducting meter-scale mapping of polar regions, stereo images that provide meter-scale topographic measurements, global multi-spectral imaging, and has produced a global landform map. We have imaged over 20% of the Moon at high resolution using the Narrow Angle Cameras; if the LRO mission continues for several more years, we will eventually image the whole moon at 1/2 m/pixel. LROC images will also be used to map and determine current impact hazards by imaging areas photographed by Apollo astronauts. Comparing the new and old images reveals impact craters that formed over the past 40 years.
LROC consists of two Narrow Angle Cameras (NACs) to provide 0.5 meter-scale panchromatic images over a 5 km swath, a Wide Angle Camera (WAC) to provide images at a scale of 100 meters/pixel in seven color bands over a 60 km swath, and a Sequence and Compressor System (SCS) supporting data acquisition for both cameras. LROC is a modified version of the Mars Reconnaissance Orbiter's Context Camera (CTX) and Mars Color Imager (MARCI), provided by Malin Space Science Systems (MSSS) in San Diego, CA. | <urn:uuid:8c419860-b358-46ca-b6b0-2ce8262776bb> | 3.046875 | 355 | Knowledge Article | Science & Tech. | 28.615236 |
Frontogenesis is a measure of the “creation” of a front. Usually,
Frontogenesis refers to the strength of the horizontal temperature gradient.
Horizontal temperature gradient refers to the change of temperature over distance.
A simple version of the Frontogenesis equation is: F = ΔT / Δd (the change in temperature divided by the change in distance).
Another way to write this equation is: F = (T2 − T1) / (d2 − d1), where T1 and T2 are the temperatures at two points and d2 − d1 is the distance between them.
To find change in heating, subtract the smaller temperature value from the
larger one. Then find the distance between the two points you measured, this
is the change in distance. Divide the numerical value of the change in heating
by the distance. The resulting number will be in the format Xº(F or C)/unit
of distance. For example, if the temperature over land is 40ºF, and the
temperature over water is 70ºF, and the two points are 30 miles apart,
then the temperature gradient would be 1ºF/mile. The higher the answer,
the greater the rate of Frontogenesis.
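The calculation above can be sketched in a few lines of Python (the function name is my own, purely for illustration):

```python
# Horizontal temperature gradient: change in temperature divided by the
# distance over which that change occurs.
def temperature_gradient(temp1, temp2, distance):
    return abs(temp2 - temp1) / distance

# The worked example: 40 degrees F over land, 70 degrees F over water,
# measured at points 30 miles apart.
gradient = temperature_gradient(40.0, 70.0, 30.0)
print(gradient)  # -> 1.0 (degrees F per mile)
```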
| <urn:uuid:c05313a5-fe63-4c74-bd1e-378eb140cc5c> | 3.453125 | 225 | Knowledge Article | Science & Tech. | 35.547661 |
Getting hit by a pitcher's best fastball would hurt. So would getting hit by the most powerful cosmic rays -- particles that zip through the universe at nearly the speed of light. Though they're smaller than an atom, the particles carry as big a punch as a fastball. And the "pitchers" for such particles may be supermassive black holes.
A black hole is an ultradense object with such powerful gravity that not even light can escape from it. Supermassive black holes are millions or billions of times heavier than the Sun, and inhabit the cores of most galaxies, including the Milky Way.
These black holes pull in gas from the space around them. The matter gets so hot that its atoms are ripped apart, leaving a "soup" of charged particles. Magnetic fields can shoot some of these particles into space at near lightspeed.
A recent study found that the black hole at the center of the Milky Way may shoot particles all over the place. When they ram into other particles around the black hole, they emit gamma rays -- perhaps accounting for a gamma-ray "glow" in the center of the galaxy.
Another study found that black holes that are more massive than the Milky Way's can boost particles to even higher energy levels, producing the most powerful cosmic rays. They can race across billions of light-years before they hit -- with the impact of a speeding fastball.
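The fastball comparison can be checked with a rough back-of-the-envelope calculation. The numbers below are my own assumptions (a regulation baseball of about 0.145 kg thrown at roughly 45 m/s, about 100 mph):

```python
# Rough check of the claim that the most powerful cosmic rays carry
# about as much energy as a pitched fastball. Assumed values:
mass_kg = 0.145    # regulation baseball
speed_m_s = 45.0   # ~100 mph fastball

kinetic_energy_j = 0.5 * mass_kg * speed_m_s ** 2  # about 147 J

# Express that in electron-volts, the unit used for particle energies.
energy_ev = kinetic_energy_j / 1.602e-19
print(f"{kinetic_energy_j:.0f} J, or about {energy_ev:.1e} eV")
```

That works out to roughly 10^20 to 10^21 electron-volts, the same order of magnitude as the highest-energy cosmic rays ever detected, which is what makes the analogy apt.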
We'll have more about black holes tomorrow.
Script by Damond Benningfield, Copyright 2007
For more skywatching tips, astronomy news, and much more, read StarDate magazine. | <urn:uuid:034a33fb-4794-4122-be52-e43611d362c6> | 3.546875 | 322 | Truncated | Science & Tech. | 56.210855 |
Maathai’s longleg (Notogomphus maathaiae)
|Also known as:||Maathai’s clubtail|
|Size||Male abdomen length: 35.1 mm; female abdomen length: 35.7 mm (2)|
Classified as Endangered (EN) on the IUCN Red List 2006.
First described in 2005, Maathai’s longleg is a clubtail (Gomphidae spp.) belonging to the genus Notogomphus, commonly referred to as ‘longlegs’ on account of their extended hind thighs (3). The conspicuously contrasting bright green sides of the thorax of this otherwise fairly dark clubtail dragonfly immediately distinguished it as a species previously unknown to science (2) (3). For a clubtail - a family of dragonflies named for the enlarged tip of their abdomen - this species has a relatively little expanded tip (2).
Recorded from the forests of Mount Elgon National Park, Katamayu Forest and Marioshoni Forest, Kenya (1).
Found from around 2,200 to 2,600 m above sea level in and around clear montane forested streams (1) (2).
Odonata species start their life as aquatic larvae or nymphs, passing through a series of developmental stages or ‘stadia’, and undergoing several moults as they grow. This larval period can last anything between three months and ten years, depending upon the species. Before the final moult (emergence), metamorphosis occurs in which the larvae transform into the adult form. After emergence, adults undergo a pre-reproductive phase known as the maturation period, when individuals normally develop their full adult colour. Soon after this, individuals will begin to mate (4). Mature Maathai’s longlegs have been recorded in January, April, June, September and November, and emerging individuals have been seen in March and November, indicating that this species is not seasonal (2). Of the two female Maathai’s longlegs observed laying eggs (ovipositing) in the water, neither were guarded by males (2), as is the case for many Odonata species (4).
Odonata usually feed on flying insects and are generalised, opportunistic feeders, often congregating around abundant prey sources such as swarms of termites or near beehives (4).
The montane forest habitat on which this species appears to rely has been widely destroyed in recent decades, and Maathai’s longleg is therefore presumed to have suffered significant declines. As deforestation continues due to an expanding and encroaching human population, this rare dragonfly is expected to be up-listed to Critically Endangered on the IUCN Red List before too long (1).
In the densely populated Kenyan highlands, Maathai’s longleg serves as an indicator of habitat quality and is therefore being promoted as a flagship species to raise awareness about the need to protect the natural forest and watershed (1) (5). Protection of its riverside forests will not only help this endangered dragonfly, but also the farmers of the foothills, by guaranteeing soil stability and a steady flow of water (5). To this end, dragonflies such as this species are being dubbed the "guardians of the watershed" in East Africa, helping to raise their profile in the field of conservation (5) (6).
For more information on Maathai’s longleg see:
- Clausnitzer, V. and Dijkstra, K.B. (2005) Honouring Nobel Peace Prize winner Wangari Maathai: Notogomphus maathaiae sp. nov., a threatened dragonfly of Kenya’s forest streams (Odonata: Gomphidae). International Journal of Odonatology, 8(2): 177 – 182.
Authenticated (28/11/2006) by Dr. Viola Clausnitzer, Chair, IUCN/SSC Odonata Specialist Group.
- Larvae: stage in an animal’s lifecycle after it hatches from the egg. Larvae are typically very different in appearance to adults; they are able to feed and move around but usually are unable to reproduce.
- Metamorphosis: an abrupt physical change from the larval to the adult form.
- Thorax: part of the body located near the head in animals. In insects, the three segments between the head and the abdomen, each of which has a pair of legs.
- Watershed: the total land area from which water drains into a particular stream or river.
IUCN Red List (October, 2006)
- Clausnitzer, V. and Dijkstra, K.B. (2005) Honouring Nobel Peace Prize winner Wangari Maathai: Notogomphus maathaiae sp. nov., a threatened dragonfly of Kenya’s forest streams (Odonata: Gomphidae). International Journal of Odonatology, 8(2): 177 - 182.
- Boy, G. (2005) Maathai’s clubtail. SWARA, 0: 8 - 9.
- O’Toole, C. (2002) The New Encyclopedia of Insects and Their Allies. Oxford University Press, Oxford.
IUCN News Release: Release of the 2006 IUCN Red List of Threatened Species reveals ongoing decline of the status of plants and animals (October, 2006)
2006 Red List of Threatened Species: Fighting the extinction crisis: conservation in action (October, 2006) | <urn:uuid:1c59d83f-7c26-45f0-9786-2d5bc1a03cf0> | 3.265625 | 1,189 | Knowledge Article | Science & Tech. | 48.862403 |
Organoxenon compounds in organic chemistry contain carbon-to-xenon chemical bonds. The first organoxenon compounds were divalent, such as (C6F5)2Xe. The first tetravalent organoxenon compound, [C6F5XeF2][BF4], was synthesized in 2004 [1]. So far, more than one hundred organoxenon compounds have been researched.
Most organoxenon compounds are less stable than the xenon fluorides because of their higher polarity; the molecular dipoles of xenon difluoride and xenon tetrafluoride are both 0 D. The earliest compounds synthesized contained only perfluoro groups, but other groups were later found, e.g. 2,4,6-trifluorophenyl [2].
The most common bivalent organoxenon compound is C6F5XeF, which is widely used as a precursor to other organoxenon compounds. Due to the instability of xenon fluoride, it is impossible to synthesize organoxenon compounds using ordinary organic reagents. The most frequently used fluorinating agents include Cd(ArF)2 (the subscript "F" denotes a fluorine-containing aryl group), C6F5SiF3, and C6F5SiMe3 (used along with a fluoride source).
With the use of stronger Lewis acids, such as C6F5BF2, ionic compounds like [RXe][ArFBF3] can be produced. Alkenyl and alkyl organoxenon compounds are prepared in this way as well, for example, C6F5XeCF=CF2 and C6F5XeCF3 [2].
Some typical reactions are listed below:
The third reaction also produces (C6F5)2Xe, Xe(2,4,6-C6H2F3)2 and so on.
The precursor C6F5XeF can be prepared by the reaction of trimethyl(pentafluorophenyl)silane (C6F5SiMe3) with xenon difluoride. Adding fluoride to the adduct of C6F5XeF and arsenic pentafluoride is another method [2].
- "The First Organoxenon(IV) Compound: Pentafluorophenyldifluoroxenonium(IV) Tetrafluoroborate". Angewandte Chemie International Edition 39 (2): 391–393. 2000-01-16. doi:10.1002/(SICI)1521-3773(20000117)39:2<391::AID-ANIE391>3.0.CO;2-U.
- Frohn, H. (2004). "C6F5XeF, a versatile starting material in xenon-carbon chemistry". Journal of Fluorine Chemistry 125 (6): 981–988. doi:10.1016/j.jfluchem.2004.01.019. | <urn:uuid:05e3df31-cf58-443a-bfac-87bd2db48899> | 3.671875 | 701 | Knowledge Article | Science & Tech. | 52.518821 |
Before wastewater reaches recipient waters, nutrients must be removed in order to avoid eutrophication and large algal blooms, which may result in serious damage to animal and plant life. Robert Almstrand at the University of Gothenburg, Sweden, has shown in his thesis that better removal of nitrogen from wastewater can be achieved by providing the bacteria that purify the water with alternating high and low levels of nutrients.
Sometimes a picture says it all, but occasionally a picture just fascinates you even though you’re not quite sure what you’re looking at. The pictures from the Olympus BioScapes Competition fall into the latter category and we just couldn’t withhold … Continue reading
Carbon nanotubes are stronger than steel, harder than diamond, light as plastic and conduct electricity better than copper. It is no wonder they can be found in an increasing range of products, ranging from tennis rackets to solar cells and … Continue reading
As if anthropogenic pollution and overfishing isn’t damaging enough for coral reefs worldwide, now certain seaweeds seem determined to see the end of reefs as well. These macroalgae produce chemicals that inhibit the growth of reef-building coral or even kill … Continue reading
When talking about a biobased economy, most people think biofuel. And who can blame them, since gasoline alone is good for about half of global petroleum use? A transition from petroleum to biomass as a source for fuel would put … Continue reading
Existing batteries are not known for their environmentally friendly components, since most contain heavily toxic chemicals. The much used lithium-ion batteries, best known for their use in cell phones and electric cars, for instance can contain pollutants that may decrease … Continue reading
Ocean iron fertilisation, one of the most discussed CDR geoengineering proposals, deliberately tries to stimulate biological activity in the upper ocean. New research shows this in turn affects ecology at the ocean floor too. Let’s just hope sea cucumbers don’t … Continue reading
Marine bacteria produce two types of sulphur compounds as they eat dead algae biomass. The one, methanethiol, or MeSH, is cycled downwater into the food chain. The other forms a liquid aerosol, dimethylsulfide, or DMS. The latter plays an important … Continue reading | <urn:uuid:13b59875-fca3-4a3a-8b7b-c7be01665252> | 3.15625 | 477 | Content Listing | Science & Tech. | 25.666538 |
Underwater Balloon
Science brain teasers require understanding of the physical or biological world and the laws that govern it.
You have a rubber balloon with a string attached to it, and a weight attached to the other end of the string. You put just the right amount of air in the balloon such that if you submerge the balloon and weight under water to a depth of 30 feet, the balloon will not rise or sink (its buoyancy force is exactly balanced by gravity).
You then pull the balloon and weight down another 30 feet. If you let the balloon go at this depth, will it rise or sink?
Assume that the water temperature is the same at both depths.
Hint: The overall density of the balloon/weight system at 30 feet is exactly the same as the density of water. How does the density of the system change at 60 feet?
Answer: The balloon will sink. At a depth of 60 feet, the water pressure is greater than it is at 30 feet (by about 15 psi). Because of this, the balloon will shrink, causing the balloon/weight system to increase in density (the total volume is smaller while the total mass stays the same). Since the system is now more dense than it was at 30 feet, it will sink.
Also important is the fact that since water is pretty much incompressible, the water density at both depths will be about the same.
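The answer can be checked numerically with Boyle's law (P1·V1 = P2·V2 at constant temperature), assuming roughly one extra atmosphere of pressure per 33 feet of water:

```python
# Boyle's law check for the balloon: at constant temperature, P1*V1 = P2*V2.
# Assumes ~1 atm of added pressure per 33 ft of water depth.
def pressure_atm(depth_ft):
    return 1.0 + depth_ft / 33.0

p30 = pressure_atm(30)  # ~1.91 atm
p60 = pressure_atm(60)  # ~2.82 atm

# The air in the balloon is compressed, so its volume shrinks:
volume_ratio = p30 / p60
print(f"at 60 ft the balloon holds {volume_ratio:.0%} of its 30 ft volume")
# Same mass in a smaller volume means higher density, so the system sinks.
```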
| <urn:uuid:44c1ce37-53bd-4584-9cf1-b993c4f12aca> | 3.578125 | 339 | Tutorial | Science & Tech. | 67.296181 |
Differential Equations Workbook For Dummies
Once you’ve figured out the type of differential equation you’re dealing with, you can move on to solving the problem by using the method of undetermined coefficients or the power series method. If a stubborn equation comes your way, try using Laplace transform solutions to help.
How to Tell One Differential Equation from Another
Before you can solve a differential equation, you need to know what kind it is. There are several different types of equations, including linear, separable, exact, homogeneous, and nonhomogeneous.
Linear differential equations deal solely with derivatives to the first power (forget about derivatives raised to any higher power).
The power referred to here is the power the derivative is raised to, not the order of the derivative. Here’s a pretty typical-looking linear differential equation:

$\frac{dy}{dx} + 3y = x^2$
Separable differential equations can be written so that all terms in x and all terms in y appear on opposite sides of the equation, as you can see in this example:
which can also be written as
Exact differential equations are those where you can find a function whose partial derivatives correspond to the terms in the differential equation. Here’s an example:
Homogeneous differential equations contain only derivatives of y and terms involving y. As you can see in this equation, they’re also set to 0:
Nonhomogeneous differential equations are the same as homogeneous differential equations, with one exception: instead of being set to 0, the right side contains terms involving only x and/or constants. Here's an example of a nonhomogeneous differential equation:
The general solution of this nonhomogeneous differential equation is y(x) = c1y1(x) + c2y2(x) + yp(x), where c1y1(x) + c2y2(x) is the general solution of the corresponding homogeneous differential equation and yp(x) is a particular solution to the nonhomogeneous equation.
Two Effective Ways to Solve Differential Equations
You can solve a differential equation in a number of ways. The two most effective techniques you can use are the method of undetermined coefficients and the power series method.
The method of undetermined coefficients is a useful way to solve differential equations. To apply this method, simply plug a solution that uses unknown constant coefficients into the differential equation and then solve for those coefficients by using the specified initial conditions.
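The worked equations in the original cheat sheet were images and did not survive; as a stand-in, here is the method applied to an assumed example equation, y'' + 3y' + 2y = 4x. The right side is a first-degree polynomial, so the trial solution is y_p = a*x + b, and sympy solves for the unknown coefficients:

```python
import sympy as sp

x = sp.symbols("x")
a, b = sp.symbols("a b")

# Trial particular solution with undetermined coefficients a and b.
y_p = a * x + b

# Plug the guess into the left-hand side of y'' + 3y' + 2y = 4x.
lhs = sp.diff(y_p, x, 2) + 3 * sp.diff(y_p, x) + 2 * y_p

# Match coefficients of like powers of x and solve for a and b.
coeffs = sp.solve(sp.Poly(lhs - 4 * x, x).coeffs(), [a, b])
print(coeffs)   # a = 2, b = -3, so y_p = 2x - 3
```

The same pattern works for exponential or sinusoidal right sides; only the form of the trial solution changes.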
Power series are another tool in your differential equation solving toolkit. You can substitute a power series such as the following into a differential equation:
Then all you have to do is find a recurrence relation that gives you the coefficient an.
Solving Differential Equations Using Laplace Transform Solutions
Laplace transforms are a type of integral transform that are great for making unruly differential equations more manageable. Simply take the Laplace transform of the differential equation in question, solve that equation algebraically, and try to find the inverse transform. Here’s the Laplace transform of the function f (t):
Check out this handy table of Laplace transforms for common functions whenever you don’t want to take the time to calculate a Laplace transform on your own. | <urn:uuid:2d40b7f4-d62d-4f37-98b9-e9e7c4da9b10> | 3.953125 | 645 | Tutorial | Science & Tech. | 20.255082 |
acetaldehyde (ăsˌĭtălˈdəhĪd) or ethanal (ĕthˈənălˌ), CH3CHO, colorless liquid aldehyde, sometimes simply called aldehyde. It melts at −123°C, boils at 20.8°C, and is soluble in water and ethanol. It is formed by the partial oxidation of ethanol; oxidation of acetaldehyde forms acetic acid. Acetaldehyde is made commercially by the oxidation of ethylene with a palladium catalyst (see Wacker process). It is used as a reducing agent (e.g., for silvering mirrors), in the manufacture of synthetic resins and dyestuffs, and as a preservative. When treated with a small amount of sulfuric acid it forms paraldehyde, (CH3CHO)3, a trimer, which is used as a hypnotic drug.
The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
Joined: 03 Oct 2005
Posted: Wed Oct 12, 2005 2:26 pm    Post subject: Buckyballs and Polymers Used in New, Organic Solar Panel
Imagine being able to paint a renewable energy source on the walls of your house, without having to shell out most of your life's earnings.
Well, this may no longer be in the realm of fantasy, with researchers from New Mexico State University and Wake Forest University working on an organic solar panel that is not only flexible, but can also be wrapped around structures, and comes much cheaper than the conventional ones.
Unlike traditional solar panels, which are made of silicon and are expensive and brittle like glass, organic solar cells are made of plastic, and are inexpensive.
Physicist Seamus Curran, head of the nanotechnology laboratory at NMSU, where this was developed, said that while traditional solar panels had an efficiency of three to four percent, the plastic ones had an efficiency level of 5.2 percent.
Curran said that though it was in the developmental stage, it would be available within a span of four to five years. The findings were presented at the Santa Fe Workshop on Nano-engineered Materials and Macro-Molecular Technologies.
"We are closer to making organic solar cells that are available on the market. We need to look into alternative energy sources if the United States is to reduce its dependence on foreign sources. Our expectation is to get beyond 10 percent in the next five years. Our current mix is using polymer and carbon buckyballs (fullerenes) and good engineering from Wake Forest and unique NSOM imaging from NMSU to get to that point," he said.
New Mexico Economic Development Department Secretary Rick Homans said: "This breakthrough pushes the state of New Mexico further ahead in the development of usable solar energy, a vital national resource. It combines two of the important clusters on which the state is focused: renewable energy and micro nano systems, and underlines the strong research base of our state universities". | <urn:uuid:2226fbd3-28d8-4673-91b8-bab8573a427b> | 3.0625 | 413 | Comment Section | Science & Tech. | 30.904086 |
SCIENTISTS from IBM's Almaden Research Center in San Jose, California, have used a scanning tunnelling microscope to produce the first image of a defect in the surface layer of gold. The scientists constructed artificial 'buckling' in the metal, so that 24 atoms were squeezed into the space that would normally hold 23 atoms. The atomic film buckles up to form elevated ridges just 0.015-millionths of a millimetre high.
Results from the scanning tunnelling microscope closely match data from previous experiments. These look at how electrons are diffracted, and helium atoms scattered, by films of gold. According to Shirley Chiang of IBM, this close match indicates that the microscope is giving information on the positions of the cores of atoms as well as the electrons which surround them. She claims that this indicates that the technique could be used to study other metals.
Scientists have already seen similar buckling in ...
To continue reading this article, subscribe to receive access to all of newscientist.com, including 20 years of archive content. | <urn:uuid:9523e10c-afeb-4e79-b0c6-5913b2377fc0> | 4 | 221 | Truncated | Science & Tech. | 49.067577 |
WITH the world's aid resources concentrated on the immediate needs of survivors, little has so far been done to address the tsunami's environmental impact. But the recovery of coastal ecosystems will be crucial as fractured communities try to piece their lives back together.
Along the Indonesian coast, for example, habitats change rapidly as you move inland from the water's edge. "A kilometre will take you through at least two zones," says Lisa Curran, director of the Tropical Resources Institute at Yale University. This means that the waves could have torn out entire ecosystems along vast stretches, leaving little from which they could regenerate. Moreover, Aceh, the most heavily hit province of Sumatra, was also one of the most biologically rich. The less damaged regions to the south are far more densely settled so they support less natural vegetation to recolonise devastated areas.
Only when fishermen return to the sea in ...
IN VIENNA, Austria, scientists are listening for clandestine nuclear tests. In Norway, other researchers are trying out a device that reveals the contents of a nuclear missile without betraying its deepest secrets. And 1000 kilometres east of Moscow in Votkinsk, 30 Americans who watch Russians make missiles to aim at other Americans may not be coming home at Christmas after all.
The world is in the midst of an unprecedented wave of negotiations aimed at saving global agreements to keep nuclear weapons in check. Few realise that this involves at least as much science as it does diplomacy. The weapons treaties involved are totally dependent on verification science: inspections, remote monitoring and other methods of ensuring that people do not build or conceal banned weapons.
The diplomats can only rescue the treaties if they convince sceptics that verification works, so scientists are launching a renewed research effort to enable verifiers ...
The current range of the African elephant is Africa, south of the Sahara. It occurs in about 35 African states.
The species formerly extended into North Africa up to the Mediterranean coast. In West Africa, only thinly scattered, small populations remain. Central African rainforests still harbour substantial, largely continuous populations of L. a. cyclotis.
Savannah elephants, L. a. africana, occur in East and southern African savannas down to northernmost Namibia, Botswana, Zimbabwe and South Africa (with a large gap in central Angola and neighbouring areas).
Although occupying exclusively tropical and subtropical zones, African elephants live in a wide range of habitats.
They are essentially mixed feeders, so accessibility to a wide range of plants, and to water within one day’s walk, are essential prerequisites.
Estimates of natural animal density are hard to make. The carrying capacity will also vary enormously with the environment. In general, an area of about 2 mi2 (5 km2) per animal is probably typical in the wild, although the figure may be as high as 7.7 mi2 (20 km2) in rainforest habitats.
In many areas where they live, elephants are the dominant mammalian species in terms of biomass, and have a major ecological role. Their massive dung production recycles nutrients back into the soil. They can also disperse seeds and fruits over wide distances.
Elephants' habit of destroying trees has led to debate about their role in changing their own environment. In some parts of Africa, elephants have transformed wooded areas into open grassland. However it is likely that, originally, such phenomena formed part of a natural cycle, with long-term balance between different habitats.
If a high number of elephants in one area caused a reduction in the tree density, either the elephant population would limit its own reproduction, or the animals would migrate to another area, allowing regeneration of woodland. In some areas even today, vegetation regeneration seems to keep pace with elephant feeding; it is primarily in savanna habitats, and particularly where elephants have been constrained within the boundaries of reserves, that problems arise, and in the present situation these are certainly important issues for conservation.
Many other factors, such as fire and climate change, also contribute to the balance between elephants and their habitats. In the severe drought of 1970-71, thousands of elephants died in Africa as a result of food and water shortage. | <urn:uuid:a55c7f62-78c3-4228-913f-eab321f41021> | 4.03125 | 502 | Knowledge Article | Science & Tech. | 29.49688 |
Harmful Algal Blooms
There are over 5000 species of tiny single-celled plants that make up the marine phytoplankton. These are at the start of virtually all marine food chains and are essential to support marine life.
Harmful phytoplankton produce a range of toxic biochemicals. These are harmless to shellfish, which filter feed on the phytoplankton and in doing so can concentrate toxins in their tissue. Some very serious neurological and gastrointestinal sicknesses result in people if they eat contaminated shellfish.
Diarrheic Shellfish Poisoning (DSP)
Amnesic Shellfish Poisoning (ASP)
Paralytic Shellfish Poisoning (PSP)
There is a fourth category, Azaspiracid Shellfish Poisoning (AZP), which causes symptoms similar to DSP in low doses. The toxins which cause the most problems in western Europe are those which result in DSP. This group of toxins comprises okadaic acid and its derivatives (DTX1, DTX2, DTX3, etc.). Most often contamination is linked with the presence of Dinophysis, particularly Dinophysis acuminata and Dinophysis acuta. These species have a very distinctive appearance under a microscope. The ASP toxin, domoic acid, is produced by the diatom Pseudo-nitzschia whereas PSP toxins are mainly produced by species from the genus Alexandrium, which is a very small, innocuous-looking phytoplankton that along with Dinophysis belongs to a group known as the dinoflagellates. The toxin produced by Alexandrium tamarense found around the Northern Isles of Britain is highly potent indeed.
[Figures: Dinophysis acuta; Dinophysis acuminata]
The diatom Pseudo-nitzschia is a small spindle-shaped cell which often occurs in chains of slightly overlapping cells. The cells in these images are 70 micrometres long and about 6 micrometres wide (0.07 x 0.006 mm).
As teachers settle into a new school year, it seems a good time to provide some general tips and suggestions on how to make use of popular movies or television in the science classroom. Some of these ideas may be more appropriate for the high school setting, but I hope elementary and middle school teachers will also find some useful suggestions. I will use examples from Monsters vs. Aliens, which comes out on DVD today, September 29, to illustrate each suggestion.
Monsters vs. Aliens tells the story of Susan Murphy’s transformation from normal woman into Ginormica, a 15-meter tall, white-haired version of herself. Susan’s transformation was caused by being struck by a meteorite on her wedding day. (I really appreciate that the writers got that right: the object that hit Susan is a “meteorite,” since it reached the ground. While still in space, rocky objects like this are “meteoroids” and while in the atmosphere are “meteors.”) She is captured by the military and taken to a secret facility where she meets several other monsters based on 1950s science-fiction films. There is Bob, a blue gelatinous mass, à la The Blob (1958); The Missing Link, who bears a striking resemblance to The Creature from the Black Lagoon (1954); Insectasaurus, who is a giant mutated larva not so different from Mothra (1961); and, finally, Dr. Cockroach Ph.D., who is the result of a teleportation experiment gone wrong, as in The Fly (1958).
First, and probably my favorite way to use movies in class, is to have students collect some data from a short segment and use that data in a quick calculation. Actual data collection in this film is hard to come by, but it is possible to estimate the kinetic energy of the meteorite that hits Susan at the start. Students can estimate the size of the meteorite if you pause the film just before the collision, and you could even get a rough speed number by advancing frame by frame. Combine the estimated volume with a density of about 3 grams per cubic centimeter, and you have the approximate mass of the meteorite. Then the kinetic energy is ½ mv^2, and can be compared to the energy of a soccer ball or moving car.
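The arithmetic students would do can be sketched as follows. The meteorite size and speed below are illustrative assumptions (in class they would come from pausing and frame-stepping the film); only the 3 g/cm^3 density comes from the text.

```python
import math

radius_m = 0.15     # assumed ~30 cm diameter meteorite
density = 3000.0    # kg/m^3 (3 grams per cubic centimetre, as in the text)
speed = 60.0        # m/s, assumed from frame-by-frame estimation

# Mass from the estimated volume and density.
volume = (4 / 3) * math.pi * radius_m ** 3
mass = density * volume

# Kinetic energy: (1/2) m v^2.
kinetic_energy = 0.5 * mass * speed ** 2

# Comparison object: a 1500 kg car at highway speed (30 m/s).
car_ke = 0.5 * 1500 * 30 ** 2

print(f"meteorite: {mass:.1f} kg, KE = {kinetic_energy / 1000:.0f} kJ")
print(f"car at 30 m/s: KE = {car_ke / 1000:.0f} kJ")
```

With these assumed numbers the meteorite carries less energy than a moving car, which itself makes a good discussion point about how the scene is dramatized.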
As a check on students’ ability to apply a science concept in a new context, you might show a short scene from a film and ask students to “spot the errors.” For this application, the film you use depends on the topic you want to review, and Monsters vs. Aliens is especially good to check on students understanding of scale.
A classic way to make a movie monster is to put an actor wearing a rubber suit in a model city. He stomps around knocking down buildings and power lines until confronted by another actor in a different suit who stops him. The problem is that simply scaling up an animal does not work for a variety of physical and physiological reasons. First, consider the volume (and if the density is maintained, also the mass) when an object is doubled in size. If you take a cube 1 cm on a side and double all the dimensions, you get a cube 2 cm on each side, with a volume of 2×2×2=8 cubic centimeters. That means doubling the size of the cube made it 8 times more massive (2^3 = 8). Susan/Ginormica went from being a normal height (say 1.7 meters) to approximately 15 meters tall. That's a factor of about 9, so her mass would increase by a factor of 9^3 or about 730. Her mass would go from a very reasonable 60 kilograms to over 43,000 kilograms. This might not seem so bad, since she has larger bones and muscles as well. The problem is that the strength of a bone is determined by its cross section, not its volume. When she grew by a factor of 9 in all directions, the cross section of her bones grew by a factor of 81 (9×9). While Ginormica's mass has gotten more than 700 times larger, her bones have only gotten about 80 times stronger, and her skeleton would collapse. For a more complete treatment of the biology of science fiction movies, check out Michael LaBarbera's article at http://fathom.lib.uchicago.edu/2/21701757/.
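The scaling argument above reduces to two power laws, which students could check in a few lines (using the rounded factor of 9 from the text, since 15 m / 1.7 m is approximately 9):

```python
scale = 9                       # linear scale-up factor, ~15 m / 1.7 m

mass_factor = scale ** 3        # volume (hence mass) scales as length cubed
strength_factor = scale ** 2    # bone strength scales as cross-section, length squared

print(f"mass grows {mass_factor}x, bone strength only {strength_factor}x")
print(f"a 60 kg person becomes ~{60 * mass_factor:,} kg")
```

The mismatch between the cubed and squared factors is the whole argument: strength cannot keep up with mass, so the scaled-up skeleton fails.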
Finally, you can use films to initiate a class discussion of how science is perceived by the public. Monsters vs. Aliens perpetuates a stereotype of the “Mad Scientist” common in science fiction and the media generally. (Try an image search on the Internet for “scientist” and see how many of them show a man in a lab coat with wild eyes and crazy hair.) As a follow up to the discussion you might invite a real scientist to your classroom for a visit, or arrange for an online video chat so that your students can see that scientists are real people, not caricatures.
I hope these ideas will inspire some of you to try incorporating at least a bit of Hollywood science into your classroom as a way to bridge the gap between school science and students’ everyday lives.
Another version of this column was originally published by NSTA. See www.nsta.org/publications/blickonflicks.aspx.
Jacob Clark Blickenstaff is Assistant Professor of Physics and Assistant Director of the Center for Science and Mathematics Education at the University of Southern Mississippi. His web site through the National Science Teachers Association is Blick on Flicks | <urn:uuid:4649b8d2-7bdf-4793-a499-56822ee9d3f1> | 3.515625 | 1,156 | Personal Blog | Science & Tech. | 58.053294 |
This is the preface of a series of articles I intend to post in the coming weeks about simple ideas, ubiquitous among all programmers, but rarely understood, and too often preached for without any justifiable reason.
Follow me in the coming weeks to understand, once and for all, what's so important in the following concepts, and why.
Do that, and you'll be able to explain, both to yourself, and to your friends and co-workers, why you, as a programmer, make all those (somewhat intuitive) design decisions that you make.
What I'm going to talk about is why and when the following concepts are considered a bad thing:
- Global state
- Global access
- Tight coupling
- Multiple responsibilities per class
- Operator overloading
- Bad or inconsistent naming conventions
But first, a few key notes you should be aware of:
It turns out that, as programmers, we rely heavily on intuition. Why is that? Simply because we can never be bothered to learn and understand all the smallest peculiarities found in Computer Science. We often feel competent as soon as we think we get the general idea of something, without bothering to philosophize about it, which can be quite a time waster, accumulatively.
However, it is fundamental that we understand that that's exactly what makes up our intuition. Things which we kind of get, but not entirely.
This may often prove to still be valuable and useful, but we must never assume that it is unquestionably correct.
One obvious problem resulting from reliance on intuition is dogmatism.
When we don't understand an idea to its core, but know that it is preached for and is widely considered a good idea, we tend to apply it even where it's redundant or even harmful.
One side effect of not knowing how to explain such ideas is transitive fanatic evangelism. Meaning, if you manage to use your charisma to persuade other people that an idea is good, without explaining it thoroughly, they will often listen to you, and evangelize others in exactly the same way.
This is how many common bad ideas are formed.
We often find ourselves asking such unnecessarily general questions, such as "Is this idea important?".
It is vital to understand, once and for all (get it in your head) that there is no such thing as "important things".
One must always define a context. When a context is not defined, or implicitly and unambiguously rendered, one should not bother to answer questions.
Self-rationalization is a disease. If you wish to open yourself to practicality, you must embrace objectivity. Nothing is good or bad without a context. Nothing!
While subjectivity may often lead to quite humorous catchy mnemonics, such as this quote from some guy at CodeReview.SE:
"Daddy, Daddy, he defined a global!"
"Now son, what have I told you about using language like that?"
It should also be explicitly taken into consideration that, for example, defining a global variable might not always be a bad idea, until it is proven otherwise.
A lack of practical examples justifies avoidance, but does not constitute groundless aversion.
I would like to clarify that everything I say is based on personal first-hand experience.
If anything portrays me faithfully, it is the ability to never skip making impulsive mistakes, but, quite fortunately, to learn from some.
I hope that you find sympathy, understanding, and enlightenment in the upcoming posts. | <urn:uuid:5907c9ad-4ab1-4dcb-9299-aa9d39031768> | 2.75 | 724 | Personal Blog | Software Dev. | 43.005839 |
LONG AvailFonts( STRPTR buffer, LONG bufBytes, LONG flags );
Fill the supplied buffer with info about the available fonts.
After function execution the buffer first contains a
struct AvailFontsHeader, and then an array of struct AvailFonts
elements (or TAvailFonts elements if AFF_TAGGED is specified in the
flags parameter). If the buffer is not big enough for the
descriptions, then the additional length needed will be returned.
buffer - pointer to a buffer in which the font descriptions
should be placed.
bufBytes - size of the supplied buffer.
flags - flags telling what kind of fonts to load,
for example AFF_TAGGED for tagged fonts also,
AFF_MEMORY for fonts in memory, AFF_DISK for fonts
shortage - 0 if buffer was big enough or a number telling
how much additional place is needed.
If the routine fails, then the afh_Numentries field
in the AvailFontsHeader will be 0.
struct FontContentsHeader * NewFontContents(
BPTR fontsLock,
STRPTR fontName );
Create an array of FontContents entries describing the fonts related
to 'fontName' -- that is, those in the directory with the same name
as 'fontName' without the ".font" suffix.
fontsLock -- A lock on the FONTS: directory or another directory
containing the font file and associated directory
fontName -- The font name (with the ".font" suffix).
Pointer to a struct FontContentsHeader describing the font or NULL
if something went wrong.
struct TextFont * OpenDiskFont(
struct TextAttr * textAttr );
Tries to open the font specified by textAttr. If the font has already
been loaded into memory, it will be opened with OpenFont(). Otherwise
OpenDiskFont() will try to load it from disk.
textAttr - Description of the font to load. If the textAttr->ta_Style
FSF_TAGGED bit is set, it will be treated as a struct TTextAttr.
Pointer to a struct TextFont on success, 0 on failure. | <urn:uuid:32a40156-12ce-4db0-8835-f5c0f69334ed> | 2.921875 | 457 | Documentation | Software Dev. | 49.808058 |
Giant Earthquakes Will Continue
You would think this would be huge news in America. California or Alaska, at least. Mainland Alaska hasn't had a huge 9+ earthquake since before the Alaskan Pipeline was built but you can bet, it has a very good chance of happening now. Failure of this pipeline will have catastrophic effects on our entire economy.
Then there is California. They haven't had a 9+ earthquake in over a century. Such a beast would pretty much destroy nearly everything in its path.
So here it is:
THE Indonesian earthquake behind the Boxing Day tsunami that killed 300,000 people could be the first of a series of giant quakes that will rock the world in the next 10 to 15 years, scientists have warned.
The Mediterranean is among areas at high risk, particularly the coasts of Greece and Turkey, both popular tourist destinations. The scientists are urging the installation of a tsunami warning system there as a matter of urgency.
They found that quakes such as the one in Indonesia can destabilise the whole of the earth's crust, so that one is followed by others, often thousands of kilometres away, within a few years.
"The four biggest earthquakes of the 20th century all happened within 12 years of each other, a pattern we see repeated with other quakes over many decades," said Vladimir Kossobokov of the International Institute of Earthquake Prediction in Moscow.
My own take on this: Not only do earthquakes travel in groups, so do volcanic eruptions. Roughly on a twenty year/seventy year cycle. Like the sun, it is periodic and it is because of the earth's crust flexing in unison. Energy for this builds up until released. The moon is the forgotten force here. It warps our oceans so much we have this thing called "tides" and in full moon phases, the tides are much greater than other times. The moon moves closer and then further. The relative position of the earth and sun varies too. This causes the planetary surface to flex and warp. On top of this are forces on earth itself. When vast sheets of ice form or melt, this affects the shape of the crust. Changes in the status quo leads to increases in energy released or locked up.
Right now, the Australian plate seems to have accelerated its movement as mass leaves the Antarctic plate. This shoves other plates together. Thus the earthquakes. After this, the volcanoes which are already pretty active, will increase activity. This is a far greater danger than mere earthquakes. Volcanoes alter the weather in very drastic ways.
So why isn't this news here in America? Aside from Indonesia, we are the most likely place for earth girdling catastrophic events like Yellowstone blowing up. | <urn:uuid:d4c7d409-1b9c-4e86-8052-93bef308667a> | 2.796875 | 561 | Personal Blog | Science & Tech. | 53.996749 |
A fresh study by a leading hurricane researcher has raised new questions about how hurricane strength and frequency might, or might not, be influenced by global warming. Eric Berger of the Houston Chronicle nicely summarized the research on Friday.
The research is important because the lead author is Kerry Emanuel, the M.I.T. climate scientist who in the 1980’s foresaw a rise in hurricane intensity in a human-warmed world and in 2005, just a few weeks before Hurricane Katrina swamped New Orleans, asserted in a Nature paper that he had found statistical evidence linking rising hurricane energy and warming.
That work was supported by some subsequent studies, but refuted by others. Despite the uncertainty in the science, hurricanes quickly became a potent icon in environmental campaigns, as well as in “An Inconvenient Truth,” the popular climate documentary featuring former Vice President Al Gore. The message was that global warming was no longer a looming issue and was exacting a deadly toll now.
The new study, in the Bulletin of the American Meteorological Society, is hardly definitive in its own right, essentially raising more questions than it resolves. But it definitely rolls back Dr. Emanuel’s sense of confidence about a recent role for global warming. (The abstract is here. A pdf is downloadable on Dr. Emanuel’s ftp page.)
I queried Dr. Emanuel about it and he sent this note Friday night:
The models are telling us something quite different from what nature seems to be telling us. There are various interpretations possible, e.g. a) The big increase in hurricane power over the past 30 years or so may not have much to do with global warming, or b) The models are simply not faithfully reproducing what nature is doing. Hard to know which to believe yet.
The study essentially meshed two kinds of computer models — the massive global climate simulations used to project long-term consequences of building greenhouse gases and small high-resolution simulations of little atmospheric disturbances that can grow into hurricanes. When hundreds of potential storms were seeded across warming oceans, some places in some computer runs — like the North Pacific — saw more activity, but others saw less intensification and fewer storms.
As Dr. Emanuel told Eric in the Chronicle:
“The take-home message is that we’ve got a lot of work to do,” Emanuel said. “There’s still a lot of uncertainty in this problem. The bulk of the evidence is that hurricane power will go up, but in some places it will go down.”
The fresh findings, and Dr. Emanuel’s willingness to follow the science, remind me of something he told my colleague Claudia Dreifus in 2006: “[I]t’s a really bad thing for a scientist to have an immovable, intractable position.”
On his SciGuy blog, Eric discusses some of the ramifications of Dr. Emanuel’s new storm study:
• This should put to rest a lot of the nonsense about a global warming conspiracy among scientists. Emanuel, faced with new evidence, has moderated his viewpoint. That’s what responsible scientists do, and most are responsible. The amount of scientist-bashing when it comes to global warming is generally quite deplorable.
• Anyone who doubts that the threat of large hurricanes is still being used as part of global warming campaigns should look no further than the energy and climate platform of a presidential candidate [pdf alert], who writes, “Global warming is real, is happening now and is the result of human activities. The number of Category 4 and 5 hurricanes has almost doubled in the last 30 years.”
• If you’re a skeptic, and you welcome these results, please remember that these are the same climate models you bash when they show global temperatures steadily rising during the next century.
They are solid points that hold lessons for advocates on both sides of the charged debate over climate science and its implications for society. There are lessons here for journalists, too. Science is a trajectory toward understanding, not a set of truths. Sometimes that can be inconvenient, whether writing a headline or advocating for a climate bill.
But somehow society has to learn how to be comfortable with this aspect of the scientific enterprise, while not fuzzing out because things aren’t crystal clear. As Stephen Schneider, a veteran climatologist at Stanford, recently mused, the question is, “Can democracy survive complexity?”
It’s clear that Dr. Emanuel’s admonition about the need for a lot more work applies beyond the realm of science, as well. | <urn:uuid:d352eaa8-3e4e-470b-94d8-6763ba062883> | 2.765625 | 955 | Personal Blog | Science & Tech. | 45.035213 |
El Niño: ATSR monthly average Sea Surface Temperature over the
Keywords: ATSR, ESA/ESRIN, SST, El Niño.
SST difference between July 1995 and July 1997
El Niño Effect
ATSR monthly average Sea Surface Temperature at the spatial resolution of 30 arcminutes.
The Pacific Ocean is usually warmer and higher on the western side than along
the South American coast, due to the Trade Winds blowing constantly from east
to west. These winds transport the warm surface water from east to west, where
it piles up. In the east, the warm surface water is replaced by cold,
nutritious water coming from deeper layers: this vertical movement is called
upwelling.
During an El Niño event the Trade Winds relax, become very weak, and may even
reverse. This causes the warm surface water to flow back eastwards and stops
the upwelling on the eastern side. As a result, the sea surface temperature
rises along the South American coast and drops along the Asian and Indonesian
coasts.
El Niño is very strong this year. The difference image between July 1995 and
July 1997 highlights the El Niño phenomenon. In July 1995 the water along the
South American coast is much colder than in July 1997 (blue in the image).
Some studies show an interconnection between El Niño and abnormal climatic
events all over the world. The 1997 El Niño event may be responsible for
drought and associated fires in Indonesia and for the Pauline cyclone in
Mexico.
Image description
The monthly composite images have been made using the ATSR spatially averaged
Sea Surface Temperature product. One single product consists of about one ATSR
orbit. About 340 orbits were used to produce a monthly composite. The spatial
resolution is 30 arcminutes; the temperature precision is better than 0.3
degrees Celsius. The data have been processed at RAL. The July 1997 image is
derived from ATSR-2 data; the July 1995 image is derived from ATSR-1 data. The
difference image is simply SST (July 1995) minus SST (July 1997). The
temperature scale is given in degrees Celsius.
Animation: SST from ATSR-1, August 1991 to July 1995
Monthly SST animation from August 1991 to July 1995, ATSR-1 (5.2 Mbyte). Data
processed at RAL.
Other Sites Related to El Niño
Study presented at ERS Symposium (Florence, 1997):
Use of ATSR data and in situ observations to study ocean
The reinterpret_cast keyword is used to simply cast one type bitwise to another. Any pointer or integral type can be cast to any other with reinterpret_cast, which makes misuse easy: for instance, one might unsafely cast an integer pointer to a string pointer. It should be used to cast between incompatible pointer types.
TYPE result = reinterpret_cast<TYPE>(object);
reinterpret_cast<>() is used for all non-portable casting operations. This makes it simpler to find these non-portable casts when porting an application from one OS to another.
reinterpret_cast<T>() will change the type of an expression without altering its underlying bit pattern. This is useful for casting a pointer of a particular type into a
void* and subsequently back to the original type.
int a = 0xffe38024;
int* b = reinterpret_cast<int*>(a);
Last modified on 26 September 2012, at 22:47 | <urn:uuid:45fdffa5-e08a-4699-b02c-d1bcc509593d> | 3.734375 | 204 | Documentation | Software Dev. | 41.945625 |
This map of the ancient sky shows the minute variations in the microwave background discovered by the team led by Lawrence Berkeley Laboratory astrophysicist George Smoot. As seen in the map, vast regions of space have minute variations in temperature. Over billions of years, gravity magnified these small differences into the clusters of galaxies we observe today. Displayed horizontally across the middle of the map is the Milky Way galaxy.
The image, a 360-degree map of the whole sky, shows the relic radiation from the Big Bang. The map was derived from one year of data taken by the Differential Microwave Radiometers onboard NASA's Cosmic Background Explorer satellite. Using Galactic coordinates, the map shows the plane of the Milky Way galaxy horizontally and the center of our galaxy at its center.
The colors represent temperature variations with red indicating regions that are a hundredth of a percent warmer and blue indicating regions that are a hundredth of a percent cooler than the average temperature of 2.7 degrees above absolute zero. Away from the plane of the galaxy, many of the features shown are noise.
According to the "inflationary Big Bang" theory on the birth of the universe, as the universe began to expand in the instant after the primeval explosion, its energy density was nearly uniform in all directions save for very small amplitude variations. Gravity working over billions of years magnified these variations, causing galaxies to form and to cluster in the sky. These galaxy clustering patterns, predicted by theory, have been seen for several years, but this study provides the first evidence for the corresponding fluctuations in the background radiation in the sky. Thus, a 15-billion-year-old fossil of the conditions of the universe has been detected and measured for the first time.
Computer analyses show that the pattern of the fluctuations agrees with the predictions from the inflationary Big Bang scenario. The amplitudes of the observed fluctuations are consistent with theories that explain the birth and growth of galaxies using large amounts of an enigmatic material called "dark matter." According to these theories, most of the universe is composed of material that we not only know very little about, but that has never been seen directly. | <urn:uuid:247e3c44-61c3-43c2-8eff-44db30106a27> | 4.21875 | 436 | Knowledge Article | Science & Tech. | 31.591631 |
File:Sulfur Hexafluoride Molecule VdW.png
From Global Warming Art
This is a space filling model showing the chemical structure of sulfur hexafluoride. The size of each atom is determined by its Van der Waals radius and the relative positions faithfully reproduce the structure of the molecule. Colors follow traditional conventions used in popular molecular visualization software (e.g. Jmol).
Sulfur hexafluoride is the most potent known greenhouse gas, with a per-molecule global warming potential at 100 years of 22,000 times that of carbon dioxide. This property is a consequence of the large number of vibrational modes accessible to the molecule due to the cage-like structure in which the sulfur atom is suspended.
Current version: 05:49, 30 January 2007, 400×399 pixels (75 KB), uploaded by Robert A. Rohde.
Jamais Cascio has made it official… “there’s a considerable amount of methane (CH4) coming from the East Siberian Arctic Shelf, where it had been trapped under the permafrost. There’s as much coming out from one small section of the Arctic ocean as from all the rest of the oceans combined. This is officially Not Good.”
This blog was a bit more dramatic: “Ever wondered what makes for a Climate Progress nightmare. Wonder no longer.”
And, Climate Progress commentary reflected the deep concern among scientists that Natalia Shakhova’s announcement stimulated. “The methane CLATHRATES were first observed to be melting and releasing methane into the atmosphere in 2008, and this was not expected for at least another couple of decades, if at all this century. The climate models could be seriously underestimating the potential positive feedbacks.”
Editor’s note: Shakhova is the researcher at UAF’s International Arctic Research Center who announced the research findings.
Andrew C. Revkin covered the story, and even connected it with the prior BBC story, and yet in his next post quoted David Archer, “Methane sells newspapers, but it’s not the big story, nor does it look to be a game changer to the big story, which is CO2.” Which made me recall David Robert’s Syllogism of Doom.
The Syllogism of Doom goes:
1. If we (that is, humanity) increase our use of coal, the atmosphere will likely tip over into irreversible, catastrophic warming.
2. We are going to increase our use of coal.
3. The atmosphere will likely tip over into irreversible, catastrophic warming.
With melting permafrost comes increased GHG emissions, atmospheric degradation leading to irreversible catastrophes from global warming. This blog relayed a warning about such positive feedback in August 2007. The melting permafrost is a tipping point. “If we see runaway methane from underneath the Siberian permafrost,” portends Jamais Cascio, “we could see temperatures increasing far faster than even the most pessimistic CO2-driven scenarios — perhaps as much as 8-10° C, very much into the global catastrophe realm.”
Other AG Posts on topic of Melting Permafrost | <urn:uuid:7edd05e9-5fff-46d4-8a48-e176769f92de> | 2.8125 | 496 | Personal Blog | Science & Tech. | 44.917476 |
Stationary Orbit Over the Moon
Date: 1993 - 1999
At what altitude off the surface of the moon would a satellite be in
a stationary orbit over the moon?
This is a neat question, but one which does not have a simple answer.
For an isolated planet we can work out the radius of a circular orbit with a
certain period. To be stationary, the period must be 24 hours for the Earth,
while for the Moon the period must be 28 days. The longer the period, the
farther the orbit is from the planet. For an orbit that far from the Moon, our
satellite would encounter the gravity of the Earth, which would alter the
orbit and make a stationary orbit unlikely. Because of the Earth's presence
such an orbit is not possible. If it were, you could take the radius of the
geostationary orbit for the Earth, multiply it by the one-third power of the
ratio of the Moon's mass to the Earth's mass, and multiply by the two-thirds
power of the number 28. That should give the value without the presence of the
Earth. Let me know if this answer is confusing.
Samuel P Bowen
Update: June 2012 | <urn:uuid:31cae12e-e477-42ab-9b81-fbfba96f749a> | 3.421875 | 256 | Q&A Forum | Science & Tech. | 61.771944 |
These measurements enable us to monitor changes to the environment and patterns of land use and can influence environmental policy. Within two years of satellite monitoring showing a hole in the ozone layer, for example, Europe and 24 non-European countries had signed up to The Montreal Protocol on Substances that Deplete the Ozone Layer.
This protocol protects the ozone layer by controlling the production and consumption of harmful chemicals. More than 160 countries have now formally approved the protocol.
Co-ordinating space missions
The UK recognises the importance of Earth Observation satellites and is involved in the following activities and strategies:
BNSC funds GIFTSS (Government Information From The Space Sector). GIFTSS develops partnerships between UK Government departments and agencies. The aim is to investigate and develop the use of satellite-based data to address challenges such as intelligent transport, long-term land-use monitoring and humanitarian aid distribution.
GMES (Global Monitoring for Environment and Security) is a European Union led initiative in partnership with the European Space Agency (ESA). GMES will build a co-ordinated system for Earth observation and monitoring and has a particular emphasis on climate change. With GMES, Europe's politicians will have access to independent environmental data to form and support their policy decisions.
CEOS (Committee on Earth Observation Satellites) provides an international forum in which to discuss international Earth Observation issues. The CEOS ensures that countries work together to get the most from international civil space missions that observe and study planet Earth.
CEOS consists of 23 members (most of which are space agencies) and 21 associates (associated national and international organisations). It is recognised as the major international forum for the co-ordination of Earth observation satellite programs and users of satellite data worldwide.
Envisat is the world's largest and most complex environmental satellite. Launched in 2002, it is the size of an articulated lorry and takes many different measurements of the Earth's land, oceans, atmosphere and ice caps at the same time. This is so we can discover how one factor affects the other.
Envisat has already exceeded its original lifetime and remains in good health. The mission is now extended until 2010.
Data from the Envisat mission will build on information obtained from ERS (European Remote Sensing Satellite) 1 and 2. The ERS satellites have been watching over the Earth for more than 16 years, providing scientists with plenty of data about our planet. Already there is enough satellite information to show long-term trends in sea level rise and reduced ice cover in the Arctic.
Examples in the rest of this section show how scientists use satellites to help tackle climate change, pollution and improve land monitoring. | <urn:uuid:d3361c6e-b52a-4ef8-9862-68287ef46e60> | 3.640625 | 546 | Knowledge Article | Science & Tech. | 25.418137 |
Subtractive colour mixing involves the absorption and selective transmission or reflection of light. It occurs when colorants (such as pigments or dyes) are mixed or when several coloured filters are inserted into a single beam of white light. For example, if a projector is fitted with a deep red filter, the filter will transmit red light and absorb other colours. If the projector is fitted...
history of motion pictures
Photographic colour can be produced in motion pictures by using either an additive process or a subtractive one. The first systems to be developed and used were all additive ones, such as Charles Urban’s Kinemacolor ( c. 1906) and Gaumont’s Chronochrome ( c. 1912). They achieved varying degrees of popularity, but none was entirely successful, largely because all additive systems involve...
In subtractive synthesis yellow, magenta, and cyan filters or dye layers subtract varying proportions of the primary colours from white light. The yellow filter absorbs the blue component of white light and so controls the amount of blue present in a white-light beam that has passed through the filter. Similarly, the magenta filter controls the amount of green light left, and the cyan controls...
If you were a small insect you wouldn't want to meet the Northern Silver-stiletto fly (Spiriverpa lunulata) while out strolling on the river bank.
The white eel-like larva of the fly lives just under the soil surface. It detects the vibrations of something walking on the surface; stealthily it wriggles to beneath its unaware prey, before piercing them from below with its sharp mouthparts, injecting poison that leads to instant paralysis. Finally it deftly drags its prey out of sight beneath the soil. It all happens so quickly it's a case of now you see it, now you don’t. The ultimate sci-fi horror monster, fortunately it's only a couple of centimetres long.
Watch where you tread: the larva of the Stiletto fly lies in wait for prey beneath the soil
In Britain there are 14 species of stiletto fly. Some have silver males but mostly they are brown in colour. Most species are very habitat specific, especially where loose sand is present as on sand dunes and beside rivers. Only one species is very widespread, with the others mainly nationally scarce, rare or endangered. Hence such flies can be important in site evaluation and decisions over site management.
The Northern Silver-stiletto fly lives on Exposed Riverine Sediment - pebbles and gravels found at the river's edge, a habitat of great importance for its invertebrate life. This extraordinary species is protected on the Biodiversity Action Plan, and Buglife is currently working to conserve both the fly and its habitat. | <urn:uuid:7675eacf-97ba-4156-b6c7-9c21dd7e1fec> | 3.015625 | 332 | Knowledge Article | Science & Tech. | 44.084808 |
Scientific research has come under fire recently in the brouhaha over e-mails among climate scientists at the Climatic Research Unit at the University of East Anglia in England.
But regardless of where any of us might stand on climate change, it's evident that much of our environmental understanding today has come from what may have begun as small, lesser-known studies.
A few interesting studies in the news this week in Ohio.
Phosphates and fish studies
The Ohio Lake Erie Commission announced two new studies through its Lake Erie Protection Fund totaling about $25,000.
The commission on Wednesday approved a $14,998 grant to the University of Toledo's Department of Civil Engineering for testing a microsensor to detect phosphates in Lake Erie.
Phosphates are often cited for the surprising surge of various algae in the lake, after environmental experts thought that the pollution controls of the 1970s had eliminated the problem.
The commission also gave $9,900 to the Ohio Sea Grant College Program and Ohio State University to "build complex fish habitat structures" at seven marinas along Lake Erie's south shore. The devices are expected to attract more fish and "increase angling success for Ohioans," according to the commission.
The commission is a state agency staffed by the Ohio Department of Natural Resources.
OSU glacier study
Researchers at Ohio State are starting to learn more about how water beneath glaciers contributes to disturbing ice loss in Greenland -- gaining knowledge they say "has implications for ice loss elsewhere in the world, including Antarctica, and could ultimately lead to better estimates of future sea level rise due to climate change."
Previously, most researchers believed that meltwater from a glacier simply lubricated the ice, causing glaciers to "speed" (relatively speaking, of course) across bedrock.
Now, studies in Greenland and Alaska by OSU researchers indicate that the process is far more complex -- and that meltwater under the glacier may not be the key factor in glacial melting.
"We've come to realize that sub-glacial meltwater is not responsible for the big accelerations that we've seen for the last 10 years," Ian Howat, an OSU assistant professor of earth sciences, said in an e-mail. "Changes in the glacial fronts, where the ice meets the ocean, are the real key."
Howat's team found that the "moulins" or network of water passageways in a glacier, form "an ever-changing plumbing system that regulates where water collects between the ice and bedrock at different times of the year."
In short, that means that changes in water pressure can actually slow down or speed up the glacier. But further research showed that the speed of the glacier nearer the coast probably had less to do with the moulins and more to do with how the ocean waters worked on lubricating the path of the glacier.
Howat spoke at a news conference Wednesday at the American Geophysical Union meeting in San Francisco. | <urn:uuid:10e54898-d1e1-4460-85d5-65978280edd4> | 3.15625 | 610 | Content Listing | Science & Tech. | 34.30874 |
Scientific Assessment of Ozone Depletion: 1994
The 1994 WMO/UNEP assessment, Scientific Assessment of Ozone Depletion: 1994, contains the understanding of ozone depletion and reflects the thinking of 295 international scientific experts who contributed to its preparation and review. Co-chairs of the 1994 assessment were Dr. Daniel L. Albritton of CSD (formerly the Aeronomy Laboratory), Dr. Robert T. Watson of the White House Office of Science and Technology Policy, and Dr. Piet J. Aucamp of the Department of National Health in South Africa. Other members of the Aeronomy Laboratory made substantial contributions to the report, serving as lead authors, co-authors, contributors, reviewers, coordinating editor, and editorial support staff.
This Assessment includes three major sections of an Executive Summary, Preface, 13 detailed chapters in five parts, and "Common Questions About Ozone" (a special section):
- Part 1. Observed Changes in Ozone and Source Gases
- Chapter 1. Total and Vertical-Column Ozone (Lead Author: Neil R.P. Harris)
- Chapter 2. Source Gases (Lead Author: Eugenio Sanhueza)
- Part 2. Atmospheric Processes Responsible for the Observed Changes in Ozone
- Chapter 3. Polar Ozone (Lead Author: David W. Fahey)
- Chapter 4. Tropical and Midlatitude Stratosphere (Lead Author: Roderic L. Jones)
- Chapter 5. Tropospheric Ozone (Lead Authors: Andreas Volz-Thomas and Brian A. Ridley)
- Part 3. Model Simulations of Global Ozone
- Chapter 6. Stratospheric Ozone (Lead Author: Malcolm K.W. Ko)
- Chapter 7. Tropospheric Ozone (Lead Author: Frode Stordal)
- Part 4. Consequences of Ozone Change
- Chapter 8. Radiative Forcing (Lead Author: Keith P. Shine)
- Chapter 9. Surface Ultraviolet Radiation (Lead Author: Richard McKenzie)
- Part 5. Scientific Information for Future Decisions
- Chapter 10. Methyl Bromide (Lead Author: Stuart A. Penkett)
- Chapter 11. Subsonic and Supersonic Aircraft Emissions (Lead Authors: Andreas Wahner and Marvin A. Geller)
- Chapter 12. Substitutes for the Long-Lived Halocarbons and Their Degradation Products (Lead Author: R.A. Cox)
- Chapter 13. Ozone Depletion Potentials, Global Warming Potentials, and Future Chlorine/Bromine Loading (Lead Authors: Susan Solomon and Donald Wuebbles)
- "Common Questions About Ozone" (Coordinators: Susan Solomon and F. Sherwood Rowland)
Text of the Executive Summary
The Executive Summary gives a synopsis of major scientific findings of the 13 chapters of the full assessment. This portion includes:
- Recent Major Scientific Findings and Observations
- Supporting Scientific Evidence and Related Issues
- Ozone Changes in the Tropics and Midlatitudes and Their Interpretation
- Polar Ozone Depletion
- Coupling Between Polar Regions and Midlatitudes
- Tropospheric Ozone
- Trends in Source Gases Relating to Ozone Changes
- Consequences of Ozone Changes
- Related Phenomena and Issues
- Implications for Policy Formulation
"Common Questions About Ozone"
The international scientific community included this new section in their 1994 assessment. In it, they answer several of the general questions that are most commonly asked by students, the general public, and leaders in industry and government. After a general introduction about ozone, the questions addressed are:
- How can chlorofluorocarbons (CFCs) get to the stratosphere if they're heavier than air?
- What is the evidence that stratospheric ozone is destroyed by chlorine and bromine?
- Does most of the chlorine in the stratosphere come from human or natural sources?
- Can changes in the Sun's output be responsible for the observed changes in ozone?
- When did the Antarctic ozone hole first appear?
- Why is the ozone hole observed over Antarctica when CFCs are released mainly in the Northern Hemisphere?
- Is the depletion of the ozone layer leading to an increase in ground-level ultraviolet radiation?
- How severe is the ozone depletion now, and is it expected to get worse?
List of International Authors, Contributors, and Reviewers of the 1994 Assessment
Hundreds of scientists from around the world write and review the periodic WMO/UNEP "state-of-the-science" assessments of ozone depletion; hundreds of additional scientists author the studies that are referenced within them. As a result, the WMO/UNEP assessments are truly "global" documents, reflecting the thinking of the international scientific community.
Nearly 300 international scientists from the developed and developing world contributed to the preparation and review of the latest WMO/UNEP assessment, Scientific Assessment of Ozone Depletion: 1994. Listed are the names of those individuals and the supporting organizations and staff. | <urn:uuid:4027671a-ea8a-47a4-b515-c0216c8fbe6f> | 2.828125 | 1,074 | Knowledge Article | Science & Tech. | 32.214733 |
This ASP.NET tutorial explains how to create cookie in ASP.NET.
A cookie, also known as an HTTP cookie, web cookie, or browser cookie, is used for an origin website to send state information to a user’s browser and for the browser to return the state information to the origin site. The state information can be used for authentication, identification of a user session, user’s preferences, shopping cart contents, or anything else that can be accomplished through storing text data.
You can add cookies to the Cookies collection:
HttpCookie myCookie = new HttpCookie("visit");
myCookie.Value = DateTime.Now.ToString();
myCookie.Expires = DateTime.Now.AddDays(1);
Response.Cookies.Add(myCookie);
© Supernova Cosmology Project, Lawrence Berkeley National Laboratory.
The two primary lines of evidence for dark energy are the accelerated expansion of the universe measured by supernova surveys and the discrepancy between the geometry and amount of matter in the universe measured by the CMB. The plot above, with the dark-energy density Ω_Λ on the vertical axis and the matter density Ω_M on the horizontal axis, compares the results of these two measurements. The broad orange line shows values of Ω_M and Ω_Λ that are consistent with the geometry and amount of matter allowed by the CMB measurement. The blue ellipses show values consistent with the supernova measurement, with the darker shades of blue corresponding to more likely values. The green line shows the result of another set of measurements of galaxy clusters. All three measurements are consistent—the orange, blue, and green parts of the graph meet in a single point. The gray ellipses show the values of Ω_M and Ω_Λ most consistent with all of the measurements, which we take as a sign that the picture we are developing for a universe filled with dark energy is a reliable one. (Unit: 11)
When the first stars blinked on
December 5, 2012: The very first stars may have turned on when the universe was 750 million years old.
GRAIL reveals a battered lunar history
December 5, 2012: Twin spacecraft create a highly detailed gravity map of the moon, finding an interior pulverized by early impacts.
Paintballs may deflect an incoming asteroid
October 26, 2012: With 20 years’ notice, paint pellets could cause an asteroid to veer off course.
Explained: Near-miss asteroids
June 29, 2012: What to do in the event of an asteroid streaking toward Earth? Activate the asteroid ‘fire drill.’
Lincoln Laboratory helps celebrate the official unveiling of the Space Surveillance Telescope
January 20, 2012: The laboratory developed enabling technologies for the telescope.
April 29, 2011: From deep space to deep sea, two-day symposium examined MIT’s impacts and innovations.
3 Questions: Sara Seager on discovering a trove of new planets
February 3, 2011: NASA’s Kepler orbiting telescope has found hundreds of new possible planets, including 54 in the so-called 'habitable zone.'
Building a list of Earth candidates
December 14, 2010: MIT researchers increase their odds of detecting an Earthlike planet by working on a combination of satellite missions.
Explained: the Doppler effect
August 3, 2010: The same phenomenon behind changes in the pitch of a moving ambulance’s siren is helping astronomers locate and study distant planets.
3 Questions: Richard Binzel on astronomers’ powerful new tool
July 13, 2010: Pan-STARRS, a telescope designed to reveal the ‘unexpected surprises’ in our solar system, including possible threats to Earth, just became fully operational.
Wood on the seafloor- an oasis for deep-sea life
Trees do not grow in the deep sea, nevertheless sunken pieces of wood can develop into oases for deep-sea life - at least temporarily until the wood is fully degraded. A team of Max Planck researchers from Germany now showed how sunken wood can develop into an attractive habitats for a variety of microorganisms and invertebrates. By using underwater robot technology, they confirmed their hypothesis that animals from hot and cold seeps would be attracted to the wood due to the activity of bacteria, which produce hydrogen sulfide during wood degradation.
Many of the animals thriving at hydrothermal vents and cold seeps require special forms of energy such as methane and hydrogen sulfide emerging from the ocean floor. They carry bacterial symbionts in their body, which convert the energy from these compounds into food. The vents and seeps are often separated by hundreds of kilometers of deep-sea desert, with no connection between them.
For a long time it was an unsolved mystery how animals can disperse between those rare oases of energy in the deep sea. One hypothesis was that sunken whale carcasses, large dead algae, and also sunken woods could serve as food source and temporary habitat for deep-sea animals, but only if bacteria were able to produce methane and sulfur compounds from it.
Colonization of wood in the deep sea. (© Bienhold et al., PLoS ONE 8(1): e53590).
To tackle this question, the team deposited wood logs on the Eastern Mediterranean seafloor at depths of 1700 meters and returned after one year to study the fauna, bacteria, and chemical microgradients.
“We were surprised how many animals had populated the wood already after one year. The main colonizers were wood-boring bivalves of the genus Xylophaga, also named ‘shipworms’ after their shallow-water counterparts. The wood-boring Xylophaga essentially constitute the vanguard and prepare the habitat for other followers,” Bienhold said. “But they also need assistance from bacteria, namely to make use of the cellulose from the wood, which is difficult to digest.”
The team of researchers observed that the wood-boring bivalves had cut large parts of the wood into smaller chips, which were further degraded by many other organisms. This activity led to the consumption of oxygen, enabling the production of hydrogen sulfide by sulfate-reducing microorganisms. And indeed, the researchers also found a mussel, which is typically only found at cold seeps or similar environments where it uses sulfur compounds as an energy source. “It is amazing to see how deep-sea bacteria can transform foreign substances such as wood to provide energy for cold-seep mussels on their journey through the deep ocean”, said Antje Boetius, chief scientist of the expedition. Furthermore, the researchers discovered unknown species of deep-sea worms, which have been described by taxonomic experts in Germany and the USA. Thus, sunken woods do not only promote the dispersal of rare deep-sea animals, but also form hotspots of biodiversity at the deep seafloor.
This study was part of the German-French project DIWOOD, which is supported by the Max Planck society and the CNRS. Further support came from the EU Projects HERMES (6 FP) and HERMIONE (7 FP).
One of the wood experiments after one year at the seafloor. Wood-boring bivalves of the genus Xylophaga had populated the wood. | <urn:uuid:985e65d1-d5b1-4680-bff1-15399efe9682> | 4 | 749 | Knowledge Article | Science & Tech. | 38.80715 |
A SINGLE distant star, 600 light years away, plays a greater role in ionising hydrogen gas in our region of the Galaxy than do all other stars combined, say astronomers in the US. The star's feat is all the more remarkable because some 3 million stars lie closer to our "local" cloud of hydrogen than it does.
The star is known as Adhara. Astronomers estimate that the temperature at its surface is some 21 000 K - nearly four times hotter than the Sun. Because of its high temperature, the star emits very energetic radiation in the extreme ultraviolet region of the spectrum. The radiation strips electrons from hydrogen atoms, converting them into positively charged ions. In 1992, NASA's Extreme Ultraviolet Explorer satellite discovered that viewed from near the Earth, Adhara is the brightest source of extreme UV radiation apart from the Sun.
Now John Vallerga and Barry Welsh of ...
The information is detailed in a study by evolutionary scientist John Hawks. Since the Stone Age, there have been many changes in the human body, most of which are unsurprising. But the revelation that our brains are actually smaller now than they were then could strike some as shocking.
Hawks says over the last 20,000 years, the average male brain has shrunk from 1,500 cubic centimeters to 1,350 cubic centimeters. That’s a chunk about the size of a tennis ball. Hawks also says the same has happened to the female brain, and isn’t limited to any region in the world. There’s little incentive to think we’re getting dumber, though. Brain size, generally speaking, isn’t an accurate indicator of intelligence.
So the big question is: why? There are several theories on the subject. While some do argue the shrinking of gray matter means we’re getting dumber, others contend that our brains are shrinking because they’re becoming more efficient. If that doesn’t make sense, think of it like new iterations of certain kinds of technology that grow smaller with each generation while also becoming more advanced and more powerful.
The most obvious example is probably the ever-thinning succession of smartphones. Still, many aren’t aware of this finding, or consider it an insignificant detail. You can read more on the report at the source.
Search our database of handpicked sites
Looking for a great physics site? We've tracked down the very best and checked them for accuracy. Just fill out the fields below and we'll do the rest.
You searched for
We found 16 results on physics.org and 212 results in our database of sites
(211 are Websites, 0 are Videos, and 1 is an Experiment)
Search results on physics.org
Search results from our links database
Basic information and equations relating to potential and kinetic energy, with details of an experimental procedure for demonstrating energy transfer. Simulation of a falling ball with adjustable ...
The Physics Classroom gives you all the basics on kinetic energy along with the essential equations and a test to check your understanding.
A description of kinetic energy for rotating objects.
The student can study the effect on distance and time of altering the kinetic energy and mass of a sphere. Results could be plotted onto a spreadsheet.
Perfectly elastic collisions are those in which no kinetic energy is lost in the collision. Macroscopic collisions are generally inelastic and do not conserve kinetic energy, though of course the ...
A beginner's definition of kinetic energy.
A series of articles on Kinetic Energy, Potential Energy, Relativity, Work, Fossil Fuels, Nuclear, Hydroelectric, Biomass, Wind and Geothermal power. The site uses text and colour illustrations and ...
How fleas and catapults and other similar devices and animals use elastic energy storage devices to convert slow muscle energy into faster kinetic energy.
Concepts of work, kinetic energy and potential energy are discussed in this simple introduction.
The site gives details of the different forms of energy ( Light, Heat, Sound, Chemical, Kinetic, Potential, Nuclear) and their sources.
Showing 1 - 10 of 212 | <urn:uuid:070e4797-7dec-47c7-98b2-8c0e3871d43d> | 3.140625 | 374 | Content Listing | Science & Tech. | 41.123745 |
Note to readers: this is my best attempt to describe some issues in accelerator operations; I welcome comments from people more expert than me if you think I don’t have things quite right.
The operators of the Large Hadron Collider seek to collide as many protons as possible. The experimenters who study these collisions seek to observe as many proton collisions as possible. Everyone can agree on the goal of maximizing the number of collisions that can be used to make discoveries. But the accelerator physicists and particle physicists might part ways over just how those collisions are best delivered.
Let’s remember that the proton beams that circulate in the LHC are not a continuous current like you might imagine running through your electric appliances. Instead, the beam is bunched — about 10^11 protons are gathered in a formation that is about as long as a sewing needle, and each proton beam is made up of 1380 such bunches. As the bunches travel around the LHC ring, they are separated by 50 nanoseconds in time. This bunching is necessary for the operation of the experiments — it ensures that collisions occur only at certain spots along the ring (where the detectors are) and the experiments can know exactly when the collisions are occurring and synchronize the response of the detector to that time. Note that because there are so many protons in each beam, there can be multiple collisions each time two bunches pass by each other. At the end of the last LHC run, there were typically 30 collisions that occurred per bunch crossing.
There are several ways to maximize the number of collisions that occur. Increasing the number of protons in each bunch will certainly increase the number of collisions. Or, one could imagine increasing the total number of bunches per beam, and thus the number of bunch crossings. The collision rate increases like the square of the number of particles per bunch, but only linearly with the number of bunches. On the face of it, then, it would make more sense to add more particles to each bunch rather than to increase the number of bunches if one wanted to maximize the total number of collisions.
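As a rough illustration of that scaling, here is a sketch in arbitrary units (these are not official LHC parameters) comparing two ways of distributing the same total number of protons:

```ruby
# Relative collision rate for a filling scheme, using the scaling above:
# rate per crossing ~ N^2 (N = protons per bunch), total ~ N^2 * bunches.
def totals(protons_per_bunch, n_bunches)
  per_crossing = protons_per_bunch**2
  [per_crossing, per_crossing * n_bunches]
end

n = 1.0 # arbitrary units of bunch intensity
pileup_50, rate_50 = totals(2 * n, 1380) # 50 ns style: fewer, fuller bunches
pileup_25, rate_25 = totals(n, 2760)     # 25 ns style: twice the bunches

puts rate_50 / rate_25     # => 2.0  fuller bunches win on total collisions
puts pileup_50 / pileup_25 # => 4.0  ...at the cost of 4x the pileup
```

At fixed total beam current, packing protons into half as many bunches doubles the total collision rate but quadruples the pileup per crossing, which is exactly the tension described in the next paragraph.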
But the issue is slightly more subtle than that. The more collisions that occur per beam crossing, the harder the collisions are to interpret. With 30 collisions happening at the same time, one must contend with hundreds, if not thousands, of charged particle tracks that cross each other and are harder to reconstruct, which means more computing time to process the event. With more stuff going on each event, the most important parts of the event are increasingly obscured by everything else that is going on, degrading the energy and momentum resolution that are needed to help identify the decay products of particles like the Higgs boson. So from the perspective of an experimenter at the LHC, one wants to maximize the number of collisions while having as few collisions per bunch crossing as possible, to keep the interpretation of each bunch crossing simple. This argument favors increasing the number of bunches, even if this might ultimately mean having fewer total collisions than could be obtained by increasing the number of protons per bunch. It’s not very useful to record collisions that you can’t interpret because the events are just too busy.
This is the dilemma that the LHC and the experiments will face as we get ready to run in 2015. In the current jargon, the question is whether to run with 50 ns between collisions, as we did in 2010-12, or 25 ns between collisions. For the reasons given above, the experiments generally prefer to run with a 25 ns spacing. At peak collision rates, the number of collisions per crossing is expected to be about 25, a number that we know we can handle on the basis of previous experience. In contrast, the LHC operators generally prefer the 50 ns spacing, for a variety of operational reasons, including being able to focus the beams better. The total number of collisions delivered per year could be about twice as large with 50 ns spacing…but with many more collisions per bunch crossing, perhaps by a factor of three. This is possibly more than the experiments could handle, and it could well be necessary to limit the peak beam intensities, and thus the total number of collisions, to allow the experiments to operate.
So how will the LHC operate in 2015 — at 25 ns or 50 ns spacing? One factor in this is that the machine has only done test runs at 25 ns spacing, to understand what issues might be faced. The LHC operators will re-commission the machine with 50 ns spacing, with the intention of switching to 25 ns spacing later, as soon as a couple of months later if all goes well. But then imagine that 50 ns running works very well from the outset. Would the collision pileup issues motivate the LHC to change the bunch spacing? Or would the machine operators just like to keep going with a machine that is operating well?
In ancient history I worked on the CDF experiment at the Tevatron, which was preparing to start running again in 2001 after some major reconfigurations. It was anticipated that the Tevatron was going to start out with a 396 ns bunch spacing and then eventually switch over to 132 ns, just like we’re imagining for the LHC in 2015. We designed all of the experiment’s electronics to be able to function in either mode. But in the end, 132 ns running never happened; increases in collision rates were achieved by increasing beam currents. This was less of an issue at the Tevatron, as the overall collision rate was much smaller, but the detectors still ended up operating with numbers of collisions per bunch crossing much larger than they were designed for.
In light of that, I find myself asking — will the LHC ever operate in 25 ns mode? What do you think? If anyone would like to make an informal wager (as much as is permitted by law) on the matter, let me know. We’ll pay out at the start of the next long shutdown at the end of 2017. | <urn:uuid:077a4e88-3cd3-44ce-a5e1-6e39f90212d4> | 3.3125 | 1,234 | Personal Blog | Science & Tech. | 49.570694 |
Let's write a function to compute factorials. The
mathematical definition of
n factorial is:
n! = 1 (when n==0) = n * (n-1)! (otherwise)
In ruby, this can be written as:
def fact(n)
  if n == 0
    1
  else
    n * fact(n-1)
  end
end
You may notice the repeated occurrence of end; ruby has been called "Algol-like" because of this. (Actually, the syntax of ruby more closely mimics that of a language named Eiffel.) You may also notice the lack of a return statement. It is unneeded because a ruby function returns the last thing that was evaluated in it. Use of a return statement here is permissible but unnecessary.
Let's try out our factorial function. Adding one line of code gives us a working program:
# Save this as fact.rb
def fact(n)
  if n == 0
    1
  else
    n * fact(n-1)
  end
end
puts fact(ARGV[0].to_i)
ARGV is an array which contains the command line arguments, and to_i converts a character string to an integer.
% ruby fact.rb 5
120
Does it work with an argument of 40? It would make your calculator overflow...
It does work. Indeed, ruby can deal with any integer which is allowed by your machine's memory. So 400! can be calculated:
We cannot check the correctness at a glance, but it must be right. :-)
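As an aside not found in the original guide, the same bignum behavior can be seen with an iterative factorial, which avoids deep recursion for large arguments:

```ruby
# Iterative factorial -- same values as fact, but without deep recursion.
# Ruby integers grow as needed, so even fact_iter(400) will not overflow.
def fact_iter(n)
  (1..n).reduce(1, :*)
end

puts fact_iter(5)             # => 120
puts fact_iter(40).to_s.size  # digit count of 40! (no calculator overflow)
```

The reduce call multiplies the range 1..n together, starting from 1, so fact_iter(0) correctly returns 1 as well.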
When you invoke ruby with no arguments, it reads commands from standard input and executes them after the end of input:
puts "hello world"
puts "good-bye world"
The ^D above means control-D, a conventional way to signal end-of-input in a Unix context. In DOS/Windows, try pressing F6 or ^Z instead.
Ruby also comes with a program called eval.rb that allows you to enter ruby code from the keyboard in an interactive loop, showing you the results as you go. It will be used extensively through the rest of this guide.

If you have an ANSI-compliant terminal (this is almost certainly true if you are running some flavor of UNIX; under old versions of DOS you need to have installed ANSI.COM; Windows XP, unfortunately, has now made this nearly impossible), you should use an enhanced version of eval.rb that adds visual indenting assistance, warning reports, and color highlighting. Otherwise, look in the sample subdirectory of the ruby distribution for the non-ANSI version that works on any terminal.
Here is a short eval.rb session:

ruby> puts "Hello, world."
Hello, world.
   nil

The hello world output is produced by puts. The next line, in this case nil, reports on whatever was last evaluated; ruby does not distinguish between statements and expressions, so evaluating a piece of code basically means the same thing as executing it. Here, nil indicates that puts does not return a meaningful value. Note that we can leave this interpreter loop by saying exit, though ^D still works too.

Throughout this guide, "ruby>" denotes the input prompt for our useful little program.
Establishing a key link between the solar cycle and global climate, new research led by the National Center for Atmospheric Research (NCAR) shows that maximum solar activity and its aftermath have impacts on Earth that resemble La Niña and El Niño events in the tropical Pacific Ocean.
By simulating 8,000 years of climate, a team led by scientists from the University of Wisconsin–Madison and NCAR has found a new explanation for the last major period of global warming, which occurred about 14,500 years ago.
In a breakthrough that will help scientists unlock mysteries of the Sun and its impacts on Earth, an international team of scientists led by NCAR has created the first-ever comprehensive computer model of sunspots.
Melting of the Greenland ice sheet may drive more water than previously thought toward the already threatened coastlines of New York, Boston, Halifax, and other cities in the northeastern United States and Canada.
The largest and most ambitious tornado study in history will begin next week, as dozens of scientists deploy radars and other ground-based instruments across the Great Plains to gain a better understanding of these often deadly weather events.
As part of its Walter Orr Roberts Distinguished Lecture series, UCAR will present a special lecture next week by NASA scientist James E. Hansen, one of the nation's most respected experts on climate change.
NCAR and its managing organization, UCAR, announced today the selection of an architectural design team for a supercomputing center dedicated to advancing scientists' understanding of climate, weather, and other Earth and atmospheric processes.
A team of scientists has successfully flown from the Arctic to the Antarctic this month, the first step in a three-year project to make the most extensive airborne measurements of carbon dioxide and other greenhouse gases to date.
Researchers are working toward predictions of climate change impacts in specific regions and even metropolitan areas. But are local and regional decision makers taking advantage of this science to begin to prepare for the impacts of global warming?
UCAR, working with an international team of health and weather organizations, is launching a project this month to provide long-term weather forecasts to medical officials in Africa to help reduce outbreaks of meningitis.
Schoolchildren, families, and citizen scientists around the world will gaze skyward after dark from October 20 to November 3, looking for specific constellations and then sharing their observations through the Internet. | <urn:uuid:0eb6016e-4e37-48da-8c57-6adc32689210> | 3.46875 | 484 | Content Listing | Science & Tech. | 24.729052 |
Beginner's Guide to ASP.NET MVC Framework – Part 1 of n
October 19, 2009. Posted by Abhijit Jana in ASP.NET, MVC Framework.
Tags: .NET, ASP.Net, C# 4.0, codeproject, MVC
This article gives an overview of the ASP.NET MVC Framework, the MVC control flow, etc.

This article is Part 1 of the ASP.NET MVC Framework series. In it I describe a very basic overview of the MVC Framework and the control flow of MVC. I will write a few more articles in this series, which will help beginners move ahead. This article is only about what MVC is.
The Model-View-Controller (MVC) design pattern is an architectural design pattern that separates the components of an application. This separation keeps the application flexible, extensible, and easy to maintain. The ASP.NET MVC Framework is a standard web development framework that separates a web application into these distinct components.

The ASP.NET MVC Framework has three main components:
Model: The model manages the behavior and data of the application domain, responds to requests for information about its state from the view, and responds to instructions to change state (usually from the controller).
View: The view represents the presentation layer of the web application. It manages the display of information based on the model data requested by the controller.

Controller: The controller handles user interaction with the web application. A user request comes through the controller to the model, which retrieves or manipulates the records, and the data is then rendered to the UI by the view.
Below diagram showing the overview of these three components
Request Flow for ASP.NET MVC Framework
- Request comes from User to Controller
- Controller processes request and forms a data Model
- Model is passed to View
- View transforms Model into appropriate output format
- Response is rendered to Browser
The picture above shows the actual flow of the ASP.NET MVC Framework. A request comes from the client to the controller, the controller decides which model to use, and based on that the data is rendered to the browser.
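To make the round trip concrete, here is a minimal framework-free sketch in plain Ruby (the names are illustrative only; real ASP.NET MVC controllers, models, and views are far richer than this):

```ruby
# Minimal sketch of the MVC round trip described above.

# Model: owns the data and answers queries about it.
def product_model(id)
  catalog = { 1 => { name: "Widget", price: 9.99 } }
  catalog[id]
end

# View: transforms a model into output for the browser.
def product_view(model)
  return "<h1>Not found</h1>" if model.nil?
  "<h1>#{model[:name]}</h1><p>$#{model[:price]}</p>"
end

# Controller: handles the request, builds the model, hands it to the view.
def product_controller(request)
  product_view(product_model(request[:id]))
end

puts product_controller(id: 1) # => <h1>Widget</h1><p>$9.99</p>
```

Note that the view never touches the data store and the model never formats HTML; the controller is the only piece that knows about both, which is the separation the pattern is after.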
Now, just have a closer look into to the MVC Flow,
In the next article I will give the explanation of each of every step. You just need to remember these are the basic flow of an MVC Application.
ASP.NET Web Forms and MVC
MVC is not a replacement for ASP.NET Web Forms based development; it sits on top of ASP.NET development. The MVC Framework simply divides the overall application architecture into three components.
For More information on the basic of MVC Framework please read :
ASP.NET MVC Overview (C#)
This is the startup article for MVC beginners; many more are to come, in which I will explain the details of each step with a sample application. Finally there will be a complete ASP.NET project on the MVC Framework. I hope this series will be helpful for all.
Solar energy is derived from the light and radiant heat from the sun, an abundant source of renewable energy that can be captured and employed in many ways.
Since ancient times, solar energy has been harnessed by humans using a range of technologies. Current solar technologies include photovoltaic systems (PV), concentrating solar power systems (CSP), passive solar heating and daylighting, solar hot water, solar process heat, and solar space heating and cooling.
Solar power can be used in both large-scale industrial and commercial applications and in smaller systems for the residential sector. Businesses and industry can diversify their energy sources, improve efficiency, and save money by choosing solar technologies for heating and cooling, industrial processes, electricity, and water heating. Public utilities and power plants are also capturing the sun's abundant energy to deliver cleaner energy to their customers. For example, concentrating solar power plants (CSP), first developed in the 1980s, use mirrors to reflect and focus sunlight onto receivers that collect and convert solar energy into heat and electricity. This allows large-scale power plants to save consumers the hassle and capital investment in household solar technology systems.
Two Types: Passive and Active
Solar technologies are characterized as either passive or active depending on how they capture, convert and distribute the sun’s heat and light. Active solar techniques use photovoltaic panels, pumps, and fans to convert sunlight into useable energy, while passive solar techniques use materials with favorable thermal properties, employ design techniques that naturally circulate air, and orient building placement to maximize (or minimize) the sun’s heat and light.
Solar Energy Potential
Solar energy has massive potential to power our modern society. The total solar energy absorbed by Earth's atmosphere, oceans and land mass is approximately 3,850,000 exajoules (EJ) per year. To put that in perspective, the energy from sunlight striking the Earth for just 40 minutes is equivalent to global energy consumption for an entire year. The amount of solar energy reaching the Earth’s surface is so vast that in one year it is about twice as much energy as will ever be obtained from all of the nonrenewable resources of coal, oil, natural gas, and uranium for nuclear plants, combined.
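A quick back-of-the-envelope check of the "40 minutes" claim (a sketch: the 3,850,000 EJ/yr figure is from the text, while the ~500 EJ/yr for world energy consumption is an assumed round number, not from the text):

```ruby
# Order-of-magnitude check of the claim above.
absorbed_ej_per_year = 3_850_000.0      # from the text
minutes_per_year     = 365.25 * 24 * 60

energy_in_40_min = absorbed_ej_per_year * 40 / minutes_per_year
puts energy_in_40_min.round             # ~293 EJ

world_consumption_ej = 500.0            # assumption, late-2000s round figure
puts energy_in_40_min / world_consumption_ej # same order as a year's use
```

Depending on which consumption figure is assumed, the result lands within roughly a factor of two of the claim, i.e. the right order of magnitude.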
The popular magazine Scientific American reported in 2007 that “a massive switch from coal, oil, natural gas and nuclear power plants to solar power plants could supply 69 percent of the U.S.’s electricity and 35 percent of its total energy by 2050” with a sizeable, but achievable investment.
Development, Deployment and Economics
Leading industry analysts and solar company executives predict that solar power will reach equal cost competitiveness with fossil fuels for electricity generation in North America and elsewhere within the next few years, possibly as soon as 2010.
The likelihood that tougher regulations on global warming emissions from the energy sector will be implemented by individual governments and under the successor to the global Kyoto Protocol (expected at the end of 2009) will make solar and other renewables more attractive to investors, as prices of electricity produced by polluting fossil fuel sources will rise in response.
Photovoltaic (PV) panels currently represent 90 percent of sales in the solar market. While solar PV panels are easy to install, building them is currently expensive because of tight supplies of silicon, their costliest component. Most industry analysts expect the constraint on silicon supplies to end within two years, driving down the cost of making PV panels.
However, emerging technologies such as thin-film solar panels made from cadmium telluride, which require much less silicon, are expected to challenge the market domination of PV panels. Nobody knows for sure which type of solar panel technology will dominate the market in the coming years, making the solar power industry an exciting sector to track.
Germany, Japan and Spain currently dominate the solar power market, although the fourth place United States may overtake Germany as the world’s biggest solar market in the next few years if investments and regulations converge to propel it to the top. China is also emerging as a major solar power interest, with growing demand for clean energy supplies and a competitive cadre of Chinese solar panel manufacturers vying for increased market share of the fast-growing demand for solar panels worldwide. France, Italy and South Korea are also experiencing rapid growth in solar power due to various incentive programs and local market conditions.
Deepen your understanding of renewable energy technologies by browsing our embedded Wikipedia. It’s fully interactive and searchable. | <urn:uuid:90eff130-7109-4c90-970b-92f0d5e635ba> | 3.96875 | 913 | Knowledge Article | Science & Tech. | 21.294768 |
Major Section: PROGRAMMING
NOTE: It is probably more efficient to use the Common Lisp function concatenate in place of string-append. That is, (string-append str1 str2) is equal to (concatenate 'string str1 str2).

At any rate, string-append takes two arguments, which are both strings (if the guard is to be met), and returns a string obtained by concatenating together the characters in the first string followed by those in the second. See concatenate.
Is that a cloud hovering over the Sun? Yes, but it is quite different than a cloud hovering over the Earth. The long light feature on the left of the above color-inverted image is actually a solar filament and is composed of mostly charged hydrogen gas held aloft by the Sun's looping magnetic field. By contrast, clouds over the Earth are usually much cooler, composed mostly of tiny water droplets, and are held aloft by upward air motions because they weigh so little. The above filament was captured on the Sun about two weeks ago near the active solar region AR 1535, visible on the right with dark sunspots. Filaments typically last for a few days to a week, but a long filament like this might hover over the Sun's surface for a month or more. Some filaments trigger large flares if they suddenly collapse back onto the Sun.
Credit & Copyright: | <urn:uuid:7f2642a3-1f49-4a5b-a641-b2792cf46b75> | 3.125 | 195 | Knowledge Article | Science & Tech. | 51.417 |
(Science: chemistry) Having the same percentage composition; said of two or more different substances which contain the same ingredients in the same proportions by weight, often used with with. Specif., Polymeric; i. E, having the same elements united in the same proportion by weight, but with different molecular weights; as, acetylene and benzine are isomeric (polymeric) with each other in this sense. See polymeric. Metameric; i. E, having the same elements united in the same proportions by weight, and with the same molecular weight, but which a different structure or arrangement of the ultimate parts; as, ethyl alcohol and methyl ether are isomeric (metameric) with each other in this sense. See metameric.
Results from our forum
... soil. Kerosene-based hydrocarbon fuels are complex mixtures of up to 260+ aliphatic and aromatic hydrocarbon compounds (C6 -C17+; possibly 2000+ isomeric forms), including varying concentrations of potential toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, ...
And the heat goes on...
One of the great difficulties in testing theories of global warming is doing conclusive experiments. For example, having but one world, it is not possible to manipulate greenhouse gases in the atmosphere while also observing the climate of an unmanipulated experimental control. Thus, there are inherent limitations on knowledge obtainable by experimentation, and computer models have been developed to help us understand the interactions of the various phenomena that determine global climate.
Problems should be expected now that these models have become tools, not so much for understanding, but for making predictions. Trying to explain away why the models say we should have experienced much more warming than has actually materialized, Wallace C. Broecker (Winter 1996) and other scientists contend that man-made warming has been offset by a natural cooling or by a cooling effect of aerosol pollution.
The validity of this explanation is difficult to assess because the theories of global cooling are as untestable as the theories of global warming. The antennae of healthy skeptics should shoot up when hearing untestable excuses, which tend to sound like new bogus theories invented to explain why the old bogus theory doesn't seem to work.
Nature remains unmoved by the popular theories of the day. Allan Mazur (Winter 1996) misunderstands science when he suggests that science should be done by consensus, for scientific truth, to the extent that we can find it, ought not be determined by vote. The findings of one scientist backed by sound experimental results are more reliable than the contrary opinions of 999 others who lack such results.
Neither should scientific truth be sought by appeal to authority. In a scientific debate, whenever you hear statements like, "Broecker says so and so," or "Michaels says such and such," this should be a signal to discount them.
Scientific understanding, to be sound, must instead be advanced by appeal to experimental and observational evidence. This, and not intelligent scientists or logical thought, is the unique characteristic that distinguishes science from other ways of knowing. Sound scientific argument should be advanced by statements that begin, "Smith's experiment with controls showed...." With so little solid experimental evidence behind global warming and the political and economic stakes so high, major deceptions are occurring (see Frederick Seitz, "A Major Deception on 'Global Warming,'" Wall Street Journal, June 12, 1996, p. A16).
Asimov's popular guides to science recount dozens of instances where the vast majority of scientists and the leading authority figures have been wrong. With the global warming debate sustained largely by science done by vote and appeal to authority, future Asimovs could have a lot to write about.
Walter L. Warnick, Ph.D.
U.S. Department of Energy
The views expressed are those of the author and do not necessarily reflect the views of the Department of Energy.
Professor Mazur replies:
Walter Warnick misunderstands science when he suggests that scientific truth is based exclusively on "sound experimental results." In the first place, major fields of science--astronomy, geology, evolutionary biology--make little use of experiments. In the second place, when two "sound experimental results" contradict each other, as they often do, how does the scientific community decide which one is "really sound" except through consensus formation? In the third place, every "sound observation" permits multiple theoretical interpretations, so how is the "One True Interpretation" reached if not by debate among scientists until they reach consensus?
Maxwell School of Citizenship and Public Affairs | <urn:uuid:c2ba725b-f448-4340-bdd9-729e6a50b334> | 2.984375 | 723 | Comment Section | Science & Tech. | 36.214044 |
The Galápagos sea lion (Scientific name: Zalophus wollebaeki) is one of 16 species of marine mammals in the family of Eared seals which include sea lions and fur seals. Together with the families of true seals and Walruses, Eared seals form the group of marine mammals known as pinnipeds.
Eared seals differ from the true seals in having small external earflaps and hind flippers that can be turned to face forwards. Together with strong front flippers, this gives them extra mobility on land and an adult fur seal can move extremely fast across the beach if it has to. They also use their front flippers for swimming, whereas true seals use their hind flippers.
The Galápagos sea lion is found in the Galápagos Archipelago where it is one of the most conspicuous and numerous marine mammals.
Galápagos sea lion. Source: Kelley Kane/Wikipedia
Kingdom: Animalia (Animals)
Adult males are much larger than females and are brown in colour while females are a lighter tan. Adult males are also distinguished by their raised foreheads, and the hair on the crest may be a lighter colour. Juvenile Galápagos sea lions are chestnut brown in colour and measure around 75 cm at birth.
Each colony is dominated by one bull that aggressively defends his territory from invading bachelor males. This territorial activity occurs throughout the year and males hold their territories for only 27 days or so before being displaced by another male. Within this territory the bull has dominance over a group of between 5 and 25 cows.
The breeding season is not dependent on migration patterns, as seen in other sea lion species, since the Galápagos sea lion remains around the Galápagos Archipelago all year round. In fact the breeding season is thought to vary from year to year in its onset and duration, though it usually lasts 16 to 40 weeks between June and December. Births therefore also take place throughout the year, with females coming ashore to give birth to a single pup.
Within two to three weeks of giving birth, females go into oestrus again and actively solicit a male. Gestation lasts around 11 months, though it probably includes a three-month period in which implantation of the fertilized egg is delayed while the female nurses her young.
The Galápagos sea lion is essentially a coastal animal and is rarely found more than 16 kilometres out to sea. Individuals are active during the day and hunt in relatively shallow waters (up to about 200 metres deep) where they feed on fish, octopus, and crustaceans. Sea lions and seals are also capable of making extraordinarily deep dives of up to 200 metres for 20 minutes or more, then rapidly surfacing with no ill effects. When ashore, the Galápagos sea lions rest on sandy beaches and rocky areas in colonies of about 30 individuals. They are extremely gregarious and pack together on the shore even when space is available.
Like other sea lions this species relies on cooperation within the group. Often, a single adult female will watch over a group of young pups while other mothers are fishing. They are careful to keep the young pups out of deep water where they may be eaten by sharks. The bull will also watch out for his "family" by warning them of the presence of a nearby shark with barks, and even occasionally chasing away the intruder.
This sea lion is found on islands in the Galápagos Archipelago and off the coast of Ecuador where a population has been introduced.
On land this sea lion prefers sandy or rocky flat beaches where there is vegetation for shade, tide pools to keep cool and good access to calm waters. It also spends much of its time in the cool, fish-rich waters that surround the Galápagos Islands.
The Galápagos sea lion is currently classified as Endangered on the IUCN Red List.
The Galápagos sea lion occurs in one of the most biologically diverse areas of the world. The Galápagos Islands have long been studied and protected and were influential in the formulation of Darwin's theory of evolution by natural selection. Most recently, in March 1998, a 133,000 km2 area was designated as the Galápagos Marine Reserve, making it one of the world's largest protected areas. Detailed conservation and research programmes have been developed, which focus on studying the islands' ecology, the effects of environmental fluctuations on species and the effects of humans on wildlife. These measures have to some extent protected this sea lion, especially from hunting.
The Charles Darwin Research Centre has implemented an ecological monitoring project of the Galápagos sea lion to determine the state and abundance of the sea lions. This project also studies the ongoing threats to this mammal and has developed simple rescue methods for injured or caught sea lions. Elsewhere in the world, sea lions are suffering dramatic population declines for unknown reasons, and so conservation measures like these, which both monitor and protect the sea lion, are invaluable to the future of the Galápagos sea lion.
The Galápagos sea lion faces various threats. In the 19th century, sea lions worldwide were hunted for their meat, skin and oil. The hunting of some sea lions, including the Galápagos species, has now been banned and populations have recovered. Galápagos sea lions are still vulnerable to human activity as their inquisitive and social nature means they are more likely to approach areas inhabited by humans. This brings them into contact with fishing nets, hooks and human waste, all of which can be fatal. There are also problems resulting from the increase in numbers of deep-water tuna and billfish fisheries, as these sea lions become victims of by-catch. Research indicates that the majority of these incidents (67%) involve juveniles, probably due to their more curious and playful nature.
These marine mammals are also negatively affected by the phenomenon El Niño. During El Niño in 1997 and 1998, Galápagos sea lion populations of the main colonies declined by 48 percent. Many sea lions migrated and, amongst those that stayed in the Galápagos Archipelago, there was high mortality due to starvation. A viral disease, known as sea lion pox, is another threat to this marine mammal. The illness is spread by mosquitoes and causes paralysis, which in turn prevents the sea lion from feeding and may result in death.
- Zalophus wollebaeki Sivertsen, 1953, Encyclopedia of Life (accessed April 8, 2009)
- Zalophus, Seal Conservation Society (accessed April 8, 2009)
- The Pinnipeds: Seals, Sea Lions, and Walruses, Marianne Riedman, University of California Press, 1991 ISBN: 0520064984
- Encyclopedia of Marine Mammals, Bernd Wursig, Academic Press, 2002 ISBN: 0125513402
- Marine Mammal Research: Conservation beyond Crisis, edited by John E. Reynolds III, William F. Perrin, Randall R. Reeves, Suzanne Montgomery and Timothy J. Ragen, Johns Hopkins University Press, 2005 ISBN: 0801882559
- IUCN Red List (January, 2008)
- Heath, C.B. (2002) California, Galapagos, and Japanese sea lions. In: Perrin, W.F., Wursig, B. and Thewissen, J.G.M. Eds. Encyclopedia of Marine Mammals. Academic Press, San Diego.
- Walker's Mammals of the World, Ronald M. Nowak, Johns Hopkins University Press, 1999 ISBN: 0801857899
- MarineBio.org (accessed April 8, 2009)
- Galápagos and Californian sea lions are separate species: Genetic analysis of the genus Zalophus and its implications for conservation management, Wolf, J.B., Tautz, D. and Trillmich, F., Frontiers in Zoology, 2007.
You'll need these materials:
- a glass or beaker with straight sides
- a ruler (12 inch)
- one foot of clear plastic tubing
- a stick of chewing gum
Begin by standing the ruler in the glass and holding it against the side. Tape the ruler to the inside of the glass. Make sure that the numbers on the ruler are visible.
Stand the plastic tube against the ruler in the glass. Make sure that the tube is not touching the bottom of the glass by positioning the tube up a half inch on the ruler. Secure the tube by taping it to the ruler.
Chew the stick of gum so that it is soft. While you're chewing, fill the glass about half way with water. Use the plastic tube like a straw and draw some water half way up the tube. Use your tongue to trap the water in the tube. Quickly move the gum onto the top of the tube to seal it.
Make a mark on the ruler to record where the water level is in the tube. Each time you notice a change in the water level, make another mark. You'll notice, over time, that the water level rises and falls. Pay attention to the change in weather as the water level changes.
The water in the tube rises and falls because of air pressure exerted on the water in the glass. As the air presses down (increased atmospheric pressure) on the water in the glass, more water is pushed into the tube, causing the water level to rise. When the air pressure decreases on the water in the glass, some of the water will move down out of the tube, causing the water level to fall. The change in barometric pressure will help you to forecast the weather. Decreasing air pressure often indicates the approach of a low pressure area, which often brings clouds and precipitation. Increasing air pressure often means that a high pressure area
is approaching, bringing with it clearing or fair weather.
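The explanation above is the hydrostatic relation ΔP = ρ g Δh in words. A rough sketch of the numbers involved (this ignores the air trapped above the gum seal, which compresses and damps the movement in the real device):

```python
# Water-level change in the tube for a given change in air pressure,
# from the hydrostatic relation: delta_P = rho * g * delta_h.
RHO_WATER = 1000.0   # density of water, kg/m^3
G = 9.81             # gravitational acceleration, m/s^2

def level_change_mm(delta_p_pa):
    """Water-level change in millimetres for a pressure change in pascals."""
    return delta_p_pa / (RHO_WATER * G) * 1000.0

# A typical weather swing of 10 hPa (1000 Pa) moves the level about 10 cm:
print(round(level_change_mm(1000.0), 1))   # -> 101.9
```

So a modest change in barometric pressure produces an easily visible movement against the ruler, which is why this simple device works.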
Function Inverse Example 1
- So we have f of x is equal to negative x plus 4, and f of x
- is graphed right here on our coordinate plane.
- Let's try to figure out what the inverse of f is.
- And to figure out the inverse, what I like to do is I set y, I
- set the variable y, equal to f of x, or we could write that y
- is equal to negative x plus 4.
- Right now, we've solved for y in terms of x.
- To solve for the inverse, we do the opposite.
- We solve for x in terms of y.
- So let's subtract 4 from both sides.
- You get y minus 4 is equal to negative x.
- And then to solve for x, we can multiply both sides of this
- equation times negative 1.
- And so you get negative y plus 4 is equal to x.
- Or just because we're always used to writing the dependent
- variable on the left-hand side, we could rewrite this as x is
- equal to negative y plus 4.
- Or another way to write it is we could say that f
- inverse of y is equal to negative y plus 4.
- So this is the inverse function right here, and we've written
- it as a function of y, but we can just rename the y as x
- so it's a function of x.
- So let's do that.
- So if we just rename this y as x, we get f inverse of x is
- equal to the negative x plus 4.
- These two functions are identical.
- Here, we just used y as the independent variable, or
- as the input variable.
- Here we just use x, but they are identical functions.
- Now, just out of interest, let's graph the inverse
- function and see how it might relate to this
- one right over here.
- So if you look at it, it actually looks
- fairly identical.
- It's a negative x plus 4.
- It's the exact same function.
- So let's see, if we have-- the y-intercept is 4, it's going
- to be the exact same thing.
- The function is its own inverse.
- So if we were to graph it, we would put it right
- on top of this.
- And so, there's a couple of ways to think about it.
- In the first inverse function video, I talked about how a
- function and their inverse-- they are the reflection
- over the line y equals x.
- So where's the line y equals x here?
- Well, line y equals x looks like this.
- And negative x plus 4 is actually perpendicular to y is
- equal to x, so when you reflect it, you're just kind of
- flipping it over, but it's going to be the same line.
- It is its own reflection.
- Now, let's make sure that that actually makes sense.
- When we're dealing with the standard function right
- there, if you input a 2, it gets mapped to a 2.
- If you input a 4, it gets mapped to 0.
- What happens if you go the other way?
- If you input a 2, well, 2 gets mapped to 2 either
- way, so that makes sense.
- For the regular function, 4 gets mapped to 0.
- For the inverse function, 0 gets mapped to 4.
- So it actually makes complete sense.
- Let's think about it another way.
- For the regular function-- let me write it explicitly down.
- This might be obvious to you, but just in case it's
- not, it might be helpful.
- Let's pick f of 5.
- f of 5 is equal to negative 1.
- Or we could say, the function f maps us from 5 to negative 1.
- Now, what does f inverse do?
- What's f inverse of negative 1?
- f inverse of negative 1 is 5.
- Or we could say that f maps us from negative 1 to 5.
- So once again, if you think about kind of the sets, they're
- our domains and our ranges.
- So let's say that this is the domain of f, this
- is the range of f.
- f will take us from 5 to negative 1.
- That's what the function f does.
- And we see that f inverse takes us back from negative 1 to 5.
- f inverse takes us back from negative 1 to 5, just
- like it's supposed to do.
- Let's do one more of these.
- So here I have g of x is equal to negative 2x minus 1.
- So just like the last problem, I like to set y equal to this.
- So we say y is equal to g of x, which is equal to
- negative 2x minus 1.
- Now we just solve for x.
- y plus 1 is equal to negative 2x.
- Just added 1 to both sides.
- Now we can divide both sides of this equation by negative 2,
- and so you get negative y over 2 minus 1/2 is equal to x, or
- we could write x is equal to negative y over 2 minus 1/2, or
- we could write f inverse as a function of y is equal to
- negative y over 2 minus 1/2, or we can just rename y as x.
- And we could say that f inverse of-- oh, let me be careful here.
- That shouldn't be an f.
- The original function was g , so let me be clear.
- That is g inverse of y is equal to negative y over 2 minus 1/2
- because we started with a g of x, not an f of x.
- Make sure we get our notation right.
- Or we could just rename the y and say g inverse of x is equal
- to negative x over 2 minus 1/2.
- Now, let's graph it.
- Its y-intercept is negative 1/2.
- It's right over there.
- And it has a slope of negative 1/2.
- Let's see, if we start at negative 1/2, if we move over
- to 1 in the positive direction, it will go down half.
- If we move over 1 again, it will go down half again.
- If we move back-- so it'll go like that.
- So the line, I'll try my best to draw it, will
- look something like that.
- It'll just keep going, so it'll look something like that, and
- it'll keep going in both directions.
- And now let's see if this really is a reflection over y
- equals x. y equals x looks like that, and you can see
- they are a reflection.
- If you reflect this guy, if you reflect this blue line, it
- becomes this orange line.
- But the general idea, you literally just-- a function
- is originally expressed, is solved for y in terms of x.
- You just do some algebra.
- Solve for x in terms of y, and that's essentially your inverse
- function as a function of y, but then you can rename
- it as a function of x.
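The two worked examples from the video can be checked numerically. In the sketch below (function names are ours), f is its own inverse and g_inv undoes g:

```python
def f(x):
    # First example: f(x) = -x + 4, which turns out to be its own inverse.
    return -x + 4

def g(x):
    # Second example: g(x) = -2x - 1.
    return -2 * x - 1

def g_inv(x):
    # Inverse found by solving y = -2x - 1 for x.
    return -x / 2 - 1 / 2

print(f(f(5)))       # applying f twice returns the input -> 5
print(g_inv(g(7)))   # g_inv undoes g -> 7.0
print(f(5), f(-1))   # the mapping 5 -> -1 and back again -> -1 5
```

This is exactly the domain-and-range picture from the video: the function carries an input to an output, and the inverse carries that output back to the original input.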
The Thing Called "Chiron"
Like a snowball rolling downhill, some things just take off and get bigger as they go. That's sort of the case with the story about the object called Chiron, which I first came across when looking up some information about the astronomer Charles Kowal. (See the December, 1998, edition of NightTimes.) I mentioned that Kowal had discovered an object, Chiron, that might be either a comet or an asteroid. I was curious and decided to follow up with some library research. Here's what I found.
On October 18, 1977, Kowal was using the 48-inch Schmidt telescope at Mount Palomar when he came across a previously unknown object. Assuming that it was an asteroid, it was given the preliminary name "1977 UB". But it was a slowly moving object with an orbit that extended far beyond the orbits of known asteroids -- out between Saturn and Uranus. Further research showed that this same object had been recorded on old photographic plates going as far back as 1895. Its orbital period was determined to be 49 years, but unlike a comet, its orbit appeared to be very stable.
Brian Marsden of the Smithsonian Astrophysical Observatory said: "We really don't know what it is. It could be an asteroid or a comet or a small planet. We think it may be a lump of cometary material, but for now we really have to call it a minor planet until we can prove observationally that it is or isn't a comet."
Then, at a conference in 1978, Kowal suggested that the object he had found might really be a comet. It had been previously predicted that thousands of comets were orbiting the sun at about the same distance as 1977 UB. Kowal proposed that the object be called "Chiron", since Chiron is a mythical centaur, half man and half horse. Since then, such questionable objects are said to belong to the so-called Centaur family.
Jumping ahead to 1988, the object 1977 UB was finally observed with a coma and tail. It appeared as though Chiron really is a comet, but one with a diameter much larger than that of any other known comet. The official designation is now that of a comet -- "95/P Chiron". In February 1996, Chiron passed its perihelion (the nearest it comes to the sun) for the first time since it was discovered. There's a web site with info about Chiron. Published in the February 1999 issue of the NightTimes.
News From the Field
DNA Better Than Eyes When Counting Endangered Species
March 7, 2011
Using genetic methods to count endangered eagles, a group of scientists showed that traditional counting methods can lead to significantly incorrect totals that they believe could adversely affect conservation efforts.
The National Science Foundation (NSF) is an independent federal agency that supports fundamental research and education across all fields of science and engineering. In fiscal year (FY) 2012, its budget was $7.0 billion. NSF funds reach all 50 states through grants to nearly 2,000 colleges, universities and other institutions. Each year, NSF receives about 50,000 competitive requests for funding, and makes about 11,500 new funding awards. NSF also awards about $593 million in professional and service contracts yearly.
Get News Updates by Email
Useful NSF Web Sites:
NSF Home Page: http://www.nsf.gov
NSF News: http://www.nsf.gov/news/
For the News Media: http://www.nsf.gov/news/newsroom.jsp
Science and Engineering Statistics: http://www.nsf.gov/statistics/
Awards Searches: http://www.nsf.gov/awardsearch/
One drawback to the use of a Python list for source files is that each file name must be enclosed in quotes (either single quotes or double quotes). This can get cumbersome and difficult to read when the list of file names is long. Fortunately, SCons and Python provide a number of ways to make sure that the SConstruct file stays easy to read.
To make long lists of file names easier to deal with, SCons provides a Split function that takes a quoted list of file names, with the names separated by spaces or other white-space characters, and turns it into a list of separate file names. Using the Split function turns the previous example into:
Program('program', Split('main.c file1.c file2.c'))
(If you're already familiar with Python, you'll have realized that this is similar to the string split() method in the Python standard library. Unlike split(), however, the SCons Split function does not require a string as input and will wrap up a single non-string object in a list, or return its argument untouched if it's already a list. This comes in handy as a way to make sure arbitrary values can be passed to SCons functions without having to check the type of the variable by hand.)
Putting the call to the Split function inside the Program call can also be a little unwieldy. A more readable alternative is to assign the output from the Split call to a variable name, and then use the variable when calling the Program function:
list = Split('main.c file1.c file2.c')
Program('program', list)
The Split function doesn't care how much white space separates the file names in the quoted string. This allows you to create lists of file names that span multiple lines, which often makes for easier editing:
list = Split("""main.c
               file1.c
               file2.c""")
Program('program', list)
(Note in this example that we used the Python "triple-quote" syntax, which allows a string to contain multiple lines. The three quotes can be either single or double quotes.)
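The behavior described above is easy to model in plain Python. The sketch below (our own helper, named split_like, not SCons's actual implementation) reproduces the documented semantics: split strings on any whitespace, pass lists through untouched, and wrap any other single object in a list:

```python
def split_like(arg):
    # Strings are split on runs of any whitespace, including newlines,
    # which is what makes the triple-quoted style above work.
    if isinstance(arg, str):
        return arg.split()
    # Lists are returned untouched.
    if isinstance(arg, list):
        return arg
    # Any other single object is wrapped in a list.
    return [arg]

print(split_like("""main.c
                    file1.c
                    file2.c"""))    # -> ['main.c', 'file1.c', 'file2.c']
print(split_like(['a.c', 'b.c']))   # -> ['a.c', 'b.c']
print(split_like(42))               # -> [42]
```

The pass-through behavior for lists and non-strings is what lets SCons functions accept either form of input without checking types by hand.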
Ho - Holmium
Atomic Number: 67 Period Number: 6 Group Number: None
Holmium is a silvery-white metal that belongs to the lanthanide series. The free element is rarely found in nature; most holmium occurs in minerals such as monazite and gadolinite. Only one stable isotope of holmium is found in nature: 165Ho.
Holmium has not been widely used so far. Because it has the highest magnetic strength of all the elements, it may find future applications in magnetic fields.
Physical and Chemical properties:
Atomic Weight: 164.93032
Melting Point: 1747 K
Boiling Point: 2973 K
Density: 8.80 g/cm3
Phase at Room Temperature: Solid
Ionization Energy: 6.022 eV
Oxidation State: +3
Quantum foam is the sea of subatomic disturbances in spacetime. Quantum mechanics states that at scales around the Planck length the uncertainty principle comes into play. Essentially, that means spacetime at such a scale cannot be exactly measured or determined, since a fluctuation can exist one moment and not exist the next, all without violating conservation laws.
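The Planck length mentioned above is set by three fundamental constants, ℓP = √(ħG/c³). A quick computation, using CODATA values for the constants:

```python
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G_CONST = 6.67430e-11    # Newtonian gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light in vacuum, m/s

# The scale at which quantum fluctuations of spacetime become significant.
planck_length = math.sqrt(HBAR * G_CONST / C**3)   # metres
planck_time = planck_length / C                    # seconds

print(f"{planck_length:.3e} m")   # -> 1.616e-35 m
print(f"{planck_time:.3e} s")     # -> 5.391e-44 s
```

At roughly 10^-35 metres, this is about 20 orders of magnitude smaller than a proton, which is why quantum foam is so far beyond direct observation.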
It is also called spacetime foam, based on the fact that Einstein stated in his theory of relativity that gravity and spacetime are one. This means that any disturbances at the quantum level would affect spacetime as a whole.
The concept was first proposed in the 60s, but it has not been put to much definite use outside of theory. However, some writers have used the concept in interesting ways. Michael Crichton, in his book Timeline, uses quantum foam wormholes as a way to time travel. It is not traveling to a specific time, however, but rather to a specific universe or dimension concurrently experiencing a time period in our universe's history.
While quantum foam is still a theory, scientists are starting to find interesting clues that may prove its existence without a doubt. One possible source of data is the radio signals of black holes and pulsars. Scientists have hypothesized that the radiation these bodies emit is actually a portion of the "cosmic noise" that would be produced by quantum foam. In the case of black holes, scientists think that the radiation emitted at the event horizon actually encodes the data of all objects or energy that fell into it. If this is true, it would seem that the Universe is a large hologram being projected from a cosmic master program. This master program would be the fluctuations of spacetime at the atomic level that we call quantum foam.
Another possible source of information would be the Large Hadron Collider. As the largest and most powerful particle collider yet made, it gives scientists the chance to look at even smaller subatomic particles close up. It may help physicists answer many of the remaining questions about quantum mechanics.
You can also find some great resources online. You can check out the Goddard spaceflight web page at NASA.gov. Here is a link to lectures on the subject from the University of Oregon web site.
You should also check out Astronomy Cast. Episode 44 talks about Einstein's General Theory of Relativity.
Complex interactions among hydrologic events initiated by people and the behaviors and characteristics of animal species (both native and introduced) lead to important scientific and management problems.
This endangered species prefers native trees in large, continuous areas of riparian habitat. Armed with this information, resource managers may identify and preserve areas favorable to this population.
Review of current research on stock assessment of the Pacific walrus in the Chukchi and Bering Seas and interactions between walruses and their environment with links to walrus taxonomy, distribution, behavior, and relation to man
Shooting them doesn't work; they just breed more. And they trample on the native plants. These animals were brought to the islands during the last 150 years, and we're trying to develop ways of managing their impact on the native life.
How will the increasing use of wind turbines affect populations of wild birds and bats? This shows which birds and bats we study, and the aspects of their ecology that may be affected by wind energy development.
Is a ride into space on the world's longest elevator in store for you?
Currently, the Cambridge team can make about 1 gram of the new carbon material per day, which can stretch to 18 miles in length. Alan Windle, professor of materials science at Cambridge, says that industrial-level production would be required to manufacture NASA's request for 144,000 miles of nanotube. Nevertheless, the web-like nanotube material is promising. . .
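A back-of-the-envelope check using the figures quoted above shows why Windle calls for industrial-level production:

```python
# Figures quoted in the article: one day's output (~1 gram) stretches
# about 18 miles, and NASA's request is for 144,000 miles of nanotube.
MILES_PER_DAY = 18
MILES_NEEDED = 144_000

days = MILES_NEEDED / MILES_PER_DAY
years = days / 365.25

print(days)              # -> 8000.0 days of production
print(round(years, 1))   # -> 21.9 years at the current rate
```

At the lab's current rate, a single production line would need over two decades of continuous output just to spin the tether material.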
"The biggest problem has always been finding a material that is strong enough and lightweight enough to stretch tens of thousands of miles into space," said EuroSpaceward's John Winter. "This isn't going to happen probably for the next decade at least, but in theory this is now possible. The advances in materials for the tether are very exciting."
Hm, the next decade... that would make it about 2018 or 2019, right? That time frame sounds familiar to me somehow. Oh, yes, that's it --
Previously we've written about the Liftport Group, who once projected an April 12, 2018 launch date for their space elevator. For unknown reasons they have pushed their target date back considerably -- it now stands at October 27, 2031. (How they can determine the precise day so far in advance is beyond me.) But maybe that original 2018 date was not so outlandish after all.
As we've suggested before, molecular manufacturing (MM) likely would make construction of a space elevator far easier and much less expensive than currently expected. Even if MM is not achieved until a later date, though, this new method of CNT development seems quite promising for an early gateway to space industrialization and commercialization.
"Next stop, Hilton Earthview Hotel."
The test package is meant for internal use by Python only. It is documented for the benefit of the core developers of Python. Any use of this package outside of Python’s standard library is discouraged as code mentioned here can change or be removed without notice between releases of Python.
The test package contains all regression tests for Python as well as the modules test.support and test.regrtest. test.support is used to enhance your tests while test.regrtest drives the testing suite.
Each module in the test package whose name starts with test_ is a testing suite for a specific module or feature. All new tests should be written using the unittest or doctest module. Some older tests are written using a “traditional” testing style that compares output printed to sys.stdout; this style of test is considered deprecated.
It is preferred that tests that use the unittest module follow a few guidelines. One is to name the test module by starting it with test_ and end it with the name of the module being tested. The test methods in the test module should start with test_ and end with a description of what the method is testing. This is needed so that the methods are recognized by the test driver as test methods. Also, no documentation string for the method should be included. A comment (such as # Tests function returns only True or False) should be used to provide documentation for test methods. This is done because documentation strings get printed out if they exist and thus what test is being run is not stated.
A basic boilerplate is often used:
import unittest
from test import support

class MyTestCase1(unittest.TestCase):

    # Only use setUp() and tearDown() if necessary

    def setUp(self):
        ... code to execute in preparation for tests ...

    def tearDown(self):
        ... code to execute to clean up after tests ...

    def test_feature_one(self):
        # Test feature one.
        ... testing code ...

    def test_feature_two(self):
        # Test feature two.
        ... testing code ...

    ... more test methods ...

class MyTestCase2(unittest.TestCase):
    ... same structure as MyTestCase1 ...

... more test classes ...

if __name__ == '__main__':
    unittest.main()
This code pattern allows the testing suite to be run by test.regrtest, on its own as a script that supports the unittest CLI, or via the python -m unittest CLI.
The goal for regression testing is to try to break code. This leads to a few guidelines to be followed:
The testing suite should exercise all classes, functions, and constants. This includes not just the external API that is to be presented to the outside world but also “private” code.
Whitebox testing (examining the code being tested when the tests are being written) is preferred. Blackbox testing (testing only the published user interface) is not complete enough to make sure all boundary and edge cases are tested.
Make sure all possible values are tested including invalid ones. This makes sure that not only all valid values are acceptable but also that improper values are handled correctly.
Exhaust as many code paths as possible. Test where branching occurs and thus tailor input to make sure as many different paths through the code are taken.
Add an explicit test for any bugs discovered for the tested code. This will make sure that the error does not crop up again if the code is changed in the future.
Make sure to clean up after your tests (such as close and remove all temporary files).
If a test is dependent on a specific condition of the operating system then verify the condition already exists before attempting the test.
Import as few modules as possible and do it as soon as possible. This minimizes external dependencies of tests and also minimizes possible anomalous behavior from side-effects of importing a module.
Try to maximize code reuse. On occasion, tests will vary by something as small as what type of input is used. Minimize code duplication by subclassing a basic test class with a class that specifies the input:
class TestFuncAcceptsSequencesMixin:

    func = mySuperWhammyFunction

    def test_func(self):
        self.func(self.arg)

class AcceptLists(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = [1, 2, 3]

class AcceptStrings(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = 'abc'

class AcceptTuples(TestFuncAcceptsSequencesMixin, unittest.TestCase):
    arg = (1, 2, 3)
When using this pattern, remember that all classes that inherit from unittest.TestCase are run as tests. The Mixin class in the example above does not have any data and so can’t be run by itself, thus it does not inherit from unittest.TestCase.
The test package can be run as a script to drive Python's regression test suite, thanks to the -m option: python -m test. Under the hood, it uses test.regrtest (the call python -m test.regrtest used in previous Python versions still works). Running the script by itself automatically starts running all regression tests in the test package. It does this by finding all modules in the package whose name starts with test_, importing them, and executing the function test_main() if present or loading the tests via unittest.TestLoader.loadTestsFromModule if test_main does not exist. The names of tests to execute may also be passed to the script. Specifying a single regression test (python -m test test_spam) will minimize output and only print whether the test passed or failed.
Running test directly allows what resources are available for tests to use to be set. You do this by using the -u command-line option. Specifying all as the value for the -u option enables all possible resources: python -m test -uall. If all but one resource is desired (a more common case), a comma-separated list of resources that are not desired may be listed after all. The command python -m test -uall,-audio,-largefile will run test with all resources except the audio and largefile resources. For a list of all resources and more command-line options, run python -m test -h.
Some other ways to execute the regression tests depend on what platform the tests are being executed on. On Unix, you can run make test at the top-level directory where Python was built. On Windows, executing rt.bat from your PCBuild directory will run all regression tests.
The test.support module provides support for Python’s regression test suite.
test.support is not a public module. It is documented here to help Python developers write tests. The API of this module is subject to change without backwards compatibility concerns between releases.
This module defines the following exceptions:
The test.support module defines the following constants:
True when verbose output is enabled. Should be checked when more detailed information is desired about a running test. verbose is set by test.regrtest.
True if the running interpreter is Jython.
Set to a name that is safe to use as the name of a temporary file. Any temporary file that is created should be closed and unlinked (removed).
The test.support module defines the following functions:
Remove the module named module_name from sys.modules and delete any byte-compiled files of the module.
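The behaviour described above can be sketched as follows. This is an illustration, not test.support's actual implementation, and forget_sketch is a name invented here:

```python
import importlib.util
import os
import sys

def forget_sketch(module_name):
    # Drop the module from the import cache and delete any cached
    # bytecode file, so the next import recompiles from source.
    sys.modules.pop(module_name, None)
    spec = importlib.util.find_spec(module_name)
    if spec is not None and spec.origin and spec.origin.endswith(".py"):
        cached = importlib.util.cache_from_source(spec.origin)
        if os.path.exists(cached):
            os.remove(cached)
```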
Return True if resource is enabled and available. The list of available resources is only set when test.regrtest is executing the tests.
Raise ResourceDenied if resource is not available. msg is the argument to ResourceDenied if it is raised. Always returns True if called by a function whose __name__ is '__main__'. Used when tests are executed by test.regrtest.
Return the path to the file named filename. If no match is found filename is returned. This does not equal a failure since it could be the path to the file.
Execute unittest.TestCase subclasses passed to the function. The function scans the classes for methods starting with the prefix test_ and executes the tests individually.
It is also legal to pass strings as parameters; these should be keys in sys.modules. Each associated module will be scanned by unittest.TestLoader.loadTestsFromModule(). This is usually seen in the following test_main() function:
def test_main():
    support.run_unittest(__name__)
This will run all tests defined in the named module.
Run doctest.testmod() on the given module. Return (failure_count, test_count).
A convenience wrapper for warnings.catch_warnings() that makes it easier to test that a warning was correctly raised. It is approximately equivalent to calling warnings.catch_warnings(record=True) with warnings.simplefilter() set to always and with the option to automatically validate the results that are recorded.
check_warnings accepts 2-tuples of the form ("message regexp", WarningCategory) as positional arguments. If one or more filters are provided, or if the optional keyword argument quiet is False, it checks to make sure the warnings are as expected: each specified filter must match at least one of the warnings raised by the enclosed code or the test fails, and if any warnings are raised that do not match any of the specified filters the test fails. To disable the first of these checks, set quiet to True.
If no arguments are specified, it defaults to:
check_warnings(("", Warning), quiet=True)
In this case all warnings are caught and no errors are raised.
On entry to the context manager, a WarningRecorder instance is returned. The underlying warnings list from catch_warnings() is available via the recorder object’s warnings attribute. As a convenience, the attributes of the object representing the most recent warning can also be accessed directly through the recorder object (see example below). If no warning has been raised, then any of the attributes that would otherwise be expected on an object representing a warning will return None.
The recorder object also has a reset() method, which clears the warnings list.
The context manager is designed to be used like this:
with check_warnings(("assertion is always true", SyntaxWarning),
                    ("", UserWarning)):
    exec('assert(False, "Hey!")')
    warnings.warn(UserWarning("Hide me!"))
In this case if either warning was not raised, or some other warning was raised, check_warnings() would raise an error.
When a test needs to look more deeply into the warnings, rather than just checking whether or not they occurred, code like this can be used:
with check_warnings(quiet=True) as w:
    warnings.warn("foo")
    assert str(w.args[0]) == "foo"
    warnings.warn("bar")
    assert str(w.args[0]) == "bar"
    assert str(w.warnings[0].args[0]) == "foo"
    assert str(w.warnings[1].args[0]) == "bar"
    w.reset()
    assert len(w.warnings) == 0
Here all warnings will be caught, and the test code tests the captured warnings directly.
Changed in version 3.2: New optional arguments filters and quiet.
captured_stdout() is a context manager that temporarily replaces sys.stdout with an io.StringIO object. Example of use:

with captured_stdout() as s:
    print("hello")
assert s.getvalue() == "hello\n"
A context manager that temporarily changes the current working directory (CWD).
An existing path may be provided as path, in which case this function makes no changes to the file system.
Otherwise, the new CWD is created in the current directory and it’s named name. If quiet is False and it’s not possible to create or change the CWD, an error is raised. If it’s True, only a warning is raised and the original CWD is used.
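The core of such a context manager can be sketched like this. This is a minimal illustration of the create/chdir/restore behaviour described above, not test.support's implementation (temp_cwd_sketch is a name invented here, and the error/quiet handling is omitted):

```python
import contextlib
import os
import tempfile

@contextlib.contextmanager
def temp_cwd_sketch(name="tempcwd"):
    # Create a scratch directory, chdir into it for the duration of
    # the with block, and always restore the original CWD on exit.
    saved = os.getcwd()
    path = tempfile.mkdtemp(prefix=name)
    os.chdir(path)
    try:
        yield path
    finally:
        os.chdir(saved)
```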
A context manager that temporarily sets the process umask.
Return True if the OS supports symbolic links, False otherwise.
A decorator for running tests that require support for symbolic links.
A context manager that disables Windows Error Reporting dialogs using SetErrorMode. On other platforms it’s a no-op.
A decorator to conditionally mark tests with unittest.expectedFailure(). Any use of this decorator should have an associated comment identifying the relevant tracker issue.
A decorator for running a function in a different locale, correctly resetting it after it has finished. catstr is the locale category as a string (for example "LC_ALL"). The locales passed will be tried sequentially, and the first valid locale will be used.
Create an invalid file descriptor by opening and closing a temporary file, and returning its descriptor.
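The open-then-close trick can be sketched as follows (make_bad_fd_sketch is an invented name; this is an illustration, not the test.support implementation):

```python
import os
import tempfile

def make_bad_fd_sketch():
    # Open a temporary file, remember its descriptor number, then close
    # and unlink it: the number no longer refers to an open file, so any
    # use of it raises OSError (EBADF).
    fd, path = tempfile.mkstemp()
    os.close(fd)
    os.unlink(path)
    return fd
```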
This function imports and returns the named module. Unlike a normal import, this function raises unittest.SkipTest if the module cannot be imported.
Module and package deprecation messages are suppressed during this import if deprecated is True.
New in version 3.1.
This function imports and returns a fresh copy of the named Python module by removing the named module from sys.modules before doing the import. Note that unlike reload(), the original module is not affected by this operation.
fresh is an iterable of additional module names that are also removed from the sys.modules cache before doing the import.
blocked is an iterable of module names that are replaced with 0 in the module cache during the import to ensure that attempts to import them raise ImportError.
The named module and any modules named in the fresh and blocked parameters are saved before starting the import and then reinserted into sys.modules when the fresh import is complete.
Module and package deprecation messages are suppressed during this import if deprecated is True.
This function will raise unittest.SkipTest if the named module cannot be imported.
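The save/pop/import/restore dance described above can be sketched in simplified form. This illustration handles only the basic case (import_fresh_sketch is an invented name; the real helper also processes the fresh and blocked lists and deprecation suppression):

```python
import importlib
import sys

def import_fresh_sketch(name):
    # Pop any cached copy, import a brand-new module object, then put
    # the original back so the rest of the process is unaffected.
    saved = sys.modules.pop(name, None)
    try:
        return importlib.import_module(name)
    finally:
        if saved is not None:
            sys.modules[name] = saved
        else:
            sys.modules.pop(name, None)
```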
# Get copies of the warnings module for testing without
# affecting the version being used by the rest of the test suite
# One copy uses the C implementation, the other is forced to use
# the pure Python fallback implementation
py_warnings = import_fresh_module('warnings', blocked=['_warnings'])
c_warnings = import_fresh_module('warnings', fresh=['_warnings'])
New in version 3.1.
Bind the socket to a free port and return the port number. Relies on ephemeral ports in order to ensure we are using an unbound port. This is important as many tests may be running simultaneously, especially in a buildbot environment. This method raises an exception if the sock.family is AF_INET and sock.type is SOCK_STREAM, and the socket has SO_REUSEADDR or SO_REUSEPORT set on it. Tests should never set these socket options for TCP/IP sockets. The only case for setting these options is testing multicasting via multiple UDP sockets.
Additionally, if the SO_EXCLUSIVEADDRUSE socket option is available (i.e. on Windows), it will be set on the socket. This will prevent anyone else from binding to our host/port for the duration of the test.
Returns an unused port that should be suitable for binding. This is achieved by creating a temporary socket with the same family and type as the sock parameter (default is AF_INET, SOCK_STREAM), and binding it to the specified host address (defaults to 0.0.0.0) with the port set to 0, eliciting an unused ephemeral port from the OS. The temporary socket is then closed and deleted, and the ephemeral port is returned.
Either this method or bind_port() should be used for any tests where a server socket needs to be bound to a particular port for the duration of the test. Which one to use depends on whether the calling code is creating a python socket, or if an unused port needs to be provided in a constructor or passed to an external program (i.e. the -accept argument to openssl’s s_server mode). Always prefer bind_port() over find_unused_port() where possible. Using a hard coded port is discouraged since it can make multiple instances of the test impossible to run simultaneously, which is a problem for buildbots.
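The bind-to-port-0 technique behind find_unused_port() can be sketched as follows (find_unused_port_sketch is an invented name; the real helper also mirrors the family and type of a passed-in socket and sets the options discussed above):

```python
import socket

def find_unused_port_sketch(host="127.0.0.1"):
    # Bind a throwaway socket to port 0 so the OS assigns a free
    # ephemeral port, then close the socket and return that number.
    # Note the usual caveat: the port could be taken by another
    # process between this call and the caller's own bind.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind((host, 0))
        return s.getsockname()[1]
```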
The test.support module defines the following classes:
Instances are a context manager that raises ResourceDenied if the specified exception type is raised. Any keyword arguments are treated as attribute/value pairs to be compared against any exception raised within the with statement. Only if all pairs match properly against attributes on the exception is ResourceDenied raised.
Class used to temporarily set or unset environment variables. Instances can be used as a context manager and have a complete dictionary interface for querying/modifying the underlying os.environ. After exit from the context manager all changes to environment variables done through this instance will be rolled back.
Changed in version 3.1: Added dictionary interface.
Temporarily set the environment variable envvar to the value of value.
Temporarily unset the environment variable envvar. | <urn:uuid:84be95db-5e3b-4303-bb9e-ae40cf37ff41> | 2.765625 | 3,632 | Documentation | Software Dev. | 49.795791 |
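The rollback behaviour of EnvironmentVarGuard described above can be sketched with a small context manager. This is an illustration only (env_guard_sketch is an invented name, and it supports setting variables but not the full dictionary interface):

```python
import contextlib
import os

@contextlib.contextmanager
def env_guard_sketch(**changes):
    # Record the prior value (or absence) of each variable, apply the
    # changes, and roll everything back on exit.
    saved = {k: os.environ.get(k) for k in changes}
    os.environ.update(changes)
    try:
        yield
    finally:
        for k, old in saved.items():
            if old is None:
                os.environ.pop(k, None)
            else:
                os.environ[k] = old
```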
Comprehensive Description
A moderate-sized, slender salamander identified by having four digits on the hind limbs and by its relatively broad, well-demarcated head, long limbs, large hands and feet, and tapering tail. The species has a distinctive color pattern consisting of bright coppery markings arranged as a pair of parallel streaks over the shoulders and in the pelvic region and as spots or blotches on the tail. The ground color is black, and the venter of the trunk is shiny dark black with a few small, widely scattered, indistinct white spots that are absent along the midline. The tail is shorter and less cylindrical than is typical of the nearby and sympatric species Batrachoseps nigriventris.
This species has no close relatives. Data from allozymes and mitochondrial DNA sequences have been interpreted as indicating that the species has been separated from other species in the subgenus Batrachoseps for on the order of 8 to 13 million years.
See another account at californiaherps.com. | <urn:uuid:ecd477e0-c45f-4623-953a-61e84dfca434> | 2.984375 | 220 | Knowledge Article | Science & Tech. | 38.997846 |
Roland Piquepaille writes "The GLOBCOVER project, started by the European Space Agency (ESA), has a very simple goal. It will create the most detailed portrait of the Earth's land surface with a resolution three times sharper than any previous satellite map. The image acquisition will be done throughout 2005 and use the Medium Resolution Imaging Spectrometer (MERIS) instrument of the Envisat environmental satellite. To create this sharp map, the GLOBCOVER project will analyze about 20 terabytes of data gathered by the European satellite. When it's completed, the map will have numerous uses, 'including plotting worldwide land use trends, studying natural and managed ecosystems and modelling climate change extent and impacts.'" | <urn:uuid:c55cd69d-2819-45c8-a5eb-10acdce88e46> | 3.640625 | 145 | Truncated | Science & Tech. | 20.941857 |
NOAA Teacher at Sea
Aboard NOAA Ship Henry B. Bigelow
July 20 — August 1, 2011
Mission: Cetacean and Seabird Abundance Survey
Geographical Area: North Atlantic
Date: July 30, 2011
Air Temp: 19 ºC
Water Temp: 18 ºC
Wind Speed: 12 knots
Water Depth: 64 meters
Science and Technology Log
When traveling in the ocean you never know what you will get. Scientists can try to predict the weather or the amount of animals that will be seen in a particular area but nothing is as valuable as going to the area and recording what you see. For the last couple of days we have been traveling in deep water off the continental shelf of the east coast of the United States. Yesterday, we made a turn toward the edge of the shelf and we were very surprised by what we found. (Check the Ship Tracker to view our path.)
The ocean can best be described as a patchy, dynamic environment. Some days we have traveled for hours and not seen a single animal but on days like yesterday, we saw so many animals our single data recorder was busy all day. Since the start of this cruise we have averaged about 30 sightings a day. Yesterday, we had 30 sightings in the first 30 minutes of observation and ended with over 115 sightings.
Species ranged from abundant Common Dolphin, to rare and elusive beaked whales. The sighting conditions were so outstanding the Marine Mammal Observers were identifying everything from a small warbler to the second largest whale, a Fin Whale. Large whales, like Sei and Minke Whales, were concentrated in one area, while the dolphins were seen in other areas. We passed over several undersea canyons and cetacean abundance over these canyons was like nothing one of the scientists had ever seen.
Two tools in the ship’s wide array of scientific tools, help scientists document the small animals that the whales and dolphins might be feeding on over the top of the canyons. One is the XBT, or Expendable Bathythermograph, and the second is a VPR, or Video Plankton Recorder. The XBT is launched from the moving ship to document the temperature and water density along the ship’s track. They are inexpensive and record data in real-time, giving accurate and up to date information about the area the animals are most abundant. The VPR is a tool used at night, while the ship moves slowly, to take pictures of the plankton that occurs along our route.
The combination of temperature, depth and photographs of plankton gives scientists a clear picture of the environment that congregates large densities of cetaceans. By understanding the factors that contribute to cetacean population changes, scientists are able to make recommendations to lawmakers about how to protect this natural resource from human impact like bycatch from the fishing industry or ship strikes in commonly trafficked shipping lanes.
I am disappointed that we only have two days left on our trip. I have thoroughly enjoyed my time at sea. Crazy weather this morning of 30 knot winds and 6-8 foot seas will not be a fun memory but thankfully, this evening the weather settled down and we watched a beautiful sunset while playing games on the top deck. I am not sure that I could be a marine mammal observer but I look forward to taking this unique opportunity and turning it into a learning experience for my students.
Since this will be my last post from sea I thought I would leave you with some images of ocean life that was not a marine mammal or seabird. Enjoy. | <urn:uuid:6a0dbe2c-6404-4309-a34d-5b2edcf648d7> | 3.15625 | 748 | Personal Blog | Science & Tech. | 48.638047 |
For value types, “==” and Equals() work the same way: they compare two objects by VALUE.
int i = 5;
int k = 5;

i == k         // True
i.Equals(k)    // True
For reference types, the two behave differently:
“==” compares REFERENCE – returns true if and only if both references point to the SAME object.
Equals method compares object by VALUE.
StringBuilder sb1 = new StringBuilder("Mahesh");
StringBuilder sb2 = new StringBuilder("Mahesh");

sb1 == sb2         // False
sb1.Equals(sb2)    // True
String s1 = "zzz";
String s2 = "zzz";

In this case the results will be:

s1 == s2           // True
s1.Equals(s2)      // True
Why? Does that mean String is a Value Type?
No, String IS a Reference Type. Although string is a reference type, the equality operators (== and !=) are defined to compare the values of string objects, not references. This makes testing for string equality more intuitive.
The starter kit comes as an MSI installable package. It adds a new template into Visual Studio, making it possible for you to choose File | New | Role Playing Game from the menu. This one step process initiates the creation of a new solution containing the source for a complete tile-based role playing game. In Figure 1 you can see the Solution Explorer and Class View for the created project. By viewing this screen shot you can get some sense of what is available inside the starter kit. Obviously this is a fairly extensive bit of source code with lots of logic for you to digest and learn from, especially if you are new to game development.
Acquiring the Starter Kit
The downloads for the RPG Starter Kit are broken out into versions for XNA Game Studio 2.0 and 3.0. There is also a distinction between code that targets Windows and code for the XBox:
- For XNA Game Studio 3.0 Windows
- For XNA Game Studio 3.0 XBox 360
- For XNA Game Studio 2.0 Windows
- For XNA Game Studio 2.0 XBox 360
Here are some additional links | <urn:uuid:857fa3b4-98bd-4a21-a30b-edc0a76d0070> | 3.28125 | 504 | Personal Blog | Software Dev. | 66.752414 |
Hydrogen sulfide erupted along the coast of Namibia in mid-March 2010. Pale-hued waters along the shore hinted at gaseous rumblings as the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite passed overhead and captured this true-color image on March 13, 2010. Although ocean water appears navy blue farther from shore, water along the coast ranges in color from peacock green to off-white. Ocean water wells up in this area along the continental shelf.
The milky surface waters that coincide with gaseous eruptions along Namibia’s coast have a low oxygen content. As reported in a 2009 study, the frequent hydrogen sulfide emissions in this area result from a combination of factors: ocean-current delivery of oxygen-poor water from the north, oxygen-depleting demands of biological and chemical processes in the local water column, and carbon-rich organic sediments under the water column.
Commercially important fish species have hatching grounds along the Namibian coast, and hydrogen sulfide eruptions can often kill large numbers of fish. In addition, the gas eruptions send a noxious rotten-egg smell inland. These events bring some benefits, however. Sea birds eat the fish carcasses, and humans can make meals of lobsters fleeing onshore to escape the oxygen-deprived waters.
Inland, this MODIS image shows the rippling sand dunes of the Namib Desert, which stretches for hundreds of kilometers along the southern African coast.
- Brüchert, V., Currie, B., Peard, K.R. (2009). Hydrogen sulphide and methane emissions on the central Namibian shelf. Progress in Oceanography, 83, 169–179.
- Why Files. (2002). The Biggest Burp. Accessed March 15, 2010. | <urn:uuid:69c81533-719e-422f-821d-058ed6b935cd> | 3.71875 | 388 | Knowledge Article | Science & Tech. | 48.314484 |
Scientists study a diversity of ocean life!
Every day, scientists at the Alaska Fisheries Science Center work to increase our knowledge about ocean life. How do they do this? What do they learn?
Through this activity you’ll discover how scientists find out where whales go, count fish, study microscopic plants and animals and study rockfish. Then find out how you can help the ocean!
- Solving the mystery of where seals and whales go!
- How do scientists count fish?
- How do scientists study microscopic organisms?
- Why study rockfish?
- How to help the ocean! | <urn:uuid:d358e671-22ca-45a9-9550-c37d733a2717> | 3.59375 | 124 | Tutorial | Science & Tech. | 63.621378 |