When we look at a code-behind page in a .NET application, we find that just below the namespace declarations every web page declares its class with the partial keyword. This raises two questions: what is a partial class, and why do we use it instead of declaring the class as simply public or private? Here we answer these questions, starting with the definition of a partial class and the need for it.

Normalisation is a data-analysis technique used to design a database system. It allows the database designer to understand the current data structure within an organisation. The end result of normalisation is a set of entities; by normalising database tables we remove unnecessary redundancy.

An alias is a user-supplied name that refers to a column or table, allowing it to be referenced without using its real name. As we proceed in this article we will see how to use both column aliases and table aliases in SQL Server.

Using the query string is another method of passing information between pages in your ASP.NET application. The query string is the portion of the URL after a question mark (?). The information is always retrieved as a string, which can then be converted to any type. Here we show code for passing multiple values at once in a query string.

The CAST() function is used to change the data type of a column and can be used for various purposes: CAST(Original_Expression AS Desired_DataType).

The CONVERT() function converts an expression of one data type to another. It can also be used to present the value of a date/time variable in various formats; later in this post we will see how to accomplish this.

Reference types are an important feature of the C# language. They enable us to write complex, powerful applications and to use the run-time framework effectively. A reference-type variable in C# contains a reference to the data, not the value itself; the value is stored in a separate memory area. Examples of reference types in C# include classes, interfaces, arrays, and delegates.
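To make the partial-class idea concrete, here is a minimal C# sketch (the class and file names are illustrative, not from any real project): one class split across two files, just as ASP.NET splits a page between the designer-generated file and your code-behind file.

    // File 1: Customer.Designer.cs -- e.g. the tool-generated half
    public partial class Customer
    {
        public string Name { get; set; }
    }

    // File 2: Customer.cs -- e.g. the hand-written half
    public partial class Customer
    {
        public string Greet()
        {
            // Members declared in the other file are visible here, because
            // the compiler merges all partial parts into a single class.
            return "Hello, " + Name;
        }
    }

All parts must use the partial keyword and live in the same assembly and namespace; access modifiers still apply to the merged class as a whole, which is why partial is a code-organization device rather than an access specifier.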
<urn:uuid:65ffd00f-bfa8-4e85-ba0a-061109f0599f>
3.40625
491
Content Listing
Software Dev.
46.28012
The Pasterze Glacier in western Austria has been receding since 1856. A combination of higher summer temperatures and lower winter snowfall is causing the retreat. Glaciers in nearby Switzerland receded more rapidly in 2003 than in any other year since annual measurements began in 1880. Despite the record heat in Europe that summer, scientists from the Swiss Academy of Natural Sciences attributed the melting to long-term climate change. NASA scientists use satellite data to measure the advance and retreat of glaciers all around the world. This true-color image was acquired by Space Imaging’s Ikonos satellite on October 3, 2001. The full-resolution image has a resolution of 4 meters per pixel. For more information about monitoring glaciers, read At the Edge: Monitoring Glaciers to Watch Global Change. Image by Robert Simmon, NASA’s Earth Observatory, based on data copyright Space Imaging
<urn:uuid:3bd74f56-0527-4a18-8035-023e5cfe289a>
4.40625
180
Knowledge Article
Science & Tech.
32.29
General Chemistry/Periodicity and Electron Configurations

Blocks of the Periodic Table

The Periodic Table does more than just list the elements. The word periodic means that in each row, or period, there is a pattern of characteristics in the elements. This is because the elements are listed in part by their electron configuration. The alkali metals and alkaline earth metals have one and two valence electrons (electrons in the outer shell) respectively. These elements lose electrons to form bonds easily, and are thus very reactive. These elements are the s-block of the periodic table. The p-block, on the right, contains common non-metals such as chlorine and helium. The noble gases, in the column on the far right, almost never react, since they have eight valence electrons, which makes them very stable. The halogens, directly to the left of the noble gases, readily gain electrons and react with metals. The s and p blocks make up the main-group elements, also known as representative elements. The d-block, which is the largest, consists of transition metals such as copper, iron, and gold. The f-block, on the bottom, contains rarer metals including uranium. Elements in the same group or family have the same configuration of valence electrons, making them behave in chemically similar ways.

Causes for Trends

There are certain phenomena that cause the periodic trends to occur. You must understand them before learning the trends.

Effective Nuclear Charge

The effective nuclear charge is the amount of positive charge acting on an electron. It is the number of protons in the nucleus minus the number of electrons in between the nucleus and the electron in question. Basically, the nucleus attracts an electron, but other electrons in lower shells repel it (opposites attract, likes repel).

Shielding Effect

The shielding (or screening) effect is similar to effective nuclear charge. The core electrons repel the valence electrons to some degree. The more electron shells there are (a new shell for each row in the periodic table), the greater the shielding effect is. Essentially, the core electrons shield the valence electrons from the positive charge of the nucleus.

Electron-Electron Repulsions

When two electrons are in the same shell, they will repel each other slightly. This effect is mostly canceled out due to the strong attraction to the nucleus, but it does cause electrons in the same shell to spread out a little bit. Lower shells experience this effect more because they are smaller and allow the electrons to interact more.

Coulomb's Law

Coulomb's law is an equation that determines the amount of force with which two charged particles attract or repel each other. It is $F = k\,\frac{q_1 q_2}{r^2}$, where $q_1$ and $q_2$ are the amounts of charge (+1e for protons, -1e for electrons), $r$ is the distance between them, and $k$ is a constant. You can see that doubling the distance would quarter the force. Also, a large number of protons would attract an electron with much more force than just a few protons would.
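As a quick check of that scaling claim, the arithmetic can be written out (a worked illustration, not part of the original text):

    % Coulomb's law: doubling the separation quarters the force.
    F(r) = k\,\frac{q_1 q_2}{r^2}, \qquad
    F(2r) = k\,\frac{q_1 q_2}{(2r)^2} = \frac{1}{4}\,k\,\frac{q_1 q_2}{r^2} = \frac{F(r)}{4}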
Trends in the Periodic Table

Most of the elements occur naturally on Earth. However, all elements beyond uranium (number 92) are called trans-uranium elements and never occur outside of a laboratory. Most of the elements occur as solids or gases at STP. STP is standard temperature and pressure, which is 0 °C and 1 atmosphere of pressure. There are only two elements that occur as liquids at STP: mercury (Hg) and bromine (Br). Bismuth (Bi) is the last stable element on the chart. All elements after bismuth are radioactive and decay into more stable elements; some elements before bismuth are radioactive as well.

Atomic Radius

Leaving out the noble gases, atomic radii are larger on the left side of the periodic chart and grow progressively smaller as you move right across a period. Conversely, as you move down a group, radii increase. Atomic radii decrease along a period due to greater effective nuclear charge. Atomic radii increase down a group due to the shielding effect of the additional core electrons and the presence of another electron shell.

Ionic Radius

For nonmetals, ions are bigger than atoms, as the ions have extra electrons. For metals, it is the opposite. Extra electrons (in negative ions, called anions) cause additional electron-electron repulsions, making the electrons spread out farther. Fewer electrons (in positive ions, called cations) mean fewer repulsions, allowing the electrons to be closer.

Ionization Energy

Ionization energy is the energy required to strip an electron from the atom (when in the gas state). Ionization energy is also a periodic trend within the periodic table organization. Moving left to right within a period or upward within a group, the first ionization energy generally increases: as the atomic radius decreases, it becomes harder to remove an electron that is closer to a more positively charged nucleus. Ionization energy decreases going from right to left across a period because there is a lower effective nuclear charge keeping the electrons attracted to the nucleus, so less energy is needed to pull one out. It decreases going down a group due to the shielding effect; remember Coulomb's law: as the distance between the nucleus and electrons increases, the force decreases at a quadratic rate. Ionization energy is considered a measure of the tendency of an atom or ion to surrender an electron, or of the strength of the electron binding; the greater the ionization energy, the more difficult it is to remove an electron. The ionization energy may be an indicator of the reactivity of an element. Elements with a low ionization energy tend to be reducing agents and form cations, which in turn combine with anions to form salts.

Electron Affinity

Electron affinity is the opposite of ionization energy: it is the energy released when an electron is added to an atom. Electron affinity is highest in the upper right of the table (excluding the noble gases) and lowest in the lower left. Electron affinity is actually negative for the noble gases. They already have a complete valence shell, so there is no room in their orbitals for another electron; adding an electron would require creating a whole new shell, which takes energy instead of releasing it. Several other elements have extremely low electron affinities because they are already in a stable configuration, and adding an electron would decrease stability. Electron affinity follows the same periodic trends, for the same reasons, as ionization energy.

Electronegativity

Electronegativity is how strongly an atom attracts electrons within a bond. It is measured on a scale with fluorine at 4.0 and francium at 0.7. Electronegativity decreases from upper right to lower left because of atomic radius, the shielding effect, and effective nuclear charge, in the same manner that ionization energy decreases.

Metallic Character

Metallic elements are shiny, usually gray or silver colored, and good conductors of heat and electricity. They are malleable (can be hammered into thin sheets) and ductile (can be stretched into wires). Some metals, like sodium, are soft and can be cut with a knife. Others, like iron, are very hard.
Non-metallic atoms are dull, usually colorful or colorless, and poor conductors. They are brittle when solid, and many are gases at STP. Metals give away their valence electrons when bonding, whereas non-metals take electrons. The metals are towards the left and center of the periodic table, in the s-block, d-block, and f-block. Poor metals and metalloids (somewhat metal, somewhat non-metal) are in the lower left of the p-block. Non-metals are on the right of the table. Metallic character increases from right to left and top to bottom; non-metallic character is just the opposite. This follows from the other trends: ionization energy, electron affinity, and electronegativity.
<urn:uuid:7ab562e2-c61b-4988-9c51-24c5b3cb1d20>
4.4375
1,666
Knowledge Article
Science & Tech.
39.408244
Most Atlantic hurricanes start to take shape when thunderstorms along the west coast of Africa drift out over warm ocean waters that are at least 80 degrees Fahrenheit (27 degrees Celsius), where they encounter converging winds from around the equator.

Warm Air, Warm Water Make Conditions Right for Hurricanes

Hurricanes start when warm, moist air from the ocean surface begins to rise rapidly and encounters cooler air, which causes the warm water vapor to condense and form storm clouds and drops of rain. The condensation also releases latent heat, which warms the cool air above, causing it to rise and make way for more warm, humid air from the ocean below. As this cycle continues, more warm moist air is drawn into the developing storm and more heat is transferred from the surface of the ocean to the atmosphere. This continuing heat exchange creates a wind pattern that spirals around a relatively calm center, or eye, like water swirling down a drain.

Converging Winds Create Hurricanes

Converging winds near the surface of the water collide, pushing more water vapor upward, increasing the circulation of warm air, and accelerating the speed of the wind. At the same time, strong winds blowing steadily at higher altitudes pull the rising warm air away from the storm's center and send it swirling into the hurricane's classic cyclone pattern. High-pressure air at high altitudes, usually above 30,000 feet (9,000 meters), also pulls heat away from the storm's center and cools the rising air. As high-pressure air is drawn into the low-pressure center of the storm, the speed of the wind continues to increase. As the storm builds from thunderstorm to hurricane, it passes through three distinct stages based on wind speed:
- Tropical depression: wind speeds of less than 38 miles per hour (61.15 kilometers per hour)
- Tropical storm: wind speeds of 39 mph to 73 mph (62.76 kph to 117.48 kph)
- Hurricane: wind speeds greater than 74 mph (119.09 kph)

Scientists Debate Cause of Temperature Changes that Create Hurricanes

While scientists agree on the mechanics of hurricane formation, and they agree that hurricanes are becoming more frequent and severe, that's where consensus ends. Some scientists believe that human activity already has contributed significantly to global warming, which is increasing air and water temperatures worldwide and making it easier for hurricanes to form and gain destructive force. Other scientists believe that the increase in severe hurricanes over the past decade is due to natural salinity and temperature changes deep in the Atlantic, part of a natural environmental cycle that shifts back and forth every 40-60 years.

Frequency and Severity of Hurricanes Likely to Increase

While the scientific community debates the root cause of the temperature changes that are contributing to the current increase in destructive hurricanes, three things are apparent:
- Air and water temperatures are rising worldwide.
- Human activities such as deforestation and greenhouse gas emissions from a wide range of industrial and agricultural processes are contributing to those temperature changes at a greater rate today than in the past.
- Failure to take action now to lower atmospheric levels of greenhouse gases is likely to lead to more frequent and severe hurricanes in the future.
<urn:uuid:2529c9ff-fac1-4c7a-81c9-51e424e73008>
4.21875
647
Knowledge Article
Science & Tech.
31.318844
What does it mean, and what is it for?

It is used to map a canonical name for a servlet (not an actual Servlet class that you've written) to a JSP (which happens to be a servlet). On its own it isn't all that useful; you'll usually also map the servlet to a url-pattern so that, for example, all requests arriving at /test/* are serviced by the JSP (a sketch of such a mapping appears at the end of this answer).

Additionally, the servlet specification states that the jsp-file element contains the full path to a JSP file within the web application, beginning with a "/", and that if a jsp-file is specified and the load-on-startup element is present, then the JSP should be precompiled and loaded.

So, it can be used for pre-compiling servlets, in case your build process hasn't precompiled them. Do keep in mind that precompiling JSPs this way isn't exactly a best practice; ideally, your build script ought to take care of such matters.

Is it like code-behind architecture in ASP.NET? No. If you're looking for code-behind architecture, the closest resemblance to it is the Managed Beans support offered by JSF.
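A minimal sketch of the web.xml mapping described above (the servlet name and paths are illustrative, not from the question):

    <servlet>
        <servlet-name>TestJsp</servlet-name>
        <!-- jsp-file must be a full path within the web app, beginning with "/" -->
        <jsp-file>/WEB-INF/jsp/test.jsp</jsp-file>
        <!-- with load-on-startup present, the container should precompile the JSP -->
        <load-on-startup>1</load-on-startup>
    </servlet>
    <servlet-mapping>
        <servlet-name>TestJsp</servlet-name>
        <!-- all requests arriving at /test/* are now serviced by the JSP -->
        <url-pattern>/test/*</url-pattern>
    </servlet-mapping>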
<urn:uuid:d74cec90-49ba-4472-9fd3-5508360e9b05>
2.796875
271
Q&A Forum
Software Dev.
67.660625
Good answer by Fishtoaster. The science is ancient, discovered by Archimedes.

1: Any object, wholly or partially immersed in a fluid, is buoyed up by a force equal to the weight of the fluid displaced by the object. In other words, if you put a ball with a volume of 1 litre completely under water, there is an upward force on the ball (buoyancy or flotation) equal to the weight of 1 litre of water (i.e. 1 kilogram-force, or 9.81 newtons).

2 (corollary): Any floating object displaces its own weight of fluid. If we place a floating object of mass 1 kilo in water, it will displace exactly 1 kilo of water, i.e. 1 litre of water. If the volume of our object is greater than 1 litre, it will float.

So, a hydrometer is weighted such that the water it displaces when submerged to the 1.000 mark weighs exactly as much as the hydrometer itself. If we dissolve solids (sugar) into the water, each litre of that water is heavier, so less of it needs to be displaced for the hydrometer's weight to be matched, and the hydrometer floats higher.
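To put numbers on that (a worked illustration with made-up figures, not from the original answer), consider a hydrometer of mass 50 g:

    % A floating hydrometer of mass m displaces liquid of mass m:
    %   m = \rho V  \implies  V = m / \rho
    V_{\text{water}} = \frac{50\ \text{g}}{1.000\ \text{g/mL}} = 50\ \text{mL},
    \qquad
    V_{\text{sugar solution}} = \frac{50\ \text{g}}{1.050\ \text{g/mL}} \approx 47.6\ \text{mL}

About 2.4 mL less of the denser liquid needs to be displaced, so the stem rides higher, and the calibration marks read this as a specific gravity of 1.050.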
<urn:uuid:710105c9-fdc7-4e38-9ce1-cfb2eb9ec746>
3.65625
266
Q&A Forum
Science & Tech.
67.277035
Robots game activity

Bring the stack of CRC index cards you developed in lab yesterday to lecture, one for each class you hope to design in your robots program. This activity will involve elaboration of these cards, giving greater specificity to responsibilities and describing each class's attributes. Your task in class is to embellish these cards as follows:
- Each card should list the member functions and instance variables of the class. (Instance variables are the private variables.) Insofar as possible, you should give the type/class of all parameters and return values of the member functions, and the types of the instance variables.
- One additional card lists non-member functions (if any) along with the purpose of each. Again, give the type/class of all parameters and return values.
- Number all member functions and procedures in the order you intend to code and test them. Put the number (1) in front of all methods and procedures you can test on their own without writing any others. Put a (2) in front of procedures you can't write until you've written procedures from (1), and so forth.
- Identify with a (*) all methods which appear challenging to write. These are ones which you hope you can break up later into smaller procedures once you've thought about the robots program more.
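As an illustration only (the class and member names below are hypothetical, not part of the assignment), a finished card for a Robot class might translate into a C++ skeleton like this:

    // CRC card "Robot", transcribed as a class skeleton.
    // (1)/(2) record the planned coding-and-testing order;
    // (*) marks a method expected to be broken up later.
    class Robot {
    public:
        Robot(int row, int col);           // (1) constructor, testable on its own
        int row() const;                   // (1) accessor
        int col() const;                   // (1) accessor
        void moveTo(int row, int col);     // (2) relies on the (1) accessors
        void planPath(int row, int col);   // (2)(*) likely to need decomposition
    private:
        int row_;                          // current row position
        int col_;                          // current column position
    };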
<urn:uuid:4fe2fa59-6eeb-4077-99a2-3e6678c046da>
3.4375
295
Tutorial
Software Dev.
49.502065
In addition to the above types of problems, considerable research is directed to basic questions such as: Do we understand how quasars form and evolve? Can we connect theories of galaxy and black hole formation with the observations of quasars at high redshift and the incidence of black holes in galaxies at low redshift? Here I mention briefly some recent theoretical work that demonstrates progress in our understanding of quasars and ties in with present and future observational work. Haiman, Madau, and Loeb (1998) point out that the scarcity of quasars at z > 3.5 in the Hubble Deep Field implies that the formation of quasars in halos with circular velocities less than 50 km/s is suppressed (on the assumption that black holes form with constant efficiency in cold dark matter halos). They note that the Next Generation Space Telescope should be able to detect the epoch of formation of the earliest quasars. Cavaliere and Vittorini (1998) note that the observed form of the evolution of the space density of quasars can be understood at early times, when cosmology and the processes of structure formation provide material for accretion onto central black holes as galaxies assemble. Quasars then turn off at later times because interactions with companions cause the accretion to diminish. Haehnelt, Natarajan, and Rees (1998) show that the peak of quasar activity occurs at the same time as the first deep potential wells form. The Press-Schechter approach provides a way to estimate the space density of dark matter halos. But the space density of z = 3 quasars is less than 1% that of star-forming galaxies, which implies the quasar lifetime is much less than a Hubble time. For an assumed relation between quasar luminosity and timescale, and assuming the Eddington limit, it is possible to connect the observed quasar luminosity density with dark matter halos and the numbers of black holes in nearby galaxies. The apparently large number of local galaxies with black holes implies that accretion processes in quasars are inefficient at producing blue light.
<urn:uuid:3b1a9e85-b862-4186-8471-8747530a00ce>
3.046875
435
Academic Writing
Science & Tech.
34.975905
Major Section: DOCUMENTATION

ACL2 documentation strings make special use of the tilde character (~). In particular, we describe here a ``markup language'' for which the tilde character plays a special role. The markup language is valuable if you want to write documentation that is to be displayed outside your ACL2 session. If you are not writing such documentation, and if also you do not use the character `~', then there is no need to read on.

Three uses of the tilde character (~) in documentation strings are as follows. Below we explain the use that constitutes the ACL2 markup language; the other uses of the tilde character are of the following form.

~/
Indicates the end of a documentation section; see doc-string.

~~
Indicates the literal insertion of a tilde character (~).

~]
This directive in a documentation string is effective only during the processing of part 2, the details (see doc-string), and controls how much is shown on each round of :more processing when printing to the terminal. If the system is not doing :more processing, then it acts as though the ~] is not present. Otherwise, the system puts out a newline and halts documentation printing on the present topic, which can be resumed if the user types :more at the terminal.

The remaining use is the markup language proper: directives of the form ~key[arg]. Before launching into an explanation of how this works in detail, let us consider some small examples.

Here is a word that is code:
~c[function-name].
Here is a phrase with an ``emphasized'' word, ``not'':
Do ~em[not] do that.
Here is the same phrase, but where ``not'' receives stronger emphasis (presumably boldface in a printed version):
Do ~st[not] do that.
Here is a passage that is set off as a display, in a fixed-width font:
~bv
This passage has been set off as ``verbatim''.
The present line starts just after a line break. Normally, printed
text is formatted, but inside ~bv...~ev, line breaks are taken
literally.
~ev

In general, the idea is to provide a ``markup language'' that can be reasonably interpreted not only at the terminal (via doc), but also via translators into other languages. In fact, translators have been written into Texinfo and HTML.

Let us turn to a more systematic consideration of how to mark text in documentation strings using expressions of the form ~key[arg], which we will call ``doc-string tilde directives.'' The idea is that key informs the documentation printer (which could be the terminal, a hardcopy printer, or some hypertext tool) about the ``style'' used to display arg. The intention is that each such printer should do the best it can. For example, we have seen above that ~em[arg] tells the printer to emphasize arg if possible, using an appropriate display to indicate emphasis (italics, or perhaps surrounding arg with some character _, or ...). For another example, the directive for bold, ~b[arg], says that printed text for arg should be in bold if possible, but if there is no bold font available (such as at the terminal), then the argument should be printed in some other reasonable manner (for example, as ordinary text). The key is case-insensitive; for example, you can use ~BV or ~Bv or ~bV in place of ~bv. Every form below may have any string as the argument (inside [..]), as long as it does not contain a newline (more on that below). However, when an argument does not make much sense to us, we show the directive below with an empty argument.

~- Print the equivalent of a dash
~b[arg] Print the argument in bold font, if available
~bid[arg] ``Begin implementation dependent'' -- ignores argument at terminal
~bf Begin formatted text (respecting spaces and line breaks), but in ordinary font (rather than, say, fixed-width font) if possible
~bq Begin quotation (indented text, if possible)
~bv Begin verbatim (print in fixed-width font, respecting spaces and line breaks)
~c[arg] Print arg as ``code'', such as in a fixed-width font
~ef End format; balances ~bf
~eid[arg] ``End implementation dependent'' -- ignores argument at terminal
~em[arg] Emphasize arg, perhaps using italics
~eq End quotation; balances ~bq
~ev End verbatim; balances ~bv
~i[arg] Print arg in italics font
~id[arg] ``Implementation dependent'' -- ignores argument at terminal
~il[arg] Print argument as is, but make it a link (for true hypertext environments)
~ilc[arg] Same as ~il[arg], except that arg should be printed as with ~c[arg]
~l[arg] Ordinary link; prints as ``See :DOC arg'' at the terminal (but also see ~pl below, which puts ``see'' in lower case)
~nl Print a newline
~par Paragraph mark, of no significance at the terminal (can be safely ignored; see also notes below)
~pl[arg] Parenthetical link (borrowing from Texinfo): same as ~l[arg], except that ``see'' is in lower case. This is typically used at other than the beginning of a sentence.
~sc[arg] Print arg in (small, if possible) capital letters
~st[arg] Strongly emphasize arg, perhaps using a bold font
~t[arg] Typewriter font; similar to ~c[arg], but leaves less doubt about the font that will be used
~terminal[arg] Terminal only; arg is to be ignored except when reading documentation at the terminal, using :DOC

Style notes and further details

It is not a good idea to put doc-string tilde directives inside ~bv ... ~ev. Do not nest doc-string tilde directives; that is, do not write
The ~c[~il[append]] function ...
but note that the ``equivalent'' expression
The ~ilc[append] function ...
is fine. The following phrase is also acceptable:
~bf This is ~em[formatted] text. ~ef
because the nesting is only conceptual, not literal.

We recommend that the ``begin'' and ``end'' directives for displayed text, such as ~bv and ~ev, should usually each be on lines by themselves. That way, printed text may be less encumbered with excessive blank lines. Here is an example:
Here is some normal text. Now start a display:
~bv
2 + 2 = 4
~ev
And here is the end of that paragraph. Here is the start of the next paragraph.
The analogous consideration applies to ~bf ... ~ef as well as ~bq ... ~eq.

You may ``quote'' characters inside the arg part of ~key[arg] by preceding them with ~. This is, in fact, the only legal way to use a newline character or a right bracket (]) inside the argument to a doc-string tilde directive.

Write your documentation strings without hyphens. Otherwise, you may find your text printed on paper (via TeX, for example) like this --
Here is a hyphe- nated word.
even if what you had in mind was:
Here is a hyphenated word.
When you want to use a dash (as opposed to a hyphen), consider using ~-, which is intended to be interpreted as a ``dash.'' For example:
This sentence ~- which is broken with dashes ~- is boring.
would be written to the terminal (using doc) by replacing ~- with two hyphen characters, but would presumably be printed on paper with a dash.

Be careful to balance the ``begin'' and ``end'' pairs, such as ~bv and ~ev. Also, do not use two ``begin'' directives (e.g., two ~bv) without an intervening ``end'' directive. It is permissible (and perhaps this is not surprising) to use the doc-string part separator between such a begin-end pair.
Because of a bug in Texinfo (as of this writing), you may wish to avoid beginning a line with (any number of spaces followed by) the - character.

The ``paragraph'' directive, ~par, is rarely if ever used. There is a low-level capability, not presently documented, that interprets two successive newlines as though they were ~par directives. This is useful for the HTML driver. For further details, see the authors of ACL2.

Emacs code is available for manipulating documentation strings that contain doc-string tilde directives (for example, for doing a reasonable job filling such documentation strings). See the authors if you are interested.

We tend to use ~em[arg] for ``section headers,'' such as ``Style notes and further details'' above. We tend to use ~st[arg] for emphasis of words inside text. This division seems to work well for our Texinfo driver. Note that ~em[arg] causes arg to be printed in upper case at the terminal, while ~st[arg] causes arg to be printed at the terminal as though arg were not marked for emphasis.

Our Texinfo and HTML drivers both take advantage of capabilities for indicating which characters need to be ``escaped,'' and how. Unless you intend to write your own driver, you probably do not need to know more about this issue; otherwise, contact the ACL2 authors. We should probably mention, however, that Texinfo makes the following requirement: when using one of the special characters { or }, you must immediately follow this use with a period or comma. Also, the Emacs ``info'' documentation that we generate by using our Texinfo driver has the property that in node names, the character : is replaced (because of quirks in info); so, for example, the ``proof-checker'' command s is documented under a modified node name rather than under its own.

We have tried to keep this markup language fairly simple; in particular, there is no way to refer to a link by other than the actual name. So, for example, when we want to make a link printed in ``code'' font, we write a form such as ~ilc[arg], so that arg is both printed in that font and serves as a link.
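As a quick illustration of several directives together, here is a hedged sketch of a documentation string (the topic, section name, and wording are hypothetical, not from the ACL2 sources):

    (defdoc my-topic
      ":Doc-Section Miscellaneous

      a one-line summary of MY-TOPIC~/

      Use ~c[my-fn] with care; do ~st[not] pass it a negative argument.
      ~l[doc-string] for the general format of these strings.~/

      ~bv
      (my-fn 3)  ==>  9
      ~ev
      A literal tilde is written ~~.")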
<urn:uuid:8e49dca7-353b-497d-b5e0-bae637794215>
3.390625
2,246
Documentation
Software Dev.
52.52327
“Understanding which species are most vulnerable to human impacts is a prerequisite for designing effective conservation strategies. Surveys of terrestrial species have suggested that large-bodied species and top predators are the most at risk, and it is commonly assumed that such patterns also apply in the ocean. However, there has been no global test of this hypothesis in the sea. We analyzed two fisheries datasets (stock assessments and landings) to determine the life-history traits of species that have suffered dramatic population collapses. Contrary to expectations, our data suggest that up to twice as many fisheries for small, low trophic-level species have collapsed compared with those for large predators. These patterns contrast with those on land, suggesting fundamental differences in the ways that industrial fisheries and land conversion affect natural communities. Even temporary collapses of small, low trophic-level fishes can have ecosystem-wide impacts by reducing food supply to larger fish, seabirds, and marine mammals.” Access the full article here. Access more articles here. Source: Sea Web Marine Science Review, 7 September 2012
<urn:uuid:7ef17589-6334-4f7f-8707-b2e85202e9c1>
3.59375
235
Truncated
Science & Tech.
23.591912
Java vs. C
Is Java easier or harder than C?

Java Virtual Machine
The key to Java's portability and security is the Java Virtual Machine.

History of Java
Java was designed by Sun Microsystems in the early 1990s to solve the problem of connecting many household machines together. This project failed because no one wanted to use it.

Java is arguably the best overall programming language, but there are problems with it.

Java is an excellent programming language.

GUI - Swing vs. AWT
The original graphical user interface (GUI) for Java was called the Abstract Windowing Toolkit (AWT).
<urn:uuid:fbe2a20f-b715-4f1b-9655-5b4acc5d4d06>
2.75
131
Content Listing
Software Dev.
49.995503
SOHO is part of the first Cornerstone project in ESA's science programme; the other part is the Cluster mission. Both are joint ESA/NASA projects in which ESA is the senior partner. SOHO and Cluster are also contributions to the International Solar-Terrestrial Physics Programme, to which ESA, NASA and the space agencies of Japan, Russia, Sweden and Denmark all contribute satellites monitoring the Sun and solar effects. Of the spacecraft's 12 sets of instruments, nine come from multinational teams led by European scientists, and three from US-led teams. More than 1500 scientists from around the world have been involved with the SOHO programme, analysing and interpreting SOHO data for their research projects. SOHO was built for ESA by industrial companies in 14 European countries, led by Matra Marconi (now called ASTRIUM). The service module, with solar panels, thrusters, attitude control systems, communications and housekeeping functions, was prepared in Toulouse, France. The payload module carrying the scientific instruments was assembled in Portsmouth, United Kingdom, and mated with the service module in Toulouse. NASA launched SOHO and is responsible for tracking, telemetry reception and commanding.
<urn:uuid:7e3e2279-e48f-4fbc-aec0-bb81b03f34c5>
2.84375
251
Knowledge Article
Science & Tech.
30.455643
This is a tricky question to answer, because weather, what you experience at your house right now, is not really the same thing as climate, the patterns of global air and sea movements that bring weather. So milder winters can be a possibility in certain locations, as they will be exposed to an overall warming of the entire atmosphere. But colder winters can also be experienced. Since the mid 1970s, global temperatures have been warming at around 0.2 degrees Celsius per decade. However, weather imposes its own dramatic ups and downs over the long-term trend, and we expect to see record cold temperatures even during global warming. Nevertheless, over the last decade, daily record high temperatures occurred twice as often as record lows. This tendency towards hotter days is expected to increase as global warming continues into the 21st century.

Vladimir Petoukhov, a climate scientist at the Potsdam Institute for Climate Impact Research, has recently completed a study on the effect of climate change on winter. According to Petoukhov, "These anomalies could triple the probability of cold winter extremes in Europe and northern Asia. Recent severe winters like last year's or the one of 2005-06 do not conflict with the global warming picture, but rather supplement it."

Weather being a local response to climatic conditions means that you have to understand what has changed in the climatic patterns in your region. What are your local weather drivers? How have they changed since the 1970s? Thus, you could end up with some areas experiencing colder winters, due to greater moisture levels in the air, more precipitation as snow, greater heat loss at night due to clear skies, etc. Or you could have an area that experiences milder temperatures in winter due to warmer air currents, warmer oceans, localised heat-island impacts, etc. For further information you should investigate the publications of the weather and climate agencies for your area.
<urn:uuid:46831906-816c-4216-bb15-f2eeb6252799>
2.953125
383
Q&A Forum
Science & Tech.
43.250079
In differential geometry, a pseudo-Riemannian manifold is a smooth manifold equipped with a smooth, symmetric, (0,2) tensor which is nondegenerate at each point on the manifold. This tensor is called a pseudo-Riemannian metric or, simply, a (pseudo-)metric tensor. The key difference between a Riemannian metric and a pseudo-Riemannian metric is that a pseudo-Riemannian metric need not be positive-definite, merely nondegenerate. Since every positive-definite form is also nondegenerate, a Riemannian metric is a special case of a pseudo-Riemannian one. Thus pseudo-Riemannian manifolds can be considered generalizations of Riemannian manifolds.

Every nondegenerate, symmetric, bilinear form has a fixed signature (p,q). Here p and q denote the number of positive and negative eigenvalues of the form. The signature of a pseudo-Riemannian manifold is just the signature of the metric (one should insist that the signature is the same on every connected component). Note that p + q = n is the dimension of the manifold. Riemannian manifolds are simply those with signature (n,0).

Pseudo-Riemannian metrics of signature (p,1) (or sometimes (1,q); see sign convention) are called Lorentzian metrics. A manifold equipped with a Lorentzian metric is naturally called a Lorentzian manifold. After Riemannian manifolds, Lorentzian manifolds form the most important subclass of pseudo-Riemannian manifolds. They are important because of their physical applications to the theory of general relativity. A principal assumption of general relativity is that spacetime can be modeled as a Lorentzian manifold of signature (3,1).

Just as Euclidean space can be thought of as the model Riemannian manifold, Minkowski space with the flat Minkowski metric is the model Lorentzian manifold. Likewise, the model space for a pseudo-Riemannian manifold of signature (p,q) is $\mathbb{R}^{p,q}$ with the metric

$ds^2 = dx_1^2 + \cdots + dx_p^2 - dx_{p+1}^2 - \cdots - dx_{p+q}^2.$

Some basic theorems of Riemannian geometry can be generalized to the pseudo-Riemannian case. In particular, the fundamental theorem of Riemannian geometry is true of pseudo-Riemannian manifolds as well. This allows one to speak of the Levi-Civita connection on a pseudo-Riemannian manifold along with the associated curvature tensor. On the other hand, there are many theorems in Riemannian geometry which do not hold in the generalized case. For example, it is not true that every smooth manifold admits a pseudo-Riemannian metric of a given signature; there are certain topological obstructions.
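For concreteness (an added illustration consistent with the article's (p,q) ordering, positive directions first), the Lorentzian model of signature (3,1) is Minkowski space, whose flat metric is

$ds^2 = dx_1^2 + dx_2^2 + dx_3^2 - dt^2.$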
<urn:uuid:b9e1a8f8-67e4-46d6-add8-080dedfd2f12>
3.625
644
Knowledge Article
Science & Tech.
25.882828
Questions: G to A and C to T substitutions, is this a rule?
zxiong at arizvm1.ccit.arizona.edu
Mon Apr 11 11:59:05 EST 1994

Being unfamiliar with molecular evolution, I have been troubled by some of my data on an RNA virus sequence. We are working on a small RNA virus and have nearly completed the sequence of the viral RNA genome from cDNA clones. RNA viruses are known to be heterogeneous (quasi-species), so it was not surprising to see nucleotide sequence variations when sequences were obtained from different clones. What was surprising was a consistent rule in the sequence variations: G is always substituted with an A, or vice versa, and C is always substituted with a T, or vice versa. There is never a G to (C, T) change or vice versa. Let me try to explain it a little better. We have found 16 nucleotide substitutions in about 1500 nucleotides of overlapping sequences. There are 11 C to T or T to C substitutions and 5 G to A or A to G substitutions. We have not found any other possible substitutions. Is there a theory describing the rules of nucleotide substitution during evolution? I feel very ignorant and hope someone can give me a pointer to how to explain my observation. Any comments or suggestions are welcome.

Zxiong at arizvm1.ccit.arizona.edu
<urn:uuid:46fef4e6-8218-47b0-a114-75e38b50bf1a>
2.875
327
Q&A Forum
Science & Tech.
59.739091
Paul Painter¹ and Lucas McConnell²
¹Materials Science and Engineering and The Energy Institute
²Renewergy Corporation, Erie PA

Presently, biofuels in this country usually means one of two things: ethanol (in the U.S. principally produced from corn) or biodiesel (largely from oilseeds or yellow grease). However, large-scale production of these fuels will inevitably lead to the displacement of croplands used to produce food, and there will clearly be a limit on the quantity of ethanol and biodiesel that can be obtained from these sources. Furthermore, although both biodiesel and ethanol have a number of attractive properties (in addition to being derived from a renewable source), they are not without problems (lower energy content, clogging of fuel lines and filters because of their ability to dissolve gums and other deposits, etc.). It would clearly be advantageous if a cheap, relatively simple method were available to produce a predominantly hydrocarbon fuel (i.e., largely decarboxylated oils) from feedstocks that contain high contents of free fatty acids. One source on which we wish to focus in particular is algae, for the purposes of this project produced by Renewergy Corporation. Renewergy has developed a proprietary, “aeroponic algalculture” technique that uses a fraction of the water needed by conventional processes and a simple way of increasing surface area for light and CO2 absorption. In preliminary work, we have applied Kolbe electrolysis to the processing of algal oil. Kolbe electrolysis of fatty (alkanoic) acids was the first known electrochemical synthesis. Faraday had originally observed (in 1834) that hydrocarbons are formed upon electrolysis of acetate solutions, but it was H. Kolbe who performed the first detailed investigations of the reactions of carboxylic acids at an anode some fifteen years later. Essentially, the reaction involves the electrochemical oxidative decarboxylation of carboxylic acid salts, which leads to radicals that can then combine to form simple hydrocarbons. We have found that a number of side reactions occur, but these can be advantageous in producing biofuels.
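The overall anodic reaction sketched above can be written out as follows (a standard textbook formulation, not taken from the abstract itself):

    % Kolbe electrolysis: anodic oxidation of carboxylate salts, loss of CO2,
    % then radical recombination to the coupled hydrocarbon.
    2\,\mathrm{RCOO^-} \longrightarrow 2\,\mathrm{RCOO^\bullet} + 2e^-
    \longrightarrow 2\,\mathrm{R^\bullet} + 2\,\mathrm{CO_2}
    \longrightarrow \mathrm{R\!-\!R} + 2\,\mathrm{CO_2}

For a fatty acid, R is the long alkyl chain, which is why the coupled product is a predominantly hydrocarbon fuel.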
<urn:uuid:5808c6cc-13e7-4e25-8330-97fe63475956>
3.625
446
Academic Writing
Science & Tech.
21.524274
Roy J. Plunkett Roy J. Plunkett with a cable insulated with Teflon and a Teflon-coated muffin tin. Gift of Roy Plunkett. Courtesy Hagley Museum and Library. From the 1930s to the present, beginning with neoprene and nylon, the American chemical industry has introduced a cornucopia of polymers to the consumer. Teflon, discovered by Roy J. Plunkett (1910–1994) at the DuPont Company’s Jackson Laboratory in 1938, was an accidental invention—unlike most of the other polymer products. But as Plunkett often told student audiences, his mind was prepared by education and training to recognize novelty. As a poor Ohio farm boy during the Depression, Plunkett attended Manchester College in Indiana. His roommate for a time at this small college was Paul Flory, who would win the 1974 Nobel Prize in chemistry for his contributions to the theory of polymers. Like Flory, Plunkett went on to The Ohio State University for a doctorate, and also like Flory he was hired by DuPont. Unlike Flory, Plunkett made his entire career at DuPont. Reenactment of the 1938 discovery of Teflon. Left to right: Jack Rebok, Robert McHarness, and Roy Plunkett. Courtesy Hagley Museum and Library. Plunkett’s first assignment at DuPont was researching new chlorofluorocarbon refrigerants—then seen as great advances over earlier refrigerants like sulfur dioxide and ammonia, which regularly poisoned food-industry workers and people in their homes. Plunkett had produced 100 pounds of tetrafluoroethylene gas (TFE) and stored it in small cylinders at dry-ice temperatures preparatory to chlorinating it. When he and his helper prepared a cylinder for use, none of the gas came out—yet the cylinder weighed the same as before. They opened it and found a white powder, which Plunkett had the presence of mind to characterize for properties other than refrigeration potential. He found the substance to be heat resistant and chemically inert, and to have very low surface friction so that most other substances would not adhere to it. Plunkett realized that, against the predictions of polymer science of the day, TFE had polymerized to produce this substance—later named Teflon—with such potentially useful characteristics. Chemists and engineers in the Central Research Department with special experience in polymer research and development investigated the substance further. Meanwhile, Plunkett was transferred to the tetraethyl lead division of DuPont, which produced the additive that for many years boosted gasoline octane levels. At first it seemed that Teflon was so expensive to produce that it would never find a market. Its first use was fulfilling the requirements of the gaseous diffusion process of the Manhattan Project for materials that could resist corrosion by fluorine or its compounds (see Ralph Landau). Teflon pots and pans were invented years later. The awarding of Philadelphia’s Scott Medal in 1951 to Plunkett—the first of many honors for his discovery—provided the occasion for the introduction of Teflon bakeware to the public: each guest at the banquet went home with a Teflon-coated muffin tin.
<urn:uuid:0b8f571c-a249-4c01-9b3a-6ebe34815c6d>
3.3125
683
Knowledge Article
Science & Tech.
37.662211
Now that you're comfortable using the MySQL client tools to manipulate data in the database, you can begin using PHP to display and modify data from the database. PHP has standard functions for working with the database. First, we're going to discuss PHP's built-in database functions. We'll also show you how to use the PEAR database functions, which provide the ability to use the same functions to access any supported database. This type of flexibility comes from a process called abstraction: the information you need to log into a database is placed into a standard format, and that standard format allows you to interact with MySQL as well as other databases in the same way. Similarly, MySQL-specific functions are replaced with generic ones that know how to talk to many databases.

In this chapter, you'll learn how to connect to a MySQL server from PHP, how to use PHP to access and retrieve stored data, and how to correctly display information to the user. The basic steps of performing a query, whether using the mysql command-line tool or PHP, are the same:
1. Connect to the database.
2. Select the database to use.
3. Build a SELECT statement.
4. Perform the query.
5. Display the results.

We'll walk through each of these steps for both plain PHP and PEAR functions. When connecting to a MySQL database, you will use two new resources. The first is the link identifier, which holds all of the information necessary to connect to the database for an active connection. The other is the result resource, which contains all the information required to retrieve results from an active database query's result set. You'll be creating and assigning both resources in this chapter.
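As a preview of those five steps, here is a minimal sketch using PHP's classic mysql_* functions (the host, credentials, database, and table names are placeholders, not from the book):

    <?php
    // 1. Connect to the database server.
    $link = mysql_connect('localhost', 'username', 'password')
        or die('Connection failed: ' . mysql_error());

    // 2. Select the database to use.
    mysql_select_db('library', $link)
        or die('Could not select database.');

    // 3. Build a SELECT statement.
    $query = 'SELECT title, pages FROM books';

    // 4. Perform the query, getting back a result resource.
    $result = mysql_query($query, $link)
        or die('Query failed: ' . mysql_error());

    // 5. Display the results, row by row.
    while ($row = mysql_fetch_assoc($result)) {
        echo htmlspecialchars($row['title']) . ': ' . (int) $row['pages'] . "<br />\n";
    }

    mysql_free_result($result);
    mysql_close($link);
    ?>

Note how $link (the link identifier) and $result (the result resource) correspond to the two resources described above.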
<urn:uuid:c21bbd34-5b0b-4017-90f1-4b9d4a0ca4df>
3.140625
343
Tutorial
Software Dev.
52.928786
Why Does Copper Turn Green?

Copper turns green because of chemical reactions with the elements, for the same reason that iron rusts. Just as iron that is left unprotected in open air will corrode and form a flaky orange-red outer layer, copper that is exposed to the elements undergoes a series of chemical reactions that give the shiny metal a pale green outer layer called a patina. The patina actually protects the copper below the surface from further corrosion, making it a good waterproofing material for roofs (which is why the roofs of so many old buildings are bright green). In fact, the weathering and oxidation of the Statue of Liberty's copper skin has amounted to just 0.005 of an inch over the last century, according to the Copper Development Association.
<urn:uuid:74fec3df-1ff7-4026-9308-fe6758e0d9e2>
3.546875
169
Knowledge Article
Science & Tech.
51.391467
Published in Lunar and Planetary Science XXVII, pp. 1183-1184, LPI, Houston.

Introduction: Crustal processes and reactions during hydrothermal and biogenic activity result in extreme degrees of sulfur isotopic fractionation on Earth. For example, delta 34S in terrestrial sulfides ranges from -70 to +70 on Earth. In contrast, delta 34S values for sulfides from other planetary bodies that have been sampled (Moon, asteroids) show very limited mass fractionation. The standard deviation in the bulk isotopic composition of sulfur in meteorites of all types is less than 0.1. However, the isotopic composition of sulfides in meteorites shows slightly more variability. Troilite in Orgueil, a carbonaceous chondrite, has a delta 34S of 2.6. Kaplan and Hulston showed that sulfides in enstatite chondrites have delta 34S of between +1.6 and +2.5. The delta 34S in troilite from ordinary chondrites ranges from -2.7 to +2.5. The slight fractionation of 34S into these sulfides has been attributed to nebular heterogeneity, low temperature (100°C) reactions between water and elemental sulfur, and oxidation of FeS in an aqueous environment [2,4,5]. Lunar materials exhibit a much broader variation in bulk delta 34S than has been observed in meteorites. Whereas bulk lunar rocks show variability on the order of +0.37 to +0.68, lunar soils have delta 34S as high as +9.76. These high values in the bulk lunar soils have been attributed to preferential volatilization of 32S during sputtering caused by micrometeorite bombardment. Until now, discussion of S fractionation processes on the larger terrestrial planets such as Mercury, Venus, and Mars has been only speculative. With the discovery of a possible Martian meteorite bearing an imprint of a Martian hydrothermal system, we can gain insights into S fractionation on another planet.

SNC Meteorite ALH 84001: ALH 84001 is a coarse-grained, clastic orthopyroxenite meteorite related to the SNC meteorite group. A hydrothermal signature is superimposed upon the orthopyroxene-dominant igneous mineral assemblage. This hydrothermal overprint consists of carbonate assemblages occurring in spheroidal aggregates and as fine-grained carbonate and sulfide vug-filling structures [7-10]. The sulfide has been identified as pyrite. Textural interpretations of shock features in the carbonates have led to the interpretation that the carbonate-sulfide mineralization was a result of influxes of fluids during Martian hydrothermal activity.

Isotopic Analysis of Pyrite in ALH 84001: The sulfur isotopic measurements were made using a Cameca IMS-4f ion microprobe operated by a University of New Mexico-Sandia National Laboratory consortium on the UNM campus. A Cs+ primary beam was focused to a spot of between 8 and 10 µm. 32S- and 34S- were analyzed in the secondary ion beam. A S-isotope pyrite standard was analyzed in order to measure the degree of instrument-induced fractionation, precision, accuracy, and instrument drift over the period of an analytical session. The analytical precision measured on the standards is better than ±0.2, whereas the analytical precision measured on the samples is better than ±0.5. These reported precision values far exceed those reported in the literature for ion microprobe analysis of sulfur isotopes in sulfides [5,10].

Results: delta 34S values for five pyrite grains were obtained from ALH 84001. Values for the pyrite range from +4.8 to +7. These delta 34S values are enriched in 34S relative to Cañon Diablo troilite.
Based on the 2-sigma precision, there are real isotopic differences among pyrite grains.

Discussion: Sulfur isotopic characteristics of sulfides are constrained by a large number of variables, such as the sulfur isotopic characteristics of the hydrothermal fluid, temperature, pH, and fO2. The stability field of pyrite also influences the range of expected delta 34S values of the pyrite. Therefore, although sulfur isotopic systematics provide some information concerning the hydrothermal system, they are best used in conjunction with other data (mineral stability, other stable isotopes). In comparison with sulfides from other meteorites, the delta 34S of the pyrite from ALH 84001 is enriched in 34S. This signature implies that the planetary body represented by ALH 84001 experienced processes capable of fractionating S isotopes that were not operating on the asteroidal bodies represented by chondrite and achondrite meteorites. As was noted previously, terrestrial delta 34S exhibits wide variability. In particular, the large negative values in terrestrial delta 34S have been attributed, in many cases, to the bacterial reduction of sulfate to sulfide. The positive delta 34S measured in the ALH 84001 pyrite therefore suggests that the sulfur in this hydrothermal sulfide was not processed by bacteria in a manner analogous to terrestrial processes. The positive delta 34S measured in the ALH 84001 pyrite may be attributed to several different processes that may be operating on the Martian surface or in the shallow Martian crust:

Model 1: Assuming that the delta 34S in the fluid was essentially 0, the pyrite may be enriched in delta 34S by the pH, temperature, and fO2 conditions during precipitation. The pH and the fugacity of oxygen may be approximated using the delta 34S data presented here, delta 13C data on the carbonates, a relatively low total S (Sigma S), a temperature of precipitation of ~100°C, and the coexistence of pyrite and carbonate. Making these assumptions, precipitation occurred in a reduced and moderately alkaline environment with the dominant sulfur-bearing species in solution being HS-. At higher temperatures (~700°C), as suggested by [7,9], the delta 34S of pyrite in the stability fields of carbonate + pyrite will not have values that approach +5 to +8.

Model 2: The above interpretation makes the assumption that the delta 34S in the fluid was equal to 0. At more acidic conditions than suggested above (but at the same reducing conditions), delta 34S will not be strongly fractionated during pyrite precipitation from an aqueous solution. Therefore, under these conditions, the pyrite will approximate the delta 34S in the fluid. There are several potential processes that can generate positive delta 34S in the fluid under these pH and fO2 conditions:

(2a) Previous isotopic studies of SNC meteorites indicated that the present Martian atmosphere is isotopically heavy in O, C, N, and H. Therefore, it is perhaps not surprising that other stable isotopes in the Martian atmosphere, such as S, should also be isotopically heavy.

(2b) Alternatively, it has been documented that during lunar regolith formation and evolution, the bulk delta 34S increases. Therefore, impact-generated hydrothermal system models may result in the preferential volatilization of 32S relative to 34S during impact.
(2c) Assessments of Martian soil mineralogy based on both Viking XRF measurements and SNC documentation have suggested that phases such as clays, Fe-oxides, carbonates, and Ca- and Mg-sulfates will be stable in the oxidizing Martian environment [i.e., 15]. It is expected that under such weathering environments, particularly with the stabilization of sulfates, 34S should be enriched in water-soluble components (Ca- and Mg-sulfates) in the soil. Leaching of the 34S-enriched water-soluble minerals in Martian soil produced by processes 2a, 2b, and 2c will result in a positive delta 34S in the fluid. Model (2a,b,c) implies that the source of the sulfur is rather shallow and that this groundwater-hydrothermal system is in isotopic communication with processes occurring at the Martian surface. Under this second model, the temperature of precipitation cannot be constrained by the sulfur data.

Conclusions: Our data indicate that the sulfur isotopes 32S and 34S in the sulfides in meteorite ALH 84001 have been fractionated to a greater extent than has been documented in other meteorites. This, in itself, is another piece of information linking this orthopyroxenite to a planetary body that has experienced processes not present on chondrite and achondrite parent bodies. Mineralogical data suggest that the alteration assemblages were deposited under reducing conditions and that SO42- was not a dominant species in the solution. Therefore, the extent of sulfur isotopic fractionation during pyrite precipitation from the hydrothermal solution was moderate at alkaline conditions (delta 34S of the fluid < delta 34S of the pyrite) to minor at low pH conditions (delta 34S of the fluid = delta 34S of the pyrite). This suggests two different models for the generation of positive delta 34S in the pyrite. If the pyrite precipitated at low temperature (100°-150°C) under reducing conditions and high pH (<9), a fluid with delta 34S equal to 0 would precipitate pyrite with delta 34S between 5 and 8. Under more acidic conditions, the delta 34S of the fluid will be equal to that of the pyrite; this requires the positive delta 34S signature of the fluid to be produced prior to pyrite deposition. The positive delta 34S in the fluid may be attributed to upper-atmospheric processes, impact processes, or low-temperature weathering reactions enriching the soil in 34S. These components may then be leached and their delta 34S signature transported to the location of precipitation. This process requires isotopic communication between the hydrothermal system and the Martian surface. If the isotopic signature of the sulfide reflects communication with surficial-atmospheric processes, it may constrain additional aspects of Martian atmosphere evolution.

References: Ohmoto H. and Rye R. O. (1979) in Geochemistry of Hydrothermal Ore Deposits (ed. Barnes H. L.), pp. 509-567. Pillinger C. T. (1984) Geochim. Cosmochim. Acta, 48, 2739-2766. Monster J. et al. (1965) Geochim. Cosmochim. Acta, 29, 773-779. Kaplan I. R. and Hulston J. R. (1965) Geochim. Cosmochim. Acta, 30, 479-496. Paterson B. A. et al. (1994) Lunar and Planetary Science XXV, 1057-1058. Kerridge J. F. and Kaplan I. R. (1978) Proc. Lunar Planet. Sci. Conf. 9th, 1687-1709. Mittlefehldt D. W. (1994) Meteoritics, 29, 214-221. Romanek C. S. et al. (1995) Meteoritics, 30, 567-568. Harvey R. P. and McSween H. Y. (1995) Lunar and Planetary Science XXVI, 555-556. McKibben M. A. and Eldridge C. S. (1995) Economic Geology, 90, 228-245. Rye R. O. and Ohmoto H. (1974) Economic Geology, 69, 826-842. Romanek C. S. et al.
References:
1. Ohmoto H. and Rye R. O. (1979) in Geochemistry of Hydrothermal Ore Deposits (ed. Barnes H. L.), pp. 509-567.
2. Pillinger C. T. (1984) Geochim. Cosmochim. Acta, 48, 2739-2766.
3. Monster J. et al. (1965) Geochim. Cosmochim. Acta, 29, 773-779.
4. Kaplan I. R. and Hulston J. R. (1965) Geochim. Cosmochim. Acta, 30, 479-496.
5. Paterson B. A. et al. (1994) Lunar and Planetary Science XXV, 1057-1058.
6. Kerridge J. F. and Kaplan I. R. (1978) Proc. Lunar Planet. Sci. Conf. 9th, 1687-1709.
7. Mittlefehldt D. W. (1994) Meteoritics, 29, 214-221.
8. Romanek C. S. et al. (1995) Meteoritics, 30, 567-568.
9. Harvey R. P. and McSween H. Y. (1995) Lunar and Planetary Science XXVI, 555-556.
10. McKibben M. A. and Eldridge C. S. (1995) Economic Geology, 90, 228-245.
11. Rye R. O. and Ohmoto H. (1974) Economic Geology, 69, 826-842.
12. Romanek C. S. et al. (1994) Nature, 372, 655-656.
13. Wentworth S. J. and Gooding J. L. (1995) Lunar and Planetary Science XXVI, 1489-1490.
14. Jakosky B. M. (1993) Geophys. Res. Lett., 20, 1591-1594.
15. Gooding J. L. et al. (1988) Meteoritics, 26, 135-143.
<urn:uuid:bccae671-f765-40b4-9299-ad77496383ea>
3.390625
2,578
Academic Writing
Science & Tech.
44.855372
Functions in Lisp
Column Tag: Lisp Listener "Functions in Lisp"
By Andy Cohen, Human Factors Engineering, Hughes Aircraft, MacTutor Contributing Editor

As you may recall from the first installment of the Lisp Listener, a procedure is a description of an action or computation. A primitive is a predefined or "built-in" procedure (e.g. "+"). As in Forth, Lisp can have procedures which are defined by the programmer. DEFUN, from DEfine FUNction, is used for this purpose. The syntax for DEFUN in ExperLisp is as follows:

(DEFUN FunctionName (symbols)
(All sorts of computations which may or may
not use the values represented by the symbols))

The function name is exactly that. Whenever the name is used, the defined procedure associated with that function name is performed. The symbols are values which may or may not be required by the procedures within the defined function. If required, the values must follow the function name. When given, these values are assigned to the symbols. This is similar to the way values are assigned to a symbol when using SETQ. It is easier to see how DEFUN works when observed within an example:

;(DEFUN Reciprocal (n) (/ 1 n))

The word "Reciprocal" is the function name and the numbers following are the values for which the reciprocal (1/n) is found. After the list containing DEFUN is entered and the carriage return is pressed, the function and its name are assigned a location in memory. The function name is then printed in the Listener window.

;(DEFUN Square (x) (* x x))
;(DEFUN Cubed (y) (* y (* y y)))
;(DEFUN AVERAGE (W X Y Z) (/ (+ W X Y Z) 4))
;(Average 2 3 4 5)

You might recognize "Average" from last month's Lisp Listener. One might imagine using defined functions inside other defined functions, as in the sketch below.
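The following example is mine, not the column's; it is a minimal sketch of one defined function calling another, reusing the Square function defined above:

;(DEFUN SumOfSquares (a b) (+ (Square a) (Square b)))
;(SumOfSquares 3 4)

Entering the second line in the Listener window should print 25, since (Square 3) returns 9 and (Square 4) returns 16.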
If it were possible to have variables which have the same values in each procedure, then the version of Lisp used has what is called dynamic scoping. In this context the values of the variables are determined by the Lisp environment which is resident when the procedure is called. ExperLisp, however, is lexically scoped. That means that variable values are local to each procedure. Two defined procedures can use the same labels for variables, but the values will not be considered the same. Each variable is defined locally. This is in accordance with the Common Lisp standard. Lexical scoping makes it easier to debug someone else's programs. If you don't know what I mean yet, don't worry. This subject will come up again in more detail later.

If no values are required by the defined function then "nil" or an empty list must follow the function name:

;(DEFUN Line () ... )

The empty list obviously contains no atoms (I'll describe the above function, "Line", later in the section on bunnies). It is synonymous with the special term nil, which is considered by Lisp as the opposite of T or True. Nil is used in many other contexts:

;(cddr '( one two))

In the above, the first cdr returns the list (two). The second cdr returns nothing, hence "nil". The values of true and false are returned by procedures called predicates. While nil represents a false condition, anything other than nil, including "T", is generally considered true. Please note that I used lowercase letters in the above. ExperLisp recognizes both upper and lowercase. I've been using uppercase only to make it clear within the text when I'm referring to Lisp code.

EQUAL is a predicate which checks the equality of two arguments. Note the arguments can be integers or symbols. If the two arguments are equal then "T" is returned. If they are not equal then "nil" is returned.

;(EQUAL 'try 'try)
;(EQUAL 6732837 6732837)
;(EQUAL 6732837 6732833)
;(EQUAL 'First 'Second)

ATOM checks to see if its argument is a list or an atom. Remember, the single quote is used to indicate that what follows is not evaluated, as in the case of a list. Symbols are evaluated.

;(ATOM 'thing)
;(ATOM thing)
;(ATOM '(A B C D))

In the first of the above, 'thing is an atom due to the single quote. In the second, thing is considered a symbol. A symbol is evaluated and contains a value or values as a list. In the third, (A B C D) is obviously a list.

LISTP checks if its argument is a list.

;(LISTP '( 23 45 65 12 1))
;(SETQ babble '(wd ihc wi kw))
;(LISTP babble)

One interesting observation is that nil is both an atom and a list, () = nil. Therefore ATOM and LISTP both return true for nil. When one needs to know if a list is empty, NULL does the job.

;(NULL '(X Y Z))

NUMBERP checks if the argument that follows is or represents a number rather than a string.

;(SETQ fifty-six '(56))

Now for a real slick one. MEMBER tests whether or not an argument is a part of a list. An easy demonstration follows:

;(MEMBER 'bananas '(apples pears bananas))
;(apples pears bananas)
;(MEMBER 'grapes '(apples pears bananas))

When the argument is a member, then the contents of the list are given. If not, then nil is returned. MEMBER also checks symbols of lists.

;(SETQ fruit '(apples grapes pears))
;(MEMBER 'grapes fruit)
;(apples grapes pears)
;(MEMBER 'banana fruit)

EVENP tests to see if an integer is even and MINUSP checks if an integer is negative. ODDP and PLUSP are not needed since they are simply the opposites of the first two.

;(EVENP (- 806 35))
;(MINUSP (- 34 86))

In the examples above, the inner expressions are evaluated before the predicate is applied (806-35=771 and 34-86=-52). There are a few more simple predicates such as NOT, <, >, and ZEROP. I'll discuss them along with conditionals next month.

Now for something completely different. If you've ever learned Logo, the concept of Bunny graphics should sound familiar. As mentioned last month, the Bunny is ExperTelligence's version of the Turtle. All one needs to do in order to make a Bunny move is to tell it to. FORWARD X initially moves the Bunny upwards on the screen for 'X' display pixels. A negative number initially moves it down. When one enters the following in the Listener window, the default graphics window (I'll discuss windows in more detail very soon in future installments) is then opened and a line is drawn:

;(FORWARD 50)

RIGHT X aims the front of the line to the right by X degrees. If one then uses FORWARD again the line moves in a different direction. For example:

;((RIGHT 50) (FORWARD 50))

or better yet:

;(RIGHT 50) (FORWARD 50)

After a line is moved, the end of the line remains where it was. If one made the Bunny move again, the beginning of the new line would begin where the old left off. The original starting point is the graphics window default home position. This position is in the center of each graphics window when the window is first created. In order to return the Bunny to the original starting point one must use HOME. The following produces a much neater triangle:

(DEFUN Triangle ()
(Penup) (Left 45) (Forward 10) (Pendown)
(Right 90) (Forward 25)
(Right 90) (Forward 50)
(Right 135) (Forward 71)
(Right 135) (Forward 25))

After the above is typed into the edit buffer, the "Compile All" selection should be chosen from the Menu Bar.
The source code in the Edit Buffer quickly inverts to white letters on a black background, as if the whole file was selected for a moment. The function name "Triangle" is then printed in the Listener window. If the user enters "(Triangle)" in the Listener window, a different triangle is drawn in the default Graphics Window.

If you look at the code in Triangle you will see a couple more Bunny commands. LEFT does the same as RIGHT but in the opposite direction. PENUP raises the Bunny's pen so that when the Bunny moves no lines are drawn. PENDOWN returns the Bunny to the drawing orientation. The first line of code in "Triangle" puts the Bunny off the Home position so that the drawn triangle will be centered on the screen. As mentioned earlier, the orientation of the Bunny remains. The last line of code in "Triangle" left the Bunny aimed at about 1:00 rather than the initial position, 12:00. If we were to make "Triangle" execute ten times without eliminating the Graphics Window, ten rotated copies of the triangle would result.

In getting "Triangle" to execute, recompilation of the code in the edit buffer is not necessary. To get the above one can type the function name into a list ten times within the Listener window. The following, however, is easier:

;(Dotimes (a 10) (Triangle))

DOTIMES is very similar to the FOR...NEXT looping routine in BASIC. I'll discuss it next month in a description of iteration and recursion in ExperLisp.
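In the meantime, a quick sketch of the general shape (mine, not the column's):

;(DOTIMES (counter count) body)

The body forms are evaluated count times while counter steps from 0 up to count-1. For example,

;(DOTIMES (a 3) (Forward 10) (Right 120))

repeats the two Bunny moves three times, drawing a small triangle.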
If we wanted to use a three dimensional Bunny, then the following would be added before "Triangle" in the Edit Buffer window:

(SETQ curbun (new3dbun))
(Pitch 30) (Yaw 45) (Roll 50)

After the source code is recompiled and "(Triangle)" is entered into the Listener Window, the triangle is drawn in a new, rotated orientation. CURBUN is a special symbol in ExperLisp which always refers to the Bunny cursor. NEW3DBUN is a special term which changes CURBUN. The default Bunny is two dimensional. If one wanted the spherical Bunny, then the following would be entered at the beginning of the first version of "Triangle":

(SETQ curbun (newspbun))

This would then produce the corresponding spherical-Bunny drawing. In order to have the drawing appear in a different orientation, a different Bunny direction would be required.

Windows, two and three dimensional Bunny graphics, and toolbox graphics use the same X,Y coordinate system. Home is 0,0. Dual negative coordinates are situated towards the upper left corner. Dual positive coordinates are situated towards the lower right corner. The range is +32767 to -32768 for each dimension. In ExperLisp one can sometimes use the third dimension, as in the 3D sample of "Triangle". Negative Z values are behind Home, while positive Z values are in front.

The ExperLisp disk contains three essential files: Compiler, LispENV and ExperLisp. Compiler is not actually the entire Lisp compiler. It contains the information needed in generating all of the higher level Lisp syntactics, such as the Bunny graphics. LispENV stands for Lisp Environment; it is generated by Compiler. LispENV contains information on how the Macintosh memory was organized by the programmer and ExperLisp during the previous session. It also contains information on the system configuration, such as the number of disk drives, the amount of memory, etc. Sometimes LispENV can be messed up (e.g., by changing the variable table). When this happens one might not be able to start ExperLisp. In this case LispENV should be removed from the disk. Afterward, when ExperLisp is opened, Compiler generates a new LispENV. Compiler is not needed on the disk unless the LispENV is ruined. Deleting it will provide 100K more space on the disk. Before eliminating it from the disk, however, be sure you have a backup, as it is an essential file.

The ExperLisp file contains the assembly language routines which represent the lower level Lisp routines like CAR and CDR. It also allows access to the Macintosh toolbox routines and contains the Listener Window. One opens the ExperLisp file in starting a programming session with ExperLisp. Another file on the disk is automatically loaded and activated when ExperLisp is booted. It is labeled ªlispinit. The contents of this file can be added to, so that when one boots up ExperLisp a program can be automatically executed. It can also do automatic configurations. However, the existing contents of ªlispinit should not be changed, since they configure the Macintosh memory for ExperLisp.

Next month I'll discuss a few more predicate procedures. I also hope to start discussing iteration, recursion and conditionals. If there is enough room left over I might also begin discussing how to access the toolbox graphics.
<urn:uuid:9ed9dd59-1ad8-422d-be1a-e1d062d262b6>
3.53125
2,807
Documentation
Software Dev.
54.579133
A tree-killing invasive insect, the hemlock woolly adelgid (HWA), was found for the first time in Indiana on a landscape tree in LaPorte County in mid-April. Since its introduction to the Eastern United States in the mid-1920s, the HWA has infested about half the native range of Eastern hemlock. In certain areas of the Great Smoky Mountains, as many as 80 percent of the hemlocks have died due to infestation.

The finding of the tiny aphid-like insect, which destroys native hemlocks by feeding on the tree sap at the base of the needles, was confirmed by the USDA Animal Plant Health Inspection Service (APHIS). The insect was identified on a single hemlock as a result of a homeowner's report. The infested tree may have originated from a landscape planting in Michigan and been brought into Indiana about five years ago. Preliminary searches have revealed no other infested trees in the area, but an extensive survey is underway.

"Fortunately, this find occurred outside of the native range of hemlock trees in Indiana, which greatly increases our chances of preventing spread to them," said Phil Marshall, state entomologist for the DNR.

In Indiana, forests containing hemlocks are scattered throughout the west central and southern half of the state. Evergreen hemlock trees dot the steep slopes along Big Walnut Creek in Putnam County, relics of an earlier, cooler climate. The Nature Conservancy and the DNR Division of Nature Preserves own and manage over 2,000 acres along this creek to protect the hemlock trees, as well as the rest of the forested land.

"It's hard to imagine losing this species from Indiana's forests," said Chad Bladow, Director of Southern Indiana Stewardship. "There are already few places in the state where visitors can see hemlocks, and HWA could eliminate all of them."

Other Indiana sites which are well-known for having eastern hemlock include Turkey Run State Park and Shades State Park in Parke County and Hemlock Cliffs in Hoosier National Forest in Crawford County. The Conservancy has acquired lands to help expand each of these sites.

HWA is easily spread by wind and by movement on birds and mammals such as deer, but most rapidly as a hitchhiker on infested horticultural material. The best way to protect hemlocks in Indiana from HWA is to simply not buy or plant hemlocks. "Purchasing plant materials from areas of known HWA infestation are very likely to provide the source of any potential infestation in Indiana," said Tom Swinford, regional ecologist for the DNR, noting that not every tree is inspected to guarantee it is not infested. "We should do everything we can to protect our unique and beautiful eastern hemlock trees in Indiana. A visit to the Smoky Mountains shows just how sad and devastating this scourge can be."

"HWA will be very destructive if it reaches our native hemlocks, but the more people who become aware of the dangers of moving plant material and firewood over long distances, the better chance we have at protecting our forests," Marshall said.

The Conservancy works to prevent invasive species from taking hold in Indiana. "Prevention is the best medicine when it comes to invasive species," notes Ellen Jacquart, Director of Northern Indiana Stewardship and coordinator for invasive species issues for the Conservancy in Indiana.
“Don’t buy hemlock for landscaping – choose another native tree instead, and help make sure our native hemlock stands survive.” Named for the cottony covering over its body, HWA somewhat resembles a cotton swab attached to the underside of young hemlock twigs. Within two years, its feeding causes graying and thinning of needles. Highly infested trees will stop putting on new growth, and major branches die, beginning in the lower part of the tree. Eventually the whole tree is killed. If you suspect an HWA infestation, call the Indiana DNR Invasive Species Hotline at 1-866-NO-EXOTIC. The Nature Conservancy is a leading conservation organization working around the world to protect ecologically important lands and waters for nature and people. The Conservancy and its more than 1 million members have protected nearly 120 million acres worldwide. Visit The Nature Conservancy on the Web at www.nature.org.
<urn:uuid:41c9855e-64d1-4ba6-9dee-73b103d70bef>
3.171875
947
Knowledge Article
Science & Tech.
40.973803
Photonics: Sensing on the way
Published online 01 August 2012

Hollow optical fibers containing light-emitting liquids hold great promise for biological sensing applications.

Figure: Schematic illustration of a hollow fiber. The chemiluminescent liquid in the core (yellow) is guided through the fiber, also with the help of further hole structures (dark blue).

Processing biological samples on a small substrate the size of a computer chip is becoming a common task for biotechnology applications. Given the small working area, however, probing samples on the substrate with light can be difficult. To address this issue, Xia Yu and co-workers at the A*STAR Singapore Institute of Manufacturing Technology have now developed an optical fiber system that is able to deliver light to microfluidic chips with high efficiency [1]. "Our compact optical fibers are designed for use with high-throughput detection systems," says Yu. "They are ideal for use in space-restrictive locations."

A common way of probing biological samples is by light. In this method, the sample is excited by an external light source and the light emitted in response is detected, which provides a unique fingerprint of the substance. Conventional techniques are able to deliver light to samples and probe the response, but they are not very efficient at probing a small sample volume. A solution to this is to use optical fibers that are able to guide light to small spaces. The drawback with this technique, however, has been that it can be difficult to couple the external probe light into the optical fiber with sufficient efficiency.

Yu and her co-workers have now circumvented this problem by using optical fibers with a hollow core (see image). The empty hollow core can be filled with liquids — in this case, with chemiluminescent solutions. The liquid is important to promote the transport of light through the core. In addition, these solutions consist of two liquids that, when brought together, initiate a chemical reaction that emits light. If such a solution is placed directly within the hollow core, the problem of coupling light into the fiber is circumvented. This not only avoids the need for external light sources but also builds on an established technology. "The use of chemical luminescence is a common technique for a variety of detection assays in biology," says Yu. "By incorporating the emission mechanism into optical fibers, we can use it as a light source for sensing applications in microfluidics systems."

First tests for such sensing applications are already underway, although some challenges remain. For example, there might be losses if the emitted light is not perfectly confined within the fiber. Such problems can be solved through improved fiber designs and an appropriate choice of materials, and applications of these fibers for microfluidic systems are promising.

The A*STAR-affiliated researchers contributing to this research are from the Singapore Institute of Manufacturing Technology.

Reference:
1. Yu, X. et al. Chemiluminescence detection in liquid-core microstructured optical fibers. Sensors and Actuators B: Chemical 160, 800-803 (2011).
<urn:uuid:512b4ec2-ba1a-4c6f-8ec8-d09831c1eed6>
3.125
637
Academic Writing
Science & Tech.
32.100344
By John Fleck
Web edition: February 12, 2010
Print edition: February 27, 2010; Vol. 177 #5 (p. 30)

Young adults can learn how scientists use tree rings to document climate change. University of New Mexico Press, 2009, 91 p., $21.95.
<urn:uuid:d86faf02-af22-4034-9716-90c2739df0e6>
3.125
136
Truncated
Science & Tech.
80.627273
Hot Sites and Cool Books

Recommended Web sites:
Information about the 2006 dinosaur dig at the 5E Ranch can be found at www.montanadinosaurdigs.com/sauro.htm (Judith River Dinosaur Institute).
Perkins, Sid. 2006. Bone hunt. Science News 170 (Aug. 26): 138-140. Available at http://www.sciencenews.org/articles/20060826/bob10.asp.

Books recommended by SearchIt!Science:

The Fossil Factory: A Kid's Guide to Digging Up Dinosaurs, Exploring Evolution, and Finding Fossils
Niles Eldredge
Published by Addison-Wesley Publishing Co., 1989.
If you think that fossils are dinosaur bones, you're partly right. There are fossils of lots of other things, too: grains of pollen, sea creatures, even human beings! How can you find fossils on your own? With black-and-white, cartoon-style drawings and a humorous writing style, a world-famous scientist and his teenage sons explain how fossils are formed, where you can find them, and how to take care of them. Along the way, they also offer a few chuckles as well as fascinating information about the history of life on Earth, the way rocks and continents formed, and what Earth was like during the age of the dinosaurs. Twelve activities, including instructions for making a plaster cast of your own footprint, are featured, too, along with step-by-step diagrams. At the end, a timeline shows how life forms evolved over millions of years.

Armored, Plated, and Bone-Headed Dinosaurs: The Ankylosaurs, Stegosaurs, and Pachycephalosaurs
Thom Holmes
Published by Enslow Publishers, 2002.
What are the origins of these spiny, armor-plated dinosaurs? What were their feeding habits? How did they defend themselves? Explore the anatomy and physiology of these creatures that are now extinct.

From The American Heritage® Student Science Dictionary and The American Heritage® Children's Science Dictionary:

estuary: The wide lower end of a river where it flows into the sea. The water in estuaries is a mixture of fresh water and salt water.

fossil: The hardened remains or traces of a plant or animal that lived long ago. Fossils are often found in sedimentary rocks.

paleontology: The scientific study of life in the past, especially through the study of fossils.

sauropod: One of the two types of saurischian dinosaurs, widespread during the Jurassic and Cretaceous Periods. Sauropods were plant-eaters and often grew to tremendous size, having a stout body with thick legs, long slender necks with a small head, and long tails. Sauropods included the apatosaurus (brontosaurus) and brachiosaurus.

sedimentary rock: A rock that is formed when sediment, such as sand or mud, becomes hard. Sedimentary rocks form when sediments are collected in one place by the action of water, wind, glaciers, or other forces, and are then pressed together. Limestone and shale are sedimentary rocks.

stegosaurus or stegosaur: Any of several plant-eating dinosaurs of the Jurassic and Cretaceous Periods. Stegosaurus had a spiked tail and an arched back with a double row of large, triangular, upright, bony plates. Although stegosaurs grew to 20 feet (6.1 meters) in length, they had tiny heads with brains the size of a walnut.

Copyright © 2002, 2003 Houghton Mifflin Company. All rights reserved. Used with permission.
<urn:uuid:2adbea19-6442-400d-895e-a52d103f9c60>
2.953125
780
Content Listing
Science & Tech.
50.30494
In A.D. 79 Mount Vesuvius erupted, annihilating the cities of Pompeii and Herculaneum and killing thousands who did not evacuate in time. To avert a similar fate for present-day Naples, which lies six miles west of the still active Vesuvius, as well as for the cities near volatile Mount Etna in Sicily, a novel laser system could soon forecast volcanic eruptions up to months in advance.

Current methods to predict eruptions have downsides. Seismometers can monitor tremors and other ground activity that signal a volcano's awakening, but their readings can prove imprecise or complicated to interpret. Scanning for escaping gases can reveal whether magma is moving inside, but the instruments used to analyze such emissions are often too delicate and bulky for life outside a laboratory. "You have to collect samples from the volcano, bring them to a lab, and often wait through backlogs of weeks to months before analysis," explains Frank Tittel, an applied physicist at Rice University.

A more promising technique for early detection focuses on changes in carbon isotopes in carbon dioxide. The ratio between carbon 12 and carbon 13 is roughly 90 to one in the atmosphere, but it can differ appreciably in volcanic gases. A ratio change by as little as 0.1 part per million could signal an influx of carbon dioxide from magma either building under or rising up through the volcano. Lasers can help detect this change: carbon 12 and carbon 13 absorb light at slightly different mid-infrared wavelengths. The lasers must continuously tune across these wavelengths. Previously investigators used lead-salt lasers, which require liquid-nitrogen cooling and thus are impractical in the field. Furthermore, they are low-power devices, generating only millionths of a watt, and can emit frequencies in an unstable manner. Other isotope scanning techniques are similarly lab-bound.

Tittel and other scientists in the U.S. and Britain, in partnership with the Italian government, have devised a volcano-monitoring system around a quantum-cascade laser. Such a semiconductor laser can produce high power across a wide frequency range. Moreover, these lasers are rugged and do not require liquid-nitrogen cooling, making them compact enough to fit inside a shoe box. The researchers first tried out their device on gas emissions from Nicaraguan craters in 2000. The new field tests will check its performance and accuracy in harsh volcanic locales. Dirk Richter, a research engineer at the National Center for Atmospheric Research in Boulder, Colo., says it would prove difficult to design a system "to work in one of the worst and most challenging environments possible on earth," but "if there's one group in the world that dares to do this, that's Frank Tittel's group."

If the instrument works, the plan is to deploy early-warning systems of lasers around volcanoes, with each device transmitting data in real time. False alarms should not occur, because carbon isotope ratios in magma differ significantly from those in the crust. The changes that the laser helps to detect also take place over weeks to months, providing time to compare data from other instruments, as well as ample evacuation notice. "Our system aims at avoiding a catastrophe like the Vesuvius eruption," says team member Damien Weidmann, a physicist at the Rutherford Appleton Laboratory in Oxfordshire, England.
Field tests for the prototype are planned for the spring of 2005 in the volcanic Alban Hills region southeast of Rome, near the summer home of Pope John Paul II, as well as for volcanic areas near Los Alamos, N.M.

This article was originally published with the title "Volcanic Sniffing."
<urn:uuid:9309d171-e9f0-44a8-9de3-926e353cd4e3>
3.90625
748
Truncated
Science & Tech.
40.471455
Altimeter: An instrument to measure the altitude of an object above a fixed level. Generally, mean sea level is used for the reference level.

Altocumulus: Mid-level cloud (bases generally 2000-8000 m), made up of grey, puffy masses, sometimes appearing in parallel waves or bands. An indicator of mid-level instability. Altocumulus can take on various forms such as Ac Lenticularis, Ac Undulatus, Ac Castellanus, Altocumulus 'mackerel sky'.

Altocumulus castellanus: A middle-level cloud with vertical development that forms from altocumulus clouds. It is composed primarily of ice crystals in its higher portions and characterised by its turrets, protuberances or crenulated tops.

Altostratus: Mid-level cloud composed of water droplets and ice crystals. Usually gives the sun a watery or dimly visible appearance.

Anabatic wind: A local wind that flows up the side of valleys due to increased heating along the valley walls. Often the anabatic wind results in cumulus clouds along the ridges either side of the valley. See also Katabatic winds.

Anemometer: A device used to measure wind speed.

Anomaly: The departure of an element from its long-term average for the location concerned. For example, if the average maximum temperature for Melbourne in June is 14 degrees and on one particular day the temperature only reaches 10 degrees, then the anomaly for that day is -4.

Anticyclone: A large scale atmospheric circulation system in which the winds rotate anticlockwise in the Southern Hemisphere (clockwise in the Northern Hemisphere). Anticyclones are areas of high atmospheric pressure and are generally associated with light winds and stable weather conditions. Interchangeable with High pressure system.

Anticyclonic: Rotation in the opposite sense to the Earth's rotation, i.e., anticlockwise in the Southern Hemisphere (clockwise in the Northern Hemisphere).
<urn:uuid:1f2052ca-130e-43ae-ae93-421e6e25982d>
3.15625
366
Structured Data
Science & Tech.
24.921433
Gases Review Test

[Several answer choices and chemical formulas appeared as images in the original page and are missing from this text.]

- Which of the following is a correct interpretation of the ideal gas law?
- What is the correct relationship between
- An isolated container of gas doubles in pressure and triples in volume. By what factor does T change?
- If the volume of a gas is doubled at constant temperature, the factor by which the pressure increases is:
- A barometer filled with an unknown liquid has a height of 1 m at 1 atm. During stormy weather, the height of the column is observed to rise to 1.3 m. What is the atmospheric pressure?
- Which of the following are possible units of R?
- What are the conditions of STP?
- A container contains 32 grams of gas and 2 grams of gas. If the total pressure of the vessel is 16 atm, what is the partial pressure of the
- As the average radius of a population of gas molecules increases, how does the factor b of van der Waals
- All of the following are properties of an ideal gas except:
- The ideal gas law is most valid under these conditions:
- For the van der Waals equation:
- For the equation PV = nRT, the value of T must be expressed in:
- Which of the following is not a SI unit
- A sample of gas has a volume of 22.4 L at a temperature of 273 K. How many moles are in the sample?
- The volume of a sample of gas expands five times at constant pressure. By what factor has the absolute
- The following reaction produces
- A sample of gas occupies 100 L at STP. If the absolute temperature is halved while all other conditions are constant, what will be the final volume?
- of a sample of at 300 K.
- A closed jar contains 2 moles of and 3 moles of . What is the ratio of the partial pressure of over the total pressure in the jar?
- The rate of effusion of gas A is four times that of gas B. What is
- The density of a certain gas at STP is 1.43 g/L. What is the identity of the gas?
- One end of a mercury manometer is open to the atmosphere (Patm = 760 mmHg). The other end is connected to a 1 mol sample of that is at 273 K and occupies 22.4 L. What is the height of the
- The Maxwell-Boltzmann distribution graph plots:
- James the giant has big shoes to fill. His shoes have a total area of in contact with the ground. Unfortunately, James' feet are not so big. Barefoot, his weight is spread over . What is the ratio of the pressure he exerts on the ground barefoot over the pressure he exerts with his shoes on?
- The "air" in airbags is generated via the decomposition of solid
- A sample of an ideal gas is compressed at constant temperature. What happens to the average kinetic energy of the molecules?
- A piston compresses a gas at constant temperature. Initially the gas occupied 1 L and was at a pressure of 1 atm. After compression, the gas occupies 0.1 L. What is the pressure of the compressed gas?
- A collaborator from a foreign country reports that the value of has probably used units of "woozle" for which of the following variables:
- Avogadro's number is:
- The following Maxwell-Boltzmann distribution plot was measured for two gases A and B at the same
- A rigid container holds a mixture of gases. Within this mixture, the partial pressure of is 400 torr. If an additional quantity of gas is injected into the container such that the total pressure of the container rises by 760 torr, what is the change in the partial pressure of ? Assume that the temperature of the container's contents stays constant.
- If the pressure of a gas doubles and the temperature quadruples, by what factor does the volume change?
- Which of the following are possible units for pressure?
- The following Maxwell-Boltzmann distribution plot was measured for a gas at two temperatures A and B:
- For the following calculation of , the molar mass (MM) should be expressed in what units?
- By what significant numerical value are Boltzmann's constant (k) and the gas constant (R) related?
- The pressure of a gas is tripled while the volume is halved. By what factor does the temperature increase?
- The gas constant R:
- One end of a manometer is sealed off to a vacuum. The other end of the manometer is connected to a pressurized gas. The height of the liquid column is indicative of:
- A sample of and a sample of both have a temperature of 330 K. What is the ratio of the average kinetic energy of the over that of the
- The density of a gas at STP is 0.089 g/L. What is the molar mass of the gas?
- The following Maxwell-Boltzmann distribution plot was measured for two gases A and B at temperatures
- Gaseous methane ( ) burns completely in gaseous oxygen to produce carbon dioxide gas and water
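A worked sketch of the kind of computation these questions drill (an illustration added here, not part of the original test): for the sample with V = 22.4 L and T = 273 K, assume standard pressure, P = 1 atm. With R = 0.08206 L·atm/(mol·K),

n = PV / (RT) = (1 atm × 22.4 L) / (0.08206 L·atm/(mol·K) × 273 K) ≈ 1.0 mol

which is simply the molar volume of an ideal gas at STP read in reverse.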
<urn:uuid:1475843b-afe3-4a21-b409-9b1d07231153>
3.34375
1,320
Content Listing
Science & Tech.
65.666305
All procedures in the Verilog HDL are specified within one of the following four statements:

-- initial construct
-- always construct
-- task
-- function

The initial and always constructs are enabled at the beginning of a simulation. The initial construct shall execute only once and its activity shall cease when the statement has finished. In contrast, the always construct shall execute repeatedly. Its activity shall cease only when the simulation is terminated. There shall be no implied order of execution between initial and always constructs. The initial constructs need not be scheduled and executed before the always constructs. There shall be no limit to the number of initial and always constructs that can be defined in a module.

An initial block consists of a single statement, or a group of statements enclosed in begin...end, which will be executed only once at simulation time 0. If there is more than one initial block, they execute concurrently and independently. The initial block is normally used for initialisation, monitoring, generating waveforms (e.g., clock pulses) and processes which are executed once in a simulation. An example of initialisation and waveform generation is given below:

initial
  clock = 1'b0; // variable initialization

initial
  begin // multiple statements have to be grouped
    alpha = 0;
    #10 alpha = 1; // waveform generation
    #20 alpha = 0;
    #5 alpha = 1;
    #7 alpha = 0;
    #10 alpha = 1;
    #20 alpha = 0;
  end

An always block is similar to the initial block, but the statements inside an always block will repeat continuously, in a looping fashion, until stopped by $finish or $stop. NOTE: the $finish command actually terminates the simulation whereas $stop merely pauses it and awaits further instructions. Thus $finish is the preferred command unless you are using an interactive version of the simulator. One way to simulate a clock pulse is shown in the example below. Note, this is not the best way to simulate a clock; see the forever-loop sketch that follows for a better method.

initial clock = 1'b0; // start the clock at 0
always #10 clock = ~clock; // toggle every 10 time units
initial #5000 $finish; // end the simulation after 5000 time units
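The forever-loop alternative mentioned above is not shown in the source; the following is a sketch of the standard idiom, added here for completeness:

initial
  begin
    clock = 1'b0;               // start the clock at 0
    forever #10 clock = ~clock; // toggle every 10 time units until $finish
  end

Because the initialization and the toggling live in the same initial process, there is no time-0 race between a separate initializing process and the toggling process.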
Tasks and functions can be used in much the same manner, but there are some important differences that must be noted.

A function is unable to enable a task; however, functions can enable other functions. A function will carry out its required duty in zero simulation time. Within a function, no event, delay or timing control statements are permitted. In the invocation of a function there must be at least one argument to be passed. Functions will only return a single value and cannot use either output or inout statements. Functions are synthesisable. Disable statements cannot be used, and functions cannot contain non-blocking assignments.

module function_calling (a, b, c);
  input a, b;
  output c;

  function myfunction;
    input a, b;
    myfunction = (a + b);
  endfunction

  assign c = myfunction(a, b);
endmodule

A task is capable of enabling a function as well as enabling other versions of a task. Tasks normally execute in zero simulation time, but they can, if required, consume non-zero simulation time, because tasks are allowed to contain event, delay and timing control statements. A task is allowed to use zero or more arguments, which are of type output, input or inout. A task is unable to return a value but has the facility to pass multiple values back via output and inout arguments. Tasks are not synthesisable. Disable statements can be used.

reg clock, red, amber, green;
parameter on = 1, off = 0, red_tics = 350, amber_tics = 30, green_tics = 200;

// initialize colors.
initial red = off;
initial amber = off;
initial green = off;
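A task consistent with the declarations above might look like the following sketch; the body is a reconstruction in the spirit of the classic traffic-light example, not text from the source:

task light;
  output color;
  input [31:0] tics;
  begin
    repeat (tics) @(posedge clock); // wait for 'tics' positive clock edges
    color = off;                    // then switch the light off
  end
endtask

Unlike a function, this task legally contains the timing control @(posedge clock), so a call such as light(red, red_tics) consumes simulation time while it waits.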
<urn:uuid:2dc62b9f-398e-47e8-9d87-d0375daf0011>
3
806
Documentation
Software Dev.
42.033475
Wildlife seen in a national park or other reserved area doesn't know about the park boundary. Bobcats, martens, mink, and moose need different types of living space and habitat, and development outside the park affects their ability to inhabit the park.

A brief review of bat research in the San Francisco Bay area and southern California provides land managers with information on the occurrence and status of bat species, with links to bat inventories for California and related material. A literature synthesis and annotated bibliography focus on North America and on refereed journals. Additional references include a selection of citations on bat ecology, international research on bats and wind energy, and unpublished reports.

Population size, foaling, deaths, age structure, sex ratio, age-specific survival rates, and more over a 14-year time span: this information will help land and wildlife managers find the best maintenance and conservation strategies.
<urn:uuid:4448345c-0895-4322-b597-44994e0b8dfa>
3.3125
182
Content Listing
Science & Tech.
24.002095
The Lake Tahoe area on the California-Nevada border can be appreciated from a variety of perspectives: Some people focus on the stunningly beautiful alpine lake nestled in the Sierra Nevada range, while others see it as a mecca for skiers and winter sports enthusiasts. When climate scientists look around, though, they see change. Two recent studies suggest that global warming is already altering that beloved ecosystem.

The first report (pdf), produced by researchers at the UC Davis Tahoe Environmental Research Center, predicts that dwindling snowpack over the next century will have a drastic impact on both winter tourism and the water supply.

The average snowpack in the northern Sierra Nevada mountains that ring the lake on the California-Nevada border will decline by 40 to 60 percent by 2100 "under the most optimistic projections," says the report from three researchers at the University of California, Davis. Under less optimistic models, the melt-off could be accelerated. By the end of the century, precipitation in the region "could be all rain and no snow," and peak snowmelt in the Upper Truckee River — which is the largest tributary flowing into Lake Tahoe — could occur four to six weeks earlier by 2100, the report says. [New York Times]

The changes to the region's hydrology could lead to new problems with runoff, erosion, and overflowing stormwater basins. While the researchers note that there is always some uncertainty when predicting far into the future, they also point out that the computer models they used are based on 100 years of data describing the changes in temperature and precipitation that have already occurred in the Tahoe area.

The second study, published in the journal Geophysical Research Letters, used infrared (heat) measurements from satellites to examine the changes to the planet's lakes.

Two NASA scientists used satellite data to look at 104 large inland lakes around the world. They found that on average they have warmed 2 degrees [Celsius] since 1985. That's about two-and-a-half times the increase in global temperatures in the same time period. [AP]

Lakes in the Northern Hemisphere's mid and upper latitudes showed the most warming. That includes Lake Tahoe, which has heated up by 3 degrees Celsius since 1985, putting it behind only Russia's Lake Ladoga.

80beats: Water Maps Show Stress Spread Out Across the Planet
80beats: Water Woes: The Southwest's Supply Dwindles; China's Behemoth Plumbing Project Goes On
80beats: Arctic Report Card: Warm Weather and Melted Ice Are the New Normal
80beats: Aral Sea Shows Signs of Recovery, While the Dead Sea Needs a Lifeline
DISCOVER: 20 Things You Didn't Know About... Water

Image: Wikimedia Commons
<urn:uuid:c3b8b37a-b1f0-4749-b688-1d7e5bfb16b9>
3.859375
592
Personal Blog
Science & Tech.
34.263168
Last Wednesday the National Academy of Sciences held a press conference in Washington, DC, to introduce its newly completed report on priorities for the coming decade in solar and space physics. Daniel Baker of the University of Colorado chaired the committee that wrote the report. Thomas Zurbuchen of the University of Michigan was the vice chair. Together, they summarized the report’s highlights for the assembled reporters, scientists, and bureaucrats. Like its counterparts in astronomy and planetary science, the latest solar and space physics decadal survey is more than just a shopping list of missions and facilities. Its authors begin by defining their field in a broad and inspiring way: We live on a planet whose orbit traverses the tenuous outer atmosphere of a variable magnetic star, the Sun. This stellar atmosphere is a rapidly flowing plasma—the solar wind—that envelops Earth as it rushes outward, creating a cavity in the galaxy that extends to some 140 astronomical units (AU). There, the inward pressure from the interstellar medium balances the outward pressure of the solar plasma forming the heliopause, the boundary of our home in the universe. Earth and the other planets of our solar system are embedded deep in this extended stellar atmosphere or “heliosphere,” the domain of solar and space physics. The report goes on to review past and present accomplishments in solar and space physics before defining the four overarching goals that guided the committee members as they drew up their final recommendations: - Determine the origins of the Sun’s activity and predict the variations in the space environment. - Determine the dynamics and coupling of Earth’s magnetosphere, ionosphere, and atmosphere and their response to solar and terrestrial inputs. - Determine the interaction of the Sun with the solar system and the interstellar medium. - Discover and characterize fundamental processes that occur both within the heliosphere and throughout the universe. As I listened to Baker and Zurbuchen’s presentation, it became clear that two other overarching considerations informed the report. The first is a conceptual emphasis on viewing Earth’s aurorae, the solar wind, coronal mass ejections, and other heliospheric phenomena as part of a single system. It will be interesting to see whether this systemic view becomes manifest in journals, conferences, and courses. I, for one, have tended to think of solar physics as belonging more to astronomy than to heliospheric physics. The second consideration is a realistic and—to use Baker’s word—responsible approach to costs. The committee retained Aerospace Corp, a nonprofit consultancy based in El Segundo, California, to carry out an independent cost appraisal and technical evaluation (CATE) of potential missions. For the most part, the total cost of the committee’s recommended suite of programs lies within the budget envelope that NASA provided the committee for the years 2013–22. Physicists who remember chuckling when they first encountered the zeroth law of thermodynamics might be amused to learn that the committee’s first recommendation is also numbered zero—for good reason. As NASA and NSF, the other principal sponsor of heliospheric research, look to future missions and facilities, the committee recommends that they first complete their current program. Among the lineup is Solar Probe Plus (shown here in an artist’s impression). 
The ambitious mission, whose price tag is $1.4 billion, aims to fly as close as possible to the Sun to determine how the solar corona is heated and how the solar wind is accelerated. Diversify, realize, integrate, venture, educate The committee’s second recommendation, numbered 1.0, is to implement an initiative that goes by the acronym DRIVE (for “diversify, realize, integrate, venture, educate”). As far as I can tell, DRIVE aims to reorganize and reinvigorate the way researchers and their students practice heliospheric science. Surprisingly, given its high priority, DRIVE is not expensive. The committee projects that the initiative will cost at most about $50 million a year. To fulfill the goals embodied by its name, DRIVE seeks to make research opportunities more accessible to universities through small and mid-sized missions, including the shoebox-sized spacecraft called CubeSats. Funding the analysis and interpretation of data adequately is a key element of DRIVE, as is fostering interdisciplinary approaches to heliospheric research. Indeed, the committee urges NASA and NSF to establish heliospheric science centers, where observers, theorists, and modelers can work together to solve the grand challenges of solar and space physics. When Baker and Zurbuchen introduced DRIVE, it sounded somewhat woolly to me. Now, having read the DRIVE section of the report, I think it’s a bold and worthwhile model that could be profitably emulated in other fields, such as green energy or neuroscience. But to be effective, DRIVE will probably need a light administrative structure. Accelerate and expand the Heliophysics Explorer program! Recommendation 2.0 seeks to revitalize NASA’s Explorer program of modestly sized and priced spacecraft. Begun in 1958, the program, according to the committee, is “arguably the most storied scientific spaceflight program in NASA’s history.” Despite its success, which includes three Nobel prizes, funding for the Explorer program fell in 2004 and has languished since. To quote the report: The medium-class (MIDEX) and small-class (SMEX) missions of the Explorer program are ideally suited to advancing heliophysics science and have a superb track record for cost-effectiveness. Since 2001, 15 heliophysics Explorer mission proposals have received the highest category of ranking in competition selection reviews, but only 5 have been selected for flight. Thus there is an extensive reservoir of excellent heliophysics science to be accomplished by Explorers. Because MIDEX and SMEX missions are comparatively cheap, developing and launching more of them would not require a big outlay. The committee recommends that NASA augment the current Explorer program for solar and space physics by $70 million per year. In addition to more money for the Explorer program, the committee also recommends establishing a faster, more nimble way of accommodating missions of opportunity—that is, missions that are conceived in response to new technologies, new scientific knowledge, or new partnership opportunities with other space agencies. NASA: Let academia lead space science Perhaps by coincidence, a commentary by Baker appeared in Nature two weeks before his committee released its report. Entitled “NASA: Let academia lead space science,” the commentary urged the space agency to fund more missions that are small enough in scope that university-based principal investigators (PIs) can develop and lead them. Whether Baker’s fellow committee members endorsed his commentary is not clear. 
They do, however, evidently share his belief in the merits of PI-led missions. Recommendation 3.0 calls for NASA to transform its Solar Terrestrial Probes program from a large, centrally directed program to “a moderate-sized, competed, PI-led mission line that is cost-capped at approximately $520-million per mission.” The STP program aims to elucidate the physics of the Sun’s influence on Earth, on the other bodies in the solar system, and on the interstellar medium. To avoid the risk that a competitive free-for-all would omit important aspects of STP science, the committee outlined three kinds of missions that it would like to see fly: - IMAP (Interstellar Mapping and Acceleration Probe) to characterize the zone where the Sun’s magnetohydrodynamic influence ceases to prevail in the solar neighborhood. - DYNAMIC (Dynamical Neutral Atmosphere) to study how Earth’s ionosphere and thermosphere influence, and are influenced by, processes that occur at lower and higher altitudes. - MEDICI (Magnetosphere Energetics, Dynamics, and Ionospheric Coupling) to determine how the magnetosphere-ionosphere-thermosphere system responds to solar and magnetospheric forcing. The committee’s enthusiasm for modest missions is not unbridled, however. In the committee’s view, tackling the problem of how and why the Sun varies is a job for large, integrated missions. NASA’s Living with a Star program already includes the Solar Probe Plus and the Radiation Belt Storm Probes missions. Recommendation 4.0 is for Geospace Dynamics Constellation, a set of six formation-flying spacecraft that will characterize how the energy of geomagnetic storms is deposited and transformed in Earth’s atmosphere. Recharter the National Space Weather Program In March 1989 a geomagnetic storm caused the collapse of Hydro-Québec’s electricity grid. Five months later another geomagnetic storm shut down electronic trading on Toronto’s stock exchange. Anticipating such storms—or space weather—and predicting their effects is more important, now that the world’s electrical infrastructure has expanded, the number of Earth-orbiting satellites has increased, and telecommunications have become economically and socially more important. The current solar cycle, the 24th since records began in 1755, is set to peak next year. To monitor the cycle’s activity, the US relies on a set of spacecraft, such as the Solar and Heliospheric Observatory, whose principal purpose is basic research and whose engineering lifetimes are coming to an end. To avoid gaps in coverage, the committee recommends that NASA, the National Oceanic and Atmospheric Administration, and the Department of Defense should plan ahead and plan together. Of particular importance, the committee says, is maintaining a permanent monitoring capability at L1, the first Lagrange point of the Sun–Earth system. Lying between the two bodies 1.5 million km from Earth, L1 is an ideal vantage for tracking solar activity. The US has a comprehensive plan, the National Space Weather Program, for dealing with space weather. The trouble is, as the committee puts it, “implementation of such a program would require funding well above what the survey committee assumes to be currently available.” Accordingly, the committee recommends that the NSWP should be rechartered under the auspices of the National Science and Technology Council and should include the active participation of the Office of Science and Technology Policy and the Office of Management and Budget. 
The plan should build on current agency efforts, leverage the new capabilities and knowledge that will arise from implementation of the programs recommended in this report, and develop additional capabilities, on the ground and in space, that are specifically tailored to space weather monitoring and prediction. I haven’t read all 455 pages of the committee’s report. In venturing to summarize it, I have no doubt missed some important points and emphases. But what I have read has impressed me. Here is a plan to study the heliosphere as a system in a comprehensive, multidisciplinary, and cost-effective way. I hope its recommendations are heeded.
<urn:uuid:dfeab920-d146-490d-b528-0a5dc3071873>
2.96875
2,306
Personal Blog
Science & Tech.
27.110422
Sunlight is Earth's most abundant energy source and is delivered everywhere free of charge. Yet direct use of solar energy—that is, harnessing light's energy content immediately rather than indirectly in fossil fuels or wind power—makes only a small contribution to humanity's energy supply. In 2008, about 0.1% of the total energy supply in the United States came from solar sources. In theory, it could be much more. In practice, it will require considerable scientific and engineering progress in the two ways of converting the energy of sunlight into usable forms.

Photovoltaic (PV) systems exploit the photoelectric effect discovered more than a century ago. In certain materials, the energy of incoming light kicks electrons into motion, creating a current. Sheets of these materials are routinely employed to power a host of devices—from orbiting satellites to pocket calculators—and many companies make roof-sized units for homes and office buildings. At the present time, however, the best commercial PV systems produce electricity at five to six times the cost of other generation methods, though if a system is installed at its point of use, which is often the case, its price may compete successfully at the retail level. PV is an intermittent source, meaning that it's only available when the Sun is shining. Furthermore, unless PV energy is consumed immediately, it must be stored in batteries or by some other method. Adequate and cost-effective storage solutions await development. One factor favoring PV systems is that they produce maximum power close to the time of peak loads, which are driven by air-conditioning. Peak power is much more expensive than average power. With the advent of time-of-day pricing for power, PV power will grow more economical.

Sunlight can also be focused and concentrated by mirrors and the resulting energy employed to heat liquids that drive turbines to create electricity—a technique called solar thermal generation. Existing systems produce electricity at about twice the cost of fossil-fuel sources. Engineering advances will reduce the cost, but solar thermal generation is unlikely to be feasible outside regions such as the southwestern United States that receive substantial sunlight over long time periods. Despite the challenges, the idea of drawing our energy from a source that is renewable and that does not emit greenhouse gases has powerful appeal.
<urn:uuid:515d3239-de8f-40e7-adc5-50e951db0245>
4.09375
508
Knowledge Article
Science & Tech.
28.740823
Why 2 high tides?
Name: paul dickerson
Date: 1993 - 1999

It makes sense to me why there is a high tide at or about noon during new moon, but why is there a high tide at or about midnight as well?

The reason is very simple, though it takes a bit of thinking to hit the right direction. Consider the Earth-Moon system, and visualise Earth as a sphere with a layer of water all around it. The water closer to the Moon will get attracted more than the Earth, giving rise to the noon tide. But Earth will get attracted more than the water on the other side; this is what gives the midnight tide.

Jasjeet (Jasjeet S Bagla)

Update: June 2012
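A quantitative footnote (added here; it is not part of the archived answer): the tide-raising effect is the difference between the Moon's gravitational pull at Earth's surface and at Earth's center. For the Moon of mass M at distance d, expanding Newton's law to first order in Earth's radius r gives a differential (tidal) acceleration of roughly

a_tide ≈ 2GMr / d^3

The differential pull has the same magnitude on the near and far sides, directed away from Earth's center in both cases: the near-side water is pulled ahead of the Earth, and the Earth is pulled ahead of the far-side water. That symmetry is why there are two bulges, and hence two high tides roughly 12 hours apart.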
<urn:uuid:179fc92f-8cea-49c6-b6b4-23ed568b253e>
2.6875
174
Q&A Forum
Science & Tech.
64.555083
When experiencing alpha decay, atoms shed alpha particles made of 2 protons and 2 neutrons. Why can't we have other types of particles made of more or fewer protons? The reason why alpha particles heavily dominate as the proton-neutron mix most likely to be emitted from most (not all!) radioactive nuclides is the extreme stability of this particular combination. That same stability is also why helium dominates after hydrogen as the most common element in the universe, and why higher elements had to be forged in the hearts and shells of supernovas in order to come into existence at all. Here's one way to think of it: You could in principle pop off something like helium-3 from an unstable nucleus - that's two protons and one neutron - and very likely give a net reduction in nuclear stress. But what would happen is this: The moment the trio started to depart, a neutron would come screaming in saying look how much better it would be if I joined you!! And the neutron would be correct: The total reduction in energy obtained by forming a helium-4 nucleus instead of helium-3 would in almost any instance be so superior that any self-respecting (and energy-respecting) nucleus would just have to go along with the idea. Now all of what I just said can (and in the right circumstances should) be said far more precisely in terms of issues such as tunneling probabilities, but it would not really change the message much: Helium-4 nuclei pop off preferentially because they are so hugely stable that it just makes sense from a stability viewpoint for them to do so. The next most likely candidates are isolated neutrons and protons, incidentally. Other mixed versions are rare until you get up into the fission range, in which case the whole nucleus is so unstable that it can rip apart in very creative ways (as aptly noted by the earlier comment). $\alpha$ particles are really ${}^{4}_{2}\mathrm{He}$ nuclei, i.e., made up of 2 protons and 2 neutrons. As a plot of binding energy per nucleon shows, the ${}^{4}_{2}\mathrm{He}$ nucleus has a high binding energy per nucleon, i.e. it is highly stable among all the neighboring nuclei. This makes it easy for them to sustain their existence and makes it easier for a nucleus to emit them in radioactive decay, leaving the resulting nucleus much more stable than if a ${}^{3}_{2}\mathrm{He}$ had escaped.
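The stability argument in both answers can be made concrete with binding energies. The figures below are standard nuclear-data values, quoted from memory for illustration rather than taken from the answers above:

$$B({}^{4}_{2}\mathrm{He}) \approx 28.3\ \mathrm{MeV} \;\;(\approx 7.07\ \mathrm{MeV/nucleon}), \qquad B({}^{3}_{2}\mathrm{He}) \approx 7.7\ \mathrm{MeV} \;\;(\approx 2.6\ \mathrm{MeV/nucleon}).$$

Capturing one more neutron roughly quadruples the total binding energy, which is the energetic version of the neutron "screaming in" to join the departing trio.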
<urn:uuid:2d293bb8-356c-42d9-9b1f-f5c7779b9986>
3.390625
505
Q&A Forum
Science & Tech.
48.759948
Mon March 24, 1969 02:32PM (PST)
This report supersedes any earlier report of this event. This event has been reviewed by a seismologist.
Time: Mon March 24, 1969 02:32PM (PST) / Mon March 24, 1969 22:32 (GMT)
Location: 30.1 km (18.7 mi) ENE (67° azimuth) from Hanford-300, WA; 31.7 km (19.7 mi) SSE (151° azimuth) from Othello, WA; 33.5 km (20.8 mi) NE (46° azimuth) from Hanford-400, WA
Depth: 7.34 km (4.48 miles)
Horizontal Uncertainty: 26.219 km
Depth Uncertainty: 25.22 km
Azimuthal Gap: 257.0 deg
Number of Phases: 7
Depth: Depth within the Earth where an earthquake rupture initiated. PNSN reports depths relative to sea level, so the elevation of the ground above sea level at the location of the epicenter must be added to estimate the depth beneath the Earth's surface.
Azimuthal Gap: A measure of how well network seismic stations surround the earthquake. Measured from the epicenter (in degrees), it is the largest azimuthal gap between azimuthally adjacent stations. The smaller this number, the more reliable the calculated horizontal position of the earthquake.
Number of Phases: The number of seismic phase arrivals used to locate the earthquake.
RMS Misfit: How well the given earthquake location predicts the observed phase arrivals (in seconds). Smaller misfits mean more precise locations. The best locations have RMS misfits smaller than 0.1 seconds.
Number of P First Motions: A P first motion is the direction in which the ground moves at the seismometer when the first P wave arrives. We distinguish between upward and downward first motions. This is the number of observations that were used to obtain the fault plane solution.
Orientation of first possible fault plane: The strike is the angle between the north direction and the direction of the fault trace on the surface, while keeping the dipping fault plane to your right. The dip is the steepness of the fault plane, measured as the angle between the fault plane and the surface. For example, 0 degrees is a horizontal fault and 90 degrees is a vertical fault. Rake is the angle, measured in the fault plane, between the strike and the direction in which the material above the fault moved relative to the material on the bottom of the fault (the slip direction).
Orientation of second possible fault plane: The orientation of the two possible fault planes is the best solution we can find to match the observed first motions at the seismometers using a grid search method. The uncertainty of the strike, dip, and rake indicates the number of degrees by which those values can vary and still match the observations satisfactorily.
Station: Code, or name, to designate a particular seismic station.
Network Code: Indicates the organization responsible for a particular station; the PNSN consists of UW = University of Washington, UO = University of Oregon, and CC = Cascade Volcano Observatory.
Quality: The quality of an observed P arrival polarity indicates how well you can tell whether it is up or down and can range from 0 (poor) to 1 (good).
Channel: The channel name allows one to distinguish between data from different kinds of sensors. The first character indicates the sample rate of the data; examples are E = 100 Hz, B = 40 or 50 Hz, H = 80 or 100 Hz. The second character indicates whether the channel is a high (H) gain or low (L) gain velocity channel or a strong-motion acceleration channel (N). The third character indicates the direction of motion measured: Z = up/down, E = east/west, N = north/south.
Polarity: Polarity means the direction of motion; in this context it means whether it is up (U) or down (D).
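The channel-naming convention in the glossary above is compact enough to decode mechanically. Here is a small illustrative Python sketch written for this note (it encodes only the example letters listed in the text, not the full seismic naming standard):

```python
# Decode a 3-character seismic channel code such as "EHZ",
# following the convention described in the glossary above.
SAMPLE_RATE = {"E": "100 Hz", "B": "40 or 50 Hz", "H": "80 or 100 Hz"}
INSTRUMENT = {"H": "high-gain velocity", "L": "low-gain velocity",
              "N": "strong-motion acceleration"}
COMPONENT = {"Z": "up/down", "E": "east/west", "N": "north/south"}

def decode_channel(code: str) -> str:
    """Return a human-readable description of a channel code."""
    rate, inst, comp = code[0], code[1], code[2]
    return (f"{code}: {SAMPLE_RATE.get(rate, 'unknown rate')}, "
            f"{INSTRUMENT.get(inst, 'unknown instrument')} channel, "
            f"{COMPONENT.get(comp, 'unknown component')} motion")

print(decode_channel("EHZ"))
# EHZ: 100 Hz, high-gain velocity channel, up/down motion
```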
<urn:uuid:04b85eb8-944d-4176-8e7a-b2e53c9f1916>
2.890625
829
Structured Data
Science & Tech.
61.790159
Interpreter for Zoom Language
Zoom is a new language developed at DePaul University by Dr. Jia. ZOOM stands for Z-based Object-Oriented Modeling notation. It is made up of 3 different parts: the ZOOM specification notation ZOOM-S, the ZOOM design notation ZOOM-D, and the ZOOM implementation language ZOOM-I. The syntax of ZOOM-I is closely based on the syntax of the Java language. It adds several extensions to Java, such as enumerations, set and list formations, relations and function mappings, and more. Programming language design is a challenging task. Development and testing of a first implementation of a language is much easier and more flexible when done by implementing an interpreter; changes to the static or dynamic semantics of a language are more easily made in an interpreter than in a compiler. My project is to implement an interpreter for the ZOOM-I language. The interpreter is going to be a GUI application supporting easy development of ZOOM programs.
Currently working on basic statements and expressions of the ZOOM-I language.
- 5/11/03: basic Java statements and expressions for primitive types
- 5/19/03: extended expressions for List declaration and manipulation
- 5/26/03: extended expressions for Set declaration and manipulation
- 5/31/03: Start work on object-oriented features
- 7/31/03: Object-oriented features finished
- 8/01/03: TBD
- Expected Completion: November 2003
- Initial Presentation
- Power Point Slides
- David A. Watt & Deryck F. Brown. Programming Language Processors in Java. Prentice Hall, 2000.
- Alfred V. Aho, Ravi Sethi, Jeffrey D. Ullman. Compilers: Principles, Techniques and Tools. Addison-Wesley, 1988.
- Ravi Sethi. Programming Languages: Concepts & Constructs. Addison-Wesley, 1996.
- Randy M. Kaplan. Constructing Language Processors for Little Languages. John Wiley & Sons, Inc., 1994.
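To illustrate why an interpreter makes language experiments cheap, here is a generic tree-walking evaluator sketched in Python. It is not ZOOM-I code and not this project's actual design; the toy "set" node merely stands in for the kind of set-formation extension described above. Adding or changing a semantic rule means editing one case, with no code generation to update:

```python
# Minimal tree-walking interpreter: expressions are nested tuples,
# e.g. ("num", 3), ("add", e1, e2), ("set", e1, ..., en).
def evaluate(expr):
    match expr:
        case ("num", value):                   # literal
            return value
        case ("add", left, right):             # arithmetic
            return evaluate(left) + evaluate(right)
        case ("set", *elems):                  # toy set formation
            return frozenset(evaluate(e) for e in elems)
        case _:
            raise ValueError(f"unknown node: {expr!r}")

# {1 + 2, 4} evaluates to frozenset({3, 4})
print(evaluate(("set", ("add", ("num", 1), ("num", 2)), ("num", 4))))
```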
<urn:uuid:6df337f0-b74a-4b93-a6ac-81d0729d798c>
2.765625
433
Personal Blog
Software Dev.
46.991907
The scene: Scientist Jian Chen adjusts optics mounted for an experiment at one of several PULSE laser laboratories housed at SLAC. (PULSE is a joint SLAC/Stanford University laser science institute.) In this experiment, a small fleck of sample material is held in a special “diamond anvil cell” and torqued to pressures up to 12 gigapascals—120,000 times greater than atmospheric pressure, similar to conditions deep inside the Earth. Chen and colleagues then use three separate, highly precise beams of pulsed laser light, bouncing variously through the specialized optics, to measure the behavior of electrons in the material under pressure. Experiments of this sort give scientists clues about the nature and dynamics of the atomic world that could aid in developing new materials with exotic properties. The shot: Canon 5D Mk II, 17-35mm/f2.8L lens @ 17mm, f/7.1. ISO 200, 1/40 sec exposure. Three lights (all Speedlites), one triggered with a Pocket Wizard II, the others with optical slaves: one camera left (close, with a red gel), one camera right (at full power, to cast the hard shadows), and one camera left (farther from the camera, with grid, visible in frame) to illuminate Chen. Used a tripod and remote trigger for this one. (All while wearing the same goggles Chen is wearing… tough way to shoot!)
<urn:uuid:aa900d17-ee05-43a1-9828-f3db9c23769c>
2.90625
301
Personal Blog
Science & Tech.
49.949323
The "Methane" experiment was proposed during the TransCom 2008 meeting in Utrecht. The first protocol was discussed during the post-ICDC8 TransCom meeting in Jena, followed by the final protocol in 2010. Since then 16 models or model variants have performed the simulations. Previous TransCom experiments focused on chemically non-reactive species (SF6, CO2, 222Rn). A CH4 intercomparison requires the introduction of atmospheric chemistry, which means significant new model development for the traditional TransCom participants. However, to keep the focus on model transport properties, the CH4 chemistry is reduced to prescribed offline radicals (OH, O1D, Cl) only, which means the full-chemistry modellers have to scale down their chemistry. During discussion at Jena, methyl chloroform (CH3CCl3) was included for tracking tropospheric OH abundance in the models, as well as SF6 and 222Rn for model transport evaluations. Prescribed fluxes are input to a transport model and 20 years of simulation are run with meteorological forcing appropriate for 1988-2007. Hourly concentrations of all species are output for 280 locations. At 115 locations, species profiles, surface fluxes and meteorological variables are also output. The protocol (version 7) details the input fluxes, regridding instructions and lists of the output sites and required file formats (similar to the TransCom continuous experiment). Instructions are included for accessing the ftp site for downloading input files and uploading model submissions. The model output is freely available for research purposes, but please note the "conditions of use". The data are available in two forms: the original model submissions, containing output for all sites, can be downloaded from ftp fxp.nies.go.jp (refer to the protocol files for access information); and, in an effort towards ease of access, time series at a subset of surface sites are archived at the WMO World Data Centre for Greenhouse Gases (http://gaw.kishou.go.jp/).
Publications and presentations
Patra, P. K., S. Houweling, M. Krol, P. Bousquet, L. Bruhwiler, and D. Jacob (2010), Protocol for TransCom CH4 intercomparison, Version 7, April (available online at transcom.project.asu.edu/pdf/transcom/T4.methane.protocol_v7.pdf).
Patra, P. K., S. Houweling, M. Krol, P. Bousquet, D. Belikov, D. Bergmann, H. Bian, P. Cameron-Smith, M. P. Chipperfield, K. Corbin, A. Fortems-Cheiney, A. Fraser, E. Gloor, P. Hess, A. Ito, S. R. Kawa, R. M. Law, Z. Loh, S. Maksyutov, L. Meng, P. I. Palmer, R. G. Prinn, M. Rigby, R. Saito, C. Wilson, TransCom model simulations of CH4 and related species: Linking transport, surface flux and chemical loss with CH4 variability in the troposphere and lower stratosphere, Atmos. Chem. Phys. Discuss., submitted, 2011.
Presentations at the 10th TransCom workshop (University of California, Berkeley, 2010, Saturday Session) are available on the TransCom-CH4 FTP server at NIES.
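The reduced "offline radical" chemistry amounts to first-order loss of CH4 against prescribed oxidant fields. As a hedged illustration of what each model computes per grid cell (the Arrhenius coefficients below are commonly used values for the CH4 + OH reaction, quoted from memory; the protocol's own tables and inputs may differ):

```python
import math

def ch4_loss_rate(ch4, oh, temp_k):
    """First-order CH4 loss to OH, in molecules cm^-3 s^-1.

    k(T) = A * exp(-E/T), with commonly used values for CH4 + OH.
    """
    k = 2.45e-12 * math.exp(-1775.0 / temp_k)  # cm^3 molecule^-1 s^-1
    return k * oh * ch4

# Illustrative boundary-layer numbers: CH4 ~ 4.4e13 and
# OH ~ 1e6 molecules per cm^3 at 298 K.
print(f"{ch4_loss_rate(4.4e13, 1.0e6, 298.0):.2e} molecules cm^-3 s^-1")
```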
<urn:uuid:fafff650-2a32-4b22-b6ba-3ad9c568a763>
2.796875
742
Knowledge Article
Science & Tech.
56.851609
Samarium: the essentials
Samarium has a bright silver lustre and is reasonably stable in air, although it ignites in air at 150°C. It is a rare earth metal. It is found with other rare earth elements in minerals including monazite and bastnaesite, and it is used in the electronics industries.
Samarium: historical information
Samarium was discovered spectroscopically, by its sharp absorption lines, in 1853 by Jean Charles Galissard de Marignac in an "earth" called didymia. The element was isolated in 1879 by Lecoq de Boisbaudran from the mineral samarskite, named in honour of a Russian mine official, Colonel Samarski, which therefore gave samarium its name.
Isolation: samarium metal is available commercially, so it is not normally necessary to make it in the laboratory, which is just as well as it is difficult to isolate as the pure metal. This is largely because of the way it is found in nature. The lanthanoids are found in nature in a number of minerals. The most important are xenotime, monazite, and bastnaesite. The first two are orthophosphate minerals, LnPO4 (Ln denotes a mixture of all the lanthanoids except promethium, which is vanishingly rare), and the third is a fluoride carbonate, LnCO3F. Lanthanoids with even atomic numbers are more common. The most common lanthanoids in these minerals are, in order, cerium, lanthanum, neodymium, and praseodymium. Monazite also contains thorium and yttrium, which makes handling difficult since thorium and its decomposition products are radioactive. For many purposes it is not particularly necessary to separate the metals, but if separation into individual metals is required, the process is complex. Initially, the metals are extracted as salts from the ores with sulphuric acid (H2SO4), hydrochloric acid (HCl), and sodium hydroxide (NaOH). Modern purification techniques for these lanthanoid salt mixtures are ingenious and involve selective complexation techniques, solvent extractions, and ion exchange chromatography. Pure samarium is obtained through the electrolysis of a mixture of molten SmCl3 and NaCl (or CaCl2) in a graphite cell, which acts as cathode, using graphite as the anode. The other product is chlorine gas.
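The electrolysis step can be summarized by its half-reactions (standard electrochemistry, written out here for illustration; the source text states only the overall process):

$$\text{cathode: } \mathrm{Sm^{3+}} + 3e^- \rightarrow \mathrm{Sm} \qquad\qquad \text{anode: } 2\,\mathrm{Cl^-} \rightarrow \mathrm{Cl_2} + 2e^-$$

which combine to the overall reaction $2\,\mathrm{SmCl_3} \rightarrow 2\,\mathrm{Sm} + 3\,\mathrm{Cl_2}$, consistent with chlorine gas being the other product.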
<urn:uuid:03679806-1247-4d30-82a5-5ab86e77b708>
3.34375
560
Knowledge Article
Science & Tech.
23.493616
Oxygen Fuels the Fires of Time
Scientists from The Field Museum in Chicago and Royal Holloway University of London, publishing their results this week in the journal Nature Geoscience, have shown that the amount of charcoal preserved in ancient peat bogs, now coal, gives a measure of how much oxygen there was in the past. Until now scientists have relied on geochemical models to estimate atmospheric oxygen concentrations. However, a number of competing models exist, each with significant discrepancies and no clear way to resolve an answer. All models agree that around 300 million years ago, in the Late Paleozoic, atmospheric oxygen levels were much higher than today. These elevated concentrations have been linked to gigantism in some animal groups, in particular insects; the dragonfly Meganeura monyi, with a wingspan of over two feet, epitomizes this. Some scientists think these higher concentrations of atmospheric oxygen may also have allowed vertebrates to colonize the land. These higher levels of oxygen were a direct consequence of the colonization of land by plants. When plants photosynthesize they evolve oxygen. However, when the carbon stored in plant tissues decays, atmospheric oxygen is used up. To produce a net increase in atmospheric oxygen over time, organic matter must be buried. The colonization of land by plants not only led to new plant growth but also to a dramatic increase in the burial of carbon. This burial was particularly high during the Late Paleozoic, when huge coal deposits accumulated. Dr. Ian J. Glasspool from the Department of Geology at The Field Museum explained: "Atmospheric oxygen concentration is strongly related to flammability. At levels below 15% wildfires could not have spread. However, at levels significantly above 25% even wet plants could have burned, while at levels around 30 to 35%, as have been proposed for the Late Paleozoic, wildfires would have been frequent and catastrophic." There were periods in Earth's history when the charcoal percentage in the coals was as high as 70%. This indicates very high levels of atmospheric oxygen that would have promoted many frequent, large, and extremely hot fires. These intervals include the Carboniferous and Permian Periods, from 320-250 million years ago, and the Middle Cretaceous Period, approximately 100 million years ago. "It is interesting," Professor Scott points out, "that these were times of major change in the evolution of vegetation on land, with the evolution and spread of new plant groups: the conifers in the late Carboniferous and flowering plants in the Cretaceous." These periods of high fire resulting from elevated atmospheric oxygen concentration might have been self-perpetuating, with more fire meaning greater plant mortality, and in turn more erosion and therefore greater burial of organic carbon, which would have then promoted elevated atmospheric oxygen concentrations. "The mystery to us," Scott states, "is why oxygen levels appear to have more or less stabilized about 50 million years ago."
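Glasspool's flammability thresholds map naturally onto a simple classification. The cutoffs below are the ones quoted in the article; the middle band and the function itself are an illustrative sketch, not part of the study:

```python
def fire_regime(o2_percent: float) -> str:
    """Classify wildfire behavior by atmospheric O2 concentration,
    using the thresholds quoted by Glasspool."""
    if o2_percent < 15:
        return "wildfires could not spread"
    elif o2_percent <= 25:
        return "fires possible in dry fuels"       # interpolated band
    elif o2_percent < 30:
        return "even wet plants could burn"
    else:
        return "frequent, catastrophic wildfires"  # ~30-35%, Late Paleozoic

for level in (13, 21, 27, 33):
    print(f"{level}% O2: {fire_regime(level)}")
```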
<urn:uuid:b82f6dde-52a7-4c70-a146-bfaf0d7d1e3b>
4
598
Knowledge Article
Science & Tech.
28.329925
If he casts the right fly, an angler can catch some really big fish. Scientists are the same way, needing the right type of microscope to visualize nature's smallest molecules and atoms. Now, researchers are redesigning their light microscopes to catch a glimpse of some of the most minuscule molecules, those that make proteins in bacteria and archaea. A promising solution is the use of fluorescence in situ hybridization (FISH) and stochastic optical reconstruction microscopy (STORM). Together, these techniques are improving our understanding of how bacteria and archaea transcribe DNA to RNA and then translate RNA to proteins. In addition, they are re-shaping how cell biology studies relate to environmental microbes.
Luring and Lighting Biomolecules
"Light microscopy has been a workhorse in cell biological research," says Harvard biophysicist Xiaowei Zhuang. She says scientists want to use light microscopy to study cells, especially live ones, because it is non-invasive. The problem, however, with zooming in on biomolecules and their movements in bacteria and archaea is the small size of the individual cells. At only about three micrometers long and a micrometer wide, bacterial and archaeal cells come into focus just around the diffraction limit of light, which is about 200 nanometers. With light microscopy, scientists can see a cell but not its nuclear and cellular machinery. Even though these cells are relatively simpler than mammalian cells and other eukaryotic ones, scientists still know little about them. To get a better look, Zhuang and her collaborators developed STORM, described in a 2008 paper (1). Zhuang's group has used it to image individually labeled proteins in live cells, including bacteria and archaea. And, like pairing the right fly with the right bait, other researchers are using STORM with their own techniques to "look at the distribution and dynamics of nuclear targets at a resolution that is far from the reach of conventional microscopy," says Bakshi. For example, Cristina Moraru of the Max Planck Institute for Marine Microbiology in Germany and colleagues wanted to know where ribosomes sit within the cell, because those molecular machines interact with the nucleoid—the carrier of the genetic information in archaea and bacteria. Based on where ribosomes are located, there are different models of interactions, which can significantly shape regulation of transcription, translation, and other cellular processes. In a paper recently published in Systematic and Applied Microbiology (2), Moraru’s group reported on a combined STORM and FISH approach to locate ribosomes in an Escherichia coli cell. Moraru’s team used FISH to label specific sequences of ribosomal RNA with fluorescent probes, and then imaged the samples with STORM. "In the end, all these differences could reflect in the way the cell answer to environmental changes, and therefore, in the fitness and survival," says Moraru. In the near future, she adds, scientists could use STORM, FISH, and other super-resolution techniques to count the number of ribosomes in a bacterium.
Ribosomal Catch and Release
Counting the number of ribosomes is essential to understanding how bacteria grow. Moraru explains that "the regulation of ribosome numbers in microbial cells is complex and, probably, there will not always be a direct correlation between ribosome numbers and metabolic activity." But it is likely that a cell with a high ribosome content will be more active compared with one with a low ribosome content.
If scientists can count ribosomes, they could get a sense of the level of metabolic activity in microbial cells. But scientists have not yet counted the exact numbers of ribosomes per cell; the FISH protocol and RNA probes need to be more efficient at hybridization. "Work in this direction is in progress, and we are confident that there is only a matter of time till ribosome quantification per cell will be achieved," says Moraru. So far, prokaryotic cell biology studies have been limited because many methods are not compatible with uncultivated microorganisms. But because the FISH-STORM approach uses RNA probes that target different microbial taxa in environmental samples, scientists could study ribosome variation across bacterial species. "By looking at samples from different environmental conditions, from warm season versus cold season, or, from high salinity versus low salinity, the variation of ribosome number across environmental conditions could be assessed," says Moraru. In structured environments, such as biofilms, activated sludge and tissue samples, FISH also preserves the spatial information and reveals potential interactions between different species and community members in a sample. "Targeting rRNA by super-resolution FISH is only the beginning. In the near future, we envision targeting the other nucleic acid components of microbial cells to reveal the sub-cellular localization and numbers of specific genes and mRNAs," says Moraru.
A Different Kettle
But the FISH-STORM approach isn't the only way to bait biomolecules in small cells. Bakshi, a graduate student in University of Wisconsin-Madison chemist James Weisshaar's lab, uses a technique called pointillism to do sub-diffraction-limit imaging. With this technique, he constructs an image of a cell by localizing a large number of single molecules iteratively. This requires labels that can be switched on and off, but it generates resolution of up to 20–30 nanometers. In contrast to FISH, Bakshi's approach can be used for live-cell imaging. To truly understand the complexity and heterogeneity of the behavior of any biomolecule, says Bakshi, scientists must be able to probe one molecule at a time. His team's technique gives them the position and movement of a single object in a cell at high spatio-temporal resolution. "When we are looking at a ribosome, it enables us to determine which molecules are involved in translation and where they are inside the cell," he says. In a 2012 paper published in Molecular Microbiology (3), he and Weisshaar reported that most of E. coli's translation is not coupled with transcription—a discovery that runs counter to the common view in the scientific literature. Bakshi says that since bacteria lack a nuclear membrane—which separates the nucleoid from the rest of the cytoplasm—co-transcriptional translation is possible in the cells. To what extent the translation process is coupled to transcription, however, was not clear. Electron microscope images of ribosomes in cell extract, published in the 1970s, suggested that all translating ribosomes are joined to the chromosome through transcriptional coupling. "When we found that our results suggest that most translation is actually happening without such coupling, we were very surprised," says Bakshi. The team eventually figured out that the lifetime of an mRNA in E. coli is much longer than the time taken for its transcription.
The mRNA gets released from proteins associated with the nucleoid once transcription terminates and is then translated by ribosomes without being attached to DNA for the rest of its lifetime, he says. The techniques—whether it's FISH, STORM, or something else—ultimately let biologists cast deeper lines into individual cells of bacteria and archaea, learning more about their molecular and metabolic dynamics.
1. Huang, B., W. Wang, M. Bates, and X. Zhuang (2008). "Three-dimensional super-resolution imaging by stochastic optical reconstruction microscopy." Science 319(5864): 810-813.
2. Moraru, C. and R. Amann (2012). "Crystal ball: Fluorescence in situ hybridization in the age of super-resolution microscopy." Systematic and Applied Microbiology. In press.
3. Bakshi, S. et al. (2012). "Super-resolution imaging of ribosomes and RNA polymerase in live Escherichia coli cells." Molecular Microbiology 85(1): 21-38.
4. Wang, W. et al. (2011). "Chromosome organization by a nucleoid-associated protein in live bacteria." Science 333: 1445-1449.
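As a concrete footnote to the pointillism description above, here is a minimal sketch of the localization step: estimating a single emitter's position as the intensity-weighted centroid of its diffraction-limited spot. This is illustrative only; real STORM or pointillism pipelines fit Gaussians and must handle noise, drift, and overlapping emitters:

```python
import numpy as np

def localize(spot: np.ndarray) -> tuple[float, float]:
    """Intensity-weighted centroid (row, col) of a small image patch
    containing one fluorophore's diffraction-limited spot."""
    total = spot.sum()
    rows, cols = np.indices(spot.shape)
    return (float((rows * spot).sum() / total),
            float((cols * spot).sum() / total))

# A 7x7 patch with a blurred emitter placed off pixel centers:
yy, xx = np.mgrid[0:7, 0:7]
patch = np.exp(-((yy - 3.4) ** 2 + (xx - 2.8) ** 2) / 2.0)
print(localize(patch))  # roughly (3.4, 2.8): sub-pixel precision
```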
<urn:uuid:321fbaa9-de0c-4c46-9752-a86ab7c290c2>
3.53125
1,712
Knowledge Article
Science & Tech.
35.103599
Phytoplankton Under Ice Beneath the Arctic ice—over 12 feet deep in some areas—lies a dark, cold and lifeless sea. Or so we thought. “If someone had asked me before the expedition whether we would see under-ice blooms, I would have told them it was impossible,” says Arrigo. “This discovery was a complete surprise.” The researchers discovered an abundance of phytoplankton—microscopic life that forms the base of the marine food chain. Phytoplankton require sunlight for photosynthesis, just like plants. And sunlight has a tough time penetrating thick sea ice. But that thick sea ice is changing. Not only are warmer temperatures thinning the ice, but as the ice melts in summer, it forms pools of water that act like transient skylights and magnifying lenses. These pools focus sunlight through the ice and into the ocean, where currents steer nutrient-rich deep waters up toward the surface. Phytoplankton under the ice evolved to take advantage of this narrow window of light and nutrients. The phytoplankton displayed extreme activity, doubling in number more than once a day. Blooms in open waters grow at a much slower rate, doubling in two to three days. These growth rates are among the highest ever measured for polar waters. Researchers estimate that phytoplankton production under the ice in parts of the Arctic could be up to 10 times higher than in the nearby open ocean. The phytoplankton bloom discovered by Arrigo and his colleagues in the Chukchi Sea (just north of Alaska) extends tens of meters deep in spots and about 100 kilometers (62 miles) across. “At this point we don’t know whether these rich phytoplankton blooms have been happening in the Arctic for a long time and we just haven’t observed them before,” Arrigo says. “These blooms could become more widespread in the future, however, if the Arctic sea ice cover continues to thin.” The discovery of these previously unknown under-ice blooms could have serious implications for the broader Arctic ecosystem, including migratory species such as whales and birds. Phytoplankton are eaten by small ocean animals, which are eaten by larger fish and ocean animals. “It could make it harder and harder for migratory species to time their life cycles to be in the Arctic when the bloom is at its peak,” Arrigo says. “If their food supply is coming earlier, they might be missing the boat.” The research is published this week in Science.
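For scale, the growth rates quoted above translate into specific growth rates through the standard doubling-time relation (ordinary exponential-growth arithmetic, added here for illustration):

$$\mu = \frac{\ln 2}{t_d}: \qquad t_d = 1\ \text{day} \Rightarrow \mu \approx 0.69\ \text{day}^{-1}, \qquad t_d = 2\text{ to }3\ \text{days} \Rightarrow \mu \approx 0.23\text{ to }0.35\ \text{day}^{-1},$$

so populations doubling more than once a day are growing at least two to three times faster than typical open-water blooms.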
<urn:uuid:dda98bee-cbfe-4019-acc9-adace79b16b1>
4.15625
565
Knowledge Article
Science & Tech.
47.431837
Per Square Meter
Warm-up: Relationships in Ecosystems (10 minutes)
1. Begin this lesson by presenting the powerpoint, "Per Square Meter".
2. After the presentation, ask students to think of animal relationships that correspond to each of the following types: competition, predation, parasitism, and mutualism.
a. For example, two animals that compete for food are lions and cheetahs (they compete for zebras and antelopes).
3. Record the different types of relationships on the board.
Activity One: My Own Square Meter (30 minutes)
1. Have students go outside and pick a small area (about a square meter each) to explore. It is preferable that this area be grassy or 'natural'. The school playground might be a good spot.
2. Each student should keep a list of both the living organisms and man-made products found in their area (e.g. grass, birds, insects, flowers, sidewalk, etc.). Students are allowed to collect a few specimens from this area to show to the class. If students do not have jars, they can draw their observations. *See Reproducible #1
Activity Two: Who lives in our playground? (10 minutes)
1. After listing, collecting, and drawing specimens, students should return to the classroom and present their findings.
a. Have the students sit in a circle. Each student should read his or her list of findings out loud. If they collected specimens or drew observations, have them present them to the class.
2. Make a list of these findings on the board. Only write repeated findings once (to avoid writing grass as many times as there are students). Keep one list of living organisms and one list of man-made products.
3. For now, focus on the list of living organisms. As a class, help students name possible relationships between the organisms. See if they can find one of each type of relationship. For example, a bee on a flower is an example of mutualism because the nectar from the flower nourishes the bee and, in return, the bee pollinates the flower.
Activity Three: Humans and the Environment: Human Effect on One Square Meter (15 minutes)
1. Now that students have focused on the animal relationships of their square meter, it is time to examine the effect of humans on the natural environment. Focus on the human-made product list. Ask students to consider the possible relationships between the human-made products and the environment. Prompt a brief class discussion on the effects of man-made products on the environment. Use the following questions as guidelines.
a. What is the effect of an empty drink bottle (or any other piece of trash) in a grassy field? Will it decompose? Will it be used by an animal as a habitat or food? Answer: Trash is an invasive man-made product. Most trash is non-biodegradable and is harmful to the environment and to eco-system relations. Therefore, it is a harmful addition to the square meter.
b. Who left the bottle there? Do you think they are still thinking about it? Did they leave it there on purpose? Why did they leave it there? Answer: Most people litter thoughtlessly. They are not thinking about their actions and how they may affect the environment or eco-systems. It is important that people recognize that litter has a major effect on the environment.
c. What about a bench? Does a park bench have the same effect on the environment as a piece of trash? Answer: A park bench can be considered a positive human-made product. A park bench has little negative effect on the environment and even helps humans further appreciate eco-systems.
The park bench may even provide shelter or a perch for the eco-system's living organisms.
d. Is there a difference between positive human-made products and negative ones? What are some examples of each? Answer: Yes, there is a difference between positive and negative human-made products. Positive products have minimal effect on the functioning of eco-systems, whereas negative products have major effects on eco-systems. An example of a positive human-made product would be a solar-powered house. An example of a negative human-made product would be a car that produces a lot of pollution.
Wrap Up: Our Classroom Eco-Web (20-30 minutes)
1. Have students create classroom artwork by illustrating the relationships within their eco-systems.
2. Each student should draw at least two components of his or her square meter.
3. After everyone has finished their illustrations, create a web relating the illustrations. Draw arrows between illustrated components with written indications of the type of relationship exemplified.
4. Post the finished product in the classroom so that students can see the interconnectedness of the earth's eco-systems.
Extension: Exploring Aquatic Eco-Systems (On-going Activity)
Students can explore another type of eco-system by creating a classroom aquarium or terrarium. The supplies for both of these mini eco-systems can be found at your local pet store. Students should help set up and maintain the aquarium or terrarium throughout the year. Periodically, students should observe how the mini eco-system is progressing, note changes, and assess the relationships between the organisms of the eco-system. This way, students are able to directly participate in the functioning of a natural system. Another related activity might be to take your students on a field trip to a different eco-system from that of your school. If you live near a river, lake, or ocean, take them there to explore different ecological relations. If you live in a city, examples of diverse eco-systems can be found at the local zoo or aquarium.
<urn:uuid:c76adb43-fdc6-442d-882e-b7781f7e7d83>
3.921875
1,207
Tutorial
Science & Tech.
52.925334
No one knows how much warming is "safe". What we do know is that climate change is already harming people and ecosystems. Its reality can be seen in melting glaciers, disintegrating polar ice, thawing permafrost, changing monsoon patterns, rising sea levels, changing ecosystems and fatal heat waves. Scientists are not the only ones talking about these changes. From the apple growers in Himachal to the farmers in Vidharbha and those living on disappearing islands in the Sunderbans, people are already struggling with the impacts of climate change. But this is just the beginning. We need to act to avoid catastrophic climate change. While not all regional effects are known yet, here are some likely future effects if we allow current trends to continue. Relatively likely and early effects of small to moderate warming: natural systems, including glaciers, coral reefs, mangroves, Arctic ecosystems, alpine ecosystems, boreal forests, tropical forests, prairie wetlands and native grasslands, will be severely threatened. Longer-term catastrophic effects if warming continues: the Greenland and Antarctic ice sheets are melting. Unless checked, warming from emissions may trigger the irreversible meltdown of the Greenland ice sheet in the coming decades, which would add up to a seven-meter rise in sea level over some centuries. New evidence showing the rate of ice discharge from parts of the Antarctic means that it is also facing a risk of meltdown. Never before has humanity been forced to grapple with such an immense environmental crisis. If we do not take urgent and immediate action to stop global warming, the damage could become irreversible.
<urn:uuid:93f23c86-06b2-4c01-8d4e-f0341afe508c>
3.75
324
Knowledge Article
Science & Tech.
31.269855
The principal purpose of this project is to demonstrate the feasibility of instrumenting heavily ice-covered fjords to obtain real-time data of the upper ocean. Greenland's ice-covered fjords are the connections between the Greenland Ice Sheet and the open ocean. These dynamic environments enable access of warm ocean water to outlet glaciers, causing large amounts of melting under floating tongues (e.g. Rignot and Steffen, 2008; Motyka et al., in press). On the other hand, deep fjords also enable ice to break up mechanically through the process of calving. These icebergs are then transported away from the glaciers, where they eventually melt. The interactions between the ocean, its ice cover (the melange), the glacier ice, and the atmosphere remain poorly understood, mostly due to the extremely difficult conditions for direct observations (e.g. Amundson et al., 2010). Yet, it is increasingly clear that the dynamic behavior of the ice sheet is dominated by its interaction with the surrounding oceans (e.g. Rignot and Kanagaratnam, 2006; Joughin et al., 2008; Holland et al., 2008). It is therefore imperative to gain a better understanding of the physical processes that determine the heat and mass exchange between ocean and ice. This is an issue not only for Greenland, but for all the larger glaciated areas of the planet. The current inability to predict changes at marine-terminating glaciers is responsible for the lack of a reliable estimate of the future cryospheric contribution to sea level rise (IPCC, 2007; Truffer and Fahnestock, 2008). To make progress in the task of predicting the behavior of outlet glaciers, a better understanding of physical processes in glacier-fed fjords is necessary. This will require direct observations. The physical environment for this type of work is extremely challenging. The inner fjords are often covered in brash ice and large icebergs, sometimes mixed with sea ice. Large icebergs can roll, creating hazards to boats. The area very close to glaciers can have turbulent upwelling with fast currents, the proximity of the glacier is too dangerous to work in due to calving activity, and calving events can send meter-scale waves through the fjord. Moorings are difficult to deploy, and they have to be deeply submerged to avoid interaction with the keels of the bigger icebergs, making it impossible to measure processes and exchanges at the critical atmosphere-ice-ocean boundary. Here we propose to measure the properties of the upper water column using drifting buoys. The proposed experiment carries a certain risk, as the equipment could get destroyed. We will attempt to minimize this risk by letting the buoys drift, and by constructing them more solidly, so they are better able to absorb impacts. Also, they will be equipped with Iridium satellite modems, so that data can be uploaded on a regular basis and will not be lost should the buoys fail. We expect to obtain a record of up to one year in length of temperature, salinity and currents in the upper water column (down to ~30 m) of the inner Godthabsfjord, near the main outlet glacier Kangiata Nunata Sermia (KNS). We propose to deploy four Lagrangian drifters: two on the glacier side and two on the outer side of a sill that was created by a previous glacier advance (Mortensen et al., subm. to JGR). The deployments in the heavily ice-covered inner fjord are considered higher risk. The deployments on either side of the sill balance the risk of deploying in heavy ice with the desire to obtain data at those locations.
The other expected result is to gain experience with instrumenting these difficult areas, where many details of physical processes have remained elusive. For example, if drifting buoys prove to be successful, one could develop these further into profiling instruments that are capable of sampling the entire water column. Another possible application is to develop drifting depth sounders to obtain geometric observations where boats cannot penetrate. Before such plans are implemented, it is imperative to gain some experience with lower cost instruments.
<urn:uuid:7e5d03a3-1703-4c51-b973-7b9dbb1300f8>
3.4375
852
Academic Writing
Science & Tech.
40.168238
Silicon is a semiconducting element. It behaves physically and chemically as a non-metal but is able to conduct electricity although not as well as the metals. Silicon is used to make the "chips" or tiny circuits found in everything from computers to VCRs. Scientists at the National Research Council Canada (NRC) have now developed a way to "wire" the surface of a silicon crystal with a single strand of molecules. Their goal is to produce nano-structures, molecular electronic devices one thousand times smaller than a single bacterium.
<urn:uuid:58cd456e-130d-479b-b5fd-0302bc9fa692>
3.515625
219
Knowledge Article
Science & Tech.
35.468882
Clouds and cosmos
I discovered by chance the research conducted by Henrik Svensmark on the process of cloud formation. His results call into question the theses defended by proponents of global warming due to human activities. In doing some research online, I was able to gauge the remarkable misdirection of those who advise governments on decisions, and not just misdirection but also manipulation of information and deliberate muddying of the public debate. The latest work at CERN on the subject confirms Svensmark's findings. Stay tuned, because a close re-examination of the IPCC/GIEC is necessary.
Aug 30
"If it is an unusually warm winter in New York, it is probably also warm in Washington, D.C., for example," Hansen explained. "At high- and mid-latitudes Rossby Waves are the dominant cause of short-term temperature variations. And since those are fairly long waves we didn't think we needed a station at every one degree of separation."
5 October 2012: ESO celebrates its 50th anniversary
The Cosmics Leaving Outdoor Droplets (CLOUD) experiment uses a special cloud chamber to study the possible link between galactic cosmic rays and cloud formation. Based at the Proton Synchrotron (PS) at CERN, this is the first time a high-energy physics accelerator has been used to study atmospheric and climate science. The results should contribute much to our understanding of clouds and climate. Cosmic rays are charged particles that bombard the Earth's atmosphere from outer space. Studies suggest they may have an influence on the amount of cloud cover through the formation of new aerosols (tiny particles suspended in the air that seed cloud droplets). This is supported by satellite measurements, which show a possible correlation between cosmic-ray intensity and the amount of low cloud cover.
CERN Finds "Significant" Cosmic Ray Cloud Effect
Best known for its studies of the fundamental constituents of matter, the CERN particle-physics laboratory in Geneva is now also being used to study the climate. Researchers in the CLOUD collaboration have released the first results from their experiment designed to mimic conditions in the Earth's atmosphere. By firing beams of particles from the lab's Proton Synchrotron accelerator into a gas-filled chamber, they have discovered that cosmic rays could have a role to play in climate by enhancing the production of potentially cloud-seeding aerosols. – Physics World, 24 August 2011
If Henrik Svensmark is right, then we are going down the wrong path of taking all these expensive measures to cut carbon emissions; if he is right, we could carry on with carbon emissions as normal. Jasper Kirkby is a superb scientist, but he has been a lousy politician.
In 1998, anticipating he'd be leading a path-breaking experiment into the sun's role in global warming, he made the mistake of stating that the sun and cosmic rays "will probably be able to account for somewhere between a half and the whole of the increase in the Earth's temperature that we have seen in the last century." Global warming, he theorized, may be part of a natural cycle in the Earth's temperature.
CHURCHVILLE, VA—Get ready for the next big bombshell in the man-made warming debate.
Climate Change: News and Comments
The Danish physicist Henrik Svensmark probably did not suspect, when he supplied his data and offered his remarks to the team running the CLOUD experiment at CERN in Geneva, that the results of this experiment would raise significant political problems.
WUWT reader Max_B tips us off to this article and video. According to Nigel Calder's blog, CERN's CLOUD experiment (testing Svensmark's cosmic-ray theory) shows a large enhancement of aerosol production, and the results are due for release in two or three months' time. There is a short Physics World interview with Jasper Kirkby which is worthwhile viewing and was published a couple of days ago…
J.A. performed the nucleation rate analysis. S.S. conducted the APi-TOF analysis.
Results paper published in GIGS (January 2013)
- Article posted 7 April 2010 - "The plurality of voices is not a proof of any worth, for when a truth is somewhat difficult to discover, it would be surprising if a whole people had come upon it rather than a single man."
The Climatic Research Unit email controversy (also known as "Climategate") began in November 2009 with the hacking of a server at the Climatic Research Unit (CRU) at the University of East Anglia (UEA) by an external attacker. Several weeks before the Copenhagen Summit on climate change, an unknown individual or group breached CRU's server and copied thousands of emails and computer files to various locations on the Internet.
<urn:uuid:57ca599a-245e-40c8-95d4-d743e3a8f855>
2.78125
1,298
Personal Blog
Science & Tech.
43.698981
Deforestation monitoring needs better capacity and access to technologies
Most tropical developing countries are struggling to monitor and report their greenhouse gas emissions from forest loss, and will need international support to implement the UN REDD+ scheme, according to a study. The Reducing Emissions from Deforestation and Degradation (REDD) scheme aims to reverse forest cover loss and curb related carbon emissions by putting a financial value on stored carbon. Countries voluntarily report back on their implementation of REDD+, but many lack the capacity to monitor forest loss and carbon emissions using key technologies such as satellite remote sensing, according to a paper in the May–June issue of Environmental Science and Policy. The study ranked tropical developing countries according to their ability to implement REDD+, and found that few such countries had improved their monitoring capacity between 2005 and 2010, with some, such as Burkina Faso and Mozambique, even losing capacity. African countries were of most concern, as poor Internet connections and satellite coverage limit access to data. Meanwhile, mountainous countries such as Ecuador and Peru face technical challenges in analysing satellite images in areas with significant variations in altitude. Just four of the 99 analysed countries — Argentina, China, India and Mexico — had very small capacity gaps. These countries had also managed to increase their total forest cover between 2005 and 2010, unlike countries with larger gaps, where there was a net loss of forests in the same period. The paper recommends that the former group of countries could serve as advisors in South-South capacity building activities and regional collaboration efforts that could reduce the cost of accessing, processing and analysing remote sensing data. The international community should invest in better access to satellite data, especially for Central African and American countries, the study further recommended. Monitoring of forest fires and vulnerable high-carbon areas, such as tropical peatland systems in South-East Asia, which are being lost to oil palm and pulpwood plantations, was also identified as a priority. Louis Verchot, a co-author of the study from the Center for International Forestry Research in Bogor, Indonesia, called for swift efforts to close capacity gaps. He told SciDev.Net that investment in countries suffering such gaps could yield high returns. "We laid out the study on a country by country basis, so this should help investors to lay out priorities and help target different types of intervention," Verchot added. The study provides useful insights on developing a steady emission reduction scheme for REDD+, said Nirarta Samadhi from Indonesia's REDD+ Task Force. He said it highlighted important details about capability gaps that would be valuable to global supporters.
Environmental Science and Policy doi: 10.1016/j.envsci.2012.01.005 (2012)
pdjmoo ( The Natural Eye Project | United States of America ) 6 May 2012
There is no time left to be fooling about with more reports on greenhouse gas relative to forest loss and the REDD+ programs. We all know enough now to demand that we cease and desist from any further deforestation for many reasons, the least of which is climate change, not to mention all the life and ecosystems being devastated that ultimately impact us humans and indigenous peoples. Further deforestation is a no-win for life and the planet.
The only win is for profits, and we just have to find a biodegradable alternative to timber for consumer needs. The palm oil and agriculture can be addressed without destruction of forests. A better use of our time and money. We can continue to kick the bucket down the road with dates like 2020 or find a way to have a global moratorium on forest destruction NOW... and that will require courage and cooperation from all levels. The matter is urgent. Then you can do all the reports, analysis and studies you want, once the destruction has ceased. A lot of food for thought here and willingness to move beyond our vested interests and old positions for the betterment and good of all life on this planet.
Jorge Laine ( Venezuela ) 8 May 2012
Tropical deforestation does not necessarily mean eventual greenhouse gas increment. Scientists must look for land use changes promoting atmospheric carbon capture and storage: for example, greening of deserts, which constitute almost 1/3 of earth's nonpermafrost land.
<urn:uuid:a39a2d08-01a8-4cfd-a64b-30208373568d>
3.15625
898
Comment Section
Science & Tech.
35.328865
7. A point is on the perpendicular bisector of a line segment if and only if it lies the same distance from the two endpoints.
There are two things to prove here. The first is that if a point is on the perpendicular bisector of a line segment, then it is equidistant from the two endpoints of the segment; the second is the converse. If we only use two-column proofs, the student might get the idea that all proofs have to be two-column proofs. This is not so; it is just that two-column proofs work very well for congruent-triangle proofs. In a congruent-triangle proof, we first need to get three parts of one triangle congruent to the corresponding three parts of the other triangle, note that we have congruent triangles, and then conclude that the things we are trying to prove congruent are corresponding parts of the congruent triangles. That is a minimum of five steps, each step having a reason, which is a previously established statement. The two-column format helps the student keep all of these ideas straight and organized. However, when we get away from congruent-triangle proofs, the two-column format does not always work as well. This result is an example: while it is possible to devise a two-column proof, a prose proof using the isosceles triangle theorems proves to be simpler. If the point is on the perpendicular bisector of the line segment between the two points, then in the triangle whose base is the line segment and whose vertex is the point, the line from the vertex to the midpoint of the base is perpendicular to the base, so the triangle is isosceles, and the point is equidistant from the endpoints of the line segment. For the converse: if the point is equidistant from the endpoints of the line segment, then we again have an isosceles triangle, and the line from the vertex to the midpoint of the base is perpendicular to the base, and is thus the perpendicular bisector of the base.
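The prose proof can also be compressed into a coordinate argument (a supplementary sketch, not part of the original text). Place the endpoints at $A=(-a,0)$ and $B=(a,0)$ with $a \neq 0$, so the perpendicular bisector of the segment is the line $x=0$. For any point $P=(x,y)$,

$$|PA|^2 - |PB|^2 = \left[(x+a)^2 + y^2\right] - \left[(x-a)^2 + y^2\right] = 4ax,$$

which vanishes exactly when $x=0$. Hence $|PA| = |PB|$ if and only if $P$ lies on the perpendicular bisector, giving both directions of the theorem at once.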
<urn:uuid:f8b49205-6972-49a9-83ab-a66cb2a926a1>
3.78125
445
Academic Writing
Science & Tech.
56.653818
Introduction to Integrals
The Definite Integral
The definite integral is a convenient notation used to represent the left-hand and right-hand approximations discussed in the previous section. $\int_a^b f(x)\,dx$ means the area of the region bounded by the graph of $f$, the $x$-axis, and the lines $x = a$ and $x = b$. Writing $\int_a^b f(x)\,dx$ is equivalent to writing the limit of the left-hand and right-hand sums, $\lim_{n\to\infty}\sum_{i=1}^{n} f(x_i)\,\Delta x$, on the interval $[a, b]$, but it is a much more compact way of doing so. Note also the similarity between the two expressions. This should serve as a clear reminder that the definite integral is just the limit of right-hand and left-hand approximations. Unlike the indefinite integral, which represents a function, the definite integral represents a number, and is simply the signed area under the curve of $f$. The area is considered "signed" because, according to the method of calculating areas by subdivisions, regions located below the $x$-axis are counted as negative and regions above are counted as positive. Negative regions cancel out positive regions, and the definite integral represents the total balance between the two over the given interval. For example, consider the definite integral of an odd function over an interval centered at zero. Based on the picture of the region being considered, it should be clear that the answer is zero: the negative region is exactly the same size as the positive region.
Properties of the Definite Integral
The definite integral has certain properties that should be intuitive, given its definition as the signed area under the curve:
- $\int_a^b c f(x)\,dx = c \int_a^b f(x)\,dx$
- $\int_a^b \left[f(x) + g(x)\right] dx = \int_a^b f(x)\,dx + \int_a^b g(x)\,dx$
- If $c$ is on the interval $[a, b]$, then $\int_a^b f(x)\,dx = \int_a^c f(x)\,dx + \int_c^b f(x)\,dx$
This means that we can break up a graph into convenient units, find the definite integral of each section, and then add the results to find the total signed area for the whole region.
The Fundamental Theorem of Calculus
The fundamental theorem of calculus, or "FTC", offers a quick and powerful method of evaluating definite integrals. It states: if $F$ is an antiderivative of $f$, then
$$\int_a^b f(x)\,dx = F(b) - F(a)$$
For example, $\int_0^1 x^2\,dx = \frac{(1)^3}{3} - \frac{(0)^3}{3} = \frac{1}{3}$.
Often, a shorthand is used that means the same as what is written above:
$$\int_0^1 x^2\,dx = \left.\frac{x^3}{3}\right|_0^1 = \frac{1}{3}$$
One interpretation of the FTC is that the area under the graph of the derivative is equal to the total change in the original function. For example, recall that velocity is the derivative of position. So,
$$\int_a^b v(t)\,dt = s(b) - s(a)$$
This means that the signed area under the velocity curve represents the total change in position.
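To connect the limit definition with the FTC numerically, here is a small Python sketch (an illustration added to the lesson, not part of the original) that computes left- and right-hand approximations of $\int_0^1 x^2\,dx$ and watches them converge to $1/3$:

```python
def riemann_sums(f, a, b, n):
    """Left- and right-hand Riemann sums of f on [a, b] with n strips."""
    dx = (b - a) / n
    left = sum(f(a + i * dx) for i in range(n)) * dx
    right = sum(f(a + (i + 1) * dx) for i in range(n)) * dx
    return left, right

f = lambda x: x * x
for n in (10, 100, 1000):
    left, right = riemann_sums(f, 0.0, 1.0, n)
    print(f"n={n:5d}  left={left:.6f}  right={right:.6f}")
# Both columns approach 1/3, the value the FTC gives instantly.
```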
<urn:uuid:f092ba97-0d03-497d-811a-07a6f4f4e39d>
4.5
580
Tutorial
Science & Tech.
46.219777
From the time of Aristotle (384-322 BC) until the late 1500s, gravity was believed to act differently on different objects.
- Drop a metal bar and a feather at the same time… which one hits the ground first?
- Obviously, common sense will tell you that the bar will hit first, while the feather slowly flutters to the ground.
- In Aristotle's view, this was because the bar was being pulled harder (and faster) by gravity because of its physical properties.
- Because everyone sees this when they drop different objects, it wasn't questioned for almost 2000 years.
Galileo Galilei was the first major scientist to refute (prove wrong) Aristotle's theories.
- In his famous (at least to physicists!) experiment, Galileo went to the top of the Leaning Tower of Pisa and dropped a wooden ball and a lead ball, both the same size, but different masses.
- They both hit the ground at the same time, even though Aristotle would say that the heavier metal ball should hit first.
- Galileo had shown that the different rates at which some objects fall are due to air resistance, a type of friction.
- Get rid of friction (air resistance) and all objects will fall at the same rate.
- Galileo said that the acceleration of any object (in the absence of air resistance) is the same.
- To this day we follow the model that Galileo created.
ag = g = 9.81 m/s2, where ag = g = acceleration due to gravity
Since gravity is just an acceleration like any other, it can be used in any of the formulas that we have used so far.
- Just be careful about using the correct sign (positive or negative) depending on the problem.
Examples of Calculations with Gravity
Example 1: A ball is thrown up into the air at an initial velocity of 56.3 m/s. Determine its velocity after 4.52 s have passed.
In the question the velocity upwards is positive, and I'll keep it that way. That just means that I have to make sure that I use gravity as a negative number, since gravity always acts down.
vf = vi + at = 56.3 m/s + (-9.81 m/s2)(4.52 s)
vf = 12.0 m/s
This value is still positive, but smaller. The ball is slowing down as it rises into the air.
Example 2: I throw a ball down off the top of a cliff so that it leaves my hand at 12 m/s. Determine how fast it is going 3.47 seconds later.
In this question I gave a downward velocity as positive. I might as well stick with this, but that means I have defined down as positive. That means gravity will be positive as well.
vf = vi + at = 12 m/s + (9.81 m/s2)(3.47 s)
vf = 46 m/s
Here the number is getting bigger. It's positive, but in this question I've defined down as positive, so it's speeding up in the positive direction.
Example 3: I throw up a ball at 56.3 m/s again. Determine how fast it is going after 8.0 s.
We're defining up as positive again.
vf = vi + at = 56.3 m/s + (-9.81 m/s2)(8.0 s)
vf = -22 m/s
Why did I get a negative answer?
- The ball reached its maximum height, where it stopped, and then started to fall down.
- Falling down means a negative velocity.
There are a few rules that you have to keep track of. Let's look at the way an object thrown up into the air moves.
As the ball is going up…
- It starts at the bottom at the maximum speed.
- As it rises, it slows down.
- It finally reaches its maximum height, where for a moment its velocity is zero.
- This is exactly halfway through the flight time.
As the ball is coming down…
- The ball begins to speed up, but downwards.
- When it reaches the same height that it started from, it will be going at the same speed as it was originally moving at.
- It takes just as long to go up as it takes to come down.
Example 4: I throw my ball up into the air (again) at a velocity of 56.3 m/s.
a) Determine how much time it takes to reach its maximum height.
- It reaches its maximum height when its velocity is zero. We'll use that as the final velocity.
- Also, if we define up as positive, we need to remember to define down (like gravity) as negative.
a = (vf - vi) / t
t = (vf - vi) / a = (0 - 56.3m/s) / -9.81m/s2
t = 5.74s
b) Determine how high it goes.
- It's best to try to avoid using the number you calculated in part (a), since if you made a mistake, this answer will be wrong also.
- If you can't avoid it, then go ahead and use it.
vf2 = vi2 + 2ad
d = (vf2 - vi2) / 2a = (0 - 56.32) / 2(-9.81m/s2)
d = 1.62e2 m
c) Determine how fast it is going when it reaches my hand again.
- Ignoring air resistance, it will be going as fast coming down as it was going up.
You might have heard people in movies say how many "gee's" they were feeling.
- All this means is that they are comparing the acceleration they are feeling to regular gravity.
- So, right now, you are experiencing 1g… regular gravity.
- During lift-off the astronauts in the space shuttle experience about 4g's.
- That works out to about 39m/s2.
- Gravity on the moon is about 1.7m/s2 = 0.17g
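The arithmetic in these examples is easy to check with a few lines of code. Below is a minimal Python sketch (our addition, not part of the original lesson) that reproduces Examples 1 through 4 using the same constant-acceleration formulas; the function name is ours.

G = 9.81  # magnitude of the acceleration due to gravity, m/s2

def final_velocity(vi, a, t):
    # vf = vi + a*t
    return vi + a * t

# Example 1: up is positive, so gravity is negative
print(final_velocity(56.3, -G, 4.52))   # ~12.0 m/s
# Example 2: down is positive, so gravity is positive
print(final_velocity(12.0, G, 3.47))    # ~46 m/s
# Example 3: up is positive again
print(final_velocity(56.3, -G, 8.0))    # ~-22 m/s (already falling)

# Example 4a: time to the top, where vf = 0
t_top = (0 - 56.3) / -G
print(t_top)                            # ~5.74 s
# Example 4b: height, from vf2 = vi2 + 2ad
d = (0 - 56.3**2) / (2 * -G)
print(d)                                # ~1.62e2 m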
<urn:uuid:43ce7457-915e-4a8a-b78f-fca95b28656c>
3.953125
1,359
Tutorial
Science & Tech.
81.255107
This weather balloon is full of helium gas. It is surrounded by Earth's atmosphere, which is mostly nitrogen and oxygen gases. Helium is "lighter" (less dense) than nitrogen or oxygen, so the balloon will rise when the scientist lets go of it. Image courtesy of the University Corporation for Atmospheric Research.
Gas is one of the four common states of matter. The three others are liquid, solid, and plasma. There are also some other exotic states of matter that have been discovered in recent years. The air in Earth's atmosphere is mostly a mixture of different types of gases. A gas usually has much lower density than a solid or liquid. A quantity of gas doesn't have a specific shape; in this way it is like a liquid and different from a solid. If a gas is enclosed in a container, it will take on the shape of the container (a liquid will too). The volume of a gas changes if the temperature or pressure changes. There are several scientific laws, called the "gas laws", that describe how the volume, temperature, and pressure of a gas are related. The molecules or atoms in a gas are much further apart than in a solid or a liquid. Gas molecules or atoms are usually flying around at very high speeds, occasionally bouncing off each other or the walls of the container the gas is in. When a gas is cooled or placed under high pressure, it can condense and turn into a liquid. If a liquid boils or evaporates, it will become a gas. Under some circumstances, usually very low pressure, a solid can turn directly into a gas (without first melting and becoming a liquid). When a solid turns directly into a gas, it is called "sublimation". Most of the air in Earth's atmosphere is either nitrogen or oxygen gas. Balloons are often filled with helium gas; since helium is lighter (less dense) than air, helium balloons "float" or rise up in air. When liquid water boils or evaporates, it turns into a gas called "water vapor". Most of the gas in the atmospheres of the giant planets Jupiter and Saturn is hydrogen gas. In recent years, carbon dioxide gas has become quite famous because of its role in the Greenhouse Effect and global warming.
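The "gas laws" mentioned above can be summarized, for an ideal gas, by the relation PV = nRT. The short Python sketch below is our addition rather than part of the original article; it shows how the volume of a fixed amount of gas grows when it is warmed at constant pressure, assuming ideal-gas behavior.

R = 8.314  # ideal gas constant, J/(mol*K)

def volume(n_moles, temp_kelvin, pressure_pa):
    # Ideal gas law: P*V = n*R*T, solved for V
    return n_moles * R * temp_kelvin / pressure_pa

v_cold = volume(1.0, 273.0, 101325.0)  # 1 mol of helium at 0 C, sea-level pressure
v_warm = volume(1.0, 293.0, 101325.0)  # the same gas at 20 C
print(v_cold, v_warm)  # roughly 0.0224 m^3 -> 0.0240 m^3: warmer gas takes up more room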
<urn:uuid:7bf1d597-0484-4471-8f33-5503ed8fe8ab>
3.875
860
Knowledge Article
Science & Tech.
56.872293
Introduction to Enzymes
The following has been excerpted from a very popular Worthington publication which was originally published in 1972 as the Manual of Clinical Enzyme Measurements. While some of the presentation may seem somewhat dated, the basic concepts are still helpful for researchers who must use enzymes but who have little background in enzymology.
Early Enzyme Discoveries
The existence of enzymes has been known for well over a century. Some of the earliest studies were performed in 1835 by the Swedish chemist Jöns Jakob Berzelius, who termed their chemical action catalytic. It was not until 1926, however, that the first enzyme was obtained in pure form, a feat accomplished by James B. Sumner of Cornell University. Sumner was able to isolate and crystallize the enzyme urease from the jack bean. His work was to earn him the 1947 Nobel Prize.
John H. Northrop and Wendell M. Stanley of the Rockefeller Institute for Medical Research shared the 1947 Nobel Prize with Sumner. They developed a complex procedure for isolating pepsin. This precipitation technique devised by Northrop and Stanley has been used to crystallize several enzymes.
<urn:uuid:b1f146ea-468c-4e1e-980b-c4c17efb5378>
3.734375
235
Knowledge Article
Science & Tech.
36.452391
Introduction to Enzymes
The following has been excerpted from a very popular Worthington publication which was originally published in 1972 as the Manual of Clinical Enzyme Measurements. While some of the presentation may seem somewhat dated, the basic concepts are still helpful for researchers who must use enzymes but who have little background in enzymology.
Effects of pH
Enzymes are affected by changes in pH. The most favorable pH value - the point where the enzyme is most active - is known as the optimum pH. This is graphically illustrated in Figure 14. Extremely high or low pH values generally result in complete loss of activity for most enzymes. pH is also a factor in the stability of enzymes. As with activity, for each enzyme there is also a region of pH optimal stability. The optimum pH value will vary greatly from one enzyme to another, as Table II shows.
In addition to temperature and pH there are other factors, such as ionic strength, which can affect the enzymatic reaction. Each of these physical and chemical parameters must be considered and optimized in order for an enzymatic reaction to be accurate and reproducible.
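As an illustration of the optimum-pH idea, here is a toy Python model. It is our addition, not from the Worthington manual; the bell-shaped curve and the parameter values are illustrative assumptions, not measured enzyme data.

import math

def relative_activity(ph, ph_optimum=7.0, width=1.5):
    # Toy bell-shaped curve: activity peaks at the optimum pH
    # and falls off toward the extremes.
    return math.exp(-((ph - ph_optimum) / width) ** 2)

for ph in [3, 5, 7, 9, 11]:
    print(ph, round(relative_activity(ph), 3))
# Activity is highest at pH 7 in this toy model and drops sharply
# at extreme pH values, mimicking the shape described for Figure 14.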
<urn:uuid:950e10c6-23a1-4ac4-896e-da30b265af84>
3.828125
234
Knowledge Article
Science & Tech.
32.865254
Some scientists say it's an exciting "start". We take a boat to see the results. At the moment, we're whizzing down a channel a few miles downstream from Caernarvon. Denise Reed spends a lot of her life in these wetlands: she studies them for Louisiana State University. Reed would make a great scout leader: she's got no-nonsense hair, an infectious smile, and she forges through the grasses on this wetland like she's leading an expedition.
"We're going to go over to see some marsh over there, by those trees," Reed says. "And that's where we're going to see how the freshwater, the nutrients and the sediments coming out of the diversion structure are revitalizing the marsh. So we're gonna go see. It's right over on the other side there ... Look at all this wonderful green, you know, there's nice big growth on these plants."
Tracing the effects of the Caernarvon. Photo: William Brangham/NOW with Bill Moyers
Reed says if we had walked here before they started the Caernarvon project, it would have felt completely different. This wetland was sick back then, and when wetlands are sick, the soil gets all mushy and turns into open water. But now we're walking on solid ground. "You look at those ponds over there in the distance," Reed explains, "you see how the grass is gradually moving in and filling in. You can see that just here, you can see that grass growing out into the middle of this area. This would have all been bare. What is land loss? Land loss is marsh turning to open water. Here we've got open water in ponds filling in and becoming marsh. A lot of people think it's hopeless down here in coastal Louisiana, but just coming down here and looking at this makes us believe that we can do this."
But these changes have disrupted some people's lives. The problem is, the minute you put your finger on a map and say, 'Let's tinker with nature here, let's mimic the old floods there,' chances are that you might flood somebody's backyard. Or you'll disrupt the bays and inlets where George Barisich does his fishing.
Next: A Plague of Killer Mussels
<urn:uuid:75e536bc-b651-4d8e-87c9-5ed446966253>
3.09375
477
Audio Transcript
Science & Tech.
71.227569
Ever see a monarch butterfly? They have bright orange and black wings, and every year they fly from Canada to Mexico and then back again. Each individual butterfly doesn’t make the trip, but females lay eggs along the way and their offspring continue on. What a trip! Some people think monarch butterflies are in danger because they eat milkweed plants, and milkweed plants are getting harder to find. The problem is that an insect called the milkweed stem weevil also likes to eat milkweed plants, and it eats a lot of them. But an Agricultural Research Service (ARS) scientist made a discovery that could help save milkweed plants and monarchs. The scientist, Charles Suh, was working on a new boll weevil trap when he made his discovery. Boll weevils are a problem for farmers because they attack cotton plants, so farmers in Texas asked Suh to find out why their boll weevil traps weren’t working. Suh asked the trap manufacturer to make a trap with the exact mix of natural compounds that boll weevils use to sniff out each other. Suh placed the new traps in cotton fields and found that they didn’t catch any more boll weevils, but they did catch a lot of the milkweed stem weevils that eat milkweed plants. With a little more work, the discovery could lead to traps that control milkweed stem weevils. That would mean enough milkweed plants for monarch butterflies to keep making those long distance trips. By Dennis O'Brien, Agricultural Research Service, Information Staff
<urn:uuid:1bdcc472-c55c-4506-a05a-5c28629a7118>
3.765625
329
Knowledge Article
Science & Tech.
59.046274
The figure tag is used to provide the structure for inserting a figure into a CNXML document. A figure may contain an image, multimedia object, or caption tag. For example (the opening and closing tags below are reconstructed, since the original markup was garbled in extraction; the id value on figure is illustrative):

<figure id="dogfig">
  <title>The World's Cutest Dog</title>
  <media id="dogpic" alt="A dog sitting on a bed">
    <image mime-type="image/jpeg" src="image1.jpg" />
  </media>
  <caption>Notice how cute the dog is just sitting there.</caption>
</figure>

Results in this display:
Figure 1: Notice how cute the dog is just sitting there.
|The World's Cutest Dog|

orientation
Allows you to determine which way subfigure elements are arranged. Has no effect if the figure has no subfigure children.
- horizontal - Subfigures appear side by side (default).
- vertical - Subfigures appear one on top of the other.

type
Defines the type of figure in order to give specialized control over numbering. Figures of the same type are numbered in series (i.e., Figure 1, Figure 2...). Type can be used in conjunction with label so that figures of each user-defined type appear with their own label. Type can be any user-defined value that reflects the purpose of the figure.

id
A unique identifier, whose value must begin with a letter and contain only letters, numbers, hyphens, underscores, colons, and/or periods (no spaces).

A figure may contain an optional label tag, followed by an optional title tag. Next, it must contain either a single media, table, or code element, or a set of subfigure elements. Finally, it may contain an optional caption tag.
<urn:uuid:8c5551fd-d2dc-4ef1-966a-c0bcfa8ee5d2>
2.875
327
Documentation
Software Dev.
48.186005
There is a passage in On Intelligence about the differences between parallel processing in humans versus computers:
From the dawn of the industrial revolution, people have viewed the brain as some sort of machine. They knew there weren't gears and cogs in the head, but it was the best metaphor they had. Somehow information entered the brain and the brain-machine determined how the body should react. During the computer age, the brain has been viewed as a particular type of machine, the programmable computer. And as we saw in chapter 1, AI researchers have stuck with this view, arguing that their lack of progress is only due to how small and slow computers remain compared to the human brain. Today's computers may be equivalent only to a cockroach brain, they say, but when we make bigger and faster computers they will be as intelligent as humans.
There is a largely ignored problem with this brain-as-computer analogy. Neurons are quite slow compared to the transistors in a computer. A neuron collects inputs from its synapses, and combines these inputs together to decide when to output a spike to other neurons. A typical neuron can do this and reset itself in about five milliseconds (5 ms), or around two hundred times per second. This may seem fast, but a modern silicon-based computer can do one billion operations in a second. This means a basic computer operation is five million times faster than the basic operation in your brain! That is a very, very big difference. So how is it possible that a brain could be faster and more powerful than our fastest digital computers? "No problem," say the brain-as-computer people. "The brain is a parallel computer. It has billions of cells all computing at the same time. This parallelism vastly multiplies the processing power of the brain."
I always felt this argument was a fallacy, and a simple thought experiment shows why. It is called the "one hundred–step rule." A human can perform significant tasks in much less time than a second. For example, I could show you a photograph and ask you to determine if there is a cat in the image. Your job would be to push a button if there is a cat, but not if you see a bear or a warthog or a turnip. This task is difficult or impossible for a computer to perform today, yet a human can do it reliably in half a second or less. But neurons are slow, so in that half a second, the information entering your brain can only traverse a chain one hundred neurons long. That is, the brain "computes" solutions to problems like this in one hundred steps or fewer, regardless of how many total neurons might be involved. From the time light enters your eye to the time you press the button, a chain no longer than one hundred neurons could be involved. A digital computer attempting to solve the same problem would take billions of steps. One hundred computer instructions are barely enough to move a single character on the computer's display, let alone do something useful.
But if I have many millions of neurons working together, isn't that like a parallel computer? Not really. Brains operate in parallel and parallel computers operate in parallel, but that's the only thing they have in common. Parallel computers combine many fast computers to work on large problems such as computing tomorrow's weather. To predict the weather you have to compute the physical conditions at many points on the planet. Each computer can work on a different location at the same time.
But even though there may be hundreds or even thousands of computers working in parallel, the individual computers still need to perform billions or trillions of steps to accomplish their task. The largest conceivable parallel computer can't do anything useful in one hundred steps, no matter how large or how fast.
Here is an analogy. Suppose I ask you to carry one hundred stone blocks across a desert. You can carry one stone at a time and it takes a million steps to cross the desert. You figure this will take a long time to complete by yourself, so you recruit a hundred workers to do it in parallel. The task now goes a hundred times faster, but it still requires a minimum of a million steps to cross the desert. Hiring more workers, even a thousand workers, wouldn't provide any additional gain. No matter how many workers you hire, the problem cannot be solved in less time than it takes to walk a million steps. The same is true for parallel computers. After a point, adding more processors doesn't make a difference. A computer, no matter how many processors it might have and no matter how fast it runs, cannot "compute" the answer to difficult problems in one hundred steps. So how can a brain perform difficult tasks in one hundred steps that the largest parallel computer imaginable can't solve in a million or a billion steps? The answer is the brain doesn't "compute" the answers to problems; it retrieves the answers from memory. In essence, the answers were stored in memory a long time ago. It only takes a few steps to retrieve something from memory. Slow neurons are not only fast enough to do this, but they constitute the memory themselves. The entire cortex is a memory system. It isn't a computer at all.
The point made here is that the computing paradigm (that is, the way the whole thing works) of the brain and the computer are completely different. The computer is a Turing machine, and the brain is something else, possibly a memory system if you think that Jeff Hawkins is right. Whatever it is, the brain is not a Turing machine.
To go back to your question: Why can't human brains be used to do massive parallel processing in the same way computers are doing today? It has to do with the way the human brain works. If you assume that the brain will do any task in a parallel fashion, and that the more neurons involved, the better the performance, then in order to maximize your performance you should use your whole brain. 1 task: 100% performance, 2 tasks: 50% performance, 3 tasks: 33% performance, and so on. But if you add an "attention switching cost" to go from one task to another, then you are better off just focusing on one task, where the switching cost is zero. So you can multitask, but it won't be efficient.
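The desert-crossing analogy is easy to make concrete. The Python sketch below is our illustration, not from the book: the total elapsed time is governed by the per-trip latency of a million steps, so adding workers beyond the number of stones buys nothing.

import math

def crossing_time(stones, workers, steps_per_crossing=1_000_000):
    # Each worker carries one stone per crossing; workers walk in parallel,
    # so elapsed time is (number of rounds) x (steps per crossing).
    rounds = math.ceil(stones / workers)
    return rounds * steps_per_crossing

print(crossing_time(100, 1))     # 100,000,000 steps of elapsed time
print(crossing_time(100, 100))   # 1,000,000 steps
print(crossing_time(100, 1000))  # still 1,000,000 steps: the latency floor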
<urn:uuid:4e2434c7-881c-47a7-922d-d412827921b3>
3.625
1,378
Q&A Forum
Science & Tech.
53.529127
Date: January 1, 1959
Description: With the assumptions that Berthelot's equation of state accounts for molecular size and intermolecular force effects, and that changes in the vibrational heat capacities are given by a Planck term, expressions are developed for analyzing one-dimensional flows of a diatomic gas. The special cases of flow through normal and oblique shocks in free air at sea level are investigated. It is found that up to a Mach number of 10 the pressure ratio across a normal shock differs by less than 6 percent from its ideal-gas value, whereas at Mach numbers above 4 the temperature rise is considerably below, and hence the density rise is well above, that predicted assuming ideal-gas behavior. It is further shown that only the caloric imperfection in air has an appreciable effect on the pressures developed in the shock process considered. The effects of gaseous imperfections on oblique shock-flows are studied from the standpoint of their influence on the lift and pressure drag of a flat plate operating at Mach numbers of 10 and 20. The influence is found to be small. (author).
Contributing Partner: UNT Libraries Government Documents Department
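For reference, Berthelot's equation of state, which the report assumes in order to capture molecular size and intermolecular force effects, is commonly written as follows. This form is supplied here from standard thermodynamics references, not quoted from the report itself:

$$p = \frac{RT}{v - b} - \frac{a}{T v^2}$$

where $p$ is pressure, $T$ temperature, $v$ the specific volume, $R$ the gas constant, and $a$ and $b$ gas-dependent constants representing intermolecular attraction and molecular size, respectively.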
<urn:uuid:ab30b57c-df72-4818-9b0c-4b1841f9a159>
3.046875
229
Academic Writing
Science & Tech.
31.567098
Proceedings of the International Astronomical Union (2005), 2004:4748 Cambridge University Press
Nowadays, more than one hundred extra-solar planets are known, and about a dozen multi-planetary systems have been discovered. Most of them have been detected by the radial velocity (RV) method. The recovery of orbital parameters from RV data leads to several problems. RV data usually cover, irregularly, a short time interval that is frequently shorter than the orbital period of the most distant planet. Moreover, the observations contain noise due to the instabilities of the star, and the distribution of this noise is unknown. A precise determination of the dynamical state of a multi-planetary system is important for understanding its stability and evolution. In most cases observers determine the orbital parameters for multi-planetary systems by simply fitting a sum of Keplerian orbits. The parameters obtained in such a way are in most cases the only accessible data about an extra-solar system, because the observers very rarely publish their observations. However, the parameters from a multi-Keplerian fit, as has already been observed by many authors, cannot be interpreted as the osculating elements for actual planetary orbits. Moreover, these parameters can be considered as Keplerian elements of relative, barycentric, or Jacobi orbits. One can find arguments that the interpretation of parameters from a multi-Keplerian fit as elements of Keplerian orbits in the Jacobi coordinates is the most proper one, see [Lee and Peale, 2002; Godziewski et al. 2003].
Our first aim was to determine how badly a multi-Keplerian fit determines osculating orbits. To this end, we performed several numerical simulations. For a chosen planetary system with two planets we generated synthetic RV observations using the Newtonian three-body problem. Then we fitted the Keplerian model to these observations and compared the obtained Keplerian elements with the true osculating elements of the orbits. Then we changed the semi-major axis and the eccentricity of one planet and repeated all calculations. In this way we obtained maps of differences between the true and the fitted Keplerian elements (relative, barycentric and Jacobi) for a given system with two planets.
The conclusions from these experiments are as follows. Even for a quite big separation of planets (2 AU), multi-Keplerian fits are bad. The errors appear mainly in the positions of planets in their orbits and can reach 60 deg and more. The errors in eccentricities and semi-major axes reach a few percent, but they can be bigger for bigger masses of planets, or when the observations cover only a part of the period of the external planet. Moreover, the errors are maximal for systems close to a mean-motion resonance. All the above conclusions do not depend on how we interpret the parameters of a Keplerian fit: relative, barycentric, as well as Jacobi elements are equally bad if we look at the overall results.
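As background, a single-planet Keplerian RV model of the kind being fitted here can be written in a few lines. The Python sketch below is ours, not the authors'; parameter names follow common convention (K semi-amplitude, P period, e eccentricity, omega argument of periastron), and the sample values are illustrative, roughly 51 Peg-like, rather than taken from the paper.

import math

def radial_velocity(t, K, P, e, omega, t_peri):
    # Mean anomaly at time t
    M = 2 * math.pi * (t - t_peri) / P
    # Solve Kepler's equation M = E - e*sin(E) by fixed-point iteration
    # (converges for e < 1)
    E = M
    for _ in range(50):
        E = M + e * math.sin(E)
    # True anomaly from the eccentric anomaly
    nu = 2 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                        math.sqrt(1 - e) * math.cos(E / 2))
    # Keplerian radial-velocity curve (systemic velocity omitted)
    return K * (math.cos(nu + omega) + e * math.cos(omega))

print(radial_velocity(1.0, 55.0, 4.23, 0.01, 0.0, 0.0))

A multi-Keplerian fit simply sums one such curve per planet and adjusts the parameters to match the observations; the paper's point is that the resulting elements must be interpreted with care.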
<urn:uuid:99b65116-b661-429a-b76e-06694fb16baa>
2.6875
600
Academic Writing
Science & Tech.
27.195116
Since you are having this confusion, I think it helps to consider the concepts of zero, infinity and "undefined".
In the most basic sense, division is the opposite of multiplication. Thus, the fact that 2 x 3 = 6 implies that 6 / 3 = 2.
1 x 0 = 0. Applying the above logic, 0 / 0 = 1. However, 2 x 0 = 0, so 0 / 0 must also be 2. In fact, it looks as though 0 / 0 could be any number! This obviously makes no sense - we say that 0 / 0 is "undefined" because there isn't really an answer.
Likewise, 1 / 0 is not really infinity. Infinity isn't actually a number, it's more of a concept. If you think about how division is often described in schools, say, as a number of sweets shared between a number of people, you see the confusion. If I go around some people giving them 0 sweets each, how many people do I need to go around until I have given away my 1 sweet? An infinite number? Kind of, because I can keep going around infinitely. However, I never actually give away that sweet. This is why people say that 1 / 0 "tends to" infinity - we can't really use infinity as a number, we can only imagine what we are getting closer to as we move in the direction of infinity. However, in this case, the number of sweets I have is never changing, so I'm not really getting closer to anywhere. Even this logic doesn't really work.
The long and short of it is that 1 / 0 doesn't really make sense as a calculation. When we do use the notion of infinity here, the choice of positive infinity over negative infinity is purely a convention. However, if you think about it too hard you start to get into philosophy and stuff, like "what actually is infinity?" and "wait, what is a number?"
The cases people mention where 1 / 0 does make sense involve different ways of using numbers, so they don't really count here. For example, in the trivial ring, there is only one number, which works like a 0 (add it to anything and you get that thing) and a 1 (multiply it by anything and you get the same thing again); this makes sense because you can only add it to or multiply it by itself to get itself. It's pretty boring actually, but in that case this one number - let's call it x - is both 0 and 1, so 1 / 0 = x / x = x, because everything equals x. As you can see, this is a bit of a cheat because we don't even have enough numbers to have a notion of 1 / 0 in the way you're thinking of it.
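Programming languages make this distinction concrete. The Python snippet below is our addition to the thread; IEEE-754 floating point (used here via NumPy) adopts the conventions inf for 1/0 and nan ("not a number") for 0/0, which mirror the "tends to infinity" and "undefined" ideas above.

import numpy as np

# Integer division by zero is simply an error in Python:
try:
    1 // 0
except ZeroDivisionError as exc:
    print("integers:", exc)

# IEEE-754 floats instead define conventional results:
with np.errstate(divide="ignore", invalid="ignore"):
    print(np.float64(1.0) / np.float64(0.0))   # inf  (a convention, not a number)
    print(np.float64(-1.0) / np.float64(0.0))  # -inf (sign chosen by convention)
    print(np.float64(0.0) / np.float64(0.0))   # nan  ("undefined")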
<urn:uuid:f5ecf10f-b9a5-4afd-b6d2-50e952f5505b>
3
565
Q&A Forum
Science & Tech.
71.001875
This shape would appear to be a rectangular prism. The lateral area (the area of every side except the top and bottom) is given by the formula:
LA = ph (perimeter of the base multiplied by the height)
The surface area is then found by adding the LA to the areas of the bases (top and bottom).
SA = LA + 2B
In your figure:
LA = 18 times 3 = 54 square cm
SA = 54 + 2(14) = 82 square cm
The volume of a right rectangular prism is given by the formula:
V = LWH (length times width times height)
In your case, V = 7 X 2 X 3 = 42 cubic cm.
Well, on the off chance you can't understand masters' explanation, the easiest way to get the area is just to say:
Total surface area = Sum of the areas of each face.
So you've got six faces, all rectangles. The area of a rectangle is obtained by just multiplying the two sides. So, say, the face at the top is 7x2 = 14. And the face at the bottom will be the same = 14. The face in front is 7x3 = 21, as is the one at the back = 21. The other face you can see is 3x2 = 6, and the corresponding one that you can't see = 6 also.
So the sum of the six faces is 14+14+21+21+6+6 = 82.
The Surface Area is found by adding the areas of all the faces or, more simply, if the base is "l" by "w" and the height is "h", then the surface area is given by:
SA = 2(lw + hl + hw)
Another method I already gave you in my earlier post.
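To check these formulas for any box-shaped solid, here is a short Python sketch (our addition to the thread); the function name is ours.

def prism_measures(l, w, h):
    perimeter = 2 * (l + w)          # perimeter of the base
    lateral = perimeter * h          # LA = p * h
    surface = lateral + 2 * (l * w)  # SA = LA + 2B
    volume = l * w * h               # V = l * w * h
    return lateral, surface, volume

print(prism_measures(7, 2, 3))  # (54, 82, 42), matching the answers above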
<urn:uuid:511ac3d6-f1e2-46a9-9f2e-fc151104aeed>
3.625
385
Q&A Forum
Science & Tech.
81.671952
As many know, this is the 150th anniversary of the publication of On the Origin of Species. If I may be so bold, one of the things that might distinguish our thinking about evolution in the last 50 years from the first hundred years might be the speed at which natural selection can operate. For a long time, we thought of evolution taking long times: millions of years would be needed to see the gradual accumulation of changes. We learned in the past few decades that we can see the effects of selection over the course of a few decades.
There are a few fast-changing situations that should press the fast-forward button on natural selection. Invasions are one. That's why they're invasions, not slow expansions. Boronow and Langkilde look at how the invasion of red fire ants is affecting fence lizards.
The ants (Solenopsis invicta) are nasty little buggers. A dozen will kill a fence lizard in less than a minute. You'd think that would apply some pretty strong selection to the lizards if they have any traits in the population that provide even a little defense against the ants.
To test whether natural selection has started acting on the fence lizards (Sceloporus undulatus), they collected lizards from two locations: one was invaded by the ants 70 years ago, and the other has not been invaded yet. Then, they allowed some angry ants to bite restrained lizards, and measured the animals' performance on several behavioural tasks, like biting, running, and so on. A control group of lizards was handled, but not bitten. They also looked at the effect of dilute venom on the lizards' blood directly.
The bottom line? There's no effect. The lizards from the region that had been putting up with ants for seven decades had the same behavioural responses to the ants as lizards from the region with no ants. No differences in the blood responses to venom, either, though the blood was affected by venom.
The authors suggest that the ant venom might have a "tipping point." Less than a certain dose, and the lizard is fine. More than that dose, and you've got a scaly corpse. The range in between "fine" and "dead" could be minuscule, in which case, there may not be a lot of variation for natural selection to work on. Thus, if the lizards can keep the bites under the critical value, they suffer no fitness consequences.
Another issue is that the fence lizards do live with other fire ants, like Solenopsis xyloni. These have weaker venom, and they're not as numerous as the red fire ants, but it might be that the fence lizards have already been pushed to have defenses against fire ants.
A third possibility is simply that there is no existing variation that gives some members of the population greater resistance than others. Seventy years, which is about 35 generations of lizards, is quite a while, but may not be long enough. Who knows when just the right mutation will give some lucky lizard – and its offspring – a selective advantage.
Boronow, K., & Langkilde, T. (2009). Sublethal effects of invasive fire ant venom on a native lizard. Journal of Experimental Zoology Part A: Ecological Genetics and Physiology, 9999A. DOI: 10.1002/jez.570
Lizard picture by J.N. Stewart on Flickr, used under a Creative Commons license. Ant picture by AJC1 on Flickr, used under a Creative Commons license.
<urn:uuid:30101667-01b1-4e3c-895f-4681fab593c3>
3.109375
756
Personal Blog
Science & Tech.
54.689741
On November 25, 1952, three months after returning from England, Pauling finally made a serious stab at a structure for DNA. The immediate spur was a Caltech biology seminar given by Robley Williams, a Berkeley professor who had done some amazing work with an electron microscope. Through a complicated technique he was able to get images of incredibly small biological structures. Pauling was spellbound. One of Williams's photos showed long, tangled strands of sodium ribonucleate, the salt of a form of nucleic acid, shaded so that three-dimensional details could be seen. To Pauling the strands appeared cylindrical. He guessed then, looking at these black-and-white slides in the darkened seminar room, that DNA was likely to be a helix. No other conformation would fit both Astbury's x-ray patterns of the molecule and the photos he was seeing. Even better, Williams was able to estimate the sizes of structures on his photos, and his work showed that each strand was about 15 angstroms across. Pauling was interested enough to ask him to repeat the figure, which Williams qualified by noting the difficulty he had in making precise measurements. The next day, Pauling sat at his desk with a pencil, a sheaf of paper, and a slide rule. New data that summer from Alexander Todd's laboratory had confirmed the linkage points between the sugars and phosphates in DNA; other work showed where they connected to the bases. Pauling was already convinced from his earlier work that the various-sized bases had to be on the outside of the molecule; the phosphates, on the inside. Now he knew that the molecule was probably helical. These were his starting points for a preliminary look at DNA. He still lacked critical data - he had no decent x-ray images, for instance, and no firm structural data on the precise sizes and bonding angles of the base-sugar-phosphate building blocks of DNA - but he went with what he had. It was a mistake. After a few pages of theorizing, using sketchy and sometimes incorrect data, Pauling became convinced - as Watson and Crick had been at first - that DNA was a three-stranded structure with the phosphates on the inside. Unfortunately, he had no Rosalind Franklin to set him right.
<urn:uuid:bb706256-060f-4d0e-abdd-85b7887e5fba>
3.921875
491
Nonfiction Writing
Science & Tech.
47.167632
Assume you have a planet of mass $M$ and radius $R$, and a stationary spaceship at distance $4R$ from the center of the planet. If a projectile of mass $m$ is launched from the spaceship with velocity $v$ and just grazes the planet's surface, what will be the locus of the projectile?
I guess on Earth we take the projectile path to be parabolic because there is essentially no variation in the acceleration due to gravity. But in this case the acceleration due to gravity will change with distance. So in the end, will the trajectory be parabolic, elliptical, or circular? Explain why with full proof.
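One way to build intuition before attempting a proof is to integrate the equations of motion numerically. The Python sketch below is our addition (it is not an answer given in the original post): it integrates motion under an inverse-square central force with symplectic Euler steps, so you can plot the resulting path and judge its shape for yourself.

# Units are scaled so that G*M = 1 and the planet's radius R = 1.
# The tangential launch speed sqrt(0.1) is chosen so that the closest
# approach is r = 1 (a "grazing" trajectory); this follows from energy
# and angular momentum conservation and is easy to verify by hand.
GM = 1.0
x, y = 4.0, 0.0              # start at distance 4R from the center
vx, vy = 0.0, 0.1 ** 0.5     # tangential launch
dt = 1e-3

path = []
for _ in range(50_000):
    r3 = (x * x + y * y) ** 1.5
    vx += -GM * x / r3 * dt   # acceleration = -GM * r_vec / |r|^3
    vy += -GM * y / r3 * dt
    x += vx * dt
    y += vy * dt
    path.append((x, y))
# Plotting `path` traces a closed, repeating curve around the planet's
# center; comparing it against conic sections addresses the question
# empirically before you attempt the formal proof.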
<urn:uuid:1eec5fe0-6547-425a-bc78-ecd818d42c5a>
2.921875
133
Q&A Forum
Science & Tech.
50.953205
Mission Type: Flyby Launch Vehicle: 8K78 (no. T103-16) Launch Site: NIIP-5 / launch site 1 Spacecraft Mass: 893.5 kg Spacecraft Instruments: 1) imaging system and 2) magnetometer Spacecraft Dimensions: 3.3 m long and 1.0 m in diameter (4 m across with the solar panels and radiators deployed) Deep Space Chronicle: A Chronology of Deep Space and Planetary Probes 1958-2000, Monographs in Aerospace History No. 24, by Asif A. Siddiqi National Space Science Data Center, http://nssdc.gsfc.nasa.gov/ The second of three Soviet spacecraft intended for the 1962 Mars launch window, Mars 1 was the first spacecraft sent by any nation to fly past Mars. Its primary mission was to photograph the surface. This time the upper stage successfully fired the probe toward Mars, but immediately after engine cutoff, controllers discovered that pressure in one of the nitrogen gas bottles for the spacecraft's attitude-control system had dropped to zero (due to incomplete closure of a valve). On 6 and 7 November 1962, controllers used a backup gyroscope system to keep the solar panels constantly exposed to the Sun during the coast phase, although further midcourse corrections became impossible. Controllers maintained contact with the vehicle until 21 March 1963, when the probe was 106 million kilometers from Earth. Mars 1 eventually silently flew by Mars at a distance of 197,000 kilometers on 19 June 1963. Prior to loss of contact, scientists were able to collect data on interplanetary space (on cosmic-ray intensity, Earth's magnetic fields, ionized gases from the Sun, and meteoroid impact densities) up to a distance of 1.24 AU.
<urn:uuid:53817c6b-e149-4531-b0dd-eea6dbc743b2>
3.671875
369
Knowledge Article
Science & Tech.
55.919265
Seen at the Air Force Space and Missile Museum at Cape Canaveral Air Force Station, Florida — it's a model of the Dynasoar space plane:
Likely the most poorly-named program ever conceived, the Dynasoar (for dynamic soaring) was an early attempt at making a reusable manned space plane — essentially a mini-shuttle, and in some sense a follow-on to the X-15 experimental aircraft. First proposed in 1957, the U. S. Air Force saw this single-seat craft as their way into space — and assigned it a dizzying future array of tasks. Variants were discussed for reconnaissance, long-range weapons delivery, and even in-orbit warfare (the Soviets were planning similar vehicles at the time, so this was hardly unilateral thinking). Ultimately, the program wound down in 1963, victim of an unclear mission, escalating costs, and a hostile political environment — just months away from completion of the first flight-worthy vehicle.
In its early days, Dynasoar was hobbled by the Eisenhower administration's desire to avoid military competition with NASA's mission of manned orbital flight. Once the Kennedy administration was in place, Defense Secretary McNamara ultimately cancelled Dynasoar in favor of military use of the Gemini spacecraft that NASA was then developing (although it too would soon be cancelled).
<urn:uuid:608f1efc-7218-4646-849b-f4f9f32d2d66>
3.25
280
Personal Blog
Science & Tech.
24.512033
There are multiple variations of IF statements: The simple IF construct is used to evaluate a Boolean condition and execute an appropriate set of commands. For instance the following example determines whether the current day of the week is Friday:

IF DATEPART(dw, GETDATE()) = 6
BEGIN
    PRINT 'TGI Friday'
END

We can easily extend the same construct by adding an ELSE clause to take the alternative course of action and calculate the number of days before Friday:

IF DATEPART(dw, GETDATE()) = 6
BEGIN
    PRINT 'TGI Friday'
END
ELSE
BEGIN
    SELECT 'Sorry, Friday is ' + CONVERT(VARCHAR(1), (6 - DATEPART(dw, GETDATE()))) + ' days away'
END

Alternatively we can determine the Boolean value of a condition based on the outcome of a SELECT statement:

IF (SELECT COUNT(*) FROM authors) > 20
    PRINT 'authors table contains more than 20 records'

Note: If you wish to execute multiple statements depending on the value of a condition, be sure to enclose those statements within a BEGIN END block.

Sometimes you have to check for multiple conditions and control your program accordingly. You can do so by combining these conditions with AND and OR within the simple IF construct. The following example demonstrates two conditions combined with the AND operator (a BEGIN END block is added here so that both statements are conditional, as the explanation below intends):

IF (SELECT COUNT(*) FROM authors) > 20
   AND (SELECT COUNT(*) FROM authors WHERE state = 'ca') > 0
BEGIN
    PRINT 'authors table contains more than 20 records'
    SELECT CAST((SELECT COUNT(*) FROM authors WHERE state = 'ca') AS VARCHAR(2)) + ' of them are from california'
END

The previous query will return the number of authors in CA only if the authors table contains any authors from that state and only if the total number of authors is greater than 20. Therefore the AND operator returns true only if both parts of the condition are true. You can also combine conditions with the OR operator, which checks both conditions and returns TRUE if one of them is TRUE or if both of them are true. The OR operator returns FALSE if both parts of the condition are FALSE. Although you can use the OR operator within a simple IF construct it is not recommended to do so. If you have to check for multiple conditions combined with OR logic, it is recommended to use either nested IF statements or the CASE operator.

The IF EXISTS construct checks for the existence of a record fitting a specified criterion, or of any records in the specified table, and takes action accordingly. This is a great approach to take if you don't have to retrieve any values from a table. Since you cannot do any data retrieval with EXISTS, the SELECT statement within the EXISTS clause does not have to specify any column names - the "*" operator will suffice. In fact IF EXISTS will be more efficient than IF SELECT. The following example checks whether there are any authors from KS in the authors table:

IF EXISTS (SELECT * FROM authors WHERE state = 'ks')
    PRINT 'found author(s) from KS'

As we noted earlier, if you have to check multiple conditions you can use nested IF ELSE statements. These should be handled with care and appropriately indented to make the code readable. If you've used any other programming language you'll find T-SQL nested IFs very easy to get used to.
The following example executes appropriate user stored procedures depending on the price of "abc" stocks:

DECLARE @stock_price SMALLMONEY

SELECT @stock_price = stock_price
FROM high_yield_stock
WHERE stock_symbol = 'abc'

IF @stock_price > 10.01
BEGIN
    EXEC usp_alert_for_high_prices
END
ELSE IF @stock_price BETWEEN 5.01 AND 10.00
BEGIN
    EXEC usp_notify_managers
END
ELSE
    EXEC usp_alert_for_price_drop

If the outer condition evaluates to TRUE, the inner conditions will not be checked. This is especially useful if the evaluation criteria require multi-table joins or searches which are resource intensive.

IF NOT (SELECT COUNT(*) FROM sales WHERE ord_date BETWEEN '1/1/95' AND '12/31/95') > 0
BEGIN
    PRINT 'no sales have been recorded in 1995'
END
<urn:uuid:637a87cc-2f87-44da-b4da-484fcca5a7e0>
3.046875
920
Documentation
Software Dev.
32.728608
by Anne E. Egger, Ph.D.
We all see changes in the landscape around us, but your view of how fast things change is probably determined by where you live. If you live near the coast, you see daily, monthly, and yearly changes in the shape of the coastline. Deep in the interior of continents, change is less evident – rivers may flood and change course only every 100 years or so. If you live near an active fault zone or volcano, you experience infrequent but catastrophic events like earthquakes and eruptions. Throughout human history, different groups of people have held to a wide variety of beliefs to explain these changes. Early Greeks ascribed earthquakes to the god Poseidon expressing his wrath, an explanation that accounted for their unpredictability. The Navajo view processes on the surface as interactions between opposite but complementary entities: the sky and the earth. Most 17th century European Christians believed that the earth was essentially unchanged from the time of creation. When naturalists found fossils of marine creatures high in the Alps, many devout believers interpreted the Old Testament literally and suggested that the perched fossils were a result of the biblical Noah's flood.
In the mid-1700's, a Scottish physician named James Hutton began to challenge the literal interpretation of the Bible by making detailed observations of rivers near his home. Every year, these rivers would flood, depositing a thin layer of sediment in the floodplain. It would take many millions of years, reasoned Hutton, to deposit a hundred meters of sediment in this fashion, not just the few weeks allowed by the Biblical flood. Hutton called this the principle of uniformitarianism: processes that occur today are the same ones that occurred in the past to create the landscape and rocks as we see them now. By comparison, the strict biblical interpretation, common at the time, suggested that the processes that had created the landscape were complete and no longer at work.
Figure 1: This image shows how James Hutton first envisioned the rock cycle.
Hutton argued that, in order for uniformitarianism to work over very long periods of time, earth materials had to be constantly recycled. If there were no recycling, mountains would erode (or continents would decay, in Hutton's terms), the sediments would be transported to the sea, and eventually the surface of the earth would be perfectly flat and covered with a thin layer of water. Instead, those sediments once deposited in the sea must be frequently lifted back up to form new mountain ranges. Recycling was a radical departure from the prevailing notion of a largely unchanging earth. As shown in the diagram above, Hutton first conceived of the rock cycle as a process driven by earth's internal heat engine. Heat caused sediments deposited in basins to be converted to rock, heat caused the uplift of mountain ranges, and heat contributed in part to the weathering of rock. While many of Hutton's ideas about the rock cycle were either vague (such as "conversion to rock") or inaccurate (such as heat causing decay), he made the important first step of putting diverse processes together into a simple, coherent theory.
Hutton's ideas were not immediately embraced by the scientific community, largely because he was reluctant to publish.
He was a far better thinker than writer – once he did get into print in 1788, few people were able to make sense of his highly technical and confusing writing. His ideas became far more accessible after his death with the publication of John Playfair's "Illustrations of the Huttonian Theory of the Earth" (1802) and Charles Lyell's "Principles of Geology" (1830). By that time, the scientific revolution in Europe had led to widespread acceptance of the once-radical concept that the earth was constantly changing. A far more complete understanding of the rock cycle developed with the emergence of plate tectonics theory in the 1960's (see our Plate Tectonics I module). Our modern concept of the rock cycle is fundamentally different from Hutton's in a few important aspects: we now largely understand that plate tectonic activity determines how, where, and why uplift occurs, and we know that heat is generated in the interior of the earth through radioactive decay and moved out to the earth's surface through convection. Together, uniformitarianism, plate tectonics, and the rock cycle provide a powerful lens for looking at the earth, allowing scientists to look back into earth history and make predictions about the future.
The rock cycle consists of a series of constant processes through which earth materials change from one form to another over time. As within the water cycle and the carbon cycle, some processes in the rock cycle occur over millions of years and others occur much more rapidly. There is no real beginning or end to the rock cycle, but it is convenient to begin exploring it with magma. You may want to open the rock cycle schematic below and follow along in the sketch.
Figure 2: A schematic sketch of the rock cycle. In this sketch, boxes represent earth materials and arrows represent the processes that transform those materials. The processes are named in bold next to the arrows. The two major sources of energy for the rock cycle are also shown; the sun provides energy for surface processes such as weathering, erosion, and transport, and the earth's internal heat provides energy for processes like subduction, melting, and metamorphism. The complexity of the diagram reflects a real complexity in the rock cycle. Notice that there are many possibilities at any step along the way.
Magma, or molten rock, forms only at certain locations within the earth, mostly along plate boundaries. (It is a common misconception that the entire interior of the earth is molten, but this is not the case. See our Earth Structure module for a more complete explanation.) When magma is allowed to cool, it crystallizes, much the same way that ice crystals develop when water is cooled. We see this process occurring at places like Iceland, where magma erupts out of a volcano and cools on the surface of the earth, forming a rock called basalt on the flanks of the volcano. But most magma never makes it to the surface and it cools within the earth's crust. Deep in the crust below Iceland's surface, the magma that doesn't erupt cools to form gabbro. Rocks that form from cooled magma are called igneous rocks; intrusive igneous rocks if they cool below the surface (like gabbro), extrusive igneous rocks if they cool above (like basalt).
Figure 3: This picture shows a basaltic eruption of Pu'u O'o, on the flanks of the Kilauea volcano in Hawaii.
The red material is molten lava, which turns black as it cools and crystallizes. Rocks like basalt are immediately exposed to the atmosphere and weather. Rocks that form below the earth's surface, like gabbro, must be uplifted and all of the overlying material must be removed through erosion in order for them to be exposed. In either case, as soon as rocks are exposed at the earth's surface, the weathering process begins. Physical and chemical reactions caused by interaction with air, water, and biological organisms cause the rocks to break down. Once rocks are broken down, wind, moving water, and glaciers carry pieces of the rocks away through a process called erosion. Moving water is the most common agent of erosion – the muddy Mississippi, the Amazon, the Hudson, the Rio Grande, all of these rivers carry tons of sediment weathered and eroded from the mountains of their headwaters to the ocean every year. The sediment carried by these rivers is deposited and continually buried in floodplains and deltas. In fact, the U.S. Army Corps of Engineers is kept busy dredging the sediments out of the Mississippi in order to keep shipping lanes open.
Figure 4: Photograph from space of the Mississippi Delta. The brown color shows the river sediments and where they are being deposited in the Gulf of Mexico.
Under natural conditions, the pressure created by the weight of the younger deposits compacts the older, buried sediments. As groundwater moves through these sediments, minerals like calcite and silica precipitate out of the water and coat the sediment grains. These precipitates fill in the pore spaces between grains and act as cement, gluing individual grains together. The compaction and cementation of sediments creates sedimentary rocks like sandstone and shale, which are forming right now in places like the very bottom of the Mississippi delta. Because deposition of sediments often happens in seasonal or annual cycles, we often see layers preserved in sedimentary rocks when they are exposed. In order for us to see sedimentary rocks, however, they need to be uplifted and exposed by erosion. Most uplift happens along plate boundaries where two plates are moving towards each other and causing compression. As a result, we see sedimentary rocks that contain fossils of marine organisms (and therefore must have been deposited on the ocean floor) exposed high up in the Himalaya Mountains – this is where the Indian plate is running into the Eurasian plate.
Figure 5: The Grand Canyon is famous for its exposures of great thicknesses of sedimentary rocks.
If sedimentary rocks or intrusive igneous rocks are not brought to the earth's surface by uplift and erosion, they may experience even deeper burial and be exposed to high temperatures and pressures. As a result, the rocks begin to change. Rocks that have changed below the earth's surface due to exposure to heat, pressure, and hot fluids are called metamorphic rocks. Geologists often refer to metamorphic rocks as "cooked" because they change in much the same way that cake batter changes into a cake when heat is added. Cake batter and cake contain the same ingredients, but they have very different textures, just like sandstone, a sedimentary rock, and quartzite, its metamorphic equivalent. In sandstone, individual sand grains are easily visible and often can even be rubbed off; in quartzite, the edges of the sand grains are no longer visible, and it is a difficult rock to break with a hammer, much less rubbing pieces off with your hands.
Some of the processes within the rock cycle, like volcanic eruptions, happen very rapidly, while others happen very slowly, like the uplift of mountain ranges and weathering of igneous rocks. Importantly, there are multiple pathways through the rock cycle. Any kind of rock can be uplifted and exposed to weathering and erosion; any kind of rock can be buried and metamorphosed. As Hutton correctly theorized, these processes have been occurring for millions and billions of years to create the earth as we see it: a dynamic planet. The rock cycle is not just theoretical; we can see all of these processes occurring at many different locations and at many different scales all over the world. As an example, the Cascade Range in North America illustrates many aspects of the rock cycle within a relatively small area, as shown in the diagram below.
Figure 6: Cross-section through the Cascade Range in Washington state. Image modified from the Cascade Volcano Observatory, USGS.
The Cascade Range in the northwestern United States is located near a convergent plate boundary, where the Juan de Fuca plate, which consists mostly of basalt saturated with ocean water, is being subducted, or pulled underneath, the North American plate. As the plate descends deeper into the earth, heat and pressure increase and the basalt is metamorphosed into a very dense rock called eclogite. All of the ocean water that had been contained within the basalt is released into the overlying rocks, but it is no longer cold ocean water. It too has been heated and contains high concentrations of dissolved minerals, making it highly reactive, or volatile. These volatile fluids lower the melting temperature of the rocks, causing magma to form below the surface of the North American plate near the plate boundary. Some of that magma erupts out of volcanoes like Mt. St. Helens, cooling to form a rock called andesite, and some cools beneath the surface, forming a similar rock called diorite. Storms coming off of the Pacific Ocean cause heavy rainfall in the Cascades, weathering and eroding the andesite. Small streams carry the weathered pieces of the andesite to large rivers like the Columbia and eventually to the Pacific Ocean, where the sediments are deposited. Continual deposition of sediments near the deep oceanic trench results in the formation of sedimentary rocks like sandstone. Eventually, some sandstone is carried down into the subduction zone, and the cycle begins again.
The rock cycle is inextricably linked not only to plate tectonics, but to other earth cycles as well. Weathering, erosion, deposition, and cementation of sediments all require the presence of water, which moves in and out of contact with rocks through the hydrologic cycle; thus weathering happens much more slowly in a dry climate like the desert southwest than in the rainforest (see our The Hydrologic Cycle module for more information). Burial of organic sediments takes carbon out of the atmosphere, part of the long-term geological component of the carbon cycle (see our The Carbon Cycle module); many scientists today are exploring ways we might be able to take advantage of this process and bury additional carbon dioxide produced by the burning of fossil fuels. The uplift of mountain ranges dramatically affects global and local climate by blocking prevailing winds and inducing precipitation.
The interactions between all of these cycles produce the wide variety of dynamic landscapes we see around the globe. Anne E. Egger, Ph.D. "The Rock Cycle: Uniformitarianism and Recycling," Visionlearning Vol. EAS-2 (7), 2005.
<urn:uuid:1d8cdccb-098e-46d2-97a3-50a9d15430c5>
4.0625
2,938
Academic Writing
Science & Tech.
39.073182
RNA, ribonucleic acid, is built up of a nitrogenous base, a ribose sugar, and a phosphate. The bases used are adenine (A), cytosine (C), guanine (G) and uracil (U).
The chemical structure of RNA
There are four major groups of RNA: messenger RNA (mRNA), ribosomal RNA (rRNA), transfer RNA (tRNA) and small, regulatory RNAs (sRNA). mRNA is transcribed from DNA by the enzyme RNA polymerase, and is then used as a template in translation. rRNAs are a major component of the ribosome, the translation machinery. They are divided into the large 50S subunit (23S and 5S) and the small 30S subunit (16S) in prokaryotes. The rRNAs decode the mRNA and interact with tRNAs. The tRNAs are attached to specific amino acids and carry them (with the help of elongation factor Tu) to the ribosome during translation. The sRNAs form a quite recently discovered group of regulatory RNAs that are thought to be of great importance especially during stress, when they bind specifically to their targets and as a consequence affect the expression of genes, either at the level of transcription or translation.
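Transcription, the copying of DNA into mRNA by RNA polymerase, can be illustrated with a toy example. The Python snippet below is our addition: each DNA base on the template strand pairs with its RNA complement, with uracil standing in for thymine.

# Complement rules for transcribing a DNA template strand into mRNA:
# A -> U, T -> A, G -> C, C -> G
DNA_TO_RNA = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_strand):
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # -> "AUGCCA"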
<urn:uuid:f44e2793-140c-4216-818a-97021b96f284>
3.90625
271
Knowledge Article
Science & Tech.
39.421532
Johannes Wilcke invented and then Alessandro Volta perfected the electrophorus over two hundred years ago. This device was quickly adopted by scientists throughout the world because it filled the need for a reliable and easy-to-use source of charge and voltage for experimental researches in electrostatics [Dibner, 1957]. Many old natural philosophy texts contain lithographs of the electrophorus.
A hand-held electrophorus can produce significant amounts of charge conveniently and repeatedly. It is operated by first frictionally charging a flat insulating plate called a "cake". In Volta's day, the cake was made of shellac/resin mixtures or a carnauba wax film deposited on glass. Nowadays, excellent substitutes are available. Teflon™, though a bit expensive, is a good choice because it is an excellent insulator, charges readily, and is easy to clean and maintain. The electrophorus is ideal for generating energetic capacitive sparks required for vapor ignition demonstrations.
The basic operational steps for the electrophorus are depicted in the sequence of diagrams below. Note that the electrode, though making intimate contact with the tribocharged plate, actually charges by induction. No charge is removed from the charged cake and, in principle, the electrode can be charged any number of times by repeating the steps depicted.
Ainslie describes interesting experiments with an electrophorus that was charged in the Springtime and then its charge monitored throughout the summer [Ainslie, 1982]. The apparent disappearance of the charge during humid weather and its reappearance in the Fall must be attributed to changes in the humidity.
The energy for each capacitive spark drawn from the electrophorus is actually supplied by the action of lifting the electrode off the cake. This statement can be confirmed by investigating the strength of the sparks as a function of the height to which the electrode is lifted. Layton makes this point and further demonstrates with a small fluorescent tube the dependence of the electrostatic potential on the position of the electrode [Layton, 1991]. Lifting the electrode higher gives stronger sparks [Lapp, 1992].
The electrophorus works most reliably if the charged insulating plate rests atop a grounded plane, such as a metal sheet, foil, or conductive plastic. [See Bakken Museum booklet, pp. 78-80.] The ground plane limits the potential as the electrode is first lifted from the plate, thus preventing a premature brush discharge. In dry weather, powerful 3/4" (2 cm) sparks can be drawn easily from a 6" (15 cm) diameter, polished, nick-free aluminum electrode. Estimating the potential of the electrode at V = ~50 kV and the capacitance at C = ~20 pF, we get Q = CV = ~1 microCoulomb for the charge and Ue = CV2/2 = ~30 milliJoules for the capacitive energy. This energy value easily exceeds the minimum ignition energy (MIE) of most flammable vapors.
The web site of the world-famous Exploratorium in San Francisco describes a simple electrophorus made of aluminum pie plates and other inexpensive materials. Young scientists should check out this page.
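The back-of-the-envelope numbers above are easy to reproduce. Here is a short Python check (our addition) of the charge and spark-energy estimates for the quoted ~50 kV and ~20 pF:

V = 50e3    # estimated electrode potential, volts
C = 20e-12  # estimated capacitance, farads

Q = C * V            # charge on the electrode, Q = CV
U = 0.5 * C * V**2   # stored capacitive energy, Ue = CV2/2

print(Q)  # 1e-06 C, i.e. ~1 microCoulomb
print(U)  # 0.025 J, i.e. on the order of the quoted ~30 milliJoules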
In addition, the library references below contain interesting information about the electrophorus and other electrostatics demonstrations. One example is the cylindrical electrophorus [Ainslie, 1980].

A simple leaf electroscope attachment, shown in the figure below, makes it very easy to reveal some of the important charging and charge-redistribution phenomena of the electrophorus. This accessory is especially handy because it works even on warm, humid days when large, impressive sparks cannot be coaxed out of the electrophorus. Refer to the electroscope page for details on how to make this convenient accessory. The electroscope is operated in the same way as before, but now it reveals information about the charge and its distribution on the electrode. In particular, note that as the electrophorus is lifted up, its charge does not change. The leaves of the electroscope spread apart because the constant charge on the electrode redistributes itself, with about half of the charge moving to the top surface. Another thing to notice is that the leaves, which spread to a wide angle when the electrode is first lifted, slowly come back together with time, indicating the leakage of electric charge, presumably due to corona discharge from the edges of the leaves.

Corona discharge accessory

Another simple accessory is a corona discharge point that can be attached to the electrophorus. The attachment is a metal rod of diameter 1/16" or greater with one end sharpened to a point. When the charged electrode is lifted, the electric field at the sharpened tip exceeds the corona limit and a local discharge starts, dissipating the charge on the electrophorus. If one listens closely as the electrode is lifted, a soft, varied-pitch buzzing noise lasting just a few seconds may be heard. This is the corona, and it stops after the voltage has been reduced below the corona threshold. Passive corona discharge points are used widely in manufacturing to dissipate unwanted static charge. The corona discharge can be largely suppressed by covering the sharpened point with a small piece of antistatic plastic foam of the type used for packaging ESD-sensitive electronic components. The figure below shows how this scheme -- called resistive grading -- works to reduce or stop corona discharges.

References

D.S. Ainslie, "Inversion of electrostatic charges in a cylindrical electrophorus," Physics Teacher, vol. 18, no. 7, October 1980, p. 530.
D.S. Ainslie, "Can an electrophorus lose its charge and then recharge itself?," Physics Teacher, vol. 20, no. 4, April 1982, p. 254.
Bakken Library and Museum, Sparks and Shocks, Kendall/Hunt Publishing Co., Dubuque, IA, 1996, pp. 53-55.
B. Dibner, Early Electrical Machines, pub. #14, Burndy Library, Norwalk, CT, 1957, pp. 50-53.
R.A. Ford, Homemade Lightning: Creative Experiments in Electricity (2nd ed.), TAB Books (McGraw-Hill), New York, 1996, chapter 10.
O.D. Jefimenko, "Long-lasting electrization and electrets," in Electrostatics and Its Applications (A.D. Moore, ed.), Wiley-Interscience, New York, 1973, pp. 117-118.
D.R. Lapp, "Letters," Physics Teacher, November 1992, p. 454.
W. Layton, "A different light on an old electrostatic demonstration," Physics Teacher, vol. 29, no. 1, January 1991, pp. 50-51.
K.L. Ostlund and M.A. Dispezio, "Static electricity dynamically explored," Science Scope, February 1996, pp. 12-16.
<urn:uuid:c009257d-7858-4c03-97a8-b9e17e05b736>
3.640625
1,544
Tutorial
Science & Tech.
43.789977
GABA A receptor

The GABA(A) receptor is a multimeric transmembrane receptor that sits in the membrane of its neuron. Once bound to its ligand, the receptor protein changes conformation within the membrane. The protein is configured in such a way as to allow certain ions to pass through its pore when the pore is open. The ligand GABA is the endogenous compound that signals this receptor to open, allowing chloride ions (Cl-) to pass down their electrochemical gradient. Because the chloride ion concentration is higher outside the cell, opening of the channel pore results in an influx of chloride into the cell, thus hyperpolarizing it.

Other ligands interact with the GABA(A) receptor to mimic GABA or to potentiate its response. Such ligands include the benzodiazepines (which increase pore-opening frequency), barbiturates (which increase pore-opening duration), and certain steroids. Still other compounds interact with the GABA(A) receptor to attenuate the effects of GABA; such blocking agents include flumazenil (a competitive benzodiazepine antagonist) and picrotoxin, which blocks the channel directly.

The phenotypic response to all of these interactions is seen in effects such as muscle relaxation, sedation, anticonvulsion, and anesthesia, depending on the location of the cell in question, its intracellular second-messenger milieu, and the dose of the ligand at the receptor; the dosage issue is commonly related to the amount of exogenous drug that is delivered to the patient (e.g., anesthesia during surgery).

The contents of this article are licensed from www.wikipedia.org under the GNU Free Documentation License.
<urn:uuid:9474f750-6893-454a-97ac-d40a464838cf>
3.75
365
Knowledge Article
Science & Tech.
25.149776
Telescopium, Indus, and Pavo - Downloadable article
Galaxies galore populate this trio of southern constellations.
March 3, 2009

This downloadable article is from an Astronomy magazine 45-article series called "Celestial Portraits." The collection highlights all 88 constellations in the sky and explains how to observe each constellation's deep-sky targets. The articles feature star charts, stunning pictures, and constellation mythology. We've put together 11 digital packages. Each one contains four Celestial Portraits articles for you to purchase and download.

"Telescopium, Indus, and Pavo" is one of four articles included in Celestial Portraits Package 4.

As the cooler air of autumn descends across the Northern Hemisphere, the splendors of the summer sky sink in the west. Sagittarius and the center of the Milky Way dip below the horizon by midevening, yielding to a rather sparse region where star patterns are difficult to trace and galaxies prevail. The southernmost of the constellations east of the Milky Way rank among the most obscure in the entire heavens. From the northern United States, only the top stars in Telescopium and Indus poke above the southern horizon, while Pavo remains completely hidden. Most of this area comes into view from the southern tier of states, though the vista improves markedly from locales even farther south.

A small triangle of modest stars south of Corona Australis forms the shape of Telescopium the Telescope. Only Alpha (α) Telescopii, a yellowish star located 250 light-years from Earth, shines brighter than magnitude 4.0.

To read the complete article, purchase and download Celestial Portraits Package 4.

Deep-sky objects in Telescopium, Indus, and Pavo: IC 4662, NGC 6684, NGC 6744, NGC 6752, NGC 6810, IC 4889, Dunlop 227, NGC 6868, NGC 6876, Abell 3716, NGC 7020, NGC 7041, NGC 7049, Theta Indi, NGC 7090, Y Pavonis, IC 5152
<urn:uuid:8c98cbd5-d496-4c28-989a-afc0d6c2f9ea>
2.953125
453
Truncated
Science & Tech.
49.291445
Solenoids produce magnetic fields that are relatively intense for the amount of current they carry. To make a direct comparison, consider a solenoid with 55 turns per centimeter, a radius of 1.25 cm, and a current of 0.170 A. (a) Find the magnetic field at the center of the solenoid. (b) What current must a long, straight wire carry to have the same magnetic field as that found in part (a)? Let the distance from the wire be the same as the radius of the solenoid, 1.25 cm.
Not sure how to do this one. Maybe I use the turns to find the force, then use F = BIL sin(θ)? Any help is appreciated.
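One way to check this: no force equation is needed, only the standard field formulas B = μ0·n·I for a long solenoid and B = μ0·I/(2πr) for a straight wire. A minimal Python sketch of the arithmetic:

```python
import math

mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T*m/A

n = 55 * 100      # turns per meter (55 turns per cm)
I = 0.170         # solenoid current, A
r = 1.25e-2       # distance from the wire, m (= solenoid radius)

# (a) Field at the center of a long solenoid: B = mu0 * n * I
B = mu0 * n * I
print(f"B = {B * 1e3:.3f} mT")        # ~1.17 mT

# (b) Straight wire producing the same field at distance r: B = mu0 * I_wire / (2*pi*r)
I_wire = B * 2 * math.pi * r / mu0
print(f"I_wire = {I_wire:.1f} A")     # ~73.5 A -- far more current than the solenoid needs
```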
<urn:uuid:41425225-25cf-43c5-9a50-529c0f09245d>
3.484375
158
Q&A Forum
Science & Tech.
75.352751
GNU is a project to create a free, GPL-licensed operating system that was started by Richard Stallman, the creator of EMACS. The GNU project is now overseen by the Free Software Foundation, which Richard Stallman founded.
GNU is a recursive acronym, and it stands for GNU's Not Unix. This, no doubt, is because of Richard Stallman's grounding in Lisp. The GNU Project has created a great many software packages, including gcc, gdb, sed, glibc, make, awk, find, and a good deal more besides. These packages make up a major portion of every Linux distribution. The development of the GNU operating system, the Hurd, continues.
<urn:uuid:0a2c6d37-67ce-4143-9f6b-60fe5ca7a85d>
2.984375
140
Knowledge Article
Software Dev.
58.242236
From Physics Research Archive - Page 4

The Physics Classroom: Total Internal Reflection - Sep 16, 2010
The optical fiber in the photo above doesn't just guide the beam--the fiber produces the beam. Instead of a tube of helium and neon gas, or a piece of ruby, the "active medium" of this laser is added to the glass in the fiber. Since the mirrors are just the polished ends of the fiber, there is nothing to go out of alignment, and maintenance is easy.

Network Theory: A Key to Unraveling How Nature Works - Sep 1, 2010
You are looking at a network diagram that shows the interconnectedness of the world economy. To learn more about this network, visit Mapping the World Economy.

Making a supersonic jet in your kitchen - Aug 16, 2010
What exactly happens when an object makes a splash in water? The disk shown above was pulled into water in a reproducible way to investigate the splash.

The Real Sea Monsters: On the Hunt for Rogue Waves - Aug 1, 2010
This "rogue wave" broke over the deck of an oil tanker and was much taller than the other waves on the ocean at the time. See Freak Waves, Rogue Waves for graphs of rogue waves building up in the ocean, and for the measurement of one that struck an oil platform in the North Sea.

From Soap Bubbles to Technology - Jul 16, 2010
The soap film you see here, made between two metal rings, is called a catenoid, and it uses the minimum area to enclose a given volume. Click on the image to see another example of a "minimal surface" soap film.

About Dust - Jul 1, 2010
This satellite image shows a recent dust storm in China that was so large it spread to neighboring countries. For more on this storm, see this Time magazine article and also About Dust.

Shock Diamonds and Mach Disks - Jun 16, 2010
When the speed of the gases in a jet or rocket exhaust exceeds the speed of sound, a dazzling pattern results called "shock diamonds" or "Mach disks," as shown in this photo of the SR-71 Blackbird. The diamonds are created by crisscrossing shock waves in the exhaust.

Stellar Evolution - Jun 1, 2010
When the Sun reaches the end of its life, its outer layers will drift into space, an intricate cloud illuminated by its hot, dense core, as in this false-color image of a planetary nebula and white dwarf. Image credit: NASA, ESA, H. Bond (STScI), R. Ciardullo (Penn State), and the Hubble Heritage Team (AURA/STScI). For more details, see this page on the death of solar-mass stars.

Perspectives on Plasmas - May 16, 2010
This is a ball of plasma, created by discharging electricity into a solution. See the image source for more on how the image was made.

Properties of Volcanic Ash - May 1, 2010
Why were so many European airports closed due to the volcano? The image above of one volcanic ash particle begins to tell us why: the extremely small particles, with their many voids, can travel great distances after eruption. Once inside a jet engine, they melt and then re-solidify. Read Properties of Volcanic Ash for more details. You can learn about the specific dangers of flying through volcanic ash here.
<urn:uuid:61a1c128-6043-43d5-acf4-3f35e7ec4ab0>
2.703125
722
Content Listing
Science & Tech.
55.684473
In this two-week experiment, you will learn how to use an ion-exchange column and how to carry out an acid-base titration using an indicator. You will then apply these skills to determine the total concentration of cations in a sample of seawater. This week you will concentrate on understanding the chemistry of ion exchange and will estimate the capacity of an ion-exchange column from your observations.
1. Which ions are attached to the resin at the start of a given experiment?
2. Which ions are becoming attached to the resin, and which are coming off, during each experiment?
3. How many equivalents of H+ will be replaced by the charging solution?
4. What will happen when the column reaches its capacity?
Trustees of Dartmouth College, Copyright 1997-2003
<urn:uuid:fe8bd937-b24a-4a54-bb26-43a48a944a00>
3.453125
178
Tutorial
Science & Tech.
52.824545
Rhenium is a rare, silvery-white metallic element. Its atomic number is 75 and its symbol is Re. Rhenium was discovered in 1925 by a team of German scientists: Walter Noddack, Ida Tacke-Noddack, and Otto Berg. They discovered rhenium as a trace element in platinum ores and the mineral columbite. It is very dense and has a melting temperature of 3,186 degrees Celsius (5,767 degrees Fahrenheit). It is not known to have any health benefit for animals or plants.

Rhenium does not form minerals of its own, but it does occur as a trace element in columbite, tantalite and molybdenite. These minerals are the principal sources of columbium (commonly called niobium), tantalum and molybdenum metals. Rhenium is a very rare element that is produced principally as a by-product of the processing of porphyry copper-molybdenum ores. Because it is scarce, very little rhenium is actually processed and isolated each year as compared to the millions of tons of copper and millions of pounds of molybdenum that are extracted from these same porphyry copper deposits. As a result, the processing of rhenium poses no environmental threat. The equipment that reduces sulfur dioxide in these processing plants also removes any rhenium that may escape through the smokestacks.

Previous Element: Tungsten | Next Element: Osmium

Physical and chemical properties:
Phase at Room Temp.: solid
Melting Point (K): 3453.2
Boiling Point (K): 5923
Heat of Fusion (kJ/mol): 33.054
Heat of Vaporization (kJ/mol): 707
Heat of Atomization (kJ/mol): 770
Thermal Conductivity (J/m·sec·K): 48
Electrical Conductivity (1/mohm·cm): 51.813
Number of Isotopes: 45 (2 natural)
Electron Affinity (kJ/mol): 14
First Ionization Energy (kJ/mol): 760
Second Ionization Energy (kJ/mol): ---
Third Ionization Energy (kJ/mol): ---
Atomic Volume (cm³/mol): 8.9
Atomic Radius (pm): 137
Ionic Radius 2-, 1-, 1+, 2+, 3+ (pm): ---
Common Oxidation Numbers: +4
Other Oxidation Numbers: -3, -1, +1, +2, +3, +5, +6, +7
In Earth's Crust (mg/kg): 7.0×10⁻⁴
In Earth's Ocean (mg/L): 4.0×10⁻⁶
In Human Body (%): 0%

Regulatory / Health:
OSHA Permissible Exposure Limit (PEL): no limits
OSHA PEL Vacated 1989: no limits
NIOSH Recommended Exposure Limit (REL): no limits

Sources: University of Wisconsin General Chemistry; Mineral Information Institute; Jefferson Accelerator Laboratory

Rhenium was named after Rhenus, the Latin name for the Rhine River. Rhenium is obtained almost exclusively as a by-product of the processing of a special type of copper deposit known as a porphyry copper deposit. Specifically, it is obtained from the processing of the mineral molybdenite (a molybdenum ore) that is found in porphyry copper deposits. A porphyry copper deposit is a valuable copper-rich deposit in which copper minerals occur throughout the rock. The copper in these deposits occurs as primary chalcopyrite (CuFeS2) or the important secondary copper mineral chalcocite (Cu2S).

The identified rhenium resources in the United States are estimated to total 5 million kilograms. These resources are found in the southwestern United States. The identified rhenium resources in the rest of the world are estimated to total 6 million kilograms. Countries producing rhenium include Armenia, Canada, Chile, Kazakhstan, Mexico, Peru, Russia, and Uzbekistan. Even though the United States has significant rhenium resources, the majority of the rhenium consumed in the U.S. is imported.
Chile and Kazakhstan provide the majority of the imported rhenium. The rest is imported from Mexico and other nations.

Because of its very high melting point, rhenium is used to make high-temperature alloys (an alloy is a mixture of metals) that are used in jet engine parts. It is also used to make strong alloys of nickel-based metals. Rhenium alloys are used to make a variety of equipment and equipment parts, such as temperature controls, heating elements, mass spectrographs, electrical contacts, electromagnets, and semiconductors. An alloy of rhenium and molybdenum is a superconductor of electricity at very low temperatures. These superalloys account for the majority of the rhenium used each year. Rhenium is also used in the petroleum industry to make lead-free gasoline. In this application, rhenium compounds act as catalysts. (A catalyst is a chemical compound that takes part in a chemical reaction and can often make the reaction proceed more quickly, but the chemical is not consumed in the reaction.)

Substitutes and Alternative Sources

Substitutes for rhenium as a catalyst are being researched. Iridium and tin have been found to be good catalysts for at least one reaction. Cobalt, tungsten, platinum and tantalum can be used in some of the other applications for rhenium.

- Common Minerals and Their Uses, Mineral Information Institute.
- More than 170 Mineral Photographs, Mineral Information Institute.

Disclaimer: This article is taken wholly from, or contains information that was originally published by, the Mineral Information Institute. Topic editors and authors for the Encyclopedia of Earth may have edited its content or added new information. The use of information from the Mineral Information Institute should not be construed as support for or endorsement by that organization for any new information added by EoE personnel, or for any editing of the original content.
<urn:uuid:c664e83e-de9f-4793-8ec6-412b4f0d26e6>
4.21875
1,404
Knowledge Article
Science & Tech.
37.044087
Interrupt the execution of an expression and allow the inspection of the environment where browser was called from.

browser(text = "", condition = NULL, expr = TRUE, skipCalls = 0L)

Arguments:
- text: a text string that can be retrieved once the browser is invoked.
- condition: a condition that can be retrieved once the browser is invoked.
- expr: an expression; if it evaluates to TRUE the debugger will be invoked, otherwise control is returned directly.
- skipCalls: how many previous calls to skip when reporting the calling context.

A call to browser can be included in the body of a function. When reached, this causes a pause in the execution of the current expression and allows access to the R interpreter.

The purpose of the text and condition arguments is to allow helper programs (e.g. external debuggers) to insert specific values here, so that the specific call to browser (perhaps its location in a source file) can be identified and special processing can be achieved. The values can be retrieved by calling browserText and browserCondition.

The purpose of the expr argument is to allow for the illusion of conditional debugging. It is an illusion, because execution is always paused at the call to browser, but control is only passed to the evaluator described below if expr evaluates to TRUE. In most cases it is going to be more efficient to use an if statement in the calling program, but in some cases using this argument will be simpler.

The skipCalls argument should be used when the browser() call is nested within another debugging function: it will look further up the call stack to report its location.

At the browser prompt the user can enter commands or R expressions, followed by a newline. The commands are:
- c (or just an empty line, by default): exit the browser and continue execution at the next statement.
- cont: synonym for c.
- n: enter the step-through debugger if the function is interpreted. This changes the meaning of c: see the documentation for debug. For byte-compiled functions n is equivalent to c.
- where: print a stack trace of all active function calls.
- Q: exit the browser and the current evaluation and return to the top-level prompt.
(Leading and trailing whitespace is ignored, except for an empty line.)

Anything else entered at the browser prompt is interpreted as an R expression to be evaluated in the calling environment: in particular, typing an object name will cause the object to be printed, and ls() lists the objects in the calling frame. (If you want to look at an object with a name such as n, print it explicitly.)

The number of lines printed for the deparsed call can be limited by setting options(deparse.max.lines). Setting the option browserNLdisabled to TRUE disables the use of an empty line as a synonym for c. If this is done, the user will be re-prompted for input until a valid command or an expression is entered.

This is a primitive function but does argument matching in the standard way.

References:
Becker, R. A., Chambers, J. M. and Wilks, A. R. (1988) The New S Language. Wadsworth & Brooks/Cole.
Chambers, J. M. (1998) Programming with Data: A Guide to the S Language. Springer.

Documentation reproduced from R 2.15.3. License: GPL-2.
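A minimal usage sketch (the function f and its values are hypothetical, chosen only to illustrate the expr argument):

```r
# A function with a conditional pause: control is handed to the browser only when x < 0.
f <- function(x) {
  y <- sqrt(abs(x))
  browser(text = "checking y", expr = x < 0)  # passes control only if x < 0
  y * 2
}

f(4)   # runs straight through
f(-4)  # drops to the Browse prompt; try ls(), y, where, n, then c to continue
```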
<urn:uuid:a0f6a31d-26d9-465a-85af-fcef7ca88935>
3.84375
690
Documentation
Software Dev.
58.462505
Engineering and Environmental Challenges: Technical Symposium on Earth Systems Engineering

The term Lupang Pangako means promised land—the sardonic name given to a garbage dump outside the city of Manila inhabited by almost 100,000 people. I visited Lupang Pangako about 15 years ago in a different life as a geologist, and the place really is hell on earth. As you drive through the Promised Land, you see stygian mists rising from the hillsides, the mountains of garbage, and if you look closely you see movement everywhere in the distance. You soon realize that the mountains are covered with people scavenging for their livelihoods. You may remember that in July 2000 torrential typhoon rains caused a huge landslide in the Promised Land that buried more than 200 people under a mountain of garbage. To me, this horrific event provides a powerful indicator of how we should be thinking about the impacts of climate on people and about human adaptation. The problem was not whether the typhoon was an above-average or below-average event. It was not a problem whose root causes could be revealed through a better understanding of anthropogenic climate change. The problem was that 100,000 people were living in poverty so deep that they could survive only by culling garbage.

The results of humanity's mistreatment of the environment fall disproportionately on poor people, on developing countries, and on tropical regions. Although these impacts are most severe in their chronic forms, they are most spectacular in their catastrophic versions, such as this landslide. As Figure 1 shows, the number of disasters has risen sharply throughout the world in the last 30 years, most markedly in the developing world. This trend does not reflect a changing climate; it reflects changing demographics: growing numbers of poor people living in urban areas, living in coastal regions, living on garbage dumps. Unlike changes in climate, this trend is something we can control. These are not natural disasters; these are intersections of natural phenomena and complex sociopolitical and socioeconomic processes. The number of disasters will continue to rise because we know that demographic trends are pointing toward more urbanization and greater numbers of impoverished people moving from agrarian areas to cities, often to areas in harm's way. Megacities like Jakarta and Manila, each with nearly 10 million people, are subject to typhoons, volcanoes, earthquakes, landslides, epidemics, and floods, for example.

Because generating more knowledge on climate dynamics cannot help us in the short term, it is worth talking not just about the behavior of the climate and our capacity to modify it by reducing greenhouse gas emissions, but also about the interactions of social systems with climate and the engineered systems that sustain human beings. These systems are not sensitive to emissions of carbon dioxide but are very sensitive to demographic and socioeconomic trends. We have much less control over the future behavior of the climate than we do over the behavior of human beings. Given the complexity of these interdependent systems, the practical challenge is to learn to operate in ways that minimize our impact on the planet and maximize our resilience in the face of unpredictable events and the ever-changing
<urn:uuid:7084b5bd-239a-410e-b73d-f1d0e7c57a3d>
3.453125
672
Truncated
Science & Tech.
26.027676
The Plasma Spray – Physical Vapor Deposition (PS-PVD) rig at NASA's Glenn Research Center uses new technology to create super-thin ceramic coatings. Here, Bryan Harder, the lead for the PS-PVD, installs a sample in the rig. Image Credit: NASA

Turbines, or rotary engines that create power, have a multitude of uses. They are used in machines that perform work on Earth and are essential components of airplanes. Currently, most turbines are built using metallic components, and these metal components require cooling to avoid reaching their thermal limits. New, more efficient engine technology requires components that can survive higher temperatures with reduced cooling. Silicon-based ceramic components show great potential for use in advanced, higher-efficiency engines, as they are capable of withstanding higher temperatures and weigh less than metal components. However, when unprotected, these silicon-based ceramic components react and erode in turbine engine environments due to the presence of water vapor.

New coating-processing technology is being pioneered at NASA's Glenn Research Center in Cleveland. The technology is used to protect advanced silicon-based ceramic engine components that are being developed for future engines. It will enable more complex and thinner coatings than are currently possible. This is important for coating turbine blades, which need to endure engine environments and stress conditions while still remaining smooth to avoid disrupting airflow. This coating-processing technology, called Plasma Spray – Physical Vapor Deposition (PS-PVD), has the potential to radically improve the capabilities of ceramic composite turbine components.

"PS-PVD technology is really necessary for the integration of silicon-based ceramic airfoil components into turbine engines. The use of these silicon-based ceramics as engine airfoil components would increase engine operation temperature, which translates into higher efficiencies," says Bryan Harder, the lead for the PS-PVD Facility at Glenn.

Plasma Spray – Physical Vapor Deposition

The PS-PVD rig uses a system of vacuum pumps and a blower to remove air from the chamber, reducing the pressure to one Torr (1/760th of normal atmospheric pressure). Image Credit: NASA

It has been known for decades that enveloping metals and other substances, such as silicon-based ceramic components, with a ceramic coating can protect them. But there is new, cutting-edge technology that can create ceramic coatings in an extremely precise, uniform fashion—the coatings can be controlled to a thickness of ten microns (a micron is one-millionth of a meter). This technology is made possible by Glenn's Plasma Spray – Physical Vapor Deposition (PS-PVD) Facility.

The Plasma Spray – Physical Vapor Deposition (PS-PVD) Coater was completed at Glenn in 2010. Created in collaboration with Sulzer Metco, the PS-PVD rig is one of only two such facilities in the U.S.A. and one of four in the entire world. The PS-PVD rig, which is currently a research and development facility, uses a state-of-the-art processing method for creating thin ceramic coatings. Planning for the facility began in 2007, and construction began in 2008 (previously constructed infrastructure was reused and is now the base for the new rig). The rig is nearing completion of its capabilities testing and assessment phase. A team of five, led by Bryan Harder, a materials research engineer, has put the rig through its paces.
The rig will soon begin supporting the Supersonic Project within NASA's Aeronautics Research Mission Directorate at Glenn. Eventually, the rig could be of service to many other areas and projects within Glenn, other NASA centers and governmental entities, and private industry partners. "When you have something that has broad capabilities like this, it really allows us to work with a lot of different areas, which is a great thing," says Bryan Harder.

Super-Thin Ceramic Coatings

Ceramic powder is pumped into the PS-PVD rig. It will be transformed inside the chamber to become a thin, precise, accurate ceramic coating. Image Credit: NASA

The Plasma Spray – Physical Vapor Deposition (PS-PVD) rig creates thin, extremely precise ceramic coatings. These coatings are created on metal, ceramic, or other appropriate materials. "To create these coatings, ceramic powder is injected into a very high power plasma flame under a vacuum. During operation, the plasma is approximately 7 feet long and 3 feet wide. The ceramic material is vaporized within the plasma, and condenses onto the target component," says Bryan Harder. The coatings can be single or multilayer, and they protect the components from environmental and thermal impact. The extremely high heat and the vacuum within the chamber allow the ceramic coating to be precisely applied, creating durable, long-lasting, effective coatings. "If you can reduce the thickness, and still provide an effective barrier layer — you can reduce the weight, you can reduce your cost. There are a lot of benefits that come from this technology," Harder says.

Inside the Chamber

Within the PS-PVD, an extremely hot plasma flame is created. The plasma can reach a temperature of 10,000 degrees Celsius — ten times hotter than a candle flame. Image Credit: NASA

Located at Glenn, the Plasma Spray – Physical Vapor Deposition (PS-PVD) rig is installed in a dedicated room. The large, blimp-shaped chamber is made of stainless steel. The exterior metal, which is welded to a second sheet of stainless steel beneath, has cool water pumped through it to keep the chamber from getting too warm. Inside the chamber is a steel arm which holds a plate made of a nickel-based superalloy. This plate holds the component that will be coated. Several feet away from this plate is the torch, where the ceramic powder is injected into the plasma.

Once the chamber is closed, a system of vacuum pumps and a blower removes air from the chamber, reducing the pressure to one Torr (1/760th of normal atmospheric pressure). Then, helium and argon gases are introduced to the torch. An arc is created between the anode and cathode inside the chamber, ionizing the gases and creating the high-temperature plasma. The plasma, which can grow to seven feet in length, can be observed through one of three portals on the side of the rig. Its steady, fierce, concentrated glow resembles a lightsaber from the Star Wars movies.

Once the vacuum and plasma are stable, the ceramic powder is introduced to the torch. The plasma immediately begins to change colors. Depending on which ceramic powder is introduced, the plasma dramatically erupts into oranges, yellows, aquas, purples and blues. The gas stream moves at a speed of Mach 2 — a rate of more than 2,000 feet per second. As the ceramic powder and the plasma blast the arm and plate where the component being coated is attached, the plasma appears to envelop the component and splash around it.
The plasma, which had appeared like a lightsaber, seems to morph into the effect of the undulating stream of magic that occurs when Harry Potter's wand meets Lord Voldemort's wand in the Harry Potter movies.

Inside the PS-PVD, ceramic powder is introduced into the plasma flame. The plasma vaporizes the ceramic powder, which then condenses to form the ceramic coating. Image Credit: NASA

The entire process is over in about five minutes. The plasma is extinguished and the exhaust system clears the chamber. The pressure is returned to normal atmospheric conditions, and then the chamber can be opened. The newly coated component glows red hot and must cool down for an hour before it can be handled. The plasma within the chamber can reach a scorching 10,000 degrees Celsius — ten times hotter than a candle flame. After the sample cools, it will be tested and evaluated to ensure the coating is an effective barrier. And then the sample — be it a small test button or an essential component of a supersonic aircraft — is ready to go. The front, sides and inside of the sample can be coated — a capability never previously available from vapor deposition techniques. "The PS-PVD allows us to do things that you can't do anywhere else," Harder says.

This newly developed technology could have myriad applications, both within NASA and with potential industry partners. The potential applications are only beginning to be discovered — from membrane technology to fuel cells to ion conductors and beyond. The rig is a game-changing technology; Glenn is maturing and developing a technology that doesn't exist elsewhere, while making direct contributions to the NASA mission. "This is new ground," Bryan Harder says. "This was only developed in the last couple of years… and we don't even know the limits of what it [PS-PVD] is capable of."

-Tori Woods, SGT Inc.
NASA's Glenn Research Center
<urn:uuid:f1431e61-97ca-4d5d-8075-51f535c02fe1>
4.15625
1,834
Knowledge Article
Science & Tech.
41.229169
CR-39 is transparent in the visible spectrum and almost completely opaque in the ultraviolet range. It has high abrasion resistance; in fact, it has the highest abrasion/scratch resistance of any uncoated optical plastic. CR-39 is about half the weight of glass, with an index of refraction only slightly lower than that of crown glass, making it an advantageous material for eyeglass and sunglass lenses. A wide range of colors can be achieved by dyeing the surface or the bulk of the material. CR-39 is also resistant to most solvents and other chemicals, to gamma radiation, to aging, and to material fatigue. It can withstand the small hot sparks from welding. It can be used continuously at temperatures up to 100 °C and for up to one hour at 130 °C.

In the radiation detection application, raw CR-39 material is exposed to proton recoils caused by incident neutrons. The proton recoils cause tracks, which are enlarged by an etching process in a caustic solution of sodium hydroxide. The enlarged tracks are counted under a microscope (commonly at 200x magnification), and the number of tracks is proportional to the amount of incident neutron radiation.
<urn:uuid:7294914f-5629-4204-9346-748aee0f98cf>
2.9375
314
Knowledge Article
Science & Tech.
45.809195
The basic forces in nature
Contemporary Physics Education Project

The interactions in the Universe are governed by four forces (strong, weak, electromagnetic and gravitational). Physicists are trying to find one theory that would describe all the forces in nature as a single law. So far they have succeeded in producing a single theory that describes the weak and electromagnetic forces (called the electroweak force). The strong and gravitational forces are not yet described by this theory.

Table courtesy of the University of Guelph, Guelph, Ontario (Canada)

You might also be interested in:

The neutrino is an extremely light particle. It has no electric charge. The neutrino interacts through the weak force. For this reason and because it is electrically neutral, neutrino interactions with...more

Some ideas are used throughout the sciences. They are "tools" that can help us solve puzzles in different fields of science. These "tools" include units of measurement, mathematical formulas, and graphs....more

Mechanics is the term used to refer to one of the main branches of the science of physics. Mechanics deals with the motion of and the forces that act upon physical objects. We need precise terminology...more

The interactions in the Universe are governed by four forces (strong, weak, electromagnetic and gravitational). Physicists are trying to find one theory that would describe all the forces in nature as...more

When the temperature in the core of a star reaches 100 million degrees Kelvin, fusion of Helium into Carbon occurs. Oxygen is also formed from fusion of Carbon and Helium together when the temperature is...more

A plot of the binding energy per nucleon vs. atomic mass shows a peak at atomic mass 56 (Iron). Elements with atomic mass less than 56 release energy if formed as a result of a fusion reaction. Above this...more

There are several experiments where nuclear fusion reactions have been achieved in a controlled manner (that means no bombs are involved!!). The two main approaches that are being explored are magnetic...more
<urn:uuid:f6b8e300-5420-4e46-9c29-9b2d695a94b5>
3.484375
475
Content Listing
Science & Tech.
45.407259
First we have the monarch butterflies. These incredible creatures spend their summer days in the northern parts and then migrate (leave for another place) to the south for the winter. They travel thousands of miles, some almost 2,900 km from Canada to Mexico. Just looking at a map, you can see how far Canada is from Mexico, but these little butterflies fly all the way to protect themselves from the cold winters. Traveling so much certainly does tire them out, which is why some of them cannot make the return trip. These butterflies have never been to these foreign places, but they still make the trip successfully. How do they do it?
A map showing the monarch butterfly migration. They travel from Canada to Mexico.
Then there are the green sea turtles (scientific name Chelonia mydas), which swim eastward for months and months to migrate from the seas off Brazil in South America to an island called Ascension Island, about 3,200 km away. Apparently these turtles were hatched on this island, and after they grow up near South America, they return to their birthplace, Ascension Island, to hatch their own eggs.
Another example is that of some crabs that are willing to walk about 240 km from deep water to shallow water, just to lay their eggs.
These creatures have some kind of inborn compass or instinct that tells them where to go and when is the right time for migration. No one has taught them this, and most of them have never been to the new place, but they still manage to migrate safely. Scientists, even today, cannot give good, complete explanations for these migrations. It probably is just one of nature's powers and mysteries.....
<urn:uuid:dbcddb3e-9989-4206-871b-4ac752667ccb>
3.65625
381
Personal Blog
Science & Tech.
61.48327
From Ajax Patterns

Foundational Technology Patterns

These patterns are the building blocks of Ajax applications. They are more "reference patterns" than true "design patterns", at least from the perspective of a modern Ajax developer, who will take these technologies as a given. The patterns are included to introduce the types of technologies that are used, provide a common vocabulary used throughout the language, and facilitate a discussion of pros and cons.

- Ajax App Create a rich application in a modern web browser.

Display Manipulation

- Display Morphing Alter styles and values in the DOM to change display information such as replacing text and altering background colour.
- Page Rearrangement Restructure the DOM to change the page's structure - moving, adding, and removing elements.

- Web Service Expose server-side functionality with an HTTP API.
- XMLHttpRequest Call Use XMLHttpRequest objects for browser-server communication (a minimal sketch appears at the end of this page).
- IFrame Call Use IFrames for browser-server communication.
- HTTP Streaming Stream server data in the response of a long-lived HTTP connection.
- Lazy Inheritance An approach intended to simplify writing OOP; it provides support for prototype-based class hierarchies and automatic resolving and optimizing of class dependencies.
- Richer Plugin Make your application "more Ajax than Ajax" with a Richer Plugin.

Programming Patterns (25)

- RESTful Service Expose web services according to RESTful principles.
- RPC Service Expose web services as Remote Procedural Calls (RPCs).
- Ajax Stub Use an "Ajax Stub" framework which allows browser scripts to directly invoke server-side operations, without having to worry about the details of XMLHttpRequest and HTTP transfer.
- HTML Message Have the server generate HTML snippets to be displayed in the browser.
- Plain-Text Message Pass simple messages between server and browser in plain-text format.
- XML Message Pass messages between server and browser in XML format.
- UED Format Send messages from the browser to the server using the UED data exchange format.
- Call Tracking Accommodate busy user behaviour by allocating a new XMLHttpRequest object for each request. See Richard Schwartz's blog entry. Note: pending some rewrite to take into account request-locking etc.
- Periodic Refresh The browser refreshes volatile information by periodically polling the server.
- Distributed Events Keep objects synchronised with an event mechanism.
- Cross-Domain Proxy Allow the browser to communicate with other domains by server-based mediation.
- Flash-enabled XHR A client-side proxy pattern for cross-domain Ajax, using invisible Flash to bridge the domain communication gap.
- XML Data Island Retain XML responses as "XML Data Islands", nodes within the HTML DOM.
- Browser-Side XSLT Apply XSLT to convert XML Messages into XHTML.
- Browser-Side Templating Produce browser-side templates and call on a suitable browser-side framework to render them as HTML.
- Fat Client Create a rich, browser-based client by performing remote calls only when there is no way to achieve the same effect in the browser.
- Browser-Side Cache Maintain a local cache of information.
- Guesstimate Instead of grabbing real data from the server, make a guesstimate that's good enough for most users' needs. Examples: iTunes download counter, GMail storage counter.
- Multi-Stage Download Quickly download the page structure with a standard request, then populate it with further requests.
- Predictive Fetch Anticipate likely user actions and pre-load the required data.
- Pseudo-Threading Use a timer and a worker queue to process jobs without blocking the application flow.
- Code Compression Compress code on the server, preferably not on the fly.

Code Generation and Reuse

- Cross-Browser Component Create cross-browser components, allowing programmers to reuse them without regard for browser compatibility.

Functionality and Usability Patterns (28)

All of these widget patterns will be familiar to end-users, having been available in desktop GUIs, and some in non-Ajax DHTML too. They are included here to catalogue the interaction styles that are becoming common in Ajax applications and can benefit from XMLHttpRequest-driven interaction.

- Drilldown To let the user locate an item within a hierarchy, provide a dynamic drilldown.
- Microcontent Compose the page of "Microcontent" blocks - small chunks of content that can be edited in-page.
- Microlink Provide Microlinks that open up new content on the existing page rather than loading a new page.
- Popup Support quick tasks and lookups with transient Popups, blocks of content that appear "in front of" the standard content.
- Portlet Introduce "Portlets" - isolated blocks of content with independent conversational state.
- Live Command-Line In command-line interfaces, monitor the command being composed and dynamically modify the interface to support the interaction.
- Live Form Validate and modify a form throughout the entire interaction, instead of waiting for an explicit submission.
- Live Search As the user refines their search query, continuously show all valid results.
- Data Grid Report on some data in a rich table, and support common querying functions.
- Progress Indicator Hint that processing is occurring.
- Rich Text Editor e.g. http://dojotoolkit.org/docs/rich_text.html
- Slider Provide a Slider to let the user choose a value within a range.
- Suggestion Suggest words or phrases which are likely to complete what the user is typing.
- Drag-And-Drop Provide a drag-and-drop mechanism to let users directly rearrange elements around the page.
- Sprite Augment the display with "sprites": small, flexible blocks of content.
- Status Area Include a read-only status area to report on current and past activity.
- Virtual Workspace Provide a browser-side view into a server-side workspace, allowing users to navigate the entire workspace as if it were held locally.
- One-Second Spotlight When a page element undergoes a value change or some other significant event, dynamically manipulate its brightness for a second or so.
- One-Second Mutation When a page element undergoes a value change or some other significant event, dynamically mutate its shape for a second or so.
- One-Second Motion Incrementally move an element from point-to-point, or temporarily displace it, to communicate that an event has occurred.
- Blink When an element is blinking.
- Highlight Highlight elements by rendering them in a consistent, attention-grabbing format.
- Lazy Registration Accumulate bits of information about the user as they interact, with formal registration occurring later on.
- Direct Login Authenticate the user with an XMLHttpRequest Call instead of form-based submission, hashing in the browser for improved security.
- Host-Proof Hosting Server-side data is stored in encrypted form for increased security, with the browser decrypting it on the fly.
- Timeout Implement a timeout mechanism to track which clients are currently active.
- Heartbeat Have the browser periodically upload heartbeat messages to indicate the application is still loaded in the browser and the user is still active.
- Autosave Autosave un-validated forms to a staging table on the server, to avoid users losing their work when their session expires (for example, if they get called away from their desk while filling out a long form).
- Unique URLs Use a URL-based scheme or write distinct URLs whenever the input will cause a fresh new browser state, one that does not depend on previous interaction.

Development Practices (8)

- DOM Inspection Use a DOM Inspection Tool to explore the dynamic DOM state.
- Traffic Sniffing Diagnose problems by sniffing Web Remoting traffic.
- Data Dictionary Visualize DOM tags in a table format, with a row for each attribute. (Contributed pattern)
- Simulation Service Develop the browser application against "fake" web services that simulate the actual services used in production.
- Service Test Build up automated tests of web services, using HTTP clients to interact with the server as the browser normally would.
- System Test Build automated tests to simulate user behaviour and verify the results.
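As a concrete illustration of the XMLHttpRequest Call pattern referenced above, here is a minimal browser-side sketch in TypeScript. The /sum endpoint, its plain-text response, and the "result" element are hypothetical, chosen only to show the mechanics:

```typescript
// Minimal XMLHttpRequest Call: send an asynchronous request, handle the response.
function callServer(url: string, onSuccess: (body: string) => void): void {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);            // true = asynchronous
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {  // 4 = request complete
      onSuccess(xhr.responseText);       // hand the Plain-Text Message to the caller
    }
  };
  xhr.send(null);
}

// Hypothetical usage: fetch a computed value and show it via Display Morphing.
callServer("/sum?a=1&b=2", (body) => {
  document.getElementById("result")!.textContent = body;
});
```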
<urn:uuid:d4fe2988-7fed-4572-bf1f-b9a2071aed13>
2.734375
1,880
Structured Data
Software Dev.
34.996801
Welcome to http://www.handsonuniverse.org/activities/Explorations/tactile-moonphases/
Try this instead: Link to alternate page with thumbnails linked to larger images.
SEE Project. http://analyzer.depaul.edu/SEE_Project/
These images are set for high contrast that suits the needs of individuals who are blind or visually impaired. Print the images, copy them onto swellform paper, then process them through a Swellform Graphics Machine (http://analyzer.depaul.edu/SEE_Project/bm030507.htm). The result will be tactile images to sense by touch rather than through sight. The SEE Project is funded by NASA IDEAS.
You may notice that some images appear larger or smaller than others. The Moon's orbit brings it sometimes as close as 55 Earth radii, and other times as far as 65 Earth radii. What difference does distance make in the apparent size of our Moon? Take a look at two full moon pictures taken on different dates. These pictures are mosaics constructed from images taken with the University of Chicago Yerkes Observatory Rooftop Telescope - South (Meade 8 inch, F/6.3, SBIG ST8 CCD). The number of each moon image refers to the day in its cycle. Moon images were taken during a variety of cycles.
Questions or Comments? mailto:email@example.com?subject=Project SEE: Moon Phases
Links to jpg files, png files. Link to fts files for display/manipulation with Hands-On Universe image processing software.
Explorations * Hands-On Universe * SEE Project
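To put numbers on that distance question, here is a small Python sketch. The lunar diameter (3,474 km) and Earth radius (6,371 km) are standard values assumed for illustration:

```python
import math

MOON_DIAMETER_KM = 3474.0   # assumed mean lunar diameter
EARTH_RADIUS_KM = 6371.0    # assumed mean Earth radius

def angular_size_deg(distance_earth_radii: float) -> float:
    """Apparent angular diameter of the Moon, in degrees."""
    d_km = distance_earth_radii * EARTH_RADIUS_KM
    return math.degrees(2 * math.atan(MOON_DIAMETER_KM / (2 * d_km)))

near = angular_size_deg(55)   # ~0.57 degrees
far = angular_size_deg(65)    # ~0.48 degrees
print(f"near: {near:.3f} deg, far: {far:.3f} deg")
print(f"the nearest full moon appears ~{(near / far - 1) * 100:.0f}% larger")  # ~18%
```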
<urn:uuid:4887fdb1-88ee-4e89-94b6-35a801b59d0a>
2.921875
350
Tutorial
Science & Tech.
54.619713
On New Year's Day, Comet Tuttle will be closest to the Earth, a mere 25 million miles away, and also at its brightest. The comet will be just visible to the unaided eye, so you will need to be observing from a very dark site. A gallery of images, and sky maps of when and where to look, can be found at SpaceWeather.com. [Image of Comet Tuttle taken by Pete Lawrence]
Happy solstice to all our readers! The winter solstice this year occurs at 6 am on 22 December 2007. That is the time when the Earth's North Pole is pointing directly away from the Sun (which is why it is so much colder in the Northern Hemisphere). For people living in the Southern Hemisphere, the South Pole is pointing towards the Sun, making it summertime 'down under'!
On the night of 13 December, and the morning of 14 December, the Geminid shooting star shower reaches its peak. The Earth will be ploughing through a stream of debris left behind by asteroid Phaethon, and we see these fragments burn up as they hit the Earth's atmosphere, causing the shooting stars. And they are often big fragments! I myself saw a huge fireball in the UK during the Geminid shower of 1994. More details can be found at the NASA science website. Details of all the major annual meteor showers visible from the UK are available on the NMM website.
Comet Holmes now appears almost twice the diameter of the full Moon in the night sky. To see the latest images, see the gallery at SpaceWeather.com. Because the comet is so large in the sky, its light is spread out, making it appear much fainter. But it is still visible to the unaided eye when well away from light pollution. The best way to observe the comet now is with a pair of binoculars that are large (to collect a lot of light) but with low magnification (because the comet is so large in the sky).
The apparent size and brightness of Comet Holmes are regularly estimated by amateur astronomers worldwide. A list of estimates is available at the IAC/ICQ/MPC website. Using averages of these estimates, I have plotted the apparent size of Comet Holmes against time (below). In this graph, you can see the number of days along the bottom since 24 October 2007 - the date when Comet Holmes suddenly increased in brightness. Up the left-hand side of the graph, I show the angular size of the comet - that is, how big the comet appears to us in the night-time sky. The apparent size of the full Moon, which is half a degree across (or 32 arc-minutes), is labelled for comparison. Up the right-hand side of the graph, I show the actual size of Comet Holmes in millions of km (assuming that the comet is at a fixed distance of 1.7 AU away - although the comet is moving away from us, it has not moved too much over the last 2 months). Note how, within days of the outburst in October, the comet was bigger than the separation of the Earth and the Moon, and within weeks it was physically bigger than the Sun! Currently, it appears about 1 degree (60 arc-mins) across in the night sky - that's twice the diameter of the full Moon. In physical size, the nucleus of the comet is now surrounded by a cloud of gaseous water that is over 2.5 times larger than the Sun. What an amazing comet!
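For readers who want to reproduce the right-hand axis of that graph, here is a minimal Python sketch of the arc-minutes-to-kilometres conversion at the assumed fixed distance of 1.7 AU (the Sun's diameter is included for comparison):

```python
import math

AU_KM = 1.496e8           # kilometres per astronomical unit
SUN_DIAMETER_KM = 1.39e6  # solar diameter, for comparison
DISTANCE_AU = 1.7         # assumed comet distance, as in the post

def physical_size_km(angular_arcmin: float) -> float:
    """Physical diameter for a given apparent size (small-angle approximation)."""
    theta_rad = math.radians(angular_arcmin / 60.0)
    return DISTANCE_AU * AU_KM * theta_rad

coma = physical_size_km(60)  # the ~1 degree coma quoted above
print(f"coma diameter: {coma / 1e6:.1f} million km "
      f"(~{coma / SUN_DIAMETER_KM:.1f}x the Sun)")
# The exact multiple of the Sun depends on the measured coma size used.
```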
<urn:uuid:ad4b71d1-d528-4474-b6b5-5e55a81c8aed>
3.59375
757
Personal Blog
Science & Tech.
64.333113
Splash (fluid mechanics)

In fluid mechanics, a splash is a sudden disturbance to the otherwise quiescent free surface of a liquid (usually water). The disturbance is typically caused by a solid object suddenly hitting the surface, although splashes can occur in which moving liquid supplies the energy. This use of the word is onomatopoeic.

Splashes are characterized by transient ballistic flow, and are governed by the Reynolds number and the Weber number. In the image of a brick splashing into water to the right, one can identify freely moving airborne water droplets, a phenomenon typical of high Reynolds number flows; the intricate non-spherical shapes of the droplets show that the Weber number is high. Also seen are entrained bubbles in the body of the water, and an expanding ring of disturbance propagating away from the impact site.

Physicist Lei Xu and coworkers at the University of Chicago discovered that the splash due to the impact of a small drop of ethanol onto a dry solid surface could be suppressed by reducing the pressure below a specific threshold. For drops of diameter 3.4 mm falling through air, this pressure was about 20 kilopascals (0.2 atmosphere).

Splash plate

A plate made of a hard material on which a stream of liquid is designed to fall is called a "splash plate". It may serve to protect the ground from erosion by falling water, such as beneath an artificial waterfall or water outlet in soft ground. Splash plates are also part of spray nozzles, such as in irrigation sprinkler systems.

See also
- Harold Eugene Edgerton, whose Milk Drop Coronet is arguably the most famous photograph of a splash
- Slosh, another free-surface phenomenon

References
- Lei Xu et al., "Drop splashing on a dry smooth surface," Phys. Rev. Lett. (2005)
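As a rough illustration of the two dimensionless numbers named above, here is a minimal Python sketch. It uses textbook property values for water; the impact speed and length scale are assumptions for illustration, not figures from the article:

```python
# Reynolds number Re = rho*v*L/mu      (inertia vs. viscosity)
# Weber number   We = rho*v**2*L/sigma (inertia vs. surface tension)

rho = 1000.0   # water density, kg/m^3
mu = 1.0e-3    # dynamic viscosity of water, Pa*s
sigma = 0.072  # surface tension of water, N/m

v = 2.0        # assumed impact speed, m/s
L = 0.05       # assumed object size (e.g. a brick edge), m

Re = rho * v * L / mu         # ~1e5: inertia dominates viscosity, droplets break free
We = rho * v**2 * L / sigma   # ~3e3: surface tension cannot keep the drops spherical

print(f"Re = {Re:.0f}, We = {We:.0f}")
```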
<urn:uuid:4c6631d4-811f-4f47-be0c-a0d12013f51d>
3.890625
383
Knowledge Article
Science & Tech.
46.962959
It was also a place for owners to release fishes and other animals 'back to their nature'. Some of these released animals were not native to our region. Over time, a community of both native and invasive species is created within the longkang, eventually forming a longkang habitat.........
Recently, while walking along a road in Pulau Ubin with KS, RY, JL and IV, we chanced upon a longkang which was teeming with life. With one glance, we saw animals from 2 phyla and about 5 classes, mainly from the subphylum Vertebrata, which is under the phylum Chordata. Apparently, like other longkangs, there were some invasive species, for example the tortoise (class Reptilia), which we could not take pictures of due to reflection off the water. Also, this fish (identification unknown) may not be native either. Schools of what looked like small half-beaks were seen as well (picture below). As this longkang is in close proximity to a mangrove habitat, some of the mangrove species were also seen, mainly the gobies..
Also, a tree-climbing crab (Episesarma sp.) was spotted by KS at the edge of the longkang (picture above). Some small mud lobster mounds (no picture) and burrows (picture below) were seen around the longkang as well. Species from the subphylum Crustacea (refer to the Spiders at our backyard..) seem to have a foothold here as well.
Unable to resist the temptation, I decided to enter the longkang (picture below) to 'be 1 with the habitat' while the rest remained on the road to watch from a distance (picture far below). There, I tasted the water as well to confirm that it was fresh water. However, time passed quite quickly and we had to move on. Reluctantly, I had to leave the longkang, bringing nothing but pictures and an experience which not many urban dwellers have in our air-conditioned nation....
Note: scientifically, there is no such term as longkang habitat.
longkang ==> drain
Vertebrates: chordates which have a backbone, or vertebral column, that forms the skeletal axis of the body.
Chordates: deuterostome animals that, at some time in their lives, have a cartilaginous, dorsal skeletal structure called a notochord; a dorsal, tubular nerve cord; pharyngeal gill grooves; and a postanal tail.
Also featured in:
Solomon, Berg and Martin. (2008) Biology, 8th Edition. Thomson Brooks/Cole.
Peter K L Ng and N Sivasothi. (1999) "A Guide to the Mangroves of Singapore II: Animal Diversity". Singapore Science Centre.
<urn:uuid:3c283a33-37c2-461a-849e-f638a2408d32>
2.84375
614
Personal Blog
Science & Tech.
51.700766
We got hit by one 1,200 years ago. It came from two colliding neutron stars a few thousand light years away, and scientists were just now able to pick it up because of a spike of carbon-14 in tree rings. What did it do around the year 775 AD? Pretty much nothing. The estimated two-second blast had essentially zero effect on the earth, since the most high-tech things on the planet at the time were the castle and the crossbow. Had that blast happened today we would be in some serious trouble, since it would short out power grids and knock out all of our satellites. If the blast had happened from, say, 100 light years away, we would have been a crispy cinder.

These gamma ray bursts were the result of the creation of a black hole from the collision of the neutron stars. So you'll have to excuse science for taking a while to figure this mystery out, since there's no visible evidence. Had it been a supernova, people would have seen it in the 700s because it would have been so bright it would have been visible during the day. Had it been a solar flare, it would have been the largest flare ever recorded. The black hole theory pretty much settles everything. Except, when is this going to happen again? (Buy this awesome book on space by Neil deGrasse Tyson - the guy that killed Pluto.)
<urn:uuid:b7693f17-084a-4de8-97c7-e3b5d8861d95>
3.375
286
Personal Blog
Science & Tech.
67.524745
Constructing an Open Box: An open box with a square base is required to have a volume of 10 cubic feet.

a) Express the amount A of material used to make such a box as a function of the length x of a side of the square base.

My answer: A(x) = x^2 + 4x(10/x^2) = x^2 + 40/x (the base plus four sides of height 10/x^2)

b) How much material is required for a base 1 foot by 1 foot?
c) How much material is required for a base 2 feet by 2 feet?
d) Graph A = A(x). For what value of x is A smallest?

I cannot figure out b, c, d...thank you
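One way to check parts b)-d) numerically, assuming the area function derived in part a) is correct (a sketch, not the only approach):

    public class OpenBox {
        // Surface area of an open box with square base x and volume 10 ft^3:
        // A(x) = x^2 + 40/x
        static double area(double x) { return x * x + 40.0 / x; }

        public static void main(String[] args) {
            System.out.println("A(1) = " + area(1.0)); // 41 ft^2  (part b)
            System.out.println("A(2) = " + area(2.0)); // 24 ft^2  (part c)

            // Part d: setting A'(x) = 2x - 40/x^2 = 0 gives x = cbrt(20)
            double xMin = Math.cbrt(20.0);
            System.out.printf("Minimum at x = %.3f ft, A = %.3f ft^2%n",
                              xMin, area(xMin)); // x ~ 2.714, A ~ 22.104
        }
    }

Plugging x = 1 and x = 2 into A(x) answers b) and c) directly; the calculus step in the comment is where the graph in d) bottoms out.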
<urn:uuid:2d56224e-e6b6-4dad-bb65-a20c6e83e07e>
3.578125
140
Q&A Forum
Science & Tech.
96.726656
Martin Harwit has argued that we cannot have made more than ten per cent of the crucial discoveries in Astronomy. He uses what John Barrow aptly calls 'the proof-readers argument'. If two independent readers look at a manuscript then it is possible to estimate, by comparing their different results, how many errors there must be in total, including those not identified. In an analogous way two independent astronomical channels (say optical and X-ray) can be used to examine the Universe, and a comparison of their separate key discoveries will yield an estimate of the numbers still to be found.

In any case, with so little data to work on it shouldn't be too difficult to devise a plausible theory to account for them. It is, however, sobering to compare the cosmological situation with the history of other sciences. Take geology. Men were living on the earth for millions of years, and quarrying rock, digging mines and canals and puzzling over its fossils for thousands of years, before unexpected palaeomagnetic patterns revealed for certain the key idea of Continental Drift. In stellar physics two thousand years elapsed between Hipparchus's speculations and Bessel's first measurement of a stellar distance. Seventy years later the statistical patterns in the H-R diagram led to our understanding of stellar structure. However the closest comparison comes from my own field of galaxy astronomy which is, as an observational science, almost exactly contemporary with cosmology. Although we now have good spectra and images of thousands of galaxies, the list of fundamental things we don't know about them (Table 3) is far more striking than the list of things we do.

Table 3:
1. How our knowledge is warped by Selection Effects.
2. What they are mostly made of. (Dark Matter?)
3. How they formed - and when.
4. How much internal extinction they suffer from.
5. What controls their global star-formation rates.
6. What parts their nuclei and halos play.
7. If there are genuine correlations among their global properties.
8. How they keep their gas/star balances.

Of course these are only arguments by analogy. The optimistic cosmologist can always counter-argue [I don't know how] that the Universe in the large is a great deal simpler than its constituent parts.
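To make the proof-readers argument concrete, here is a small Java sketch of the Lincoln-Petersen-style estimate it rests on. The counts are invented for illustration; the text gives no actual figures.

    public class ProofReadersEstimate {
        // If channel A found a discoveries, channel B found b, and m were
        // found by both, the estimated total is a * b / m.
        static double estimatedTotal(int a, int b, int m) {
            return (double) a * b / m;
        }

        public static void main(String[] args) {
            int optical = 30, xray = 12, both = 8;  // hypothetical counts
            double total = estimatedTotal(optical, xray, both);
            System.out.printf("Estimated total discoveries: %.0f%n", total); // 45
            System.out.printf("Fraction already found: %.0f%%%n",
                    100 * (optical + xray - both) / total); // ~76%
        }
    }

The smaller the overlap between the two channels relative to their separate hauls, the larger the estimated total, and hence the smaller the fraction of discoveries already made - which is the direction of Harwit's conclusion.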
<urn:uuid:d9323dd8-b6ab-4284-9ad8-2d96d4a50574>
3.5
497
Academic Writing
Science & Tech.
52.79938
There are two different questions at work here, that you've kind of mashed together.

The first question is "What is the speed at which a change in the electric field propagates?" The answer to that is the speed of light. In QED terms, the electromagnetic interaction that we see as the electric field is mediated by photons, so any change in an established field (say, due to shifting the position of the charge creating the field) won't be felt by a distant object until enough time has passed for a photon from the source to make it to the observation point.

The second question is "What is the speed of propagation of electric current?" This speed is slower than the speed of light, but still of about the same order of magnitude -- the exact value depends a little on the arrangement of wires and so on, but you won't be far off if you assume that electrical signals propagate down a cable at the speed of light. This relates to the electric field in that the charge moving through a circuit to light a light bulb has to be driven by some electric field, so you can reasonably ask how that field is established, and how much time it takes.

Qualitatively, the necessary field is established by excess charge on the surface of the wires, with the surface charge being generally positive near the positive terminal of a battery and generally negative near the negative terminal, and dropping off smoothly from one to the other so that the electric field is more or less piecewise constant (that is, the field is the same everywhere inside a wire, and the field is the same everywhere inside a resistor, but the two field values are not the same). When the circuit is first connected, there is a rapid redistribution of the charge on the surface of the wires which establishes the surface charge gradients that drive the steady-state current that will eventually do whatever it is you want it to do. The time required to establish the gradients and settle in to the steady-state condition is very fast, most likely on the order of nanoseconds for a normal circuit.

There's a good discussion of the business of how, exactly, charges get moved around to drive a current in the textbook that we use for our introductory classes, Matter and Interactions, by Chabay and Sherwood. It doesn't go into enough detail to let you calculate the relevant times directly, but it lays out the basic science pretty well. (It's a textbook for a first-year introductory physics class, so it sweeps a lot of condensed matter physics under the metaphorical rug -- there's no discussion of band structure or surface modes, or any of that. It's fairly solid conceptually, though, at least according to colleagues who know more about those fields than I do.)
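A rough back-of-the-envelope check on the "about the speed of light" claim, as a Java sketch; the 0.66 velocity factor is an assumed, typical value for a cable, not a figure from the answer above:

    public class SignalDelay {
        public static void main(String[] args) {
            double c = 3.0e8;            // speed of light, m/s
            double velocityFactor = 0.66; // assumed fraction of c in the cable
            double cableLength = 10.0;   // metres

            double delaySeconds = cableLength / (velocityFactor * c);
            System.out.printf("Delay over %.0f m: %.1f ns%n",
                              cableLength, delaySeconds * 1e9); // ~50 ns
        }
    }

Even with the slowdown, a ten-metre run costs only tens of nanoseconds, which is why "assume the speed of light" is a safe approximation for everyday circuits.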
<urn:uuid:49279033-9e98-43e4-ba87-afe02bc68b49>
3.515625
558
Q&A Forum
Science & Tech.
35.886967
In Jena, Graph is an interface. It abstracts anything that looks like RDF - storage options, inference, other legacy data sources. The main operations are addition and deletion of triples, and find; there are also a number of getters to access handlers for various features (query, statistics, reification, bulk update, event manager). Having handlers, rather than directly including all the operations for each feature, reduces the size of the interface and makes it easier to provide default implementations of each feature.

An implementer rarely needs to implement the Graph interface directly. More usually, an implementation starts by inheriting from the class GraphBase. A minimal (read-only) implementation just needs to implement the find operation (graphBaseFind). Wrapping legacy data often only makes sense as a read-only graph. To provide update operations, implement performAdd and performDelete, which are the methods called from the base implementations of add and delete. Then for testing with JUnit, inherit from AbstractGraphTest (override tests that don't make sense in a particular circumstance) and provide the getGraph operation to generate a graph instance to test.

Whereas the graph level is kept minimal and symmetric for easy implementation (e.g. it allows literals as subjects and includes named variables), the RDF API enforces the RDF conditions and provides a wide variety of convenience operations, so writing a program can be succinct, not requiring the application writer to write unnecessary boilerplate code sequences. The ontology API does the same for OWL. If you look at the javadoc, you'll see the APIs are large but the system-level interface is small. A graph is turned into a Model by calling ModelFactory.createModelForGraph(Graph). All the key application APIs are interface-based, although it's rarely necessary to do anything other than use the standard Model-Graph bridge.

Data access to the graph all goes via find. All the read operations of the application APIs, directly or indirectly, come down to calling Graph.find or a graph query handler. And the default graph query handler works by calling Graph.find, so once find is implemented everything (read-only) works - ARQ's query API, which includes a SPARQL implementation, included. It may not be the most efficient way, but importantly all functionality is available, and so the graph implementer can quickly get a first implementation up and running, then decide where and when to spend further development time - or whether that's needed at all.

An example of this is a prototype Jena-Mulgara bridge (work in progress as of Jan '08). This maps the Graph API to a Mulgara session object, which can be a local Mulgara database or a remote Mulgara server. The prototype is a single class, together with a set of factory operations for more convenient creation of a bridge graph wrapped in all of Jena's APIs. Implementing graph nodes, for IRIs and for literals, is straightforward. Mulgara uses JRDF to represent these nodes and to represent triples. Mapping to and from the Jena versions of the same is just a change in naming.

Blank nodes are more interesting. A blank node in Jena has an internal label (which is not a URI in disguise). When working at the lowest level of Graph, the code is manipulating things at a concrete, syntactic level. A blank node in Mulgara has an internal id, but it can change. It really is the internal node index, as I found out by creating a blank node with id=1 and finding it turned into rdf:type, which was what was really at node slot 1. Paul has been (patiently!) explaining this to me on a Mulgara mailing list.

The session interface is an interface onto the RDF data, not an interface to extend the graph details to the client. Both approaches are valid - it's just different levels of abstraction. If the Jena application is careful about blank nodes (not assuming they are stable across transactions, and not deleting all triples involving some blank node, then creating triples involving that blank node) then it all works out. The most important case, reading data within a transaction, is safe. Bulk loading is better done via the native Mulgara interfaces anyway. The Jena-Mulgara bridge enables a Jena application to access a Mulgara server through the same interfaces as any other RDF data.
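As a concrete illustration of the GraphBase route, here is a minimal read-only graph over an in-memory set of triples. This is a sketch against the current Apache Jena API - the 2008-era packages were com.hp.hpl.jena.*, and the exact graphBaseFind signature has varied across versions - so treat the names as approximate rather than matching the prototype described above.

    import java.util.HashSet;
    import java.util.Set;
    import org.apache.jena.graph.Graph;
    import org.apache.jena.graph.NodeFactory;
    import org.apache.jena.graph.Triple;
    import org.apache.jena.graph.impl.GraphBase;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.util.iterator.ExtendedIterator;
    import org.apache.jena.util.iterator.WrappedIterator;

    /** A minimal read-only Graph backed by an in-memory Set of triples. */
    public class SetBackedGraph extends GraphBase {
        private final Set<Triple> triples = new HashSet<>();

        public SetBackedGraph(Set<Triple> source) {
            triples.addAll(source);
        }

        // The one method a read-only GraphBase subclass must supply:
        // return every triple matching the (possibly wildcard) pattern.
        @Override
        protected ExtendedIterator<Triple> graphBaseFind(Triple pattern) {
            return WrappedIterator.create(triples.iterator())
                                  .filterKeep(pattern::matches);
        }

        public static void main(String[] args) {
            Set<Triple> data = new HashSet<>();
            data.add(Triple.create(
                    NodeFactory.createURI("http://example.org/s"),
                    NodeFactory.createURI("http://example.org/p"),
                    NodeFactory.createLiteral("hello")));

            // Wrap the Graph in the application-level Model API: find is
            // implemented, so all read operations now work through it.
            Graph g = new SetBackedGraph(data);
            Model m = ModelFactory.createModelForGraph(g);
            m.write(System.out, "N-TRIPLES");
        }
    }

Once graphBaseFind works, ModelFactory.createModelForGraph gives the full Model API over the wrapped data, which is exactly the "implement find and everything read-only works" point the post makes.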
<urn:uuid:4357bba8-8f33-427a-854e-0afa0f5e1dea>
2.765625
909
Personal Blog
Software Dev.
41.344553
Very nice diagrams of refraction (with the red lines). Very good at explaining the phenomenon. I think that a rainbow is visible only when the sun is at a low altitude - mornings and late afternoons/evenings. Isn't there some specific angle for this? KRS 15:33, 1 Feb 2004 (UTC)

- I added: Hence there is no rainbow if the sun is at a higher altitude than 42°: the rainbow would be below the horizon. --Patrick 23:30, 1 Feb 2004 (UTC)

Nevertheless, it is not true, as sometimes one can look below the horizon. For example, if you are looking down from a mountain, or - as mentioned in the article! - from an aeroplane.

I've deleted the incorrect reference to glories from the aeroplane comment. A glory is a different optical phenomenon from a rainbow, and it is incorrect to state that a full-circle rainbow is a glory. This error needs to be removed from the page Glory_(rainbow) and I've put that on my task list, but I'm not sure how to fix the problem that the error is incorporated into the page title. Advice welcome. --Richard Jones 13:45, 20 Mar 2004 (UTC)

Added: Even more rarely is a triple rainbow seen, and a few observers have reported seeing quadruple rainbows in which a dim outermost arc had a rippling and pulsating appearance.
- Sounds fantastic, but I saw this, and I was not the only one - Leonard G. 03:50, 25 Aug 2004 (UTC)

The article does a clumsy job of explaining what is special about the 42° or the 52° angle. The picture led me to correctly see that light can be refracted, internally reflected, and refracted again at a large range of angles; it's just that 42° is where the largest intensity of refraction occurs. The page http://www.phy.ntnu.edu.tw/java/Rainbow/rainbow.html has a much better explanation for the angle. 220.127.116.11 21:46, 30 Aug 2004 (UTC)

I'm not clear on this section: In a very few cases, a moonbow, or night-time rainbow, can be seen on strongly-moonlit nights. As human visual perception for colour in low light is poor, moonbows are perceived to be white. In Hawaii, we see moonbows all the time, and it's possible to make out many colors. So, what does the editor (or author) mean by "in a very few cases"? --Viriditas 12:00, 29 Oct 2004 (UTC)

The article states: Even more rarely is a triple rainbow seen and a few observers have reported seeing quadruple rainbows... These things are not rare in Hawaii. I've seen triple rainbows many times and a quadruple rainbow only twice. --Viriditas 12:32, 29 Oct 2004 (UTC)
- More importantly, we could use a scientific explanation of how they are possible. I've seen a 3+ rainbow and know that the additional bows cannot be explained using Descartes' internal reflections in a rain drop. -- Solipsist 08:32, 24 Nov 2004 (UTC)

The main mnemonic described in the article is 'Richard of York...'; given the subject, am I right in thinking that this is only commonly used in the UK? Another editor has also added 'Roy G. Biv', saying it is more common. I haven't heard this one; is it common in the US? -- Solipsist 08:43, 24 Nov 2004 (UTC)

Total internal reflection?

The article states that light is reflected from the back of the drop under total internal reflection. I find this statement rather dubious at best. A quick derivation from Snell's law shows that the minimum angle for total internal reflection in water (using nw = 1.33) is 48.7 degrees. That would imply that the angle at the back of the droplet is greater than 90 degrees, which by inspection is not the case. Since light would therefore leave the back of the drop refracted, would it not be impossible to see a rainbow between the observer and the sun, if the appropriate areas of the sky were unobscured? Kenneth Charles

Edit: I did some research. Light is indeed passed out the back of a droplet, but because there is no distinct peak of emission in this spectrum, it does not form a visible rainbow. However, the statement that light is totally internally reflected inside a raindrop is wrong and should be removed.
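The critical-angle figure quoted in the last comment is easy to verify; here is a small Java sketch of the calculation, using the same n = 1.33 from the comment above:

    public class CriticalAngle {
        public static void main(String[] args) {
            // Snell's law at a water-to-air boundary: total internal
            // reflection requires sin(theta) > 1/n.
            double n = 1.33;  // refractive index of water
            double thetaC = Math.toDegrees(Math.asin(1.0 / n));
            System.out.printf("Critical angle: %.1f degrees%n", thetaC); // ~48.8
            // Rays in the rainbow geometry strike the back of the drop at
            // less than this angle, so the internal reflection is partial
            // rather than total - consistent with the correction above.
        }
    }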
<urn:uuid:be7c449a-7a83-4c19-9cde-e1a313a1a2b7>
3.28125
997
Comment Section
Science & Tech.
68.455178
Histogram of the raw rainfall (mm) amount for running 3-month periods in chronological order from 1955 through 1996. The seasonal cycle of the quartile boundaries (25 %ile: lower light line; 50 %ile [i.e., median]: dark line; and 75 %ile: upper light line) is plotted with the actual rainfall amounts for the given period/year (vertical bars). The year labels shown on the horizontal axis are placed at the center of the calendar years rather than at Dec-Jan-Feb, with the latter denoted by tick marks. The ENSO status of each boreal winter is shown underneath the main panels of the histogram. Boreal winters that are split between two rows of the histogram (i.e., 1968-69 and 1982-83) have their ENSO status indicated in both rows.
<urn:uuid:54416580-72fc-46d4-8c33-021e084cae99>
2.703125
181
Structured Data
Science & Tech.
68.527735
Common Lisp the Language, 2nd Edition

Several kinds of numbers are defined in Common Lisp. They are divided into integers; ratios; floating-point numbers, with names provided for up to four different floating-point representations; and complex numbers. X3J13 voted in March 1989 (REAL-NUMBER-TYPE) to add the type real.

The number data type encompasses all kinds of numbers. For convenience, there are names for some subclasses of numbers as well. Integers and ratios are of type rational. Rational numbers and floating-point numbers are of type real. Real numbers and complex numbers are of type number.

Although the names of these types were chosen with the terminology of mathematics in mind, the correspondences are not always exact. Integers and ratios model the corresponding mathematical concepts directly. Numbers of type float may be used to approximate real numbers, both rational and irrational. The real type includes all Common Lisp numbers that represent mathematical real numbers, though there are mathematical real numbers (irrational numbers) that do not have an exact Common Lisp representation. Only real numbers may be ordered using the <, >, <=, and >= functions.

A translation of an algorithm written in Fortran or Pascal that uses real data usually will use some appropriate precision of Common Lisp's float type. Some algorithms may gain accuracy or flexibility by using Common Lisp's rational or real type instead.
<urn:uuid:bcd01b03-abb2-4458-a8b2-0122562fb6ac>
3.828125
283
Documentation
Software Dev.
36.567112
Caroline is learning to swim; she is taking lessons in

In a coordinate plane, the points (2,4) and (3,-1) are on a line. Which of the following must be true? 1. The line crosses the x-axis. 2. The line passes through (0, 0). 3. The line stays above the x-axis at all times. 4. The line rises from the lower left to the upper right. ...

X=2. Is that right, Ms. Sue? I will spell Algebra correctly from now on; thanks for your help. Solve the equation 15(x+3)=75

Sorry, Mr. Reiny, I could not find the page where I had asked the question on Sunday when I went back to look. Thanks for the link and the answer.

Please show me step by step how to make a table of solutions for the equation, and then use the table to graph the equation. y = 2x - 1

Who was the best president?

Make a table of solutions for the equation, and then use the table to graph the equation. Just graph one of them. y = 2x - 1 How do I make one? May I use Microsoft Excel?

Sorry, Mr. Reiny, I guess I should have figured that out since you are so smart at doing the math problems. I do not have an option key on my Windows 7 keyboard, but I bet there is another way I can do the underline thing. Thanks again for taking time out of your day to help us, Ma...

Thanks, Reiny. You assumed correctly - how did you get the line under the greater-than sign? You are a very smart and kind woman to have been such a great help. Thanks!
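For the table-of-solutions question above, one simple way to build such a table is shown in the Java sketch below; the range of x values is an arbitrary choice, and a spreadsheet like Excel works just as well:

    public class TableOfSolutions {
        public static void main(String[] args) {
            // Print a table of (x, y) pairs for y = 2x - 1,
            // which can then be plotted point by point.
            System.out.println(" x | y = 2x - 1");
            for (int x = -2; x <= 2; x++) {
                System.out.printf("%2d | %3d%n", x, 2 * x - 1);
            }
        }
    }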
<urn:uuid:b893e824-8132-4f01-92ab-ecd84c5ffbe8>
3.203125
364
Comment Section
Science & Tech.
88.598527
Yes - of course it does. Without "random" in front, choice is an attribute with no object. I didn't actually do it, but that's what would happen - "choice is not defined blahblah".

Is that the same as saying it's not a "global namespace"? I'd have to consult a reference book to be 100% sure. I'm not certain how a pure object-oriented language treats namespaces compared to a procedural language (i.e., C++ is both procedural and OOP - I need to do a review).

For example, in C++ you explicitly state your namespaces - in 99% of cases students do this by adding a line 'using namespace std;' (std = standard) near the top of their file, which is frowned upon in most real projects. By doing this they don't have to put the namespace std in front of functions defined in std.

cout << "hello" << endl; //prints hello
std::cout << "hello" << std::endl; //prints hello

Now say you have a special cout function that prints ascii numbers instead of the letters to the console. You can define a namespace in your file called Manta and do this....

Manta::cout << "hello" << std::endl; //prints 90 88 96 96 99 (just guessing the ascii values)

In practice, namespaces are used in procedural languages to avoid name clashes. When a project gets large enough, you start running out of good descriptive variable names, so it is better to create separate namespaces and reuse these descriptive names instead of resorting to complicated naming gyrations.

"It looks to me like random might be a static class with static methods, hence, no need to instantiate anything." Yes - a very good way to say it. How come the texts don't say that? Got me - maybe I should write a book. This is just what I think is happening... I'd have to consult python.org to be sure.

Do Java and C++ have modules? Please describe or give a definition to me for that. No. Java has the following....

packages - groups of related classes form a package.
example: javax.swing is the package for the swing classes
example: java.lang contains the core classes of the Java language

classes - you know what these are....
Math is a class containing fields and methods related to math
JButton is a class for instantiating a button in swing

And you can create your own packages.... there are a few rules for doing this, and there's a sketch of one just below.

In C++, which supports both procedural and OOP, the main library is called the STL - standard template library, which uses the namespace std like I showed you above. Instead of using a package, C++ has a keyword called friend - imo, friends are the most unfriendly thing I've seen in any language, and I much prefer Java's use of packages.

My language class didn't cover Python - C++, Java, Ada, LISP, Fortran, Prolog, Cobol and some others. Here is what one of the tutorials says...

You can use a module to organize a number of Python definitions in a single file. <snip> A package is a way to organize a number of modules together as a unit. Python packages can also contain other packages.

So Python has both modules and packages, where it looks like a module is a related group of classes and functions, and a package is a related group of modules and other packages. Here is a link that I think will explain it in detail.... I plan on reading it later tonight.
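To make the Java package idea concrete, here is a minimal sketch mirroring the Manta::cout example above; the package name com.manta and the class name are invented for illustration, and the two files are shown in one listing:

    // file: com/manta/AsciiPrinter.java
    package com.manta;  // hypothetical package name

    public final class AsciiPrinter {
        // Print each character of s as its numeric (ASCII/Unicode) value.
        public static void print(String s) {
            for (char c : s.toCharArray()) {
                System.out.print((int) c + " ");
            }
            System.out.println();
        }
    }

    // file: Main.java
    import com.manta.AsciiPrinter;

    public class Main {
        public static void main(String[] args) {
            AsciiPrinter.print("hello");  // prints 104 101 108 108 111
        }
    }

The package plays the same name-scoping role as the C++ namespace: AsciiPrinter.print never clashes with any other print, because its full name is com.manta.AsciiPrinter.print.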
<urn:uuid:62be6a2b-62af-4b65-b55d-fa907187d43b>
2.78125
764
Comment Section
Software Dev.
66.57796