Wed Feb 10 23:16:28 GMT 2010 by V

Stability of an atom is based upon pressure: if the outer field, the electron forcefield holding the internal massive space, applies sufficient pressure, the heavy atom holds together. Under weak surrounding pressure, heavier atoms undergo a degeneration process. A fractal division. Fission. The spatial hole divides into smaller constituents. A proton and an electron (a neutral heavy system) fall out of the larger system, leaving it that much less massive and energetic. Atoms heavier than uranium are easily found in more massive systems, planets more massive than Earth. The outer forces acting upon the singular system are higher, meaning less nuclear fallout. On other planets, or under a forcefield, atoms heavier than uranium can be contained without fractioning, or decaying. Fusion of nuclei is possible under force.

Thu Feb 11 01:12:11 GMT 2010 by Penguinclaw

I think the article refers to mass and not weight, which would be meaningless on such a small scale anyway.

Thu Feb 11 03:17:19 GMT 2010 by Calafalas

If I understand you correctly, you're suggesting that the half-life of radioactive elements is increased on larger planets by what is essentially an increase in atmospheric pressure. That kind of pressure can be reproduced in laboratories (that's how we make synthetic diamonds). I believe if that were true, someone would have noticed by now. Also, nuclear decay is usually in the form of alpha particles (2 protons and 2 neutrons) or beta particles (electrons emitted from a neutron as it transforms into a proton). Dropping a proton (with or without an electron) is much less likely.

Thu Feb 11 11:39:23 GMT 2010 by Ray

Why are you even bothering to take this V nutter seriously? He's spouting dribble.

Thu Feb 11 21:29:18 GMT 2010 by V

"Half-life of radioactive elements is increased on larger planets" - yes, this comes from special relativity; it has been experimentally proven that time passes slower around bodies of higher mass. But this is not what I'm saying. What I mean is that Earth, due to its mass, supports elements 1-92; any elementary systems bigger than 92 fall apart due to a lack of the gravitational pressure exerted as all falls toward the gravitational centre. The Earth's centre is a balanced point; the directional forces are infinite; there is no point on Earth at which your body is not pulled through to the centre. But this isn't the pressure accountable. The pressure accountable is the conical pressure: envision it as a hurricane gradually going right through the Earth, getting infinitely small at the singularity. The pressure I'm talking about is attributed to the cone. All particles want to fall to the central point, and the force attributable to the elementary systematic limits of 1-92 is the force the particles exert upon each other within the Earth's singular system. In a more massive system, let's say a white dwarf - a star that didn't dissipate, didn't go supernova, a star that pulls in material to rid the spatial hole - the energy particles are much denser. The forces exerted on a white dwarf contain material that would fizz away and break up if it left the spatial hold of the white dwarf's system; a material this compact, under the pressures of the Earth, would give us an insane amount of fission as the atoms expand and relax. Forcefields we have today would do nothing to contain such an elemental particle; our fields are limited to the amount of energy we can produce.
We could power the field only with the material itself.
Unit 4 Assignment: Dangerous and Natural Energy
SC300: Big Ideas in Science: From Methods to Mutation

Earthquakes: Ohio vs. The World

Every Wednesday at noon the National Weather Service out of Wilmington, Ohio, sounds the tornado sirens, and a woman's voice is heard through the skies: "this is a test of the emergency broadcast system out of Wilmington Ohio," etc. I have grown up in a small town located outside Columbus, Ohio, with the sounds of tornado sirens every year since I can remember, yet I have never seen a tornado or the damage of one in my area. Nevertheless, I know what to do in case of a tornado, and could coach others; it is something that is bred into each and every person who grows up here. Although I know what to do if a tornado were to touch down, how would that help me in the event of an earthquake? I can say without a doubt that I would have no idea what to do if an earthquake were to occur in my state or while I was traveling. At a young age I was taught that "earthquakes don't happen in Ohio," and who was I to question authority; however, there have been several earthquakes in Ohio - none reaching a magnitude 8.0, but earthquakes nonetheless. If earthquakes can occur in Ohio, why are we not taught as a society the safety precautions we can take during an earthquake? We hear of most earthquakes happening on the West Coast of the United States, but if they can happen here in the Midwest, what actually needs to happen to create an earthquake? Do more earthquakes happen in one part of the United States, or are they localized to certain parts of the world? Is there a way to predict an earthquake in order to give people time to evacuate, as there is with tornado warning systems? My objective is to address these very important questions, not only for myself, but for the well-being of other Ohioans and individuals who might...
The universe's extra dimensions

For superstring theory to work, our cosmos must have another six dimensions. Where are they hiding?

February 25, 2013

The April article "What string theory tells us about the universe" overviews superstring theory and how it fits with science's standard model of particle physics. But this model of the universe requires an additional six dimensions, which researchers haven't yet detected. Astronomy published an article a few years ago that investigates how these six dimensions are curled in space and describes possible observational signatures that scientists can look for. Specifically, in "Searching for the shape of the universe," researchers study the cosmic microwave background - a residual radiation that exists everywhere in the cosmos.

Astronomy magazine subscribers can read the downloadable article for free. Just make sure you're registered with the website.
Answer to 3.) Label the hikers with their times. First 1 and 2 go over (2 minutes), and 1 comes back (1 minute). Then 5 and 10 go over (10 minutes) and 2 returns (2 minutes). Finally 1 and 2 go over (2 minutes). The total is 17 minutes.

Answer to 4.) Since the 100-pound sack of potatoes was 99 percent water, it consisted of 99 pounds of water and 1 pound of pure potato essence. After the evaporation, the sack weighed X pounds and was 98 percent water and 2 percent potato essence. Thus 2 percent of the new weight X is the 1 pound of potato essence. Since .02X = 1, we can solve to get X = 50 pounds. The answer is that the potatoes now weigh just 50 pounds. This may seem an apolitical problem, but imagine your stockbroker's fixed fee constituting 1 percent of the original worth of your investment, but 2 percent of its present worth. Then the problem is not necessarily small potatoes.

Answer to 5.) You would take one marble from the box labeled "blue and red." Assume it's red. (Analogous reasoning follows if it's blue.) Since the marble is red and it comes from an incorrectly labeled box saying "blue and red," it must be the box with red marbles only. Thus the box labeled "blue" must contain either red marbles only or red and blue marbles. It can't be the box with red marbles only, so it must be the box with blue and red marbles. Finally, the box labeled "red" must contain the blue marbles.

John Allen Paulos, a professor of mathematics at Temple University, is the author of the best-sellers "Innumeracy" and "A Mathematician Reads the Newspaper," as well as of the forthcoming (in December) "Irreligion." His "Who's Counting?" column on ABCNEWS.com appears the first weekend of every month.
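Readers who want to check answers 3 and 4 mechanically can do so with a few lines of Python. This is a sketch, not part of the original column; it brute-forces every legal bridge-crossing schedule and solves the potato equation directly:

```python
from itertools import combinations

# Answer 3: brute-force the bridge crossing. Two hikers cross at the
# pace of the slower one; someone carries the flashlight back.
def best_time(speeds):
    def go(near, far, elapsed, best):
        if elapsed >= best:
            return best  # prune schedules already worse than the best found
        for pair in combinations(near, 2):
            crossed = far | set(pair)
            t = elapsed + max(pair)
            remaining = {s for s in near if s not in pair}
            if remaining:
                # someone on the far side brings the flashlight back
                for back in crossed:
                    best = go(remaining | {back}, crossed - {back},
                              t + back, best)
            else:
                best = min(best, t)
        return best
    return go(set(speeds), set(), 0, float("inf"))

print(best_time([1, 2, 5, 10]))  # -> 17, matching the answer above

# Answer 4: the 1 pound of potato solids must be 2 percent of the new weight.
solids = 1.0
print(solids / 0.02)  # -> 50.0 pounds
```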
A team of researchers, led by Stony Brook University paleontologist David Krause, has discovered the remains in Madagascar of what may be the largest frog ever to exist. The 16-inch, 10-pound ancient frog, scientifically named Beelzebufo, or devil frog, links a group of frogs that lived 65 to 70 million years ago with frogs living today in South America. Discovery of the voracious predatory fossil frog - reported online this week in the journal Proceedings of the National Academy of Sciences (PNAS) - is significant in that it may provide direct evidence of a one-time land connection between Madagascar, the largest island off Africa's southeast coast, and South America. "Beelzebufo appears to be a very close relative of a group of South American frogs known as 'ceratophryines,' or 'pac-man' frogs, because of their immense mouths," said Krause, whose research was funded by the National Science Foundation (NSF). The ceratophryines are known to camouflage themselves in their surroundings, then ambush prey.

from EurekAlert

Good thing these babies are extinct! As anyone who has ever been attacked by "Pac-Man" frogs knows, those things can eat their way through a cloud of skeeters in about two shakes, and they are tiny! Imagining what a ten-pounder could do is enough to keep us awake at night.
An Enterprise JavaBeans™ (EJB) component, or enterprise bean, is a body of code having fields and methods to implement modules of business logic. You can think of an enterprise bean as a building block that can be used alone or with other enterprise beans to execute business logic on the Java EE server. There are two kinds of enterprise beans: session beans and message-driven beans. A session bean represents a transient conversation with a client. When the client finishes executing, the session bean and its data are gone. A message-driven bean combines features of a session bean and a message listener, allowing a business component to receive messages asynchronously. Commonly, these are Java Message Service (JMS) messages. In Java EE 5, entity beans have been replaced by Java Persistence API entities. An entity represents persistent data stored in one row of a database table. If the client terminates, or if the server shuts down, the persistence manager ensures that the entity data is saved.
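For illustration, a minimal stateless session bean might look like the following sketch. The bean name, method, and business logic are invented for this example, not taken from the text above:

```java
import javax.ejb.Stateless;

// A minimal stateless session bean: the container manages its lifecycle,
// and clients invoke its business method to execute a module of logic.
@Stateless
public class OrderTotalBean {

    // Business logic lives in plain methods; a stateless bean keeps no
    // conversational state between client calls.
    public double totalWithTax(double subtotal, double taxRate) {
        return subtotal * (1.0 + taxRate);
    }
}
```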
The distribution of water quality variables in space and time is computed by solving the one-dimensional advection-dispersion equation, in which non-conservative constituent relationships are considered to be governed, in general, by first-order rates (see equations on page 3-5). The processes of chemical and biochemical transformations, including interaction among various parameters as represented in the model, are shown in Figure 3.1 at the end of this chapter. These constituents include dissolved oxygen, carbonaceous BOD, phytoplankton, organic nitrogen, ammonia nitrogen, nitrite nitrogen, nitrate nitrogen, organic phosphorus, dissolved phosphorus, TDS, and temperature. The conceptual and functional descriptions of the constituent reactions are based generally on QUAL2E (Brown and Barnwell 1987), although in certain instances they were updated based on the work of Bowie et al. (1985).

Mass balance equations are written for all quality constituents in each parcel of water (see equations on page 3-5). The reader is referred to Jobson and Schoellhamer (1987) for a description of the Lagrangian formulation which provides the basic framework for DSM2-Qual. In applying the water quality model, changes in concentration due to advection and dispersion, including changes due to tributaries or agricultural drainage, are first computed. Next, concentrations of each constituent in each parcel of water are updated, accounting for decay, growth, and biochemical transformations.

New subroutines developed for modeling non-conservative constituents are structured in modular form to facilitate extension for simulation of additional constituents, should such needs arise in the future. Subroutine KINETICS updates constituent concentrations at each time step. A single constituent or any combination of the eleven water quality variables can be modeled to suit the needs of the user. KINETICS is called by the parcel tracking subroutine of DSM2-Qual for every parcel at each time step. The model has also been extended to simulate kinetic interactions in reservoirs (extended open water bodies encountered in the Delta). Subroutine CALSCSK builds a source/sink matrix within KINETICS for each non-conservative constituent simulated. For simulation of temperature, a subroutine that computes net transfer of energy at the air-water interface has been adapted from the QUAL2E model with some modification. Required meteorology data (obtained preferably at hourly intervals) include dry bulb and wet bulb atmospheric temperatures, wind speed, atmospheric pressure, and cloudiness. Physical, chemical, and biological rate coefficients required for KINETICS are read as input. Some of these coefficients are constant throughout the system; some vary by location; and most are temperature-dependent. A list of these coefficients and sample values is provided in chapter 5.

The numerical scheme for updating kinetic interactions was developed considering properties of Lagrangian box models, which are most accurate when time steps are small enough to define the dominant temporal variations in flow and concentration. A relatively simple scheme that takes advantage of small time steps, the Modified Euler method, is used to update concentrations (a minimal sketch is given below). Concentration updating is done at least once in every time step, and more often if the parcel in question has passed a grid point before the current time step is fully accounted for.
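The report does not list the update formula itself. As a minimal sketch, one common form of the Modified Euler (Heun predictor-corrector) update for a single constituent undergoing simple first-order decay looks like this; the constituent, rate value, and function names are illustrative, not from DSM2-Qual:

```python
def modified_euler_step(c, dt, rate):
    """Advance concentration c over dt for dc/dt = -rate * c."""
    dcdt = lambda conc: -rate * conc
    predictor = c + dt * dcdt(c)                            # Euler predictor
    return c + 0.5 * dt * (dcdt(c) + dcdt(predictor))       # trapezoidal corrector

# Example: BOD-like decay over one day of 15-minute steps.
dt_days = 15.0 / (24 * 60)   # simulation time step in days
c = 10.0                     # mg/L, illustrative initial concentration
k = 0.3                      # 1/day, illustrative decay coefficient
for _ in range(96):
    c = modified_euler_step(c, dt_days, k)
print(round(c, 4))           # close to 10*exp(-0.3) = 7.4082
```

In the real model the right-hand side would also include growth and the source/sink terms assembled by CALSCSK; the sketch only shows the time-stepping pattern.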
In the latter case, when a parcel passes a grid point mid-step, the reaction time step will be the increment of time remaining to be accounted for, which is less than the simulation time step (typically 15 minutes). Consequently, reaction time steps remain small, so the Modified Euler scheme for concentration updating is appropriate. Since changes in concentration of any constituent affect the other constituents, tests are included in DSM2-Qual to check whether corrections to constituent concentrations are needed.

The ability of the model to simulate the dissolved oxygen sag on a reach of the San Joaquin River near Stockton was recently demonstrated. DSM2-Qual was capable of capturing diurnal variations of important constituents such as dissolved oxygen, phytoplankton, temperature, and nutrients under the unsteady conditions of the estuary. Variations were realistic, although the lack of a large temporal variation in observed data was somewhat of an impediment to testing the model's full capacity to predict field conditions. Tests of the model's capability to distinguish between alternatives in terms of incremental changes in water quality were encouraging (Rajbhandari 1995). The model has great potential for use as a practical tool for analysis of the impacts of water management alternatives.

To enhance the predictive capability of the model, sensitivity analysis should be performed to determine the relative influence of rate coefficients on model response. Calibrated values of the rate coefficients which are most sensitive should be refined. Also, subject to a consistent expansion of the database, future extensions of the model to add variables such as zooplankton are likely to result in improvement in model performance. Extension of the model to represent sediment transport should also be investigated, such that a dynamic interaction of sediments with simulated constituents is possible. Other uses of the model would be in providing the spatial and temporal distributions of water quality variables for the Particle Tracking Model, so that aquatic species can be more accurately modeled.

Bowie, G. L., Mills, W. B., Porcella, D. B., Campbell, C. L., Pagenkopf, J. R., Rupp, G. L., Johnson, K. M., Chan, P. W. H., and Gherini, S. A. (1985). Rates, Constants and Kinetics Formulations in Surface Water Quality Modeling. 2nd Ed., US EPA, Athens, Georgia, EPA 600/3-85/040.

Brown, L. C., and Barnwell, T. O. (1987). The Enhanced Stream Water Quality Models QUAL2E and QUAL2E-UNCAS: Documentation and Users Manual. US EPA, Athens, Georgia, EPA 600/3-87/007.

Jobson, H. E., and Schoellhamer, D. H. (1987). Users Manual for a Branched Lagrangian Transport Model. U.S. Geological Survey, Water Resources Investigation Report 87-4163.

O'Connor, D. J., and Dobbins, W. E. (1956). Mechanism of Reaeration in Natural Streams. J. Sanitary Engrg. Div., ASCE, 82(6), 1-30.

Orlob, G. T., and Marjanovic, N. (1989). Heat Exchange. Ch. 5 in Mathematical Submodels in Water Quality Systems, ed. S. E. Jorgensen and M. J. Gromiec, Elsevier Pub.

Rajbhandari, H. L. (1995). Dynamic Simulation of Water Quality in Surface Water Systems Utilizing a Lagrangian Reference Frame. PhD Dissertation, Univ. of California, Davis, Calif.
Possible Duplicate:
Travelling faster than the speed of light
Something almost faster than light traveling on something else almost faster than light

I've got two questions, which are... I always get a doubt about this. I know that a bike measures its speed based on the motion of its front wheel. So what is the case with a train? Is it the same principle? Then what about an airplane? Is ...
What term is used to describe a group of whales?

Unlike dolphins, which form large groups of 100 or more called "pods" or "schools," baleen whales do not usually travel or stay together in an organized group. Baleen whales (like humpbacks, blues, grays, seis...) are found alone, in pairs - such as a mother and calf - or in threes - such as a female with two courting males. On a larger scale (meaning across miles of water), whales stick together in what we call "populations." These larger groups migrate from breeding waters to feeding waters at the same time, and occupy the same general region together, such as the Hawaiian Islands during breeding season. So one species of whale, such as the humpback whale, will have a number of populations around the world made up of individuals that stay mostly together throughout the year.

Thank you for your question.
Liesegang ring

Liesegang ring, in physical chemistry, any of a series of usually concentric bands of a precipitate (an insoluble substance formed from a solution) appearing in gels (coagulated colloid solutions). The bands strikingly resemble those occurring in many minerals, such as agate, and are believed to explain such mineral formations. The rings are named for their discoverer, the 20th-century German chemist Raphael Eduard Liesegang.
Howard Ecker, Ray Hopper, and Alfred O.C. Nier examine a 60-degree mass spectrometer that was the prototype for the Consolidated-Nier commercial instrument. Photograph courtesy of the University of Minnesota Archives, University of Minnesota - Twin Cities.

Alfred Nier was at the forefront of mass spectrometry as it was used in the Manhattan Project. A gifted young physicist who earned his Ph.D. in 1936, Nier had already been a resource for dozens of scientists in need of a precise mass-measuring device. Nier's instrument expertise and groundbreaking work extended to the study and measurement of isotopes; by the 1930s Nier had been involved with measuring the mass of uranium isotopes:

There are actually two uranium series. There's the U-238 that decays to [lead-]206, and the U-235 that decays to [lead-]207, each at a different rate. So you have like two hourglasses, running at the same time, see? So, if you measured the isotopes of lead accurately, and you knew the isotopic composition of uranium accurately, you could then determine and compare the ages by the two methods. But one didn't know the relative abundances of the isotopes of uranium accurately. [Francis W.] Aston had observed the isotopes on his photographic plates, and showed that for U-235 there was a little smudge on the plate. But that's as far as he got. So people had guessed at the relative abundances of the uranium isotopes, but I think they were off by a factor of three or some amount like that. It was realized then that we could now accurately measure the uranium isotopes. (Nier, 60)

Nier's work in isotopes did not go unnoticed by the scientific community. With rumors swirling about what nuclear fission could do - and how devastating the results could be if that power were to fall into the wrong hands - accurate data about isotopes was necessary. The buzz surrounded fission and not Nier's work specifically, but it led to a serendipitous meeting that would prove necessary for the Manhattan Project to begin. In 1939 Nier met physicist Enrico Fermi at an American Physical Society meeting:

NIER: But when my paper came on, everybody walked out, except the chairman, my wife, and somebody else. And I remember the chairman of the session was Ed Condon, who I knew already, as I said, the man who was later going to be the Director of the Bureau of Standards. So I had this small courtesy group who listened to me tell about the isotopes of iron and nickel at the American Physical Society meeting. So that was the crazy thing that happened there. However, that was the meeting right after nuclear fission was discovered and where I met [Enrico] Fermi. I knew John Dunning already, who was the man in charge of the Columbia cyclotron, and was interested in nuclear physics, and through him, I met Fermi at that meeting.

GRAYSON: At that meeting?

NIER: At that meeting. That was April of 1939, and fission had just been discovered a few months before. It was one of the things that was talked about a lot at the meeting. And that's when I got acquainted with Fermi. So that was a positive thing that came out of the meeting. [...] Dunning had figured out that, if I just souped up the spectrometer a little bit, I could collect enough separated isotopes of uranium to make possible a determination of the fissionable isotope. He knew how much uranium it would take to detect fission if they bombarded my samples with neutrons.
I don't remember the exact conversation, but he pointed out that if I could collect some fraction of uranium-235, they ought to be able to verify it was a fissionable nuclide. (Nier, 74)

Consequently, in 1940 Fermi asked Nier and his lab to provide a small sample of U-235 for John R. Dunning at Columbia University. Fermi and Dunning thought that U-235 would be the best bet for fission, but neither could be certain until a pure sample could be procured. Nier was able to produce such a sample by using a mass spectrometer he had created earlier, which allowed Dunning to demonstrate that the isotope U-235 could undergo fission. With that discovery Nier was put to the task of separating uranium using mass spectrometry.

Alfred O.C. Nier with a 180-degree mass spectrometer tube constructed during his fellowship at Harvard University, 1937-38. Photo taken during his 1989 oral history interview, CHF Collections.

The U.S. government and the Manhattan Project leaders knew they needed massive quantities of the isotope to fuel a bomb, but they were as yet unsure what method could enrich uranium quickly. There were several possibilities: centrifuge, gaseous diffusion, gas centrifuge, electromagnetic isotope separation (a hybrid instrument consisting of a mass spectrometer and a cyclotron), or thermal diffusion. The centrifuge method was abandoned in November 1942, but other methods showed promise. Nier himself worked on ways to capitalize on the functionality of the mass spectrometer for such a task, but Dunning and his group, ironically using data from Nier, proved that the gaseous-diffusion method worked best. With Nier's help the United States was able to achieve uranium enrichment on a large scale.

Nier's pioneering efforts in mass spectrometry meant that his lab was uniquely qualified for Manhattan Project work. Nier himself noted that his combination of experience and knowledge made him one of the only people in the entire nation who would be up to the task:

You know, nobody's indispensable, but there wasn't anybody else in the world who had the experience I had. It was certainly the right place to be, for me to be, because I had the background. Keep in mind, there weren't many mass spectrometers in the world at that time. We were the only people who could even make measurements of these kinds. True, there were people coming along. Consolidated was manufacturing instruments, 180-degree instruments, which were used in the oil industry. They sold them to big oil companies, where they could do routine analyses of hydrocarbon mixtures in their plants in 1 percent of the time needed by the old methods of analysis. So there were spectrometers available. But the companies that made them . . . you were supposed to use them in a certain way. It's just like when you buy an instrument now. Unless you use it the way it's made for, it isn't too useful. There weren't many people who had the flexibility that we had, in that if a new problem came up, I said, "Sure, we'll go home and try it," and next week, we'd probably have an answer. And that's the way we lived during that time. There certainly were many clever people who could have done the same thing, but they didn't have the mass-spectrometry experience. That was really the unique thing that we had that other people didn't have, was the combination of experience and ability to develop new instrumentation. (Nier, 121-122)

Several Nier mass-spectrometry instruments were employed at various Manhattan Project labs across the United States.
In fact, most of the mass spectrographs used for monitoring the separation of uranium during the Manhattan Project were Nier models:

Until the middle of 1942 we made all of their isotope analyses. I wish I still had the telegram which I got from Gene Booth after they had sent us some critical samples. They never told us which samples were critical so as not to prejudice us. I got this wonderful telegram from Gene saying that either I could read minds or we did a good job. It was to tell me that everything was as it was supposed to be. The measurement confirmed that the diffusion method was performing as hoped for and could be developed further. We had built four machines. I sent two to Columbia and two to Virginia. (Nier, 98)
The removal of apex predators like sharks is one of the most prevalent and devastating human impacts on Earth's marine ecosystems. Overfishing and the use of indiscriminate, destructive fishing gear like gillnets are far-reaching. Sharks are becoming increasingly rare throughout most of the Belize Barrier Reef, and this decline presents a major ecological and economic problem for Belize. We know very little about the specific roles of sharks in Caribbean coral reef ecosystems, but current models and theories suggest that their loss causes ripple effects throughout local food webs and could lead to reef collapse. With sharks and reef systems under similar threats worldwide, there's an urgent need to develop a better understanding of the roles sharks have within reef ecosystems and whether protected reef areas are effective in helping populations recover.

A goal of the project is to better describe the niche of the dominant shark species on the Belize Barrier Reef, including Caribbean reef, nurse, Caribbean sharpnose, great hammerhead, blacktip, lemon, silky, night, and tiger sharks. Since no studies of shark abundance have ever been conducted before and after the establishment of a marine reserve in Belize, we have a rare opportunity - with your help - to significantly improve the management of these protected areas by observing their impact. This opportunity is especially significant for Belize, where ongoing efforts to build a comprehensive network of marine reserves take place against a backdrop of increasing shark exploitation. Although Belize has 13 marine protected areas, not all of them include no-take marine reserves, and several of them still allow longline and gillnet fishing for sharks inside or very near their borders. Showing whether and how the marine reserves in Belize are useful for shark conservation will be decisive in consolidating the existing marine reserve network and improving it over time. Working closely with local partners in the Wildlife Conservation Society-Belize and providing data to Belize's Fisheries and Tourism Departments, as well as the Association of Protected Area Management Organizations, the scientists working on this project will be able to use your efforts to bring about real improvements in shark and reef conservation.

You'll assist with the deployment, recovery, and maintenance of hook-and-line shark fishing gear in various locations at the Glover's Reef study site, as well as in the measurement of associated environmental data like water quality, salinity, and pH. You'll help the researchers tag, take tissue samples from, and release captured sharks. (All sharks are firmly secured to the side of the research vessel prior to data collection and are kept in the water for the whole procedure. Volunteers will be involved with all facets of this process except the securing and final release of the animal, which will be carried out by experienced staff.) You'll also help collect shark tissue samples within several different habitats, from local fishermen's catches, and from your own handline fishing and seine-netting. You will also conduct snorkel surveys to record habitat type, as well as abundance and diversity of coral and fish species. This data on the status of the reef will be collected for each site where a video is deployed, to allow comparisons between different habitats.
You'll have opportunities to interact with tourists and Belizeans throughout the project to help assess their attitudes toward sharks, reefs, and marine reserves, helping produce, distribute, and score written questionnaires and add them to the database. You may also help transcribe video interviews.

Meet the Scientists

Dr. Demian Chapman
School of Marine & Atmospheric Science, Stony Brook University, NY, USA

Dr. Chapman is an internationally recognized shark expert who has been working on shark research and conservation projects in Belize for nearly a decade. He is a molecular ecologist and field biologist, and an expert in the integration of telemetry tracking into research on shark dispersal and reproduction. He's the author of more than 20 scientific articles and has managed field research projects on sharks in The Bahamas, his native New Zealand, and Florida. He received his undergraduate degree in zoology and ecology from Victoria University in New Zealand, and his master's and PhD (Oceanography-Marine Biology) from Nova Southeastern University Oceanographic Center, FL, USA.

Dr. Elizabeth Babcock
Rosenstiel School of Marine & Atmospheric Science, University of Miami, FL, USA

Dr. Babcock is a quantitative fisheries scientist with experience in fisheries stock assessment and marine fish conservation for species including sharks, billfishes, and sturgeons. Using innovative data sources and analysis methods to inform management of fisheries for which traditional fisheries data are lacking is a primary focus of her research, with an increasing emphasis on marine reserves and the ecosystem impacts of fisheries. She has an undergraduate degree in Biology and Environmental Science from the University of California, Berkeley, and received her PhD from the University of Washington's School of Fisheries. After completing her PhD, she worked for the Wildlife Conservation Society (WCS) as the first Constantine S. Niarchos Fellow in Marine Conservation, and did a project on bycatch of endangered Humboldt penguins in the gillnet fishery out of Punta San Juan, Peru. She then served as Chief Scientist for the Pew Institute for Ocean Science, which is now the Institute for Ocean Conservation Science.
16.4.3 Relations for expr

expr supports the usual logical connectives and relations. These have lower precedence than the string and numeric operators (previous sections). Here is the list, lowest-precedence operator first.

- ‘|’ - Returns its first argument if that is neither null nor zero, otherwise its second argument if it is neither null nor zero, otherwise 0. It does not evaluate its second argument if its first argument is neither null nor zero.
- ‘&’ - Returns its first argument if neither argument is null or zero, otherwise 0. It does not evaluate its second argument if its first argument is null or zero.
- ‘< <= = == != >= >’ - Compare the arguments and return 1 if the relation is true, 0 otherwise. == is a synonym for =. expr first tries to convert both arguments to integers and do a numeric comparison; if either conversion fails, it does a lexicographic comparison using the character collating sequence specified by the LC_COLLATE locale.
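A few illustrative invocations (a sketch assuming GNU expr; note the operators must be quoted to protect them from the shell):

```sh
# Numeric comparison: both operands convert to integers.
expr 4 '<' 17           # prints 1
# Lexicographic comparison: 'apple' collates before 'banana'.
expr apple '<' banana   # prints 1
# '|' returns its first non-null, non-zero argument.
expr '' '|' fallback    # prints fallback
# '&' returns the first argument only if neither is null or zero.
expr hi '&' there       # prints hi
```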
West Virginia has recently discovered, through a project funded by Google, that it is sitting on top of a major resource: geothermal power. The finding is even more significant considering the state's heavy dependence on fossil fuels. Google invested $481,500 in the study, which was led by Southern Methodist University. According to the research, the state has huge geothermal potential: if only 2 percent of it could be harnessed, the system would deliver 18,890 MW of green energy. The figures show a lot more potential than originally estimated, as the new study took into consideration more details and more data points; 1,455 new thermal data points were added to existing ones, most of them in the eastern part of the state. Although its geothermal potential had been ignored because it is not a tectonically active zone, West Virginia has now become the state with the most identified geothermal sites in the US. Currently, West Virginia has a power generating capacity of 16,350 MW, but only 3 percent of that capacity is provided by renewable energy. Google, which uses great amounts of energy for its huge data centers, has invested $10 million in research into renewable energy sources that are cheaper than traditional ones.
Solar Eclipses: An Observer's Guide (Infographic)

Solar eclipses are one of the cosmic wonders of our solar system. They occur when the new moon blocks part or all of the sun as seen from the surface of the Earth. Check out the SPACE.com infographic above to see how solar eclipses work. When the moon passes in front of the sun, as viewed from Earth, the eclipse that occurs is visible from a narrow path on Earth that corresponds to the location of the moon's shadow. During a total solar eclipse, this path is known as the path of totality.

WARNING: Never look directly at the sun during an eclipse with a telescope or your unaided eye. Severe eye damage can result; scientists use special filters to safely view the sun.

There are several other types of solar eclipses. In addition to total eclipses of the sun, the moon can block part of the sun's disk (a partial solar eclipse), or leave only an outer ring of the sun visible in a so-called annular solar eclipse. A hybrid solar eclipse occurs when the tip of the moon's shadow lifts off the surface of the Earth at some point, allowing some observers to see a total eclipse while others witness an annular eclipse.
Molecules that drill holes in bacteria and make them spring a fatal leak could become the next weapon in the war against antibiotic-resistant superbugs. Unlike most antibiotics, these "hydraphile" molecules are not based on the bug-killing compounds found naturally in organisms such as fungi. Instead, they mimic the tiny "ion channels" embedded in the membranes of living cells. But while ion channels control the movement of particular ions into and out of cells, hydraphiles let them flow freely in and out. This wrecks the delicate balance needed to keep the cell alive.

Hydraphiles work by forming a tunnel right through the membrane. At each end of the molecule is a ring-like structure called a "crown ether", which is negatively charged. This pulls positive ions such as calcium and sodium into the tunnel. Each hydraphile has another crown ether "relay" halfway along its length to maintain the ions' ...
What was the first type of early glue?

How far back do you want to go? After all, the answer to your question is whatever substance a humanoid first discovered would stick two things together, and that could be just about anything. The very first glues were made several hundred thousand years ago from sticky tree sap. Clearly, there has been a lot of improvement since then! If you Google "early glues", you can find quite a lot of interesting historical information. "There is evidence of man using amber as glue as much as 11,000 years ago. In ancient Egypt, man knew how to boil fish or animal skins to make glue as well."
PL/Perl is a loadable procedural language that enables the Perl programming language to be used to write PostgreSQL functions. Normally, PL/Perl is installed as a "trusted" programming language named plperl. In this setup, certain Perl operations are disabled to preserve security. In general, the operations that are restricted are those that interact with the environment. This includes file handle operations, require, and use (for external modules). There is no way to access internals of the database backend or to gain OS-level access under the permissions of the PostgreSQL user ID, as a C function can do. Thus, any unprivileged database user may be permitted to use this language. Sometimes it is desirable to write Perl functions that are not restricted --- for example, one might want a Perl function that sends mail. To handle these cases, PL/Perl can also be installed as an "untrusted" language (usually named plperlu). In this case the full Perl language is available. The writer of a PL/PerlU function must take care that the function cannot be used to do anything unwanted, since it will be able to do anything that could be done by a user logged in as the database administrator. Note that the database system allows only database superusers to create functions in untrusted languages.
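For illustration, here is a minimal trusted PL/Perl function in the spirit of the PostgreSQL manual's examples (the function name and body are illustrative):

```sql
-- A simple trusted PL/Perl function: pure computation, no file or
-- network access, so the trusted-language restrictions don't matter.
CREATE FUNCTION perl_max (integer, integer) RETURNS integer AS $$
    my ($x, $y) = @_;
    return $x >= $y ? $x : $y;
$$ LANGUAGE plperl;

-- SELECT perl_max(3, 7);  -- returns 7
```

A function that needed to send mail or touch the filesystem would instead have to be created with LANGUAGE plperlu, by a database superuser, as described above.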
The title here should strike a familiar theme for most readers. Climate forcings do not just include CO2 (other greenhouse gases, aerosols, land use, the sun, the orbit, and volcanoes all contribute), and the impact of human emissions often has non-climatic effects on biology and ecosystems.

First up last week was a warning from Michael Prather and colleagues that the production of a previously neglected greenhouse gas (NF3) was increasing and could become a significant radiative forcing. This paper was basically an update of calculations done for the IPCC, combined with new information about the production of this non-Kyoto gas. Most of the media stories that picked this up focused on the use of this gas in a particular manufacturing process - flat-screen TVs. Thus the headlines almost all read something like "Flat-screen TVs cause global warming"! (see here, here, here etc.). Unfortunately, very few of the headline writers read the small print. NF3 is indeed a more powerful greenhouse gas than CO2 (as are methane, CFCs, SF6 etc.), but because it is much less prevalent, the net radiative forcing (as with other Kyoto gases) is much smaller. Unfortunately, no-one has any measurements of the concentration of NF3 in the atmosphere. It is likely to be increasing, since production has stepped up rapidly in recent years, but the amount of gas that escapes to the air is unknown. Manufacturers claim that it is only a very small percentage - but historically such claims have not always been very reliable. However, it is almost certain that NF3 has not caused a significant amount of global warming (yet).

The one issue that many stories did get wrong was in the comparison with coal. Prather's paper compared the effect of the entire global production of NF3 being released into the atmosphere with the CO2 impact of one coal-fired power station. Since that is the maximum estimate of the current effect, and it only matches a single power station, the subtlety of the comparison got a little lost on the way to the "Flat screen TVs 'worse than coal'" story. Needless to say, no-one should be throwing away their flat-screen TVs because of this (it is not the use of the TV itself that causes the problem), but manufacturers will likely need to step up monitoring of NF3 leakage or switch to an alternative process, which some have already done.

The second story getting some attention is the ocean acidification issue. As we've discussed previously, the increased uptake in the oceans of human-released CO2 is rapidly increasing the acidity (lowering the pH) of the oceans, making it more difficult for many carbonate-producing organisms to produce calcite or aragonite. These organisms include corals, coccolithophores, foraminifera, shellfish, etc.

Both of these issues are relevant to the ongoing climate change discussion, and it's good to see the media picking up (albeit imperfectly) on these ancillary discussions. But as with the "North Pole" lightning rod discussed last week, there always needs to be a hook before something gets wide press (the 'tyranny of the news peg', as ably described by Andy Revkin). In the first case, there was a link to a popular consumer item, and in the second, there has been a concerted effort to get the ocean acidification issue higher up the agenda. The fact of the matter is that most of what goes on in the sciences is (usually correctly) well below the radar of the public at large.
But when there are discoveries and issues that do have public policy ramifications, getting the public to pay attention often requires finding just these kinds of resonances. Now if only there were a way to make sure the story underneath was accurate....
Whiteflies, comprising only the family Aleyrodidae, are small homopterans. More than 1550 species have been described. Whiteflies typically feed on the underside of plant leaves. While feeding damage can cause economic losses, it is the ability of whiteflies to transmit or spread viruses that has had the widest impact on global food production. In the tropics and subtropics, whiteflies (Hemiptera) have become one of the most serious crop protection problems, with economic losses estimated in the hundreds of millions of dollars. While several species of whitefly cause crop losses through direct feeding, a species complex, or group of whiteflies, in the genus Bemisia is important in the transmission of plant diseases. Bemisia tabaci and B. argentifolii transmit African cassava mosaic, bean golden mosaic, bean dwarf mosaic, bean calico mosaic, tomato yellow leaf-curl, tomato mottle, and other begomoviruses in the family Geminiviridae. The worldwide spread of emerging biotypes, such as B. tabaci biotype B (also known as B. argentifolii) and a new biotype Q, continues to cause severe crop losses, which will likely continue to increase, resulting in higher pesticide use on many crops (tomatoes, beans, cassava, cotton, cucurbits, potatoes, sweet potatoes). Efforts to develop integrated pest management (IPM) systems aimed at environmentally friendly strategies that also reduce insecticide use will help re-establish the ecological equilibrium of predators, parasitoids, and microbial controls that were once in place. New crop varieties are also being developed with increased tolerance to the whiteflies and to the whitefly-transmitted plant diseases.

A major problem is the fact that the whiteflies and the viruses they carry can infect many different host plants, including agricultural crops and weeds. This is complicated by the difficulty of classifying and detecting new whitefly biotypes and begomoviruses. Proper diagnosis of plant diseases depends on using sophisticated molecular techniques to detect and characterize the viruses and whiteflies which are present in a crop. A team of researchers, extension agents, and growers working together is needed to follow disease development, using dynamic modeling, to understand the incidence of disease spread. In 1997, Tomato yellow leaf curl begomovirus (TYLCV) was discovered in the USA, in Florida. This plant disease is the worst viral disease of tomato. The disease is transmitted by the whitefly Bemisia argentifolii. The whitefly has also been shown to transmit almost all of the 60 known whitefly-transmitted plant viral diseases.

Whitefly damage by feeding: Whiteflies feed by tapping into the phloem of plants, exposing plants to the whiteflies' toxic saliva and decreasing the plants' overall turgor pressure. The damage quickly escalates as whiteflies congregate in large numbers, overwhelming susceptible plants. Damage is further exacerbated as whiteflies, like aphids, excrete honeydew as a waste product, which promotes mold growth and may seriously impede the ability of farms to process cotton harvests.

Whiteflies share a modified form of hemimetabolous metamorphosis, in that the immature stages begin life as mobile individuals but soon attach to a host plant. The stage before the adult is called a pupa, though it shares little in common with the pupal stage of holometabolous insects.

Control of whiteflies is difficult and complex, as they rapidly gain resistance to chemical pesticides.
The USDA recommends "an integrated program that focuses on prevention and relies on cultural and biological control methods when possible." Use of yellow sticky traps to monitor infestations, and only selective use of insecticides, is advised. Various companion plants are reputed to repel or trap whiteflies. Calendula does so for a documented reason, producing chemicals that repel them. Nasturtiums are thought to have a similar effect, while mint may serve either as a repellent or as a trap crop. Basil, too, has a reputation for repelling them, which may be due to its production of several essential oils that are known to drive away insects, including citronella.
Insight articles relating to "Climate Change"

Wednesday, August 29, 2012

The Arctic melt this summer is unprecedented. By mid-September it will be worse even than 2007, which saw the Northwest Passage opened for the first time. On August 27, the US National Snow and Ice Data Center reported that the ice pack was at 4.1 million square kilometers, 70,000 square kilometers lower than 2007. There are still about...
Artificial life is a concept usually reserved for horror movies and the rich scenarios of science fiction novels. But just as art imitates life, so it can also imitate science and technology, as every day researchers push the parameters of understanding the universe with new, mind-boggling discoveries. In a research project aimed at creating a synthetic life form, scientists have reported a major step forward by creating an artificial genome and bringing a hollowed-out bacterium back to life. Headed by American geneticist and genome pioneer Craig Venter, the team hopes to learn how to engineer custom-made microbes. This landmark experiment has far-reaching possibilities and complications and is highly controversial, to say the least. Some twenty scientists were involved in this "defining moment in biology" for fifteen years, at an estimated cost of $40 million. Venter firmly believes their achievement is only the beginning of a new era marked by bacteria that will work for humanity's good: churning out biofuels, manufacturing vaccines, and reducing carbon footprints by designing algae that can soak up carbon dioxide from the atmosphere.

There are many who criticize this breakthrough. (Remember how Columbus was warned that he would fall off the flat surface of the earth if he dared to venture too far from home.) Some claim that Venter is playing God, and a few religious groups have condemned his work. One organization warned that these synthetic organisms could wreak environmental chaos or even be converted into biological weapons. Official bioethics advisers, at the behest of President Obama, are scheduled to report imminently on the implications of this discovery. President Obama had this to say about this very important issue:

"In its study, the Commission should consider the potential medical, environmental, security, and other benefits of this field of research, as well as any potential health, security or other risks. Further, the Commission should develop recommendations about any actions the federal government should take to ensure that America reaps the benefits of this developing field of science while identifying appropriate ethical boundaries and minimizing identified risks."

In Venter's own words: "This is the first synthetic cell that's ever been made. This is the first self-replicating species that we have had on the planet whose parent is a computer... This becomes a very powerful tool for trying to design what we want biology to do."

While members of the research team claimed they had taken only baby steps toward the goal of starting with a digital file and custom-making an organism, there seems no question that the concept of artificial life is very close to becoming a stunning reality.
Special Relativity assumes time is a dimension, i.e. space-time is Minkowski space. There are thus four coordinates in this space, $x^i$, with the index $i$ taking the values 0,1,2,3. Since time has different units than length, to be able to describe space and time as elements of one space-time we have to multiply time by a constant of dimension length/time, i.e. a velocity. This constant is usually denoted $c$. It is then $x^0 = ct$. We will come back to the meaning of this constant later.

The other ingredient of Special Relativity is that the laws of physics are the same for all observers with constant velocity. That means there are sensible and well-defined transformations between observers that preserve the form of the equations.

A Word or Two about Tensors

The way to achieve such sensible transformations is to make the equations "tensor equations", since a tensor does exactly what we want: it transforms in a well-defined way under a change from one to the other observer's coordinate system. The simplest sort of tensor is a scalar $\phi$, which doesn't transform at all - it's just the same in all coordinate systems. That doesn't mean it has the same value at each point though, so it is actually a scalar field. The next simplest tensor is a vector $V^i$, which has one index that runs from 0 to 3, corresponding to four entries - three for the spatial components and one for the time component. Again this can be a position-dependent quantity, so it's actually a vector field. The next tensor has two indices, $T^{ij}$, which run from 0 to 3, so 16 entries, and so on: $U^{ijklmn\ldots}$. The number of indices is also called the "rank" of a tensor. To transform a tensor from one coordinate system to the other, one acts on it with the transformation matrix, one for every index. We will come to this transformation later. Note that it is meaningless to say an object defined in only one inertial frame is a tensor. If you have it in only one frame, you can always make it into a tensor by just defining it in every other frame to be the appropriately transformed version.

The Scalar Product

A specifically important scalar for Special Relativity is the scalar product between two vectors. The scalar product is a symmetric bilinear form, which basically means it's given by a rank-two tensor $g_{ij}$ that doesn't care in which order the indices come, and if you shovel in two vectors, out comes a scalar. It goes like this: $g_{ij}V^iU^j = \text{scalar}$, where sums are taken over indices that appear twice, once up and once down. This is also known as Einstein's summation convention. I used to have a photo of Einstein with him standing in front of a blackboard cluttered with sum symbols. Unfortunately I can't find it online; a reference would be highly welcome. That photo made really clear why the convention was introduced. Today the sum convention is so common that it often isn't even mentioned. In fact, you will have to tell readers instead not to sum over equal indices if that's what you mean.

The scalar product is a property of the space one operates in. It tells you what the length of a vector is, and the angles between different vectors. That means it describes how to do measurements in that space. The bilinear form you need for this is also called the "metric"; you can use it to raise and lower indices on vectors in the following way: $g_{ij}V^j = V_i$. Note how indices on both sides match: if you leave out the indices that appear both up and down, the remaining indices have to be equal on both sides.
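As a quick numerical illustration of the summation convention and the Minkowski scalar product (a sketch, not part of the original post; the vectors are made up):

```python
import numpy as np

# Minkowski metric with signature (+,-,-,-), as used in the text.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

V = np.array([2.0, 1.0, 0.0, 0.0])   # an arbitrary four-vector V^i
U = np.array([3.0, 0.5, 0.0, 0.0])   # another four-vector U^j

# g_ij V^i U^j: contract both indices against the metric.
scalar = np.einsum("ij,i,j->", eta, V, U)
print(scalar)        # 2*3 - 1*0.5 = 5.5

# Lowering an index: V_i = g_ij V^j
print(eta @ V)       # [ 2. -1. -0. -0.]
```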
Technically, the metric is a map from the tangent space to the cotangent space; it thus transforms row vectors V into column vectors V^T and vice versa, where the T means taking the transpose. A lower index is also called "covariant", whereas upper indices are called "contravariant," just to give you some lingo. The index jiggling is also called "Ricci calculus" and is one of the common ways to calculate in General Relativity. The other possibility is to go indexless via differential forms. If you use indices, here is some good advice: make sure you don't accidentally use an index twice for different purposes in one equation. You can produce all kinds of nonsense that way. In Special Relativity, the metric is (in Euclidean coordinates) just a diagonal matrix with entries (1,-1,-1,-1), usually denoted η_{ij}. In the case of a curved space-time it is denoted g_{ij}, as I used above, but that general case is a different story and shall be told another time. So for now let us stick with the case of Special Relativity, where the scalar product is defined through η.

Now what is a Lorentz transformation? Let us denote it with Λ. As mentioned above, you need one for every index of the tensor that you want to transform. Say we want to get a vector V from one coordinate system to the other: we apply a Lorentz transformation to it, so in the new coordinate system we have V' = ΛV, where V' is the same vector, but as seen in the other coordinate system. With indices that reads V'^i = Λ^i_j V^j. Similarly, the transposed vector transforms by V'^T = V^T Λ^T. Lorentz transformations are then just the group of transformations that preserve the length of all vectors, length as defined through the scalar product with η. You can derive them from this requirement. First note that a transformation that preserves the lengths of all vectors also preserves angles. Proof: draw a triangle. If you fix the lengths of all sides you can't change the angles either. Lorentz transformations are thus orthogonal transformations in Minkowski space. In particular, since the scalar product between any two vectors has to remain invariant, V^T η U = V'^T η U' = V^T Λ^T η Λ U, they fulfil (with and without indices)

Λ^k_i η_{kl} Λ^l_j = η_{ij} <=> Λ^T η Λ = η (1)

If you forget for a moment that we have three spatial dimensions, you can derive the transformations from (1) as we go along. Just insert that η is diagonal with (in two dimensions) entries (1,-1), name the four entries of Λ, and solve for them. You might want to use the fact that taking the determinant on both sides of the above equation gives |det Λ| = 1, from which we will restrict ourselves to the case det Λ = +1 to preserve orientation. You will be left with a matrix that has one unknown parameter β, in the familiar form Λ = γ (1, β; β, 1), where the semicolon separates the rows and γ^{-2} = 1 - β^2. Now what about the parameter β? We can determine it by applying the Lorentz transformation to the worldline (cΔt, Δx) of an observer at rest, such that Δx = 0. We apply the Lorentz transformation and ask what his worldline (cΔt', Δx') looks like. One finds that Δx'/Δt' = βc. Thus, β is the relative velocity of the observers in units of c. One can generalize this derivation to three spatial dimensions by noticing that the two-dimensional case represents the situation in which the motion is aligned with one of the coordinate axes. One obtains the general case by doing the same for all three axes, and adding spatial rotations to the group.
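Equation (1) and the boost matrix are easy to verify numerically; a minimal sketch, with β = 0.6 as an arbitrary choice:

    import numpy as np

    # 1+1 dimensional Minkowski metric, eta = diag(1, -1)
    eta = np.diag([1.0, -1.0])

    beta = 0.6                      # relative velocity in units of c (assumed value)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)

    # The familiar boost matrix, with gamma^{-2} = 1 - beta^2
    L = gamma * np.array([[1.0, beta],
                          [beta, 1.0]])

    # Check the defining property (1): Lambda^T eta Lambda = eta
    assert np.allclose(L.T @ eta @ L, eta)

    # Apply the boost to the worldline (c*dt, 0) of an observer at rest
    c, dt = 1.0, 1.0
    x = L @ np.array([c * dt, 0.0])
    print(x[1] / x[0])              # dx'/(c dt') = beta, i.e. dx'/dt' = beta*c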
The full group then has six generators (three boosts, three rotations), and it is called the Lorentz group, named after the Dutch physicist Hendrik Lorentz. Strictly speaking, since we have only considered the case det Λ = +1, it is the "proper Lorentz group" we have here. It is usually denoted SO(3,1). Once you have the group structure, you can go ahead and derive the addition theorem for velocities (by multiplying two Lorentz transformations with different velocities), length contraction, and time dilation (by applying Lorentz transformations to rulers).

Now let us consider some particles in this space-time with such nice symmetry properties. First, we introduce another important scalar invariant of Special Relativity, which is an observer's proper time τ. τ is the proper length of the particle's worldline, and an infinitesimally small step of proper time dτ consequently satisfies c^2 dτ^2 = c^2 dt^2 - dx^2. One obtains the proper time of a curve by integrating dτ over this curve. Pull out a factor dt^2 and use dx/dt = v to obtain γ^2 dτ^2 = dt^2. A massive particle's relativistic four-momentum is p^i = mu^i, where u^i = dx^i/dτ = γ dx^i/dt is the four-velocity of the particle, and m is its invariant rest mass (sometimes denoted m_0). The rest mass is also a scalar. We then have for the spatial components (a = 1,2,3) p^a = mγv^a.

What is c?

Let us eventually come back to the parameter c that we introduced in the beginning. Taking the square of the previous expression (summing over the spatial components), inserting γ, and solving for v, one obtains the particle's spatial velocity as a function of the momentum: v = pc/√(p^2 + m^2c^2). In the limit of m to zero, one obtains for arbitrary p that v = c. Or the other way round, the only way to get v = c is if the particle is massless, m = 0. So far there is no experimental evidence that photons - the particles that constitute light - have mass. Thus, light moves with speed c. However, note that in the derivation that got us here, there was no mention of light whatsoever. There is no doubt that historically Einstein's path to Special Relativity came from Maxwell's equations, and many of his thought experiments are about light signals. But a priori, arguing from symmetry principles in Minkowski space as I did here, the constant c has nothing to do with light. Nowadays, this insight can get you an article in NewScientist. Btw, note that c is indeed a constant. If you want to fiddle around with that, you'll have to mess up at least one step in this derivation. See also: The Equivalence Principle
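As a quick numerical check of the v(p) relation derived above; a sketch in units where c = 1, with an arbitrary momentum value:

    import numpy as np

    c = 1.0                          # work in units where c = 1
    p = 1.0                          # fixed spatial momentum (arbitrary value)

    # v = p*c / sqrt(p^2 + m^2 c^2), from p = m*gamma*v solved for v
    for m in [1.0, 0.1, 0.001, 0.0]:
        v = p * c / np.sqrt(p**2 + (m * c)**2)
        print(m, v)                  # v -> c as m -> 0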
<urn:uuid:1bfcbd33-965b-43fe-8f90-9f3f3c7430a4>
4.21875
2,273
Knowledge Article
Science & Tech.
58.792914
What is the Sloan Digital Sky Survey?

Simply put, the Sloan Digital Sky Survey is the most ambitious astronomical survey ever undertaken. The survey will map one-quarter of the entire sky in detail, determining the positions and absolute brightnesses of hundreds of millions of celestial objects. It will also measure the distances to more than a million galaxies and quasars. The SDSS addresses fascinating, fundamental questions about the universe. With the survey, astronomers will be able to see the large-scale patterns of galaxies: sheets and voids through the whole universe. Scientists have many ideas about how the universe evolved, and different patterns of large-scale structure point to different theories. The Sloan Digital Sky Survey will tell us which theories are right - or whether we will have to come up with entirely new ideas.

Mapping the Universe

Making maps is an activity central to the step-by-step advance of human knowledge. The last decade has seen an explosion in the scale and diversity of the mapmaking enterprise, with fields as disparate as genetics, oceanography, neuroscience, and surface physics applying the power of computers to recording and understanding enormous and complex new territories. The ability to record and digest immense quantities of data in a timely way is changing the face of science. The Sloan Digital Sky Survey will bring this modern practice of comprehensive mapping to cosmography, the science of mapping and understanding the universe. The SDSS will make the largest map in human history. It will give us a three-dimensional picture of the universe through a volume one hundred times larger than that explored to date. The SDSS will also record the distances to 100,000 quasars, the most distant objects known, giving us an unprecedented hint at the distribution of matter to the edge of the visible universe. The SDSS is the first large-area survey to use electronic light detectors, so the images it produces will be substantially more sensitive and accurate than earlier surveys, which relied on photographic plates. The results of the SDSS are electronically available to the scientific community and the general public, both as images and as precise catalogs of all objects discovered. By the end of the survey, the total quantity of information produced, about 15 terabytes (trillion bytes), will rival the information content in all the books of the Library of Congress. By systematically and sensitively observing a large fraction of the sky, the SDSS will have a significant impact on astronomical studies as diverse as the large-scale structure of the universe, the origin and evolution of galaxies, the relation between dark and luminous matter, the structure of our own Milky Way, and the properties and distribution of the dust from which stars like our sun were created. The SDSS will be a new reference point, a field guide to the universe that will be used by scientists for decades to come.

The Science of the SDSS

The universe today is filled with sheets of galaxies that curve through mostly empty space. Like soap bubbles in a sink, they form into dense filaments with voids between. Our best model for how the universe began, the Big Bang, gives us a picture of a universe filled with a hot, uniform soup of fundamental particles. Somehow, between the time the universe began and today, gravity has pulled together the matter into regions of high density, leaving behind voids.
What triggered this change from uniformity to structure? Understanding the origin of the structure we see in the universe today is a crucial part of reconstructing our cosmic history. Understanding the arrangement of matter in the universe is made more difficult because the luminous stars and galaxies that we see are only a small part of the total. More than 90% of the matter in the universe does not give off light. The nature, amount and distribution of this "dark matter" are among the most important questions in astrophysics. How has the gravity from dark matter influenced the visible structures? Put another way: by carefully mapping the positions and motions of galaxies, we can reconstruct the distribution of mass, and from that find clues about the dark matter.

A Map of the Universe

One of the difficulties in studying the entire universe is getting enough information to make a picture. Astronomers designed the Sloan Digital Sky Survey to address this problem in a direct and ambitious way: the SDSS gathers a body of data large enough and accurate enough to address a broad range of astronomical questions. The SDSS will obtain high-resolution pictures of one quarter of the entire sky in five different colors. From these pictures, advanced image processing software will measure the shape, brightness, and color of hundreds of millions of astronomical objects including stars, galaxies, quasars (compact but very bright objects thought to be powered by material falling into giant black holes), and an array of other celestial exotica. Selected galaxies, quasars, and stars will be observed using an instrument called a spectrograph to determine accurate distances to a million galaxies and 100,000 quasars, and to provide a wealth of information about the individual objects. These data will give the astronomical community one of the things it needs most: a comprehensive catalog of the constituents of a representative part of the universe. SDSS's map will reveal how big the largest structures in our universe are, and what they look like. It will help us understand the mechanisms that converted a uniform "primordial soup" into a frothy network of galaxies.

An Intergalactic Census

The U.S. Census Bureau collects statistical information about how many people live in the U.S., where they live, their races, their family incomes, and other characteristics. The Census becomes a primary source of information for people trying to understand the nation. The Sloan Digital Sky Survey will conduct a sort of celestial census, gathering information about how many galaxies and quasars the universe contains, how they are distributed, their individual properties, and how bright they are. Astronomers will use this information to study questions such as why flat spiral galaxies are found in less dense regions of the universe than football-shaped elliptical galaxies, or how quasars have changed during the history of the universe. The SDSS will also collect information about the Milky Way galaxy and even about our own solar system. The wide net cast by the SDSS telescope will sweep up as many stars as galaxies, and as many asteroids in our solar system as quasars in the universe. Knowledge of these objects will help us learn how stars are distributed in our galaxy, and where asteroids fit into the history of our solar system.

Needles in a Haystack, Lighthouses in the Fog

Rare objects, almost by definition, are scientifically interesting.
By sifting through the several hundred million objects recorded by the SDSS, scientists will be able to construct entire catalogs of the most distant quasars, the rarest stars, and the most unusual galaxies. The most unusual objects in the catalog will be about a hundred times rarer than the rarest objects now known. For example, stars with a chemical composition very low in metals like iron are the oldest in the Milky Way. They can therefore tell us about the formation of our galaxy. However, such stars are also extremely rare, and only a wide-field deep sky survey can find enough of them to form a coherent picture. Because they are so far away, quasars can serve as probes for intergalactic matter throughout the visible universe. In particular, astronomers can identify and study galaxies by the way they block certain wavelengths of light emitted by quasars behind them. Using the light from quasars, the SDSS will detect tens of thousands of galaxies in the initial stages of formation. These galaxies are typically too faint and too diffuse for their own light to be detected by even the largest of telescopes. Quasar probes will also allow scientists to study the evolution of the chemistry of the universe throughout its history.

The Telescope as a Time Machine

Peering into the universe with a telescope allows us to look not only out into space, but also back in time. Imagine intelligent beings in a planetary system around a star 20 light years away. Suppose these beings pick up a stray television transmission from Earth. They would see events 20 years after they occurred on Earth: for instance, a newscast covering Ronald Reagan's re-election (1984) would be seen 20 years later (2004). While today we have seen three new presidents, the beings would still see Reagan. Light travels extremely fast, but the universe is a very big place. In fact, astronomers routinely look at quasars so far away that it takes billions of years for the light they produce to reach us. When we look at galaxies or quasars that are billions of light-years away, we are seeing them as they were billions of years ago. By looking at galaxies and quasars at different distances, astronomers can see how their properties change with time. The SDSS will measure the distribution of nearby galaxies, allowing astronomers to compare them with more distant galaxies now being seen by new instruments like the Hubble Space Telescope and the Keck Telescope. Because quasars are very bright, the SDSS will allow astronomers to study their evolution through more than 90 percent of the history of the universe.

Measuring Distance and Time: Redshift

The universe is expanding like a loaf of raisin bread rising in an oven. Pick any raisin, and imagine that it's our own Milky Way galaxy. If you place yourself on that raisin, then no matter how you look at the loaf, as the bread rises, all the other raisins move away from you. The farther away another raisin is from you, the faster it moves away. In the same way, all the other galaxies are moving away from ours as the universe expands. And because the universe is uniformly expanding, the farther a galaxy is from Earth, the faster it is receding from us. The light coming to us from these distant objects is shifted toward the red end of the electromagnetic spectrum, in much the same way the sound of a train whistle changes as a train leaves or approaches a station. The faster a distant object is moving away, the more it is redshifted.
Astronomers measure the amount of redshift in the spectrum of a galaxy to figure out how far away it is from us. By measuring the redshifts of a million galaxies, the Sloan Digital Sky Survey will provide a three-dimensional picture of our local neighborhood of the universe.
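The last step, turning a measured redshift into a distance, is simple for nearby objects; a minimal sketch, using the small-redshift approximation v ≈ cz and an assumed round value of 70 km/s/Mpc for the Hubble constant:

    # Rough distance from redshift for nearby galaxies (not the SDSS pipeline):
    # for small z the recession velocity is v ~ c*z, and Hubble's law gives d = v / H0.
    C_KM_S = 299_792.458      # speed of light, km/s
    H0 = 70.0                 # assumed Hubble constant, km/s per megaparsec

    def distance_mpc(z: float) -> float:
        """Approximate distance in megaparsecs, valid only for z << 1."""
        return C_KM_S * z / H0

    print(distance_mpc(0.01))  # ~43 Mpc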
<urn:uuid:ac4d8509-6a78-44cf-86b2-6b181c71899c>
3.78125
2,182
Knowledge Article
Science & Tech.
37.686431
Did you know that not all mutations happen at an equal rate? There are several kinds of mutations: substitutions, insertions, deletions, etc. Insertions and deletions happen when bits of DNA are either inserted or deleted, whereas substitutions happen when the overall length of the DNA locus doesn't change, but one base is substituted for another. As you all know, we have 4 nucleotides (A, C, G, and T); however, not all possible changes are equally likely. The most frequent substitutions are As with Gs and Cs with Ts. Mutations happen because of errors in DNA replication or because of DNA lesions. These are chemical processes that are more or less likely depending on the circumstances. For example, DNA is "stronger" when it's a double helix, although occasionally the bonds between the two helices can locally denature, opening up a chance for a mutation to happen. In all nucleated cells DNA is packaged inside the nucleus in units called nucleosomes: threads of DNA (~147 base pairs) wrap around "spools" formed by 8 protein units called histones. When the DNA is packed into nucleosomes it is more resistant and less prone to mutations. At the same time, chromatin, the assembly of all nucleosomes inside the nucleus, is hardly ever static. See this post where I discuss how nucleosomes are reassembled in order to promote the expression of certain genes versus others (a phenomenon called "chromatin remodeling"). A new study published in the latest issue of Science investigates how the structure and assembly of DNA inside the cell affects the likelihood of certain mutations versus others. The researchers found that nucleosomes act as regulators for substitution mutations, protecting DNA from damage. For example, compared to other DNA states, nucleosomal DNA undergoes 50% fewer C -> T mutations.

"Furthermore, the rates of G -> T and A -> T mutations were also about two-fold suppressed by nucleosomes. On the basis of these results, we conclude that nucleosome-dependent mutation spectra affect eukaryotic genome structure and evolution and may have implications for understanding the origin of mutations in cancers and in induced pluripotent stem cells."

Without getting into too many technical details, Chen et al. looked at the initial nucleosome profile from two replicates of the yeast Saccharomyces cerevisiae strain Y55, and then tracked subsequent mutations. They also looked at SNPs (single-nucleotide polymorphisms) in the germline of the Japanese killifish medaka. Germline cells are cells that give rise to oocytes and spermatocytes, hence mutations in this line are of evolutionary importance since they get carried on to subsequent generations.

"We have revealed that nucleosomes, the most abundant eukaryotic protein-DNA complexes, likely function as a major regulator of substitution mutations in eukaryotes. Binding of proteins to DNA to suppress DNA breathing or to exclude endogenous mutagens may be how cells protect their DNA. However, DNA repair, which often works with varied efficiency between nucleosomal DNA and naked DNA, may also shape the base-specific mutation spectrum."

Chen, X., Chen, Z., Chen, H., Su, Z., Yang, J., Lin, F., Shi, S., & He, X. (2012). Nucleosomes Suppress Spontaneous Mutations Base-Specifically in Eukaryotes. Science, 335 (6073), 1235-1238. DOI: 10.1126/science.1217580

Photo: light reflections (or is it refractions?) on a soap bubble. Shutter speed 1/125, focal length 100mm, F-stop f5, ISO speed 100.
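For readers who like code, the substitution classes mentioned above (A<->G and C<->T swaps being the frequent "transitions", everything else a "transversion") can be classified in a few lines; a sketch, with made-up example calls:

    # A<->G and C<->T swaps stay within purines or within pyrimidines (transitions);
    # everything else crosses the two classes (transversions).
    PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

    def substitution_class(ref: str, alt: str) -> str:
        assert ref != alt, "a substitution needs two different bases"
        if {ref, alt} <= PURINES or {ref, alt} <= PYRIMIDINES:
            return "transition"
        return "transversion"

    print(substitution_class("C", "T"))  # transition
    print(substitution_class("C", "G"))  # transversion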
<urn:uuid:303552b0-50f8-4942-8fd0-ae929b010402>
3.734375
774
Personal Blog
Science & Tech.
42.632938
Song sparrows listening to the songs of other sparrows can figure out which bird was the intruder in a territorial dispute. The research has shown that song sparrows can distinguish an aggressor from an "innocent" bird that has had its territory invaded. Scientists at the University of Washington in Seattle, US, used recorded calls to stage territorial disputes between two birds. They played the songbird squabble so that neighbouring sparrows were able to hear it and studied the birds' reactions. After hearing this "dispute", the sparrows reacted aggressively only when they heard the broadcast calls of the intruding bird. When the victim's song was played the birds did not react. "This [was] not simply increased aggression to any call they overheard recently in an aggressive situation," explained graduate student Caglar Akcay. "They seem to be able to infer that the victim is [not at] fault." The results indicated that, although the birds react defensively to protect their own territories from intruders, they co-operate peacefully with non-aggressive neighbours.
<urn:uuid:2d870a46-14c5-4946-8e6c-0d0fffdc8b7f>
3.765625
229
Personal Blog
Science & Tech.
43.842778
Creates a new module. If module-name is specified in its unqualified form, the module will be created in the schema which has the same name as the current ident. If module-name is specified in its fully qualified form (i.e. schema-name.module-name) the module will be created in the named schema (in this case, the current ident must be the creator of the specified schema). A module is simply a convenient enclosure for the collection of one or more routines that are declared as belonging to the module when it is created.

For function-definition, see CREATE FUNCTION. For procedure-definition, see CREATE PROCEDURE.

Two modules with the same name cannot belong to the same schema. All the functions and procedures declared as belonging to the module must be created in the same schema as the module. Two functions with the same name cannot belong to the same schema. Two procedures with the same name cannot belong to the same schema. It is not possible to create a synonym for a module name. The names of the functions and procedures declared as belonging to the module are qualified by using the name of the schema to which they belong, not the name of the module.

Example:

@
CREATE MODULE M1
   DECLARE PROCEDURE PROC_1()
      READS SQL DATA
      BEGIN ... END;
   DECLARE PROCEDURE PROC_2(IN X INTEGER)
      MODIFIES SQL DATA
      BEGIN ... END;
   DECLARE FUNCTION FUNC_1() RETURNS INTEGER
      READS SQL DATA
      BEGIN ... END;
END MODULE
@

For more information, see the Mimer SQL User's Manual, chapter 9, Creating Modules, Functions, Procedures and Triggers.

Standards compliance: SQL/PSM - fully compliant.
<urn:uuid:4a54e1fd-80b6-4bbf-96e8-c9df29c9b73f>
3.03125
402
Documentation
Software Dev.
54.363413
Fish community data were collected by Missouri Department of Conservation staff from 180 sites throughout the basin during 1995 - 1997 (Table Bc01). Fish were collected using a seine 15 or 25 feet long with 1/8" mesh. Kick seine methods were used to sample riffles. A boat-mounted electrofishing unit was used where possible to sample deep pools. Large fish were identified on site and returned to the water. Small fish were preserved and later identified in the lab. Data collected prior to 1995 were obtained from the Missouri Department of Conservation fish database. A total of 80 species from 16 families has been collected in the Salt River basin. Sixty-four species and one Lepomis hybrid were found in recent surveys. From a basinwide perspective, the community includes fishes representative of the Prairie, Lowland, Ozark, and Big River faunal regions. Of recently collected species, one-third are wide-ranging, 13% are Big River species, 25% are Prairie species, 31% are Ozark species, and 9% are representative of the Lowlands (Pflieger 1971). Several species are often associated with two faunal regions, so the sum of these percentages exceeds 100%. The dominant fish families were the minnows (17 species), perches (10 species), suckers (9 species), sunfishes (9 species) and catfishes (8 species). The most common and abundant species collected in recent surveys were the bluntnose minnow (Pimephales notatus) and red shiner (Cyprinella lutrensis). Bluntnose minnows comprised 13 to 24% of the total fish sample in each of four main sub-basins (lower Salt, North Fork, Middle Fork, South Fork) and occurred at 85% of all sites. Red shiners comprised 14 to 41% of the total sample in each sub-basin and were found at 70% of all sites. Both species are tolerant of the high turbidity and siltation that persist throughout much of the basin. Other commonly occurring species (found in at least 60% of all sites) include the following: johnny darter (Etheostoma nigrum), creek chub (Semotilus atromaculatus), redfin shiner (Lythrurus umbratilis), and green sunfish (Lepomis cyanellus). Sportfish (18 species that provide angling opportunity) comprised 6% of all fish collected in basin streams. These fishes were under-represented numerically because larger adults were not fully vulnerable to our sampling gear. Green sunfish were the most abundant species in this group and were found at 68% of all sites. Channel catfish (Ictalurus punctatus), probably the most popular game species outside of Mark Twain Lake, occurred at 12% of all sites, but accounted for less than 1% of the total fish collected. Largemouth bass (Micropterus salmoides) and bluegill (Lepomis macrochirus) were collected at 37 and 38% of all sample locations, respectively. Sixteen species found in the basin prior to 1995 and not found in recent surveys include the following: lake sturgeon (Acipenser fulvescens), which were stocked in Mark Twain Lake and last collected in 1986, mooneye (Hiodon tergisus) and goldeye (H.
alosoides) last collected in 1957, threadfin shad (Dorosoma petenense), which were stocked in Mark Twain Lake and last collected in 1989, highfin carpsucker (Carpiodes velifer) last collected in 1983, spotted sucker (Minytrema melanops) last collected in 1986, black redhorse (Moxostoma duquesnei) last collected in 1978, goldfish (Carassius auratus) last collected in 1957, hornyhead chub (Nocomis biguttatus) last collected in 1941, silver chub (Macrhybopsis storeriana) last collected in 1983, pallid shiner (Notropis amnis) and river shiner (N. blennius) last collected in 1941, spotfin shiner (Cyprinella spiloptera) and striped shiner (Luxilus chrysocephalus) last collected in 1983, Mississippi silvery minnow (Hybognathus nuchalis) last found in 1957, and freckled madtom (Noturus flavus) last collected in 1978. Striped shiners, pallid shiners, hornyhead chubs, and Mississippi silvery minnows have likely been extirpated from the basin. Similar declines of these species have occurred in other northeast Missouri streams. Reasons for the declines are not well understood; however, these species prefer clear water and are intolerant of turbidity and siltation (Pflieger 1997). The only species collected in recent surveys that were not found prior to 1995 were paddlefish (Polyodon spathula) and walleye (Stizostedion vitreum). Although not abundant, both species have been long-time inhabitants of the basin due to its connection with the Mississippi River, but apparently avoided sampling gear during early surveys. The lower Salt River sub-basin, which had the fewest sample sites, yielded the most species (58), followed by the Middle Fork (48), South Fork (43), and North Fork (41). We also found a higher average number of species per site in the lower Salt sub-basin than in other sub-basins. Thirty-three species were collected from one site in the lower Salt River just below the re-regulation dam. This sub-basin, in which streams typically have higher gradients and largely gravel substrates, had proportionately more species associated with the Ozark and Big River faunal regions than the other sub-basins. The North Fork, Middle Fork, and South Fork sub-basins were generally dominated by more tolerant, Wide-Ranging species, although Ozarkian species were also common.

Threatened and Endangered Species

Of the species collected in the basin since 1995, two (paddlefish and ghost shiner, N. buchanani) are currently on the state watch list. None are listed as rare or endangered at the state or federal level. Although not found in the basin recently, lake sturgeon, which are state endangered, are likely to occur periodically in the lower Salt River due to past stockings in Mark Twain Lake and restoration efforts in the Mississippi River. Several fish species have been stocked in basin streams and lakes. Spotted bass (Micropterus punctulatus) were stocked in basin streams in 1961 in an attempt to provide an additional sportfish (Fajen 1975). Survival of these fish was very low. Other species have been stocked into Mark Twain Lake to improve the lake fishery. Walleye were stocked annually from 1984 to 1996. Survival of these walleye has been low. Adult walleye currently utilize gravel shoals in streams above Mark Twain Lake each spring for spawning. However, spawning success and survival of the hatch is apparently low. Small and advanced walleye fingerlings were stocked in several basin streams during 1999 and 2002, and the success of these stockings is under evaluation.
Threadfin shad were stocked in Mark Twain Lake during 1986 and 1989 to provide an additional forage species for sportfish in the lake. Survival and reproduction of this species was determined to be insufficient to benefit sportfish survival and growth in the lake, so stocking was discontinued. Blue catfish (Ictalurus furcatus) were also stocked in Mark Twain Lake, first in 1984 and later in 1992. Lake sturgeon (Acipenser fulvescens) were stocked in Mark Twain Lake in 1986 and 2001 as part of Missouri's reintroduction program. Fishes stocked in rearing ponds within the basin of Mark Twain Lake prior to impoundment include largemouth bass (M. salmoides), bluegill, channel catfish (I. punctatus), black crappie (Pomoxis nigromaculatus), orangespotted sunfish (L. humilis), gizzard shad (D. cepedianum), and fathead minnows (Pimephales promelas). Benthic macroinvertebrate surveys in the basin have been conducted by the Missouri Department of Conservation (Duchrow 1974), Hazelwood (1974-1981), Missouri Botanical Gardens (Klein and Daley 1974), Gass (1979), and Environmental Science and Engineering, Inc. (Govro 1984). The first four studies documented the presence of 298 benthic macroinvertebrate taxa. The most recent survey (Govro 1984) reported 96 taxa. Duchrow (1974) reported that the communities in the Salt River basin were dominated by silt-tolerant forms due to heavy siltation and turbidity from agricultural practices that have degraded the habitat to the point that communities characteristic of undisturbed streams cannot be supported. Govro (1984) also reported twenty mussel species in the upper Salt basin. The most abundant were three-ridge (Amblema plicata) and fat mucket (Lampsilis radiata luteola). Only fossil shells of Quadrula pustulosa, Elliptio dilatata, Strophitus undulatus, Lampsilis teres, Ligumia subrostrata, and Obliquaria reflexa were collected. The Salt River was once one of two Missouri streams where the state endangered warty-back (Quadrula nodulata) occurred. However, it was likely extirpated from the basin following inundation of Mark Twain Lake. The Missouri Department of Conservation mussel database lists 43 species in streams of the basin (Table Bc02). Five crayfish species are known to inhabit basin streams or grasslands (B. DiStifano, Missouri Department of Conservation, personal communication). These species are as follows:
- Golden crayfish (Orconectes luteus) - common Missouri crayfish
- Northern crayfish (Orconectes virilis) - most widely distributed of Missouri crayfish
- Papershell crayfish (Orconectes immunis) - common to Prairie and Big River faunal regions
- Devil crayfish (Cambarus diogenes) - burrowing species common in northern Missouri
- Grassland crayfish (Procambarus gracilis) - burrowing species inhabiting grasslands, often away from water
<urn:uuid:7ccd36c3-860e-4008-9e63-e8d9327d5e6a>
2.859375
2,209
Academic Writing
Science & Tech.
33.404048
As a physics student, I often find when doing blackboard problems that the lecturer will struggle to find a good name for a variable, e.g. "Oh, I cannot use B for this matrix, that's the magnetic field". Even ignoring the many letters used for common physical concepts, it seems most of the usual Greek and Latin letters already have connotations that would make their usage for other purposes potentially confusing; for instance, one would associate $p$ and $q$ with integer arguments, $i,j,k$ with indices or quaternions, $\delta$ and $\varepsilon$ with small values, $w$ with complex numbers, and $A$ and $B$ with matrices, and so forth. It then seems strange to me that there's been no effort to introduce additional alphabets into mathematics; two obvious ones, for their visual clarity, would be Norse runes or Japanese katakana. The only example I can think of offhand of a non-Greek, non-Latin character that has mainstream acceptance in mathematics would be the Hebrew character aleph ($\aleph$), though perhaps there are more. My question, then, is: have there been any strong mainstream efforts, perhaps through using them in books, or from directly advocating them in lectures or articles, to introduce characters from other alphabets into mathematics? If there have been, why have they failed, and if there haven't been, why is it generally seen as unnecessary? Thank you, and sorry if this isn't an appropriate question for math.stackexchange.com; reading the FAQ made it appear as if questions of this type were right on the borderline of acceptability.
<urn:uuid:10b1a6c1-0857-4a18-9d45-371766c467e6>
2.75
350
Q&A Forum
Science & Tech.
41.99617
The arithmetic mean, also called the average or average value, is the quantity obtained by summing two or more numbers or variables and then dividing by the number of numbers or variables. The arithmetic mean is important in statistics. When there are only two quantities involved, the arithmetic mean is obtained simply by adding the quantities and dividing by 2. In these cases, the operation is sometimes symbolized by a double colon (::) between the two quantities to be averaged. For example:

3 :: 11 = 7
-10 :: +4 = -3

The determination of the average of a large number of quantities is a tedious task; computers are commonly used to calculate these values. The arithmetic mean of a continuous function over a defined interval is determined by first calculating the definite integral over the interval, and then dividing this quantity by the width of the interval. Also see Mathematical Symbols.
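Both definitions are easy to try out numerically; a sketch, where f = sin on the interval [0, π] is an arbitrary example function:

    import numpy as np

    # Discrete case: sum the quantities and divide by how many there are
    values = [3, 11]
    mean = sum(values) / len(values)           # 7.0

    # Continuous case: average of f over [a, b] is (1/(b-a)) * definite integral of f
    a, b = 0.0, np.pi
    x = np.linspace(a, b, 10_001)
    avg_f = np.trapz(np.sin(x), x) / (b - a)   # ~ 2/pi for f = sin
    print(mean, avg_f)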
<urn:uuid:a4d80f08-e7da-433c-a21f-0240f00f0b68>
4.1875
180
Knowledge Article
Science & Tech.
28.405714
As humans explore the universe, the record for the largest asteroid visited by a spacecraft has increased yet again. Earlier this month, ESA's robotic Rosetta spacecraft zipped past the asteroid 21 Lutetia, taking data and snapping images in an effort to better determine the asteroid's history and origins. Although of unknown composition, Lutetia is not massive enough for gravity to pull it into a sphere. Pictured above on the upper right, the 100-kilometer-wide Lutetia is shown in comparison with the other nine asteroids and four comets that have been visited, so far, by human-launched spacecraft. Orbiting in the main asteroid belt, Lutetia shows itself to be a heavily cratered remnant of the early Solar System. The Rosetta spacecraft is now continuing on to comet Churyumov-Gerasimenko, where a landing is planned for 2014. Montage: Emily Lakdawalla
<urn:uuid:536ecd23-2533-47c6-8ac1-82152d6040c2>
3.4375
207
Truncated
Science & Tech.
22.511817
This is a webquest for students to complete while browsing the CAMEL website. NOTE: This is intended for students in an upper-level environmental science class who already have some background knowledge of global warming. That being said, it can easily be adapted for other levels. Students will gain further understanding of the effects of global warming and the political landscape.

ACTIVITY DESCRIPTION AND TEACHING MATERIALS
- View/Download Attached File >> Handout for students
- Complete the following webquest to learn more about climate change!
- This will be collected at the end of class.
- Go to www.camelclimatechange.org

Go to the "Causes" section on the left side navigation panel.
- How has the climate on earth changed throughout history? Explain, citing your sources in the space provided below.
- What evidence is there for this change? How do we know that climate has fluctuated?
- What do models tell us about the future?

Navigate to the "Consequences" section.
- Comment on the economic effects.
- How are ecosystems being disturbed? Give specific examples (think case studies).

Go to the "Solutions" section.
- Comment on some economic solutions available.
- What are some "Local" policies that have been effective (they do not have to be local for you, just done at the local level)? Would they work here? Why or why not?

Navigate to "Actions" > "Individual" > "Activities" > "Education" > "Misconceptions". Then go to Articles and read the article on "Climate Change Skepticism" (posted by Sydney Draggan).
- Using specifics from THIS article, why do you think people are skeptical of science?
- What is the "Snowball Effect"? Why does this all matter?

When you are all finished, please take the visitor survey!

TEACHING NOTES / CONTEXT FOR USE
This was intended for an upper-level environmental science class. Students will be graded based on their responses to the attached assignment.
<urn:uuid:431f2e0b-0d03-4852-b262-2a1d6e11be02>
4
463
Tutorial
Science & Tech.
44.619
When looking at satellite photos of the earth taken at night, it is easy to see urban areas as bright spots in a sea of black. Rural areas barely even register on these night shots. The reason for this is mainly street and building lighting, and light pollution on a grand scale can be seen around every major urban center in the US, especially the New York Metropolitan and Greater Los Angeles areas, home to over 13 million people jointly. But what is that bright spot in North Dakota, a state with a population of barely 700,000? Urban light pollution is one thing, but the bright spot in ND isn't from an urban center. Rather, over the last few years, natural gas drilling companies have flooded into the area, about 150 at the moment. These companies are drilling, sometimes up to eight new hydraulic fracturing wells per day, and producing 660,000 barrels of natural gas daily. Of course, these drill rigs and wells require infrastructure, which would include lighting. The bright spot of light pollution in North Dakota isn't from electric lights, but from natural gas flaring, which burns off "excess" natural gas. Estimates put natural gas flaring at about 33% of what's coming out of the ground, which means that about 330,000 barrels per day are simply burned off into the atmosphere. North Dakota is literally on fire, releasing [if my math is correct] 119 million tons of carbon dioxide pollution into the atmosphere every day, and suddenly, that bright spot isn't so bright any more.* Currently, ND environmental laws allow flaring of waste natural gas, but I'm thinking that maybe they should take another look at what it's doing to the atmosphere. True, natural gas emits less carbon dioxide than gasoline, but what's the point if so much of it gets flared off? *The entire transportation sector of the United States, including planes, trains, and automobiles, emits just 1.99 billion tons of carbon dioxide into the atmosphere annually, the same amount that the North Dakota Bakken Formation emits in just two weeks. Someone was kind enough to request verification [read: challenge] of the actual location of these natural gas drilling operations. Here is a link from the original story on NPR, as well as the satellite map showing the location, just to clarify things a little.
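For anyone who wants to redo the hedged math, here is a back-of-the-envelope sketch; every conversion factor is an assumption (including reading the "barrels" figure as barrels of oil equivalent), and with these numbers the flare total comes out near a hundred thousand tonnes per day (roughly 40 million tonnes a year), far below the 119-million-tons-per-day figure quoted above:

    # Back-of-the-envelope check of the flaring figure; all factors are assumptions.
    BOE_TO_SCF = 5_800          # assumed: cubic feet of gas per barrel of oil equivalent
    KG_CO2_PER_SCF = 0.0545     # assumed: kg of CO2 from burning one cubic foot of gas

    flared_boe_per_day = 330_000
    tonnes_per_day = flared_boe_per_day * BOE_TO_SCF * KG_CO2_PER_SCF / 1_000
    print(f"{tonnes_per_day:,.0f} tonnes CO2 per day")   # ~1e5 tonnes/day, not 1e8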
<urn:uuid:3bfb070c-94a4-49fb-a910-b23ef2d5e104>
3.203125
488
Personal Blog
Science & Tech.
48.92536
December 02, 2005 In a paper which was featured on the cover of the July 28, 2005 issue of Nature, an international group of researchers reported the first observation of geologically produced anti-neutrinos. The observation is giving scientists new insight into the interior of our planet. While the "geo-neutrinos" were detected at the KamLAND facility in Japan, most of the data was stored on High Performance Storage System (HPSS) at the U.S. Department of Energy's National Energy Research Scientific Computing Center (NERSC) and analyzed using the PDSF cluster at NERSC. Together, these systems allowed scientists to find the scientific equivalent of a needle in a very large haystack. KamLAND records data 24 hours a day, seven days a week. These data are shipped on tapes from the experimental site to LBNL, where it is read off the tapes and stored in the HPSS at NERSC. KamLAND records about 200 GB of data each day and HPSS currently has more than 250 TB of KamLAND data stored, making KamLAND the second-largest user of NERSC's HPSS system. The KamLAND experiment, located in a mine in Japan, is a 1 kiloton liquid scintillator detector that was built to study anti-neutrinos coming from Japanese nuclear reactors, which are about 200 km from the detector. KamLAND is the first reactor experiment that observed the disappearance of electron anti-neutrinos from the reactor to the detector. Last year, the experiment also showed that the energy spectrum has a distortion typical of neutrino oscillation and measured the so-called mass-splitting, a key parameter in neutrino oscillation. During dedicated production periods at NERSC, the KamLAND data are read out of HPSS and run through the reconstruction software to convert the waveforms (essentially oscilloscope traces) of about 2,000 photo-multiplier tubes (PMTs) to physically meaningful quantities such as energy and position of the event inside the detector. This reduces the data volume by a factor of 60-100 and the reconstructed events are stored on disk for further analysis. "The event reconstruction requires a lot of computing power, and with over 600 CPUs, PDSF is a great facility to run these kinds of analysis," said Patrick Decowski, an LBNL physicist who works with NERSC staff on the project. "PDSF has been essential for our measurements." With the data on disk, specialized analysis programs run over the reconstructed events to extract the geo-neutrinos and perform the final analysis. PDSF is also used for various simulation tasks in order to better understand the background signals in the detector. "The whole analysis is like looking for a needle in a haystack - out of more than 2 billion events, only 152 candidates were found," Decowski said. "And of these, 128 - plus or minus 13 - are background events." Despite the poor signal-to-background ratio of the early measurements, they are nonetheless exciting, since the data open up a completely new field on how to study the Earth's interior. Forty years ago, the late John Bahcall proposed the study of neutrinos coming from the sun to understand the fusion processes inside the sun. The measurement of a persistent deficit of the observed neutrino flux relative to Bahcall's calculations led to the 2002 Nobel Prize for Ray Davis and the discovery of neutrino oscillation. Today, anti-neutrinos are being used to study the interior of the Earth, which is still little known.
The deepest borehole ever drilled is less than 20 km in depth, while the radius of the Earth is more than 6000 km. While seismic events have been used to deduce the interior makeup of the Earth's three basic regions - the core, the mantle and the crust - there are no direct measurements of the chemical makeup of the deeper regions. An important quantity for understanding the Earth is the heat flux coming from within. Measurements show that the Earth produces somewhere between 30 and 45 TW of heat. Two important sources of heat generation are the primordial energy released from planetary accretion and latent heat from core solidification. However, it is believed that radiogenic heat (heat from radioactive decay) also plays an important role in the Earth's heat balance, contributing perhaps half of the total heat. Neutrinos can help in the understanding of the Earth's internal structure and heat generation. Three important isotopes that are part of current Earth models - potassium, uranium and thorium - produce electron anti-neutrinos in their radioactive decay. These neutrinos (so-called geo-neutrinos) interact with the surrounding Earth material only very weakly, and almost all of them reach the surface of the Earth. However, occasionally they do interact with normal matter, and by building a large device that can detect them, something can be learned about the abundance of these isotopes. This allows scientists to study part of the composition of the Earth and, most importantly, provide an estimate of the amount of heat produced through radioactive decay. The research is a multinational effort, as shown by the fact that the Nature article represented the work of 87 authors from 14 institutions spread across four nations. This is a reprint of an article originally published by Berkeley Lab Computing Sciences.
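A back-of-the-envelope on the counts quoted earlier ("out of more than 2 billion events, only 152 candidates... 128 plus or minus 13 are background"); a sketch using a naive Poisson error, not the collaboration's likelihood analysis:

    from math import sqrt

    # Illustrative arithmetic on the quoted counts only
    candidates = 152
    background, background_err = 128, 13

    signal = candidates - background            # ~24 geo-neutrino events
    # naive uncertainty: Poisson error on the candidates plus the quoted background error
    err = sqrt(candidates + background_err**2)
    print(f"{signal} +/- {err:.0f} events")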
<urn:uuid:0088ec1e-465d-4ced-83dd-0d7e8a7342ac>
3.03125
1,607
Content Listing
Science & Tech.
36.631551
Flood waters submerging much of St Louis were captured at their peak by the European Space Agency's ERS-1 satellite at the beginning of August. St Louis lies at the confluence of three rivers: the Missouri (bottom), the Mississippi (centre) and the Illinois. The image combines radar pictures from ERS-1 (radar is unaffected by cloud cover) with satellite photographs which show the lie of the land. Prolonged rain had swollen the three rivers, and flood waters at St Louis peaked at nearly 15 metres above normal. The waters are now subsiding, dropping in the early days by almost 40 centimetres a day. The floods washed away bridges and inundated farmland, and at their peak coastguard tugs were called in to retrieve a floating Burger King restaurant which had broken loose.
<urn:uuid:ceaa18e0-e941-46fa-b331-cb4661979285>
2.953125
193
Truncated
Science & Tech.
51.188158
Page from the new North Australian Fire Information website—see link below

Satellites have been providing people with images of bushfires from space for years, but the data was usually updated infrequently and used by government agencies or large companies. Now, advances in web and satellite technology mean fires can be monitored soon after they are detected—by anybody with a reasonable internet connection, writes Peter Jacklyn. One of the first websites to offer satellite-based views of fire to the public was developed by Western Australia's Department of Land Assessment in the late 1990s. It showed simple maps of 'hotspots'—the locations of suspected fires—as seen on each satellite pass. Hotspots were calculated from images provided by American weather satellites (National Oceanic and Atmospheric Administration, or NOAA). Although designed to measure things like cloud and sea temperatures, instruments on NOAA satellites could locate fires to within about four square kilometres. In the last few years, however, NASA has launched two satellites, Aqua and Terra, specifically designed to monitor the earth's surface. One of the detectors they carry is the Moderate Resolution Imaging Spectroradiometer (MODIS), which can be used to locate burning fires to within about a square kilometre. The new NASA satellites, together with the NOAA satellites, can now provide a few readings a day, making it possible to monitor fires in close to real time (cloud cover permitting). Recent advances in web technology now allow hotspots to be placed on interactive web-based maps, where users can zoom in, display detailed map features, and query hotspots for their time of detection. Last year CSIRO, the Defence Imagery and Geospatial Organisation and Geoscience Australia developed the Sentinel website, which uses MODIS data to display hotspots on an interactive map. The site is aimed largely at emergency services, which need to respond to fires quickly. The site shows hotspots detected at various times over the most recent three days. WA's Department of Land Information (DLI) also has a site that shows hotspots and fire histories from both MODIS and NOAA on an interactive map. The latest interactive fire mapping website is the North Australian Fire Information, or NAFI, site, developed by the Tropical Savannas CRC and Ecobyte Systems in collaboration with the Bush Fires Council NT, the Kimberley Regional Fire Management Project and the Cape York Peninsula Development Association. It is designed to meet the needs of northern rural and remote fire managers, and uses hotspot data supplied by Sentinel and WA DLI. Remote fire managers not only want to know where fires are burning now, but what areas have already been burnt, as recently burned areas can be used as fire breaks. The NAFI site allows users to see hotspots from all months of the year as well as recently burnt areas. The site also shows fire scars, which are hand-mapped from satellite images and put onto the site every few weeks. Users can also navigate to different map locations by clicking on text links rather than zooming in on images. Customised 'quicklooks' can also be created that deliver a compact image of fires in an area with one mouse click. Developers are also providing back-up fire information such as emails and faxes of hotspot locations.
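To give a flavour of the kind of hotspot filtering such sites do behind the scenes, here is a sketch in Python; the file name and the column names (latitude, longitude, acq_date, confidence) are assumptions about a generic hotspot CSV export, not NAFI's actual format:

    import csv
    from datetime import date, timedelta

    def recent_hotspots(path: str, days: int = 3, min_conf: int = 50):
        """Yield (lat, lon) of hotspots detected in the last `days` days."""
        cutoff = date.today() - timedelta(days=days)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # assumed columns: acq_date (ISO date), confidence (0-100)
                if (date.fromisoformat(row["acq_date"]) >= cutoff
                        and int(row["confidence"]) >= min_conf):
                    yield float(row["latitude"]), float(row["longitude"])

    # for lat, lon in recent_hotspots("hotspots.csv"): print(lat, lon)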
<urn:uuid:fc5a9730-a6f6-4895-a55a-a91098a7b17c>
3.53125
722
Knowledge Article
Science & Tech.
23.582596
Statistics and facts on global climate change by Isabel Wagner

In 2011, the Earth's surface temperature was around 0.51 degrees Celsius warmer than the 20th-century average. The global anomaly in surface temperature might be the cause of an increase in sea level, a decrease in Arctic ice and the growing number of weather-related catastrophes, including storms, floods and droughts. The economic loss due to the 2011 drought in the United States reached around eight billion U.S. dollars, making it the country's most costly drought in history. Between November 26 and December 7, 2012, Doha hosted the 18th session of the Conference of the Parties to the United Nations Framework Convention on Climate Change. The objective of the annual conference is to tackle climate change, stabilize greenhouse gas concentrations in the atmosphere, and reach a post-Kyoto Protocol agreement. The Kyoto Protocol was initially adopted in 1997, when global energy-related CO2 emissions stood at around 24.4 billion metric tons. Today, this figure is significantly higher: about 34.7 billion metric tons of carbon dioxide were emitted worldwide in 2011. China is currently the largest producer of CO2 emissions. In order to reduce the production of carbon dioxide, several countries have started issuing tradable green certificates. In 2012, the global carbon market is projected to reach a value of around 85 billion U.S. dollars. The increase in energy generation from renewable energy sources is seen as another way to cut down on carbon dioxide emissions. Photo: sxc.hu / barunpatro, iprole
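The growth in emissions between the two quoted figures is a one-line calculation; a sketch using only the numbers given above:

    kyoto_1997, total_2011 = 24.4, 34.7   # global energy-related CO2, billion metric tons
    growth = (total_2011 - kyoto_1997) / kyoto_1997
    print(f"{growth:.0%}")                # ~42% increase between 1997 and 2011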
<urn:uuid:642282d3-abad-4e87-9537-710ea7d3bfb7>
3.875
326
Knowledge Article
Science & Tech.
44.98718
By Laura Sanders, Science News Light has been put to work generating the same force that makes airplanes fly, a study appearing online December 5 in Nature Photonics shows. With the right design, a uniform stream of light has pushed tiny objects in much the same way that an airplane wing hoists a 747 off the ground. Researchers have known for a long time that blasting an object with light can push the object away. That’s the idea behind solar sails, which harness radiation for propulsion in space, for instance. “The ability of light to push on something is known,” says study coauthor Grover Swartzlander of the Rochester Institute of Technology in New York. Light’s new trick is fancier than a boring push: It created the more complicated force called lift, evident when a flow in one direction moves an object perpendicularly. Airfoils generate lift; as an engine propels a plane forward, its cambered wings cause it to rise. Lightfoils aren’t about to keep an Airbus aloft for the time it takes to fly from JFK to LAX. But arrays of the tiny devices might be used to power micromachines, transport tiny particles or even enable better steering methods on solar sails. Optical lift is “a really neat idea,” says physicist Miles Padgett of the University of Glasgow in Scotland, but it’s too early to say how the effect might be harnessed. “Maybe it’s useful, maybe it’s not. Time will tell.” That light can have this unexpected lift effect started with a very simple question, Swartzlander says: “If we have something in the shape of a wing and we shine light through it, what happens?” Modeling experiments told the researchers that an asymmetrical deflection of light would create a surprisingly stable lift force. “So we thought we’d better do an experiment,” Swartzlander says. “Because this just looks too pretty.” The researchers created tiny rods shaped kind of like airplane wings—flat on one side and rounded on the other. When these micron-sized lightfoils were immersed in water and hit with 130 milliwatts of light from the bottom of the chamber, they started to move up, as expected. But the rods also began moving to the side, a direction perpendicular to the incoming light. Tiny symmetrical spheres didn’t exhibit this lift effect, the team found. Optical lift is different from the aerodynamic lift created by an airfoil. A plane flies because air flowing faster under its wing exerts more pressure than air flowing above. But in a lightfoil, the lift is created inside the object as the beam shines through. The shape of the transparent lightfoil causes light to be refracted differently depending on where it goes through, which causes a corresponding bending of the beam’s momentum that creates lift. These lightfoils’ lift angles were about 60 degrees, the team found. “Most aerodynamic things take off at very gradual angles, but this has a very striking, very powerful lift angle,” Swartzlander says. “You can imagine what would happen if your airplane took off at 60 degrees—your stomach would be in your feet.” As the rods lift, they shouldn’t stall out, the paper predicts. “The subtlety is that it actually self-stabilizes,” Padgett says. “It twists a little bit one way, and you think, ‘Oh dear, it’ll stop working,’ then the light rotates it back again.” Swartzlander says he hopes to ultimately test the lightfoils in air, too, and try different shapes and materials with various refractive properties. 
In the study, the researchers used ultraviolet light to generate the lift, but other kinds of light would work just as well, Swartzlander says. “The beautiful thing about this is that it would work as long as you have light.”
<urn:uuid:65d97e92-4916-4d9b-83a5-1733f6096d98>
4.21875
865
Truncated
Science & Tech.
56.017484
Although the reaction wheel failure incapacitates the telescope, we are still finding new Earth-sized planets in the plethora of existing data. What is it like inside the airplane observatory? Title: A Paucity of Proto-Hot Jupiters on Super-Eccentric Orbits Authors: Rebekah I. Dawson, Ruth A. Murray-Clay, John Asher Johnson First Author’s Institution: Harvard-Smithsonian Center for Astrophysics Note: This post is based in part on a talk by Rebekah Dawson at the UC Berkeley Planet and Star Formation Seminar on Nov. 7, 2012. Introduction How do planets [...] In the sun, subsurface flows are 20-100 times slower than what is predicted in widely used theoretical models. What if type Ia supernovae are not all made the same way? For the first time, a study links type Ia supernova explosions to their parent systems, uncovering evidence for two different ways to produce these purportedly “standard” explosions. Kepler mission extended to 2016. Of Kepler’s 2,321 planet candidates, many are in the “habitable zone.” If Kepler does not obtain additional financial support, it will “close its eyes forever.” – Natalie Batalha. Don’t wish upon a shooting star; wish upon a shooting planet.
<urn:uuid:10889564-3914-4e41-afb1-3135732d23e4>
3.0625
289
Content Listing
Science & Tech.
46.383122
A Very Basic Introduction Why S. cerevisiae? S. cerevisiae is a choice organism to study. Most importantly, budding yeast exhibit many of the fundamental properties of cells in other eukaryotes such as human cells. Second, due to a number of powerful techniques and resources, these fundamental properties can be dissected rapidly and exactingly in S. cerevisiae. For example, the life cycle of budding yeast is simple yet includes diverse features such as meiosis and cell-cell signalling during mating. All cells in our body contain the same genetic information, yet they exhibit vastly different features. During the development of the human embryo, cells make specific decisions to adopt different fates. Often, these decisions are made by sensing signals within the cells and signals from the environment. Once processed, these signals frequently result in changes in gene expression and cell morphology. So too, yeast cells exhibit different fates and make very precise and intelligent decisions based on sensing a number of internal and external signals. •Internal Spatial Information Our lab studies internal spatial signals that affect both gene expression and cell morphology. In the first case, a spatial asymmetry permits only the mother yeast cell to transcribe the HO endonuclease gene. In the second case, an inherited landmark at the cell periphery positions the site of emergence of the future daughter cell. Both of these signals are derived from lineage-determined events occurring during budding yeast's asymmetric division cycle. Determining the exact origin of these signals and their mechanism of controlling future events is one of our goals. •External Spatial Information We are also interested in how yeast respond to external signals from the environment. In particular, yeast cells polarize their cytoskeleton in response to pheromones from yeast of the opposite mating type -- a process called shmooing. While it is clear that the pheromone activates a receptor at the cell surface and thus stimulates a cascade of signalling protein kinases, until recently little was known about how this signal cascade spatially orients the cytoskeleton. We wish to determine how these signalling events cause a dramatic and precise alteration in cell morphology. These events lead to the mating of two cells of the opposite mating type that are polarized toward each other. We hope to unravel the subsequent events, which may involve additional signalling, cell wall degradation, and membrane fusion, and which culminate in cell and nuclear fusion. •Cell Cycle Control The generation and usage of spatial information is tightly coupled with temporal events in the yeast cell cycle. For example, budding only occurs at the START transition, the point of commitment to S phase entry. The HO endonuclease gene is also only activated at START. URS2, a precisely defined region of the HO promoter, is responsible for restricting its expression to this phase of the cell cycle. We are identifying potential cell cycle-specific repressors of HO transcription that are required for regulating this event. We would also like to understand what events generate the initial decision to progress to START. Assessment of cell size and nutrient levels may precisely modulate the levels of cyclins, essential activators of CDC28, the central kinase controlling START. The events of the cell cycle can also be dramatically altered in diploids to include a second divisional event which leads to the generation of haploids.
This modification, which is unique to meiosis, requires a special set of genes. We are attempting to identify and understand all the specialized genes required for the meiotic cell cycle.
<urn:uuid:0ecbb543-76f0-43a3-9ce8-bb9306d39dc0>
3.46875
709
Academic Writing
Science & Tech.
27.930331
2.1. How many homes are served by geothermal power plants? The geothermal power production in the U.S. today provides enough electricity to meet the electricity needs of about 2.4 million California households. (1) This does not include contributions from geothermal heat pumps and direct heating uses. 2.2. How much geothermal electricity is currently supplied in the U.S.? In 2007, geothermal was the fourth largest source of renewable energy in the U.S. Today the U.S. has about 3,000 MW of geothermal electricity connected to the grid. (2) Geothermal energy generated 14,885 gigawatt-hours (GWh) of electricity in 2007, which accounted for 4% of renewable energy-based electricity consumption in the U.S. (including large hydropower). (3) The U.S. continues to produce more geothermal electricity than any other country, comprising approximately 30 percent of the world total. (4) In California, the state with the largest amount of geothermal power on line, electricity from geothermal resources accounted for 5 percent of the state’s electricity generation in 2003 on a per kilowatt-hour basis. (5) Geothermal is the largest non-hydro renewable energy source in the state, significantly exceeding the contribution of wind and solar combined. Figure 14: Renewable Energy Generation in California 1983-2006 2.3. Are geothermal projects currently being developed in the U.S.? Yes. As of August 2008, almost 4,000 MW of new geothermal power plant capacity was under development in the U.S. (this includes projects in the initial development phases). Those states with projects currently under consideration or development are: Alaska, Arizona, California, Colorado, Florida, Hawaii, Idaho, Nevada, New Mexico, Oregon, Utah, Washington, and Wyoming. Combined, these states have approximately 103 projects in development ranging from initial to advanced stages. (6) Direct use applications of geothermal energy occur today in 26 states—almost as many states as produce coal. (7) New direct use projects are encouraged by the provisions of the Geothermal Steam Act Amendments passed by Congress in 2005. There is interest in new direct use projects in numerous states and on various Indian reservations within several states. Geothermal heat pump installations have been growing at an annual rate of 15 percent, with more than 600,000 units installed in the U.S. by the end of 2005. Every year in the U.S., 50,000 to 60,000 new units are installed—the largest growth in the world for geothermal heat pumps. (8) 2.4. How much energy does geothermal provide worldwide? Geothermal energy supplies more than 10,000 MW to 24 countries worldwide and now produces enough electricity to meet the needs of 60 million people. (9) The Philippines, which generates 23% of its electricity from geothermal energy, is the world’s second biggest producer behind the U.S. (10) Geothermal energy has helped developing countries such as Indonesia, the Philippines, Guatemala, Costa Rica, and Mexico. Geothermal projects can provide a clean, local source of electricity for developing countries seeking energy and economic independence, even in remote locations, thus raising the quality of life. Iceland is widely considered the success story of the geothermal community. The country of just over 300,000 people is now fully powered by renewable forms of energy, with 17% of electricity and 87% of heating needs provided by geothermal energy (fossil fuels are still imported for fishing and transportation needs).
Iceland has been expanding its geothermal power production largely to meet growing industrial and commercial energy demand. In 2004, Iceland was reported to have generated 1465 gigawatt-hours (GWh) from geothermal resources; geothermal production is expected to reach 3000 GWh this year (2009). GEA’s May 2007 Interim Report: Update on World Geothermal Development named the countries producing geothermal electricity: - 21 Countries Generating Geothermal Power in 2000: Australia, China, Costa Rica, El Salvador, Ethiopia, France (Guadeloupe), Guatemala, Iceland, Indonesia, Italy, Japan, Kenya, Mexico, New Zealand, Nicaragua, Philippines, Portugal (Azores), Russia, Thailand, Turkey, United States - 3 Countries Adding Power Generation by 2005 (for a total of 24): Austria, Germany, Papua New Guinea - 22 Potential New Countries by 2010 (for potential total of 46): Armenia, Canada, Chile, Djibouti, Dominica, Greece, Honduras, Hungary, India, Iran, Korea, Nevis, Rwanda, Slovakia, Solomon Islands, St. Lucia, Switzerland, Taiwan, Tanzania, Uganda, Vietnam, Yemen Geothermal electricity generation is likely to expand. According to the International Geothermal Association (IGA) in IGA News 72 (April–June 2008), total global geothermal capacity is expected to rise to 11 GW by 2010. (11) See also section 3.5. In addition to large power generation, geothermal is also used for direct use purposes worldwide. In 2005, 72 countries reported using geothermal energy for direct heating, providing more than 16,000 MW of geothermal energy. Geothermal energy is used directly for a variety of purposes, including space heating, snow melting, aquaculture, greenhouse production, and more. (12) Next Page: Potential Use
<urn:uuid:3f66c4d2-781c-419d-8f5f-4108b687819d>
3.15625
1,128
Knowledge Article
Science & Tech.
40.548548
HOW BIG CAN AN EAGLE GET? Certain animals serve as a reminder that despite what some people like to think about humans being special, some predators view us as simply another meal. Sharks, crocodiles, and giant snakes are clearly in the top 10 of animals that under the right circumstances would view us as just another prey species. A Komodo dragon, the largest lizard in the world, in Indonesia and a pet Burmese python in a Florida residence each killed someone this year, presumably considering the people as potential prey. A scientific article in the September issue of the Journal of Vertebrate Paleontology by R. Paul Scofield of Canterbury Museum in New Zealand and Ken W. S. Ashwell of the University of New South Wales in Australia adds a new dimension to the potential list of human predators. Information collected during their study suggests that a giant eagle, now extinct but alive in New Zealand as recently as 500 years ago, may have preyed occasionally upon children and small adults. The scientific emphasis of the published study was on other aspects of the ecology and evolution of the eagles, but an aerial predator capable of swooping down and carrying someone off for a meal is a chilling thought. Information about Haast's eagle, as the predator is called, is based on skeletal material the scientists examined. As with modern birds of prey, including hawks, owls, and eagles, females typically got larger than males. A male Haast's eagle is estimated to have reached a body weight of 27 pounds. The females are believed to have weighed in at just over 39 pounds. Because the estimated weights of Haast's eagle are based on only a few specimens, it seems safe to say that the largest individuals probably exceeded these figures. The thought comes to mind that the Maoris, the original settlers of New Zealand, probably got cricks in their necks from keeping a close watch on the skies for incoming eagles. But the primary target of Haast's eagles was the flightless birds native to the islands--the moas. Ostriches are the largest birds on earth today. Moas were even larger; they were the biggest birds ever known to have lived on earth. Some of the species were over 10 feet tall and weighed more than 400 pounds. Haast's eagles were the only natural predator of the moas, which used their enormous legs to move around quickly, like modern-day ostriches and emus. One advantage of flight is the ability to escape ground predators. About 10 species of moas had evolved on the islands of New Zealand because, even without flight, they had no natural ground predators to fend off--except the eagles. Things would probably have persisted for centuries in equilibrium, with big eagles eating big flightless birds, if the Maoris had not arrived in New Zealand in the late 1200s. The Maoris could easily capture and kill the moas, which had never encountered such a relentless land predator and had never evolved the ability to fly. So by the time Columbus landed in America, all species of giant moas had been extinct for a century. The moas and Haast's eagles are both gone now, the former because of relentless hunting and the latter because its main prey base was driven to extinction. An aerial predator that can swoop down and carry off a small human is a staple of certain myths, fairy tales, and speculative fiction. But such a creature wasn't mythical; it was real. Knowing that a bird twice the size of a bald eagle once existed is an intriguing thought.
But the idea that one might swoop down and carry away your walking partner goes one step beyond what feels comfortable.
<urn:uuid:61c94f4b-1848-4faa-9a5d-618f109fa58a>
3.890625
770
Nonfiction Writing
Science & Tech.
48.230145
Buseck et al. have "rediscovered" an analytical method developed in 1947 by Dennis Gabor for defining the morphology of crystals using electron microscopy. Unfortunately, the Buseck et al. paper adds nothing to further the understanding of the issue of life on Mars. It demonstrates that these authors fail to understand the work of Thomas-Keprta et al., 2001, who used a transmission electron microscope to image individual microscopic particles at multiple angles and orientations. From this, the 3-D morphology of the particles could be reconstructed. The technique used by Buseck et al. also uses a transmission electron microscope to image microscopic particles at many different angles in order to reconstruct a 3-D image. These two techniques, for this particular application, are essentially identical. [Photo caption: In an effort to minimize contamination, sawing of samples in the Antarctic Meteorite Laboratory at JSC is done in a nitrogen cabinet without any type of lubrication.] The Buseck et al. PNAS paper is interesting in that they do not examine ALH84001's population of likely biogenically produced magnetite crystals or the reference MV-1 magnetite crystals; these magnetite populations are the central issues in the debate, and without studying either population, at best, their conclusions are irrelevant to the question of life on Mars. Buseck et al. describe the 3-D geometry of a magnetite crystal from "an undescribed, uncultured magnetotactic coccus collected from Sweet Springs Natural Reserve, Morro Bay, CA." It is unclear why they would describe just any magnetite from a previously undescribed strain and compare the geometry of one crystal with one of the best described terrestrial magnetite populations, that of strain MV-1. Buseck et al. are trying to compare a quick study of "oranges" (i.e., their undescribed magnetites) with a well-defined study of "apples" (magnetites from ALH84001 and MV-1). Buseck et al. are trying to lay claim to discovering an analytical method of defining the morphology of nanocrystals. Such work has been done for decades using conventional TEM techniques. We see no scientific basis for the Buseck et al. comment that "we argue that the existing crystallographic and morphological evidence is inadequate to support the inference of former life on Mars" when they have not examined the magnetites in ALH84001 or MV-1. [Photo caption: Magnetite crystals produced by terrestrial bacteria look very similar to crystals in ALH84001.] The statement at the conclusion of the Buseck et al. PNAS contribution is another example of researchers trying to use data to refute a scientific hypothesis when the data do not apply to the arguments. One must consider all the lines of evidence used to reach the conclusions. Furthermore, Buseck et al. state that three of the four lines of evidence proposed by McKay et al. 1996 have been refuted; that is incorrect. Gibson et al. 2001 (in Precambrian Research) show additional evidence that all the original lines of evidence of possible biogenic activity within ALH84001 and its carbonate globules are valid. Additional evidence for possible biogenic activity was also described for two younger Martian meteorites, Nakhla (1.3 billion years old) and Shergotty (165 million years old), in that same report (Gibson et al. 2001). The Buseck et al. paper appears to be little more than a poorly disguised advertisement for the technique of electron tomography, an attempt to capitalize on the intense debate surrounding the issue of life on Mars to gain publicity.
We have been looking forward to the scientific results from Buseck et al. and were very disappointed. Related Web Pages Evidence of Martian life dealt critical blow (NAI) Search for Past Life on Mars (McKay, Thomas-Keptra, et al.) Mars Meteorites (JPL)
<urn:uuid:daebc5dc-1b7d-4b51-80f8-279b717170d4>
2.84375
818
Knowledge Article
Science & Tech.
34.917953
Boost.Thread uses several configuration macros in <boost/config.hpp>, as well as configuration macros meant to be supplied by the application. These macros are documented here. These macros are defined by Boost.Thread but are expected to be used by application code.

BOOST_HAS_THREADS: Indicates that threading support is available. This means both that there is a platform specific implementation for Boost.Thread and that threading support has been enabled in a platform specific manner. For instance, on the Win32 platform there's an implementation for Boost.Thread, but unless the program is compiled against one of the multithreading runtimes (often determined by the compiler predefining the macro _MT) the BOOST_HAS_THREADS macro remains undefined.

These macros are defined by Boost.Thread and are implementation details of interest only to implementors.

BOOST_HAS_WINTHREADS: Indicates that the platform has the Microsoft Win32 threading libraries, and that they should be used to implement Boost.Thread.

BOOST_HAS_PTHREADS: Indicates that the platform has the POSIX pthreads libraries, and that they should be used to implement Boost.Thread.

BOOST_HAS_FTIME: Indicates that the implementation should use GetSystemTimeAsFileTime() and the FILETIME type to calculate the current time. This is an implementation detail used by boost::detail::getcurtime().

BOOST_HAS_GETTIMEOFDAY: Indicates that the implementation should use gettimeofday() to calculate the current time. This is an implementation detail used by boost::detail::getcurtime().

Last revised: October 15, 2006 at 14:52:53 GMT
Copyright © 2001-2003 William E. Kempf
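A minimal application-side sketch (assuming Boost is installed; this example is not taken from the Boost documentation) of how BOOST_HAS_THREADS is typically consulted to guard thread-dependent code paths:

```cpp
// Guard threaded code on BOOST_HAS_THREADS, per the macro's documented
// meaning above: defined only when a platform implementation exists AND
// threading was enabled at compile time.
#include <boost/config.hpp>
#include <iostream>

int main() {
#ifdef BOOST_HAS_THREADS
    std::cout << "Threading support is available and enabled.\n";
#else
    std::cout << "No threading support; using a single-threaded fallback.\n";
#endif
    return 0;
}
```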
<urn:uuid:d4244a3d-fd02-4fc2-8746-ccde7bb9ce69>
2.71875
399
Documentation
Software Dev.
36.695619
I think there is another caveat in that last example which is supposed to run in constant space. The lines (in the last code example) x' <- readSTRef x y' <- readSTRef y writeSTRef x y' writeSTRef y (x'+y') will build up a long chain of unevaluated thunks (1+1+2+3+5+8+...) in the STRef, which overflows the stack when it is finally evaluated. When compiled, I get a stack overflow when running fibST 1100000 (1.1 million) with a stack size of 8MB. There might seem to be some hidden strictness in "The ST monad provides support for strict state threads," but that isn't explained on this page; the strictness there refers to the sequencing of the state operations, not to the values written into an STRef. Forcing evaluation with seq stops the stack overflow from happening, e.g. x' `seq` writeSTRef y (x'+y')
<urn:uuid:5f01066d-e353-41a3-8fcc-537098a6b189>
2.71875
186
Comment Section
Software Dev.
84.888066
The galaxy models used in these experiments have a central bulge, an exponential disk, and a spherical dark halo. Disks are actually represented in two different ways; as flat, rotating structures corresponding to the usual notion of a disk, or as spherical, isotropic distributions with the same cumulative mass profiles as an exponential disk. When the latter disk representation is used, distribution functions for all components may be calculated exactly using Abel integrals. For ease of comparison with earlier work, we initially adopted galaxy models similar to those used by Velazquez & White (1996), with spherical bulges following Hernquist (1990) profiles, exponential & isothermal disks, and non-singular isothermal halos (Hernquist 1993). But a problem came to light in trying to realize the halo using an isotropic distribution function; the r^-1 profile of the inner bulge creates a potential with a nonzero gradient as r → 0, and in such a potential well a halo with a constant-density core can only be realized with an anisotropic distribution function. We therefore replaced the isothermal halo profile with an NFW profile; the latter has the same profile as the bulge as r → 0, so it can be realized with an isotropic distribution function. For numerical convenience, the bulge profile was tapered off smoothly at large radii. The NFW halo has a logarithmically diverging mass, so it must also be tapered at large radii; we used the form adopted by Springel & The disk's density profile depends on cylindrical radius R and height z. Parameters for the galaxy model are listed below. For each component, M is the mass, a is the scale radius, and b is the taper radius.

Component   Integrated Mass   Scale Radius   Taper Radius
bulge       Mb = 0.3125       ab = 0.15      bb = 8
disk        Md = 1            Rd = 1
halo        Mh = 14           ah = 2.5       bh = 24.9485

These quantities are reported in arbitrary units with G = 1. Any such system of units is internally self-consistent, and the results may be rescaled as needed. To scale this model to the Milky Way, for example, we may equate the disk's scale length Rd and mass Md to the scale length and mass of the MW's disk, Rd = 3.5 kpc and Md = 6 × 10^10 M⊙. Fig. 1 shows density and mass profiles for each component of the galaxy model. Here the disk is represented by a spherical configuration which has the same mass profile M(r). At all radii the spherically-averaged disk density and mass profiles fall below the corresponding halo profiles. This is not the case in the V&W model - there the constant density of the halo core allows the disk to dominate (slightly) at intermediate radii. Initial data are generated from isotropic distribution functions fb(E), fs(E), and fh(E), which represent the bulge, `spherical disk', and halo, respectively. We calculated these distribution functions using an Abel equation which relates component c's density profile ρc(r) and distribution function fc(E) to the potential generated by all components, Φ(r). To allow for the finite resolution of the N-body force calculation, we smoothed the total density profile ρ(r) with a Plummer kernel before evaluating the potential function Φ(r). This smoothing procedure employs a semi-empirical fitting function with a free parameter κ; the tests below were run to establish the optimal value of this parameter. The N-body realizations used in these tests had Nb = 20480 bulge particles, Ns = 65536 `spherical disk' particles, and Nh = 229376 halo particles. Plummer smoothing with a length scale of ε = 0.05 length units was adopted.
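For reference, here is a sketch of the enclosed-mass profiles of the two spheroidal components. It is an illustration only, using the standard untruncated Hernquist and NFW forms; the tapers described above are ignored, and the halo normalization rho0 is left as a free parameter rather than being fixed by Mh and the taper radius.

```cpp
// Enclosed mass M(r) for a Hernquist bulge and an (untruncated) NFW halo,
// in model units with G = 1. Parameters follow the table above.
#include <cmath>
#include <cstdio>

const double PI = 3.141592653589793;

// Hernquist (1990): M(r) = M * r^2 / (r + a)^2
double hernquistMass(double r, double M, double a) {
    return M * r * r / ((r + a) * (r + a));
}

// NFW: M(r) = 4*pi*rho0*a^3 * [ln(1 + r/a) - (r/a)/(1 + r/a)],
// which diverges logarithmically at large r, hence the need for a taper.
double nfwMass(double r, double rho0, double a) {
    const double x = r / a;
    return 4.0 * PI * rho0 * a * a * a * (std::log(1.0 + x) - x / (1.0 + x));
}

int main() {
    const double Mb = 0.3125, ab = 0.15;  // bulge mass and scale radius
    const double ah = 2.5;                // halo scale radius
    const double rho0 = 1.0;              // assumed normalization
    for (double r = 0.25; r <= 8.0; r *= 2.0)
        std::printf("r = %5.2f  M_bulge = %.4f  M_halo/rho0 = %.4f\n",
                    r, hernquistMass(r, Mb, ab), nfwMass(r, rho0, ah));
    return 0;
}
```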
Forces were calculated with a modified tree-code. Bodies were advanced with a time-centered leap-frog; for these initial tests a time-step ∆t = 1/16 was used. [Fig. 2 panels: κ = 1.15, κ = 1.25, κ = 1.30, κ = 1.35] Fig. 2 and the associated animations show how bodies evolve in binding energy Ei during a fairly short run of 8 time units. As the animations make clear, bodies are scattered in binding energy by stochastic events and by large-scale potential fluctuations; the former give rise to a diffusive process, while the latter create coherent patterns of motion. These coherent motions are very nearly absent in the κ = 1.35 model, indicating that this model is closest to equilibrium. Initial data for the disk is generated using a routine which sets up a self-gravitating disk in approximate equilibrium with the external potential due to the bulge and halo. Initial data for the bulge and halo are generated using the same Abel integral technique described above; to calculate the required distribution functions, we assume that the total potential is spherical. Of course, the actual potential is aspherical because of the disk, but this does not seem a significant source of error in the present experiments. Last modified: September 24, 2002
<urn:uuid:272b5197-a0b8-4dfc-8eb3-cc0943ff29b1>
2.84375
1,128
Academic Writing
Science & Tech.
46.931271
Experiment of the Month Looking down on the pendulum, as in the figure, the simplest motion of the pendulum bob would be motion in a straight line. (Strictly, the projection of the pendulum motion onto a horizontal plane can be a straight line.) The pendulum motion can also create an ellipse in the horizontal plane. The ellipse may be described as the sum of two motions, one along the minor axis of the ellipse and one along the major axis. Ordinarily, the bob displacement along the minor axis is maximum when the bob displacement along the major axis is zero. In the figure at right, the major axis is along the y axis, and the bob is following a clockwise path around the ellipse. Ordinarily it is assumed that the period of a pendulum is independent of the amplitude of the pendulum motion. In the case of elliptical motion this means that the period of oscillation along the minor axis (x) is the same as the period along the major axis (y). Imagine for a moment that the period along the x axis is made shorter than the period along the y axis. In particular, imagine that the period becomes shorter just as the bob passes through point A. After 1/4 of a period, the bob will go to its maximum y displacement (zero y velocity). Ordinarily, at the same time, the bob will reach zero x displacement (maximum x velocity). However, if the x period is shorter, the x motion reaches its maximum velocity sooner. Its average velocity during this 1/4 period is larger, and the bob travels farther during the 1/4 period than it "should." The effect of this extra travel in the x direction is to shift the apse of the ellipse in a clockwise direction. In fact, the period does depend (weakly) on the amplitude of the motion, and the minor axis does have a shorter period. If the bob follows a clockwise path around the ellipse, the ellipse will precess clockwise. The larger the difference in periods, the larger is the rate of precession. To minimize this effect, in monumental Foucault pendulums, the amplitudes are kept small. Synge and Griffith worked out the rate of precession for their mechanics book. (Synge and Griffith, Principles of Mechanics, McGraw-Hill, second edition 1949, 373-381) Their result, and an application to a pendulum like ours, is shown in the figure at the right. In the figure Xmax is the maximum displacement along the minor axis, and Ymax is the maximum displacement along the major axis. The ordinary Foucault pendulum at our latitude (about 40 degrees north) "should" precess clockwise at about 10 degrees per hour, roughly 0.2 radian per hour. This is the same as the elliptical precession when the minor axis is only 2% of the major axis in size. The bob can follow an elliptical path in either the clockwise or the counterclockwise direction, leading to precession either in the direction of the Foucault precession, or opposite to it. Our method of driving pumps energy into the major axis motion, and pumps energy out of the minor axis motion. In the absence of the Coriolis force, our pendulum motion would evolve towards linear motion rather than elliptical motion. On the other hand, in the presence of a Coriolis force, our method of driving causes a clockwise precession. A sketch of how the Coriolis force associated with the vertical driving velocity induces clockwise precession follows: At mid-latitudes, when the pendulum is pulled up as the y axis swings north at maximum velocity, the vertical component of velocity causes the bob to move (more nearly) parallel to the earth's axis.
The Coriolis force is reduced. When the same bob swings south, pulling the bob up causes the bob to move (more nearly) perpendicular to the earth's axis. The Coriolis force is increased. In the extreme case, the northward path is straight, with no Foucault precession, while the southward path is curved, with an enhanced Foucault precession. This cyclic path has an elliptical component, with a clockwise elliptical trajectory. The piston-driven pendulum will precess clockwise more rapidly than the Foucault pendulum, because of the additional elliptical precession, driven by the Coriolis force on the vertical pumping motion. In the figure at the right, the path of the bob is calculated for one cycle, 2 seconds in duration, as the bob swings first south, and then north. The drive is a 0.1 second upward pull at the lowest part of the northward swing. The north-south motion is assumed to be sinusoidal. The east-west motion is due to the Coriolis force. In the figure, the end point has shifted clockwise from the starting point due to the ordinary Foucault precession. The path has opened to approximate an ellipse, due to the modification of the Coriolis force by the additional velocity caused by the vertical driver. To make the effect easier to see, it has been exaggerated by stretching the picture along the x axis.
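To get a feel for the sizes involved, here is a rough numerical sketch. It uses the standard small-amplitude apsidal precession rate Omega = (3/8) omega Xmax Ymax / L^2, consistent with the Synge and Griffith result cited above; the pendulum length and major-axis amplitude are assumed values, not taken from this article.

```cpp
// Compare the Foucault precession rate at latitude 40 N with the
// elliptical (apsidal) precession rate, and find the minor-axis
// amplitude Xmax at which the two rates are equal.
#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double g = 9.81;      // m/s^2
    const double L = 10.0;      // pendulum length, m (assumed)
    const double Ymax = 1.0;    // major-axis amplitude, m (assumed)
    const double omega = std::sqrt(g / L);      // swing frequency, rad/s

    const double OmegaE = 2.0 * PI / 86164.0;   // Earth's rotation, rad/s
    const double foucault = OmegaE * std::sin(40.0 * PI / 180.0);

    // Solve (3/8) * omega * Xmax * Ymax / L^2 = foucault for Xmax:
    const double Xmax = foucault * L * L / (0.375 * omega * Ymax);

    std::printf("Foucault rate: %.1f deg/hour\n",
                foucault * (180.0 / PI) * 3600.0);
    std::printf("Matching Xmax: %.3f m (%.1f%% of Ymax)\n",
                Xmax, 100.0 * Xmax / Ymax);
    return 0;
}
```

For these assumed numbers the minor axis need only be about one percent of the major axis to precess as fast as the Foucault effect itself, which is why monumental installations work so hard to keep the ellipse small.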
<urn:uuid:fe4996e4-8209-4fcd-9e6c-937c50a10dc6>
4.3125
1,105
Academic Writing
Science & Tech.
45.886743
From its crackling electrical storm of activity, the brain needs to predict the surrounding world in a trustworthy way, whether that be working out which words are likely to crop up next in a conversation, or calculating if a gap in the traffic is big enough to cross the road. What lies behind its crystal-ball gazing? One answer comes from an area of mathematics known as Bayesian statistics. Named after an 18th-century mathematician, Thomas Bayes, the theory offers a way of calculating the probability of a future event based on what has gone before, while constantly updating the picture with new data. For decades neuroscientists had speculated that the brain uses this principle to guide its predictions of the future, but Karl Friston at University College London took the idea one step further. Friston looked specifically at ...
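The updating rule at the heart of this is compact. Here is a toy sketch in the spirit of the road-crossing example; all of the prior and likelihood numbers are assumed for illustration.

```cpp
// Bayes' rule applied repeatedly: posterior ~ likelihood * prior,
// renormalized after each new observation.
#include <cstdio>

int main() {
    // Two hypotheses about a gap in traffic: "safe to cross" or not.
    double pSafe = 0.5;                             // assumed prior
    const double likeSafe = 0.9, likeUnsafe = 0.2;  // assumed likelihoods of
                                                    // seeing "car far away"
    for (int obs = 1; obs <= 3; ++obs) {
        const double num = likeSafe * pSafe;
        const double denom = num + likeUnsafe * (1.0 - pSafe);
        pSafe = num / denom;                        // updated belief
        std::printf("After observation %d: P(safe) = %.3f\n", obs, pSafe);
    }
    return 0;
}
```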
<urn:uuid:c34cf866-0812-45fb-9cdc-3cb5c349cef9>
3.359375
207
Truncated
Science & Tech.
44.009311
With the blogosphere all a-flutter with discussions of hundredths of degrees adjustments to the surface temperature record, you probably missed a couple of actually interesting stories last week. One was a proposed definition of climate 'tipping elements': The parameters controlling the system can be transparently combined into a single control, and there exists a critical value of this control from which a small perturbation leads to a qualitative change in a crucial feature of the system, after some observation time. And the examples that he thinks have the potential to be large scale tipping elements are: Arctic sea-ice, a reorganisation of the Atlantic thermohaline circulation, melt of the Greenland or West Antarctic Ice Sheets, dieback of the Amazon rainforest, a greening of the Sahara, Indian summer monsoon collapse, boreal forest dieback and ocean methane hydrates. To that list, we’d probably add any number of ecosystems where small changes can have cascading effects – such as fisheries. It’s interesting to note that most of these elements include physics that modellers are least confident about – hydrology, ice sheets and vegetation dynamics. Prediction vs. Projections As we discussed recently in connection with climate ‘forecasting‘, the kinds of simulations used in AR4 are all ‘projections’ i.e. runs that attempt to estimate the forced response of the climate to emission changes, but that don’t attempt to estimate the trajectory of the unforced ‘weather’. As we mentioned briefly, that leads to a ‘sweet spot’ for forecasting a couple of decades into the future where the initial condition uncertainty dies away, but the uncertainty in the emission scenario is not yet so large as to be dominating. Last week there was a paper by Smith and colleagues in Science that tried to fill in those early years, using a model that initialises the heat content from the upper ocean – with the idea that the structure of those anomalies controls the ‘weather’ progression over the next few years. They find that their initialisation makes a difference for about a decade, but that at longer timescales the results look like the standard projections (i.e. 0.2 to 0.3ºC per decade warming). One big caveat is that they aren’t able to predict El Niño events, and since they account for a great deal of the interannual global temperature anomaly, that is a limitation. Nonetheless, this is a good step forward and people should be looking out for whether their predictions – for a plateau until 2009 and then a big ramp up – materialise over the next few years. Model ensembles as probabilities A rather esoteric point of discussion concerning ‘Bayesian priors’ got a mainstream outing this week in the Economist. The very narrow point in question is to what extent model ensembles are probability distributions. i.e. if only 10% of models show a particular behaviour, does this mean that the likelihood of this happening is 10%? The answer is no. The other 90% could all be missing some key piece of physics. However, there has been a bit of confusion generated through the work of climateprediction.net – the multi-thousand member perturbed parameter ensembles that, notoriously, suggested that climate sensitivity could be as high as 11 ºC in a paper a couple of years back. The very specific issue is whether the histograms generated through that process could be considered a probability distribution function or not. (‘Not’ is the correct answer).
The point in the Economist article is that one can demonstrate that very clearly by changing the variables you are perturbing (in the example they use an inverse). If you evenly sample X, or evenly sample 1/X (or any other function of X), you will get a different distribution of results. Then instead of (in one case) getting 10% of model runs to show behaviour X, now maybe 30% of models will. And all this is completely independent of any change to the physics. My only complaint about the Economist piece is the conclusion that, because of this inherent ambiguity, dealing with it becomes a ‘logistical nightmare’ – that is incorrect. What should happen is that people should stop trying to think that counting finite samples of model ensembles can give a probability. Nothing else changes.
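The reparameterisation effect is easy to reproduce numerically. The sketch below uses a hypothetical parameter range, not anything from climateprediction.net: sampling X uniformly versus sampling 1/X uniformly gives different fractions of 'runs' showing the same behaviour (here, X > 5), with no change to anything physical.

```cpp
// Fraction of samples with X > 5 when X is sampled uniformly on [2, 6]
// versus when 1/X is sampled uniformly on [1/6, 1/2]. Expect roughly
// 25% in the first case and 10% in the second.
#include <cstdio>
#include <random>

int main() {
    std::mt19937 rng(42);
    const int N = 1000000;
    const double lo = 2.0, hi = 6.0;  // assumed parameter range

    std::uniform_real_distribution<double> uX(lo, hi);
    std::uniform_real_distribution<double> uInv(1.0 / hi, 1.0 / lo);

    int hitsX = 0, hitsInv = 0;
    for (int i = 0; i < N; ++i) {
        if (uX(rng) > 5.0) ++hitsX;            // X sampled uniformly
        if (1.0 / uInv(rng) > 5.0) ++hitsInv;  // 1/X sampled uniformly
    }
    std::printf("P(X > 5), uniform in X:   %.3f\n", double(hitsX) / N);
    std::printf("P(X > 5), uniform in 1/X: %.3f\n", double(hitsInv) / N);
    return 0;
}
```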
<urn:uuid:e41eefb8-4d84-421f-b7e8-e849dd778d02>
2.75
899
Comment Section
Science & Tech.
40.620032
Today, a story about 19th-century science and 21st-century technology. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people whose ingenuity created them. Here comes a wild new idea for meeting our energy needs. And it leads us right back to the Victorian scientist Lord Kelvin. In 1851, Kelvin wrote our modern system of thermodynamics. Up to then, the things we knew about heat seemed to disagree. He made sense of it all. Once he did, we could design engines that'd draw power from any natural temperature gradient. So we look for natural gradients. The Gulf Stream provides one. Engineers have worked on a floating engine to generate electric power from warm water flowing over colder water below. There's a much larger temperature gradient right under our feet. In many places, Earth's temperature rises 150 degrees each mile you drill downward. And here we run into Lord Kelvin again. In 1862 the deeply religious and anti-evolutionist Kelvin shocked both fundamentalists and geologists. He calculated that Earth was a hundred million years old. If we began as molten lava, he said, it would've taken that long to cool down and establish Earth's temperature gradient. Kelvin didn't know that radioactivity sustains the gradient. His result was low by a factor of 50. And it kicked off a fight that lasted into this century. Yet the fight itself finally gave us better mathematics as well as better physics. Now engineers want to tap into Earth's temperature gradient. They're drilling test holes into the earth. They mean to pump cold water down into the rock, 12,000 feet below. They should be able to bring it back under pressure at 460 degrees. The hot water can then supply a power plant on the surface. We Americans are so hungry for energy. We use 80 quadrillion BTUs a year. But Earth is vast. The energy stored in subsurface rock could supply that energy for a hundred thousand years. Still, it's a new technology. And new technologies always harbor troubles. We'll have to force water down through cracks in the hot, dry rock. Then we have to find the water after it's heated and pump it back up. And, like coal or oil, Earth's heat can be used up. When it's gone, it's gone for a long, long time. As we drill downward I hope we do the right thing. And I look back at quiet, scholarly Lord Kelvin. There he sits behind it all. He wasn't thinking about 21st-century power. He was thinking about Watt's steam engine -- and about a debate between geologists and fundamentalists that's now grown quiet. So honest 19th-century thought is still feeding 21st century life. Kelvin's lucid mind set forces in motion that've reached beyond his comprehension -- and beyond his dreams. I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work. Nahin, P.J., Kelvin's Cooling Sphere: Heat Transfer Theory in the 19th Century Debate over the Age-of-the-Earth. History of Heat Transfer: Essays in Honor of the 50th Anniversary of the ASME Heat Transfer Division. (E.T. Layton and J.H. Lienhard, eds.) New York: ASME, 1988, pp. 65-85. Wald, M.W., Mining Deep Underground for Energy. The New York Times, Sunday, November 3, 1991, pg. 16 F. Some might object when I credit Kelvin for setting up the means for drawing power from a temperature gradient. Carnot had already given us the underlying idea in 1824. But he did so without understanding what heat was. It wasn't until 1851 that Kelvin made a proper separation of the first and second laws of thermodynamics.
Only then could we really relate the power output of a heat engine to a temperature gradient, quantitatively. The Engines of Our Ingenuity is Copyright © 1988-1997 by John H. Lienhard.
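As a rough check on the episode's numbers (a sketch only; the heat-rejection temperature is an assumption, not a figure from the program), the Kelvin/Carnot limit bounds the efficiency of any engine running between the 460-degree water and the surface:

```cpp
// Carnot efficiency limit, 1 - Tc/Th, for the hot-dry-rock scheme above.
#include <cstdio>

double fahrenheitToKelvin(double f) {
    return (f - 32.0) * 5.0 / 9.0 + 273.15;
}

int main() {
    const double Th = fahrenheitToKelvin(460.0);  // water from 12,000 ft
    const double Tc = fahrenheitToKelvin(70.0);   // assumed surface rejection
    std::printf("Carnot limit: %.1f%%\n", 100.0 * (1.0 - Tc / Th));
    return 0;
}
```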
<urn:uuid:62f66fa0-720c-451d-8948-bf573c5eb400>
3.953125
916
Audio Transcript
Science & Tech.
68.155296
The workhorse of minibuffer interaction is a coroutine called minibuffer.read. That it is a coroutine should not be intimidating to the novice — you need not necessarily understand how coroutines work in order to use them in the most common ways — only that the yield keyword is used to call them. There are a dozen or more procedures in Conkeror with names that start with "minibuffer.read", for example minibuffer.read_file, and minibuffer.read_yes_or_no. However, most of these are simply wrappers around minibuffer.read itself, so that is the place to begin. Minibuffer.read's behavior is controlled by keyword options. Because of the number of these options, keywords have a clear advantage over positional parameters: the call form is shorter, and each argument is labeled by its keyword, which makes the form easier to read than it would be otherwise. A note about keyword usage: anywhere you would give a value of boolean true for a keyword, you can simply give the keyword name with no value. 1. keyword options - The prompt string. - The initial input text. - When given and true, the initial input will be selected. Gives the keymap to use for the minibuffer interaction. Defaults to minibuffer_keymap. Typically if you wanted to add key bindings to a particular minibuffer prompt, you would make a keymap that inherits from minibuffer_keymap, and pass that keymap as the value for $keymap to minibuffer.read. - When given, a string key name, under which the minibuffer history for this call to read will be accessed and saved. - When given, should be a function of two arguments. The first is the input string, and the second is a reference to the minibuffer. Normal exit from the minibuffer will not be permitted unless the validator function returns a true value. So glad you asked. See below. - When true, normal exit from the minibuffer will only be allowed when an item found in the completions is chosen. - When present, it must be a string found in the completions. This entry will be selected in the completions list when the input is empty. This option is only used in conjunction with $match_required. Boolean or string. String values are labels that will be resolved to booleans by traversing minibuffer_auto_complete_preferences and minibuffer_auto_complete_default, which see. - When true, the completions display will pop open as soon as there are completions to display for the input. For example, you type "f" and there is a completion "foo"; the completions display would pop open without you hitting tab. - When true, auto-completion is performed immediately when the minibuffer input is opened. Valid only when $auto_complete is also given. - Normally, empty input (such as after deleting all text, or if $auto_complete_initial is given, immediately when the prompt is opened) results in a completions display of all available completions. When this option is given, empty input results in no completions. Valid only when $auto_complete is also given. Delay (in milliseconds) after the most recent key-stroke before auto-completing. Defaults to the value of the user variable default_minibuffer_auto_complete_delay. There is rarely any need to give this keyword. When given, space will be made in the completions display for an icon on each row. The icons are gotten from the get_icon method of the completer. - When true, space can be used as a completion key in addition to tab. Useful for minibuffer interactions where space never appears as a character in the allowed input.
Often useful in combination with prefix_completer. Constructs a completer that completes based on prefix. The completion data and the procedures to interpret the items for display in the UI are given by the following keywords. Gives either an array of items to select among, or a function for obtaining such an array, which we will come back to in a moment. The simplest case is an array of strings. You can also have an array of other types of objects, and then provide $get_string and optionally $get_description and/or $get_value. - This can also be given as a function, and it's a little bit subtle how it works, so here goes: it should be a function of one argument. The object passed to the function is itself a function, which should be called for each completion item. You could think of it as "add item". - A procedure that takes a completion item as its argument and returns the string representation that will actually be completed upon. The default for this keyword is an identity procedure (a procedure that returns its argument unchanged). - A procedure that takes a completion item as its argument and returns a string description, normally displayed in the column to the right of the completions. Generally used to provide extra information about the item. The default is a procedure that always returns the empty string. - When given, this is a procedure that takes a completion item as its argument and returns the string url of an icon to display in the completions pane for this item. Used only in combination with the minibuffer.read keyword $enable_icons. - When given, this is a procedure that transforms the final chosen item from the completions to another value to return to the caller of minibuffer.read. Used only in combination with the minibuffer.read keyword $match_required. An all_word_completer allows you to match completions by typing any number of substrings, separated by spaces. The input is allowed to match either the item itself (rather its string representation) or its description. Takes the same keyword arguments as prefix_completer.
<urn:uuid:86d6c008-729e-45cb-be43-ed87791fa36b>
2.859375
1,263
Documentation
Software Dev.
44.489415
Charles A. Long, G. P. Zhang, Thomas F. George & Claudine F. Long. (2003) Physical theory, origin of flight, and a synthesis proposed for birds. Journal of Theoretical Biology 224: 9-26. Neither flapping and running to take-off nor gliding from heights can be disproved as the assured evolutionary origin of self-powered flight observed in modern vertebrates. Gliding with set wings would utilize available potential energy from gravity but gain little from flapping. Bipedal running, important in avian phylogeny, possibly facilitated the evolution of flight. Based on physical principles, gliding is a better process for the origin of powered flight than the "ground-up" process, which physically is not feasible in space or time (considering air resistance, metabolic energy costs, and mechanical resistance to bipedal running). Proto-avian ancestors of Archaeopteryx and Microraptor probably flapped their sparsely feathered limbs synchronously while descending from leaps or heights, with such "flutter-gliding" presented as a synthesis of the two earlier theories of flight origin (making use of the available potential energy from gravity, involving wing thrusts and flapping, coping with air resistance that slows air speed, but effecting positive fitness value in providing lift and slowing dangerous falls).
<urn:uuid:3fc04a85-4865-49c4-8510-355d81561f07>
2.90625
343
Comment Section
Science & Tech.
38.064457
This chapter describes the functions for creating streams and performing input and output operations on them. As discussed in I/O Overview, a stream is a fairly abstract, high-level concept representing a communications channel to a file, device, or process. Streams: About the data type representing a stream. Standard Streams: Streams to the standard input and output devices are created for you.
<urn:uuid:23c912be-dd48-49fe-a389-53e85a6b57ba>
3.015625
84
Documentation
Software Dev.
44.477321
Prove that the points with the position vectors rA = (2,2), rB = (-1,6), rC = (-5,3) and rD = (-2,-1) in the (xy)-plane are the vertices of a square. It helps if you plot the points.
Code:
           B
    (-1,6) *
           |
  C        |
(-5,3) *   |
           |       A
           |       * (2,2)
 - - - - - + - - - - - -
 (-2,-1) * |
         D |
We have: AB = rB - rA = (-3, 4), BC = rC - rB = (-4, -3), CD = rD - rC = (3, -4), and DA = rA - rD = (4, 3), so |AB| = |BC| = |CD| = |DA| = sqrt(3^2 + 4^2) = 5. Quadrilateral ABCD is a rhombus. Hence: AB · BC = (-3)(-4) + (4)(-3) = 12 - 12 = 0, so adjacent sides are perpendicular. Therefore, ABCD is a square.
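A quick numerical check of the computations above (a sketch; any language would serve as well):

```cpp
// Verify that all four sides have length 5 and that adjacent sides are
// perpendicular (zero dot product), so ABCD is a square.
#include <cmath>
#include <cstdio>

struct Vec { double x, y; };
Vec sub(Vec a, Vec b) { return {a.x - b.x, a.y - b.y}; }
double dot(Vec a, Vec b) { return a.x * b.x + a.y * b.y; }

int main() {
    const Vec A{2, 2}, B{-1, 6}, C{-5, 3}, D{-2, -1};
    const Vec AB = sub(B, A), BC = sub(C, B), CD = sub(D, C), DA = sub(A, D);
    std::printf("|AB| = %.0f, |BC| = %.0f, |CD| = %.0f, |DA| = %.0f\n",
                std::sqrt(dot(AB, AB)), std::sqrt(dot(BC, BC)),
                std::sqrt(dot(CD, CD)), std::sqrt(dot(DA, DA)));
    std::printf("AB . BC = %.0f (0 means a right angle)\n", dot(AB, BC));
    return 0;
}
```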
<urn:uuid:162bd0bf-b7b5-4569-8b40-ac09df7eb630>
2.859375
175
Tutorial
Science & Tech.
104.236594
Capturing a Whisper From Space Deep space communications are far more challenging than communications with Earth-orbiting satellites because the distances are so staggering. Signals must travel millions or billions of kilometers between Earth and the spacecraft. At the same time, the spacecraft's communications equipment must be compact and lightweight to allow for a larger scientific payload. Most spacecraft communicate at a very low power - up to 20 times less than the power required for a digital watch. To hear the whisper of a signal at such great distances, receiving antennas on Earth must be very large, equipped with extremely sensitive receivers and protected from interference. Imagine a person shouting loudly to a companion a block away. If the street is quiet, the person can be understood. But what if the two pals are in Los Angeles and San Francisco - 700 kilometers (435 miles) apart? That's exactly the kind of conditions the Deep Space Network operates under. The Earth-orbiting satellites are the equivalent of two friends shouting a block apart, while the deep space probes are scattered through the solar system - from 2001 Mars Odyssey at Mars to Voyager 1 at the very edge of our solar system. The Deep Space Network listens to low-power spacecraft signals with huge antenna dishes that are precisely shaped and pointed at the target with pinpoint accuracy. In a sense, they are giant ears. The network's largest - the 70-meter (230-foot) diameter antenna - stands as tall as a 9-story building. They also serve as giant megaphones so mission control can talk to the spacecraft. The network's antennas use high power transmitters to broadcast commands to spacecraft, allowing mission controllers to activate computers and instruments and make course corrections. A Global Network As if that weren't hard enough, the Deep Space Network must accomplish this while anchored to the surface of a rapidly spinning planet. Imagine you are standing on Callisto, one of Jupiter's moons, looking back towards Earth with a powerful telescope. First, you'd see the United States. A few hours later - as the Earth rotates on its axis - you'd be looking at Australia, and still later the European continent would swing into view. To compensate for this rotation, the Deep Space Network maintains clusters of antennas at three locations - in California's Mojave Desert, near Madrid, Spain, and outside of Canberra, Australia. The spacecraft signals are received at one site; as Earth turns, the spacecraft "sets" at that site - just like the Sun sets every night - and the next site picks up the signal, then the third and then back to the first. Think of it as a relay race with one runner handing off the baton to the next, who hands it off to a third. This configuration allows NASA to maintain 24-hour contact with distant spacecraft.
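A back-of-the-envelope sketch of why the signal is such a whisper (the transmitter power and distances below are assumed round numbers): for an isotropic radiator, the received flux falls off as the inverse square of the distance.

```cpp
// Received flux P / (4*pi*d^2) from a low-power transmitter at a
// near-Earth distance versus roughly the distance of Saturn.
#include <cstdio>

const double PI = 3.141592653589793;

double flux(double watts, double meters) {
    return watts / (4.0 * PI * meters * meters);  // W/m^2 over a sphere
}

int main() {
    const double P = 20.0;          // transmit power, W (assumed)
    const double dNear = 2.0e6;     // a near-Earth satellite, m (assumed)
    const double dSaturn = 1.4e12;  // roughly Earth to Saturn, m
    std::printf("Flux from Earth orbit: %.3e W/m^2\n", flux(P, dNear));
    std::printf("Flux from Saturn:      %.3e W/m^2\n", flux(P, dSaturn));
    std::printf("Ratio: about %.0e times weaker\n",
                flux(P, dNear) / flux(P, dSaturn));
    return 0;
}
```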
<urn:uuid:d8c0bddb-7279-4807-9cf0-e5df0fbc1eed>
3.9375
563
Knowledge Article
Science & Tech.
45.671575
2006: Mulling the World From a Bench on Broadway - New York Times Dr. Schmidt runs a global climate model, called ModelE, out of the NASA Goddard Institute for Space Studies, a part of the Earth Institute at Columbia University. ModelE is one of about a dozen global models that have been used to project the climate into the future. The results form the basis for international treaties on climate change, including the Kyoto Protocol, and for governmental stances regarding climate change across the world. ModelE breaks down climate into the basic laws of physics. The equations are written in Fortran, a computer language that is, as Dr. Schmidt puts it, "very old and not very trendy." The computer code is 126,327 lines, to be exact, and when Dr. Schmidt scrolls through it on his computer screen, it looks like nothing so much as an extensive (and incomprehensible) grocery list. In his office, not long ago, Dr. Schmidt tried to translate a few lines: "These lines," he said, "calculate how much water vapor condenses out of air to form a cloud as the temperature decreases."
<urn:uuid:cfebe4b5-76ae-4018-ba7e-d49a20b6d9bc>
2.953125
232
Personal Blog
Science & Tech.
51.203464
I just didn't know the fact that the sums of the heat capacities would equal 0. Is there a specific reason why? Yes. We make the assumption that no energy in the form of heat is lost or gained by the system, that it is perfectly insulated. This means the total energy in the form of heat cannot change magnitude. This means for the temperature of one thing to increase, energy must flow into it, and the energy has to come from somewhere, the system or the surroundings. This is better represented mathematically as: q1 = -q2, the heat gained by one thing equals the heat lost by the other. But that only works for two things, so move them to the same side and you are left with q1 + q2 = 0; so you can use the same assumption for a total system (system + surroundings) with more than 2 components: the sum of all the heat terms is 0.
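As a worked illustration (a standard textbook case, not part of the original thread), the same bookkeeping fixes the final temperature when two substances are mixed in an insulated container:

```latex
% Consequence of q_1 + q_2 = 0 for two bodies mixed adiabatically,
% assuming constant specific heats and no phase change:
\begin{align*}
  q_1 + q_2 &= m_1 c_1 (T_f - T_1) + m_2 c_2 (T_f - T_2) = 0 \\
  T_f &= \frac{m_1 c_1 T_1 + m_2 c_2 T_2}{m_1 c_1 + m_2 c_2}
\end{align*}
```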
<urn:uuid:f7244d3d-ee70-4e0a-9921-5083184e977b>
2.875
156
Q&A Forum
Science & Tech.
59.242313
No matter what number you add you always come back to 9. examples: 13 1+3=4 13-4=9 94 9+4=13 94-13=81 8+1=9 156 1+5+6=12 156-12=144 1+4+4=9 And no matter what the size of the number it always goes back to 9! 1,233,465,957 add up to 45 4+5=9 1,233,465,912=add all up =36 3+6=9 Can someone explain who invented numbers and why 9 is hidden in mathematical solving problems? The number 9 is not supernatural. Certain spiritual, religious, and occult teachings claim that numbers have special meaning when in fact they do not. For example, if God is a Trinity, the Father, the Son, and the Holy Spirit, that does not make the number 3 supernatural. The examples you list demonstrate a number theoretical property of the base 10 or decimal number system. Base 10 is commonly used in mathematics and throughout society out of habit. There is nothing sacred about base 10. It became popular a long time ago simply because people counted on their fingers, and most people have 10 fingers. What determines the base is the number of single character numbers used and what multiple character numbers represent. In base 10, there are 10 such single character numbers: 0,1,2,3,4,5,6,7,8, and 9. The next number greater than 9 is called 10, which is 0X1 + 1X10. The number 472 = 2X1 + 7X10 + 4X10X10. Other bases exist. Any whole number greater than 1 can define a base. For example, all computers operate on base 2 or binary mathematics. The 2 characters used are 0 and 1. This is the case because a single transistor can be in only one of two possible states, on or off. In this case, the number after 1 is 10, which can be converted to base 10 as 0X1 + 1X2 = 2. The number 11001 = 1X1 + 0X2 + 0X2X2 + 1X2X2X2 + 1X2X2X2X2 = 25 in base 10. Digital circuits are often worked with in groups of 8, called a byte, to make it easier to code and decode data. So base 8 or octal mathematics can be used. The 8 single character numbers used are 0,1,2,3,4,5,6, and 7. The number after 7 is 10, which, converted to base 10, is the same as 0X1 + 1X8 = 8. The number 142 = 2X1 + 4X8 + 1X8X8 = 98 in base 10. Sometimes 2 bytes, or 16 bits, are grouped together to simplify working with computerized data. Not surprisingly, base 16 or hexadecimal mathematics is used. The 16 single character numbers are 0,1,2,3,4,5,6,7,8,9,A,B,C,D,E, and F. The number after F is 10, which can be converted to base 10 as 0X1 + 1X16 = 16. The number A5D03E = 14X1 + 3X16 + 0X16X16 + 13X16X16X16 + 5X16X16X16X16 + 10X16X16X16X16X16 = 10,866,750 in base 10. In the same way any whole number such as 3, 4, 5, 22, 64, 183 or 122,835 could be used as the base of a number system. In fact, ancient Babylonia used a base 60 number system, and this is still commonly used to measure circles (360 degrees) and time (60 seconds and 60 minutes). Getting back to your examples: “And no matter what the size of the number it always goes back to 9!” You are observing a number theoretical property of whole numbers, and “it always goes back to 9” only if you are working in a base 10 number system. There is nothing special about the number 9! If you are working in base 2, it always goes back to 1, in base 8 it always goes back to 7, and in base 16 it always goes back to F. In general, in base b it always goes back to b - 1, because a number minus the sum of its digits is always divisible by b - 1. I have verified that this is true, and I encourage you to do the same. I repeat, there is nothing sacred about the base 10 number system. It was created by humans as an outgrowth of counting on fingers, and its continued use today is merely a convention.
If you are interested in studying these topics further, perform searches on “number theory” and “history of mathematics”. Both subjects are very interesting.
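A short sketch (the helper names are hypothetical) that verifies the b - 1 pattern for several bases and numbers:

```cpp
// In base b, n - digitsum(n) is divisible by b - 1, so its repeated
// digit sum lands on b - 1 (for n >= b). Roots print in decimal, so
// the base-16 result 15 is the digit F.
#include <cstdio>

int digitSum(long long n, int base) {
    int s = 0;
    for (; n > 0; n /= base) s += (int)(n % base);
    return s;
}

int digitRoot(long long n, int base) {
    while (n >= base) n = digitSum(n, base);
    return (int)n;
}

int main() {
    const int bases[] = {8, 10, 16};
    const long long samples[] = {94, 156, 1233465957LL};
    for (int b : bases)
        for (long long n : samples)
            std::printf("base %2d, n = %10lld: root of n - digitsum = %d\n",
                        b, n, digitRoot(n - digitSum(n, b), b));
    return 0;
}
```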
Prokaryotic and Eukaryotic Cells

Part of our definition/description of what it means to be a living thing on Earth includes the assertion that living things are made of cells and cell products. In other words, we consider the cell to be a pretty fundamental structural aspect of life. Cells in our world come in two basic types, prokaryotic and eukaryotic. "Karyose" comes from a Greek word which means "kernel," as in a kernel of grain. In biology, we use this word root to refer to the nucleus of a cell. "Pro" means "before," and "eu" means "true," or "good." So "prokaryotic" means "before a nucleus," and "eukaryotic" means "possessing a true nucleus." This is a big hint about one of the differences between these two cell types. Prokaryotic cells have no nuclei, while eukaryotic cells do have true nuclei. This is far from the only difference between these two cell types, however.

Here's a simple visual comparison between a prokaryotic cell and a eukaryotic cell. This particular eukaryotic cell happens to be an animal cell, but the cells of plants, fungi and protists are also eukaryotic.

Despite their apparent differences, these two cell types have a lot in common. They perform most of the same kinds of functions, and in the same ways. Both are enclosed by plasma membranes, filled with cytoplasm, and loaded with small structures called ribosomes. Both have DNA which carries the archived instructions for operating the cell. And the similarities go far beyond the visible--physiologically they are very similar in many ways. For example, the DNA in the two cell types is precisely the same kind of DNA, and the genetic code for a prokaryotic cell is exactly the same genetic code used in eukaryotic cells. Some things which seem to be differences aren't. For example, the prokaryotic cell has a cell wall, and this animal cell does not. However, many kinds of eukaryotic cells do have cell walls.

Despite all of these similarities, the differences are also clear. It's pretty obvious from these two little pictures that there are two general categories of difference between these two cell types: size and complexity. Eukaryotic cells are much larger and much more complex than prokaryotic cells. These two observations are not unrelated to each other. If we take a closer look at the comparison of these cells, we see a set of specific differences, and examination of these differences is interesting. As mentioned above, they are all associated with larger size and greater complexity.

This leads to an important observation. Yes, these cells are different from each other. However, they are clearly more alike than different, and they are clearly evolutionarily related to each other. Biologists have no significant doubts about the connection between them. The eukaryotic cell is clearly developed from the prokaryotic cell.

One aspect of that evolutionary connection is particularly interesting. Within eukaryotic cells you find a really fascinating organelle called a mitochondrion. And in plant cells, you'd find an additional family of organelles called plastids, the most famous of which is the renowned chloroplast. Mitochondria (the plural of mitochondrion) and chloroplasts almost certainly have a similar evolutionary origin. Both are pretty clearly the descendants of independent prokaryotic cells, which have taken up permanent residence within other cells through a well-known and very common phenomenon called endosymbiosis.

One structure not shown in our prokaryotic cell is called a mesosome. Not all prokaryotic cells have these. The mesosome is an elaboration of the plasma membrane--a sort of rosette of ruffled membrane intruding into the cell. This diagram shows a trimmed-down prokaryotic cell, including only the plasma membrane and a couple of mesosomes. A mitochondrion is included for comparison. The similarities in appearance between these structures are pretty clear. The mitochondrion is a double-membrane organelle, with a smooth outer membrane and an inner membrane which protrudes into the interior of the mitochondrion in folds called cristae. This membrane is very similar in appearance to the prokaryotic plasma membrane with its mesosomes.

But the similarities are a lot more significant than appearance. Both the mesosomes and the cristae are used for the same function: the aerobic part of aerobic cellular respiration. Cellular respiration is the process by which a cell converts the raw, potential energy of food into biologically useful energy, and there are two general types, anaerobic (not using oxygen) and aerobic (requiring oxygen). In practical terms, the big difference between the two is that aerobic cellular respiration has a much higher energy yield than anaerobic respiration. Aerobic respiration is clearly the evolutionary offspring of anaerobic respiration. In fact, aerobic respiration really is anaerobic respiration with additional chemical sequences added on to the end of the process to allow utilization of oxygen (a very common evolutionary pattern--adding new parts to old systems). So it's pretty reasonable of biologists to think that a mitochondrion evolved from a once-independent aerobic prokaryotic cell which entered into an endosymbiotic relationship with a larger, anaerobic cell.

So is there any real evidence that the distant ancestors of mitochondria were independent cells? Quite a lot, actually. And of a very convincing type. Mitochondria (and chloroplasts, for that matter) have their own genetic systems. They have their own DNA, which is not duplicated in the nucleus. That DNA contains a number of the genes which are necessary to make the materials needed for aerobic cellular respiration (or photosynthesis, in the case of the chloroplast). Mitochondrial and chloroplast DNA molecules are naked and circular, like prokaryotic DNA. These organelles also have their own population of ribosomes, which are smaller and simpler than the ribosomes out in the general cytoplasm. Mitochondria and chloroplasts also divide on their own, in a manner similar to the binary fission of prokaryotic cells. Then there's that interesting outer membrane, another feature chloroplasts share with mitochondria. The mechanisms by which large objects enter cells automatically create an outer membrane (actually a part of the big cell's plasma membrane) around the incoming object.

This discussion suggests a very interesting question. Endosymbiosis is a very widespread phenomenon. The more we look, the more examples we find throughout the kingdoms of life. So, if a mitochondrion is the distant descendant of an independent prokaryotic cell, is it then an organism living inside a larger cell? Or is it just a part of that larger cell? Is it an independent organism or not? Before you leap to a conclusion, think a bit. Certainly, mitochondria are absolutely dependent upon the cells in which they reside. Like any long-time endosymbiont, they long ago gave up many of the basic life processes needed for independent life. And the cells in which they reside are completely dependent upon their mitochondria, because the anaerobic respiration they could do without the mitochondria wouldn't provide nearly enough energy for the cell's needs. In fact, it's very probable that the evolution of big, complex eukaryotic cells wasn't possible until the "invention" of aerobic respiration.

But there are many endosymbiotic relationships in nature which are just as interdependent. For example, no termite could survive without the population of endosymbionts that lives inside its guts, digesting its woody diet for it. And the protists and bacteria that make up that population can't survive outside the termite. Complete interdependency. Now, the termite and its passengers look a lot more like independent creatures to us than a cell and its mitochondria. But they are actually no more independent of each other. So if we decide that the mitochondrion is just a part of the cell, then don't we have to also decide that the endosymbionts inside the termite's guts are just parts of the termite? If not, how do we justify insisting that there's a difference?

Before you get too frustrated trying to sort this out, allow me to relieve your mind. There is, in fact, no answer to this question. Just the reinforcement of a very important lesson. Despite our human need to sort our world into neat, clean categories, the real universe often doesn't cooperate, and this is just such a case. We want to be able to decide "two separate organisms" or "parts of the same organism" in cases like this, but reality shows us that there are many situations which fall somewhere between these two categories. This is a lesson we learned when we examined the "alive" vs "not alive" issue, and again when we tried to decide how to functionally describe species. We want neat categories; nature doesn't cooperate.
What is a DLL?

A DLL, short for Dynamic Link Library, is a library that contains both code and data that can be used as a shared DLL or resource DLL by more than one program or application at the same time. Using Windows DLLs or DLL libraries allows the developer to make more efficient use of memory and disk space, and it allows for a more modular approach to application design that encourages componentization and code re-use. A DLL library can be used as a software component to add functionality to your application or project without the need to spend the time to write all of the code yourself.

You will find .NET DLL products and C# DLL products here for a variety of common tasks, both visual and non-visual. For example, you will find datagrid DLLs that enable you to provide data grids or present datatables in your app. Or, if you need full calc-engine functionality, there are spreadsheet DLLs available too. If you are looking for charting DLLs or graphing DLLs, you will find several products here to include in your application to draw pie charts, bar charts, line graphs, etc. Other visual DLL libraries listed here include: a DLL editor for text editing and word processing, calendar DLLs for planning and scheduling, Computer Aided Design or CAD DLLs, image processing DLLs and image conversion DLLs, DLL libraries for document imaging and scanning, OCR DLLs for optical character recognition projects, complex DLLs for multimedia and video compression tasks, and a simple dynamic DLL to display a barcode.

You will also find non-visual dynamic DLL products that work in the background to add functionality to your application, such as sending and receiving emails via SMTP or POP, or transferring files via FTP or HTTP over the Internet. There are also non-visual data reporting DLLs to create daily management reports or scheduled MIS reports overnight, or to create PDF files in batch mode using a PDF DLL library to carry out the file format and conversion task for you. You will also find similar C++ DLL or CPP DLL products or DLL libraries for your Visual C++ application, or the equivalent Borland library for your BC++ project code.

Microsoft Visual Studio 2005 (MS Visual Studio 2005) allows Visual Studio 2005 programmers to create Windows applications for their end users fast. Visual Studio 2005 developers can extend the standard functionality available inside the Visual Studio 2005 IDE with a variety of Visual Studio 2005 add-ins and Visual Studio 2005 tools. The extensibility of Visual Studio 2005 software is one of the main reasons it has proved to be so popular with developers, as Visual Studio 2005 software engineers can find Visual Studio 2005 downloads from other companies or other developers to act as a Visual Studio 2005 extension to their Visual Studio 2005 IDE. The versatility of Visual Studio 2005 also extends to various forms of Visual Studio 2005 software components. Visual Studio 2005 controls can be used to create feature-rich Visual Studio 2005 user interfaces on forms and Web pages for Visual Studio 2005 apps. These Visual Studio 2005 UI controls are augmented by non-visual Visual Studio 2005 components or Visual Studio 2005 libraries that can help a developer add many hidden features to Visual Studio 2005 applications running in the background.

These non-visual components are commonly available as a Visual C++ library, Visual C++ Class library, Visual Basic library, Visual Basic Class library, or as a Visual Basic custom control or ActiveX/OCX component. An example of a visual Visual Studio 2005 control is Janus GridEx for .NET, a .NET UI control that allows you to create an MS Outlook style or look and feel for your latest Visual Basic project. A non-visual .NET component example is a Visual Studio 2005 compatible product that allows you to create and output files in different formats, such as PDF, XPS, PostScript, RTF, HTML, and XML, to help solve file conversion and document storage needs. Another example of a visual Visual Studio 2005 control is BCGControlBar Professional, a Visual C++ Class Library that allows you to create an MS Office Ribbon style or look and feel for your latest Visual C++ project. A non-visual Visual C++ Class Library component example is a Visual C++ compatible product called IP*Works! C++ Edition, which allows you to send and receive emails via SMTP and POP or to transfer files reliably using HTTP or FTP Internet protocols from within your Visual C++ program.

The Visual Studio 2005 software products listed in this Visual Studio 2005 product gallery will allow you to save a lot of time and effort in creating your new Visual Studio 2005 app. Whether you are looking for a Visual Studio 2005 plug-in or Visual Studio 2005 utility, or you are looking for a Visual Studio 2005 control or a Visual Studio 2005 component, you will be able to find a wide variety of Visual Studio 2005 tools to help you finish your project faster. We also have other Visual Studio product galleries for: Visual Studio 2010, Visual Studio 2008, Visual Studio .NET and Visual Studio compatible products.
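To make the "shared library of callable code" idea concrete, here is a minimal sketch of loading a DLL and calling one of its exported functions from Python's ctypes module (my illustration, not a product from this gallery; user32.dll and MessageBoxW are standard parts of Windows, so this assumes a Windows machine):

    import ctypes

    # Load a system DLL and declare the signature of one exported function.
    user32 = ctypes.WinDLL("user32")
    user32.MessageBoxW.argtypes = [ctypes.c_void_p, ctypes.c_wchar_p,
                                   ctypes.c_wchar_p, ctypes.c_uint]
    user32.MessageBoxW.restype = ctypes.c_int

    # hwnd=None means no owner window; the final 0 requests a plain OK button.
    user32.MessageBoxW(None, "Hello from a DLL call", "Demo", 0)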
Common Lisp the Language, 2nd Edition

The following function may be used to obtain a type specifier describing the type of a given object.

(type-of object)

returns an implementation-dependent result: some type of which the object is a member. Implementors are encouraged to arrange for type-of to return the most specific type that can be conveniently computed and is likely to be useful to the user. If the argument is a user-defined named structure created by defstruct, then type-of will return the type name of that structure. Because the result is implementation-dependent, it is usually better to use type-of primarily for debugging purposes; however, in a few situations portable code requires the use of type-of, such as when the result is to be given to the coerce or map function. On the other hand, often the typep function or the typecase construct is more appropriate than type-of.

Many have observed (and rightly so) that this specification is totally wimpy and therefore nearly useless. X3J13 voted in June 1989 (TYPE-OF-UNDERCONSTRAINED) to place the following constraints on type-of. For any object x belonging to one of the types

array, bit-vector, character, complex, condition, cons, double-float, float, function, hash-table, integer, long-float, null, number, package, pathname, random-state, ratio, rational, readtable, restart, sequence, short-float, single-float, stream, string, symbol, vector

(subtypep (type-of x) type) must return the values t and t; that is, type-of applied to x must return either type itself or a subtype of type that subtypep can recognize in that implementation.

As an example, (type-of "acetylcholinesterase") may return string or simple-string or (simple-string 20), but not array or simple-vector. As another example, it is permitted for (type-of 1729) to return integer or fixnum (if it is indeed a fixnum) or (signed-byte 16) or (integer 1729 1729) or (integer 1685 1750) or even (mod 1730), but not rational or number, because (typep (+ (expt 9 3) (expt 10 3)) 'integer) is true, integer is in the list of types mentioned above, and (subtypep (type-of (+ (expt 1 3) (expt 12 3))) 'integer) would be false if type-of were to return rational or number.
Threats to Polar Bears

The most serious threat to polar bears today is climate change. As temperatures in the Arctic continue to get warmer, the sea ice that polar bears rely on for survival melts earlier each spring and forms later each fall. In 2008, the U.S. Fish and Wildlife Service listed the polar bear in Alaska as threatened, the first listing under the Endangered Species Act attributed primarily to climate change. Oil and gas development also poses a major risk to polar bears, particularly the threat of oil spills. There is still no proven method of cleaning up oil in broken sea-ice conditions, and an oil spill would not only directly harm polar bears, but would also deplete their prey and contaminate their habitat.
The Binomial Distribution

Many types of probability problems have only two outcomes, or they can be reduced to two outcomes. For example, when a coin is tossed, it can land heads or tails. When a baby is born, it will be either male or female. In a basketball game, a team either wins or loses. A true-false item can be answered in only two ways, true or false. Other situations can be reduced to two outcomes. For example, a medical treatment can be classified as effective or ineffective, depending on the results. A person can be classified as having normal or abnormal blood pressure, depending on the measure of the blood pressure gauge. A multiple-choice question, even though there are four or five answer choices, can be classified as correct or incorrect. Situations like these are called binomial experiments.

A binomial experiment is a probability experiment that satisfies the following four requirements:

1. Each trial can have only two outcomes, or outcomes that can be reduced to two outcomes. These outcomes can be considered as either success or failure.
2. There must be a fixed number of trials.
3. The outcomes of each trial must be independent of each other.
4. The probability of a success must remain the same for each trial.

A binomial experiment and its results give rise to a special probability distribution called the binomial distribution. The outcomes of a binomial experiment and the corresponding probabilities of these outcomes are called a binomial distribution. In binomial experiments, the outcomes are usually classified as successes or failures. For example, the correct answer to a multiple-choice item can be classified as a success, but any of the other choices would be incorrect and hence classified as a failure. The notation that is commonly used for binomial experiments and the binomial distribution is defined next.

Notation for the Binomial Distribution

P(S): the symbol for the probability of success
P(F): the symbol for the probability of failure
p: the numerical probability of a success
q: the numerical probability of a failure, so that P(S) = p and P(F) = 1 - p = q
n: the number of trials
X: the number of successes

Note that 0 ≤ X ≤ n. The probability of a success in a binomial experiment can be computed with the following formula.

Binomial Probability Formula: In a binomial experiment, the probability of exactly X successes in n trials is

P(X) = [n! / ((n - X)! X!)] p^X q^(n - X)

An explanation of why the formula works will be given in the following example.

A coin is tossed three times. Find the probability of getting exactly two heads. This problem can be solved by looking at the sample space: HHH, HHT, HTH, THH, TTH, THT, HTT, TTT. There are three ways to get two heads, so the answer is 3/8, or 0.375.

Looking at the problem in the previous example from the standpoint of a binomial experiment, one can show that it meets the four requirements.

1. There are only two outcomes for each trial, heads or tails.
2. There is a fixed number of trials (three).
3. The outcomes are independent of each other (the outcome of one toss in no way affects the outcome of another toss).
4. The probability of a success (heads) is 1/2 in each case.

In this case, n = 3, X = 2, p = 1/2, and q = 1/2. Hence, substituting in the formula gives

P(2) = [3! / (1! 2!)] (1/2)^2 (1/2)^1 = 3/8

which is the same answer obtained by using the sample space. The same example can be used to explain the formula. First, note that there are three ways to get exactly two heads and one tail from a possible eight ways. They are HHT, HTH, and THH. In this case, then, the number of ways of obtaining two heads from three coin tosses is 3!/(1! 2!), or 3. In general, the number of ways to get X successes from n trials without regard to order is

n! / ((n - X)! X!)

This is the first part of the binomial formula. (Some calculators can be used for this.) Next, each success has a probability of 1/2 and can occur twice. Likewise, each failure has a probability of 1/2 and can occur once, giving the (1/2)^2 (1/2)^1 part of the formula. To generalize, then, each success has a probability of p and can occur X times, and each failure has a probability of q and can occur (n - X) times. Putting it all together yields the binomial formula.

If a student randomly guesses at five multiple-choice questions, find the probability that the student gets exactly three correct. Each question has five possible choices. In this case n = 5, X = 3, and p = 1/5, since there is one chance in five of guessing a correct answer. Then,

P(3) = [5! / (2! 3!)] (1/5)^3 (4/5)^2 ≈ 0.051

A survey from Teenage Research Unlimited (Northbrook, Ill.) found that 30% of teenage consumers receive their spending money from part-time jobs. If five teenagers are selected at random, find the probability that at least three of them will have part-time jobs. To find the probability that at least three have a part-time job, it is necessary to find the individual probabilities for 3, 4, and 5 and then add them to get the total probability.

P(3) = [5! / (2! 3!)] (0.3)^3 (0.7)^2 = 0.132
P(4) = [5! / (1! 4!)] (0.3)^4 (0.7)^1 = 0.028
P(5) = [5! / (0! 5!)] (0.3)^5 (0.7)^0 = 0.002

P(at least three teenagers have part-time jobs) = 0.132 + 0.028 + 0.002 = 0.162

Mean, Variance, and Standard Deviation for the Binomial Distribution

The mean, variance, and standard deviation of a variable that has the binomial distribution can be found by using the following formulas:

mean: μ = np
variance: σ² = npq
standard deviation: σ = √(npq)

These formulas are algebraically equivalent to the formulas for the mean, variance, and standard deviation of the variables for probability distributions, but because they are for variables of the binomial distribution, they have been simplified using algebra. The algebraic derivation is omitted here, but their equivalence is shown in the next example.

A coin is tossed four times. Find the mean, variance, and standard deviation of the number of heads that will be obtained. With the formulas for the binomial distribution and n = 4, p = 1/2, and q = 1/2, the results are

μ = np = 4 × 1/2 = 2
σ² = npq = 4 × 1/2 × 1/2 = 1
σ = √1 = 1

From the previous example, when four coins are tossed many, many times, the average of the number of heads that appears is two, and the standard deviation of the number of heads is one. Note that these are theoretical values. As stated previously, this problem can be solved by using the expected value formulas. The distribution is shown as follows:

No. of heads, X:    0     1     2     3     4
Probability P(X):  1/16  4/16  6/16  4/16  1/16

Hence, the simplified binomial formulas give the same result.
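The worked examples above are easy to verify numerically. A short Python sketch (mine, not part of the original text) using the standard library's comb function:

    from math import comb, sqrt

    def binomial_pmf(x, n, p):
        """P(exactly x successes in n independent trials, success probability p)."""
        return comb(n, x) * p**x * (1 - p)**(n - x)

    print(binomial_pmf(2, 3, 0.5))   # 0.375: two heads in three coin tosses
    print(binomial_pmf(3, 5, 0.2))   # 0.0512: three correct guesses out of five

    # At least three of five teenagers with part-time jobs:
    print(sum(binomial_pmf(x, 5, 0.3) for x in (3, 4, 5)))   # ~0.163 (0.162 with the rounded terms)

    # Mean and standard deviation for four coin tosses:
    n, p = 4, 0.5
    print(n * p, sqrt(n * p * (1 - p)))   # 2.0 1.0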
Carbon Capture Technologies that Could Help Fight Climate Change

Evolving technology could make cleaning the air more profitable than fouling it, says a Columbia University economist.

In the wake of the hottest and driest summer in memory throughout much of North America, and Super-storm Sandy that flooded cities and ravaged large swaths of the Mid-Atlantic coast, many now recognize that climate change isn't just real, but that it is already at our doorstep. As this realization continues to sink in, the political will may ripen to take more aggressive action to put a brake on CO2 emissions. Already, President Obama, who had remained mostly silent on the issue during his reelection campaign, has made it clear that tackling climate change will be among his top second-term priorities.

But the fact remains that even if the entire world switched magically to 100 percent solar and other non-polluting power sources tomorrow, it's too late to roll back some of the impacts of climate change. The current level of carbon dioxide in the air is already well beyond what scientists regard as the safe threshold. If we remain on our present course, scientists say, CO2 levels will continue to rise sharply for years to come.

Climatologists tell us that the climate change train has long since left the station, but perhaps it is not yet too late to prevent it from accelerating beyond our capacity to cope. There are technologies now being developed which could cut the rate of increase of greenhouse gases, and even potentially return Earth's atmosphere to preindustrial levels of CO2. Better yet, the price tag for implementing them may not be all that great, especially when compared to the mounting costs of continuing down our present course. Best of all, say two scientists who are making these astonishing claims, we don't have to cut out fossil fuels entirely to accomplish it.

I met with Dr Klaus Lackner and Allen Wright at Columbia University's Earth Institute, where they are working on a new "carbon capture" project which involves literally sucking carbon dioxide out of the atmosphere. The duo conduct their research in a room less than half the size of most high school chemistry labs, but teeming with vials, beakers, meters, gas canisters and other devices unnameable by a social science major like myself. One of the tables held an array of cream-colored plastic doodads that looked like miniature shag rugs, scrub brushes and cylindrical Christmas ornaments. A smiling Lackner handed me an object shaped like the tuft of needles at the end of a pine branch. Only instead of needles, they were thin streamers impregnated with sodium carbonate which chemically "mops up" CO2 from the air.

Read more at Earth Island Journal.
Solutes, such as proteins or simple ions, dissolve in a solvent such as water, raising the concentration of the solute in those regions. The solvent then diffuses toward regions of higher solute concentration, equalizing the concentration of the solute throughout the solution.

Example of osmosis

A practical example of osmosis in cells can be seen in red blood cells. These contain a high concentration of solutes, including salts and protein. When the cells are placed in a dilute solution, water rushes in to the area of high solute concentration, bursting the cell. Many plant cells do not burst in the same experiment. This is because the osmotic entry of water is opposed, and eventually equalled, by the pressure exerted by the cell wall, creating a steady state. In fact, osmotic pressure is the main cause of support in plant leaves. When a plant cell is placed in a solution higher in solutes than inside the cell, osmosis out of the cell occurs. The water in the cell moves to the area of higher solute concentration, and the cell shrinks and so becomes flaccid. This means the cell has become plasmolysed: the cell membrane has completely left the cell wall due to lack of water pressure on it.

When a solute is dissolved in a solvent, the random mixing of the two species results in an increase in the entropy of the system, which corresponds to a reduction in the chemical potential. For the case of an ideal solution, the reduction in the chemical potential of the solvent corresponds to

$\Delta\mu = RT\ln(1 - x_2)$   (Equation 1)

where $x_2$ is the mole fraction of the solute. As mentioned before, osmosis can be opposed by increasing the pressure in the region of high solute concentration with respect to that in the low solute concentration region. The pressure differential at which the flow of solvent through the membrane is stopped is called the osmotic pressure or turgor. Increasing the pressure increases the chemical potential of the system in proportion to the molar volume ($\delta\mu = V_m\,\delta P$). Therefore, osmosis stops when the increase in potential due to pressure equals the potential decrease from Equation 1, i.e.

$\Pi V_m = -RT\ln(1 - x_2)$   (Equation 2)

For the case of very low solute concentrations, $\ln(1 - x_2) \approx -x_2$ and Equation 2 can be rearranged into the following expression for osmotic pressure:

$\Pi = \frac{x_2 RT}{V_m} \approx cRT$

where $c$ is the molar concentration of the solute. The osmosis process can be driven in reverse, with solvent moving from a region of high solute concentration to a region of low solute concentration, by applying a pressure in excess of the osmotic pressure. This reverse osmosis technique is commonly applied to purify water.
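As a worked example of the van 't Hoff relation above, here is a small Python sketch (illustrative only; the saline figures are my own choice, not from the text):

    R = 8.314     # gas constant, J/(mol K)
    T = 298.15    # room temperature, K

    def osmotic_pressure(molarity, ions_per_formula=1):
        """Van 't Hoff estimate Pi = i*c*R*T, with c converted to mol/m^3; returns Pa."""
        c = molarity * 1000.0               # mol/L -> mol/m^3
        return ions_per_formula * c * R * T

    # 0.15 mol/L NaCl dissociates into two ions (Na+ and Cl-):
    print(osmotic_pressure(0.15, ions_per_formula=2) / 101325.0, "atm")   # ~7.3 atm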
9.3 Nonuniform Circular Motion

What about nonuniform circular motion? Although so far we have been discussing components of vectors along fixed x and y axes, it now becomes convenient to discuss components of the acceleration vector along the radial line (in-out) and the tangential line (along the direction of motion). For nonuniform circular motion, the radial component of the acceleration obeys the same equation as for uniform circular motion, a_r = |v|^2/r, but the acceleration vector also has a tangential component:

a_t = slope of the graph of |v| versus t.

The latter quantity has a simple interpretation. If you are going around a curve in your car, and the speedometer needle is moving, the tangential component of the acceleration vector is simply what you would have thought the acceleration was if you saw the speedometer and didn't know you were going around a curve.

Example: Slow down before a turn, not during it.

Question: When you're making a turn in your car and you're afraid you may skid, isn't it a good idea to slow down?

Solution: If the turn is an arc of a circle, and you've already completed part of the turn at constant speed without skidding, then the road and tires are apparently capable of enough static friction to supply an acceleration of |v|^2/r. There is no reason why you would skid out now if you haven't already. If you get nervous and brake, however, then you need to have a tangential acceleration component in addition to the radial component you were already able to produce successfully. This would require an acceleration vector with a greater magnitude, which in turn would require a larger force. Static friction might not be able to supply that much force, and you might skid out. As in the previous example on a similar topic, the safe thing to do is to approach the turn at a comfortably low speed.

An object moving in a circle may speed up (top), keep the magnitude of its velocity vector constant (middle), or slow down (bottom).
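To make the example concrete, here is a small numeric sketch (my own numbers, not from the text) for a car that brakes while rounding a curve; the radial part is |v|^2/r and the tangential part is the rate of change of speed:

    import math

    def acceleration_components(speed, radius, dspeed_dt):
        """Radial and tangential acceleration for motion along a circular arc."""
        a_radial = speed**2 / radius    # points toward the center of the circle
        a_tangential = dspeed_dt        # slope of the graph of |v| versus t
        return a_radial, a_tangential

    # 20 m/s on a 50 m radius curve while braking at 3 m/s^2:
    a_r, a_t = acceleration_components(20.0, 50.0, -3.0)
    print(a_r, a_t, math.hypot(a_r, a_t))   # 8.0 -3.0 ~8.54: braking demands a larger total force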
This may seem obvious. But in evolutionary terms, the benefits of sexual reproduction are not immediately clear. Male rhinoceros beetles grow huge, unwieldy horns half the length of their body that they use to fight for females. Ribbon-tailed birds of paradise produce outlandish plumage to attract a mate. Darwin was bothered by such traits, since his theory of evolution couldn’t completely explain them (“The sight of a feather in a peacock’s tail, whenever I gaze at it, makes me feel sick!” he wrote to a friend).
Hello, I'm just reading my text on harmonic series and I'm having trouble with the part where they explain the inequalities. For example:

s2 = 1 + 1/2 > 1/2 + 1/2 = 2/2

s4 = s2 + 1/3 + 1/4 > s2 + (1/4 + 1/4) = s2 + 1/2 > 3/2

So my question is, where are they getting the two halves in s2 and the two quarters in s4? Or how do they know to compare the partial sums to those numbers?
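One way to see where those halves and quarters come from is to check the grouping numerically; a quick Python sketch (mine, not from the textbook) compares the partial sums s_{2^n} against the bound (n + 1)/2:

    from fractions import Fraction

    def partial_sum(n):
        """s_n = 1 + 1/2 + ... + 1/n, computed exactly."""
        return sum(Fraction(1, k) for k in range(1, n + 1))

    # Each block 1/(2^k + 1) + ... + 1/2^(k+1) has 2^k terms, every one of
    # them >= 1/2^(k+1), so each block contributes at least 1/2 in total.
    for n in range(1, 6):
        s = partial_sum(2**n)
        print(2**n, float(s), s > Fraction(n + 1, 2))   # always True; the sums grow without bound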
A simple demonstration of Newton's 3rd law of motion in action. A rubber band under tension is cut, launching both the sled and the launching mechanism in equal and opposite directions.

Entries in motion (11)

This is a fun and easy demo to teach what happens to objects when forces are acting in the same direction. A series of balls are dropped together, one on top of another. The forces are added together, creating a larger force that propels the top ball to a much greater height.

Inertia is the tendency of an object to resist a change in its motion. An apple and a knife are moving downward. The counter-top applies an unbalanced force upward on the knife. The knife decelerates, but the apple does not. The apple decelerates only when the knife handle applies a force upward on it.

According to Newton's first law of motion, an object moving at constant velocity will continue to move at constant velocity unless an outside unbalanced force is applied. This is why moving objects on Earth will always slow down and stop. The outside unbalanced force that does it is friction. Watch the video to see how friction affects the motion of a marble on a track.

Speed is the rate at which an object covers a distance. How do you know if that rate changes or not? The video shows how to determine if the speed of an object is constant.
Polar bear scientist Steve Amstrup was stunned when he learned he had won the 2012 Indianapolis Prize, the Nobel Prize of conservation. "You could have bowled me over with a feather," he said.

The award from the Indianapolis Zoo, and the $100,000 that goes with it, is given every other year to a person who has made extraordinary contributions to the conservation of a single animal species or multiple species. No one has done more to save polar bears than Amstrup, who has been studying the animals and their habitat since 1980. He has called attention to their plight and the grim future they face due to the disappearance of Arctic ice caused by climate change. He projects that two-thirds of the world's polar bear population could disappear by midcentury unless climate change is slowed, and that the bears could be extinct by the end of the century.

In 2007, he led an international team of researchers to the Arctic. The group's reports led to the 2008 listing of polar bears as a threatened species under the Endangered Species Act. The bears are the first and only species to be listed as endangered because of threats posed by global warming.

Amstrup, now chief scientist at Polar Bears International, is not only the most prominent defender of the polar bear, he's become one of the world's most cogent and forceful speakers on climate change. InsideClimate News caught up with him via email at his home in Kettle Falls, Wash., shortly after he was announced winner of the zoo prize.

ICN: What have you learned from your many years with the bears that you didn't expect to learn?

Amstrup: Early on, I learned how mobile they are, [that] they have the largest home ranges of all four-legged animals. I also learned that many of them have (or at least used to have) their cubs in dens constructed on drifting pack ice. These maternal females would be transported in the blind hundreds of kilometers. They would emerge with new tiny cubs and know exactly how to come back home. Later I realized just how vulnerable they are to a pack ice environment that was literally transformed during my research career. I never would or could have projected I would see that magnitude of change in my working lifetime.

ICN: Your efforts helped put the polar bear on the endangered species list. That milestone must have been a long time coming. Did you always have faith that it would eventually happen?

Amstrup: I have never had a goal of listing bears. My goal was only to understand their ecology and describe their present and future welfare. The understanding I gained from objective scientific inquiry led to the classification. But, the legal designation as threatened is not nearly as important as the scientific realization that the future of polar bears is in jeopardy if we humans do not change our ways.

ICN: Other than stopping or at least slowing climate change, what other measures must be taken to save the polar bear?

Amstrup: My work has shown clearly that without stopping greenhouse gas rise, no other management actions can make a difference. If, however, we mitigate GHG rise, on-the-ground management like establishing protective zones, etc. can help. The problem is that many have become fixated on the prospect of setting up refuges, establishing critical habitats, regulating hunting, etc. and those topics can become dangerous distractions from the real concern and the only thing that can really save polar bears. If we allow ourselves to be distracted from the mission of reducing GHG emissions, we surely will become polar bear historians rather than polar bear conservationists.

ICN: Besides your effort to help save the polar bear and its habitat, you've been praised for your ability to make complex scientific concepts digestible to the general public. Did you come by this ability naturally?

Amstrup: I think one of my greatest strengths always has been recognizing what is important in a cast of problems or issues and being able to simply and elegantly explain and describe it. It was clear to me from my earliest days as a professional biologist that if I could not communicate what I knew to all audiences, it was of little value. So, this sort of communication always has been a focus, but it is not necessarily something I had to practice. That isn't to say that I don't learn from my experiences how to do it better!

ICN: What are some of the pitfalls that researchers and scientists tend to fall into when trying to explain science to the lay person?

Amstrup: To a scientist, the devil is always in the details, and the questions are always about the uncertainties: how can we reduce our error bars or sharpen our projections? For the public and policymakers, however, the certainties are the main interest. And, in the current "sound bite" environment, we cannot (and do not have time to) get into the uncertainties if we are to leave the audience with what we know rather than what we don't know. Global warming is perhaps the best case of this.
Tutorial on programming devices?

Wouter van Marle wouterm at spammers-unite-here.com
Wed Mar 19 09:57:19 CET 2003

www.python.org is a very valuable source on these things. And besides that, isn't reading/writing from some port (almost) the same as reading/writing to a file? Just use /dev/ttySxx (with xx your port number) as the filename. Supposed to work like that (never tried it myself). Just a thought.

Oh, if you are not in Linux/Unix/etc. it will probably be different. In DOS I had to send commands directly to a port (that was a project using Turbo Pascal; it felt to me like a pretty 'hacky' way of reading and writing your data, using interrupts and all).

Good luck anyway!

"Enno Middelberg" <emiddelb at mpifr-bonn.mpg.de> wrote in message news:slrnb7gao8.f80.emiddelb at pc069.MPIfR-Bonn.MPG.de...
> I want to write some code to read data from my GPS receiver. The
> signals will arrive via a serial link which I want to process and
> I do not have a clue on how to read/write from/to devices using
> Python. So far, I have only written scripts to process data stored in
> files. Does anybody know a good starting point for reading or a
> Many thanks,
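In that spirit, a minimal sketch of reading NMEA sentences from a GPS on a serial port (illustrative only; the port name is a guess, and it assumes the third-party pyserial package rather than anything mentioned in the thread):

    import serial   # third-party "pyserial" package

    # Typical GPS receivers speak NMEA 0183 at 4800 baud.
    ser = serial.Serial('/dev/ttyS0', 4800, timeout=1)   # adjust the port to your setup
    try:
        for _ in range(10):
            line = ser.readline().decode('ascii', errors='replace').strip()
            if line.startswith('$GPGGA'):   # position fix: time, latitude, longitude, ...
                print(line)
    finally:
        ser.close()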
Ants in the genus Aphaenogaster are medium-sized to large and slender, with long legs and antennae. Most species have propodeal spines (a few lack them), and the 12-segmented antennae have the last 4 segments forming a weak club. The genus is widespread in North America, and species nest in rotting wood, under bark, and in soil.

Aphaenogaster treatae is a large, dark reddish-brown species. Workers and queens have a distinctive lobe at the base of the scape, usually extending at least one-fourth the length of the scape. The lobe is thick, with its upper face forming an obtusely projecting angle in the middle (as seen from the side). Aphaenogaster ashmeadi, a rarely collected species in AL and MS, also has a lobe at the base of the scape, but the lobe usually extends rearward along only the basal fifth (or less) of the scape and is flat and thin (as seen from the side).

Biology and Economic Importance

Aphaenogaster treatae is a common species in this region in prairies and open woodland habitats, where it nests in the soil. In AL and MS, this species can usually be identified in the field by its size.
An object is undergoing simple harmonic motion with period 1.2 s and amplitude 0.6 m. At $t=0$, the object is at $x=0$. How far is the object from the equilibrium position when $t=0.480$ s?

Attempt at solution: I used the displacement equation $x = A\cos(\omega t + \phi)$ and also found what the angular frequency $\omega$ is (5.2 rad/s). Then I found that $\phi$ is 0. I plugged my results into the equation, and when I looked at the solution they used the following equation instead: $x = A\sin(\omega t)$. I'm not quite sure how I was supposed to know to use this equation instead of the cosine equation that is written on my equation sheet.
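A quick numeric check of the two forms (my own sketch, using the numbers from the problem statement): with x(0) = 0 the sine form fits directly, while the cosine form needs phi = -pi/2, not phi = 0.

    import math

    A = 0.6                    # amplitude, m
    T = 1.2                    # period, s
    w = 2 * math.pi / T        # angular frequency, ~5.24 rad/s
    t = 0.480

    print(A * math.sin(w * t))                  # ~0.353 m: the book's sine form
    print(A * math.cos(w * t - math.pi / 2))    # same value: cosine with phi = -pi/2
    print(A * math.cos(w * t))                  # ~-0.485 m: phi = 0 violates x(0) = 0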
Most astronomers believe that the massive moons of Jupiter, including Io, Europa, Ganymede and Callisto, were born from a large gaseous disk that surrounded the planet during its formation. However, until now, the processes that brought smaller moons, particularly those around Uranus, Neptune and Pluto, into existence have remained a mystery. Aurélien Crida and Sébastien Charnoz now suggest that most moons in the solar system were born from massive ring systems that once surrounded the planets. Read more about this research from the 30 November issue of Science here.
Here is a list of some of the best books and most reliable web sites containing information on the history of mathematics and, in particular, the history of calculus. Bear in mind that books are edited and reviewed, whereas web sites are not subject to professional scrutiny and so their accuracy is not guaranteed.

Carl Boyer, The History of the Calculus and Its Conceptual Development (New York: Dover, 1959).
Carl Boyer and Uta Merzbach, A History of Mathematics (New York: Wiley, 1987).
C. H. Edwards, The Historical Development of the Calculus (New York: Springer-Verlag, 1979).
Howard Eves, An Introduction to the History of Mathematics, 6th ed. (New York: Saunders, 1990).
C. C. Gillispie, ed., Dictionary of Scientific Biography (New York: Scribner's, 1974).
Judith V. Grabiner, The Origins of Cauchy's Rigorous Calculus (Cambridge, MA: The MIT Press, 1981).
Victor J. Katz, A History of Mathematics: An Introduction (New York: HarperCollins, 1993).
Morris Kline, Mathematical Thought from Ancient to Modern Times (New York: Oxford University Press, 1972).
Dirk J. Struik, A Concise History of Mathematics, 3rd ed. (New York: Dover, 1967).
John Fauvel and Jeremy Gray, eds., The History of Mathematics, A Reader (London: MacMillan Press, 1987).
D. E. Smith, ed., A Sourcebook in Mathematics (New York: Dover, 1959).
D. J. Struik, ed., A Sourcebook in Mathematics, 1200-1800 (Princeton, N.J.: Princeton University Press, 1969).

Web Sites

St. Andrews MacTutor History of Mathematics: To find material on the history of calculus, click on History Topics Index, then on Analysis, and then on History of Calculus. This site also contains biographies of mathematicians, a chronology (timeline) of important events in the history of mathematics, and an interactive index of famous curves.

Trinity College, Dublin, History of Mathematics archive: This site contains a wealth of information concerning Sir Isaac Newton and Bernhard Riemann (including excerpts from their original works), as well as accounts of the lives and works of many 17th and 18th century mathematicians adapted from W.W. Rouse Ball's A Short History of Mathematics. There are also links to more than 200 other web sites relevant to the history of mathematics.

British Society for the History of Mathematics: This site contains well-annotated links to more than 70 sites organized into 16 categories.

David Joyce's History of Mathematics Web Resources: This site is valuable not only for its links to various web resources, but also for its extensive bibliographies of sourcebooks and other books on the history of mathematics.

Jeff Miller's History of Mathematical Notation: This site gives the earliest uses of various mathematical symbols, including those of calculus, and the contexts in which they occurred.

Jeff Miller's History of Mathematical Words: This site shows the earliest known uses of some of the words of mathematics, some with direct quotations from the mathematicians who coined them.
Simula introduced OO in the 60s. Smalltalk took it to its logical and pure extreme in the 80s. C++ brought it to systems programming and gave it the performance that only static optimized code can enjoy. Really smart people wrote really smart code using really powerful, really futuristic features in C++... and created great big steaming piles of crap. Efforts to create any reasonable operating system or database system using C++ failed. Telephone switches, previously infallible, failed spectacularly in cascade. Countless klunky, slow, buggy, bloated, unmaintainable Windows apps were written. Government agencies swore off of it in favor of other systems, even avoiding systems with OO at all, favoring APL or C. Windows programmers eschewed the complexity of C++ in favor of VisualBasic.

This created a bit of a paradox. Was it the language responsible for all of these flawed designs and executions? Did adding objects, destructors, multiple inheritance, and so on, create a language that it just isn't possible to write clean code in? For a long time, people thought so. "C++ is a messy language", they said, conflating the learning curve of the syntax of the language with learning to design objects and APIs that made sense. Gosling and those behind Java seemed to think so. They threw away multiple inheritance, operator overloading, and a pile of other things. For a time, Java code was clean. So clean that they started teaching it in schools. Projects started demanding it and all across the world, people with no prior programming knowledge quickly ramped up on the language. They joined the workforce and wrote buggy, overly complex, unmaintainable code.

Simula's idea of OO was to provide a new abstraction to programmers with which to better model the interactions of the parts of a complex system. If a programmer could conceptualize the program in a way that drew parallels to real objects acting on each other, the objects and their interactions could all be better understood. In so much as that's true, there's nothing wrong with OO and no reason it should lead to unmaintainable code. Much earlier, Donald Knuth wrote _Literate Programming_. Local variables and functions were the celebrated abstraction with which people were making huge messes. Knuth sat down and asked what was going wrong and speculated about how those problems might be avoided. It proved far easier to offer people a new abstraction that they haven't yet ruined than to get them to examine the psychological, social, and technical forces driving them to write crap.

When the C++ guys realized that not only were they writing terrible code but that they were predictably and consistently writing terrible code, they too sat down and put some of their profound intelligence into asking why this was. This was the birth of the field of "Object Oriented Design and Analysis". There were a lot of things that people tried to do with objects that just didn't make sense and didn't work in the long run. There are a lot of early warning signs of failures to conceptualize things well and map them to objects. The Java guys, determined not to repeat the mistakes of the C++ guys, adopted less analytical, more rule-of-thumb versions of this and called them "Design Patterns", "Code Smells", and so on. They fairly successfully marketed to their own ranks the ideas of studying design for design's sake rather than merely learning a language.

The Perl camp briefly and very superficially flirted with these ideas too, but the glamour wore off; sitting down and just winging it and seeing where it goes is just so gosh darn much fun. History is boring. Of course, if you've read history, repeating it is boring. Even this sort of backfired for the Java camp; understanding so well how to build certain types of abstractions, they went nuts building them all over the place and then managed to construct great big steaming piles of crap out on a much higher level -- they built them composed out of large scale constructs devised to avoid problems at lower levels.

While Java failed to convince everyone to actually study the inherent follies and blunders novices make when designing software in hopes of avoiding them, it did introduce the world to a different idea: rather than using objects to model programs in terms of parts that interact, use objects to create massive APIs. It's a big step backwards, but it kind of wins in the "worse is better" department in that it's a lot easier for people to wrap their heads around than OODA. The language vendor created a whole lot of objects for you to use that represent different parts of the system they built, and if you base your application largely around using these pre-created objects, you're less likely to fuck things up. The win of objects became easily traversing large sets of documentation to figure out how things are done. If you get a Foo back from calling Bar.bar, you can instantly look up the docs for it and see if maybe you can weasel a Baz out of that to pass to Quux.

This began the umpteenth documentation push, which started with flowcharts, spent years on massive shelves of spiral bound printed pages from vendors, and at some point culminated in programmers having bookshelves full of O'Reilly books before branching out into CPAN module docs.

Everyone got tired of Java and gave up hope that any sort of cure for the world's programming ills would emerge from that camp. Yo dawg, I heard you like MVCs so I put an MVC in your MVC! Even reading histories of how Java fucked things up and repeatedly missed the point is boring, and history is interesting. Java just smears boring all over stuff.

Meanwhile, the Python folks were writing much better software without nearly so large of sticks up their butts. No one knows how the Python people did it, so I guess they just supposed that Python is magical and code written in it is clean. Likewise, no one understands why Ruby is so magically delicious, so like amateurs banging on a piano keyboard after the maestro gets up, they're trying their hand at it.

Back to present day and Perl. OO and OO abstractions mean something entirely different than what the Simula guys had in mind. Now, when we sit down and create a system, we don't conceptualize the parts of the system and their interactions. We don't model the problem space using analogues to real world ideas and real world interactions. We don't search for the correct idiom. Instead, we use APIs, and the more, the better. There is no User; instead, there are DBIx::Class ResultSet objects for a user or login table; there are admin screens that check things in that; there are Apache Response objects for talking to the user through; there are piles of Moose meta things that have nothing to do with hypothetical objects modeling a hypothetical universe but do a neat job of setting up neat things.

Everything is abstracted -- except for the concepts the program is trying to actually model in its own logic. If there are objects definitively, uniquely, and authoritatively representing things and nothing but those things, in a normalized-SQL sense, then those objects are all thrown together in a big pile. A lot of the C++ OODA textbook's pages are concerned with finding idioms for how things interact, to model those interactions. In Perl, we just pass stuff. And we're proud of our redneck selves.

In C++, and again in Java, and most certainly in Perl, we've shat on the grave of Simula. Smalltalk did not; Ruby had a blessed birth by virtue of drawing so heavily from Smalltalk. Python programmers tried hard to keep the faith even though OO is borderline useless -- nearly as bad as Perl's -- in that language. If we were to sit down and try to represent our actual problem space as objects -- what the program is trying to do rather than the APIs it's trying to use -- we'd find that we're knee deep in shit.

This isn't one man's opinion. C++ programmers trying to claw their language's way out of its grave named parallel inheritance hierarchies as a thing to avoid; Java redubbed them as "code smells". If you have multiple objects each representing a user in some capacity, but not representing some limited part or attribute of the user, you have this disease. If you're using a bunch of abstraction layers to represent the User, for example, you have this disease. Yet it has been escaped from. There are cures. You can have your objects representing things which your program deals with and have good abstractions to program in too. MVC frameworks aren't the cure, but some people benefit from any restraint they can get.

And here it is, 2010. Everyone wants to learn the language syntax and fuck around with it, which is fine and great. But not everyone is here, reading this, being told that you can and will paint yourself into a corner in any language -- even assembly language -- and that it isn't the language's fault but especially: the language won't help you. Perl programmers and Perl programs suck because Perl programmers think that rather than Perl fostering bad code, it'll help you dig your way out, with all of the magical things it can do. This is what C++ programmers thought. Perl programmers would be far better off if they actually thought that Perl fostered bad code and worked against this imagined doom.

So, let me say it: every programmer in every language, if he lets himself tackle large or ill defined enough tasks, will code himself into a corner. Neither he nor perhaps anyone who follows him will be able to dig him out. The house of cards will implode. Trusting in abstractions of the language to save you will accelerate this process unless, just perhaps, you're privy to the lore.

Books talk about how to design inheritance hierarchies that make sense. They talk about how to handle multiple inheritance and how to conceptualize things as objects. There's lots of benefit to modeling your problem space not as "objects" in the sense of the API you're using but in the sense of actors and props in a drama. Like C++ programmers of yore, Perl programmers reliably, consistently build houses of cards. As Ruby programmers start to build larger systems and have the time to grow things to that point, they'll discover that merely representing things as objects isn't enough, and that the interactions between objects are out of hand.

This isn't to say that I'm not susceptible to these same forces. I most certainly am.
And I fear them.
How do you make a higher order function in Python? Functions and methods are first-class objects in Python, so if you want to pass a function to another function, you can just treat it as any other object. To bind a function object to a specific context, you can use either nested scopes or callable objects.

For example, suppose you wanted to define linear(a, b) which returns a function f(x) that computes the value a*x + b. Using nested scopes:

    def linear(a, b):
        def result(x):
            return a * x + b
        return result

Or using a callable object:

    class linear:
        def __init__(self, a, b):
            self.a, self.b = a, b
        def __call__(self, x):
            return self.a * x + self.b

In both cases:

    taxes = linear(0.3, 2)

gives a callable object where taxes(10e6) == 0.3 * 10e6 + 2.

The callable object approach has the disadvantage that it is a bit slower and results in slightly longer code. It can be a bit easier to understand, though, especially if you're used to OO design. It also allows a collection of callables to share their signature via inheritance:

    class exponential(linear):
        # __init__ inherited
        def __call__(self, x):
            return self.a * (x ** self.b)

And the object can encapsulate state for several methods:

    class counter:
        value = 0
        def set(self, x):
            self.value = x
        def up(self):
            self.value = self.value + 1
        def down(self):
            self.value = self.value - 1

    count = counter()
    inc, dec, reset = count.up, count.down, count.set

Here inc(), dec() and reset() act like functions which share the same counting variable.
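To round out the entry, a short usage sketch (mine, not from the original) that passes the factory-made callables defined above to other higher-order functions:

    fahrenheit = linear(9 / 5, 32)            # f(x) = (9/5)*x + 32
    print(list(map(fahrenheit, [0, 100])))    # [32.0, 212.0]

    cube = exponential(1, 3)                  # f(x) = 1 * x**3
    print(sorted([3, -2, 1], key=cube))       # [-2, 1, 3]

    count = counter()
    inc, dec, reset = count.up, count.down, count.set
    inc(); inc(); dec()
    print(count.value)                        # 1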
<urn:uuid:18d749f4-0ec4-44e8-a1a7-df2ac8a8bf63>
3.640625
414
Tutorial
Software Dev.
76.808296
Energy, Climate, and Innovation Discussion Paper Why should regulatory agencies and lawmakers pay attention to black carbon today if it has been largely ignored so far in climate-mitigation strategies? How can the United States address black carbon emissions as part of its climate change policies and regulations? While there is long-standing scientific consensus on the climate impacts of black carbon, global, regional and national bodies have yet to include the agent in climate change mitigation policies and regulations. This paper argues that the reduction of black carbon emissions should be a priority because it leads to near-immediate impacts on atmospheric concentrations, counteracts the erosion of cooling aerosols, offers health and air quality co-benefits, and uses technology that is already widely available. Carver’s research aims to contribute to U.S. climate change mitigation efforts by identifying policies and strategies that can help reduce the climate impacts of domestic black carbon emissions from on-road and nonroad diesel sources. Through a review of the scientific, public health and environmental literature and interviews with regulators, scientists and practitioners, the paper finds that the U.S. has a number of national and sub-national policy mechanisms that could facilitate accelerated black carbon emissions reductions, but no coordinated national strategy. After acknowledging various barriers to reducing black carbon emissions, Carver recommends sets of specific actions for the EPA, Congress and regional, state and local governments to take full advantage of the near-term opportunity that reducing black carbon emissions offers in slowing the rate of climate change. Read the complete paper here. To request a hard copy of this publication, email CIERP@tufts.edu.
<urn:uuid:3fbf9aed-f366-429e-a7af-b2f12e7f7fa1>
2.984375
333
Academic Writing
Science & Tech.
23.142391
The rate of Hawking radiation emission (same as ideal black body radiation) gives a temperature that is thought to be inversely proportional to mass, according to:

T = ħc³ / (8πGMk_B)

Therefore a million solar mass black hole would be observed at a temperature of about 0.00000000000006 K, and would be gaining net mass from both cosmic background radiation at 2.7 K, plus any starshine or other infalling mass source. In the extremely distant future, if the expanding universe theory is correct, the cosmic background radiation might go low enough to cause a net mass loss... but we're talking trillions of years plus. Even then the rate of evaporation would be very slow. Bear in mind that the radius of that million solar mass black hole's event horizon is about three million kilometers, so there's not really that much surface area to radiate from - that's only about four times larger than the Sun.

Question... Does the numerator on the right hand side have anything to do with Avogadro's number?

In the extremely distant future, if the expanding universe theory is correct,

Personally, I really hope heat death is not the ultimate fate of the universe. It would kind of make this whole enterprise a rather large waste of time.
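For anyone who wants to check the arithmetic in the post above, here is a quick sketch using standard constants in SI units (the script and its variable names are mine, not from the thread); it reproduces both the quoted temperature and the quoted horizon radius:

    import math

    hbar  = 1.0546e-34    # J s
    c     = 2.998e8       # m/s
    G     = 6.674e-11     # m^3 / (kg s^2)
    k_B   = 1.3807e-23    # J/K
    M_sun = 1.989e30      # kg

    M = 1e6 * M_sun                                  # million solar masses
    T = hbar * c**3 / (8 * math.pi * G * M * k_B)    # Hawking temperature
    r_s = 2 * G * M / c**2                           # Schwarzschild radius

    print(T)           # ~6e-14 K, far below the 2.7 K background
    print(r_s / 1e3)   # ~3.0e6 km, about four solar radii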
<urn:uuid:4b247b25-ae95-41eb-817d-9e6084ab09df>
3.203125
251
Q&A Forum
Science & Tech.
54.471872
Joined: 16 Mar 2004 | Posted: Fri Aug 28, 2009 8:42 am | Post subject: How Nanotechnology Can Help Wean World off Fossil Fuels

Nanotechnologies can be used to develop sustainable energy systems while reducing the harmful effects of fossil fuels as they are gradually phased out over the next century. This optimistic scenario is coming closer to reality as new technologies such as biomimetics and dye-sensitized solar cells (DSCs) emerge with great promise for capturing or storing solar energy, and nanocatalysis develops efficient catalysts for energy-saving industrial processes. Europe is ready to accelerate development of these technologies, as delegates heard at a recent conference, Nanotechnology for Sustainable Energy, organised by the European Science Foundation (ESF) in partnership with Fonds zur Förderung der wissenschaftlichen Forschung in Österreich (FWF) and the Leopold-Franzens-Universität Innsbruck (LFUI).

The conference focused on solar rather than other sustainable energy sources such as wind, because that is where nanotechnology is most applicable and also because solar energy conversion holds the greatest promise as a long-term replacement for fossil fuels. Solar energy can be harvested directly to generate electricity or to yield fuels such as hydrogen for use in engines. Such fuels can also in turn be used indirectly to generate electricity in conventional power stations. “The potential of solar power is much, much larger in absolute numbers than that of wind,” said Professor Bengt Kasemo from Chalmers University of Technology, the chair of the ESF conference.

However, like wind, the potential of solar power generation varies greatly across time and geography, being confined to the daytime and less suitable for regions in higher latitudes, such as Scandinavia and Siberia. For this reason there is growing interest in the idea of a global electricity grid, according to Kasemo. “If solar energy is harvested where it is most abundant, and distributed on a global net (easy to say - and a hard but not impossible task to do) it will be enough to replace a large fraction of today's fossil-based electricity generation,” said Kasemo. “It also would solve the day/night problem and therefore reduce storage needs, because the sun always shines somewhere.”

In the immediate future, solid state technologies based on silicon are likely to dominate the manufacture of solar cells, but DSCs and other runners-up are likely to lower costs in the long term, using cheaper semiconductor materials to produce robust flexible sheets strong enough to resist buffeting from hail, for example. Although less efficient than the very best silicon or thin film cells using current technology, their better price/performance has led the European Union to predict that DSCs will be a significant contributor to renewable energy production in Europe by 2020. The DSC was invented by Michael Grätzel, one of the speakers and vice chair at the ESF conference. The key point to emerge from the ESF conference, though, is that there will be growing choice and competition between emerging nanotechnology-based solar conversion technologies.
“I think the important fact is that there is strong competition and that installed solar power is growing very rapidly, albeit from a small base,” said Kasemo. “This will push prices down and make solar electricity more and more competitive.”

Some of the most exciting of these alternatives lie in the field of biomimetics, which involves mimicking processes that have been perfected in biological organisms through eons of evolution. Plants and a class of bacteria, cyanobacteria, have evolved photosynthesis, involving the harvesting of light and the splitting of water into electrons and protons to provide a stream of energy that in turn produces the key molecules of life. Photosynthesis can potentially be harnessed either in genetically-engineered organisms, or in completely artificial human-made systems that mimic the process, to produce carbon-free fuels such as hydrogen. Alternatively, photosynthesis could be tweaked to produce fuels such as alcohols or even hydrocarbons that do contain carbon but recycle it from the atmosphere and therefore make no net contribution to carbon dioxide levels above ground.

Biomimetics could also solve the longstanding problem of how to store large amounts of electricity efficiently. This could finally open the floodgates for electrically-powered vehicles by enabling them at last to match the performance and range of their petrol or diesel-based counterparts. One highlight of the ESF conference was a presentation by Angela Belcher, who played a major role in pioneering nanowires made from viruses at the Massachusetts Institute of Technology (MIT) in the US. Bizarre as it sounds, there is a type of virus that infects E. coli bacteria (a bacteriophage) capable of coating itself in electrically-conducting materials such as gold. This can be used to build compact high-capacity batteries, with the added advantage that they can potentially assemble themselves, exploiting the natural replicating ability of the virus. The key to the high capacity in a small space lies in the microscopic size of the nanowires constructed by the viruses: a greater surface area of charge-carrying capacity can be packed into a given volume.

However, commercial realisation of biomimetic and other emerging technologies lies far in the future. Meanwhile, as delegates heard from several speakers at the ESF conference, nanotechnology has an important contribution to make in improving the efficiency of existing energy-generating systems during the transition from fossil fuels. For example, Robert Schlögl outlined how nano-scale catalysts can be used to improve the efficiency of engines or systems consuming fossil fuels.

Inspired by such presentations, delegates at the conference were unanimous in calling for a follow-up. “The conference was regarded as a real success and a new proposal for a conference in 2010 (chaired by Grätzel) will soon be submitted,” said Kasemo. “In particular the conference inspired and educated young people, such as doctoral students, postdocs and young researchers, who will be the ones to realise the potential of nanotechnology for sustainable energy.”

The ESF-FWF conference in partnership with LFUI on Nanotechnology for Sustainable Energy was held at the Universitätszentrum Obergurgl, near Innsbruck in Austria, during June 2008.

Meanwhile, a team of physicists, engineers, chemists and biologists at the University of Kansas and partner institutions is devising nanotechnology that could help supplant fossil fuels and curb climate change.
Led by Judy Wu, University Distinguished Professor of Physics and Astronomy at KU, the researchers want to develop better, less costly solar panels and biofuels. Why the focus on solar? According to Wu, the sun outshines every alternative because of its ability to cleanly fulfill humankind's mounting need for energy.

“If you fully use wind energy, for instance, you can only cover about 20 percent of our energy need of 14 terawatts per year,” said Wu. “And our energy requirement is going to double by the middle of this century and triple by the end. But the wind is not going to increase. And if you look at fossil energy, we're going to burn out our resources probably within some short time frame of 100 or 200 years. But with solar, if you look at our 14 terawatts per year in need, you only need one hour of sunlight to deliver this much energy. The sun's energy is the singular solution for our increasing energy needs.”

According to the KU researcher, the trouble is that current solar technologies are inefficient and too expensive, leading to slower-than-necessary adoption of photovoltaic technology. “Out of the total 14 terawatts per year energy use for the world, only 2 percent is solar,” Wu said. “This includes photovoltaic and biofuel. If you look at the photovoltaic market, it is increasing at an extremely high rate of 40 to 50 percent per year. But if you grow at this same rate, it will take many, many years for solar to dominate. So we really need breakthrough technology to speed up use of solar energy.”

Wu said innovations in solar energy production will create a “third generation” of PV panels. “The first generation was traditional silicon wafer-based solar cells with efficiency capped at 31 percent, as predicted by theory,” Wu said. “So far, the best solar cell probably gets 20-something percent efficiency. And the cost is also high. The second generation tried to take the same performance, but drop the cost dramatically by one or two orders of magnitude. For the third generation, we want to go toward extremely high performance and take advantage of the second generation in terms of low cost. It eventually could play a big role in energy generation.”

According to Wu, advancing to the third generation of solar panels will depend upon nanotechnology. A primary objective of Wu's “nanotechnology for renewable energy” team will be boosting the performance of solar energy capture by better understanding photosynthesis in plant life, which is driven by energy from the sun. The group will fabricate self-assembling nanocomposite materials that mimic photosynthesis, an approach that demands expertise in several different scientific fields. “If you look at photosynthesis, this entire process involves biology, chemistry, physics and engineering,” Wu said. “So that is why this interdisciplinary team is very critical to address the entire process of solar energy capture and usage.”

In addition to the variety of collaborators at KU, the team includes researchers at Kansas State University and Wichita State University, and participants from the University of Notre Dame, Argonne and Oak Ridge national laboratories, and the National Renewable Energy Laboratory. Other research efforts led by Wu involve conversion of plant biomass into biofuel and consideration of the global environmental impacts and commercialization possibilities of technologies developed through KU-based investigations into solar energy. Indeed, if this effort bears fruit, Wu thinks the state of Kansas would have much to gain.
<urn:uuid:f0d3d7de-7192-4979-a5ad-a1708be371af>
3.15625
2,112
Comment Section
Science & Tech.
22.569896
What is System in the System.out.println() method? Is it a class or a package?

The System class contains several useful class fields and methods. It cannot be instantiated. out, on the other hand, is the "standard" output stream. This stream is already open and ready to accept output data. Typically this stream corresponds to display output or another output destination specified by the host environment or user. And println is the method which terminates the current line by writing the line separator string.

Thanks, your answer was helpful.

The System class is a predefined class that contains several useful class fields and methods. It manipulates various operating-system-related objects and provides access to the system. System is a class.
<urn:uuid:0d178cc3-9c9f-4d84-9110-923b9cc4e270>
3.09375
187
Q&A Forum
Software Dev.
53.583165
Tropical cyclones with an organized system of clouds and thunderstorms, a defined circulation, and maximum sustained winds of 38 mph (61 km/h) or less are called "tropical depressions". Once the tropical cyclone reaches winds of at least 39 mph (63 km/h) it is typically called a "tropical storm" and assigned a name. If maximum sustained winds reach 74 mph (119 km/h), the cyclone is called a hurricane in the North Atlantic Ocean. To discover more, go to JetStream - an Online School for Weather.
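The thresholds in that passage translate directly into code. A minimal sketch (the function name is mine, not from the page):

    def classify_cyclone(max_sustained_mph):
        # North Atlantic naming thresholds as quoted above.
        if max_sustained_mph >= 74:
            return "hurricane"
        if max_sustained_mph >= 39:
            return "tropical storm"
        return "tropical depression"

    print(classify_cyclone(38))    # tropical depression
    print(classify_cyclone(63))    # tropical storm
    print(classify_cyclone(119))   # hurricane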
<urn:uuid:3dbd5677-6c6d-4d27-b700-24d920329696>
3.125
175
Tutorial
Science & Tech.
65.393558
There are many ways to do pattern matching, and many domains in which to use it. Snobol4 uses a very expressive pattern matching language over strings; Prolog uses a fairly simple pattern matching language over relations (trees). In both Snobol4 and Prolog, pattern matching is expensive, because it can imply backtracking; both languages provide imperative mechanisms for the programmer to override the natural declarative nature of the pattern matching so as to avoid backtracking. It would be nice if there were a model of pattern matching that would allow us to use a pattern matching language that was as powerful as possible and yet not so powerful as to overcomplicate the mechanism and lead to inefficiency.

Consider Unix file globbing patterns: they seem to be simple, but are they? Do they ever imply backtracking? If not, are they as powerful as they could be? In fact, Unix file globbing patterns form an ad hoc pattern matching language which, while efficient (non-backtracking), is not as powerful as it could be. We can find a good model for pattern matching in the field of automata theory. This raises several interesting questions. A whole theory of computability and computation exists, based on a classification of pattern matching machines. These machines are models of the basic kinds of computers.

Sometimes programming languages are implemented by compilation or translation in an effort to gain efficiency by skipping a level of interpretation, but usually the levels of interpretation continue up to higher levels: a Lisp interpreter is written in C, for example; an object-oriented language like CLOS is written in Lisp; a CLOS programmer designs an interpreter for an application-oriented language in CLOS; if the application is extensible, a user of the application may design yet another interpreter in that language. Each of these interpreters, except for the bottom-most layer, is a software machine. Pattern matchers are likewise software machines.

How do we prove this? We prove it by constructing an interpreter for the lower level machine on the higher level one: if we can construct such an interpreter, then clearly the higher level machine is capable of computing anything the lower level machine can compute: it can simply compute it on the interpreter!

The levels of the Chomsky hierarchy are as follows:

    MACHINE CLASS               LANGUAGE CLASS
    -------------               --------------
    finite automata             regular languages
    pushdown automata           context-free languages
    linear bounded automata     context-sensitive languages
    Turing machines             recursively enumerable languages

The simplest kind of machine is the finite automaton, while the most complex machine is the Turing machine. Each of these language classes is of interest to computer programmers:

    LANGUAGE CLASS                      EXAMPLE
    --------------                      -------
    regular languages                   pattern matching languages
    context-free languages              simpler programming languages
    context-sensitive languages         most programming languages
    recursively enumerable languages    natural languages

Languages are usually defined by grammars, so there are also classes of grammars that correspond to these language classes:

    LANGUAGE CLASS                      GRAMMAR CLASS
    --------------                      -------------
    regular languages                   regular expressions
    context-free languages              Backus-Naur forms
    context-sensitive languages         Van Wijngaarden (two-level) grammars
    recursively enumerable languages    ???

Parsing technology for programming language implementation is based on algorithms that are known to be able to parse particular classes of languages based on particular grammars.
Regular expressions are used in compilers to implement the lexical level of language definition. Because of the simplicity and efficiency of the finite automata that interpret regular expressions, they are also used for pattern matching in tools and other languages.

To run the automaton on a given input string, we simply iteratively execute the state transition function, calling it with the current state (starting with the initial state) and the current input symbol. Each time we call the function, we advance to the next input symbol, and use that with the new state returned by the function as the input to the next iteration. We terminate when we've exhausted the input string. If the process terminates with the function returning a state that's a member of the set of accepting states, we have recognized the input string; otherwise, we have rejected it.

Finite automata can be modelled with a state transition diagram, a digraph with the states represented by the nodes and the transitions represented by the arcs (labelled with symbols). For any regular language, there are an infinite number of possible finite automata that can be constructed to recognize that language. Every finite automaton can be represented by a regular expression; regular expressions are the grammars of the regular languages. A regular expression (regexp) is defined as follows: a single symbol is a regexp; the concatenation of two regexps is a regexp; the sum (alternation) of two regexps is a regexp; and the iteration of a regexp is a regexp.

It's very easy to write an efficient interpreter for a given finite automaton in any programming language. All that's required is to represent the state transition function as a function or an array. It's also easy to automate the construction of efficient finite automata from regular expressions.
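To make that concrete, here is a minimal sketch of such an interpreter in Python (an invented example, not one from the original text): the transition function is a dictionary, and the automaton accepts strings over {0, 1} containing an even number of 1s.

    TRANSITIONS = {("even", "0"): "even", ("even", "1"): "odd",
                   ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
    START, ACCEPTING = "even", {"even"}

    def run_dfa(s):
        state = START
        for symbol in s:                    # one transition per input symbol
            state = TRANSITIONS[(state, symbol)]
        return state in ACCEPTING           # accept iff we end in an accepting state

    print(run_dfa("0110"))   # True  (two 1s)
    print(run_dfa("10"))     # False (one 1)

Note there is no backtracking anywhere: the machine reads each input symbol exactly once, which is the efficiency claim made above.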
Regular expressions appear in text editors (vi); pagers (more); file searching tools (sed); and programming languages (Tcl, Perl, C, Awk, Lex, etc.). You can test whether a regular expression matches a string (grep-like tools do this); you can move the cursor in a text editor or pager to the next occurrence of a matching regular expression; you can substitute new text for matching occurrences, allowing you to change or delete based on regular expressions; and you can extract subparts of a match.

Real regular expression notations differ from the simple notation above in order to provide shorthands that are easier to write. I will describe Tcl's regular expression notation (which is that of egrep) and explain the shorthands in terms of the basic notation. Single characters are regexps: A B 7 & @ \*. Sums are written with the | operator; iterated regexps are written with *, as in A* 9* \**. Iteration means zero or more occurrences of the preceding regexp. Parentheses group regexps: (ABC) (ABC)* (0|1|2|3|4|5|6|7|8|9)(0|1|2|3|4|5|6|7|8|9)*.

A character class is shorthand for a sum of single characters such as 0|1|2|3|4|5|6|7|8|9. There is a special shorthand for use in a character class: a range of characters can be expressed by separating the lower and upper inclusive bounds with a hyphen; in this case, order matters because ASCII collating order is used to resolve the range. These regexps are equivalent:

    0|1|2|3|4|5|6|7|8|9
    [0-9]

This shorthand makes the hyphen a metacharacter within a character class. How do we get a literal hyphen in a character class? Like this (all three are equivalent):

    [-A-Z]
    [A-Z-]
    A|B|C|D|E|F|G|H|I|J|K|L|M|N|O|P|Q|R|S|T|U|V|W|X|Y|Z|-

There is also a notation for the complement of a character class: if the first character of the character class is a carat (^), the character class is complemented. So [^a-zA-Z] is the regexp that matches any non-alphabetic ASCII character (including all control characters). To use a literal carat in a character class, simply place it anywhere except the first position. To include a literal ] in a character class, make it the first character (following a possible ^).

The question mark (?) operator makes the preceding regexp optional (i.e., it can occur zero or one times). This shorthand is equivalent to a common use of summing; these two regexps are equivalent: AB? and AB|A. The plus (+) operator is just like the * operator except that it specifies one or more repetitions of the previous regexp; these two regexps are equivalent: A+ and AA*. The dot (.) metacharacter matches any single ASCII character; it's a very important shorthand for the sum of all the ASCII characters.

A regular expression that begins with a carat (^) will only match if it occurs at the beginning of the input string. This is the behavior we described for the basic regular expressions above. Without a carat, Tcl regular expressions can actually begin anywhere in the input string: we're not testing whether or not the entire string matches, but whether or not a match occurs as a substring of the input. It turns out that this is more useful, so it's the default. If the anchored approach were the default, then most Tcl regexps would have to start with .*. Carat only has this special interpretation if it's the first character of the regexp. A regular expression that ends with dollar ($) matches at the end of the input string. Combining these two anchors allows us to constrain a regexp to match the entire input string, thus simulating the more strict basic regexps described earlier. For example, a*b*, when matched against the string aab, matches aab, the first three letters of the input string.
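The notation described here is Tcl's, but these particular shorthands behave the same way in most modern engines. As a quick illustration, here is how a few of them look in Python's re module (Python semantics, offered only as a cross-check of the descriptions above; the example patterns are mine):

    import re

    print(re.search(r"[0-9]+", "abc123").group())    # 123: class plus iteration
    print(re.search(r"colou?r", "color").group())    # color: optional u
    print(re.match(r"a*b*", "aab").group())          # aab: the a*b* example above
    print(bool(re.search(r"^end$", "end")))          # True: both anchors at once
    print(re.search(r"[^a-zA-Z]", "ab3cd").group())  # 3: complemented class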
<urn:uuid:ed1d576a-f225-4d66-8f8f-ddb46db7a906>
3.046875
1,891
Documentation
Software Dev.
35.37951
The description of the word you requested from the astronomical dictionary is given below.

ga‧lax‧y; galaktos = [Greek] milk

A galaxy is a region of space with millions to thousands of millions of stars and many clouds of gas that are tied together by their gravity. A typical spiral galaxy or elliptical galaxy (two types of galaxies) contains 100 thousand million stars, has a diameter of 100,000 lightyears, and is at a distance of a few million lightyears from other neighboring spiral or elliptical galaxies. Our Solar System is part of the Milky Way, which is a fairly ordinary spiral galaxy.
<urn:uuid:711b516d-8b71-4dd0-a3f3-92d2bab36ee9>
3.59375
132
Structured Data
Science & Tech.
41.067682
On a planet where 71 percent of the surface is covered by water, the oceans are critical for life itself. They feed us, regulate our weather patterns, provide over half of the oxygen that we breathe, and provide for our energy and economy. Yet only 5 to 10 percent of the ocean floor and of the waters beneath the surface have been explored and mapped in a level of detail similar to what already exists for the dark side of the Moon, for Mars, and for Venus. GIS technology, which has long provided effective solutions to the integration, visualization, and analysis of information about land, is now being similarly applied to oceans. Our ability to measure change in the oceans (including open ocean, nearshore, and coast) is increasing, not only because of improved measuring devices and scientific techniques, but also because new GIS technology is aiding us in better understanding this dynamic environment. This domain has progressed from applications that merely collect and display data to complex simulation, modeling, and the development of new research methods and concepts.

What have we learned after 100 years? On April 15, 1912, more than 1,500 passengers and crew aboard the RMS Titanic perished at sea in one of the most infamous maritime disasters in all of human history. She was the largest ship afloat at the time, but the location of her wreckage remained a mystery until 1985. Many have seen similarities between the sinking of Titanic and the struggles of the gigantic cruise ship Costa Concordia, which ran aground off the coast of Italy almost 100 years later.

The ocean makes up a huge part of our planet. Yet “there is still so much we don’t know about the ocean,” says Prof. Dawn Wright, ocean scientist and geographer at Oregon State University (and incoming Esri chief scientist). “How can we understand and mitigate the impacts of climate change, clean up oil spills, protect species, sustain fisheries, and so forth, if we still have not fully explored and understood the ocean?”
<urn:uuid:b192a130-a04e-4a2c-bcfb-473e044bab19>
3.15625
414
Content Listing
Science & Tech.
37.879891
In order to examine the results of the wave theory of the electron for each element in the periodic table, we must recall the general rules that are necessary in order to predict electron configurations for all atoms of the elements. To review, these rules are as follows:

1 The Aufbauprinzip (building-up principle). The structure of an atom may be built up from that of the element preceding it in the periodic system by adding one proton (and an appropriate number of neutrons) to the nucleus and one extranuclear electron.

2 The order of filling orbitals. Each time an electron is added, it occupies the available subshell of lowest energy. The appropriate subshell may be determined from a diagram such as Fig. 1a, which arranges the subshells in order of increasing energy. Once a subshell becomes filled, the subshell of the next higher energy starts to fill.

3 The Pauli exclusion principle. No more than two electrons can occupy a single orbital. When two electrons occupy the same orbital, they must be of opposite spin (an electron pair).

4 Hund’s rule. When electrons are added to a subshell where more than one orbital of the same energy is available, their spins remain parallel and they occupy different orbitals. Electron pairing does not occur until it is required by lack of another empty orbital in the subshell.

The order in which the subshells are filled merits some discussion. As can be seen in Fig. 1a, within a given shell the energies of the subshells increase in the order s < p < d < f. When we discussed the boron atom, we saw that a p orbital is higher in energy than the s orbital in the same shell because the p orbital is more effectively screened from the nucleus. Similar reasoning explains why d orbitals are higher in energy than p orbitals but lower than f orbitals. Not only are the energies of a given shell spread out in this way, but there is sometimes an overlap in energy between shells. As can be seen from Fig. 1a, the subshell of highest energy in the third shell, namely 3d, is above the subshell of lowest energy in the fourth shell, namely 4s. Similar overlaps occur among subshells of the fourth, fifth, sixth, and seventh shells. These cause exceptions to the expected order of filling subshells. The 6s orbital, for example, starts to fill before the 4f.

Although the order in which the subshells fill seems hopelessly complex at first sight, there is a very simple device available for remembering it. This is shown in Fig. 1b. The rows in this table consist of all possible subshells within each shell. For example, the second row from the bottom contains 2s and 2p, the two subshells in the second shell. Insertion of diagonal lines in the manner shown gives the right order for filling the subshells.
EXAMPLE 1 Predict the electron configuration for each of the following atoms: (a) P (15 protons, 16 neutrons); (b) Co (27 electrons).

Solution: In each case we follow the rules just stated.

a) For phosphorus there would be 15 protons and 16 neutrons in the nucleus and 15 extranuclear electrons. Using Figure 1b to predict the order in which orbitals are filled, we have

1s² (2 electrons, leaving 15 - 2 = 13 more to add)
2s² (2 electrons, leaving 11 more to add)
2px², 2py², 2pz² (or 2p⁶) (6 electrons, leaving 5 more to add)
3s² (2 electrons, leaving 3 more to add)
3px¹, 3py¹, 3pz¹ (3 electrons)

The electron configuration is thus 1s²2s²2p⁶3s²3px¹3py¹3pz¹. It could also be written [Ne]3s²3px¹3py¹3pz¹ or [Ne]3s²3p³, where [Ne] represents the neon kernel 1s²2s²2p⁶.

b) In the case of cobalt there is a total of 27 electrons to fill into the orbitals. There is no difficulty with the first 10 electrons. As in the previous example, they fill up the first and second shells:

1s²2s²2p⁶ (17 more to add)

The third shell now begins to fill. First the 3s subshell and then the 3p subshell are filled by 8 more electrons:

1s²2s²2p⁶3s²3p⁶ (9 more to add)

Since this is also the structure of argon, we can use the shorthand form [Ar] (9 more to add).

We now come to an energy overlap between the third and fourth shells. Because the 3d orbitals are so well shielded from the nucleus, they are higher in energy than the 4s orbitals. Accordingly the next orbitals to be filled are the 4s orbitals:

[Ar]4s² (7 more to add)

Once the 4s orbital is filled, the 3d orbitals are next in line to be filled. The 7 remaining electrons are insufficient to fill this subshell, so we have the final result

[Ar]3d⁷4s²

Electron configurations of the atoms may be determined experimentally. Table 1 in Electron Configurations and the Periodic Table lists the results that have been obtained. There are some exceptions to the four rules enunciated above, but they are usually relatively minor. An obvious example of such an exception is the structure of chromium. It is found to be [Ar]3d⁵4s¹, whereas our rules would have predicted [Ar]3d⁴4s². Chromium adopts this structure because it allows the electrons to avoid each other more effectively. A complete discussion of this and other exceptions is beyond the scope of an introductory text.
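The diagonal device of Fig. 1b is easy to mechanize: sorting subshells by n + l, with ties broken by lower n, reproduces the filling order. The following sketch (mine, not from the text) fills electrons subject to the Pauli capacity of 2(2l + 1) per subshell and reproduces the two worked examples; as the text warns, it cannot know about exceptions such as chromium.

    def filling_order(max_n=7):
        # Subshells (n, l) in diagonal order: by n + l, then by lower n.
        subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
        return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

    def electron_configuration(z):
        letters = "spdfghi"                # subshell letters for l = 0, 1, 2, ...
        parts, remaining = [], z
        for n, l in filling_order():
            if remaining == 0:
                break
            take = min(2 * (2 * l + 1), remaining)   # Pauli capacity per subshell
            parts.append(f"{n}{letters[l]}{take}")
            remaining -= take
        return " ".join(parts)

    print(electron_configuration(15))   # 1s2 2s2 2p6 3s2 3p3 (phosphorus)
    print(electron_configuration(27))   # 1s2 2s2 2p6 3s2 3p6 4s2 3d7 (cobalt)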
<urn:uuid:f2705fbc-26f6-4665-a099-8e60791c6545>
4.1875
1,384
Academic Writing
Science & Tech.
53.093136
Just recently, a poster to the Journal of Nuclear Physics asked Andrea Rossi about the heat output of the Hot Cat. His concern was whether or not the unit had yet been tested while actually being put to work. The excess heat produced by the Hot Cat may not be as impressive if it is siphoned away. The poster, Seppo, commented: “It could be that the Hot Cat behaves differently when put to real work, i.e. when the energy generated by it is utilized by transferring the heat efficiently away by fluid, air stream or by conduction.” Rossi’s answer was that, yes, that is an excellent point, and that they have already been working with heat exchangers. In fact, they have found that the output of the Hot Cat is stable on both the primary and on the secondary circuits. Apparently, the only difference in the amount of heat produced is the flow rate into the exchanger itself. “We are working already with heat exchangers, using the primary and the secondary circuit and the behavior is stable. Obviously the temperature in the circuits of the heat exchanger depends on the flow rate.” Rossi also went on to say that the real efficiency they are obtaining through the testing results is much better than his extremely conservative estimates reported in Pordenone.
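The flow-rate dependence Rossi mentions is just the standard steady-state energy balance for any heat exchanger: for a fixed heat output P, the temperature rise of the working fluid is P / (mdot * c_p). Nothing in this sketch is specific to the Hot Cat, and the numbers are hypothetical:

    def outlet_temp_c(power_w, mdot_kg_s, t_in_c, c_p=4186.0):
        # c_p defaults to water in J/(kg K); T_out = T_in + P / (mdot * c_p)
        return t_in_c + power_w / (mdot_kg_s * c_p)

    # Hypothetical: 10 kW transferred into water entering at 20 C.
    print(outlet_temp_c(10_000, 0.1, 20.0))   # ~43.9 C at 0.1 kg/s
    print(outlet_temp_c(10_000, 0.5, 20.0))   # ~24.8 C at 0.5 kg/s: more flow, smaller rise

So a stable device feeding a heat exchanger should indeed show circuit temperatures that move with flow rate, exactly as the post describes.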
<urn:uuid:d75cd017-cbe1-4fb8-8a90-b3776ea8ea82>
3.28125
272
Personal Blog
Science & Tech.
50.868311
The water quality studies were completed by the Centro Ecologico Akumal Water Quality Lab located in Akumal, Mexico under the direction of Ms. Samantha Smith. Ms. Smith is a member of Proyecto Yaxchen. Ms. Smith has donated her time and resources to this project. Collections were done in different areas of the surface pool and cave entrances on June 3, 1998. Collections were also taken on June 9, 1998 in an upstream section of the cave referred to as the "Milky Way" due to its white cloudy nature.

CEA Water Quality Laboratory - Results Sheet
Client Name: Proyecto Yaxchen, Kay & Gary Walten
Sampling Site: Yaxchen Cave System
Sampling Date: June 3, 1998

Site                                   pH   Salinity  Coliform      NH3-N   NO2-N   NO3-N   PO4 (mg/L)        Turbidity
                                            (ppt)     (per 100 ml)  (mg/L)  (mg/L)  (mg/L)                    (FAU)
Downstream @ mainline, depth = 26'     7.6  -         -             -       -       -       (0.11 as P)       -
Upstream 30' back on mainline          7.5  8         6             0.00    0.007   1.1     0.08 (0.03 as P)  0
Crackline, 40' back, below halocline,
  cloudy water                         -    -         -             -       -       -       (0.25 as P)       -
Crackline, 40' back, freshwater        -    -         -             -       -       -       (0.06 as P)       -
Surface Pool                           7.3  8         12            0.00    0.008   0.9     0.33 (0.11 as P)  Below range

Discussion of Results

The pH of most natural waters is between 4 and 9, usually between 6 and 8. In water, a deviation from neutral 7.0 is normally the result of the hydrolysis of salts of strong bases and weak acids or of weak bases and strong acids. However, gases such as CO2, H2S and NH3 can have a significant effect. In Yaxchen, all sites have a pH between 7.3 and 7.6. The water is alkaline (slightly basic) in this region because of the presence of carbonate and bicarbonate found in the limestone of which this cave system is made. Carbonate and bicarbonate prevent acidification.

Salinity is a measure of salts in the water, and its units are often presented in parts per thousand, or ppt. Seawater has a salinity of 35 ppt. Where fresh and salt water meet, the resulting mixture is called brackish water, defined as having a salinity between 0.5 ppt and that of full-strength seawater. The salinity of the sites sampled in Yaxchen ranges from 8 to 10 ppt. These numbers indicate that the water is, indeed, brackish. Three of the five sites have a salinity of 8 ppt, one has a salinity of 8.5, and the cloudy layer located below the halocline has a salinity of 10 ppt. The cloudy layer is more saline than its surrounding waters. It would be interesting to get salinity readings below the cloudy layer itself as this, too, should be saltier water.

Fecal coliform are an indicator species for other pathogens that may be in the water and which, in turn, may pose a threat to human health. Fecal coliform are present in the Yaxchen cave system. The numbers obtained (i.e. 6-16 colonies/100 ml) are well below the 200-colonies/100 ml limit for safe body contact. These numbers do, however, exceed the 0-colonies/100 ml limit for safe drinking water. We can dive it, but we cannot ingest it.

Nitrogen Cycle

Nitrogen can exist in various forms in the aquatic environment. The greatest source of nitrogen on our planet is air. This nitrogen, however, is only available to those aquatic organisms that can fix it into a form available to the rest of the biota. This 'fixing' is performed by only a few species of bacteria and blue-green algae. The nitrogen cycle is a 'double cycle': one side of the cycle involves oxidation and reduction of nitrogen by plants, animals and decomposers, while the other entails nitrogen-fixing organisms and denitrifying bacteria (Lind, 1985). Ammonia is the most reduced form and is the product of organic decomposition.
The oxidized forms of ammonia are nitrites and nitrates, which result from the nitrification (bacterial oxidation) of ammonia. Denitrification, the bacterial reduction of nitrate to nitrite and then to N2 gas, occurs at low oxygen levels. If streams are diverted through wetlands (e.g. mangrove), it is possible to remove NO3 by denitrification. NH4 is the preferred form of nitrogen for plant growth.

Ammonia is usually present in low (less than 1 mg/L) concentrations in non-polluted, well oxygenated water, but may reach up to 5-10 mg/L in the anaerobic hypolimnion of a eutrophic lake (Lind, 1985). The ammonia levels found in Yaxchen ranged from 0.00 to 0.24 mg/L NH3-N. These levels are low and indicate that their source is non-polluted and well oxygenated. All of the ammonia levels were 0 mg/L NH3-N throughout the system, except in an area known as the crackline. Here, a cloudy layer persists just below the halocline. This is where NH3-N reaches 0.24 mg/L. Above this cloudy layer, in the freshwater zone, another sample was taken; its reading was 0.02 mg/L NH3-N. Although the levels found in the cloudy layer are low, relatively speaking, the presence of ammonia in this area suggests a zone low in oxygen (in comparison with the other water there). Ammonia can only be converted to nitrite or nitrate in the presence of oxygen, and in its absence, nitrate is converted to nitrite and nitrite is converted to ammonia. These findings also suggest that denitrifying bacteria, which convert nitrates to nitrites and ammonia, may be present in the cloudy layer; the cloudy layer may be the by-product of their metabolism.

Nitrite (NO2-N) is the partially reduced form of nitrate (NO3-N); it is the intermediate state between nitrate and ammonia. Nitrites in the Yaxchen system are highest in the cloudy layer found below the halocline, although the difference between this site and the others is not substantial. The nitrite concentrations in Yaxchen, as in most systems, are low.

Nitrate nitrogen is usually present in low concentrations in natural water. Natural concentrations generally don't exceed 10 mg N/L and are commonly less than 1 mg/L, especially during periods of increased primary production (i.e. algal photosynthesis). In Yaxchen, an interesting story prevails with respect to the nitrogen cycle. During the first run of samples, I did a low range nitrogen test. Only one sample was within this range: the 'cloudy layer'. Its concentration is 0.18 mg/L NO3-N. All other sites were above the range of the test. The high range nitrate nitrogen test provided the appropriate range for the remainder of the samples, whose concentrations ranged from 0.8 to 1.2 mg/L, with an average of 1.0 mg/L NO3-N. This means that the surrounding water has almost 6 times as much nitrate as the cloudy layer! Thus it appears, from the high ammonia levels and low nitrate levels in this cloudy layer, that denitrification is taking place; nitrate is being converted to ammonia. This process might be occurring because of a lack of oxygen in this cloudy layer, and/or denitrifying bacteria may be present here.

Total phosphorus concentrations of non-polluted waters are usually less than 0.1 mg P/L. The average P concentration in Yaxchen is 0.112 mg/L, with the cloudy layer having the highest concentration at 0.25 mg/L. The average concentration of the four remaining sites is 0.008 mg/L.
At normal lake pH ranges, most soluble phosphate is present as orthophosphate in two ionic forms: monohydrogen phosphate (HPO₄²⁻) and dihydrogen phosphate (H₂PO₄⁻) ions. Changes between these forms occur rapidly as pH changes. Since the pH in Yaxchen is essentially constant, these changes are unlikely to occur. Phosphate ions (PO₄³⁻) are adsorbed or desorbed from particles depending on the external phosphate concentration and salinity. PO4 is more readily released in water with higher salinity. More than likely, this is what is happening in the 'cloudy layer', where the salinity is higher than in the surrounding water.

Turbidity in this system is low, with values ranging from 0 to 7.3 FAU. The greatest turbidity is found within the cloudy layer, as expected.
<urn:uuid:2587de5b-9ef7-444f-8790-e63b191fab95>
2.6875
1,986
Academic Writing
Science & Tech.
64.655377
Really, remote. But how do you prove they are natural? You can assume, but you can't prove. The reality is that everyone already knows that the Monarchs go from east to west and vice versa. All the scientists now agree on that. The issue is that some states don't allow transfer because of endangered milkweeds or other issues. Journey North has been following this issue for years.

Dr. Urquhart's tagging data from the Monarch Watch website:
http://www.monarchwatch.org/grafx/tagmig/u71map.gif - I guess from CA to Utah is not close enough.
http://www.monarchwatch.org/grafx/tagmig/u81map.gif - California to 2/3s of the way across AZ.
http://www.monarchwatch.org/grafx/tagmig/u94map.gif - From Idaho to UT and AZ.

http://www.learner.org/jnorth/tm/monarc ... gUtah.html
http://www.learner.org/jnorth/fall2004/ ... 92404.html

Not All Monarchs Go to Mexico
Utah Students Study "Western" Monarchs

Mr. Ron Hellstern of Byrum, Utah, wrote: "My classes initiated the Intermountain Monarch Butterfly Project. We are associated with the Monarch Program of San Diego, and have helped them determine the winter migration destinations of Intermountain Monarchs.

"When we started this project back in 1994 there was little, if any, knowledge about the migration routes or roosts of the Intermountain Western population. My students helped to establish the baseline data, and recruit other schools along the western slope of the Rocky Mountains to assist in collecting this information.

"Thanks to some of our tags, our Monarchs have been spotted in Santa Cruz, California, which means these beautiful and delicate creatures cross the Great Basin Desert and the Sierra. Amazing!!! Our monarchs may not be going to Mexico, but we feel just as attached to them."

http://www.greatbasinweb.com/gb2-3/monarch.htm

The Way of the Monarch, Michael Pyle
http://www.orionmagazine.org/index.php/ ... ticle/544/

"I remember a day in western Colorado, motoring to a noted habitat with my graduate advisor, Charles Remington. A lone monarch perched among a throng of Charlotte's fritillaries, fiery orange males, chocolatey females, nectaring on big purple bull thistles. I asked my professor where he thought that monarch would end up. The reigning idea was that all of the fall monarchs born west of the Continental Divide wintered on the California coast. We were on the western slope, all right, if by less than a hundred miles; but it was a long, hot, arid way due west to California, across the Great Basin. Could it not be just as likely that a monarch in the intermountain West might follow the major drainages southward—the Green, the Colorado—and wind up in Mexico? Besides, as a former kid collector who'd haunted the Colorado high country whenever possible, I had seen monarchs crossing the Rockies crest in both directions, and doubted its effectiveness as an ultimate barrier."

http://10000birds.com/flyways-and-byways.htm - Monarchs use the same flyways as birds. Many are eaten along the way by migrating birds (esp. hawks).

http://www.main.org/polycosmos/biosquat/ensom.htm - Surfing Climate Change-ENSO Migration & Birdcasting

I am one class away from getting my Natural History Certification. I think in more areas so I see more of a whole picture.
or Jan. 2000) to Page, AZ (14 April), 480 miles" "Nine Possible Reasons the Monarch Population is Changing and Why We Have New Butterfly Species in the Southwest" Scroll down and look at this. "NEWS FLASH: WHAT ARE THE ODDS? Nearly two months later, on December 8th, 2005, Marriott and a group of volunteers spotted the butterfly in a cluster of monarchs in Carpinteria at a site known as Carpinteria Creek (Santa Barbara County), about 145 miles straight line distance northwest of Camp Pendleton. One tagged monarch, one recovery -- this has never happened before. The datum continues to support Marriott’s research that monarchs fly northwest to sites that have cooler microclimates when the temperatures are too warm in the Southwest (click here for migration patterns and previous records)." http://www.fs.fed.us/monarchbutterfly/d ... Brower.pdf Search on mountains, tagging, California. There's loads of historical migration information in this document. Here are a few: Page 16 - end of second paragraph "Inkersly (1911:283) provided the first detailed description of monarchs overwintering in Pacific Grove, and speculated that they probably originated in "the country west of the Rocky Mountains." "Shepardson was the first person who clearly distinguished the eastern and western migratory populations of the monarch. She wrote: "It is presumed that those which are in the eastern and middle-western states go to the south during the cold weather, while those which winter near Pacific Grove come from a large part of the country lying west of the Rocky Mountains" (p. 29)....." continue reading this for some awesome info. "Mary Barber (1918:5-6), in another overlooked and informative booklet, Winter Butterflies in Bolinas, stated that Bolinas (immediately north of San Francisco) "is the winter home of the Monarch butterfly which comes not only from the Sierra Nevada mountains but also from the western ranges of the Rockies." In describing the fall migration, she wrote "Thousands of these frail butterflies start on their long journey toward the Pacific, in search of a mild climate, free from frost and snow, in which they can live all winter."
<urn:uuid:7c0f3968-7e7a-497f-88e9-773f2fb7d00c>
2.703125
1,372
Comment Section
Science & Tech.
63.008201
GISS Double Up On Reykjavik Temperatures
By Paul Homewood

Before GHCN Adjustments
After GHCN Adjustments

We have already seen how GHCN have adjusted the temperatures for Reykjavik, given to them by the IMO (Iceland Met Office). (Full story here.) By reducing historic temperatures up to 1965 by 0.8C, they have added an artificial warming trend that, according to the IMO, does not exist. However, things actually get worse after GISS add their adjustments to the pot.

After GISS Adjustments

Up to 1972 GISS have knocked another 0.5C off the GHCN adjusted numbers, so there is now a total downward adjustment in historic temperatures of 1.3C. Let's just recap why GISS apply their "homogeneity adjustments". This is what they say:-

The goal of the homogenization effort is to avoid any impact (warming or cooling) of the changing environment that some stations experienced by changing the long term trend of any non-rural station to match the long term trend of their rural neighbors, while retaining the short term monthly and annual variations.

In other words, the UHI effect. Assuming that UHI is working to increase temperatures in urban areas over time, then the adjustment should be increasing past temperatures, not reducing them. Of course, there may be local factors in Reykjavik that have operated to reverse the UHI effect, such as station relocation. However, the IMO confirm that there have been no significant relocations since 1945, or any other material changes.

If the GISS homogenisation is working as it should, there must be comparable rural stations nearby that show a greater warming trend than Reykjavik's. Yet we have already seen that this is not the case, for instance, in this chart from the IMO:-

Figure 2. 7-year running means of temperature at three locations in Iceland: Reykjavík (red trace), Stykkishólmur (blue trace) and Akureyri (green trace). Kuldakast = cold period. The first of the marked periods was the coldest one in the north (Akureyri); the second one was the coldest in Reykjavík.

Finally, let's take a look at the exact location of Reykjavik's station, which is outside the Iceland Met's HQ (marked as A). As can be seen, it is in a fairly central position in Reykjavik, which has a population of about 120,000. (Greater Reykjavik is said to exceed 200,000.) Wikipedia have this to say about the post-war development there:

In the post-war years, the growth of Reykjavík accelerated. A mass exodus from the rural countryside began, largely due to improved technology in agriculture that reduced the need for manpower, and because of the population boom resulting from better living conditions in the country. A once primitive village was rapidly transformed into a modern city. Private cars became common and modern apartment complexes rose in the expanding suburbs. Much of Reykjavík lost its village feel.

Population has grown from about 6000 in 1900, and more than half of the buildings in the Reykjavik Metro area were erected after 1970 (see here). It really is a nonsense to suggest that the Urban Heat Island effect has not been increasing, probably significantly so, in recent decades. If the GISS homogenisation system concludes that this effect has actually been declining, then there is something seriously wrong with their software.
<urn:uuid:02670c9a-07ff-40e5-8824-5d344fd7ec1a>
2.78125
760
Personal Blog
Science & Tech.
46.318694
Carry out some time trials and gather some data to help you decide on the best training regime for your rowing crew.

Start with two numbers. This is the start of a sequence. The next number is the average of the last two numbers. Continue the sequence. What will happen if you carry on for ever?

If you are given the mean, median and mode of five positive whole numbers, can you find the numbers?

Imagine you have a large supply of 3kg and 8kg weights. How many of each weight would you need for the average (mean) of the weights to be 6kg? What other averages could you have?

Invent a scoring system for a 'guess the weight' competition.

Is it the fastest swimmer, the fastest runner or the fastest cyclist who wins the Olympic Triathlon?

A collection of short Stage 3 and 4 problems on processing and representing data.
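The averaging-sequence problem above rewards a quick experiment. This sketch (not part of the original collection, and it does give the answer away) shows the terms settling at (a + 2b)/3:

    def averaging_sequence(a, b, steps=20):
        seq = [a, b]
        for _ in range(steps):
            seq.append((seq[-1] + seq[-2]) / 2)   # mean of the last two terms
        return seq

    seq = averaging_sequence(0, 6)
    print(seq[:8])   # [0, 6, 3.0, 4.5, 3.75, 4.125, 3.9375, 4.03125]
    print(seq[-1])   # ~4.0, which is (0 + 2*6) / 3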
<urn:uuid:09a37867-36a0-403f-bdde-ab0d58e90058>
3.8125
193
Tutorial
Science & Tech.
69.546047
By Chris Parker
Copyright 2012

The National Human Genome Research Institute (NHGRI) launched a public research consortium named ENCODE, the Encyclopedia Of DNA Elements, in September 2003, to carry out a project to identify all functional elements in the human genome sequence. Recently they announced some science-shaking results.

"Ignorance is bliss," the saying goes, and many who promote or adhere to today's scientific paradigms are in the best position to report whether or not this saying is true. I'm not using the word "ignorance" in a pejorative sense but rather in the sense of Webster's "a state of being uninformed" (lack of knowledge). Finding oneself in the state of being uninformed is common to most of us in some aspects of our lives, but deciding to build up an area of science or to make scientific assertions built upon the foundation of one's own ignorance is a mistake that's likely to be made manifest once that ignorance is dispelled with a bit of the light of actual knowledge.

As an example, for years scientists did not know what the function was for a number of organs or structures in the human body. They could have said, "we do not know what the function of this particular organ or part in the body is." What they did instead was to build on the evolution myth by tying their ignorance about the human body into "scientific knowledge," claiming that these "vestigial organs or structures" were leftovers from the evolutionary past which had lost their functions. Eventually, other scientists were able to discover important functions for each of these "vestigial" organs, and today, arguably, none exist.

Materialists and strict evolutionists believe that there is only matter and energy in the universe and that somehow that matter and energy was able to organize itself into planets, comets, stars and life. They don't believe in spirit, such as the souls of man, or in God who is Spirit, because such can't be scientifically quantified. There is, however, another sphere that exists apart from matter and energy that even the materialists have to admit is real. This sphere is called information. Information exists and in fact is the basis of life itself. Information is non-material and exists apart from any method or material used to convey it. Information exists in copious amounts in the cells of everything living. This information, DNA, is a language which the living cell can read, understand and "obey". This information provides the instructions for every facet of life.

"The information in DNA is stored as a code made up of four chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Human DNA consists of about 3 billion bases, and more than 99 percent of those bases are the same in all people. The order, or sequence, of these bases determines the information available for building and maintaining an organism, similar to the way in which letters of the alphabet appear in a certain order to form words and sentences." (NIH)

Information and language come from a mind; they come from intelligence. DNA is such a language. The "technology" conveyed through the language of DNA is infinitely above any technology of mankind. The fact that this information could have only come from a superior intelligence should be obvious; whoever placed the language in the cells of everything living first had to have stupendous and incredible knowledge in order to implant it into all life.
If materialists and evolutionists gave themselves a moment to reflect, they would realize that DNA is proof that God exists, and so they refuse to reflect and instead apparently spend their time trying to create counterarguments to the obvious. Bill Gates, founder of Microsoft, said: "DNA is like a computer program but far, far more advanced than any software ever created." (Bill Gates, The Road Ahead.) All of Gates' far less complicated software codes had creators. Anyone who was honestly considering whether or not God exists would have no alternative but to consider DNA absolute proof of a Creator. The smallest living bacterial cell requires too much information to have been formed by chance, and that information is on a level well above anything man has conceived or built. As I. L. Cohen, mathematician and researcher, said:

"At that moment, when the DNA/RNA system became understood, the debate between Evolutionists and Creationists should have come to a screeching halt." (I. L. Cohen, Researcher and Mathematician; Member NY Academy of Sciences; Officer of the Archaeological Inst. of America; "Darwin Was Wrong - A Study in Probabilities"; New Research Publications, 1984, p. 4)

There is a small portion of the human genome that codes for proteins: around 2%. This area has been the central focus of gene studies. The function of the larger portion of the genome that does not code for protein has been a mystery. Materialists seized upon areas of the genome that were not as well understood and declared these areas "junk DNA". Being ignorant of the function of these areas, they argued that they were evolutionary junk, left over from eons of evolutionary activity. Francis Collins, at one time the Director of the Human Genome Project, said the following regarding materialist scientists using their own scientific ignorance as a basis for building on the current paradigm in science:

"There were long stretches of DNA in between genes that didn't seem to be doing very much; some even referred to these as 'junk DNA,' though a certain amount of hubris was required for anyone to call any part of the genome 'junk,' given our level of ignorance." (Francis S. Collins, The Language of God: A Scientist Presents Evidence for Belief)

Naturally, Materialists Ignored Collins' Hubris Warnings and Those of Creationists

The term "Junk DNA," coined by Susumu Ohno over 40 years ago, is quite obviously a pejorative term intended to suggest lack of design and thus lack of a Designer-God. A typical evolutionist challenge to creationists goes something like this: "Anti-evolutionists: can you explain why God would make 'junk' DNA?
A good portion of our genetic code has no apparent purpose ... that is, until you account for millions if not billions of mutations that no longer have a phenotype in modern humans." ...Evolutionist, Anonymous

Richard Dawkins, the world's preeminent atheist, said the following with unconcealed sarcasm: "Once again, creationists might spend some earnest time speculating on why the Creator should bother to litter genomes with untranslated pseudogenes and junk tandem repeat DNA." (Dawkins: The Information Challenge) "...it is a remarkable fact that the greater part (95 percent in the case of humans) of the genome might as well not be there, for all the difference it makes." The Greatest Show on Earth

Although ENCODE wasn't about "pseudogenes", there is increasing evidence that they have until-now-undiscovered functions as well; Dawkins doubled down and tripled down with this quote from "The Greatest Show on Earth": "What pseudogenes are useful for is embarrassing creationists. It stretches even their creative ingenuity to make a convincing reason why an intelligent designer should have created a pseudogene – a gene that does absolutely nothing and gives every appearance of being a superannuated version of a gene that used to do something – unless he was deliberately setting out to fool us."

Dawkins Was Wrong: The ENCODE Findings

ENCODE Project Writes Eulogy for Junk DNA, ScienceMag.org, September 2012, by Elizabeth Pennisi: "This week, 30 research papers, including six in Nature and additional papers published online by Science, sound the death knell for the idea that our DNA is mostly littered with useless bases. A decade-long project, the Encyclopedia of DNA Elements (ENCODE), has found that 80% of the human genome serves some purpose, biochemically speaking. Beyond defining proteins, the DNA bases highlighted by ENCODE specify landing spots for proteins that influence gene activity, strands of RNA with myriad roles, or simply places where chemical modifications serve to silence stretches of our chromosomes."

"Long stretches of DNA previously dismissed as 'junk' are in fact crucial to the way our genome works, an international team of researchers said on Wednesday... For years, the vast stretches of DNA between our 20,000 or so protein-coding genes – more than 98% of the genetic sequence inside each of our cells – was written off as 'junk' DNA. Already falling out of favour in recent years, this concept will now, with Encode's work, be consigned to the history books."

Junk DNA, In the Beginning.org, Les Sherlock, Sept 2012: "Well, now it is the evolutionists who are embarrassed – or certainly should be. For 40 years, ever since Susumu Ohno introduced the term in 1972, they have been waving 'junk DNA' in the face of creationists, asking why their Creator-God would have produced DNA with only 5% that had any function. Now they know, or are beginning to find out, that it wasn't that it was without function, but simply that they knew too little about it to be aware of what it did. In fact this mirrors exactly the blunder they made 100 years ago or so, when they claimed over 100 human organs were vestigial: remnants of our evolutionary past that were no longer functional. They were wrong about vestigial organs 100 years ago, and they have been wrong for the past 40 years about junk DNA. Will they never learn?"

"Now scientists have discovered a vital clue to unraveling these riddles.
The human genome is packed with at least four million gene switches that reside in bits of DNA that once were dismissed as 'junk' but that turn out to play critical roles in controlling how cells, organs and other tissues behave. The discovery, considered a major medical and scientific breakthrough, has enormous implications for human health because many complex diseases appear to be caused by tiny changes in hundreds of gene switches. The findings, which are the fruit of an immense federal project involving 440 scientists from 32 laboratories around the world, will have immediate applications for understanding how alterations in the non-gene parts of DNA contribute to human diseases, which may in turn lead to new drugs. They can also help explain how the environment can affect disease risk. In the case of identical twins, small changes in environmental exposure can slightly alter gene switches, with the result that one twin gets a disease and the other does not. As scientists delved into the 'junk' – parts of the DNA that are not actual genes containing instructions for proteins – they discovered a complex system that controls genes. At least 80 percent of this DNA is active and needed."

Evolutionists have trumpeted the similarity of the chimpanzee genome to that of humans, claiming that since the chimpanzee DNA profile matched ours up to 98% (a debated number), this was proof of evolution. However, the 98% number related to the 2% of the respective genomes that code for protein. Given that, the ENCODE Project findings indicate that the vast majority of the two genomes are totally unrelated. In fact, the extreme differences between the two species' non-coding DNA regions are too large to have occurred in the period alleged to have existed between the supposed evolution of chimps and man.

The Conclusion of it All

William Dembski sums up both the reasons materialists have for designating portions of the genome "junk" and why finding so much function in the genome tends to eliminate the possibility that evolutionary explanations are correct: "design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term 'junk DNA.' Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as 'junk' merely cloaks our current lack of knowledge about function." ...Dembski 1998

So far, the ENCODE Project and scientists working in this area have found function for only 80% of the genome. It would now betray a certain stubborn, anti-scientific ignorance to believe function won't be found for the entire DNA code, if the world stands.
<urn:uuid:140f592d-e6d9-467f-8b6c-586e2aca15a8>
2.703125
2,697
Personal Blog
Science & Tech.
37.643037
For 33 years, America's Voyager spacecraft have been flying toward the edges of our solar system. TIME surveys the most notable interstellar scenes captured by the Voyager's cameras along the way.

Ed Stone and his science buddies have acquired and shared many photos and much scientific data from Voyagers 1 and 2. There is an outstanding amount of this data. However, I presume that even Ed Stone has forgotten a very important and nearly devastating time in the life of Voyager 2. A few days (not sure exactly how many) after launch, Voyager 2 was "lost in space". The Deep Space Network (DSN) was unable to acquire the spacecraft. JPL engineers and data analysts got busy to find it. With vigilant tracking acquisition procedures and alert data processing and analysis, the spacecraft was found. The observed tracking data malfunctions were duplicated on the ground-based mockup of Voyager. The failure mode was determined, and new DSN tracking procedures were developed to acquire, track and process the data from the impaired Voyager 2. As a result of the successful JPL engineering effort, scientists have been able to acquire and process 35 years (and counting) of data from Voyager 2. Those of us who worked on the rescue would appreciate seeing the debacle mentioned somewhere in the Voyager history. I, personally, have explored the Voyager program websites of JPL and NASA and have not found even a hint that there was ever as much as a "glitch" in the flight history.

Did you know that Voyager is no further in space than it was here on earth... There is no measure in space, only distances from other planets... there is no beginning and no end to space, therefore you cannot measure distance...

@JimPurrington Of course you are wrong. Earth is a planet, and there is a certain, constantly increasing distance from Earth to Voyager. There is a relative distance from Voyager to every galaxy, star, planet, asteroid, everything, in the universe.

I don't know if anyone else feels it too, but I would have liked a less "touched up" photo of the Great Red Spot on Jupiter... I know most astronomical photos (including from Hubble) have the actual visible wavelengths changed to emphasize the detail of the astronomical object and its beauty... But many times I yearn to see things closer to how they are, even if they cannot be exactly like the original and have to be altered somewhat to bring out the detail... I would take a less colourful and distinctive photo any day over one that has been altered significantly to, apparently, cater to public tastes... To me, that makes the "pale blue dot" pic (referred to by someone else) a lot more impressive and exciting.

Well, it's only "Time" (pun intended), but you would think an article on science could at least report that it's been 35 years since 1977 and not 33.

'Anything Japanese can do, Nigerians can do better'...says 20-year-old student who designed amateur solid rocket propellant. From ALOYSIUS ATTAH, Onitsha. Are you one of those worried that our universities do not produce researchers and innovators anymore? Maybe you have lost hope that nothing good will ever come out of our ivory towers again, and because of that, are making plans to send your children abroad for higher education. There is hope.
The indomitable spirit and ingenuity of Nigerians that were on display during the Nigerian Civil War, when Biafrans produced "shore batteries" rockets, "ogbunigwe" (the dread mass killer) and other war arsenals, are still present with us. This was in evidence, recently, during projects defense organized by the Department of Physics and Astronomy, University of Nigeria, Nsukka. A 20-year-old of the department called Idoko Modestus Chijioke stunned the panel of assessors when he presented an amateur solid rocket propellant as his own project. His lecturers not only marveled at his ingenuity; his supervisor, Dr. J. A. Alhassan, was so proud of him that he called on the federal government to assist the young genius to achieve the best so as to conquer his world.

The amateur rocket has PVC pipe as the motor casing, aluminum as the nose cone, and a ½-inch PVC cover, aluminum and a ½-inch diameter pipe as the nozzle. The propellant is made up of 325 grams of potassium nitrate (oxidizer) and 175 grams of sorbitol (sugar), making a total of 500 grams. The PVC pipe is of ¾-inch diameter, and 65 cm in length. The rocket motor igniter is a mixture of potassium nitrate and charcoal at an 80/20 ratio, formed into a black powder (gunpowder).

Amateur rocketry is also known as experimental rocketry. The project design objective, according to Chijioke, was to test the workability of potassium nitrate (oxidizer) and sorbitol (fuel) blended into a KN-SB propellant for an amateur rocket made with local materials. The rocket, which was launched successfully on one of the hilltops behind the university, attained an estimated height of 35 m in flight.

In a chance encounter with this reporter, Chijioke shed more light on how he came up with the idea for the project. "I came up with this project because of a childhood dream of being a rocket scientist," he said, "and this particularly led me into the Department of Physics and Astronomy of UNN, to get a good grasp of the physics/science of rockets and an understanding of astronomy, for which rockets are basically built".

Asked what challenges he encountered in the course of carrying out the project, the soft-spoken young man said he cannot point to any because, according to him, "life is all about challenges but with focus and determination, success will surely come". "Some people around me thought this was not possible," he added. "But this only made me determined to prove them wrong. The other challenges were normal in amateur rocketry, as several trials were made and several failures recorded. But at last the design objective was obtained. I have dreams and tall ambitions. One of them is to be in space one day. I only pray that my dreams will get support both from government and well-meaning individuals so that they will be actualized".

In a chat with Education Review, Alhassan, his project supervisor, said that, through his project, young Chijioke has demystified science. "The solid rocket propellant constructed by Idoko Chijioke, though an adaptation of an amateur astronomer's experiments, is original. He fabricated the rocket from locally available materials from our environment. Whenever we hear of rockets, our minds by reflex action go to the technologically advanced world. Chijioke has demystified science by his effort. If he is properly motivated and equipped, he can break the ice in the scientific world. There is hope for Nigeria's national transformation if we can support and fund scientific innovations like my student has done," he said.
Culled from the Daily Sun (CAMPUS SQUARE), Tuesday, October 16, 2012, pages 31 and 32. He really needs to be encouraged. http://www.unn.edu.ng/news/unn-student-develops-amateur-rocket-propellant

"TIME surveys the most notable interstellar scenes captured by the Voyager's cameras along the way." That must be a very small survey. Voyager has not yet reached interstellar space.

@dentate I saw this headline and thought maybe Voyager had taken a "goodbye solar system" photo in the past year or so: not in interstellar space, but just from very, very far out. But no. These are great photos, but they are old photos from our planetary tour. Fool me with a fake headline, why don't you, TIME?!

They left out Voyager's greatest photo - the pale blue dot. Go have a look: http://en.wikipedia.org/wiki/Pale_blue_dot

@nabzif Yes, everyone should see it. Also the new "Pale Blue Dot" photo from Cassini, which is staggering: Earth is the dot at top left. Saturn is backlit by the sun; basically it's an eclipse of the sun by Saturn.

@dj436582 In the cosmic realm, 11 billion miles is nothing. Compare it to a farmer who has his plot of land in a remote area; that plot of land is Earth. Where Voyager is would be the equivalent of going out probably only a hundred yards from his front door.

If you have not watched it recently, or have never seen the first Star Trek movie, where the Enterprise encounters V'Ger, it is worth a view, just to gain some perspective on how totally cool this is.
<urn:uuid:55cb2522-80c5-4f25-a815-bee1c0fd088e>
3.046875
1,926
Comment Section
Science & Tech.
48.684312
When two atoms of the same kind are bonded through a single bond in a neutral molecule, one half of the bond length is referred to as the covalent radius. This is unambiguous for molecules such as Cl2, the other halogens, and for other cases such as hydrogen, silicon, carbon (as diamond), sulphur, germanium, tin, and a few others. However, for oxygen, O2, the situation is less clear, as the order of the oxygen-oxygen bond is double. In this case, and indeed for most of the periodic table, it is necessary to infer the covalent radius from molecules containing O-O single bonds or from molecules containing a C-X bond in which the covalent radius of X is known. The data for s- and p-block elements are broadly consistent across a number of sources, but note that the values quoted for N (70 pm), O (66 pm), and F (60 pm) are sometimes less than those quoted here. Also, the value for hydrogen is sometimes given as 30 pm. Sometimes sources give the values for the Group 1 and Group 2 metals as larger than those given here. It may be necessary to treat the values for the d-block elements with some caution, as values are not often given in most sources.
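A common use of covalent radii is the quick estimation of a single-bond length by summing the radii of the two bonded atoms. Below is a minimal sketch of that arithmetic in Python; the hard-coded radii (in picometres) are illustrative values of the kind discussed above, and sources differ, so treat them as assumptions rather than an authoritative table.

# Estimate single-bond lengths by summing tabulated covalent radii.
# Radii in picometres; illustrative values only, since sources differ
# (e.g. H is sometimes given as 30 pm, N as 70 pm, as noted above).
covalent_radius_pm = {"H": 37, "C": 77, "N": 75, "O": 73, "F": 71, "Cl": 99}

def estimated_bond_length(a, b):
    # Estimated A-B single-bond length as r(A) + r(B), in picometres.
    return covalent_radius_pm[a] + covalent_radius_pm[b]

print(estimated_bond_length("Cl", "Cl"))  # 198 pm, i.e. r(Cl) = 99 pm
print(estimated_bond_length("C", "O"))    # ~150 pm for a C-O single bond

The homonuclear case (Cl-Cl) simply recovers twice the radius; the heteronuclear case (C-O) illustrates how an unknown radius can be inferred from a known C-X bond length, as the text describes.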
<urn:uuid:9eb31cd1-525c-49f7-8783-cc237a8ad928>
3.53125
304
Knowledge Article
Science & Tech.
54.87
Substrate-Induced Band-Gap Opening in Epitaxial Graphene

Prospective challengers to silicon, the long-reigning king of semiconductors for computer chips and other electronic devices, have to overcome silicon's superb collection of materials properties as well as sophisticated fabrication technologies refined by six decades of effort by materials scientists and engineers. Graphene, one of the latest contenders, has a rather impressive list of features of its own but has lacked a key characteristic of all semiconductors, an energy gap (band gap) in its electronic band structure. A multi-institutional collaboration under the leadership of researchers with Berkeley Lab and the University of California, Berkeley, has now demonstrated that growing an epitaxial film of graphene on a silicon carbide substrate results in a significant band gap, 0.26 electron volts (eV), an important step toward making graphene useful as a semiconductor.

First produced as a free-standing layer in 2004, graphene's characteristics—ballistic electron transport (i.e., without electron scattering), electrical conductivity controllable by chemical doping or by an electric field, high thermal conductivity, and high quality and strength—quickly stamped it as a possible material for future generations of semiconductor devices that are faster, smaller, cheaper, and more durable than today's silicon-based devices. However, as a two-dimensional sheet of carbon atoms arranged in a hexagonal pattern, graphene lacks a gap between the top of its valence band and the bottom of its conduction band because the two carbon atoms in its crystallographic unit cell see the same atomic environments, a symmetry that causes the two bands to just touch at the vertices of the Brillouin zone (a kind of unit cell in reciprocal or momentum space). Several promising efforts are underway to induce such a gap by breaking the symmetry (see ALSNews, Vol. 275), but the Berkeley-led group has taken a new approach.

In brief, the group grew epitaxial layers of graphene on silicon carbide substrates by thermal decomposition of a silicon carbide surface oriented so that the silicon atoms were exposed. The interaction between the remaining carbon atoms and the underlying substrate resulted in a graphene layer configured in such a way that one of the carbon atoms in each unit cell has a neighboring atom in the atomic layer below and one does not, thus breaking the symmetry. Working at ALS Beamlines 12.0.1 and 7.0.1 (the Electronic Structure Factory), the group members used angle-resolved photoemission to investigate the electronic structure of the epitaxial graphene. Measurements of the photoemission intensity as functions of the photoelectron kinetic energy and the photoelectron momentum (derived from the angle of emission) yielded a map of the electron band structure (energy vs. momentum) with a sizable energy gap of 0.26 eV at the Brillouin zone vertices. However, the Fermi energy (maximum energy occupied by electrons) was well up into the conduction band, whereas for a normal semiconductor the Fermi energy would be in the band gap. In additional experiments with increasing numbers of graphene layers, the team found that the size of the energy gap decreased with thickness and all but disappeared at four layers.
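As a piece of standard angle-resolved photoemission background (assumed here; the article does not spell it out), the momentum axis of such a band map is recovered from the measured kinetic energy and emission angle of the photoelectron:

\[
\hbar k_{\parallel} = \sqrt{2 m_e E_{\mathrm{kin}}}\,\sin\theta
\]

where \(E_{\mathrm{kin}}\) is the photoelectron kinetic energy, \(\theta\) the emission angle from the surface normal, and \(m_e\) the electron mass. Sweeping the detection angle at fixed photon energy therefore traces out energy versus momentum directly, which is how a gap such as the 0.26 eV opening at the Brillouin zone vertices can be read off the map.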
Finally, detailed measurements of the change in the symmetry of the photoemission intensity around the vertices, from six-fold to three-fold, near the energy where the extrapolated valence and conduction bands would just meet (known as the Dirac point; see ALSNews Vol. 277), were consistent with the proposed symmetry-breaking mechanism, provided that the expected buffer layer of carbon atoms was present between the graphene and the silicon carbide. Next on the agenda are finding ways to control the width of the band gap, perhaps by using a different substrate material with a different graphene–substrate interaction strength, and to move the Fermi energy from the conduction band into the band gap to allow transistor action.

Research conducted by S.Y. Zhou and A. Lanzara (University of California, Berkeley, and Berkeley Lab), G.-H. Gweon (University of California, Berkeley and Santa Cruz), A.V. Fedorov (ALS), P.N. First and W.A. de Heer (Georgia Institute of Technology), D.-H. Lee (University of California, Berkeley), F. Guinea (Instituto de Ciencia de Materiales de Madrid, Spain), and A.H. Castro Neto (Boston University).

Research Funding: National Science Foundation; U.S. Department of Energy, Office of Basic Energy Sciences (BES); and Berkeley Lab Laboratory Directed Research and Development. Operation of the ALS is supported by BES.

Publication about this research: S.Y. Zhou, G.-H. Gweon, A.V. Fedorov, P.N. First, W.A. de Heer, D.-H. Lee, F. Guinea, A.H. Castro Neto, and A. Lanzara, "Substrate-induced bandgap opening in epitaxial graphene," Nature Materials 6, 770 (2007).
<urn:uuid:571659b4-ae46-4593-a560-842e3a13c6ab>
3.34375
1,063
Academic Writing
Science & Tech.
39.671764
butadiene (byōtˌədīˈēn), colorless, gaseous hydrocarbon. There are two structural isomers of butadiene; they differ in the location of the two carbon-carbon double bonds in the butadiene molecule. One (1,2-butadiene) has the formula CH2:C:CHCH3. The other (1,3-butadiene), often called simply butadiene, has the formula CH2:CHCH:CH2; it is used in the manufacture of synthetic rubber, latex paints, and nylon and is obtained chiefly by dehydrogenation of butane and butene obtained by cracking petroleum. Chloroprene and isoprene are the 2-chloro- and 2-methyl- derivatives of 1,3-butadiene; they also are used in the synthesis of rubber. The Columbia Electronic Encyclopedia, 6th ed. Copyright © 2012, Columbia University Press. All rights reserved.
<urn:uuid:ccf2fd6f-b8dc-46e0-b5c6-8ed8bc485f48>
3.421875
230
Knowledge Article
Science & Tech.
31.142045
Water Pressure over MISO Instrument Frame for Last 4 Days

The tide data below was gathered by a high-precision digital pressure sensor deployed on the MISO instrument frame. This 4-day time series shows pressure in decibars caused by the depth of water above the transducer, which is approximately 1 m above the bed. Each data point represents a 5-minute average of the pressure, which filters out the contribution of surface gravity waves. The resulting plot of water depth shows the sea-height variations due primarily to the lunar tides. The variability in the tidal range results from the addition of different tidal components, which cause complicated "beats" in the tidal cycle. The time axis is in yeardays, where yearday 1.000 is the start of 1 January.

Pressure over MISO Instrument Frame for Last 2 Minutes

The wave data below was collected by the same high-precision digital pressure sensor noted above. This 2-minute time series shows pressure in decibars at the original two-samples-per-second data rate, representing the sea-height variations due to surface gravity waves.
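A minimal sketch of how such a 5-minute average separates tides from waves (assumptions: plain NumPy, and a synthesized pressure record standing in for the sensor stream): at 2 samples per second, each 5-minute bin holds 5 x 60 x 2 = 600 samples, and averaging over a bin cancels the few-second wave oscillations while leaving the many-hour tidal signal intact.

import numpy as np

# Hypothetical 2 Hz pressure record (decibars); in practice this comes
# from the sensor, and is only synthesized here for illustration.
fs = 2.0                                          # samples per second
t = np.arange(0, 4 * 86400, 1 / fs)               # 4 days of timestamps (s)
tide = 10 + 0.8 * np.sin(2 * np.pi * t / 44714)   # ~12.42 h lunar tide
waves = 0.3 * np.sin(2 * np.pi * t / 8.0)         # 8 s surface gravity waves
pressure = tide + waves

# 5-minute block average: 600 samples per bin.
bin_size = int(5 * 60 * fs)
n_bins = pressure.size // bin_size
five_min_avg = pressure[: n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
# The 8 s wave signal averages out over each bin; the tide survives.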
<urn:uuid:225b9ed6-9dfb-44b6-aa4f-9f0d4a3773bf>
2.84375
215
Knowledge Article
Science & Tech.
34.934929
View Full Version : Aerial refueling for rockets?

01-17-04, 12:17 PM
The first actual transfer of fuel from one aircraft to another was little more than a stunt. On November 12, 1921, wingwalker Wesley May climbed from a Lincoln Standard to a Curtiss JN-4 airplane with a can of fuel strapped to his back. When he reached the JN-4, he poured the fuel into its gas tank. Needless to say, this was not the most practical way of refueling an airplane in flight. Ever since, in-flight fueling has been improved to become a semi-automated process which can be done at high speeds, transferring large amounts of fuel. So, if planes can be refueled, why not use a very long hose suspended from a stratospheric blimp to pump fuel to the first stage of a rocket during launch, extending its range with a lower starting weight? The hose I think of should be like 30 km long and be reeled in by the blimp as the rocket goes upwards, so as not to let the weight of the hose influence the performance/trajectory of the rocket. Maybe the hose could even be used to support the rocket several kilometres above the refuel blimp (the hose should be unreeled again), before the weight of the hose defeats its purpose. As the rocket finally detaches itself, firing its second stage, the first stage is captured by, and dangling from, the refuel hose and can be reused; no need for a parachute system to recover the first stage.

01-17-04, 02:53 PM
Beamed (http://adsabs.harvard.edu/cgi-bin/nph-bib_query?1987STIN...8817914M) energy (http://www.lpw.uah.edu/Program.html) will be a much more practical "hose".

01-17-04, 03:10 PM
When I consider that they can barely manage to hit another incoming rocket on a highly funded, high-tech Star Wars program, what makes you think they could match orbits, velocity etc. for long enough to transfer fuel? Remember the refuelling vehicle will have to be travelling at the same speed as the rocket.

01-17-04, 03:28 PM
You're going to reel in two hoses (fuel and oxidiser) to a blimp at a linear speed of ~3-4000 m/s??? That reel is going to have some interesting bearings.... Not to mention the motor - 30 kilometres of two hoses weighing, what, 6.75 lb per 50 feet (based on firefighting hose and ignoring the cryogenics that you'd need for real fuel and oxidiser - and these are for 1.5 inch diameter hoses, a bit small for the purpose) - that means the motor has to haul 12,000 kilogrammes upwards at up to 3,000 to 4,000 metres per second inside of, say, five minutes (it's about nine minutes for the shuttle to get to orbit if memory serves) - that's about 22 kilotons of TNT's worth of energy in the space of five minutes. (Hiroshima was 13 kTons...)

01-17-04, 10:55 PM
Why not develop that airborne launcher concept? The L-1011s carry a load underneath, and half the ride is already done for you. I know it's already in service, and the An-225 would serve as a perfect launch platform.

01-18-04, 12:34 AM
Undecided: Why not develop that airborne launcher concept?
I too wonder about this. The Antonov An-225 is the biggest plane on earth. First time I saw a picture of it I thought it looked like a C-5, but I googled it and it's got more than twice the payload (250,000-275,000 kg? to the C-5's 122,472 kg). With twice the payload the Antonov could lift more, but if NASA puts some research into this concept it could probably use a few of the 50-plus US military C-5's and develop a lighter second-stage vehicle.
The only problem I expect (besides NASA's usual bull) is trying to make the second stage efficient; it'd still have to carry enough fuel to hit orbit with a decent-sized payload. I know it might be contrary to the current American way of doing things, but I think more smaller, cheaper launches is better than a few massive launches. Besides, I've been in and around C-5's before; I don't think the US Military has released the full capabilities of them to the website I googled.

The L-1011 is in use already, doing it the smart way like you said http://www.orbital.com/ You're right about the C-5. The military doesn't release everything to the public straight. There are reasons for that. Anyway, yes, most payloads, including humans, don't need a big huge launch vehicle. There are only a few loads that need large capacity on one launch, and eventually, as we procure resources and start building those in orbit, they will no longer be needed.

01-18-04, 06:26 AM
Sparks, you are right, the bearings would be too interesting :eek: :) But I feel it is important we keep examining ways of detaching the fuel from the cargo, because that is what keeps chemical rockets so expensive. The airlaunch concept is very promising; you wouldn't even need an Antonov for large loads if you use multiple launches and assemble the parts/stages in orbit??

01-18-04, 09:55 PM
Just a few thoughts on the matter. Yea, I know the military doesn't release everything to the public (I was in the Military very recently and they didn't release everything to us). I bet NASA could get a hold of some military hardware if it asked. Assembling things in orbit would probably be best if done entirely by computer guidance systems; however, I'm not 100% positive current computers could do it. I think the second-stage craft would be two completely different craft: one for carrying passengers that can survive reentry, and a cargo craft that would be destroyed if it fails to reach orbit. Losing cargo would be bad but definitely more acceptable than losing human life, and probably cheaper than trying to build two different infallible craft or something like the space shuttle. So I figure something like this would work. First step: start launching spaceship sections into orbit. Second step: as they get near each other, a computer on each individual part is going to have to guide the parts together and connect them, or a computer satellite located near the parts could give them quick orders for connecting and course changes. Third step: send people up to check out the assembled spacecraft, run system checks, then take off to wherever. Did I miss anything? Maybe I should start a new thread for this idea, but I think it's a pretty common idea; it has probably been discussed before.

01-18-04, 10:37 PM

01-19-04, 01:03 AM
WellCookedFetus, about your beam energy.
The site you gave said, "For comparison, a typical Earth-launch vehicle (of any kind) requires 'jet' powers on the order of 0.1 MW per kilogram of vehicle." Maybe I misinterpreted it, but that suggests we'll need 0.1 megawatt per kilogram of vehicle, meaning a 10 megawatt energy supply could launch 100 kg (220 lbs). According to http://www.usbr.gov/lc/hooverdam/History/workings/powerplant.htm the rated output of the Hoover Dam powerplant is 2,074 megawatts, so the Hoover Dam at rated output could lift 20,740 kg or 45,628 lbs (damn, that's a lot). The Millstone Units 2 and 3 nuclear power plants in Connecticut have an installed capacity of over 1,900 megawatts of power on a 500-acre site designed for three nuclear plants, so if I estimate each plant at about 950 MW and a third is built, that means 2,850 MW for a total lift of 28,500 kg or 62,700 lbs. The idea has merit, but there's a lot more laser research that needs to be done and a lot of infrastructure that needs to be built if it were to become a reality. I think there's also a higher risk with this plan vs other plans. Call me crazy, but a megawatt-class laser, a craft trying to get into orbit and a massive amount of energy being produced all in close proximity to one another just sounds too risky. If NASA backed this idea 100% tomorrow, it would probably still take them 15 years to make anything happen, while the airborne launcher has already worked (http://www.orbital.com/SpaceLaunch/) and could be made to work much sooner.

01-19-04, 11:09 AM
Nope, that is absolutely right! We would need a 40 GW laser array to launch a space-shuttle-size laser plane, say 20 2-GW nuclear power plants!!! Even though it is much more technologically feasible at this time than a sky tower!

01-19-04, 12:09 PM
Definitely more feasible than a sky tower, but less than an airborne launcher. Considering the rate technology has advanced over the last 100 years, we'll probably have either a cheaper power source in the next 25 years or a better way of getting into orbit. Anti-gravity, if you want to think science fiction like this guy. (http://www.smh.com.au/articles/2003/01/28/1043534050248.html)

01-19-04, 01:54 PM
Please don't hit us with such circumstantial and unproven hypotheses on anti-gravity! You want this thread moved to pseudoscience? I'll believe it when I see proof, and so far no functioning means of such propulsion has even been verified.

01-19-04, 03:06 PM
Yea, I thought giving you that site was kind of stupid after I posted. I was trying to illustrate that the future is unpredictable, and if we dump a massive amount of resources in one direction we might be wasting them because of the rate technology is developing. I ran across this on a related search: "During off-peak hours the rates are cheaper, so the power is used to 'fill' the upper lake. During peak usage the water is allowed to flow back to the lower reservoir, producing cheap hydro power." Seems to suggest that the total megawatts produced by the power plant wouldn't have to be that high, in fact, if you only planned to launch every couple of weeks and had a near-unlimited source of water, a big powerplant and a bunch of massive hydro-electric turbines. Normally I go for the KISS approach, but that isn't always an option. I thought there was an energy launch thread, but I don't see it; maybe we should start one?

01-19-04, 03:50 PM
And do you know how big the capacitors would have to be to store enough energy for a 40 GW laser array to run for, say, 20 minutes at full thrust???
Though now that I think about it, a capacitor array the size of an oil tanker powered by one nuclear power plant does sound much cheaper than 20 nuclear power plants. Good idea :) :cool:

01-21-04, 07:12 PM
I don't know enough about electronics to know if capacitors would function well if they were that big, or with a lot of them. I did find these two sites about capacitors very enlightening. http://hop.concord.org/amu/amu.concepts.caps.html I couldn't find anything about massive capacitor arrays. Building an enormous holding tank might be cheaper because it works off technologies that have already been developed and relatively cheap materials; however, a holding tank big enough for this would probably require an effort nearly equal to building the Panama Canal. If you know of any sites about building massive capacitors, please post them. I don't know if you've mentally beaten this idea to death yet or not, but it's starting to get interesting to me.

01-21-04, 07:18 PM
Sorry RonVolk, capacitors are used in many laser applications because they release all their energy in the shortest time possible, not because they can dump lots of energy out over a long time (like 20 minutes). You'd have to have a conventional energy source like a powerplant to sustain that power draw for that length of time. And frankly, I'd hate to see the effects of a 20 GW laser beam from anywhere that could be considered close. Ionisation of the local atmosphere leading to lightning strikes and other local meteorological anomalies, radiated heat from the air molecules and water vapour in the beam, and you really wouldn't want any flocks of birds in the area...

01-21-04, 07:41 PM
Hehe, yea, flocks of "well done" bird(s). I didn't think about the atmospheric effects, except for humidity reflecting light away from the target causing a slight decrease in the laser's strength. Heated air would probably create wicked updrafts and downdrafts that would last after the laser was turned off. Oh well, time to come up with a new plan.

01-21-04, 07:50 PM
Hehe, yea, flocks of "well done" bird(s).
Er, no - that much energy would heat up the air around the beam tremendously, so if they didn't turn away fast enough, they'd be cooked and burnt crisp before they even reached the beam itself. Were they caught in the beam itself, they'd vapourise if they didn't actually become plasma... So there'd be nothing left to even recognise.
Oh well, time to come up with a new plan.
How about waiting a few years and buying one from the Japanese?

01-21-04, 08:08 PM
Just to expand on the airborne launch theory, why use transports? I think a more logical and economical way of doing it would be to use already existing rocket launchers inside, for instance, a Tu-160 or Tu-95. The Russians will most likely love to have more money to spend, so giving up one or two of the aforementioned bombers wouldn't mean too much. The Antonov An-225 would be able to carry a shuttle on its back if it really had to, so it could carry huge loads; I mean, its MTOW is 1,000,000 lbs! I don't think that NASA should do it; I think private industry should develop this in co-operation with the Antonov Company. It would be significantly cheaper, and you don't need a launch pad; you can launch from the equator, above all the weather (which has stopped how many missions in the past?). So IMO that is the best option: the tech is already there, now all you need is American funds and Russian ingenuity.

01-21-04, 10:12 PM
Thanks for the link EI Sparks, incredible piece of technology the Japanese are developing!
I'd wait and buy one, but it's more fun to try to design something better. At least until they come to the local Toyota dealer; then I give up on reinventing the wheel and buy one.

01-22-04, 08:19 PM
Undecided, the second-stage vehicle or vehicles would still have to be designed. Between the two aircraft you suggested, I would go with the Tu-160; from what I can tell it has a 4 km higher ceiling and a higher maximum speed. I didn't find out anything about the payloads, but I'd guess the Tu-160 would have more due to its thrust. It would probably be relatively cheap to buy one (for an aircraft) because Ukraine destroyed theirs. http://www.airforce-technology.com/projects/tu160/
I think a more logical and economical way of doing it would be to use already existing rocket launchers inside, for instance, a Tu-160 or Tu-95.
The Tu-160 uses the Kh-55MS cruise missile; it's got a range of 3,000 km and carries a 200 kiloton warhead. I couldn't find anything else about the missile except that it looks like it's dropped before it's ignited in the picture on the previously posted website. I don't think that would interfere too much; the missile would have to be changed somewhat to be able to go up instead of down anyway. Anybody know about how much a 200 kiloton nuclear warhead weighs?

01-22-04, 08:58 PM
Well, I think a better missile than the subsonic Kh-55 would probably be the supersonic Alfa missile; imagine the Tu-160 going Mach 2 at 60,000 ft, already halfway there. The Russians could easily, IMO, design a ramjet-powered missile that could go Mach 10+. The load that the Tu-160 can carry is about 9,000 kg, but the max load is 40,000. http://www.fas.org/nuke/guide/russia/bomber/tu-160.htm The Tu-95 is too slow and flies too low to do anything substantive...

01-22-04, 09:39 PM
The Tu-95 is too slow and flies too low to do anything substantive...
My opinion exactly. Mach 10 isn't a good idea if we want to send people up, but electronics could most likely take it. Since I'm a big fan of sending people and cargo separately, it really wouldn't matter; just a separate, slower rocket needs to be designed. Loading the plane up to maximum would probably make its max altitude and max speed less. To launch something larger than the cruise missile, the bomb bay would have to be redesigned, but that would be simple compared to designing the second stage itself. Hehe, now all we need is investors and a corporate charter.

01-23-04, 08:38 AM
Mach 10 isn't a good idea if we want to send people up
Well, how about the experiments with the X-15? They worked...

01-23-04, 04:24 PM
This site http://www.sierrafoot.org/x-15/pirep4.html thinks that Mach 6.7 was the most achieved by the X-15. I'm not really sure how many times the speed of sound a human being could take. I know that at 7 Gs or more it's really uncomfortable. LOX and hydrogen seem to be coming up a bunch in these searches; the second stage should probably be powered by them because they give a lot of bang for their weight.

01-23-04, 08:10 PM
I'm not really sure how many times the speed of sound a human being could take.
The limit's not the speed - the limit is the acceleration. Hence the ability of astronauts to travel at 7 km/sec or so in LEO while experiencing "weightlessness".

01-23-04, 10:19 PM
Thanks, EI Sparks. I didn't realize that. Suppose it should have dawned on me before.
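The power arithmetic traded back and forth in this thread is easy to check. Here is a minimal sketch in Python using only the thread's own figures (the 0.1 MW/kg "jet power" rule of thumb and the 40 GW, 20-minute laser scenario); these are the posters' assumptions, not vetted engineering numbers.

# Back-of-the-envelope check of the thread's beamed-launch numbers.
BEAM_POWER_PER_KG = 0.1e6      # W per kg of vehicle (thread's rule of thumb)

def liftable_mass_kg(plant_power_w):
    # Vehicle mass a given beam power could launch, per the 0.1 MW/kg figure.
    return plant_power_w / BEAM_POWER_PER_KG

print(liftable_mass_kg(2074e6))   # Hoover Dam's 2,074 MW -> ~20,740 kg
print(liftable_mass_kg(2850e6))   # three Millstone-size units -> ~28,500 kg

# Energy a capacitor bank would need for the 40 GW, 20-minute scenario:
energy_j = 40e9 * 20 * 60         # 4.8e13 J
print(f"{energy_j:.2e} J, ~{energy_j / 4.184e12:.1f} kt TNT equivalent")  # ~11.5 kt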
<urn:uuid:a4e19d94-daa3-4fa8-bcc4-417ce7ede1a3>
2.90625
4,079
Comment Section
Science & Tech.
72.999264
Bulb color in onions (Allium cepa) is an important trait and has been used as a major criterion for classifying cultivars, but the mechanism of color inheritance is poorly understood at the molecular level. A previous study indicated that five major loci are involved in the determination of qualitative color differences in onion bulbs, and that these loci are closely related to a pigment biosynthesis pathway, where the red color in onion bulbs has been attributed to anthocyanin derivatives. In a previous study, these authors showed that the lack of dihydroflavonol 4-reductase (DFR) transcription (an enzyme in the anthocyanin biosynthesis pathway) in yellow onion is responsible for the color difference between yellow and red onions. However, they were unable to develop reliable molecular markers for the selection of yellow and red DFR alleles, possibly due to the huge genome size of the onion, which is 107-fold larger than that of Arabidopsis, and the existence of multiple genes with a very high degree of sequence homology. The objectives of the present study were the identification of the critical mutation in the DFR gene (DFR-A) and the development of a PCR-based marker for allelic selection.

The researchers examined three homologous onion DFR genes and the promoter region from two of these genes in an attempt to find a unique sequence that could be used to detect the DFR gene of interest. Based on the unique sequence of the promoter regions, they were able to develop a reliable co-dominant PCR-based molecular marker and subsequently used this marker to show perfect co-segregation of marker and color phenotypes in the F2 population originating from the cross between yellow and red onions.

Onion genomic DNA was used as a template for PCR amplification of the DFR gene. The results showed three different isoforms of this gene: the normally transcribed onion DFR gene, labeled DFR-A, and two other homologs, DFR-B and DFR-C. Both genes shared more than 95% nucleotide sequence identity with the DFR-A gene. The most conspicuous difference between the DFR-A and DFR-B genes was the length of the poly-A stretch in the 5'UTR; between the DFR-A and DFR-C genes it was a deletion of 499 bp. The researchers believed the DFR-B and DFR-C genes to be pseudogenes, since no transcripts were detected in the red onion cDNA pool. It was possible to specifically amplify only the DFR-A gene using primers designed to anneal to the unique promoter region. The sequences of the yellow and red DFR-A alleles were the same except for a single base-pair change in the promoter and an approximately 800-bp deletion within the 3' region of the yellow DFR-A allele. This deletion was used to develop a co-dominant PCR-based marker that segregated perfectly with color phenotypes in the F2 population. These results indicate that a deletion mutation in the yellow DFR-A gene results in the lack of anthocyanin production in yellow onions.

The PCR-based marker developed in this study is a direct marker for the gene causing the different phenotypes. Molecular markers like this one for important traits are especially useful in biennial crops such as onions in order to reduce the breeding period. In addition, for efficient selection of desirable colors, the combined use of molecular markers for the major genes is required. The PCR-based marker for allelic selection of the DFR-A gene developed in this study would be the first marker for the complete system and a valuable tool in onion breeding programs.
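To illustrate how a co-dominant, deletion-based PCR marker of this kind is scored (a hypothetical sketch; the band sizes and function name below are invented for illustration and are not the paper's actual values): the allele without the deletion yields the full-length amplicon, the deletion allele yields a product roughly 800 bp shorter, and a heterozygote shows both bands, which is what makes the marker co-dominant.

# Hypothetical scoring of a co-dominant deletion marker on a gel.
# Illustrative amplicon sizes only; not taken from the paper.
RED_BAND = 1300      # bp, allele lacking the ~800 bp deletion
YELLOW_BAND = 500    # bp, allele carrying the deletion

def score_genotype(bands):
    # Classify a plant from the set of PCR band sizes seen on a gel.
    if bands == {RED_BAND, YELLOW_BAND}:
        return "heterozygous (carries both alleles)"
    if bands == {RED_BAND}:
        return "homozygous red allele"
    if bands == {YELLOW_BAND}:
        return "homozygous yellow allele"
    return "unscored / failed reaction"

print(score_genotype({1300, 500}))  # both bands amplify -> heterozygote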
<urn:uuid:0ea4e7e2-a4fc-4381-8aaa-3d93d0e0e850>
2.765625
762
Academic Writing
Science & Tech.
33.470211
This page is dedicated to the many wonderful and growing resources dealing with Open Source Software on the internet. From each, I've learnt something, and I hope that others can too, about the many tools that are available in the Open Source world.
- Nine things you should know about Nautilus – A short description of some nice yet lesser-known features in the Nautilus file manager available in Gnome 2.14.
- SSH Tricks – A nice overview of SSH's (secure shell) capabilities.
- Dive into Python – A great online book about programming in Python. There is also a print edition.
- Learn To Program Ruby – A nice guide to Ruby for new programmers.
- strace – A very powerful troubleshooting tool for all Linux users – A straightforward mini-tutorial showing how to use strace to troubleshoot software problems on Linux.
- How to Report Bugs Effectively – When you find a bug in software, you want to get it fixed. With Open Source Software, filing a bug with the responsible project is the way to do this. This link will help you file effective bug reports.
<urn:uuid:3319b68b-d0a0-4d57-8d97-60714898828a>
2.75
234
Content Listing
Software Dev.
51.782255