Details of Glycemic Index (GI)

The GI Scale
The glycemic index uses a scale from 1 to 100, indicating the rate at which 50 grams of carbohydrate in a particular food is absorbed into the bloodstream as blood sugar. The main reference food (rated 100) is glucose.

GI Rating Categories
The glycemic index divides carbohydrate foods into three categories: low GI (55 or less), medium GI (56-69), and high GI (70 or more).

GI Food Testing is Ongoing
Not all foods have been given a GI value, although most food types are covered. However, because GI is measured using volunteer subjects, results can vary, so GI values for some specific foods are not yet uniformly established.

GI - Diabetes and Weight Control
Although the glycemic index was first designed to help people with diabetes manage their blood-sugar levels, dietitians and weight experts now use it as a tool to treat obesity, food cravings and appetite swings, and to improve eating habits. Both the type AND quantity of carbohydrate in our food influence the rise in blood glucose. But the glycemic index rates only a standard 50-gram serving of digestible carbohydrate in a particular food, which may not be appropriate for all foods. For example, foods whose normal serving size contains only a small amount of carbohydrate may in practice be better for blood-sugar control than foods whose normal serving size contains a large amount of carbs. Therefore, to provide a more meaningful rating system, researchers at Harvard University coined the term Glycemic Load, which applies the glycemic index to normal food serving sizes.
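Glycemic load is conventionally computed as GI multiplied by the grams of digestible carbohydrate in an actual serving, divided by 100. A minimal sketch of that arithmetic (the two foods and their GI/carbohydrate figures below are hypothetical, chosen only to illustrate the serving-size point made above):

```python
def glycemic_load(gi: float, carb_grams_per_serving: float) -> float:
    """Glycemic load = GI x grams of digestible carbohydrate per serving / 100."""
    return gi * carb_grams_per_serving / 100.0

# Hypothetical figures for illustration only:
# a high-GI food eaten in a serving with little carbohydrate...
gl_small = glycemic_load(gi=72, carb_grams_per_serving=15)   # 10.8
# ...versus a lower-GI food whose normal serving carries far more carbohydrate.
gl_large = glycemic_load(gi=40, carb_grams_per_serving=50)   # 20.0

# Despite its higher GI, the first food produces the smaller glycemic load,
# which is why GL can be a more meaningful rating than GI alone.
print(gl_small, gl_large)
```

The GI value alone would rank the first food as worse; weighting by realistic serving size reverses the ranking.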
THE HISTORY OF NEW YORK STATE & HOW A BILL BECOMES A LAW New York State History New Yorkers are rightfully proud of their state’s many achievements and contributions. This synopsis is adapted from a brief history previously printed in the Legislative Manual. The New York harbor was visited by Giovanni da Verrazano in 1524, and the Hudson River was first explored by Henry Hudson in 1609. The Dutch settled here permanently in 1624 and, for 40 years, they ruled over the colony of New Netherland. It was conquered by the English in 1664 and was then named New York in honor of the Duke of York. Existing as a colony of Great Britain for over a century, New York declared its independence on July 9, 1776, becoming one of the original 13 states of the Federal Union. The next year, on April 20, 1777, New York’s first constitution was adopted. In many ways, New York state was the principal battleground of the Revolutionary War. Approximately one-third of the skirmishes and engagements of the war were fought on New York soil. The Battle of Saratoga, one of the decisive battles of the war, was the turning point of the American Revolution leading to the French alliance and thus to eventual victory. New York City, long occupied by British troops, was evacuated on November 25, 1783. There, on December 4 at Fraunces Tavern, Gen. George Washington bade farewell to his officers. The first government of New York state grew out of the Revolution. The state convention that drew up the state constitution created a "Council of Safety" that governed for a time and set the new government in motion. In June 1777, while the war was going on, an election for the first governor took place. Two of the candidates, Philip Schuyler and George Clinton, were generals in the field. Two others, Col. John Jay and Gen. John Morin Scott, were, respectively, leaders of the aristocratic and democratic groups in the convention. 
On July 9, Clinton was declared elected, and he was inaugurated as governor at Kingston on July 30, 1777. Albany became the state capital in January 1797. Alexander Hamilton was a leader in the movement that resulted in development of the U.S. Constitution, and he was active in its ratification. New York City became the first capital of the new nation, and it’s where President George Washington was inaugurated on April 30, 1789. In the following years, New York’s economic and industrial growth made appropriate the title, "The Empire State," an expression possibly originated by George Washington in 1784. In 1809, Robert Fulton’s "North River Steamboat," the first successful steam-propelled vessel, began a new era in transportation. The Erie Canal, completed in 1825, greatly enhanced the importance of the port of New York and caused populous towns and cities to spring up across the state. The Erie Canal was replaced by the Barge Canal in 1918, and the system of waterways was further expanded by construction of the St. Lawrence Seaway. Overland transportation grew rapidly from a system of turnpikes established in the early 1800s to the modern-day Governor Thomas E. Dewey New York State Thruway. By 1853, railroads that had started as short lines in 1831 crossed the state in systems like the Erie and New York Central. During the 19th century, America became a haven for many of the oppressed people of Europe, and New York City became the "melting pot." The Statue of Liberty (dedicated in 1886 in the harbor), with its famous inscription, "Give me your tired, your poor, your huddled masses yearning to breathe free," was the first symbol of America’s mission. The international character of New York City, the principal port for overseas commerce, and later for transcontinental and international airways, has been further enhanced by its becoming the home of the United Nations, capital of the free world.
Here, people of all nations and races come to discuss and try to solve world problems in a free and democratic climate. As one of the wealthiest states, New York made tremendous strides in industry and commerce. The New York Stock Exchange, founded in 1792, has become the center of world finance. Diversified and rich natural resources, together with unmatched facilities for transportation, produced phenomenal growth in manufacturing and industry. Research and inventive genius have been extensive, especially in the fields of electronics, power and the peaceful and productive use of atomic energy. New York City also became a leading national center for art, music and literature, as exemplified by the Metropolitan Museum of Art, the Metropolitan Opera Company and large publishing houses. The state has supplied more than its share of national leaders, beginning with Alexander Hamilton, the first treasury secretary, and John Jay, the first chief justice. Aaron Burr and George Clinton served as vice presidents. Martin Van Buren, Chester A. Arthur and Grover Cleveland went from New York politics to the presidency. In the 1900s, Theodore Roosevelt and Franklin D. Roosevelt achieved the presidency, and Nelson Rockefeller served as vice president. Governors Charles E. Hughes, Alfred E. Smith and Thomas E. Dewey all were presidential candidates. NEW YORK STATE: "EVER UPWARD"

Know how a bill becomes a law...
1. Your Assembly member gets an idea for a bill from constituents and organized interest groups, or by perceiving local or statewide needs. Bills can create new laws or repeal or amend existing ones. After deciding to sponsor a bill, the Assembly member has bill-drafting specialists write it. Then, the bill is filed and gets an official number. Many bills are co-sponsored by several Assembly members. The speaker of the Assembly, who is elected by the 150 Assembly members, assigns the bill to the appropriate committee.
For example, a bill concerning tenants is assigned to the Housing Committee. The committee members (every Assembly member sits on several committees) study the bill and vote on whether to defeat, or "kill," it, hold it for further study or send it on to the full Assembly for a vote. Before going to the Assembly floor for a vote, bills may be examined by other committees as well. On the floor of the Assembly, the bill’s sponsor explains and defends it in the event there is debate. A vote on the bill is taken. If it passes, it goes to the Senate, where it undergoes a similar process. If both houses (Assembly and Senate) pass the bill, it goes to the governor, who can either sign it into law or veto it. If the governor vetoes it, the Legislature can override the veto by a two-thirds vote in favor, thus making the bill a law.
Significance and Use Accurate definition of boundary conditions is an essential part of conceptualizing and modeling groundwater flow systems. This guide describes the properties of the most common boundary conditions encountered in groundwater systems and discusses major aspects of their definition and application in groundwater models. It also discusses the significance and specification of boundary conditions for some field situations and some common errors in specifying boundary conditions in groundwater models. 1.1 This guide covers the specification of appropriate boundary conditions that are an essential part of conceptualizing and modeling groundwater systems. This guide describes techniques that can be used in defining boundary conditions and their appropriate application for modeling saturated groundwater flow model simulations. 1.2 This guide is one of a series of standards on groundwater flow model applications. Defining boundary conditions is a step in the design and construction of a model that is treated generally in Guide D5447. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use. 1.4 This guide offers an organized collection of information or a series of options and does not recommend a specific course of action. This document cannot replace education or experience and should be used in conjunction with professional judgment. Not all aspects of this guide may be applicable in all circumstances. This ASTM standard is not intended to represent or replace the standard of care by which the adequacy of a given professional service must be judged, nor should this document be applied without consideration of a project's many unique aspects. 
The word “Standard” in the title of this document means only that the document has been approved through the ASTM consensus process.

2. Referenced Documents
The documents listed below are referenced within the subject standard but are not provided as part of the standard.
D653 Terminology Relating to Soil, Rock, and Contained Fluids
D5447 Guide for Application of a Groundwater Flow Model to a Site-Specific Problem

Keywords: aquifers; boundary conditions; groundwater model; stress-dependency; water table
ICS Number Code: 07.060 (Geology. Meteorology. Hydrology); 13.060.10 (Water of natural resources)
Lox/Kerosene propellant rocket stage. Loaded/empty mass 5,000/700 kg. Thrust 122.50 kN. Vacuum specific impulse 280 seconds. All values except thrust estimated. Status: In development.
Gross mass: 5,000 kg (11,000 lb). Unfuelled mass: 700 kg (1,540 lb). Height: 9.60 m (31.40 ft). Diameter: 1.00 m (3.20 ft). Span: 1.00 m (3.20 ft). Thrust: 122.50 kN (27,539 lbf). Specific impulse: 280 s. Specific impulse sea level: 240 s. Burn time: 95 s. Number: 1.

Korea South
South Korea became familiar with large-scale rocketry through maintenance and modification activities on American-supplied Honest John and Nike Hercules tactical missiles. By the 1990s Korea had developed an independent capability to manufacture solid-propellant rocket motors of up to one tonne mass. In 1990 KARI was funded to build the first indigenous sounding rockets, flown as the KSR-I and KSR-II. In December 1997 KARI was allowed to proceed with development of a liquid oxygen/kerosene rocket motor for an orbital launcher, but this was abandoned when the South Korean government decided it wanted to be among the top ten spacefaring nations by 2015. The existing program was too limited in growth potential to allow that. Therefore it was decided to leapfrog the technology by contracting with Russian companies. First launch of the KSLV-I launch vehicle from the new space centre took place in 2010.

KSR-3
Korean Lox/Kerosene rocket engine. In development. Launch thrust 122.5 kN. Pressure-fed indigenous design. First flight 2002.

Associated Launch Vehicles
KSR-III
South Korean sounding rocket. Test bed for development of an orbital launch vehicle, powered by the liquid oxygen/kerosene engine planned for the KSLV-I. However, flown only once, in 2002.
KSLV-I
2002 South Korean orbital launch vehicle. In 2002 South Korea announced it was planning to develop a small satellite launch vehicle by 2005, based on technology flown on the KSR-III test vehicle.
By 2005 this was replaced by a completely different design, based on the Russian Angara space booster.
KSLV-III
South Korean launch vehicle, to consist of a Russian Angara first stage, a South Korean liquid-propellant second stage, and a South Korean solid-propellant apogee kick motor. Scheduled for first flight by 2015. In August 2006 the Korean press reported that the first and second stages would both be Angara-UM modules; how this configuration would work (stacked versus parallel) was unclear.
KSLV-II
South Korean launch vehicle, originally scheduled for first flight by 2010. Evidently it would have consisted of a Russian Angara first stage and a South Korean liquid-propellant second stage. In August 2006 it was reported in the Korean press that this launcher configuration was cancelled.
Lox/Kerosene
Liquid oxygen was the earliest, cheapest, safest, and eventually the preferred oxidiser for large space launchers. Its main drawback is that it is moderately cryogenic, and therefore not suitable for military uses where storage of the fuelled missile and quick launch are required. In January 1953 Rocketdyne commenced the REAP program to develop a number of improvements to the engines being developed for the Navaho and Atlas missiles. Among these was the development of a special grade of kerosene suitable for rocket engines. Prior to that, any number of rocket propellants derived from petroleum had been used. Goddard had begun with gasoline, and there were experimental engines powered by kerosene, diesel oil, paint thinner, or the jet fuel kerosenes JP-4 and JP-5. The wide variance in physical properties among fuels of the same class led to the identification of narrow-range petroleum fractions, embodied in 1954 in the standard US kerosene rocket fuel RP-1, covered by Military Specification MIL-R-25576. In Russia, similar specifications were developed for kerosene under the designations T-1 and RG-1.
The Russians also developed a compound of unknown formulation in the 1980s known as 'Sintin', or synthetic kerosene.
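The figures quoted for this stage (5,000 kg loaded, 700 kg empty, 280 s vacuum specific impulse, 122.5 kN thrust, 95 s burn) can be cross-checked with the standard rocket equations. A sketch, treating the listed values as given and ignoring gravity and drag losses:

```python
import math

G0 = 9.80665          # standard gravity, m/s^2

m_loaded = 5000.0     # kg, loaded mass as listed
m_empty = 700.0       # kg, empty mass as listed
isp_vac = 280.0       # s, vacuum specific impulse as listed
thrust = 122.5e3      # N, as listed
burn_time = 95.0      # s, as listed

# Ideal (Tsiolkovsky) delta-v: dv = Isp * g0 * ln(m0 / mf)
dv = isp_vac * G0 * math.log(m_loaded / m_empty)

# Propellant mass flow implied by thrust and Isp: mdot = F / (Isp * g0)
mdot = thrust / (isp_vac * G0)
propellant_burned = mdot * burn_time        # implied by thrust and burn time
propellant_loaded = m_loaded - m_empty      # 4,300 kg per the listed masses

print(f"ideal delta-v ~ {dv:.0f} m/s")
print(f"propellant: {propellant_burned:.0f} of {propellant_loaded:.0f} kg")
```

The propellant implied by thrust and burn time agrees with the listed loaded/empty masses to within a few percent, so the estimated figures are at least self-consistent.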
An English editor is a person engaged in editing manuscripts or other written text intended for publication. The editor is an expert in, and has a good command of, the language in which he or she edits. Today English is developing into the common language of writing and of the corporate world, so an editor is needed in both sectors: in creative writing, and in editing the documents, mails and other texts commonly used in business dealings.

Importance of an English Editor
The editor plays an important role in the publication and business sectors, as it is in his or her hands to detect all the errors that occur during writing and to rectify them before the text is finally published or sent to the intended recipient. The editor does two types of editing. One is copyediting, commonly known as proofreading; the other is content editing, a higher level of editing in which the editor makes major changes to the original manuscript so that the writer's ideas are brought to a level easily understood by the reader. A good editor must have a good command of the English language and a thorough knowledge of grammar, as he or she must ensure that the final copy delivered after editing is correct, comprehensible and concise, with clear ideas and consistency throughout the text. Professional English editors are very helpful for people who use English as a second language.
Wikipedia on particle physics

Real quick: I was told that Wikipedia's definition of particle physics was very bad. And indeed, this is how it read:

"Particle physics is a branch of physics that studies the elementary subatomic constituents of matter and radiation, and their interactions. The field is also called high energy physics, because many elementary particles do not occur under ambient conditions on Earth. They can only be created artificially during high energy collisions with other particles in particle accelerators. Particle physics has evolved out of its parent field of nuclear physics and is typically still taught in close association with it. Scientific research in this area has produced a long list of particles."

What? Particles that can only be created in accelerators? Particle physics is taught together with nuclear physics? Research produces particles (that one is great!)? What world does this person live in? I rewrote it:

"Particle Physics is a branch of physics that studies the existence and interactions of particles, which are the constituents of what is usually referred to as matter or radiation. In our current understanding, particles are excitations of quantum fields and interact following their dynamics. Most of the interest in this area is in fundamental fields, those that cannot be described as a bound state of other fields. The set of fundamental fields and their dynamics are summarized in a model called the Standard Model and, therefore, Particle Physics is largely the study of the Standard Model particle content and its possible extensions."

I think it turned out much better. Let's see how long it takes for some hot-headed Wikipedia editor to revert it. Participating in Wikipedia is a drag these days because of people like that.
Charles Benedict Davenport
Charles Benedict Davenport, (born June 1, 1866, Stamford, Conn., U.S.—died Feb. 18, 1944, Cold Spring Harbor, N.Y.), American zoologist who contributed substantially to the study of eugenics (the improvement of populations through breeding) and heredity and who pioneered the use of statistical techniques in biological research. After receiving a doctorate in zoology at Harvard University in 1892, Davenport taught there until 1899, when he left to join the faculty of the University of Chicago, where, from 1901 to 1904, he was curator of the zoological museum. He directed the department of genetics (1904–34) for the Station for Experimental Evolution at Cold Spring Harbor, N.Y., and also founded and directed the Eugenics Record Office (1910–34). While teaching experimental morphology at Harvard, he used statistical methods in population studies. Partly as a result of breeding experiments with chickens and canaries, he was one of the first, soon after 1902, to recognize the validity of the newly discovered Mendelian theory of heredity. In Heredity in Relation to Eugenics (1911), he compiled evidence concerning the inheritance of human traits, on the basis of which he argued that the application of genetic principles would improve the human race. Davenport was editor of Genetics (from 1916) and the Journal of Physical Anthropology (from 1918). Davenport’s other important works include Statistical Methods with Special Reference to Biological Variation (1899), Eugenics (1910), and Body Build and Its Inheritance (1923).
What is a Laparoscopic Cholecystectomy?
Laparoscopic cholecystectomy is a minimally invasive surgical procedure for the removal of the gallbladder.
How is it done?
Laparoscopic cholecystectomy is generally performed under general anesthesia. During the procedure the abdomen is inflated with carbon dioxide to provide room for the procedure. Through a small incision made at the navel, a laparoscope is inserted into the abdomen. Three additional small incisions are made to allow the entry of the instruments. The gallbladder is located and the cystic duct and artery are tied off. The gallbladder is then removed and the incisions are closed. Sometimes an x-ray (cholangiogram) is taken on the operating table to look for stones or abnormalities in the common bile duct.
Why is it done?
Gallbladder removal is usually done to treat the following conditions:
- Gallbladder disease, such as gallstones
- Infection and inflammation of the gallbladder
- Gallbladder polyps
Laparoscopic surgery is associated with less postoperative pain, a shorter hospital stay, and better cosmetic results than the open surgical procedure.
Risks & complications
There are possible risks and complications associated with anesthesia, including respiratory or cardiac malfunction. Other complications include:
- Injury to the bile duct, blood vessels or other abdominal organs
- Minor shoulder pain (from the carbon dioxide gas)
- Postoperative bleeding
Risks can be reduced by following the surgeon's instructions before and after surgery. Open surgery (laparotomy) may have to be performed in patients with bleeding; if there is abnormal anatomy resulting from acute infection; or where scarring from previous surgeries or infections prevents a clear view of the anatomy. The surgeon will make the final determination of each patient’s eligibility for the procedure after an examination and consultation with the patient.
How To Win At Science Fairs (Dec, 1960)
by Ronald Benrey
YOU CAN WIN at a Science Fair as long as one thing interests you more than winning does. This is your project itself. It is going to be judged on scientific thought, creative ability, and presentation. You will really have to know the field your project is concerned with. This takes effort. Since you lack the means of a professional laboratory, you will have to do much with little. This takes trial and error and just plain work. Your presentation must be attractive and clear. This means good workmanship, which takes time and care. You are going to have to show some originality. After all, there is no use doing what everybody else is doing: be different. For this, you have to have the other three under control. By the way, the “laymen” who see your exhibit will ask all kinds of questions. Have good answers at your fingertips. The judges won’t be laymen, and any double-talk will scream to them that you don’t know your subject. It may also make them suspect that the best parts of your project are not your work. This would be unjust, perhaps, but deadly. Now, whether your entry covers a large table top or can just be tucked under your arm, it is going to be a big job. It can’t be left for a “crash program” in the last few weeks before the Fair. It is going to eat up big portions of your time, energy, and spending money for the next several months. All this demands your interest. But it isn’t simply a matter of “fun.” Licking this challenge may be a turning point in your life. With or without a scholarship prize, your career may begin with it. As a reader of Electronics Illustrated, your project will probably deal with electronics or applied physics rather than with biological or earth sciences. Select your topic carefully from a broad subject that really interests you. A massive effort in the direction of a passing fancy will result in a mediocre project at best.
Take a limited subtopic that you think worth investigating and that you feel able to handle. To ease financial strain, plan now to build your project over a long period of time, say six months, on a pay-as-you-build basis. Once you have a rough idea of your project’s general form, don’t dash into construction. Visit technical libraries and learn all you can about current professional work in the field, and its technical jargon. This will give you much important information and helpful hints, and when you finally face the judges, you will know your subject. Here is a prickly question. It is up to you to be realistic and honest with yourself when you choose a topic. Your science teachers and advisers will certainly be helpful, but the final decision must be yours. In other words, if you have never handled a soldering iron before, don’t take on a project requiring elaborate electronic instrumentation. If you have enough time you can work up to a complex project by building a few simpler devices, like many described in EI. This is another reason for starting NOW. Why not get your feet wet by assembling some test equipment from kits? You will certainly need a multimeter anyway, for any project, and it will be something you can use “forever.” Another touchy subject: discussion of this often scares off good potential science fairers. Nobody requires or expects a science fair project to produce a radical new scientific discovery. However, this does not imply that an entrant can’t find a new angle on an old problem. Merely duplicating a project described in a magazine shows the judges only one thing: the builder can follow directions. The main benefit of entering a science fair is the challenge of thinking a real problem out, all the way through. Your project can be for “demonstration” rather than “research,” but make sure you come up with fresh, clear, meaningful ways to present your material. Stay away from last year’s winning project: it was good last year.
Avoid “staples” (like Tesla coils) unless they are only part of a wider original project. Your project should be well presented and look impressive, but impressive need not mean expensive. Judges seldom look twice at an exhibit loaded down with excess and borrowed equipment when the same results could have been obtained more economically and without false show. Novel use of common materials shows creative ability, and this is an important judging criterion. Remember, how you solved your problem is what counts at a science fair, not merely that you solved it. Also, neatness counts! Aside from being impossible to troubleshoot, a rat’s nest of wiring is typical of losing projects. Time spent color-coding leads, installing wire harness and cable clamps will result in a much more attractive and more reliable project. But know what you are doing! Don’t harness leads in a circuit that demands point-to-point wiring, or cable grid and plate leads together in an amplifier circuit. Read up on layout and construction techniques, and allow yourself time to make and correct mistakes. Prior planning will also pay off in dollars and cents, since you can save by purchasing some components (like resistors) in quantity, and if you live near a big city you can shop around for some items in the military surplus stores, modifying your design if necessary to take odd-value components. Now, sit back and start your thinking. The time to start is right now.
IS YOUR WINNING PROJECT HERE?
RADIO TELESCOPE: Home-built sensitive low-noise receiver, simple antenna system. Try to make a simple “radio map.”
GUIDANCE SYSTEM: For model car. Can be programmed to run around the science fair grounds without hitting anything, or to reach a pre-chosen destination.
SOLAR CELLS: Home-built unit as part of a demonstration of the basic physics of solar cells; display of recent professional research results; off-beat practical applications (eyeglass-type hearing aid?).
MOON MOUSE: “To be landed on the Moon.” Self-propelled, radio-controlled from Earth, instrumented and transmitter-equipped. Some functions solar powered?
These are only suggestions. You may come up with ideas regarding fuel cells, space communications, navigation, etc.
Nouns wanting in the Plural
Some nouns are ordinarily found in the Singular number only (singulāria tantum). These are:
1. Most proper names: as, Caesar, Cæsar; Gallia, Gaul.
2. Names of things not counted, but reckoned in mass: as, aurum, gold; āēr, air; trīticum, wheat.
3. Abstract nouns: as, ambitiō, ambition; fortitūdō, courage; calor, heat.
Many of these nouns, however, are used in the plural in some other sense. The plural of a proper name may be applied to two or more persons or places, or even things, and so become strictly common: duodecim Caesarēs, the twelve Cæsars; Galliae, the two Gauls (Cis- and Transalpine); Castorēs, Castor and Pollux; Iovēs, images of Jupiter. The plural of names of things reckoned in mass may denote particular objects: as, aera, bronze utensils; nivēs, snowflakes; or different kinds of a thing: as, āerēs, airs (good and bad). The plural of abstract nouns denotes occasions or instances of the quality, or the like: quaedam excellentiae, some cases of superiority; ōtia, periods of rest; calōrēs, frīgora, times of heat and cold.
Nouns wanting in the Singular
Some nouns are commonly or exclusively found in the Plural (plūrālia tantum). Such are:
1. Many names of towns: as, Athēnae (Athens), Thūriī, Philippī, Vēiī.
2. Names of festivals and games: as, Olympia, the Olympic Games; Bacchānālia, feast of Bacchus; Quīnquātrūs, festival of Minerva; lūdī Rōmānī, the Roman Games.
3. Names of classes: as, optimātēs, the upper classes; māiōrēs, ancestors; līberī, children; penātēs, household gods; Quirītēs, citizens (of Rome).
4. Words plural by signification: as, arma, weapons; artūs, joints; dīvitiae, riches; scālae, stairs; valvae, folding-doors; forēs, double-doors; angustiae, a narrow pass (narrows); moenia, city walls.
NOTE 1. Some words, plural by signification in Latin, are translated by English nouns in the singular number: as, dēliciae, delight, darling; faucēs, throat; fidēs, lyre (also singular in poetry); īnsidiae, ambush; cervīcēs, neck; viscera, flesh.
NOTE 2. The poets often use the plural number for the singular, sometimes for metrical reasons, sometimes from a mere fashion: as, ōra (for ōs), the face; scēptra (for scēptrum), sceptre; silentia (for silentium), silence.
Some nouns of the above classes (§ 101. 1-4) have a corresponding singular, as noun or adjective, often in a special sense:
1. As noun, to denote a single object: as, Bacchānal, a spot sacred to Bacchus; optimās, an aristocrat.
2. As adjective: as, Catō Māior, Cato the Elder.
3. In a sense rare, or found only in early Latin: as, scāla, a ladder; valva, a door; artus, a joint.
Nouns Defective in Certain Cases
Many nouns are defective in case-forms:
Indeclinable nouns, used only as nominative and accusative singular: fās, nefās, īnstar, nihil, opus (need), secus.
NOTE 1. The indeclinable adjective necesse is used as a nominative or accusative.
NOTE 2. The genitive nihilī and the ablative nihilō (from nihilum, nothing) occur.
Nouns found in one case only (monoptotes):
1. In the nominative singular: glōs (F.).
2. In the genitive singular: dicis, naucī (N.).
3. In the dative singular: dīvīsuī (M.) (cf. § 94. c).
4. In the accusative singular: amussim (M.); vēnum (dative vēnō in Tacitus).
5. In the ablative singular: pondō (N.); māne (N.); astū (M.), by craft; iussū, iniussū, nātū, and many other verbal nouns in -ūs (M.) (§ 94. c).
NOTE. Māne is also used as an indeclinable accusative, and an old form mānī is used as ablative. Pondō with a numeral is often apparently equivalent to pounds. A nominative singular astus and a plural astūs occur rarely in later writers.
6. In the accusative plural: īnfitiās.
Nouns found in two cases only (diptotes):
1. In the nominative and ablative singular: fors, forte (F.).
2. In the genitive and ablative singular: spontis (rare), sponte (F.).
3. In the accusative singular and plural: dicam, dicās (F.).
4. In the accusative and ablative plural: forās, forīs (F.) (cf. fors), used as adverbs.
Nouns found in three cases only (triptotes):
1. In the nominative, accusative, and ablative singular: impetus, -um, -ū (M.); luēs, -em, -ē (F.).
2. In the nominative, accusative, and dative or ablative plural: grātēs, -ibus (F.).
3. In the nominative, genitive, and dative or ablative plural: iūgera, -um, -ibus (N.); but iūgerum, etc., in the singular (cf. § 105. b).
Nouns found in four cases only (tetraptotes):
In the genitive, dative, accusative, ablative singular: diciōnis, -ī, -em, -e (F.).
Nouns declined regularly in the plural, but defective in the singular:
1. Nouns found in the singular, in genitive, dative, accusative, ablative: frūgis, -ī, -em, -e (F.); opis, -ī (once only), -em, -e (F.; nominative Ops as a divinity).
2. Nouns found in the dative, accusative, ablative: precī, -em, -e (F.).
3. Nouns found in the accusative and ablative: cassem, -e (F.); sordem, -e (F.).
4. Nouns found in the ablative only: ambāge (F.); fauce (F.); obice (C.).
Nouns regular in the singular, defective in the plural:
1. The following neuters have in the plural the nominative and accusative only: fel (fella), far (farra), hordeum (hordea), iūs, broth (iūra), mel (mella), murmur (murmura), pūs (pūra), rūs (rūra), tūs or thūs (tūra).
NOTE. The neuter iūs, right, has only iūra in classical writers, but a very rare genitive plural iūrum occurs in old Latin.
2. calx, cor, cōs, crux, fax, faex, lanx, lūx, nex, ōs (ōris), os (ossis), pāx, pix, rōs, sāl, sōl, vas (vadis), want the genitive plural.
3. Most nouns of the fifth declension want the whole or part of the plural (see § 98. a).
Nouns defective in both singular and plural:
1. Noun found in the genitive, accusative, ablative singular; nominative, accusative, dative, ablative plural: vicis, -em, -e; -ēs, -ibus.
2. Noun found in the genitive, dative, accusative, and ablative singular; genitive plural wanting: dapis, -ī, -em, -e; -ēs, -ibus.
Czech Christmas Songs

Early Medieval Bohemia, a historically defined space that today falls within the Czech Republic, gave birth to many carols, the earliest of which are found in manuscripts dating back to the 11th century. These chants, however, were not Christmas carols as we know them today but rather Christian songs meant to be sung in churches and monasteries to celebrate the birth of Christ. Though cloistered at first, these songs spread very quickly and were received with open arms. Many of them became part of the traditional folklore and were sung during caroling, which took place on various occasions throughout December and January (from St. Nicholas's day on December 6th to Candlemas on February 2nd). Although the ritual of going from house to house with a song was an important part of Christian folklore, the Czech name for 'caroling' and 'carol' - koleda - links its origins back to the pre-Christian festivities that took place during the calendae, the first days of the new year. While caroling was still a popular pastime some decades ago, today the tradition has usually dwindled down to the singing of carols in the family circle.

It did not take long for proper carols to start being composed, and one of the oldest and most popular ones is first mentioned in the 14th century. It is called Narodil se Kristus Pán (Christ Our Lord Has Been Born), has a long list of strophes that no one ever seems to remember, and is easily one of the most played and well-known carols in the country. Interestingly, it is also the only Christmas song that people tend to stand up for, as a mark of respect, even though most Czechs today proclaim themselves non-religious. Most of the carols sung today come from a later era, and often have the name of a 17th-19th century composer attached to them.
They include a wide variety of compositions, from Christmas oratorios and masses, of which the most famous is the Bohemian Christmas Mass by Jakub Jan Ryba, to shorter and very melodious songs which quickly became common. An example would be a lullaby that Mary sings to Baby Jesus, called Chtíc, aby spal (Wanting Him to Sleep), or a long, mournful song known as Byla cesta, byla ušlapaná (The Road Was Travelled), about a conversation between Mary and Elizabeth, mother of John the Baptist.

Modern Christmas songs have taken a wide step away from the traditional Christian take on the subject and tend to have either an ironic or a more melancholy approach to the holiday than the joyful and straightforward carols of the older days. While many of the modern songs are little more than variations on popular world music with Czech lyrics, such as Rolničky (Jingle Bells), original songs have been composed as well. Starting in the 70s with the work of a band called Golden Kids, local musicians - at least the ones that managed to perform under the Communist regime - started coming up with modern songs on the theme of Christmas. To this day, Karel Gott's Vánoce ve Zlaté Praze (Christmas in a Prague of Gold) remains a well-known collection, Jiří Suchý's Purpura has managed to reach the status of a traditional carol, and the iconic single Vánoce, Vánoce (Christmas, Christmas) by the duo Josef Vomáčka and Zdeněk Borovec is played over and over even today. The mass popularity of this particular song probably stems from the fact that it takes an ironic view of traditional celebrations and recounts a series of unfortunate events that have befallen an unlucky family determined to celebrate their Christmas in a proper, traditional way.

As a fun trivia fact, the popular English carol Good King Wenceslas, usually sung on the feast of Stephen (December 26th), tells the tale of a notable personage from Czech history, a Bohemian prince called Wenceslas, one of the first Christians in the land.
He was murdered by his brother Boleslaus and later canonized and proclaimed a patron of the country; his story is thought to have been introduced to the British by the Bohemian princess Anne, who was married to Richard II.
Efforts to incorporate Bowmanstown as a borough began as early as 1892. The village contained about 300 inhabitants in 1896, but the nearby New Jersey Zinc Company soon added to its growth. Bowmanstown was incorporated as a borough on November 29, 1913 for the purpose of providing general local government services to residents of the community. Upon incorporation, the borough's boundaries encompassed lands measuring 0.75 square mile. Its assessed valuation in 1918 was $279,000.00. The population of 834 in 1920 remained relatively constant for decades. The Bowmanstown Borough Municipal Building (Borough Hall) is a converted school building that was constructed in 1903 to serve the youths of the community. In 1958, the Palmerton School District was established and combined several local schools to create a regional school, making the Bowmanstown campus obsolete. In 1964, the borough acquired the old brick school building and has been using it as offices ever since, keeping the building in its original condition. The Bowmanstown Borough Authority was incorporated on August 24, 1997 for the purpose of owning and operating the Bowmanstown Public Water System. On February 11, 2002 the Authority began construction of its water system improvement project, which included a new chlorine building, the looping of numerous water mains, the installation of new services, the erection of a new 250,000-gallon standpipe, and a new liner for the reservoir. In 2009, the Authority replaced the two reservoir roofs with metal roofs. Water system improvements are expected to continue in the years ahead.
Public Papers - 1991 White House Fact Sheet on The Strategic Arms Reduction Treaty (START) Today, the United States and the Soviet Union signed the Strategic Arms Reduction Treaty. This treaty marks the first agreement between the two countries in which the number of deployed strategic nuclear weapons will actually be reduced. Reductions will take place over a period of 7 years, and will result in parity between the strategic nuclear forces of the two sides at levels approximately 30 percent below currently deployed forces. Deeper cuts are required in the most dangerous and destabilizing systems. START provisions are designed to strengthen strategic stability at lower levels and to encourage the restructuring of strategic forces in ways that make them more stable and less threatening. The treaty includes a wide variety of very demanding verification measures designed to ensure compliance and build confidence. The treaty sets equal ceilings on the number of strategic nuclear forces that can be deployed by either side. In addition, the treaty establishes an equal ceiling on ballistic missile throw-weight (a measure of overall capability for ballistic missiles). Each side is limited to no more than: -- 1600 strategic nuclear delivery vehicles (deployed intercontinental ballistic missiles [ICBM's], submarine launched ballistic missiles [SLBM's], and heavy bombers), a limit that is 36 percent below the Soviet level declared in September 1990 and 29 percent below the U.S. level. -- 6000 total accountable warheads, about 41 percent below the current Soviet level and 43 percent below the current U.S. level. -- 4900 accountable warheads deployed on ICBM's or SLBM's, about 48 percent below the current Soviet level and 40 percent below the current U.S. level. -- 1540 accountable warheads deployed on 154 heavy ICBM's, a 50-percent reduction in current Soviet forces. The U.S. has no heavy ICBM's. -- 1100 accountable warheads deployed on mobile ICBM's. 
-- Aggregate throw-weight of deployed ICBM's and SLBM's equal to about 54 percent of the current Soviet aggregate throw-weight. Ballistic Missile Warhead Accountability The treaty uses detailed counting rules to ensure the accurate accounting of the number of warheads attributed to each type of ballistic missile. -- Each deployed ballistic missile warhead counts as 1 under the 4900 ceiling and 1 under the 6000 overall warhead ceiling. -- Each side is allowed 10 on-site inspections each year to verify that deployed ballistic missiles contain no more warheads than the number that is attributed to them under the treaty. Downloading Ballistic Missile Warheads The treaty also allows for a reduction in the number of warheads on certain ballistic missiles, which will help the sides transition their existing forces to the new regime. Such downloading is permitted in a carefully structured and limited fashion. -- The U.S. may download its three-warhead Minuteman III ICBM by either one or two warheads. The Soviet Union has already downloaded its seven-warhead SS-N-18 SLBM by four warheads. -- In addition, each side may download up to 500 warheads on two other existing types of ballistic missiles, as long as the total number of warheads removed from downloaded missiles does not exceed 1250 at any one time. The treaty places constraints on the characteristics of new types of ballistic missiles to ensure the accuracy of counting rules and prevent undercounting of missile warheads. -- The number of warheads attributed to a new type of ballistic missile must be no less than the number determined by dividing 40 percent of the missile's total throw-weight by the weight of the lightest RV tested on that missile. -- The throw-weight attributed to a new type must be no less than the missile's throw-weight capability at specified reference ranges (11,000 km for ICBM's and 9,500 km for SLBM's). START places significant restrictions on the Soviet SS-18 heavy ICBM.
-- A 50-percent reduction in the number of Soviet SS-18 ICBM's; a total reduction of 154 of these Soviet missiles. -- New types of heavy ICBM's are banned. -- Downloading of heavy ICBM's is banned. -- Heavy SLBM's and heavy mobile ICBM's are banned. -- Heavy ICBM's will be reduced on a more stringent schedule than other strategic arms. Because mobile missiles are more difficult to verify than other types of ballistic missiles, START incorporates a number of special restrictions and notifications with regard to these missiles. These measures will significantly improve our confidence that START will be effectively verifiable. -- Non-deployed mobile missiles and non-deployed mobile launchers are numerically and geographically limited so as to limit the possibility for reload and refire. -- The verification regime includes continuous monitoring of mobile ICBM production, restrictions on movements, on-site inspections, and cooperative measures to improve the effectiveness of national technical means of intelligence collection. Because heavy bombers are stabilizing strategic systems (e.g., they are less capable of a short-warning attack than ballistic missiles), START counting rules for weapons on bombers are different from those for ballistic missile warheads. -- Each heavy bomber counts as one strategic nuclear delivery vehicle. -- Each heavy bomber equipped to carry only short-range missiles or gravity bombs is counted as one warhead under the 6000 limit. -- Each U.S. heavy bomber equipped to carry long-range nuclear ALCM's (up to a maximum of 150 bombers) is counted as 10 warheads even though it may be equipped to carry up to 20 ALCM's. -- A similar discount applies to Soviet heavy bombers equipped to carry long-range nuclear ALCM's. Each such Soviet heavy bomber (up to a maximum of 180) is counted as 8 warheads even though it may be equipped to carry up to 16 ALCM's. -- Any heavy bomber equipped for long-range nuclear ALCM's deployed in excess of 150 for the U.S.
or 180 for the Soviet Union will be accountable by the number of ALCM's the heavy bomber is actually equipped to carry. Building on recent arms control agreements, START includes extensive and unprecedented verification provisions. This comprehensive verification regime greatly reduces the likelihood that violations would go undetected. -- START bans the encryption and encapsulation of telemetric information and other forms of information denial on flight tests of ballistic missiles. However, strictly limited exemptions to this ban are granted, sufficient to protect the flight-testing of sensitive research projects. -- START allows 12 different types of on-site inspections and requires roughly 60 different types of notifications covering production, testing, movement, deployment, and destruction of strategic offensive arms. START will have a duration of 15 years, unless it is superseded by a subsequent agreement. If the sides agree, the treaty may be extended for successive 5-year periods beyond the 15 years. Noncircumvention and Third Countries START prohibits the transfer of strategic offensive arms to third countries, except that the treaty will not interfere with existing patterns of cooperation. In addition, the treaty prohibits the permanent basing of strategic offensive arms outside the national territory of each side. Air-Launched Cruise Missiles (ALCM's) START does not directly count or limit ALCM's. ALCM's are limited indirectly through their association with heavy bombers. -- Only nuclear-armed ALCM's with a range in excess of 600 km are covered by START. -- Long-range, conventionally armed ALCM's that are distinguishable from nuclear-armed ALCM's are not affected. -- Long-range nuclear-armed ALCM's may not be located at air bases for heavy bombers not accountable as being equipped for such ALCM's. -- Multiple-warhead long-range nuclear ALCM's are banned. Sea-Launched Cruise Missiles (SLCM's) SLCM's are not constrained by the treaty.
However, each side has made a politically binding declaration as to its plans for the deployment of nuclear-armed SLCM's. Conventionally-armed SLCM's are not subject to such a declaration. -- Each side will make an annual declaration of the maximum number of nuclear-armed SLCM's with a range greater than 600 km that it plans to deploy for each of the following 5 years. -- This number will not be greater than 880 long-range nuclear-armed SLCM's. -- In addition, as a confidence building measure, nuclear-armed SLCM's with a range of 300 - 600 km will be the subject of a confidential annual data exchange. The Soviet Backfire bomber is not constrained by the treaty. However, the Soviet side has made a politically binding declaration that it will not deploy more than 800 air force and 200 naval Backfire bombers, and that these bombers will not be given intercontinental capability. The START agreement consists of the treaty document itself and a number of associated documents. Together they total more than 700 pages. The treaty was signed in a public ceremony by Presidents Bush and Gorbachev in St. Vladimir's Hall in the Kremlin. The associated documents were signed in a private ceremony at Novo Ogaryevo, President Gorbachev's weekend dacha. Seven of these documents were signed by Presidents Bush and Gorbachev. Three associated agreements were signed by Secretary Baker and Foreign Minister Bessmertnykh. In addition, the START negotiators, Ambassadors Brooks and Nazarkin, exchanged seven letters related to START in a separate event at the Soviet Ministry of Foreign Affairs in Moscow. Magnitude of START -- Accountable Reductions Following is the aggregate data from the Memorandum of Understanding, based upon agreed counting rules in START. (Because of those counting rules, the number of heavy bomber weapons actually deployed may be higher than the number shown in the aggregate.) 
This data is effective as of September 1990 and will be updated at entry into force:

                                      U.S. ........ U.S.S.R.
Delivery Vehicles ................... 2,246 ....... 2,500
Warheads ............................ 10,563 ...... 10,271
Ballistic Missile Warheads .......... 8,210 ....... 9,416
Heavy ICBM's/Warheads ............... None ........ 308/3080
Throw-weight (metric tons) .......... 2,361.3 ..... 6,626.3

As a result of the treaty, the above values will be reduced by the following percentages:

                                      U.S. ........ U.S.S.R.
Delivery Vehicles ................... 29 percent .. 36 percent
Warheads ............................ 43 percent .. 41 percent
Ballistic Missile Warheads .......... 40 percent .. 48 percent
Heavy ICBM's/Warheads ............... None ........ 50 percent
Throw-weight (metric tons) .......... None ........ 46 percent
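The reduction percentages follow directly from the treaty ceilings (1600 delivery vehicles, 6000 total accountable warheads, 4900 ballistic missile warheads) applied against the September 1990 declared data. A short sketch recomputing them (the dictionary layout and category names are mine, not the treaty's):

```python
# Cross-check of the reduction percentages quoted in the fact sheet,
# computed from the treaty ceilings and the September 1990 declared data.
ceilings = {"delivery vehicles": 1600, "total warheads": 6000,
            "ballistic missile warheads": 4900}
declared = {
    "U.S.":     {"delivery vehicles": 2246, "total warheads": 10563,
                 "ballistic missile warheads": 8210},
    "U.S.S.R.": {"delivery vehicles": 2500, "total warheads": 10271,
                 "ballistic missile warheads": 9416},
}
for side, counts in declared.items():
    for category, ceiling in ceilings.items():
        cut = 100 * (1 - ceiling / counts[category])
        print(f"{side} {category}: {cut:.1f}% below the declared level")
```

The computed values (for example, 28.8 percent for U.S. delivery vehicles, 41.6 percent for Soviet total warheads) match the rounded figures quoted throughout the fact sheet.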
Common Core Catholic Identity Initiative A national working group has begun the Common Core Catholic Identity Initiative (CCCII) to develop and disseminate resources and guidelines to assist Catholic elementary and secondary schools in integrating elements of Catholic identity (Catholic values, Scripture, Church social teachings, encyclicals, etc.) into curriculum and instruction based on the Common Core State Standards. The initial phase of CCCII focuses on K-8 English/Language Arts/Literacy. Resources for other subjects and for 9-12 curriculum will be developed in later phases. Forty-six states have agreed to adopt the Common Core State Standards, a set of high-quality K-12 learning standards that includes rigorous content and application of knowledge using higher-order thinking skills, leading students to college and career readiness. Currently, Catholic schools are assessing what the implications of the standards and accompanying assessments may be for them. While Catholic schools have their own local or diocesan standards, their ability to continue to provide high-quality education for their students is compelling them to consider adoption of the common core standards. Catholic schools will be impacted as curriculum resources and professional development opportunities become aligned with Common Core State Standards by producers of instructional materials, college teacher preparation programs, or regulations for participation in the federal programs that currently benefit their students and teachers. Within this environment, maintaining the uniqueness and integrity of the Catholic school will require integrating the demands of their mission and the academic expectations of their constituents and the wider education community.
To assist Catholic schools in integrating Catholic identity into the curriculum, the Common Core Catholic Identity Initiative (CCCII) has been launched as a collaborative project involving Catholic universities, corporations and sponsors invested in Catholic education, and the National Catholic Educational Association (NCEA). The Common Core Catholic Identity Initiative has two goals:

- to empower Catholic schools and dioceses to design and direct the implementation of the Common Core standards within the culture and context of a Catholic school curriculum
- to infuse the Common Core standards with the faith/principles/values/social justice themes inherent in the mission and Catholic identity of the school.

The CCCII project aims to accomplish its goals by creating a process and a product:

Process:
Phase 1: Gather approximately 35 practitioners and curriculum and catechetics experts to pilot a CCCII ELA Unit development process to be shared with the larger Catholic educational community. (June 2012)
Phase 2: Revise and refine the unit development process so that it can be replicated in dioceses around the country.
Phase 3: Invite participation in development of additional CCCII ELA Units by Catholic educators around the country.

Product:
Phase 1: Utilize the expertise and strength of experienced and innovative teachers to develop complete units/exemplars that join Catholic identity with the Common Core curriculum standards. Utilize the expertise of CCCII leaders to develop supporting resources and guidelines. (June 2012)
Phase 2: Post exemplar units, guidelines, and resources developed for the June 2012 launch for open access by Catholic educators on the Catholic School Standards Project Website (www.catholicschoolsstandards.org). (July 2012)
Phase 3: Expand exemplar units and Catholic Identity resources available for use by local Catholic schools. Tailor the CCCII Unit development process for Catholic secondary schools.
Expand CCCII to include additional subject areas.
Replication and control of circular bacterial plasmids. An essential feature of bacterial plasmids is their ability to replicate as autonomous genetic elements in a controlled way within the host. Therefore, they can be used to explore the mechanisms involved in DNA replication and to analyze the different strategies that couple DNA replication to other critical events in the cell cycle. In this review, we focus on replication and its control in circular plasmids. Plasmid replication can be conveniently divided into three stages: initiation, elongation, and termination. The inability of DNA polymerases to initiate de novo replication makes necessary the independent generation of a primer. This is solved, in circular plasmids, by two main strategies: (i) opening of the strands followed by RNA priming (theta and strand displacement replication) or (ii) cleavage of one of the DNA strands to generate a 3'-OH end (rolling-circle replication). Initiation is catalyzed most frequently by one or a few plasmid-encoded initiation proteins that recognize plasmid-specific DNA sequences and determine the point from which replication starts (the origin of replication). In some cases, these proteins also participate directly in the generation of the primer. These initiators can also play the role of pilot proteins that guide the assembly of the host replisome at the plasmid origin. Elongation of plasmid replication is carried out basically by DNA polymerase III holoenzyme (and, in some cases, by DNA polymerase I at an early stage), with the participation of other host proteins that form the replisome. Termination of replication has specific requirements and implications for reinitiation, studies of which have started. The initiation stage plays an additional role: it is the stage at which mechanisms controlling replication operate. 
The objective of this control is to maintain a fixed concentration of plasmid molecules in a growing bacterial population (duplication of the plasmid pool paced with duplication of the bacterial population). The molecules involved directly in this control can be (i) RNA (antisense RNA), (ii) DNA sequences (iterons), or (iii) antisense RNA and proteins acting in concert. The control elements maintain an average frequency of one plasmid replication per plasmid copy per cell cycle and can "sense" and correct deviations from this average. Most of the current knowledge on plasmid replication and its control is based on the results of analyses performed with pure cultures under steady-state growth conditions. This knowledge sets important parameters needed to understand the maintenance of these genetic elements in mixed populations and under environmental conditions.
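The control principle described above, an average of one replication per plasmid copy per cell cycle with deviations sensed and corrected, can be illustrated with a deliberately simplified toy model. The inverse dependence of initiation rate on copy number and the target copy number used here are illustrative assumptions standing in for the real inhibitors (antisense RNA, iterons), not measured parameters of any actual plasmid:

```python
def next_generation(n, target=20):
    """Mean-field toy model of plasmid copy-number control.

    The concentration of a plasmid-encoded inhibitor tracks the copy
    number n, so the average number of initiations per copy per cell
    cycle behaves like target / n (negative feedback); the replicated
    pool is then halved at cell division.
    """
    replications_per_copy = target / n          # feedback: fewer initiations when n is high
    after_replication = n * (1 + replications_per_copy)
    return after_replication / 2                # partition to one daughter cell

# Populations starting below or above the target both converge to it:
n_low, n_high = 5.0, 60.0
for _ in range(10):
    n_low, n_high = next_generation(n_low), next_generation(n_high)
print(round(n_low, 2), round(n_high, 2))
```

Each generation the deviation from the target halves (n' = (n + target) / 2), which is the "sense and correct" behavior the review attributes to the control elements, stripped of all molecular detail.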
Young Kids May Be Able to Unbuckle Car Seats Survey of Parents Finds Some Kids May Be Unbuckling While Car Is in Motion May 2, 2011 -- Children as young as 1 year old can unbuckle themselves from car safety seats, a new survey of parents finds. "We found that children can unbuckle from their child car safety seats by their fourth birthday, and there is an alarming 43% who do so when the car is in motion," says researcher Lilia Reyes, MD, a clinical fellow in pediatric emergency medicine at the Yale School of Medicine in New Haven. "It was reported as early as 12 months." The findings are being presented at the Pediatric Academic Societies annual meeting in Denver. Child Car Seats: How Secure? While working in the pediatric emergency room at Yale, Reyes encountered two mothers who had been in minor car accidents. Both told her the accident happened when they turned around after discovering their kids had unbuckled themselves. To determine how frequently this happens, she and her colleagues at Yale surveyed 378 parents of young children. Among the other findings: - 51%, or about 191 families, reported that at least one of their children had unbuckled their car seat. Of these children, 75% were age 3 or younger. The youngest was 12 months old. - Boys unbuckled more than girls; 59% of the kids who unbuckled were boys. Parents were not asked if they were sure they had buckled correctly, Reyes tells WebMD, so there is a possibility the children weren't buckled in correctly. But parents do typically hear a click, like a seat belt's, when the buckle latches, she says. The problem, she says, is that while children may be physically able to unbuckle the seat, they are only beginning, at around age 3, to develop the reasoning skills to appreciate the consequences of unbuckling. Parents used seats of various types, including five-point harnesses, convertible seats, and booster seats, depending on their child's age and weight. Are Car Seats Really Buckled?
"This study raises questions about how the child restraint was used," says Lorrie Walker, training manager and technical advisor for Safe Kids USA, an advocacy group. "Federal Motor Vehicle Safety Standard 213 requires the buckle to release using between 9 and 14 pounds of pressure," she says. "It is often challenging for an adult to unbuckle the harness." She wonders if the buckle was not adequately locked in some cases. "A buckle may give the appearance of being buckled when it has not completely latched," she tells WebMD. Among the mistakes many parents make when placing a child in a car seat, she says, are attaching the harness straps too loosely or placing the straps in the wrong harness slots. When these mistakes occur, she says, it is easy for a child to climb out. The finding that a child as young as 1 could unbuckle the seat came as a surprise to Jennifer Stockburger, program manager of vehicle and child safety for Consumer Reports. She reviewed the findings for WebMD but was not involved in the study.
LeitnerBox is a technique for learning more effectively: with a Leitner box, items are memorized in five steps over roughly 30 days. This application was created for learning English words, or the words of any other language. I built the project with .NET Framework 3.5 SP1, so you have to install it to use the project.

Leitner Box's Algorithm

According to Leitner's algorithm, we study our questions every day like this:

1: Answer all questions in Box 5 -> Part 1. If your answer was correct, the question goes to the database; otherwise it goes to Box 1.
2: Shift all parts of Box 5 to the left (in the application, use the Shift Up button).
3: Answer all questions in Box 4 -> Part 1.
4: Shift all parts of Box 4 to the left.
5: And so on ...
10: Add new questions to Box 1.

I've implemented this algorithm in this project.

Using the Application

At first you have to create a user, so you will see this form:

Notice: You can use A-Z and a-z for the name.

Press the button to create a new user; the new user is saved in a folder beside the main EXE file. If there are multiple users, you will see this form:

Appending a Word

You have to select a destination box or part and then add a question.

Notice: You can't add two words with the same question. Whenever you type a word into the 'Add Questions' textboxes, the application searches for it among the existing words (in all boxes and the database). If matches are found, it shows a list of them below the textbox, and you can choose one by pressing Enter. This is a good way to avoid duplication.

- 2nd March, 2009: First post
- 14th March, 2009: Updated source and demo files - fixed some bugs
- 25th March, 2009: Updated source and demo files
- 2nd April, 2009: Updated source and demo files - fixed some bugs
- 11th April, 2009: Auto Complete added
- 22nd March, 2010: Updated source and demo files
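The daily routine described in the algorithm section above can be sketched in a few lines. This is a minimal Python illustration, not the article's C# source: a correct answer promotes a question one box, a wrong answer sends it back to Box 1, and questions answered correctly in Box 5 are retired to the database. The per-box "parts" (the 30-day shifting schedule) are omitted for brevity:

```python
def daily_session(boxes, database, answered_correctly):
    """boxes: dict mapping box number 1-5 to a list of questions;
    answered_correctly(question) -> bool simulates the user's answer."""
    demoted = []                               # wrong answers from any box
    for level in range(5, 0, -1):              # Box 5 is reviewed first
        for question in boxes[level]:
            if not answered_correctly(question):
                demoted.append(question)       # wrong: back to Box 1
            elif level == 5:
                database.append(question)      # learned: out of rotation
            else:
                boxes[level + 1].append(question)  # promoted for tomorrow
        boxes[level] = []
    boxes[1] = demoted                         # restudied from the start
```

With always-correct answers, a new question entered in Box 1 passes through all five boxes in five sessions and ends up in the database, which is exactly the 5-step progression the article describes.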
Tornadoes are the most intense storms on the planet, and they're never discussed without at least some mention of the term wind shear. Many of us sitting at home, though, have no idea what wind shear is, or if we do, how it affects tornado production.

What is Wind Shear

Wind shear, although it might sound complex, is a simple concept: it is merely the change in wind with height, in terms of both direction and speed. We all understand that the wind is generally stronger in the atmosphere over our heads than it is here on the ground, and if we think of the atmosphere in its three dimensions, it should not be surprising that the wind above us might also be blowing from a different direction than the wind at the ground. When the wind speed and direction vary with height like this, wind shear is occurring.

Wind Shear and Supercell Thunderstorms

This wind shear is an important part of the process in the development of a supercell thunderstorm, from which the vast majority of strong tornadoes form. All thunderstorms are produced by a powerful updraft, a surge of air that rises from the ground into the upper levels of the atmosphere. When this updraft forms in an area where wind shear is present, the updraft is influenced by the different speed and direction of the wind above, pushing the column of air in the updraft into a more vertical alignment.

Rain's Influence on Tornado Production

Needless to say, thunderstorms typically produce very heavy rain, and rain-cooled air is much heavier than the warm air of the updraft, so the rain-cooled air produces a compensating downdraft (what comes up must come down). This downdraft pushes down the part of the rotating air that was forced in its direction by the stronger wind aloft, and the result is a horizontal column of rotating air.

That's Not a Tornado!
I know what you're thinking: you've seen enough TLC or Discovery Channel shows to know that a horizontal column of air is NOT a tornado; you need a vertical column of air.

This Can Be a Tornado

You're right, but remember that the updraft driving the thunderstorm is still working, and it is able to pull the horizontal, spinning column of air up into the thunderstorm, resulting in a vertical column of spinning air. (NOAA image showing vertical column of air in a supercell thunderstorm) The result is a rotating thunderstorm capable of producing a tornado, and it would not be possible without wind shear. (NOAA image showing tornado formation in supercell thunderstorm)
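The definition at the start of the article, a change in wind speed and direction with height, can be made concrete by treating the winds at two levels as vectors and taking their difference. This is a generic sketch; the unit choice (knots) and the sample values are illustrative, not taken from the article:

```python
import math

def wind_to_uv(speed, direction_deg):
    """Convert a meteorological wind (the direction it blows FROM,
    in degrees) to u (eastward) and v (northward) components."""
    rad = math.radians(direction_deg)
    return -speed * math.sin(rad), -speed * math.cos(rad)

def bulk_shear(speed_low, dir_low, speed_high, dir_high):
    """Magnitude of the vector difference between the winds at two
    levels: the 'change in wind with height' the article describes."""
    u1, v1 = wind_to_uv(speed_low, dir_low)
    u2, v2 = wind_to_uv(speed_high, dir_high)
    return math.hypot(u2 - u1, v2 - v1)

# A southerly 10 kt surface wind beneath a westerly 40 kt wind aloft:
# both speed AND direction change with height, giving strong shear.
print(round(bulk_shear(10, 180, 40, 270), 1))  # prints 41.2
```

Note that the shear is a vector difference: two levels with the same speed but different directions still produce shear, which is exactly the directional-change case the article highlights.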
Bush's Dangerous Liaisons

MONTREAL -- MUCH as George W. Bush's presidency was ineluctably shaped by Sept. 11, 2001, so the outbreak of the French Revolution was symbolized by the events of one fateful day, July 14, 1789. And though 18th-century France may seem impossibly distant to contemporary Americans, future historians examining Mr. Bush's presidency within the longer sweep of political and intellectual history may find the French Revolution useful in understanding his curious brand of 21st-century conservatism. Soon after the storming of the Bastille, pro-Revolutionary elements came together to form an association that would become known as the Jacobin Club, an umbrella group of politicians, journalists and citizens dedicated to advancing the principles of the Revolution. The Jacobins shared a defining ideological feature. They divided the world between pro- and anti-Revolutionaries - the defenders of liberty versus its enemies. The French Revolution, as they understood it, was the great event that would determine whether liberty was to prevail on the planet or whether the world would fall back into tyranny and despotism. The stakes could not be higher, and on these matters there could be no nuance or hesitation. One was either for the Revolution or for tyranny. By 1792, France was confronting the hostility of neighboring countries, debating how to react. The Jacobins were divided. On one side stood the journalist and political leader Jacques-Pierre Brissot de Warville, who argued for war. Brissot understood the war as preventive - "une guerre offensive," he called it - to defeat the despotic powers of Europe before they could organize their counter-Revolutionary strike. It would not be a war of conquest, as Brissot saw it, but a war "between liberty and tyranny." Pro-war Jacobins believed theirs was a mission not for a single nation or even for a single continent.
It was, in Brissot's words, "a crusade for universal liberty." Brissot's opponents were skeptical. "No one likes armed missionaries," declared Robespierre, with words as apt then as they remain today. Not long after the invasion of Austria, the military tide turned quickly against France. The United States, France's "sister republic," refused to enter the war on France's side. It was an infuriating show of ingratitude, as the French saw it, coming from a fledgling nation they had magnanimously saved from foreign occupation in a previous war. Confronted by a monarchical Europe united in opposition to revolutionary France - old Europe, they might have called it - the Jacobins rooted out domestic political dissent. It was the beginning of the period that would become infamous as the Terror. Among the Jacobins' greatest triumphs was their ability to appropriate the rhetoric of patriotism - Le Patriote Français was the title of Brissot's newspaper - and to promote their political program through a tightly coordinated network of newspapers, political hacks, pamphleteers and political clubs. Even the Jacobins' dress distinguished "true patriots": those who wore badges of patriotism like the liberty cap on their heads, or the cocarde tricolore (a red, white and blue rosette) on their hats or even on their lapels. Insisting that their partisan views were identical to the national will, believing that only they could save France from apocalyptic destruction, Jacobins could not conceive of legitimate dissent. Political opponents were treasonous, stabbing France and the Revolution in the back. To defend the nation from its enemies, Jacobins expanded the government's police powers at the expense of civil liberties, endowing the state with the power to detain, interrogate and imprison suspects without due process. 
Policies like the mass warrantless searches undertaken in 1792 - "domiciliary visits," they were called - were justified, according to Georges Danton, the Jacobin leader, "when the homeland is in danger." Robespierre - now firmly committed to the most militant brand of Jacobinism - condemned the "treacherous insinuations" cast by those who questioned "the excessive severity of measures prescribed by the public interest." He warned his political opponents, "This severity is alarming only for the conspirators, only for the enemies of liberty." Such measures, then as now, were undertaken to protect the nation - indeed, to protect liberty itself. If the French Terror had a slogan, it was that attributed to the great orator Louis de Saint-Just: "No liberty for the enemies of liberty." Saint-Just's pithy phrase (like President Bush's variant, "We must not let foreign enemies use the forums of liberty to destroy liberty itself") could serve as the very antithesis of the Western liberal tradition. On this principle, the Terror demonized its political opponents, imprisoned suspected enemies without trial and eventually sent thousands to the guillotine. All of these actions emerged from the Jacobin worldview that the enemies of liberty deserved no rights. Though it has been a topic of much attention in recent years, the origin of the term "terrorist" has gone largely unnoticed by politicians and pundits alike. The word was an invention of the French Revolution, and it referred not to those who hate freedom, nor to non-state actors, nor of course to "Islamofascism." A terroriste was, in its original meaning, a Jacobin leader who ruled France during la Terreur. François Furstenberg, a professor of history at the University of Montreal, is the author of "In the Name of the Father: Washington's Legacy, Slavery and the Making of a Nation." Copyright 2007 The New York Times Company
Website Detail Page

written by David McIntyre

Spins is an interactive computer program that simulates Stern-Gerlach measurements on spin 1/2 and spin 1 particles. This software is used as part of the "Paradigms in Physics" curriculum. It can be used as an example to study general problems in quantum measurement and time dependence.

SPINS Java Homepage:

Is Required By: PH425: Spin and Quantum Measurement. The SPINS Java software is the foundation of the PH425 course. (relation by Bruce Mason)

Is Based On: A computer-simulated Stern–Gerlach laboratory. The SPINS Java software is based on an earlier application written for the Macintosh computer. (relation by Bruce Mason)

Is the Basis For: QM Spins Program. The SPINS program is the basis for the QM Spins program. (relation by Mario Belloni)

References: The Nobel Prize in Physics 1943 - Otto Stern. The SPINS Java software can be used to build virtual experiments based on the work initially done by Stern and Gerlach. (relation by Bruce Mason)

Covers the Same Topic As: Quantum Physics Online: Spin 1/2. "Spin 1/2" explores the dynamics of a single spin in static and time-dependent magnetic fields. (relation by Bruce Mason)
First - you might want to redefine your search. Are you looking for happiness or rather positive affect? Happiness is a fairly ambiguous term, and it's much more associated with positive psychology studies on well-being. If you are interested in a more global definition of happiness, check the work of Mihaly Csikszentmihalyi. On the other hand, there is a large number of studies on physiological measurements of positive affect.

One such physiological measurement is electromyography (EMG) - recording the electrical activity produced by skeletal muscles. EMG will detect very brief smiles or higher activity in cheek muscles (zygomaticus major), which are correlated with positive affect. There is a quite classic (but much-quoted) paper on that:

Cacioppo JT, Petty RE, Losch ME, Kim HS. (1986) Electromyographic Activity Over Facial Muscle Regions Can Differentiate the Valence and Intensity of Affective Reactions. J Pers Soc Psychol., 50(2):260-8.

Another simple physiological assessment is heart rate measured by the interbeat interval (IBI). For example, a study by Brosschot & Thayer (2003) shows that heart rate response is longer after negative emotions than after positive emotions.

Brosschot JF, Thayer JF. (2003) Heart rate response is longer after negative emotions than after positive emotions. Int J Psychophysiol.,

In fact, the full spectrum of somatic measurements has been used alongside heart rate, including pulse transmission time to the finger, skin conductance level or pupil dilation (Partala, 2003). All those are somewhat less reliable methods, and usually they detect arousal rather than indicate physiological differences between positive and negative affect.

Partala T.; Surakka V. (2003) Pupil size variation as an indication of affective processing. International Journal of Human-Computer Studies,

Finally, I would advise browsing literature on measurements of negative affect.
You are likely to find some interesting methods there, like in this paper on the psychophysiology of crying (Gross et al., 1994).

Gross JJ, Frederickson BL, Levenson RW. (1994) The psychophysiology of crying. Psychophysiology, 31(5):460-8.
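The interbeat interval mentioned above is straightforward arithmetic once you have a series of beat (R-peak) timestamps. A minimal sketch in Python, with invented timestamps purely for illustration:

```python
# Compute interbeat intervals (IBI) from beat timestamps (in seconds)
# and convert the mean IBI into heart rate in beats per minute.
def mean_ibi(beat_times):
    ibis = [b - a for a, b in zip(beat_times, beat_times[1:])]
    return sum(ibis) / len(ibis)

def heart_rate_bpm(beat_times):
    return 60.0 / mean_ibi(beat_times)

# Invented timestamps, not real data: roughly 0.8 s between beats.
beats = [0.0, 0.8, 1.62, 2.41, 3.2]
print(round(heart_rate_bpm(beats), 1))
```

In a real study the timestamps would come from an ECG R-peak detector, and artifact rejection over long recordings matters far more than this arithmetic.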
The plant collections of the Smithsonian Institution began with the acquisition of specimens collected by the United States Exploring Expedition (1838-1842). These formed the foundation of a national herbarium which today numbers 4.8 million historical plant records, placing it among the world's largest and most important. Nearly 800,000 specimen records (including over 90,000 type specimens with images) are currently available in this online catalog. Select a tab on this page to search by Keyword or Selected Fields. If you don't know what you want to see, you may want to look at the sample records in the Quick Browse section below. Searches are limited to 2000 records and the results are sorted by taxonomic group. If you need to retrieve a larger record set, contact the Department of Botany's Data Manager. See the Help tab to learn more about searching and then exploring your returned results (sorting, exporting, etc.).

Sample Records from the DC Flora Collection: 2205692, 2197595, 2191752, 2175968, 2213272, 2196389, 2200318, 2192830, 2219158, 2200909, 2208745, 2223985, 2175937, 2192264, 2220376

Sample Records from the Botanical Type Register: 2119407, 2149872, 2161549, 2790611, 2105614, 2099734, 2134596, 2116358, 2166713, 2151580, 2158541, 2143664, 2097212, 2076608, 2167306, 2121665, 2095940, 2075490

Sample Records from the Wilkes Expedition: 2524597, 2705372, 2705371, 2743367, 2699717, 2741233, 2741229, 2733613, 2741227, 2680776, 2741226, 2741217, 2741216, 2687168, 2702446, 2684992, 2680753, 2680752, 2741176, 2741175, 2693758, 2680751, 2678261

Enter your keywords separated by spaces and click Search. Records that match your search terms will be returned.
- Using parentheses to clarify the logic, you can create complex queries with OR and NOT (here capital letters are required, otherwise they will be treated as keyword terms).
- You can also use double-quotes to specify terms that should be treated as one.
- Lastly, you can include the terms image(s) or type(s) to find records that have images or that are type specimens. Note that searching for common (vernacular) names may not yield the expected results. Associating common names with specimen records is a work in progress. Keyword search example: marantaceae ("new guinea" OR australia) images Use the By Field search to find specimen data that match values in specific database fields. Enter a value or choose one from the dropdown lists. - Click the Search button to initiate a search. Clear resets all fields. - Some lists are linked, so for example, choosing a Country narrows the choices for Province/State/Territory, and District/County. Dropdown choices also narrow as you type, for example, typing zing in the Family field might narrow the choice to Zingiberaceae. - Note that the Province/State dropdown is populated only after you have chosen a Country. You can type a Province/State without selecting a Country. - Check Only Records with Images if you want to restrict the search to records with multimedia content. - You will receive a warning when you enter invalid information in the text fields. For example, Catalog Numbers are composed strictly of letters and numbers; other characters will raise a warning. The results of your searches can be displayed in Grid (a sortable, customizable table) or Gallery View (best for reviewing images). Use the Switch button to cycle between these views. - You can choose whether to display 5, 10, 20, 50, or 100 records at a time. In Sheet View: - Click on the scientific name to view the full record. - Click on the thumbnail to view larger resolutions of the image. Use Control+Click (Command+Click) to open a new browser tab. In Grid View: - You can choose the columns to display from any column's dropdown menu (mouse into a column header and click the dropdown icon). Under Columns, click the name to display or hide the field (you do not need to click the checkbox specifically). 
- You can drag a column header to change its order of appearance in the grid. - You can also drag the edge of a column to make it wider or narrower. - Click in the expansion () column to view the full record. In Gallery View: - Click the image to view the full record. See Exporting Results for information on downloading results to, for example, Excel or Google Earth. Open the full collection record by clicking the expansion button () in Grid View, on the scientific name in Sheet View, or anywhere within the image frame in Gallery View. Inverse expansion buttons () indicate records with multimedia (typically, images). - In the Record window, metadata for the multimedia content is available when you mouseover the thumbnail. - Clicking the thumbnail opens the content in your browser or other appropriate application. - Record windows may be resized or moved within the browser window. - You may have up to ten Record windows open at any one time. Sort results in Grid View by clicking the column header (or by choosing Sort from the column's dropdown menu). - Sort on multiple columns by consecutively sorting columns in reverse order. For example, to view results sorted by Country and Province/State, first sort by Province/State and then sort again by Country. - For any column you can choose to sort in Ascending or Descending order. Export all or selected results by clicking the Export Results as CSV button in the bottom toolbar in Grid, or Gallery View. - Select individual records for Export by checking the export selection box (along the left edge of the Grid View grid). - Clear all selections with the Clear Selections button in the bottom toolbar. - Results are exported as comma-separated-values, one record per line, which can be saved to disk or opened directly with applications such as Microsoft Excel. You can also export all or selected results to a KML file for viewing with Google Earth or other KML viewers, by clicking the Export as KML button. 
This button is grayed when all or selected results lack latitude/longitude values. To create a link to specific records at NMNH provide the appropriate unit and querystring to: where UNIT is: - anth, birds, botany, ento, fishes, herps, iz, mammals, ms, or paleo and QUERYSTRING is (use a plus-sign to separate words): - One or more CATALOG NUMBERS, e.g. - One or more BARCODES, e.g. - The NAME of a TYPE specimen, e.g.: - The NAME of a specimen or object, e.g.: - The NAME (qn) and/or TYPE STATUS (qt) of a specimen, and/or its COLLECTOR (co), and/or the COLLECTION (cn) it is part of, e.g.: (Holotypes whose name includes Torre and Bartsch collected by Webb and part of the Henderson Collection) - To open the Collections Search to a specific search tab, e.g. Tabs are numbered left to right, beginning with zero. - iz/?ti=1 (Invertebrate Zoology Keywords Search) - mammals/?ti=3 (Mammals Whale Collection Search) There are ways to speed up your queries (or slow them down!) and to find specific information. - The more specific you make your queries the faster they will execute. Using more, rather than fewer, terms will very likely speed up your search. - These following special characters modify the interpretation of search terms (use with as many other terms as possible to avoid slowing your search): - * matches any number of characters, e.g. *pseudo* - ? matches a single character, e.g. young?lus frank? - ! negates the presence of a term, e.g. !new - ~ matches all terms with the given stem, e.g. ~spear for spear, spears, spearing, etc. - = match is case-sensitive, e.g. =Paris - Query results are typically limited to 5000 records. Avoid general queries, when you can, that are likely to bring back very large numbers of records, e.g. searching for poaceae. - Long running queries are automatically terminated, with no results returned. Please use the Feedback page to report back any problems you find with the data, or with using these search pages.
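The wildcard characters above behave much like shell-style globbing. As a rough illustration only (this is not the catalog server's actual matching code), the `*` and `?` wildcards can be translated into a regular expression in Python:

```python
import re

# Rough illustration of the catalog's wildcard syntax: '*' matches any
# run of characters and '?' matches exactly one character. Matching is
# case-insensitive here, mirroring the default described above ('='
# would switch to case-sensitive matching).
def wildcard_to_regex(term):
    pattern = "".join(
        ".*" if ch == "*" else "." if ch == "?" else re.escape(ch)
        for ch in term
    )
    return re.compile("^" + pattern + "$", re.IGNORECASE)

print(bool(wildcard_to_regex("young?lus").match("youngulus")))  # '?' is one character
print(bool(wildcard_to_regex("*pseudo*").match("Pseudotsuga")))  # stem anywhere
```

The same idea underlies the `*pseudo*` and `young?lus` examples in the help text: the more literal characters a pattern contains, the smaller the candidate set and the faster the query.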
Coyotes spend a good deal of their day sleeping. Members of a pack or family may sleep within close proximity of each other, or they may sleep much further apart, but probably within the same couple of acres of each other. They have amazing built-in time clocks, but they also are influenced by circumstances of the moment. My own dog could tell the time and knew what was to be done at that time. For example, I always set off, with my dog, at exactly 2:40 to pick up one of my kids at school. But one day I fell asleep — I would not have made it on time except that my dog began poking me with her muzzle at exactly 2:40. Needless to say, I was amazed. The same is true for coyotes — they seem to know when it is time to meet up, but if people or dogs are around, they will delay. Most coyotes I know like to go trekking alone. After all, their staple diet consists of voles and gophers — animals that really can't be divvied up very well. Might as well hunt alone. But some coyotes do enjoy trekking together, usually in pairs. When they hunt in pairs, there is usually a rendezvous beforehand. Rendezvous locations can remain the same for a while, or they can change drastically from day to day, but coyotes seem to have various favorite meeting spots which they alternate between for a while, before changing these altogether. This is where they congregate to then move together for their foraging. In this case here, the older female had spent her day sleeping in the sun quite some distance from where the young male had also been sleeping in the sun. The female was the first to move around — she disappeared into some bushes. In the meantime, I watched the male, who moved from where he had been sleeping to a new location where he curled up and then dozed a while longer. Finally, he got up, stretched, scratched, and began to forage. I watched him catch a vole and toy with it. He continued searching for voles and then looked up ahead.
He must have seen the female approaching, because he sat down and watched intently. She trotted over, and arrived on the scene. The ritual began with hugs and kisses. They are hidden in the grass in these photos, but you can see what is going on. It was intense, but lasted only about a minute. That was the first phase of the meeting. Then there was a pause where all activity ceased. I think the male was waiting for something, but since nothing happened he turned around and backed into her — it looked like a request. He did it again and then looked over his shoulder: “well?”. The older female was obliging. She began grooming the young fellow, pulling off burrs and bugs. He accepted this, repeatedly laying his ears back against his head — he seemed to melt with the attention. There was care, affection, and intensity here which few animals that I have seen show each other. The next phase of the meeting involved trotting off together. From what I have seen in the past — though I did not follow them this time — they will spend their time together trekking, marking their territory, hunting, playing, exploring and maybe even meeting up briefly with a couple of lone coyotes who live adjacent to this territory, before again returning to separate localities to rest.
In the January 1994 earthquake in Northridge, California, it was reported that there were at least 50 gas-related fires in structures above ground, where rigid pipe had been the predominant gas piping material for decades. In the 1995 Kobe earthquake in Japan, where flexible gas piping had been in general use for more than 25 years, there was a negligible number of gas leaks reported in CSST piping systems compared to those reported in threaded rigid pipe. In the larger cities in Japan, CSST is now mandatory for house gas piping. In the U.S., Factory Mutual Research (Approval Guide May 2001 Section 7-7) approved CSST gas piping systems based upon demonstrated ability in the laboratory to withstand the stresses imposed by shifting or tipping appliances and/or by damage to structural framing caused by earthquakes, without fracture or leakage. Factory Mutual approval is based upon testing side-by-side with black iron piping components by a nationally recognized seismic laboratory. It is now documented that conventional rigid piping systems with threaded joints are more prone to breakage under seismic loading than flexible systems. This testing demonstrates that CSST has the ability to withstand the same motions that cause threaded rigid pipe to leak. This was demonstrated in a recent fire in Pennsylvania. A lightning strike to the house at the electrical meter caused extensive damage and fire to the house. The home was originally built in the 1950s, but a master bedroom suite and garage were added in the late 1990s, and CSST was used in the addition because of its ease of installation. As a result of the fire, the addition was completely destroyed, and the roof and walls caved into the master bedroom and garage area below. Upon examination, the CSST installed in the addition was found to have withstood the falling roof and walls, and the CSST line was still intact after the fire.
The loads placed on the CSST lines were so severe that they stretched the corrugations out of the CSST until the pipe was almost smooth. There is no question that if rigid steel pipe had been installed in that addition, it would have cracked and leaked natural gas in the fire, causing much more damage. Thanks to CSST, that danger was averted.
Media Advice: Letters to the Editor

There are a number of reasons why you might want to write a letter to the editor of a newspaper or broadcast program. You might wish to raise an issue which you feel has gone under-reported, respond to an article, or notify them of any errors they may have made. Whether dealing with the editor of a local free paper or the BBC News, there are certain principles which it pays to follow. Firstly, it is absolutely vital that your letter is well written, professional, and does not come across as a rant. A badly written, ranting letter will do your campaign no end of trouble - undoing any good relationships that you may have built up with various journalists. It is worth noting that there are two different types of 'letter to the editor'. The first is (rather obviously) a personal letter written to the editor; the second is a letter written for publication. There are slightly different guidelines to follow:
- If you are writing directly to the editor, then try to find out his or her name. If you can, then use it.
- If you are writing a letter for publication, then it is customary to begin your letter with "Sir," rather than addressing the editor directly. Be sure to check whether there is a separate address or email for letters (often email@example.com).
- Give a daytime phone number where you can be contacted. Be sure to be available for comment.
- Make your first paragraph count.
- Keep your letter simple and short. Editors are pressed for time, and are unlikely to struggle through a long, rambling letter. Also, if you are writing for publication, then you are more likely to get published if there is little or no editing to be done.
- If writing for publication, remember that your letter can be edited. Make sure that you stick to one point, so that your words cannot be twisted or misinterpreted if cut down.
- Avoid point scoring and piety. It looks petty. Be calm, rational and authoritative.
- React quickly, particularly if writing for publication. Newspapers have deadlines, and the letters section's deadline is normally about 2pm for a daily. What was relevant today may not be relevant tomorrow.
- If commenting on an article, be sure to mention the title and date of the piece. This will make it easier for the letters editor to know what you are talking about.
- It is also worth reading a few examples of published letters to get an idea of the house style. Tabloids are more accepting of slang; broadsheets tend towards dry, academic responses.
- Some groups have written letters opposing their own demands (under a pseudonym) to keep a debate going. This can work well, but can also backfire. Be sure to make your fake letter less convincing than your real one, or you might lose the argument!
End statistics with hammer
- 2+: Ends where the team scored at least two points (i.e. the team was able to capitalize on the hammer).
- 1: Ends where the team scored one point (i.e. the opponent forced the team to take only one point with hammer).
- 0: Ends that the team blanked with hammer.
- Steal: Ends where the opponent scored at least one point (i.e. the opponent stole points without hammer).

End statistics without hammer
- Steal: Ends where the team scored at least one point (i.e. the team managed to steal points without hammer).
- 0: Ends where the opponent blanked the end with hammer.
- 1: Ends where the opponent scored one point (i.e. the team managed to force the opponent to take only one point with hammer).
- 2+: Ends where the opponent scored at least two points (i.e. the opponent managed to capitalize on the hammer).

- Result avg: Average result (points for - points against).
- Hammer avg: Average points / end in those ends where the team played with hammer.
- Without hammer avg: Average points / end where the team played without hammer.
- Hammer value: Describes the value of the hammer to the average points the team scored in an end.

Remarks about statistics
Apart from the result average, last-end scores have been ignored in those cases where the losing team scored points. The purpose of this is to clean the data of distortions so the statistics describe how teams have played in "normal" situations. Example: the winning team has given up a steal of one point in the last end to guarantee a victory. This end has been ignored because the situation in the game is not "normal".
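As a sketch of how these tallies fit together, the following Python snippet classifies a list of ends, each recorded as (points for, points against, had hammer). The category names follow the definitions above, but the code itself is illustrative, not taken from the site:

```python
# Classify ends into the hammer / non-hammer categories defined above.
# Each end is a tuple: (points_for, points_against, had_hammer).
def end_stats(ends):
    with_hammer = {"2+": 0, "1": 0, "0": 0, "steal": 0}
    without_hammer = {"steal": 0, "0": 0, "1": 0, "2+": 0}
    for points_for, points_against, had_hammer in ends:
        if had_hammer:
            if points_for >= 2:
                with_hammer["2+"] += 1        # capitalized on the hammer
            elif points_for == 1:
                with_hammer["1"] += 1         # forced to a single point
            elif points_against == 0:
                with_hammer["0"] += 1         # blanked end
            else:
                with_hammer["steal"] += 1     # opponent stole
        else:
            if points_for >= 1:
                without_hammer["steal"] += 1  # stole without hammer
            elif points_against == 0:
                without_hammer["0"] += 1      # opponent blanked
            elif points_against == 1:
                without_hammer["1"] += 1      # forced opponent to one
            else:
                without_hammer["2+"] += 1     # opponent capitalized
    return with_hammer, without_hammer

ends = [(2, 0, True), (0, 0, True), (1, 0, True), (0, 1, True),
        (1, 0, False), (0, 2, False)]
print(end_stats(ends))
```

The last-end filtering described in the remarks would simply drop the final tuple of a game before calling `end_stats` whenever the losing team scored in it.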
An intensive course in Manipuri

Title Review: Manipuri is one of the scheduled languages of India. The outcome of a series of workshops, the volume presents a total of 60 lessons, meant primarily for the Manipuri teacher trainees who were not acquainted with the language earlier, which constitute the Basic Course of learning Manipuri in the North East Regional Language Centre, Guwahati of the Central Institute of Indian Languages. The Basic Course is a three-phase ten-month course of 1100 hours of instruction. The Basic Course is intended to help readers perceive and reproduce sounds and their meaningful sequences, form sentences orally from given patterns and lexical items, narrate specific events orally, and converse with the teachers and fellow trainees on specific topics. The lessons in the book comprise dialogue, drills, exercises, vocabulary and notes. The language variety used for the dialogues and other purposes is the standard colloquial as spoken by educated Manipuris in the valley districts of Manipur. Some of the lessons in the book are provided with literature on Parivardhit Devanagari using extra symbols to represent the different Manipuri sounds. To make the teaching much more relevant, the work contains characters and situations that are not typical of the Manipur environment and context but are essentially relevant to the use of Manipuri language by the adult second language learners. Though the book is the prescribed text for the Basic Course phase of learning Manipuri, it may also be used for any generalised second language programme in Manipuri by adult learners and their teachers.

Similar Books
1. Learning Hindi through English
2. A glimpse of Santali phonology
3. A glimpes [i.e. glimpse] of Santali grammar
4. An intermediate course in Malayalam
5. A programmed course in Tamil

Related Subjects
1. Linguistics
2. Languages
3. Himalayan And North East Indian Studies
How Much Potential Energy Do Different Nuts Have?

It seems that at least once a year California has another energy crisis, and almost as often, someone wonders what will happen when California runs out of energy sources. Christopher Crews, knowing that all living things contain energy, set out to prove that different varieties of nuts have energy stores that can be released by burning. I believe that burning different varieties of nuts will produce energy, and that peanuts will produce the most energy.

- Amount of heat produced
- Number of nuts tested
- Amount of water

- Ten each whole, raw, unshelled nuts:
- 32-ounce coffee can
- Two 10-ounce soup cans
- Drill and bits
- Kitchen scale

- Fill large coffee can with water.
- Measure nut.
- Weigh water on kitchen scale.
- Measure and record starting water temperature.
- Drill hole through nut.
- Insert skewer through hole in nut.
- Heat nut with lighter and let the nut burn fully.
- Measure and record ending water temperature.
- Calculate BTU (ending temperature minus starting temperature, multiplied by the weight of the water in pounds). BTU stands for British Thermal Unit, the energy necessary to raise the temperature of one pound of water by 1°F. One BTU equals approximately 1,055 joules (or 1,055 watt-seconds).
- Repeat Steps 1 through 9 for each nut.

The average BTU for each type of nut is as follows:
Cashews 15.75 BTU
Almonds 13.76 BTU
Peanuts 10.77 BTU

The hypothesis was that burning different varieties of nuts would produce energy, with peanuts producing the most energy. That's partially correct because all the nuts produced energy. However, the hypothesis was incorrect because the cashews produced the most energy, and peanuts produced the least.

Warning is hereby given that not all Project Ideas are appropriate for all individuals or in all circumstances. Implementation of any Science Project Idea should be undertaken only in appropriate settings and with appropriate parental or other supervision.
Reading and following the safety precautions of all materials used in a project is the sole responsibility of each individual. For further information, consult your state’s handbook of Science Safety.
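The procedure's BTU arithmetic can be sketched as follows. The water weight and temperatures below are hypothetical examples, not figures from the experiment; by the BTU definition given above, energy equals pounds of water times the temperature rise in degrees Fahrenheit.

```python
def btu_released(water_weight_lb, start_temp_f, end_temp_f):
    # One BTU raises one pound of water by 1 degree F, so the energy absorbed
    # by the water is its weight in pounds times the temperature rise.
    return water_weight_lb * (end_temp_f - start_temp_f)

# Hypothetical run: 1.5 lb of water warmed from 70 F to 80 F by a burning nut
energy_btu = btu_released(1.5, 70.0, 80.0)
energy_joules = energy_btu * 1055  # conversion factor given in the write-up
print(energy_btu, energy_joules)   # 15.0 15825.0
```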
Referrals & Appointments

CT Lung Scan

Edward Cancer Center offers a CT Lung Screening for at-risk patients. This lung screening, designed to assist in early detection, can give those at risk the opportunity for a healthier lifestyle.

Why should I have a CT Lung Screen?
Lung cancer is the No. 1 cancer killer in the United States, with 222,520 people expected to die from it this year. In part, this is because there has been no reliable way to detect lung cancer in its earliest, most treatable stage. Now, new research suggests a way to screen people at high risk for lung cancer and reduce the number of deaths from this disease.

Who should have a CT Lung Screen?
The National Comprehensive Cancer Network (NCCN) has recommended screening for high-risk individuals ages 55 to 74 who have smoked a pack or more of cigarettes a day for 30 years or more, and who are still smoking or who quit less than 15 years ago. The NCCN also recommends screening for those 50 and older who have smoked a pack a day or more of cigarettes for 20 years or longer and have one additional risk factor for lung cancer. This could include a history of exposure to radon or occupational exposure to certain chemicals.

Does the CT Lung Screen hurt or is it uncomfortable?
No, the screen is simple and painless. You simply lie on the screening table, which slides under a large open cylinder, and hold your breath for a few seconds. It is a quick procedure, about 15 minutes including prep time.

Are there any risks?
A CT Lung Screening finds abnormalities in about 20 to 60 percent of smokers and former smokers, but most of these abnormalities are scars from inflammation or other noncancerous conditions. In some cases, screenings may lead to biopsy, causing anxiety for those getting screened and their loved ones. Often the spot is found to be benign, but that can only be determined through an invasive procedure.
CT screens are so good at seeing nodules or spots on the lung that they can detect very small nodules that don't need immediate testing but do need follow-up CT screenings to watch for changes. CT screens also expose you to a small dose of radiation. Compared with some other parts of the body, such as the breast, lungs have greater potential for developing radiation-induced cancer. The risk of developing cancer from CT screens is small, but it's a reminder of the importance of weighing the risk versus the benefit of any medical test.

Why should I have my CT Lung Screen at Edward?
Unlike other centers that simply screen and provide your results, the Edward Cancer Center has a Multidisciplinary Thoracic Oncology Clinic made up of a team of cancer experts. Once you are screened, you will automatically be enrolled in our Multidisciplinary Clinic, and our experts will actively follow up with you on a yearly basis to monitor your lung health. In the event a nodule is found to be cancerous, our team will notify you immediately and schedule you to meet with the physicians who make up our Multidisciplinary Clinic. These physicians gather with you to review and discuss your case and develop an individualized treatment plan. The result is coordinated, faster, and more efficient treatment. With our Clinic, patients are able to start treatment within weeks, whereas at other facilities, it can take up to a month just to coordinate the treatment plan. In addition, we have a nurse navigator who helps guide patients and families with anything and everything, from answering treatment questions to lending a supportive ear. While our team will track and closely monitor your care, we will also keep your primary care physician informed of any tests, treatments, and results in order to enhance communication and coordination of care.
When Someone Dies
A Practical Guide to Holistic Care at the End of Life
By Hannah Cooke, BSc(Hons), MSc(Econ), MSc(Nursing Studies), RGN, DN, RNT, Lecturer in Nursing, University of Manchester School of Nursing Studies, Manchester, UK

Caring for patients who have been diagnosed with terminal illness provides a challenge to all healthcare professionals, as confronting the possibility of death and discussing it openly and supportively with the patient and their family is not easy. To ensure death with dignity, patients need holistic care: practical, social, emotional, and spiritual needs must all be met. This new book by Hannah Cooke links all these areas to provide practical support and advice on areas such as choices of treatment (including the use of complementary therapies), pain control, counselling and religious beliefs. There is also a section on the care of the bereaved, with a list of helpful organisations that can be contacted. 'When Someone Dies' will help nurses and other healthcare professionals implement a partnership of care that will ensure that their patients live until they die.

Paperback, 192 Pages
Published: June 2000
Imprint: Butterworth Heinemann

- Part One: Caring for Dying Patients and their Families: Communicating with dying patients and their families; Choices for dying people; Patient comfort and control of symptoms; Wills and end-of-life decisions
- Part Two: Care of the patient after death; Organ donation and donation of bodies
- Part Three: Care of the Bereaved: Supporting the bereaved; Practical matters following a death
- Part Four: Caring for the Religious Needs of the Dying: Meeting religious needs in the hospital setting; Religious traditions and healthcare
- Index
How hot is it where you are? Tell your stories at CNN's iReport.

(CNN) -- For many Americans, this summer has been miserably hot. Heat advisories and warnings have been issued from coast to coast, with high temperatures often reaching into the triple digits, and July went into the record books as the hottest month ever for the continental United States. But in certain parts of the world, this is the norm -- or maybe even on the cool side.

Try Kuwait City, for instance. In July, its average high temperature is 116 degrees Fahrenheit. Or Timbuktu in Mali, where highs average 108 in May and the temperature was once recorded at 130. 130! That ranks fifth on the all-time list.

The highest temperature ever recorded on the planet was in 1922, when a thermometer in El Azizia, Libya, hit 136. Some dispute that mark, saying it was improperly measured. If that's true, the record would be 134, reached nine years earlier in Death Valley, California.

But the world's hottest place might not be any of these, according to a team of scientists from the University of Montana. They say the highest temperatures on Earth are found in areas that don't even have weather stations.

"The Earth's hot deserts -- such as the Sahara, the Gobi, the Sonoran and the Lut -- are climatically harsh and so remote that access for routine measurements and maintenance of a weather station is impractical," said David Mildrexler, lead author of a recent study that used NASA satellites to detect the Earth's hottest surface temperatures.

The satellites detect the infrared energy emitted by land. And over a seven-year period, from 2003 to 2009, they found Iran's Lut Desert to be the hottest place on Earth. The Lut Desert had the highest recorded surface temperature in five of the seven years, topping out at 159 degrees in 2005. Other notable annual highs came from Queensland, Australia (156 degrees in 2003) and China's Turpan Basin (152 degrees in 2008).
It's important to stress that surface temperatures are naturally higher than the air temperatures measured by weather stations. Air temperatures have to be measured by thermometers placed off the ground and shielded from sunlight, according to global meteorological standards. But the study shows that today's modern records might not necessarily be the most accurate. "Most of the places that call themselves the hottest on Earth are not even serious contenders," co-author Steve Running said.

The world's highest recorded air temperatures
1. El Azizia, Libya (136 degrees Fahrenheit)
2. Death Valley, California (134)
3. Ghadames, Libya (131)
3. Kebili, Tunisia (131)
5. Timbuktu, Mali (130)
5. Araouane, Mali (130)
7. Tirat Tsvi, Israel (129)
8. Ahwaz, Iran (128)
8. Agha Jari, Iran (128)
10. Wadi Halfa, Sudan (127)

Highest recorded air temperature (by continent)
Africa: El Azizia, Libya (136)
North America: Death Valley, California (134)
Asia: Tirat Tsvi, Israel (129)
Australia: Cloncurry, Queensland (128*)
Europe: Seville, Spain (122)
South America: Rivadavia, Argentina (120)
Antarctica: Vanda Station, Scott Coast (59)

Sources: NOAA, World Meteorological Organization

* This temperature was measured using the techniques available at the time of recording, which are different to the standard techniques currently used in Australia. The most likely Australian record using standard equipment is an observation of 123 degrees, recorded at Oodnadatta, South Australia.
Lace is a lightweight, openwork fabric, patterned with open holes in the work, made by machine or by hand. Lace-making is an ancient craft, but true lace was not made until the late 15th and early 16th centuries. A true lace is created when a thread is looped, twisted or braided to other threads independently from a backing fabric. Originally linen, silk, gold, or silver threads were used. Now lace is often made with cotton thread. Manufactured lace may be made of synthetic fiber. A few modern artists make lace with a fine copper or silver wire instead of thread.

There are many types of lace, defined by how they are made. These include:
*Needle lace: made using a needle and thread. This is the most flexible of the lace-making arts.
*Cutwork, or whitework: lace constructed by removing threads from a woven background, with the remaining threads wrapped or filled with embroidery.
*Bobbin lace: as the name suggests, made with bobbins and a pillow.
*Tape lace: makes the tape in the lace as it is worked, or uses a machine- or hand-made textile strip formed into a design, then joined and embellished with needle or bobbin lace.
*Knotted lace: including macramé and tatting.
*Crocheted lace: including Irish crochet, pineapple crochet, and filet crochet.
*Knitted lace: including Shetland lace, such as the "wedding ring shawl", a lace shawl so fine that it can be pulled through a wedding ring.
*Machine-made lace: any style of lace created or replicated using mechanical means.

Muslin is most typically a closely-woven unbleached or white cloth, produced from corded cotton yarn. Wide muslin is called "sheeting". It is often used to make dresses or curtains but may also be used to complement foam for bench padding. Muslin breathes well, and is a good choice of material for clothing meant for hot, dry climates.

Silk is a natural protein fiber, some forms of which can be woven into textiles.
The best-known type of silk is obtained from cocoons made by the larvae of the silkworm Bombyx mori reared in captivity (sericulture). The shimmering appearance for which silk is prized comes from the fibers' triangular prism-like structure, which allows silk cloth to refract incoming light at different angles.

Cotton is a soft fibre that grows around the seeds of the cotton plant (Gossypium sp.), a shrub native to tropical and subtropical regions around the world, including the Americas, India, and Africa. However, virtually all of the commercial cotton grown today worldwide is grown from varieties of the native American species Gossypium hirsutum and Gossypium barbadense. The fiber is most often spun into yarn or thread and used to make a soft, breathable textile, which is the most widely used natural-fiber cloth in clothing today.

Microfiber is fiber with strands less than one denier, typically a blend of polyester and polyamide. Fabrics made with microfibers are exceptionally soft and hold their shape well. When high-quality microfiber is combined with the right knitting process, it creates an extremely effective cleaning material that can hold up to seven times its weight in water. Microfibers are also used for some cleaning applications because of their exceptional ability to absorb oils.

Rayon is a manufactured regenerated cellulosic fiber. It is produced from naturally occurring polymers and therefore it is not a truly synthetic fiber, nor is it a natural fiber.

Spandex is a synthetic fiber known for its exceptional elasticity (stretchability). It is stronger and more durable than rubber, its major non-synthetic competitor.

Satin is a cloth that typically has a glossy surface and a dull back. It is a warp-dominated weaving technique that forms a minimum number of interlacings in a fabric. If a fabric is formed with a satin weave using filament fibers such as silk, nylon, or polyester, the corresponding fabric is termed a "satin".
The popularity of stockings increases and decreases with fashion. They were formerly made of woven cloth but are now knitted from wool, silk, cotton or nylon.

The most common and well-known use of corsets is to slim the body and make it conform to a fashionable silhouette. For women this most frequently emphasizes a curvy figure by reducing the waist, and thereby exaggerating the bust and hips. However, in some periods, corsets have been worn to achieve a tubular straight-up-and-down shape, which involves minimizing the bust and hips.

A wide variety of types of panties exist. Bikini panties are designed so that the hip connectors are small, like those on swimwear. String bikini panties are the most commonly worn type in the United States by high school and college age women, and are similar to regular bikini panties, but instead of a thin hip grip, they have a small string, which sometimes ties around the waist rather than being pulled up over the hips. The string bikini is considered more revealing and is usually made of satin or silk, but occasionally of other fabrics. High-cut, or control-top, panties are cut higher on the hip and slightly pull in and shape the stomach. They are usually worn by older women and are often shunned by younger women. Boyshorts are a type of female underwear with a lower, thicker cut of material around the hips, making them look like the shorts that men wear. They are sometimes criticized by men and women alike as not being feminine, although some women do wear them. The g-string is a thong panty with a string running between the buttocks; it is often jokingly referred to as "floss" by critics and some comedians. Panties are made of a variety of materials and fabrics including satin, silk, PVC, cotton, nylon, mesh, lace, rawhide, leather, lycra, and/or polyester.
In British English, and in places such as Great Britain, Wales, Australia, New Zealand, Scotland, Ireland, South Africa and India, panties are often referred to as knickers. The term knickers is not generally used in the United States and Canada, where the term "panties" is usually favored.

A G-string (alternatively gee-string or gee string) is a type of thong: a narrow piece of cloth, leather, or plastic that covers or holds the genitals, passes between the buttocks, and is attached to a band around the hips, worn as swimwear or underwear by both men and women.

A bra is an article of clothing that covers, supports, and elevates the breasts. The bra is considered a foundation garment, as well as an undergarment, because of its role in shaping the wearer's figure. It was originally developed in the late nineteenth and early twentieth centuries to replace the corset, and has now become, in many parts of the world, the most popular form of undergarment for the upper body, although camisoles and chemises are becoming more popular. The bra may be worn to support and enhance breast shape during everyday activities, and a specialized bra, the sports bra, supports and restrains the breasts during exercise. Some wearers believe that wearing a bra will prevent their breasts from sagging later in life.

A wide range of styles of brassieres now exists, to be worn in a variety of situations and with a variety of outergarments. For instance, strapless, backless and multiway bra styles specialise in being invisible underneath less-than-full-coverage garments, whereas push-up and plunge styles focus on shaping the bust and cleavage. The degree of shaping and coverage of the breasts varies between styles, as do functionality and fashion, fabric, and colour. Styles range from the purely utilitarian to the sensual. Others include accessory structures such as padding and underwiring.
Some operations are difficult or even impossible to obtain with an expression, or the operation could become too complex to achieve. The alternative is to create a function that takes care of performing the operation and supplying the result to the table. Of course, as you may know already, a function is its own object. This means that, after creating it, to use its result in a table, you must call it. For example, you can create a function that returns a value, then call that function and assign its returned value to a column. You can create your own function and use it, or you can use one of the built-in functions of Transact-SQL.

In order to involve a function with your data entry, you must have one. You can create your own function using the techniques we learned already. To make the function easily accessible, you should create it as part of the database that would use it. Here is an example:

-- =============================================
-- Author:      FunctionX
-- Create date: Saturday 22 December 2007
-- Description: Used to calculate the greatest common divisor
-- =============================================
CREATE FUNCTION GCD
(
    @a int,
    @b int
)
RETURNS int
AS
BEGIN
    DECLARE @Remainder int;

    WHILE @b <> 0
    BEGIN
        SET @Remainder = @a % @b;
        SET @a = @b;
        SET @b = @Remainder;
    END

    RETURN @a;
END

When calling the function, follow the normal rules. Here are examples:

INSERT INTO Calculations
VALUES(345, 135, dbo.GCD(345, 135));
GO
INSERT INTO Calculations
VALUES(40, 6, dbo.GCD(40, 6));
GO
INSERT INTO Calculations
VALUES(16, 28, dbo.GCD(16, 28));
GO

You can also use one of the built-in functions of Transact-SQL. Probably the best way to become familiar with the built-in functions is to check the online documentation to find out if the operation you want to perform is already implemented. Using a built-in function would spare you the trouble of creating your own.
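The GCD function above implements the classic Euclidean algorithm. As a quick cross-check of the loop's logic outside the database, the same algorithm can be sketched in Python:

```python
def gcd(a: int, b: int) -> int:
    # Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b)
    # until the remainder is zero; the last non-zero value is the GCD.
    while b != 0:
        a, b = b, a % b
    return a

# Same inputs as the INSERT statements below
print(gcd(345, 135))  # 15
print(gcd(40, 6))     # 2
print(gcd(16, 28))    # 4
```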
For example, imagine you have a database named AutoRepairShop and imagine it has a table used to create repair orders for customers:

CREATE TABLE RepairOrders
(
    RepairID int Identity(1,1) NOT NULL,
    CustomerName varchar(50),
    CustomerPhone varchar(20),
    RepairDate DateTime
);
GO

When performing data entry for this table, you can let the user enter the customer name and phone number. On the other hand, you can assist the user by programmatically entering the current date. To do this, you would call the GETDATE() function. Here are examples:

INSERT INTO RepairOrders(CustomerName, CustomerPhone, RepairDate)
VALUES('Annette Berceau', '301-988-4615', GETDATE());
GO
INSERT INTO RepairOrders(CustomerPhone, CustomerName, RepairDate)
VALUES('(240) 601-3795', 'Paulino Santiago', GETDATE());
GO
INSERT INTO RepairOrders(CustomerName, RepairDate, CustomerPhone)
VALUES('Alicia Katts', GETDATE(), '(301) 527-3095');
GO
INSERT INTO RepairOrders(RepairDate, CustomerPhone, CustomerName)
VALUES(GETDATE(), '703-927-4002', 'Bertrand Nguyen');
GO

You can also involve the function in an operation, then use the result as the value to assign to a field. You can also call a function that takes one or more arguments; make sure you respect the rules of passing arguments to a function when calling it. If none of the Transact-SQL built-in functions satisfies your requirements, you can create your own.
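The same pattern, supplying the timestamp from code rather than asking the user for it, can be sketched with Python's built-in sqlite3 module. SQLite has no GETDATE(); datetime.now() plays that role in this sketch:

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE RepairOrders (
        RepairID INTEGER PRIMARY KEY AUTOINCREMENT,
        CustomerName TEXT,
        CustomerPhone TEXT,
        RepairDate TEXT
    )
""")

# The user supplies name and phone; the program fills in the date,
# just as GETDATE() does in the Transact-SQL examples above.
conn.execute(
    "INSERT INTO RepairOrders(CustomerName, CustomerPhone, RepairDate)"
    " VALUES (?, ?, ?)",
    ("Annette Berceau", "301-988-4615", datetime.now().isoformat()),
)

row = conn.execute(
    "SELECT RepairID, CustomerName, RepairDate FROM RepairOrders"
).fetchone()
print(row[0], row[1])  # 1 Annette Berceau
```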
Menezes, Pradeep L and Kishore, * and Kailas, Satish V (2006) Studies on friction and transfer layer using inclined scratch. In: Tribology International, 39 (2). pp. 175-183.

Friction influences the nature of the transfer layer formed at the interface between die and sheet during forming. In the present investigation, basic studies were conducted using the 'Inclined Scratch Test' to understand the mechanism of transfer layer formation during sliding of pins made of an Al-Mg alloy on EN8 steel flats of different surface roughness under dry and lubricated conditions. The surfaces produced can be categorized into three different types: (a) uni-directional, (b) 8-ground, and (c) random. Rubbing the EN8 flat in a uni-directional manner and a criss-cross manner on emery sheets produced the uni-directional and 8-ground surfaces, respectively. The random surfaces were produced by polishing the EN8 flats using various abrasive powders. The influence of the 'nature of surface roughness' on material transfer and coefficient of friction was investigated. Scanning Electron Microscopy studies were performed on the contact surfaces of the Al-Mg alloy pins and EN8 steel flats to reveal the morphology of the transfer layer obtained. It was seen that the transfer layer is dependent on the coefficient of friction. The coefficient of friction, which has two components, the adhesion component and the plowing component, is controlled by the 'nature of surface'. A surface that promotes plane strain conditions near the surfaces increases the plowing component of friction.
Item Type: Journal Article
Additional Information: Copyright for this article belongs to Elsevier.
Keywords: Friction; Nature of surface; Inclined scratch
Department/Centre: Division of Mechanical Sciences > Materials Engineering (formerly Metallurgy); Division of Mechanical Sciences > Mechanical Engineering
Date Deposited: 19 Jan 2006
Last Modified: 19 Sep 2010 04:23
Futurity.org - http://www.futurity.org

On sale! But does inexpensive mean cheap?
Posted by Amy Wolf-Vanderbilt on November 27, 2012, in Society & Culture

VANDERBILT (US) — The holidays usher in the biggest shopping season of the year, but sellers need to beware: sale prices can mean different things to different consumers.

New research by Vanderbilt Owen Graduate School of Management professor Steve Posavac and his co-authors finds that consumers often make inferences to fill gaps in their knowledge when they don't have complete information about a product. In some consumers' minds, price denotes quality; for others, a low price leads a consumer to believe he or she is getting a good value.

"The bottom line of our research is that people can hold two opposing beliefs about a product. In the case of price, most people simultaneously believe that low prices mean good value and that low prices mean low quality. But these two beliefs are not equally present in consumers' minds all the time," says Posavac, professor of marketing at Vanderbilt University.

Published in a forthcoming Journal of Consumer Research, the study finds that consumers use a series of theories when considering value and price. And how they size up a possible purchase depends on what's on their mind when they're thinking about a given product.

"Consumers rarely have complete information and use various strategies to fill the gaps in their knowledge as they consider and choose products. One of these strategies involves using naive theories: informal, common sense explanations that consumers use to make sense of their environment," write Posavac and his co-authors Hélène Deval, Susan P. Mantel, and Frank R. Kardes.
For example, consumers may believe that popular products that are well advertised are high in quality while also believing that scarce or rarely heard-of products are high in quality.

Price vs. quality
The researchers conducted eight experiments that tested marketing techniques that leaned toward price or quality. In one experiment, consumers were shown an ad for a bottle of wine with either a high or low price. When subtly reminded of quality, consumers evaluated the expensive wine more favorably than the cheap wine. However, when subtly reminded of value, they rated the cheap wine more favorably.

When marketing backfires
Sales promotions succeed when consumers perceive that they are getting a good deal, but they can also backfire if consumers perceive that lower prices indicate poor quality. Posavac uses department store J.C. Penney as an example.

"A company may implement an everyday low-pricing strategy that manages to reduce brand value and alienate consumers if many of them believe that low prices equal low quality. Over the years, J.C. Penney customers had become so used to sales that they no longer believed they were getting a good deal," says Posavac.

Because consumers use multiple "naive theories" when analyzing a product, a company's subtle marketing tactics toward price or quality may attract one consumer while easily turning another consumer off.

"[Companies] design a strategy by assuming that a certain naive theory is going to drive consumer evaluation and choice when, in fact, several naive theories are available to the consumer," the authors conclude.

Source: Vanderbilt University
FAUZIAH, CATUR KHUROTUL (2007) HUBUNGAN ANTARA RELIGIUSITAS DENGAN KEPUASAN HIDUP PADA LANJUT USIA (The Relationship between Religiosity and Life Satisfaction in the Elderly). Other thesis, University of Muhammadiyah Malang.

Religiosity is an appreciation, conviction, and lived experience of religious teachings, realized in the practice of worship and religious rituals. When religiosity is well cultivated, it can give rise to life satisfaction in the elderly. Life satisfaction here is a state that includes feelings of enthusiasm, assertiveness and resilience, a match between desires and the achievement of goals, a positive self-concept, and a calm mood. The purpose of this study was to determine the relationship between religiosity and life satisfaction in the elderly.

This study uses a quantitative approach. The subjects were elderly people who attend the routine recitation at the Nurul Ulum boarding school. The sampling technique used was total sampling, with 50 study subjects. Two scales were used for data collection: a religiosity scale and a life satisfaction scale. The data collected were analyzed with the product-moment correlation using SPSS for Windows version 10.

The results indicate a positive and highly significant relationship between religiosity and life satisfaction in the elderly (r = 0.419, p = 0.002). This means that the higher a person's religiosity, the higher their life satisfaction; conversely, the lower the religiosity, the lower the life satisfaction. The effective contribution of religiosity to the life satisfaction of the elderly was 17.6%, while the remaining 82.4% is influenced by other variables that were not examined.

Item Type: Thesis (Other)
Subjects: B Philosophy. Psychology. Religion > BF Psychology
Divisions: Faculty of Psychology > Department of Psychology
Depositing User: Zainul Afandi
Date Deposited: 29 May 2012 09:29
Last Modified: 29 May 2012 09:29
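The product-moment (Pearson) correlation the study runs in SPSS can be sketched directly; note that the reported "effective contribution" of 17.6% is simply r squared (0.419² ≈ 0.176). The scores below are hypothetical illustrations, not the study's data:

```python
from math import sqrt

def pearson_r(xs, ys):
    # Product-moment correlation: covariance of the two score lists
    # divided by the product of their standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical religiosity and life-satisfaction scores for five respondents
r = pearson_r([10, 12, 15, 18, 20], [30, 33, 38, 44, 50])
print(round(r, 3), round(r * r, 3))  # r, and the "effective contribution" r^2
```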
Click chemistry is a term introduced by Nobel laureate K. B. Sharpless and co-workers to describe chemistry tailored to generate substances quickly and reliably by joining small units together, similar to the modular strategy adopted by Mother Nature. The term "click chemistry" implies that the reactions are highly efficient, wide in scope, and stereospecific; that product isolation is easy; and that they are simple to perform using inexpensive reagents and can be conducted in benign solvents such as water. The copper-catalyzed variant of the Huisgen azide-alkyne cycloaddition (CuAAC), co-developed by Valery V. Fokin and Sharpless, fits the concept well and is one of the most popular prototype click reactions to date. Click chemistry is finding a number of applications in the areas of drug discovery, bioconjugation, and materials science. It has been successfully utilized in the synthesis of peptides, particularly in peptide cyclization and modification.

The CuAAC click reaction between an azide and an alkyne takes place in the presence of a Cu(I) catalyst under mild conditions, resulting in the formation of a triazole link connecting the two molecules. In peptide chemistry, the increasing popularity of the CuAAC is largely a result of the unique properties of both azides and the resulting triazoles. Azide groups are easy to introduce, stable to water and oxidative conditions, and orthogonal to many functional groups in peptide synthesis. For applications in vitro and in vivo, azides are virtually absent from naturally occurring species (bioorthogonal). Interestingly, the triazole moiety formed by the click reaction has a unique similarity to an amide bond. The relative planarity, strong dipole moment, and hydrogen bonding ability of the triazole linkage make it as attractive as an amide bond, with the added advantage that it is less prone to hydrolytic cleavage.
Click chemistry provides a number of avenues for peptide synthesis and modification and can be combined with other techniques to make complex structures and multicomponent functionalized systems with ease. For example, peptides can be converted postsynthetically to an azido derivative, which can be clicked with an appropriate substrate containing a clickable alkynyl group, or vice versa. Peptides can also be made by inter- and intramolecular click reactions using azide- or alkyne-containing amino acids or building blocks during peptide synthesis. Building blocks containing clickable moieties are instrumental in constructing side-chain-modified peptides, inter-side-chain peptide chimeras, peptide-small-molecule conjugates, and cyclic peptides. Solid-phase resins modified with clickable groups can also be used for making clickable/modified peptides. Click chemistry is compatible with the various protected amino acid side chains used in peptide synthesis.

A number of reagents and building blocks can be utilized for peptide click chemistry. These include azido-amino acids (e.g. Fmoc-protected for solid-phase synthesis), propargyl amino acids, PEG azides and alkynes (maleimide-PEG3-azide and acetylene-PEG-maleimide for pegylation), alkyne- and azide-containing chemical modification reagents (propargylamine, 1-(2-nitrophenyl)propargyl alcohol, succinimidyl-hex-5-ynoate, succinimidyl-4-azidovalerate, pentynoic acid, 2-azido-3-methylpropanoic acid), and diazo-transfer reagents (imidazole-1-sulfonyl azide).

Synthesis of Cyclic Peptides
A variety of macrocyclization methods are available to increase the clinical efficacy and bioavailability of peptide drugs. Cyclization stabilizes the peptide molecule by locking its conformation, thus increasing potency and in vivo half-life. Introduction of azide and alkyne moieties into structurally diverse peptide side chains, combined with on-resin macrocyclization conditions, is used to design structurally constrained peptides.
The click reaction has been exploited in a number of different peptide cyclization reactions, such as: the cyclization of a disulfide-containing peptide on the resin, with or without protecting groups on; the preparation of novel heterodetic cyclopeptides by an intramolecular side chain-to-side chain click reaction, forming a 1,4-disubstituted [1,2,3]triazolyl-containing bridge; the cyclization of tripeptides for making vancomycin-inspired mimics; and the on-resin cyclization of peptide ligands of the vascular endothelial growth factor receptor 1. Formation of macrocyclic heterodimers was observed in high yield in many cases during click-mediated macrocyclization reactions, opening up the prospect of synthesizing complex peptide structures that are otherwise difficult to make. Side chain-to-side chain cyclization, e.g., by ring-closing olefin metathesis, known as stapling, is one approach to increase the biological activity of short peptides that has shown promise when applied to 3(10)- and α-helical peptides. A novel stapling methodology for 3(10)-helical peptides using the CuAAC click reaction in a model aminoisobutyric acid (Aib)-rich peptide resulted in a more ideal 3(10)-helix than its acyclic precursor. Chemical Ligation of Peptides Joining two or more peptide fragments together to make a larger peptide chain is called ligation. Click chemistry can be conveniently utilized to make peptide–peptide linkages. A peptide fragment functionalized with an alkyne group can be ligated to another peptide with an N-terminal azide moiety, resulting in a triazole linker (similar to an amide bond, as explained earlier) holding the two peptide units together. Similarly, multimeric peptides can be made by using orthogonal side-chain protecting groups such as ivDde or Alloc (e.g., on the side chain of Lys), followed by deprotection, attachment of an alkyne function, and clicking with N-terminal azide peptides.
Several examples of peptide ligation are available, such as: the synthesis of a clickable RGD peptide (made by reacting the Lys side chain with azidoacetic acid) that can be clicked to another peptide fragment; the synthesis of a cell-permeable peptide by ligating a therapeutic alkynyl-modified peptide (using inexpensive propargylamine or 1-(2-nitrophenyl)propargyl alcohol) with nona-arginine modified with an azide group; and the synthesis of neurotensin (8–13)-containing heterodimers by clicking alkyne-neurotensin (made by reacting with succinimidyl-hex-5-ynoate) with the azide of a Plk1-PBR-binding phosphorylated hexapeptide (made by reacting with succinimidyl-4-azidovalerate). Click triazole-based oligopeptides were also found to self-dimerize in a head-to-tail fashion. Synthesis of Modified Peptides Modification of peptides by pegylation has been achieved by click chemistry. For example, a lipopeptide was assembled on a solid-phase resin, followed by an on-resin pegylation reaction (using azido-PEG) and cleavage of the pegylated peptide from the resin. Such molecules are ideally suited for the functionalization of solid-supported lipid bilayers and liposomal drug delivery systems, and are particularly valuable in enzyme activation strategies. Click chemistry has tremendous potential for various chemical modifications of peptides and proteins (e.g., attaching ligands, lipophilic or lipophobic groups, hydrophilic and hydrophobic linkers, etc.), and a number of clickable substrates can be designed for this purpose. Arginine-rich TAT peptides (capable of penetrating the plasma membrane directly) modified with a clickable azido group can be conjugated to oligonucleotides, cytotoxic drugs, kinase inhibitors, etc., to facilitate cell penetration for therapeutic applications. Alkyne- or azido-containing prosthetic groups bearing radioisotopes can be used for labeling modified peptides. In conclusion, click chemistry is a powerful technique for the synthesis and modification of peptides.
In the future, applications of click chemistry to peptides will grow dramatically, given the potential of these molecules in drug development, diagnostics, cosmetics, and materials science, combined with the simplicity and efficiency of click reactions. This elegant chemistry will make it possible to overcome difficulties in making complex peptides.
Glanford Brigg Poor Law Union - As a result of the Poor Law Amendment Act reform of 1834, 50 Brigg-area parishes covering 252 square miles in Lincolnshire, just south of the River Humber, became part of the Glanford Brigg Poor Law Union, which was formally constituted on 18 January 1837. The count of parishes increased over the years as new parishes formed in the district. See an expanded history at the Peter Higginbotham site. - This Poor Law region covered 165,470 acres of land. - The Registration District had three subdistricts: Brigg, Barton and Winterton. The Winterton subdistrict was originally called the Messingham subdistrict (this changed between 1842 and 1872). - A workhouse was built on Wrawby Street, at the entrance to the town from Wrawby parish. It was built in 1835-37 of white brick, designed to hold 220 inmates. It had a detached fever ward. A new infirmary was erected in 1914-15. A photo of the infirmary is at: Workhouse Infirmary - The Board of Guardians met on alternate Thursdays at the workhouse. - The workhouse later became "Glanford Hospital". This closed in 1991. The buildings were later demolished, with the exception of the infirmary. - The Lincolnshire Archives has the Guardians' minute books 1920-1930; the Minutes of Committees 1912-30; Register of Inmates 1928-37; Register of Births 1914-1944; Boarding out 1912-30; etc. All records are subject to 100-year closure for privacy. - For a map of the area, see: Alan Godfrey maps. - For more on what the LFHS and the Lincoln Archives have on Lincolnshire Poor Law records, see our Poorhouses page. - H.O. 107 / 2116 - 1842: Relieving officer: Charles CAPES. - 1856: Sir Robert SHEFFIELD, baronet, Chairman of the Board of Guardians; John HETT, clerk; Rev. J. R. WEST, chaplain; George EMPRINGHAM, workhouse master; Mrs. EMPRINGHAM, matron. Relieving officers: Thomas JAMESON, William MASON and Charles CLAY. - 1872: Rowland WINN, Chairman of the Board of Guardians; John HETT, clerk; Rev. J. R.
WEST, chaplain; George EMPRINGHAM, workhouse master; Mrs. EMPRINGHAM, matron. Relieving officers: George WRIGHT, William MASON and Charles CLAY. - 1881: Henry GIBSON, workhouse master; Mrs. Caroline Hall GIBSON, matron. Miss Emily OWENS, schoolmistress; Miss Charlotte GRADY, Nurse; Miss Harriet ANDREWS, Nurse; Thomas HILL, Porter. - 1913: Rev. George GODFREY, Chairman of the Board of Guardians; John BEAULAH, Vice Chairman; Richard Fox SMITH, Vice Chairman; Frank C. HETT, clerk; Alexander S. L. MELVILLE, Treasurer; Rev. A. N. CLAYE, chaplain; H. F. GIBSON, workhouse master; Mrs. GIBSON, matron. Relieving officers: Elwood Alfred ELWOOD, Walter Arnold TAYLOR, Joseph H. KENDALL. Find help, report problems, or contribute information. [Last updated: 30-April-2007 - Louis Mills]
US 7278346 B2 A cutting device holds an article to be cut stationary while a blade is traversed along a curved guide to peel or slice away material from the article. The article may be rotated using an indexing mechanism to realign the article with respect to the blade. The cutting device is especially configured to cut complex shapes, such as a seven-sided tourné. Portions of the cutting device may be separated for storage or travel, and a storage container may be provided to protect the device as well as to keep the separated pieces together. 1. A cutting device for cutting an article, comprising: a cutting portion comprising a curved blade guide having a top portion and a bottom portion, a support arm attached to the blade guide, and a needle-shaped first holder contacting the support arm; and a base portion connectable to the cutting portion, the base portion comprising a second needle-shaped holder and an indexing mechanism, wherein the first and second holders hold the article stationary relative to the base portion when cutting said article with a blade that travels along the curved blade guide from the top portion to the bottom portion, and wherein the indexing mechanism permits rotation of the article relative to the cutting portion before executing a cut with the blade. 2. The cutting device of 3. The cutting device of 4. The cutting device of 5. The cutting device of 6. The cutting device of 7. The cutting device of 1. Field of the Invention The present invention relates generally to the art of slicing, and more particularly to the art of cutting food items into desired shapes. 2. Description of the Related Art Slicing shapes in vegetables is generally well known in the culinary art. In particular, the tourné shape is highly regarded for its uniqueness. As shown in the drawings, the tourné shape is unique because it has an uneven number of curved or arced sides. Such a shape is desired for food items, such as vegetables and roots, particularly potatoes.
However, the unusual shape is difficult to accurately produce by manual cutting. Thus, recreating the tourné shape typically requires a great deal of effort, time, and skill. It would be very desirable to form the tourné shape quickly and efficiently using a device that is simple, portable, and easy to clean. U.S. Design Pat. No. 397,921 to Joergensen shows a manual potato peeler with a handle and blade. Such peelers and blades are well known in the art, but require great skill to create complex designs. Peeling apparatuses that hold an article during peeling are generally well known in the culinary art. For example, U.S. Pat. Nos. 1,006,621; 2,130,980; 2,521,987; 4,738,195; 4,765,234; 5,950,528; 5,957,045 and 6,408,520 teach various peeling, slicing and coring machines. However, these devices are incapable of shaping an article into a tourné, i.e., a three-dimensional shape with seven arced sides. U.S. Pat. 5,582,096 to Marton illustrates a more complex vegetable peeling and shaping machine. A potato is held in a chute or tube as blades are fed into and out of slots through the tube or chute to cut portions of the potato away. Similarly, Japanese publication JP 06141991 A illustrates a vegetable cutter, comparable to Marton, that utilizes cutting edges that travel through guides 3 to form curved surfaces. These devices permit the creation of complex shapes, such as the symmetrical “Chateau” shape. However, these devices are complex, relatively expensive, difficult to clean, and relatively difficult to move about or use in a typically crowded kitchen. Commercial and home chefs still seek a device for forming complex cut shapes, such as a tourné, that is simple, portable, handheld, and easy to clean. In a first aspect of the invention, a cutting device is provided that holds an article stationary while a blade is traversed along a curved track or blade guide to peel or slice away material from the article. 
The article may be rotated using an indexing mechanism to re-align the article with respect to the blade and form a shaped article, such as a seven-sided tourné. The article to be cut may be a root or vegetable, such as a potato. Additionally, portions of the cutting device may be separated for storage or travel, and may have a container to protect the device as well as to keep the separated pieces together. In a second aspect of the invention, a cutting method includes the steps of fixing an article to be cut to a base portion of a cutting device and to a cutting portion of the cutting device, where the cutting portion includes a blade guide or track. After fixing the article, a blade is moved along the blade guide from a start position to slice through a portion of the article with a first pass of the blade along the blade guide. The article is indexed to a new cutting position, such as by rotating, and the blade is returned to its start position within the blade guide in preparation for a second cutting pass along the blade guide. Preferably, indexing the article is accomplished by disengaging an engagement mechanism that fixes the position of the article relative to the cutting portion and rotating the article to a new position. Preferably, the engagement mechanism has seven indexed positions to permit cutting portions from the article to form a seven-sided tourné. The base portion may be handheld when carrying out the cutting method.
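The seven-position indexing described above amounts to rotating the article by equal angular steps between cuts. A purely illustrative sketch (the function and the assumption of equal angular spacing are mine, not language from the patent):

```python
def index_angles(sides=7):
    """Rotation (in degrees) of the article at each indexed cutting
    position, assuming the n indexed positions are equally spaced."""
    step = 360.0 / sides  # angular spacing between adjacent cuts
    return [round(i * step, 2) for i in range(sides)]

# A seven-sided tourne needs seven cuts, roughly 51.43 degrees apart.
print(index_angles(7))
```

For seven sides this gives positions at 0°, 51.43°, 102.86°, and so on around the article.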
Novel features and advantages of the present invention in addition to those mentioned above will become apparent to persons of ordinary skill in the art from a reading of the following detailed description in conjunction with the accompanying drawings, wherein similar reference characters refer to similar parts. As shown in the drawings, article 20 can be fixed relative to cutting portion 16 and base portion 18. A second holder 30 may be a needle or double needle (a single needle is shown in the drawings). The blade guide 22 may be aligned relative to article 20 for cutting a first side of a tourné shape 10 simply by virtue of the fact that article 20 is fixed relative to the joined cutting portion 16 and base portion 18. Preferably, blade guide 22 permits blade 24 to travel along the guide without binding or other interruption to slice a side of article 20 into a predetermined shape, such as an arc in the case of the tourné shape. To prevent debris, fingers, or other foreign objects from entering the indexing mechanism, an upper mounting plate 38 and lower control plate 40 may be joined to at least partially enclose the indexing mechanism. In the illustrated embodiment, article 20 may be elevated from the upper mounting plate 38 by platform 50 to reduce friction while advancing article 20 through the rotating and slicing process. Upper mounting plate 38, lower control plate 40, screws 42, feet 44, and any portion of base 18 may comprise metal, plastic, wood, or other suitable material. The cutting device may be handheld, and its component parts may be separated for cleaning, transportation, or storage. As shown in the drawings, this stacked arrangement may also be inserted into a container 60 that may have a lid 64 with a flap 62 or other fastening or securing means. The foregoing description of the invention illustrates and describes the present invention.
Additionally, the disclosure shows and describes only the preferred embodiments of the invention, but it is to be understood that the invention is capable of use in various other combinations, modifications, and environments and is capable of changes or modifications within the scope of the inventive concept as expressed herein, commensurate with the above teachings, and/or the skill or knowledge in the art of article shaping, particularly the shaping and cutting of items. The embodiments described hereinabove are further intended to explain the best modes known of practicing the invention and to enable others skilled in the art to utilize the invention in such, or other, embodiments and with the various modifications required by the particular applications or uses of the invention. Accordingly, the description is not intended to limit the invention to the form disclosed herein. Also, it is intended that the appended claims be construed to include alternative embodiments.
Welcome to the lava tube portion of the Virtual Cave. Lava tube caves are found throughout the world in places where fluid lava has flowed over the surface. The longest and most vertically extensive lava tubes known are on the Big Island of Hawai`i. Our idealized lava tube cross-section is based on the tubes there, and most of the photos are from there. Lava tubes are found in the western U.S.A. (Washington, California, Oregon, Nevada, Idaho, New Mexico, Utah, and Arizona), the Canary Islands, Galapagos Islands, Italy, Japan, Korea, Kenya, Mexico, and many other volcanic regions. Most tubes form when fluid lava flows down the sides of volcanoes: the upper layer begins to cool, and the lava beneath continues to flow in tubular conduits beneath the surface. Due to the insulating effect of the hardened lava above, molten lava is able to travel a considerable distance underground with very little cooling. In Hawaii, lava tubes have carried fluid lavas 50 or more miles from their source! Tubes may also form when lava follows trenches or gullies on the surface, which then roof over as lava accumulates along the top edges. Lava tubes contain many features similar to those in limestone caves, such as stalactites and stalagmites, helictites, and a sort of flowstone. Most of the features in the diagram were made when the cave was active and during the early cooling stage. Secondary minerals may be deposited in the tubes later, such as gypsum or calcite crystals, but these tend to be on a much smaller scale than what you can find in limestone caves. To take a tour of the wondrous world of lava tubes, select a feature in context on our very cool Virtual Lava Tube Map (drawn by master lava tube cartographer Carlene Allred) or choose from the list above it. Not all of the items in the text list are represented on the image map. Those are shown in all capital letters, so be sure to check these newer pages out.
Start here: BIRTH of a LAVA TUBE Check out my new book on lava tubes, based on the Virtual Lava Tube, called CAVES OF FIRE: INSIDE AMERICA'S LAVA TUBES. It is both a guide to lava tube features (with many more examples of each than shown here on the website) and describes and pictures many lava tubes that you can easily find and visit in national, state, and county parks and forests. It has 128 pages with 345 color images. Now available through the National Speleological Society Bookstore. Created: August 4, 2000 Last update: December 11, 2008 Author: Dave Bunnell Reviewed by Kevin & Carlene Allred
Question from Amanda: Where and what acids are found in wine? Which wines have more acid (dry or sweet), and why, due to the climate? Explain why and how titration can be used to determine the relative acid content of wine. If you could help answer my question I would be very grateful. Answer: Hi, Amanda! Thanks for your question! I'll do my best…. The main grape acid is tartaric, a relatively strong acid, unlike those of most fruits. It's followed by malic (found in lots of fruits and vegetables), and there are trace amounts of many other acids. We have an article on wine components, including acid, at goosecross.com. Generally, white wines are higher in acid than reds, for aesthetic reasons. Sweet wines should be the highest of all, to offset the sweetness, or the wine will be cloying. Cool climates usually produce wines of higher acid than warm climates, because heat causes the sugar to go up and the acid to go down. A Chardonnay from Burgundy, France is almost always higher in acid than a Napa Valley Chardonnay because of the difference in climate. Imagine trying to ripen tomatoes in a cold climate: they will be quite tart! Titration is a simple color-change test. I've paraphrased this from a wine text: Titration is the process of determining the concentration of a substance, such as acid, in a solution by adding a carefully measured standard reagent (usually sodium hydroxide) until a reaction (change in color) occurs due to the presence of an indicator (phenolphthalein). Most home winemakers buy inexpensive kits to do this. I hope this helps you. Are you studying wine making?
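To make the titration arithmetic concrete: the standard calculation in winemaking texts expresses titratable acidity in grams per liter as tartaric acid, using tartaric acid's equivalent weight of 75 g per equivalent. A minimal sketch, with made-up example volumes:

```python
def titratable_acidity(v_naoh_ml, n_naoh, v_sample_ml):
    """Titratable acidity of a wine sample, in g/L expressed as
    tartaric acid (equivalent weight 75: molar mass ~150 g/mol, diprotic).

    v_naoh_ml   : mL of NaOH titrant used to reach the color change
    n_naoh      : normality of the NaOH solution (eq/L)
    v_sample_ml : mL of wine sample titrated
    """
    EQ_WT_TARTARIC = 75.0  # grams per equivalent
    return v_naoh_ml * n_naoh * EQ_WT_TARTARIC / v_sample_ml

# Example: a 5 mL wine sample needs 6.0 mL of 0.1 N NaOH before
# the phenolphthalein indicator turns pink.
print(titratable_acidity(6.0, 0.1, 5.0))  # 9.0 g/L
```

A result around 6-9 g/L is typical of a crisp white; this is the calculation the inexpensive home-winemaking kits perform for you.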
The basics of heat stress When the thermometer rises, it can, and often does, create a multitude of problems. Anyone, given the right (or wrong) conditions, can get heat stress. Some are lucky enough to suffer only from heat cramps, while those who are less fortunate may be laid up by heat exhaustion or devastated by heat stroke. As the long, hot days of summer approach, it is helpful to review the effects of warm weather on the human body, the illnesses that may result and what you can do. How the body stays cool Unknowingly, you constantly engage your body in the life-and-death struggle to disperse the heat it produces. If allowed to accumulate, this heat would quickly increase your body temperature beyond its comfortable 98.6°F. This does not normally happen because your body is able to lose enough heat to maintain a steady temperature. You become aware of this struggle for heat balance during hard labor or exercise in hot environments, when your body produces heat faster than it can lose it. Under certain conditions, your body may build up too much heat, your temperature may rise to life-threatening levels, and you may become delirious or lose consciousness. This is called heat stroke, and it is a serious medical emergency. If you do not rid your body of excess heat fast enough, it cooks the brain and other vital organs. It often is fatal, and those who survive may have permanent damage to their vital organs. Before your temperature reaches heat-stroke levels, however, you may suffer heat exhaustion with its flu-like symptoms; treating its symptoms helps you avoid heat stroke. How does your body dispose of excess heat? Humans lose heat largely through their skin, similar to how a car loses heat through its radiator. Exercising muscles warm the blood, just as a car's hot engine warms its radiator fluid.
Warm blood travels through the skin's dilated blood vessels, losing heat by evaporating sweat to the surrounding air, just as a car loses engine heat through its radiator. When blood delivers heat to the skin, two of the most important ways the body loses heat are radiation and evaporation (vaporization of sweat). When the temperature is 70°F or less, the body releases its heat by radiation. As environmental temperatures approach your body temperature, you lose less heat through radiation. In fact, people working on hot summer days actually gain heat through radiation from the sun. This leaves evaporation as the only way to effectively control body temperature. Water loss Your body is about half water. You lose about 2 quarts every day (breathing, urinating, bowel movements and sweat). A working adult can produce 2 quarts of sweat per hour for short periods and up to 15 quarts per day. Because the body's water absorption rate of 1.5 quarts per hour is less than the body's 2-quarts-per-hour sweat rate, dehydration results. This happens because you cannot drink enough water to keep up with your sweat losses. If you drink only when you are thirsty, you are dehydrated already. Thirst is not a good guide for when to drink water. In fact, in hot and humid conditions, you may be so dehydrated by the time you become thirsty that you will have trouble catching up with your fluid losses. One guideline regarding your water intake is to monitor your urine. You are getting enough water if you produce clear urine at least five times a day. Cloudy or dark urine, or urinating less than five times a day, means you should drink more. In the Gulf War, American armed forces followed the practice of the Israeli army: drinking a minimum of 1 quart of fluid per hour. This tactic resulted in zero deaths from heat illness.
In contrast, during the Six Day War of 1967, more than 20,000 Egyptian soldiers died, with no visible wounds, most likely from dehydration and heat illness because they were restricted to 3 quarts daily. While working in hot weather, drink 8 ounces of water every 20 minutes. Generally, 16 ounces is the most a person can comfortably drink at once. You cannot "catch up" by drinking extra water later because only about 1 quart of water per hour can pass out of the stomach. Therefore, if possible, workers should begin drinking water before they start work. Cool water (50°F) is easier for the stomach to absorb than warm water, and a little flavoring may make the water more tasty. The best fluids are those that leave the stomach fast and contain little sodium and some sugar (less than 8 percent). You should avoid coffee and tea because they contain caffeine, a diuretic that increases water loss through urination. Alcoholic beverages also dehydrate by increasing urination. Soda pop contains about 10 percent sugar and, therefore, your body does not absorb it as well as water or commercial sports drinks. The sugar content of fruit juices ranges from 11 to 18 percent and has an even longer absorption time. Commercial sports drinks contain about 5 to 8 percent sugar. Electrolyte loss Sweat and urine contain potassium and sodium, which are essential electrolytes that control the movement of water in and out of the body's cells. Many everyday foods contain these electrolytes. Bananas and nuts are rich in potassium, and most American diets have up to 10 times as much sodium as the body needs. Getting enough salt is rarely a problem in the typical American diet. In fact, most Americans consume an excessive amount of sodium, averaging 5 to 10 grams of sodium per day, although we probably require only 1 to 3 grams. Therefore, sodium loss is seldom a problem, unless a person is sweating profusely for long periods and drinking large amounts of water.
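The fluid arithmetic cited earlier (sweat losses of up to 2 quarts per hour against a stomach absorption limit of about 1.5 quarts per hour) implies a deficit that grows with every hour of heavy work. A minimal sketch using those rates from the text:

```python
def fluid_deficit_quarts(hours, sweat_rate=2.0, absorb_rate=1.5):
    """Net fluid deficit (quarts) after `hours` of heavy work in the
    heat, assuming the worker drinks as fast as the stomach can absorb
    (1.5 qt/h) while sweating at 2 qt/h, per the rates in the text."""
    return max(0.0, (sweat_rate - absorb_rate) * hours)

print(fluid_deficit_quarts(4.0))  # 2.0 quarts behind after four hours
```

This is why pre-hydrating before the shift matters: the deficit can only be paid down during breaks, not while working flat out.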
Commercial sports drinks can be useful if you are participating in vigorous physical activity for longer than 1 hour (some experts say longer than 4 hours). Most of the time, however, people merely require water to remain hydrated. The truth is that excessive sodium can draw water out of the body's cells, accentuating the dehydration. In addition, drinking large amounts of water (more than 1 quart an hour) can cause water intoxication, a condition that flushes electrolytes from the body. Frequent urination and behavior changes (irrationality, combativeness, coma, seizures, etc.) are signs of water intoxication. Effects of humidity Sweat can only cool the body if it evaporates. In dry air, you will not notice sweat evaporating. However, sweat cannot evaporate in high-humidity conditions; it just drips off the skin. At about 70-percent humidity, sweating is ineffective in cooling the body. Because humidity can significantly reduce evaporative cooling, a highly humid but mildly warm day can be more stressful than a hot, dry one. Therefore, the higher the humidity, the lower the temperature at which heat risk begins, especially for those who are generating heat with vigorous work. Who is at risk? Everyone is susceptible to heat illness if environmental conditions overwhelm the body's temperature-regulating mechanisms. Heat waves can set the stage for a rash of heat-stroke victims. For example, during the 1995 summer heat wave in Chicago, the death toll reached 590. People who are obese, chronically ill or alcoholics have an increased risk. The elderly are at higher risk because of impaired cardiac output and decreased ability to sweat. Infants and young children are susceptible to heat stroke as well. The fluid loss and dehydration resulting from physical activity put outdoor laborers at particular risk.
Certain medications predispose individuals to heat stroke, such as drugs that alter sweat production (antihistamines, antipsychotics, antidepressants) or interfere with thermoregulation. Heat illnesses Several disorders exist along the spectrum of heat illnesses. Heat cramps, heat exhaustion and heat stroke are on the more serious side of the scale, whereas heat syncope, heat edema and prickly heat are less serious (see "Heat illnesses," page C 18). Only heat stroke is life-threatening. Untreated heat-stroke victims always die. * Heat cramps are painful muscular spasms that occur suddenly. They usually involve the muscles in the back of the leg or the abdominal muscles. They tend to occur immediately after exertion and are caused by salt depletion. Victims may be drinking water without adequate salt content. However, some experts disagree because the typical American diet is heavy with salt. * Heat exhaustion is characterized by heavy perspiration with normal or slightly above-normal body temperatures. A depletion of water or salt, or both, causes this condition. Some experts believe severe dehydration is a better term because it happens to workers who do not drink enough fluids while working in hot environments. Symptoms include severe thirst, fatigue, headache, nausea, vomiting and diarrhea. The affected person often mistakenly believes he or she has the flu. Uncontrolled heat exhaustion can evolve into heat stroke. * Heat stroke is classified in two ways: classic and exertional. Classic heat stroke, also known as the "slow cooker," may take days to develop. This condition is prevalent during summer heat waves and typically affects poor, elderly, chronically ill, alcoholic or obese persons. Because the elderly often have medical problems, heat stroke exacerbates the problem, and more than 50 percent of elderly heat-stroke victims die, even with medical care. Death results from a combination of a hot environment and dehydration.
Exertional heat stroke also is more common in the summer. You see it frequently in athletes, laborers and military personnel who sweat profusely. Known as the "fast cooker," this condition affects healthy, active individuals who strenuously work or play in a warm environment. Exertional heat-stroke victims usually are sweating when stricken, while the classic victims are not sweating. Its rapid onset does not allow enough time for severe dehydration to occur. Because uncontrolled heat exhaustion can evolve into heat stroke, you should know how to tell the difference between them. If the victim feels extremely hot when touched, suspect heat stroke. Another mark of heat stroke is that the victim's mental status (behavior) changes drastically, ranging from being slightly confused and disoriented to falling into a coma. In between these conditions, victims usually become irrational, agitated or even aggressive and may have seizures. In severe cases, the victim can go into a coma in less than 1 hour. The longer a coma lasts, the lower the chance for survival, so rescuers must be quick. A third way of distinguishing heat stroke from heat exhaustion is by rectal temperature. Obviously, this is not very practical because conscious heat-stroke victims may not cooperate. Taking a rectal temperature can be embarrassing to both victim and rescuer. Moreover, rectal thermometers are seldom available, and the whole procedure of finding the appropriate thermometer and then using it wastes time and distracts from important emergency care. In most cases, an ambulance arrives within 10 to 20 minutes. * Heat syncope, in which a person becomes dizzy or faints after exposure to high temperatures, is a self-limiting condition. Victims should lie down in a cool place when it occurs. Victims who are not nauseated can drink water. * Heat edema, which is also a self-limiting condition, causes ankles and feet to swell from heat exposure.
It is more common in women unacclimated to a hot climate. It is related to salt and water retention and tends to disappear after acclimation. Wearing support stockings and elevating the legs often helps reduce swelling. * Prickly heat, also known as a heat rash, is an itchy rash that develops on skin that is wet from sweating. Dry and cool the skin. Cooling methods Sometimes the only way to stop possible damage is to cool the victim as quickly as possible. However, it is important to pay attention to both the cooling methods and cautions. * Ice baths cool a victim quickly but require a great deal of ice, at least 80 pounds, to be effective. Needing a big enough tub also limits this method. Cool-water baths (less than 60°F) can be successful if you stir the water to prevent a warm layer from forming around the body. This is the most effective method in highly humid conditions (greater than 75-percent humidity). * Spraying the victim with water combined with fanning is another method for cooling the body. The water droplets act as artificial sweat and cool the body through evaporation. However, this method is not effective in high humidity (greater than 75 percent). * Ice bags wrapped in wet towels and placed against the large veins in the groin, armpits and sides of the neck also cool the body, though not nearly as quickly as immersion. Cautions to remember when employing any cooling method include: * Do not delay the onset of cooling while waiting for an ambulance. Doing so increases the risk of tissue damage and prolonged hospitalization. * Stop cooling when the victim's mental status improves to avoid hypothermia. * Do not use rubbing alcohol to cool the skin. It can be absorbed into the blood, causing alcohol poisoning. Its vapors are a potential fire hazard. * Do not use aspirin or acetaminophen. They are not effective because the brain's control-center temperature is not elevated as it is with fever caused by diseases.
Adjusting to heat Most heat illnesses occur during the first days of working in the heat. Therefore, acclimation (adjusting to the heat) is the main preventive measure. To better handle the heat, the body adjusts by decreasing the salt content in sweat and increasing the sweating rate. Year-round exercise can help workers prepare for hot weather. Such activity raises the body's core temperature so it becomes accustomed to heat. Full acclimation, however, requires exercise in hot weather. You can do this by exercising a minimum of 60 to 90 minutes in the heat each day for 1 to 2 weeks. The acclimated heart pumps more blood with each stroke than a heart unused to working in the heat. The acclimated body also begins sweating earlier and doubles the amount of sweat per hour, from 1.5 quarts to 3 quarts or more. When new workers are exposed to hot weather, team them with veterans of the heat who know how much water to drink. Heat illnesses are avoidable. With knowledge, preparation, fluid replacement and prompt emergency care, heat casualties need not be a factor for those working in warm weather. Dr. Alton Thygerson is a professor of health science at Brigham Young University, Provo, Utah. He also serves as the technical consultant for the National Safety Council's First Aid Institute. © 2013 Penton Media Inc.
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 14699 }
Contemporary full brown calf. Small paper spine label. Boards triple ruled in blind. Edges speckled red. Boards and joints rubbed and bumped. Head and tail of the spine chipped. Some toning and browning throughout, but mainly to preliminary and final leaves. Leaves A2 and A3 with some chipping along fore-edge, not affecting text. A bit of marginal worming, not affecting text. Previous owner's old ink signature on title-page and some instances of marginalia and text corrections in the same hand. Overall a very good copy. “Bacon’s major contribution to the development of science lies in his natural philosophy, his philosophy of scientific method, and in his projects for the practical organization of science. During the last years of his life, he expounded these ideas in a series of works, of which the Twoo bookes was the first. The only work Bacon ever published in English, it was later expanded and latinized into De augmentis scientiarum (1623). In the Twoo bookes, Bacon concerned himself primarily with the classification of philosophy and the sciences and with developing his influential view of the relation between science and theology. While preserving the traditional distinction between knowledge obtained by divine revelation and knowledge acquired through the senses, Bacon saw both theoretical and applied science as religious duties, the first for a greater knowledge of God through his creation, and the second for the practice of charity to one’s fellows by improving their condition. This view of science as a religious function maintained its authority throughout the seventeenth and early eighteenth centuries, and was an important factor in the public success of the scientific movement” (Norman Library). Gibson 82. STC 1165. HBS # 65822 $850
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 1774 }
Gallery: Huntington State Hospital fire, Nov. 26, 1952 On Nov. 26, 1952, a ward building at Huntington State Hospital, now Mildred Mitchell-Bateman Hospital, caught fire, killing 17 people. Some readers may find these images (and the news story below) disturbing. Established on a 33-acre tract in 1899 as the West Virginia Asylum, the state-supported hospital has evolved from the dark ages when as many as 1,800 patients were crammed into open wards and the staff included a superintendent and seven or eight physicians. Today, it is a 90-bed facility. In 1958, West Virginia had 5,500 patients placed in a half-dozen mental health facilities scattered over the state. Because of improved treatment and drugs, that number had decreased to 2,400 in 1976. The Hartley Act, which was brought on by a class action suit, caused more dramatic changes in the 1980s. The act mandated that all state hospitals eliminate open wards and provide only two-bed, semi-private rooms for mentally ill patients. West Virginia now operates just two psychiatric care hospitals, Huntington and Weston, W.Va. In 1999, Gov. Cecil Underwood announced that Huntington State Hospital would be renamed the Mildred Mitchell-Bateman Hospital in honor of the retiring doctor and her dedication to bringing mental health issues to the state's attention. This old Associated Press story about the fire was found on the GenDisasters website. HUNTINGTON HOSPITAL FIRE KILLS 14 PATIENTS. NINE WOMEN, 5 YOUNG GIRLS ARE VICTIMS. ATTENDANTS ARE CREDITED WITH 'HEROIC JOB' BY FIREMEN. Huntington, Nov. 26 -- (AP) -- Fourteen women and children perished tonight in a fierce blaze that swept a three-story building at the Huntington State Hospital, a mental institution. President Joe F. Burdett of the State Board of Control, which supervises the state's institutions, announced the death toll as complete and official. Two hours after the fire roared through the 56-year-old building, Burdett said the blaze took 15 lives. 
After a thorough recheck, he brought the figure down one. Five of the victims were young girls, 15 or younger. The others were women, the oldest 89. There were about 275 patients in the three-story brick structure. The fire broke out in the basement shortly after 7 p.m. and burned for about two hours. The flames were confined to the first two floors but the thick acrid smoke played havoc with the youngsters trapped on the top level. Firemen had to cut through heavy wire mesh with acetylene torches to get inside the building when the front entrance became an inferno. The screaming patients had to be removed by means of an old wrought iron circular stairway at the rear of the building. The rescuers couldn't use stretchers on the narrow escapeway, so they bundled the patients -- some alive, some dead -- in blankets and carried them down on their shoulders. Fire Capt. C. C. Martin credited attendants on duty with a "heroic job" in getting most of the patients out of the building. He said they tripped the latches on the ward doors so the patients could flee by themselves. The kitchen, one of several buildings nearby, was turned into a hasty receiving station for the screaming, weeping, vomiting victims. One reporter called it a "sorry sight." The patients were sprawled on the kitchen floor, some of them dead, most with only a blanket covering them, reeking with the strong smell of smoke. One fireman said the blaze started in the basement. A staff physician who refused to be quoted said some of the patients sometimes went to the basement to smoke during off hours, which was against the rules. The hospital, built in 1896, has been under recent scrutiny both through the press and the state legislature for its condition. "I know it's too late to say this," Burdett said, "but we submitted to the budget director a recommendation for one million dollars for fireproofing all this -- ward buildings one, two, three and four." It was ward building four which burned. 
Burdett said the requested appropriation was cut out somewhere along the line in the last legislative session. A new building was being constructed on the grounds nearby which was to house the patients in the structure which burned. They would have been transferred into the quarters within a few weeks. Two other fires have occurred at the state hospital within the last two years -- one in a third floor sewing room and the other in a basement storage bin. Both were extinguished quickly. "The same situation exists at Spencer and Weston State (two other state mental institutions). Recommendations for fireproofing those two hospitals and Huntington have been approved by the budget director for submission to the 1953 legislature." He added that part of Spencer and Weston State Hospitals already have been fireproofed. Two elderly women were listed as in serious condition from burns. The only other person listed as injured was a Huntington fireman who suffered a broken foot when a battering ram fell on him. All three members of the State Board of Control were at the hospital during cleaning-up operations. They are, besides Burdett, L. Steele Trotter, treasurer, and Dell White, Secretary. State Fire Marshal C. A. Raper, also at the scene, said he had not had a chance to make an inspection or estimate of the damage. Here's List of Dead In Huntington Fire. Huntington, Nov. 26 -- (AP) -- Here is the list of dead in tonight's fire at Huntington State Hospital as released by Chief of Detectives Herman A. Frazier of the city police. ADA CARVER, 89, Huntington. JOYCE TUCKER, 20, Fairmont. ELIZABETH BRIGHT, 31, Wellsburg. EVANGELINE ELZY, 15, Dunbar. PATRICIA LONG, 15, Sutton. LENA WENTZ, 11, Cabell County. LILLIAN GOULD, 36, Huntington. GERALDINE CURRY, 26, Mingo County. CASSIE SUMMERFIELD, 44, Huntington. AVANELE KEIFER, 15, Huntington. ETHEL MUNDAY, 68, Charleston. HELEN FINDLEY, 33, Sistersville. PATRICIA CLARK, 14, Vallscreek, McDowell County. 
MADALINE PRESTON, 24, Maidsville, Monongalia County.
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 5997 }
There are many aspects to learning the creation of interactive fiction. Here we mostly undertake to explain approaches to using Inform, and leave the larger questions of craft and design for elsewhere. The two manuals There are two interlinked manuals built into every copy of the Inform application: if you've downloaded Inform, you already have them. But they are also available to read or download separately from this website. Writing with Inform is an overview of the language, beginning with the simplest kinds of construction (such as building a map of rooms, objects, and doors) and working its way up to more advanced tasks. It is meant to be read more or less sequentially, since later chapters build on the ideas in earlier ones; though some of the late chapters (such as those covering numbers, activities, or advanced text) might reasonably be read out of order. The Recipe Book approaches the problem of authorship from a different perspective. Instead of trying to teach the language from start to finish, it is organized for the author who wants to accomplish something specific, such as asking the player's name at the start of play or implementing a system of measured liquids. It shares the same set of examples that are keyed to Writing with Inform, but organizes them into a new order and accompanies them with text about design problems in creating interactive fiction, rather than explanation of language features. Following requests from partially sighted Inform users, we've also made two plain vanilla versions of the manual available - they have as little decoration or web design as possible, which means less clutter for screen-reading software to cope with. We offer a choice of: Minimally tagged HTML provides an archive containing the pages of the manuals and examples as vanilla-flavoured HTML files. Writing with Inform in plain text format is just what it claims to be - one single file containing only text, with no marking-up of any kind. 
This contains all of the examples, following the text in numerical order, but not the Recipe Book. (The whole idea of two interleaved manuals can't really be achieved in one flat text file.) We receive occasional questions about publishing a printed form of the manuals. The answer is that we intend to do exactly that, in due course, but that we expect the current text will be revised wholesale once the system is more mature. (The same thing happened with Inform 6, with the appearance of the printed Designer's Manual in 2001 essentially marking the end of its design cycle.)
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 2555 }
From: Rolf Furuli (email@example.com) Date: Sun Sep 28 1997 - 04:29:51 EDT Rod Decker wrote: EIS TON AIWNA (or the plural, EIS TOUS AIWNAS) is a common NT idiom; I had assumed that this reflected the Jewish concept of the age to come, but was somewhat surprised to note that BAGD lists a number of classical references to the same phrase (p. 27). Any observations as to how extensive this phrase is in classical literature and/or the relationship of the classical use to the Jewish and/or biblical (Neither TDNT nor NIDNTT have much to say about AIWN in classical--less than they usually provide--and neither one comments on the idioms EIS TON AIWNA.) From the posts of Michael, Carl and Will it appears that Classical Greek contributes little to the use of AIWN in the NT. I would like to give some comments from the OT. The Hebrew verb ʿalam has the basic meaning "to conceal, to hide". Thus the noun ʿolam came to mean "a time span with no fixed endpoint"; in some cases it was used absolutely with the meaning "eternity", but in most cases it was used of a shorter and, at the point of writing, indefinite time period. In Aramaic the noun ʿalam was used with the same meaning as in Hebrew, but in time it was also used for "the environment in which people live" (world) or for "people". In the NT the two words KOSMOS and AIWN have some similarity. There is no Hebrew/Aramaic word corresponding to KOSMOS, but it seems that different sides of the Aramaic ʿalam were cultivated in the words KOSMOS and AIWN. (1) KOSMOS and AIWN are clearly different. In Matt 13:38,39 KOSMOS is "the world" (of mankind) but AIWN is the "world-age" (I like Carl's "world-age" because "age" is too weak). When KJV has "end of the world" in v 39, the only possible conclusion is that the harvest is the end of the field, which is not what the writer intended to convey. 
According to John 17:15 Jesus did not request his father to take his followers EK TOU KOSMOU (which would mean a translation to heaven), but it seems that according to Gal 1:4 the followers of Jesus already had experienced a deliverance EK TOU AIWNOS TOU ENESTWTOS just as people were admonished to get saved from THS GENEAS THS SKOLIAS TAUTHS (Acts 2:40) (2) To use the English word "world" to signal the concept behind Greek KOSMOS is excellent (Even TEV uses "world" in 96% of the instances). I find the following principal sides of the concept KOSMOS illuminated: (a) "the human family" (John 3:16), (b) "the human family separated from the Church" (John 17:14), (c) "the environment in which the human family lives" (John 16:21), (d) "the universe" (Acts 17:24) (this is the only place where I find this classical meaning), and (e) "adornment" (1 Pet 3:3). Nowhere in the NT is it said that a new "KOSMOS" will come; the human family is in all generations the same. If these points be correct, the Greek KOSMOS has acquired and cultivated the "world/people"-side of the Aramaic ʿolam. (3) While KOSMOS is stable and "local", AIWN is changing, the word being applied to different ages characterized by different circumstances and things. In many instances in the NT, AIWN has the normal Hebrew and Aramaic meaning "eternality" (1 Tim 1:17; 1 Pet 5:10) without any extra stress. In other instances, the element of "a shorter indefinite time" is stressed, and in these cases, not only is time as an abstract element focussed upon, but rather a period of time which has a particular stamp, which is characterized by something. The term "age" just covers "time" and is too weak. Christians are hardly taken out of "an age" (Gal 1:4) but may be taken out of a "wicked age-system" or a "wicked world-age". And similarly, Christians will not in the future experience a "new age" (Luke 20:35) but a new period characterized by completely different things. 
So the time element is evident in this use of AIWN but the QUALITY of life experienced in the particular AIWN is more important. This use of AIWN seems to be somewhat different from the biblical use of ʿolam/ʿalam (Hebrew/Aramaic), and more in line with the later rabbinical use of the word. The conclusion therefore seems to be that the NT use of AIWN is not rooted in the Classical use of the word but rather in the Hebrew/Aramaic use. While KOSMOS is more "local", related to the human family, AIWN is more transitory, being applied to shorter or longer periods of time with particular qualities and characteristics. A translation should always use different English words to represent KOSMOS and AIWN. University of Oslo This archive was generated by hypermail 2.1.4 : Sat Apr 20 2002 - 15:38:30 EDT
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 4594 }
Milestones: First Intelligible Voice Transmission over Electric Wire, 1876. Alexander Graham Bell called out to his assistant Thomas Watson, “Mr. Watson, come here! I want to see you.” This transmission took place in their attic laboratory located near here at 5 Exeter Place. The milestone plaque may be viewed close to Lafayette Place, near where Avenue de Lafayette and Essex Streets intersect. There is an existing historical marker close by commemorating the site of the original laboratory where Watson and Bell constructed their telephone equipment at 109 Court Street. A pioneer in the field of telecommunications, Alexander Graham Bell was born in 1847 in Edinburgh, Scotland. He moved to Ontario, and then to the United States, settling in Boston, before beginning his career as an inventor. Throughout his life, Bell had been interested in the education of deaf people. This interest led him to invent the microphone and, in 1876, his “electrical speech machine,” which we now call a telephone. The first transmission of voice over electric wires was from Alexander Graham Bell to his laboratory assistant Thomas Watson on March 10, 1876. This historic event was marked by Bell’s famous phrase, “Mr. Watson, come here! I want to see you.” This first telephone transmission took place at the Bell & Watson laboratory located at 5 Exeter Place in Boston. News of Bell’s invention quickly spread throughout the country, even throughout Europe. The first long distance telephone call was made on August 10, 1876 by Bell from the family home in Brantford, Ontario to his assistant located in Paris, Ontario, ten miles away. By 1878, Bell had set up the first telephone exchange in New Haven, Connecticut. Long distance connections were made between Boston, Massachusetts and New York City by 1884 (the year IEEE was founded). 
In 1876, Bell got a patent for the telephone and started the Bell Telephone Company with others in July, 1877. Two years later, this company joined the New England Telephone Company to form the National Bell Telephone Company. In 1880, they established the American Bell Telephone Company, and in 1885, American Telephone and Telegraph Company (AT&T), still a large enterprise today. Electric communication has had a long evolving history. It began with early telegraph inventions by Wheatstone, Morse, Hughes, Henry, and has continued with the submarine cable, and with pioneers like Marconi and Popov. These early telegraphic innovations and Marconi’s wireless system were improvements in the way people communicated with each other. Yet, the invention of the telephone was a quantum leap over all previous technologies. By allowing individuals to communicate remotely and instantly from the safety of their home, the telephone has had global, pervasive, and profound impacts on mankind.
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 2887 }
The history of the Irish Soft Coated Wheaten Terrier has been somewhat obscured by its closeness to the other Irish Terrier breeds. The Wheaten is probably the oldest of the four breeds. Its existence for at least 200 years can be inferred from textual references to "soft-coated" dogs. The relation of the modern Irish Terrier to the Wheaten, though less well documented, appears to have been the result of deliberate breeding experiments. So the humble Wheaten probably has a fairly mixed ancestry. Despite the long history of the Wheaten, it wasn't until 1937 that the Soft Coated Wheaten was officially recognised by the Irish Kennel Club. The breed has grown steadily in popularity since and is now well known world-wide. FCI Standard No. 40 Origin – Ireland Wheaten Terriers were always used by small farmers to kill vermin or help with the work about the farm. They were used for a long time in the difficult job of hunting badgers and otters. CLASSIFICATION F.C.I.: Group 3 Terriers Section 1 Large and medium sized Terriers Without working trial. IRISH CLASSIFICATION: Terrier Group GENERAL APPEARANCE: A hardy, active, short coupled dog, well built, giving the idea of strength. Not too leggy nor too low to the ground. BEHAVIOUR/TEMPERAMENT: Spirited and game. Good tempered. 
Most affectionate and loyal to his owners. Most intelligent. A trusty, faithful friend, defensive without aggression. HEAD: In general powerful without being coarse. Long, in good proportion to the body. Hair same colour as on body. Skull: Flat and clean between ears, not too wide. Nose: Black and well developed. Muzzle: Foreface not longer than skull. Jaws: Jaws strong and punishing. Teeth: Teeth large, regular; scissor or level bite, (i.e. edge to edge) neither undershot nor overshot. Cheeks: Bones not prominent. Eyes: Dark, dark hazel, not too large, not prominent, well placed. Ears: Small to medium, carried in front, level with skull. Dark shading on base of ear allowed, and not uncommon, accompanied by a light wheaten coloured overlay. This is the only area of the dog where under-coat is allowed. "Rose" or "Flying" ears are objectionable. NECK: Moderately long and strong but not throaty. BODY: Compact. Not too long. Length from withers to base of tail approximately the same as from ground to withers. Back: Strong and level with even top line. Loins: Short, powerful. Chest: Deep, ribs well sprung. TAIL: Well set, not too thick. Carried gaily but never over the back. The tail is docked so that two thirds of its original length remains assuming it is in proportion to the dog. An undocked tail is permitted. FOREQUARTERS: Shoulders fine, well laid back, muscular. Forelegs perfectly straight viewed from any angle. Good bone and muscle. HINDQUARTERS: Well developed with powerful muscle. Thighs: Strong and muscular. Hocks: Well let down, turned neither in nor out. Hind dewclaws should be removed. FEET: Small, not spreading. Toenails preferably black but varying dark colours allowed. GAIT/MOVEMENT: Straight action fore and aft, going and coming. Elbows tucked in. Side view: free, light co-ordinated movement. COAT: A single coated dog. Texture soft and silky to feel and not harsh. Young dogs excluded from this. Trimming permitted. 
Coat cut close at neck, chest and skull, and left especially long over eyes and under jaw. Whiskers encouraged. Profuse feathering on legs. Body coat trimmed to follow the outline of the dog but not sculpted. Tail trimmed close and neatly tapered. The coat at its longest not to exceed five inches (12.7 cm). Soft, wavy or loosely curled with the sheen of silk. Under no circumstances should the coat be "fluffed out" like a Poodle or an Old English Sheepdog. Dogs shown in this condition should be heavily penalised as they give a wrong impression of type and breed. Special attention is drawn to puppy coat development. Pups are seldom born with the correct coat of maturity; care must be taken when assessing this point. They go through several changes of colour and texture before developing the mature adult coat. This usually occurs between 18 months and 2½ years. Puppies are seldom born with the correct colour or texture of coat. They come reddish, greyish and sometimes clear wheaten. The masks are generally black. Sometimes there is a black streak down the centre back or black tips to the body coat. These dark markings clear away with maturity. COLOUR: A good clear wheaten of shades from light wheaten to a golden reddish hue. SIZE (Height & Weight) Height at the withers: Dogs 18-19 ins (46-48 cm). Bitches somewhat less. Weight: Dogs 40-45 lbs (18 – 20.5kg). Bitches somewhat less. Any departure from the foregoing points should be considered a fault and the seriousness with which the fault should be regarded should be in exact proportion to its degree. • Nose any colour other than black. • Undershot mouth. Overshot mouth. • Overall mature coat not clear wheaten colour. • Nervousness. Viciousness. • Yellow eyes. • Dull, thick, woolly or cottony textured hair. • White coat. Brown coat. Dogs carrying any of the above eliminating faults should never be bred from. NB. Male animals should have two apparently normal testicles fully descended into the scrotum.
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 5865 }
May 23, 2012 Survivor: Jack Seror Jack Seror didn’t know what to do. He was 25 and knew he had to leave Salonika; it wasn’t safe for Jews. And now a contact from the Greek resistance had come to fetch him. Jack stood with his parents in their living room, crying. They hugged, kissed and hugged some more. “We have to leave,” the contact said. Half of Jack wanted to stay with his parents; the other half wanted to escape. Finally, his father, with tears in his eyes, said, “Go. And remember, if you survive, to say Kaddish for us.” Jack was born Oct. 15, 1917, in Salonika, Greece, the fifth of six children of David, a milk wholesaler, and Mazeltov Seror. The family was religious. On Friday nights, after Shabbat dinner and singing, Jack recalls that his father always told a new story about a character he remembers as Johah. Jack attended an Alliance Israelite Universelle school through seventh grade. After that he worked in his uncle’s dry goods store and then for an insurance company. But in March 1940, he was drafted into the Greek army. Eight months later, Italy invaded Greece. Then, as the Greeks drove the Italians back into Albania, Jack’s unit was sent to the Bulgarian border, where the Germans were advancing. After the Germans took control of Salonika, on April 9, 1941, Jack’s unit was sent to southern Greece to continue fighting. The Greek army, however, was soon disbanded, and Jack returned home, mostly walking and occasionally riding a bus, from Thebes to Salonika. The trip took five weeks. In Salonika, Jack just tried to survive. His older brother Albert had been killed fighting the Italians. His father, no longer a milk wholesaler, was working as a deliveryman. Jack sold carob syrup. The situation worsened. On Feb. 6, 1943, Jews were ordered to wear yellow stars. On the streets, Jack witnessed Nazi round-ups. He also saw photos of cattle cars carrying Jews in a Belgian magazine that was soon confiscated from the newsstands by the Nazis. 
He told his parents the Nazis were planning to kill the Jews. His father answered, “Passover will be here in a couple of months, and God will not let us perish.” Jack didn’t believe it. In March, the Nazis enclosed the area adjacent to Salonika’s railroad station with barbed wire, calling it the Baron Hirsch camp or ghetto, and transferring Jews there. A few days later, cattle cars arrived, and on March 15, the first transport left for Auschwitz. Two days later, another transport departed. At that point, Jack knew he had to leave. Jack and his contact from the resistance picked up Jack’s sister Katy from a neighboring village, and they made their way to Grevena, a small city in the mountains of northwestern Greece. Jack’s resistance group, about 35 men, was headquartered there. Katy and the other women stayed near Grevena. Katy’s job was to sew shirts out of the parachutes used by British soldiers who were dropped into the mountainous area to assist the resistance fighters. Jack’s group trekked from village to village, from one hill to another. “We were scared. We were always thinking about what we left back home,” Jack said. But they never talked about their personal lives. Instead, everyone had a fake name, including Capt. Bourna, the leader, rumored to be a Greek army officer. Jack was Alekos Saridis. Every morning, Jack’s group did aerobic exercises, followed by chores — including fetching water, cooking the ever-present lentils, helping villagers — and then combat training. Plus, they were always watching for enemy soldiers. “We went there to survive, but we also knew we had to fight the Germans.” Jack said. Jack’s group didn’t directly encounter any Germans, though one man, sent to deliver shoes, never returned. And Jack’s younger brother Haim, in a different resistance group, was killed fighting Germans. Finally, in October 1944, the Germans retreated from Greece. 
Jack’s resistance group disbanded soon after, and he and Katy slowly made their way back to Salonika, arriving in early 1945. Jack and Katy were the only survivors in their immediate family. Overall, 96 percent of Salonika’s almost 60,000 Jews perished. Jack secured an accounting job at a social club for British troops. “It was very good to be able to be human again,” he said. There he met Katie Zinda, who worked in the gift shop. After a six-month friendship, they fell in love and decided to marry. Katie, who wasn’t Jewish, converted, taking the formal name Sarah. Jack and Katie were married on Sept. 9, 1949, with 10 people in attendance. “People were so sure the marriage wasn’t going to last that we didn’t get any presents,” Jack said. On July 9, 1950, their son David was born. Just over a year later, destitute and wanting to start over, they immigrated to the United States, settling in Boston in October 1951, where they were helped by Jewish Family & Children’s Service. Jack found temporary bookbinding work at Houghton Mifflin and also worked at a warehouse. Their son Marc was born Aug. 16, 1952. But the winters were brutal, and the family moved to Los Angeles in February 1952. Jack took a warehouse job for a year and then worked for a calendar company. In 1959, he and Katie purchased a small grocery store, Quinn’s Market, near Glendale. In 1966, they sold it and purchased another grocery store in Venice. “We worked hard, six and sometimes seven days a week,” Jack said. They sold the store in 1979. Jack and Katie also worked hard for Sephardic Temple Tifereth Israel. “Jewish Family Service was very good to us. We wanted to pay back for what the Jewish people did for us,” Jack said. Katie died in May 2010. Today Jack, 94 and legally blind, walks, listens to tapes from the Braille Institute and visits with his grandchildren every week. 
He also travels by bus every Saturday from his Culver City home to the Westside Pavilion, where he visits with other Greek survivors. Of the original group of 30, four remain. “I am thankful for what we accomplished,” Jack said.
{ "model": "Alibaba-NLP/gte-large-en-v1.5", "type": "fast_golden_8k", "length": 5978 }
This book takes a deep look at appearance and evolution. Why is society so obsessed with beauty? We spend billions of dollars on cosmetics, plastic surgery and diet foods, and still we're never satisfied. Blame Madison Avenue if you want. But psychologist Nancy Etcoff contends our tireless pursuit of beauty reflects something more primal - the workings of a basic instinct. In her book "Survival of the Prettiest: The Science of Beauty," Etcoff challenges the notion that the media has created our cultural fascination for good looks. Instead she argues that there are evolutionary reasons why cultures - not just in the United States - place so much value on appearance. Quite simply, she says, throughout human history both sexes have been attracted to people they are most likely to reproduce with. And physical characteristics (like bulging muscles or fatty tissue) indicate to potential mates the status of one's health and fertility. Thus we have a culture focused on the superficial. "If our ancestors did not have radar for healthy, fertile bodies, we'd have become biological dead ends long ago," Etcoff writes. It seems a simplistic explanation for why the StairMaster is so popular. But through a series of global scientific studies, Etcoff - a faculty member at Harvard Medical School - presents a compelling argument for why so many cultures are influenced by beauty. The studies are fascinating because they touch on controversial cultural taboo topics that most of us have at least thought about: Are parents more affectionate toward cute newborns? Are handsome men promoted faster in the workplace? Do husbands and wives tend to resemble each other? Etcoff forgives our vanity by demonstrating that beauty isn't just in the eye of the beholder, it's part of the beholder's biology. Etcoff begins her book by debunking the perception that young children are immune to society's definition of beauty. 
She cites a study that showed infants will stare longer at photos of people who are considered attractive by adults. Another study Etcoff cites showed that people in one-third of the non-Western and non-North American countries placed more importance on the looks of their mates than did college students in the United States. That runs counter to the belief that Americans place more emphasis on beauty than other cultures. Another study indicates that there may be some innate definition of beauty. People in Brazil, Russia, Venezuela, Paraguay, and the United States "were attracted to similar geometric proportions in the face. They liked females with small lower faces (delicate jaws and relatively small chins) and eyes that were large in relation to the length of the face." The book includes other studies as well - such as whether beautiful people are happier (no) and whether good lookers have advantages in the workplace (sometimes). All of the studies are a great counterpoint to the likes of "Vogue," especially for anyone who has ever pondered why blondes have more fun.

"Survival of the Prettiest: The Science of Beauty" by Nancy Etcoff
A while back, in response to a reader’s question regarding storage inside the exhaust cabinet, we wrote about the fundamentals of chemical fume hoods. In that article, we discussed the basic design principles and operation of chemical fume hoods. (If your memory is like ours and needs refreshing or you require another copy, just let us know.) Since exhaust hoods are among the major expense items for research laboratories and have a huge impact on continuing operational costs, we’ve decided to provide you with information on some of the newer hood designs that offer good performance and energy conservation. Laboratory exhaust systems fall into three main classes: chemical fume hoods, for working with corrosive acids and bases, volatile solvents, and other hazardous chemicals; biological safety hoods, which can be designed to protect the work (clean-air bench) or the worker (true biosafety cabinet); and standard exhaust hoods, typically used in mechanical or machine shops and their production areas. We are going to limit this discussion to the first category, the chemical fume hood, since this is the cornerstone of most research laboratories. Laboratory fume hoods are designed to protect the worker by containing and exhausting harmful or toxic fumes, gases, or vapors emitted by chemicals used in the hood. A typical fume hood has an exhaust blower mounted so that air from the room is pulled into and through the hood, creating directional airflow. The “pull” at the hood opening is termed “face velocity” and usually is measured in feet per minute (fpm). Proper face velocity of the hood is critical to the protection of the worker. Too little flow allows currents or disturbances in the laboratory air to overpower the hood and draw contaminants into the room. Too much flow can result in turbulence and eddies that also can lead to contaminants escaping the hood. Baffles and other aerodynamically designed components determine how air moves into and through the hood. 
Contaminants inside the hood are diluted with room air and exhausted to the outside via the hood’s duct system, where they are dispersed. The volume of air exhausted by the hood depends on a number of factors, the most important of which are hood size and design. With the average chemical fume hood exhausting around 750 to 1,000 cubic feet per minute of conditioned air, you can see how hoods put a large load on the laboratory’s heating, ventilating and air-conditioning (HVAC) system, thus impacting operational costs. Let’s look at some of the different chemical fume hood designs available, along with their pros and cons.

Constant air volume (CAV)

There are two basic types of laboratory fume hoods: conventional and bypass. Conventional hoods consist of a basic enclosure with a movable sash (or window). Since the face velocity, or “pull,” is a function of the total volume divided by the area of the sash opening, closing the sash on a conventional CAV hood will increase the face velocity. The conventional hood’s performance depends primarily on sash position. However, as the sash is closed, velocities can increase to the point where they disturb instrumentation and delicate apparatuses, cool hot plates and slow reactions, or create turbulence that can force contaminants into the room. Bypass hoods contain openings above the sash, in addition to an airfoil sill that will redirect the airflow as the sash is closed. The bypass openings reduce changes in face velocity to a narrow range by keeping the area for airflow equal (within the limits of the bypass) as the sash is moved up or down. Therefore, face velocities do not reach the detrimental levels often seen with conventional hoods. For this reason, bypass hoods hold a major share of the market today. Recent models of bypass hoods, called high-performance or “low-flow” hoods, display improved containment and safety features as well as energy-saving designs.
These design features vary by manufacturer but generally have one or more of the following: sash stops or horizontal-sliding sashes to limit the openings; sash position and airflow sensors that can control mechanical baffles; small fans to create an air-curtain barrier in the operator’s breathing zone; and refined aerodynamic designs and variable dual-baffle systems to maintain laminar (undisturbed, nonturbulent) flow through the hood. Although the initial cost of a high-performance hood is slightly more than that of a conventional bypass hood, the improved containment and flow characteristics allow these hoods to operate at a face velocity as low as 60 fpm, which can translate into $2,000 per year or more in energy savings, depending on hood size and sash settings.1

Reduced air volume (RAV)

In laboratory settings where the tasks may be very specific and unchanging, the reduced air volume hood (a variation of the low-flow hood) is an option to consider. This design incorporates a bypass block to partially close off the bypass, reducing the air volume and thus conserving energy. Usually, the block is combined with a sash stop to limit the height of the sash opening, ensuring a safe face velocity during normal operation while lowering the hood’s air volume. By reducing the air volume, the RAV hood can operate with a smaller blower, which is another cost-saving advantage. One downside to the RAV hood is that its restricted sash movement and reduced air volume also constrain its flexibility and narrow the realm of tasks that can be performed. Another major caution to note is the potential to override or disengage the sash stop. If this occurs, the face velocity could drop to an unsafe level. To counter this condition, operators must be trained never to override the sash stop while the hood is in use, and to do so only when loading or cleaning the hood. In addition, an airflow monitor is always recommended.
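The face-velocity relationship described above (total exhaust volume divided by the open sash area) is easy to sketch numerically. The hood width, sash heights and exhaust rate below are illustrative assumptions, not figures from any particular manufacturer:

```python
# Face velocity (fpm) = exhaust volume (cfm) / open sash area (sq ft).
# The hood geometry here is an assumed example, not a vendor specification.

def face_velocity_fpm(exhaust_cfm: float, sash_area_sqft: float) -> float:
    """Average air velocity through the sash opening, in feet per minute."""
    return exhaust_cfm / sash_area_sqft

# A CAV hood exhausting a constant 900 cfm through a 6-ft-wide sash opening:
full_open = face_velocity_fpm(900.0, 6.0 * 1.5)   # sash raised 18 in -> 9.0 sq ft
half_open = face_velocity_fpm(900.0, 6.0 * 0.75)  # sash raised 9 in  -> 4.5 sq ft

print(full_open)  # 100.0 fpm
print(half_open)  # 200.0 fpm -- halving the opening doubles the face velocity
```

This is why a conventional CAV hood's "pull" climbs as the sash is lowered, while a bypass hood keeps the total airflow area (and therefore the face velocity) roughly constant instead.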
Variable air volume (VAV)

The newest generations of laboratory fume hoods vary the volume of room air exhausted while maintaining the face velocity at a predetermined level. Variable air volume hoods change the exhaust volume using different methods, such as a damper or valve in the exhaust duct that opens and closes based on sash position, or a blower that changes speed to meet air-volume demands. Most VAV hoods integrate a modified bypass-block system that ensures adequate airflow at all sash positions. They are connected electronically to the laboratory building’s HVAC, so hood exhaust and room supply are balanced. In addition, VAV hoods feature monitors and/or alarms that warn the operator of unsafe hood-airflow conditions. Although VAV hoods are much more complex than traditional constant-volume hoods, and correspondingly have higher initial costs, they can provide considerable energy savings by reducing the total volume of conditioned air exhausted from the laboratory. Since most hoods are operated the entire time a laboratory is open, this can quickly add up to significant cost savings.

1. How to Select the Right Laboratory Hood System, Labconco Corp., Kansas City, Mo., 2003.

Chemical Fume Hood Handbook, Northwestern University, Chicago, Ill. Last revision, May 2007. http://www.research.northwestern.edu/research/ors/labsafe/hoods/index.htm

National Research Council Recommendations Concerning Chemical Hygiene in Laboratories, U.S. Department of Labor, Occupational Safety and Health Administration, Washington, DC. http://www.osha.gov/pls/oshaweb/owadisp.show_document?p_table=STANDARDS&p_id=10107
If it wasn’t for earthquakes, humans wouldn’t have innovated in architecture. They wouldn’t have looked into new ways of building homes. The problem is that we got good at it – so good that our homes won’t be destroyed frequently enough, which means they won’t evolve frequently enough either. Look around you: there is very little free space, and in the spots that remain you find big centers being erected every day – safe and resistant enough, especially to earthquakes. What on earth will take down those inefficient, dumb, primitive concrete monsters and make room for better buildings in the future?

The problem behind this is the ever-expanding gap between technology and architecture: our homes will always lag behind technology and progress – they will always be less than optimal. I can only imagine how much better the earth would be if our houses were “smart” or modern enough – it is not science fiction. The way we build is very backward, to say the least, when it comes to the materials used, energy saving, and what a home can “do”, and it is just not possible, business-wise, to say: OK, let us destroy and rebuild.

Before, nature took care of this, slowly and less painfully. As small earthquakes happened, our primitive cities got devastated; we rebuilt them in a better way, and the costs were small. We kept gradually improving until our cities became resistant to medium and high earthquakes. We have reached the point on the graph where things slow down and become stable. It is good not to have the tragedy and misery of earthquakes, but on the other hand there is the hidden and expensive cost of stability and non-progress. It is invisible and super slow, but as devastating in its effect as that two-minute tragedy called an earthquake. Our homes are costing the earth dearly and suffocating it – we need earthquakes to give engineers another, better large-scale chance.
Before I start sounding too embarrassingly enthusiastic about earthquakes and destruction, here is a link to lists of earthquakes. It includes:
- Main lists of earthquakes
- Historical earthquakes (before 1901)
- List of 20th century earthquakes (1901–2000)
- List of 21st century earthquakes (2001–present)
- Lists of earthquakes by country
- Largest earthquakes by magnitude
- Deadliest earthquakes on record
Enjoy the read!
Deep-space communication improved with electromagnetic radiation antenna

Robert C. Dye - Technology Transfer - (505) 667-3404

Electromagnetic radiation antenna has potential for deep-space communication
- Directed energy
- Long-range communications
- Medicine (oncology)
- RADAR imaging applications are countermeasure-resistant
- Communications can be spatially encrypted
- 4-dimensional volumes of energy can be aimed at a single space-time point for directed energy applications
- Nonspherical decay of the cusp enables low-power communications and propagation over great distances

Los Alamos National Laboratory (LANL) researchers have developed the Lightslinger, a completely new type of antenna that produces tightly focused packets of electromagnetic radiation fundamentally different from the emissions of conventional transmitters. The device has potential applications in RADAR, directed energy (non-kinetic kill), secure communications, ultra-long-range communications (e.g., deep-space), medicine (oncology) and astrophysics.

The Lightslinger functions by producing a moving polarization pattern in a ring of alumina. By careful timing of voltages applied to electrodes that surround the alumina, the polarization pattern can be made to move superluminally, i.e., faster than the speed of light in a vacuum. Nobel laureate Vitaly Ginzburg showed both that such superluminal polarization patterns do not violate the principles of special relativity and that they emit electromagnetic radiation. Once a source travels faster than the waves that it emits, it can make contributions at multiple retarded times to a signal received instantaneously at a distance. This effect is already well known in acoustics: when a supersonic airplane accelerates through the speed of sound, a violent “sonic boom” is heard many miles away, even if the airplane itself is rather quiet.
The Lightslinger enables the same thing to be done with electromagnetic radiation; i.e., a relatively low-power source can make an “electromagnetic boom”, an intense concentration of radio waves at a great distance. The “electromagnetic boom” is due to temporal focusing, that is, focusing in the time domain. Because of this effect, part of the emitted radiation possesses an intensity that decays with distance r as 1/r rather than as the conventional inverse-square law, 1/r^2. These nonspherically decaying wavepackets represent a game-changing technology in the applications of electromagnetic radiation.

Development stage: Working prototype
Patent status: Patent pending
Licensing status: Available for exclusive or non-exclusive licensing
Helping a friend

It is sometimes easy to overlook how much difference we can make by offering a listening ear to someone close to us who is suffering. It may seem difficult to approach someone if you can’t see their problems from the outside, or if you fear getting involved. But it is important not to ignore your concerns; by using the information below you can learn how to help without taking on personal responsibility for the problem. People will express suffering in many different ways. The key is to go with your instincts, and if you do notice significant behavioural or emotional changes in a friend, try not to ignore this. If someone has become noticeably disengaged, lethargic or unmotivated in their working or social environment, this could indicate a problem. If they are frequently ill, unkempt or drinking excessively, this could also be a sign. Furthermore, if you notice sudden mood changes, anxiety or irrational beliefs in a friend, it may be time to offer some help. Take a few steps to offer support, and if you don’t feel comfortable, speak to a caring relative or senior member of staff to explain your concerns.
- Try talking to your friend and telling them you are concerned; this opens the opportunity to discuss their issues. Try to respect if they do not wish to talk about their problems.
- Be prepared that the situation may only require sympathetic listening. That can sometimes mean it’s best to avoid practical advice.
- Identify support networks from the services listed. This will also help you to avoid taking on responsibility for the problems yourself.
- If your friend refuses help and you are still concerned, speak to someone in a specialist support service. You do not need to mention their name when asking for advice; in this way you are not breaking their confidence. There may be exceptional circumstances where there is a need to act without consent, e.g.
if health has deteriorated to the extent of threatening a person’s personal safety or that of others.
- GPs and the Local Health Service can offer practical support with the impacts of feeling troubled.
- The University Counselling Service can help people to explore feelings for decision making and moving forward.
- The Samaritans are available to talk online or on the telephone 24 hours a day.
- Self-help works very effectively for many. It’s important to be aware of unreliable information out there, but try trusted sites such as Living Life to the Full and Beating the Blues.
- The National Domestic Violence Helpline or Men's Advice Line can help with domestic violence issues.
- BEAT specialise in concerns about problems with eating.
- Citizens Advice can provide free, independent advice to anyone with legal, money or social problems.
- Relate are a service especially for those struggling with relationships.
- Swanswell provide a health and social care service, including help with problems relating to drug and alcohol misuse.
The test team views the use of a pulley as an intermediate step only, and has planned to shift to a reliance on windlasses like those that apparently were used to hoist sails on Egyptian ships. "The whole approach has been to downgrade the technology," Gharib said. "We first wanted to show that a kite could raise a huge weight at all. Now that we're raising larger and larger stones, we're also preparing to replace the steel scaffolding with wooden poles and the steel pulleys with wooden pulleys like the ones they may have used on Egyptian ships." For Gharib, the idea of accomplishing heavy tasks with limited manpower is appealing from an engineer's standpoint because it makes more logistical sense. "You can imagine how hard it is to coordinate the activities of hundreds if not thousands of laborers to accomplish an intricate task," said Gharib. "It's one thing to send thousands of soldiers to attack another army on a battlefield. But an engineering project requires everything to be put precisely into place. "I prefer to think of the technology as simple, with relatively few people involved," he explained. Gharib and Graff came up with a way of building a simple structure around the obelisk, with a pulley system mounted in front of the stone. That way, the base of the obelisk would drag on the ground for a few feet as the kite lifted the stone, and the stone would be quite stable once it was pulled upright into a vertical position. If the obelisk were raised with the base as a pivot, the stone would tend to swing past the vertical position and fall the other way. The top of the obelisk is tied with ropes threaded through the pulleys and attached to the kite. The operation is guided by a couple of workers using ropes attached to the pulleys. No one has found any evidence that the ancient Egyptians moved stones or any other objects with kites and pulleys. But Clemmons has found some tantalizing hints that the project is on the right track. 
On a building frieze in a Cairo museum, there is a wing pattern in bas-relief that does not resemble any living bird. Directly below are several men standing near vertical objects that could be ropes. Gharib's interest in the project is mainly to demonstrate that the technique may be viable. "We're not Egyptologists," he said. "We're mainly interested in determining whether there is a possibility that the Egyptians were aware of wind power, and whether they used it to make their lives better." Now that Gharib and his team have successfully raised the four-ton concrete obelisk, they plan to further test the approach using a ten-ton stone, and perhaps an even heavier one after that. Eventually they hope to obtain permission to try using their technique to raise one of the obelisks that still lie in an Egyptian quarry. "In fact, we may not even need a kite. It could be we can get along with just a drag chute," Gharib said. An important question is: Was there enough wind in Egypt for a kite or a drag chute to fly? Probably so, as steady winds of up to 30 miles per hour are not unusual in the areas where pyramids and obelisks were found. (c) 2001 Caltech
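As a rough back-of-the-envelope check (our own assumptions, not the Caltech team's published numbers), the steady pull the kite must supply shrinks with the mechanical advantage of the pulley system:

```python
# Steady line tension needed to hoist a stone through an idealized block and
# tackle (friction ignored). The mass and pulley ratio are assumed examples.

G = 9.81  # gravitational acceleration, m/s^2

def required_pull_newtons(mass_kg: float, mechanical_advantage: float) -> float:
    """Kite-line force needed to balance the stone's weight through the pulleys."""
    return mass_kg * G / mechanical_advantage

# A 4-ton (~3600 kg) obelisk lifted through an assumed 4:1 pulley arrangement:
pull = required_pull_newtons(3600.0, 4.0)
print(round(pull))  # 8829 N, i.e. roughly 900 kgf of steady line tension
```

Real losses from rope friction would raise this figure, but the sketch shows why combining wind power with even a modest pulley system brings a multi-ton stone within reach of a large kite.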
Refraction and Acceleration

Name: Christopher S.

Why is it that when light travels from a more dense to a less dense medium, its speed is higher? I've read answers to this question in your archives but, sadly, still don't get it. One answer (Jasjeet S Bagla) says that we must not ask the question because light is massless, hence questions of acceleration don't make sense. It does, however, seem to be OK to talk about different speeds of light. If you start at one speed and end at a higher one, why is one not allowed to talk about acceleration? Bagla goes on to say that it depends on how the em fields behave in a given medium. It begs the question: what is it about, say, Perspex and air that makes light accelerate, oops, travel at different speeds? If you're dealing with the same ray of light, one is forced to speak of acceleration, no? What other explanation is there for final velocity > initial velocity? Arthur Smith mentioned a very small "evanescent" component that travels ahead at c. Where can I learn more about this? Sorry for the long question. I understand that F = ma and if there is no m, you cannot talk about a, but, again, you have one velocity higher than another for the same thing. I need to know more than "that's just the way em fields are!"

An explanation that satisfies me relates to travel through an interactive medium. When light interacts with an atom, the photon of light is absorbed and then emitted. For a moment, the energy of the light is within the atom. This causes a slight delay. Light travels at the standard speed of light until interacting with another atom. It is absorbed and emitted, causing another slight delay. The average effect is taking more time to travel a meter through glass than through air. This works like a slower speed. An individual photon does not actually slow down. It gets delayed repeatedly by the atoms of the medium. A more dense medium has more atoms per meter to cause these delays.

Dr. Ken Mellendorf
Illinois Central College

Congratulations!
on not being willing to accept "that is just the way em fields are!" The answer to your inquiry is not all that simple (my opinion), but I won't try to answer it fully in the limited space allowed here, to say nothing of my own limitations of knowledge. Like so many "simple" physics questions, I find the most lucid, but accurate, explanation in Richard Feynman's "Lectures on Physics," which most libraries will have: Volume I, Chapters 31-1 through 31-6, which describe refraction, dispersion and diffraction. The "answer" has to do with how matter alters the electric field of incident radiation, but I won't pretend to be able to do a better job than Feynman.

The answer is that you are not dealing with the same ray of light. In vacuum a photon just keeps going at the speed of light. In a medium, however, it interacts with the atoms, often being absorbed while bumping an atomic or molecular motion into a higher energy state. The excited atom/molecule then can jump to a lower energy state, emitting a photon while doing so. This can obviously make light appear to travel slower in a medium. In detail, it is a very complicated question, requiring at least a graduate course in electromagnetism to begin to understand. Why, for example, do the emitted photons tend to travel in the same direction?

Best, Richard J. Plano

Update: June 2012
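The averaged delay described in these answers is exactly what the refractive index n summarizes: the effective speed in a medium is v = c/n. A small sketch (n = 1.5 is an assumed, typical value for common glass):

```python
# Average transit time for light crossing a medium, using v = c / n.
# n = 1.5 is an assumed, typical refractive index for common glass.

C = 299_792_458.0  # speed of light in vacuum, m/s

def transit_time_ns(length_m: float, n: float) -> float:
    """Time in nanoseconds to cross length_m of a medium with refractive index n."""
    return length_m * n / C * 1e9

vacuum_ns = transit_time_ns(1.0, 1.0)
glass_ns = transit_time_ns(1.0, 1.5)
print(f"{vacuum_ns:.2f} ns in vacuum, {glass_ns:.2f} ns in glass")
# The extra time in glass is the accumulated absorption/re-emission delay;
# between interactions each photon still moves at c.
```

The difference of about 1.7 ns per meter is the "slowing" being asked about: no photon ever decelerates, but the averaged effect looks like a lower speed.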
Jews and Non-Jews: Interfaith Relations

"Dialogue" is the watchword in defining relations between Jews and peoples of other religions, particularly in North America's environment of religious pluralism. The emphasis on dialogue comes as a result of years of hard work on the part of religious leaders and a growing concern about religious intolerance that has continued to brew and cause turmoil throughout the world. Leaders from the Catholic Church, for example, take a proactive role in seeking dialogue with Jewish leaders. Since the Vatican II decision of the 1960s, which formally ended the Catholic belief that Jews were responsible for Jesus' death, Catholic leaders such as Pope John Paul II have attempted to change their relationship with the Jewish people. All major archdioceses include specific offices of interreligious affairs, in which a team of priests, nuns, and educators work with members of the clergy from the Jewish (and other) faiths. These offices often play a key role in helping to create annual community-wide Holocaust memorial services on Yom Hashoah (Day of Holocaust commemoration). Jewish leaders, too, are taking an active role in facilitating dialogue with other religious groups. In the aftermath of the September 11th terrorist attacks, many Jewish leaders, along with their Christian peers, acknowledged an ignorance or misunderstanding of the Muslim religion. Chapters of the American Jewish Committee have facilitated Jewish-Muslim dialogues in conjunction with their Islamic peers. Many Jewish religious schools have added a class on Jewish-Muslim relations to their roster of high school courses. Perhaps most moving, however, were the synagogues in metropolitan areas that came forward to volunteer their services to walk members of local mosques to their cars after the 9/11 attacks, when anti-Muslim rage was spreading.
These dialogues and attempts at understanding are but rays of hope in the darkness; they do not take away the layers of misunderstanding and distrust that exist between Jews and Muslims in the Middle East and around the world. The same goes for Jewish-Christian dialogue: after two millennia of persecution, the past is not forgotten or abandoned easily. Tensions remain between Jews and Christians, but dialogue has replaced violence as the means to air these differences.
Enuresis (involuntary peeing that is abnormal for a child’s age) is one of the most common types of voiding dysfunction, and includes both nighttime wetting (nocturnal enuresis) and daytime wetting (diurnal enuresis). Children often exhibit posturing behaviors (doing the “pee-pee dance,” crossing their legs, squatting). Although it is normal for very young children to do this as they are learning to toilet train, sometimes these symptoms can continue even as the child grows older. Voiding dysfunction may cause a child to run to the bathroom frequently. Children may have to urinate every 10-30 minutes or, in less severe cases, every 1-3 hours. They will often urinate small volumes or feel the urge to urinate again soon after voiding.

What causes voiding dysfunction?

The bladder is a muscle that stores urine, and it empties by contracting the muscle. A normally functioning bladder only contracts when it is at full capacity (the normal amount of urine that it can hold comfortably) and it is time to void. When the bladder is irritable or overactive, it tends to contract at will, regardless of how much urine it is holding. It’s important for you to know that what your child is feeling is real and they do not have conscious control over it. Constipation often contributes to these symptoms of voiding dysfunction. Your child may have mild to moderate constipation without complaining, and the rectum and colon can stretch to accommodate the stool. This causes pushing on the bladder, resulting in urgency/frequency, a decrease in capacity, and incomplete emptying.

How is voiding dysfunction diagnosed and treated?
In diagnosing overactive bladder, your Nemours pediatric urology team will do a few things to rule out infection or any serious, but rare, disorder:
- thorough health history
- urinalysis and urine culture
- renal and bladder ultrasound to check for bladder and kidney abnormalities
- urine flow study (which uses a special toilet to measure your child’s voiding pattern)
- post void residual (similar to the ultrasound, this is done after voiding to make sure your child is able to empty his or her bladder completely)

We will also ask you to keep a Voiding/Bowel Diary (PDF). This diary provides invaluable information that helps our Nemours pediatric urologists assess your child’s exact voiding problem. It will tell us how frequently your child is voiding, how much their bladder is letting them hold, if there is wetting and when this wetting occurs in relation to voiding. It will also allow us to better assess their stooling pattern and assure there is no constipation. Most children will outgrow the symptoms of overactive bladder on their own without intervention, if there is no abnormality present. Your Nemours urologist may recommend some medications to relax the bladder depending on your preference and the age of your child. Addressing your child’s symptoms of overactive bladder and wetting can dramatically improve your child’s quality of life. We often see children’s nighttime bedwetting improve after their daytime symptoms are addressed.

A urine dipstick test is often done as part of an overall urinalysis, but it can also be done on its own, depending on the doctor's concerns. Once a urine sample is collected, a nurse or technician will place a specially treated chemical strip (dipstick) into your child's urine. Patches on the dipstick will change color to indicate the presence of such things as white blood cells, protein, or glucose.
Why It's Done

The results of a urine dipstick test may point to a diagnosis of urinary tract infection (UTI), kidney disease, diabetes, or a urinary tract injury. If test results are abnormal, other tests will be needed before a definite diagnosis can be made. No preparation other than cleansing the area around the urinary opening is required for the urine dipstick test. Your child will be asked to urinate into a clean sample cup in the doctor's office. If your child isn't potty trained and can't urinate into a cup, a catheter (a narrow, soft tube) may need to be inserted into the bladder to obtain the urine specimen. The skin surrounding the urinary opening has to be cleaned and rinsed just before the urine is collected. In this "clean-catch" method, you or your child cleans the skin around the urinary opening with a special towelette. The child then urinates, stops momentarily, and then urinates again into the collection container. Catching the urine in "midstream" is the goal. Be sure to wash your hands and your child's hands after this process. Once collected, the technician or nurse will then place the dipstick into the urine sample. Collecting the specimen should only take a few minutes.

What to Expect

Because the test involves normal urination, there shouldn't be any discomfort as long as your child can provide a urine specimen. It's important to keep the area around the urinary opening clean before the test and to catch the urine sample midstream.

Getting the Results

The results of the urine dipstick test will be available right away. If abnormalities are found, further urine tests will be needed. Talk to your child's doctor about the meaning of the specific test results. No risks are associated with taking a urine dipstick test. If a catheterized specimen is required, it may cause temporary discomfort; you can discuss any questions you have about this procedure with your healthcare provider.

Helping Your Child

The urine dipstick test is painless.
Explaining how the test will be conducted, and why it's being done, can help ease your child's fear. Make sure your child understands that the urinary opening must be clean and the urine must be collected midstream.

If You Have Questions

If you have questions about the urine dipstick test, speak with your doctor.
Phantom Phone Calls

ospri.net - Alleged contact with the dead has occurred universally throughout history, taking various forms: as dreams, waking visions and auditory hallucinations, either spontaneous or induced through trance. In many cultures, the spirits of the dead have been sought for their wisdom, advice and knowledge of the future. The dead also seem to initiate their own communication, using whatever means seem to be most effective. With the advent of electromagnetic technology, mysterious messages have been communicated by telegraph, wireless, phonographs and radio. A curious phenomenon of modern times is communication via the telephone. Phone calls from the dead seem to be random and occasional occurrences that happen without explanation. The great majority are exchanges between persons who shared a close emotional tie while both were living: spouses, parents and children, siblings, and occasionally friends and other relatives. Most communications are "intention" calls, initiated by the deceased to impart a message, such as a farewell upon death, a warning of impending danger, or information the living needs to carry out a task. For example, actress Ida Lupino's father, Stanley, who died intestate in London during World War II, called Lupino six months after his death to relate information concerning his estate, including the location of some unknown but important papers. Some calls appear to have no other purpose than to make contact with the living; many of these occur on emotionally charged "anniversary" days, such as Mother's Day or Father's Day, a birthday or a holiday. In a typical "anniversary" call, the dead may do nothing more than repeat a phrase over and over, such as "Hello, Mom, is that you?" Persons who have received phone calls from the dead report that the voices are exactly the same as when the deceased was living; furthermore, the voice often uses pet names and words.
The telephone usually rings normally, although some recipients say the ring sounded flat and abnormal. In many cases the connection is bad, with a great deal of static and line noise, and occasionally the faint voices of other persons are heard, as though lines have been crossed. In many cases the voice of the dead one is difficult to hear and grows fainter as the call goes on. Sometimes the voice simply fades away but the line remains open, and the recipient hangs up after giving up on further communication. Sometimes the call is terminated by the dead and the recipient hears the click of disengagement; other times, the line simply goes dead. The phantom phone calls typically occur when the recipient is in a passive state of mind. If the recipient knows the caller is dead, the shock is great and the phone call very brief; invariably, the caller terminates the call after a few seconds or minutes, or the line goes dead. If the recipient does not know the caller is dead, a lengthy conversation of up to 30 minutes or so may take place, during which the recipient is not aware of anything amiss. In a minority of cases, the call is placed person-to-person, long-distance, with the assistance of a mysterious operator. Checks with the telephone company later turn up no evidence of a call having been placed. Similar to phone calls from the dead are "intention" phone calls occurring between two living persons. Such calls are much rarer than calls from the dead. In a typical "intention" call, the caller thinks about making the call but never does; the recipient nevertheless receives a call. In some cases, emergencies precipitate phantom calls: a surgeon is summoned by a nurse to the hospital to perform an emergency operation, a priest is called by a "relative" to give last rites to a dying man, and so forth. Some persons who claim to have had UFO encounters report receiving harassing phantom phone calls.
The calls are received soon after the witness returns home, or within a day or two of the encounter. In many cases the calls come before the witness has shared the experience with anyone; stranger still, they are often placed to unlisted phone numbers. The unidentified caller warns the witness not to talk and to "forget" what he or she saw. Phone calls allegedly may be placed to the dead as well. The caller does not find out until some time after the call that the person on the other end has been dead. In one such case, a woman dreamed of a female friend she had not seen for several years. In the disturbing dream, she witnessed the friend sliding down into a pool of blood. Upon awakening, she worried that the dream was a portent of trouble, and called the friend. She was relieved when the friend answered. The friend explained that she had been in the hospital, had been released and was due to be readmitted in a few days. She demurred when the woman offered to visit, saying she would call later. The return call never came. The woman called her friend again, only to be told by a relative that the friend had been dead for six months at the time the conversation took place. In several cases studied by researchers, the deceased callers make reference to an anonymous "they" and caution that there is little time to talk. The remarks imply that communication between the living and the dead is not only difficult but not necessarily desirable. Most phone calls from the dead occur within 24 hours of the death of the caller. Most short calls come from those who have been dead seven days or less; most lengthy calls come from those who have been dead several months. One of the longest death-intervals on record is two years. In a small number of cases, the callers are strangers who say they are calling on behalf of a third party, whom the recipient later discovers is dead. Several theories exist as to the origin of phantom phone calls.
(1) They are indeed placed by the dead, who somehow manipulate the telephone mechanisms and circuitry; (2) they are deceptions of elemental-type spirits who enjoy playing tricks on the living; (3) they are psychokinetic acts caused subconsciously by the recipient, whose intense desire to communicate with the dead creates a type of hallucinatory experience; (4) they are entirely fantasies created by the recipient. For the most part, phantom phone calls are not seriously regarded by parapsychologists. In the early 20th century, numerous devices were built by investigators in hopes of capturing ghostly voices; many of them were modifications of the telegraph and wireless. Thomas Alva Edison, whose parents were Spiritualists, believed that a telephone could be invented that would connect the living to the dead. He confirmed that he was working on such a device, but apparently it was never completed before his death. "Psychic telephone" experiments were conducted in the 1940s in England and America. Interest in the phenomenon waned until the 1960s, following the findings of Konstantin Raudive that ghostly voices could be captured on electromagnetic tape.
Seachem Denitrate 1 Liter Container de*nitrate is an economical, natural, porous material with a pore distribution and geometry that promote both aerobic nitrification within the first few millimeters of depth and anaerobic denitrification at the core. The material has a high surface area and supports a high density of bacteria. Although de*nitrate has some capacity to trap nitrate, this capacity, as with other nitrate-retaining materials such as certain zeolites and synthetic resins, is quite limited; the primary mechanism of nitrate removal is anaerobic. Detailed description below: Why It's Different Live rocks or reef rocks remove nitrate by anaerobic denitrification. de*nitrate removes nitrate by the same process. Efficiency is magnified severalfold by forcing the water to filter through the porous de*nitrate. As with reef rock, anaerobic conditions are achieved by the porosity and the depletion of oxygen by the aerobic process at the surface. Excessive flow rates should therefore be avoided, as they may impede development of an adequate anaerobic environment to support denitrifying bacteria. de*nitrate is also an excellent medium for aerobic nitrification, and it makes an ideal biological filter in drip trays, canister filters, sumps, or even box filters. At high flow rates (greater than 100 US gallons per hour), it will function solely as an aerobic filter. At slow flow rates (less than 50 US gallons per hour), it will function as both an aerobic filter and an anaerobic denitrifying filter. Directions for use For best results, de*nitrate should be placed to assure the flow of water through it, such as in a canister filter, chemical filtration module, or box filter. Flow rate should not exceed 200 L (50 gallons*) per hour. If higher flow rates are unavoidable, use Matrix or Pond Matrix instead. It is best to rinse off dust before use.
Once de*nitrate has been in use for several days, nitrate concentrations should start to fall and level off gradually at a concentration of about 4 to 5 mg/L as nitrate. As long as nitrate concentrations remain under control, the product is not exhausted. Each 500 mL of de*nitrate treats about 100 to 200 L (25 to 50 gallons*), depending on initial nitrate concentration and the current biological load. Enough should be used to remove nitrate at a rate at least as fast as the rate of formation. If very high nitrates are initially present, they should be brought down to less than 20 mg/L with water changes. Availability: Usually Ships in 1 to 2 Business Days
Excerpts from Thames: The Biography The River as Fact It has a length of 215 miles, and is navigable for 191 miles. It is the longest river in England but not in Britain, where the Severn is longer by approximately 5 miles. Nevertheless it must be the shortest river in the world to acquire such a famous history. The Amazon and the Mississippi cover almost 4,000 miles, and the Yangtze almost 3,500 miles; but none of them has arrested the attention of the world in the manner of the Thames. It runs along the borders of nine English counties, thus reaffirming its identity as a boundary and as a defence. It divides Wiltshire from Gloucestershire, and Oxfordshire from Berkshire; as it pursues its way it divides Surrey from Middlesex (or Greater London as it is inelegantly known) and Kent from Essex. It is also a border of Buckinghamshire. It guarded these once tribal lands in the distant past, and will preserve them into the imaginable future. There are 134 bridges along the length of the Thames, and forty-four locks above Teddington. There are approximately twenty major tributaries still flowing into the main river, while others such as the Fleet have now disappeared under the ground. Its "basin," the area from which it derives its water from rain and other natural forces, covers an area of some 5,264 square miles. And then there are the springs, many of them in the woods or close to the streams beside the Thames. There is one in the wood below the Sinodun Hills in Oxfordshire, for example, which has been described as an "everlasting spring," always fresh and always renewed. The average flow of the river at Teddington, chosen because it marks the place where the tidal and non-tidal waters touch, has been calculated at 1,145 millions of gallons (5,205 millions of litres) each day, or approximately 2,000 cubic feet (56.6 cubic metres) per second. The current moves at a velocity between ½ and 2¾ miles per hour.
The main thrust of the river flow is known to hydrologists as the "thalweg"; it does not move in a straight and forward line but, mingling with the inner flow and the variegated flow of the surface and bottom waters, takes the form of a spiral or helix. More than 95 per cent of the river's energy is lost in turbulence and friction. The direction of the flow of the Thames is therefore quixotic. It might be assumed that it would move eastwards, but it defies any simple prediction. It flows north-west above Henley and at Teddington, west above Abingdon, south from Cookham and north above Marlow and Kingston. This has to do with the variegated curves of the river. It does not meander like the Euphrates, where according to Herodotus the voyager came upon the same village three times on three separate days, but it is circuitous. It specialises in loops. It will take the riparian traveller two or three times as long to cover the same distance as a companion on the high road. So the Thames teaches you to take time, and to view the world from a different vantage. The average "fall" or decline of the river from its beginning to its end is approximately 17 to 21 inches (432 to 533 mm) per mile. It follows gravity, and seeks out perpetually the simplest way to the sea. It falls some 600 feet (183 m) from source to sea, with a relatively precipitous decline of 300 feet (91.5 m) in the first 9 miles; it falls 100 feet (30.4 m) more in the next 11 miles, with a lower average for the rest of its course. Yet averages may not be so important. They mask the changeability and idiosyncrasy of the Thames. The mean width of the river is given as 1,000 feet (305 m), and its mean depth as 30 feet (9 m); but the width varies from 1 or 2 feet (0.3 to 0.6 m) at Trewsbury to 5½ miles at the Nore. The tide, in the words of Tennyson, is that which "moving seems asleep, too full for sound and foam."
On its flood inward it can promise benefit or danger; on its ebb seaward it suggests separation or adventure. It is one general movement but it comprises a thousand different streams and eddies; there are opposing streams, and high water is not necessarily the same thing as high tide. The water will sometimes begin to fall before the tide is over. The average speed of the tide lies between 1 and 3 knots (1.15 and 3.45 miles per hour), but at times of very high flow it can reach 7 knots (8 miles per hour). At London Bridge the flood tide runs for almost six hours, while the ebb tide endures for six hours and thirty minutes. The tides are much higher now than at other times in the history of the Thames. There can now be a difference of some 24 feet (7.3 m) between high and low tides, although the average rise in the area of London Bridge is between 15 and 22 feet (4.5 and 6.7 m). In the period of the Roman occupation, it was a little over 3 feet (0.9 m). The high tide, in other words, has risen greatly over a period of two thousand years. The reason is simple. The south-east of England is sinking slowly into the water at the rate of approximately 12 inches (305 mm) per century. In 4000 BC the land beside the Thames was 46 feet (14 m) higher than it is now, and in 3000 BC it was some 31 feet (9.4 m) higher. When this is combined with the water issuing from the dissolution of the polar ice-caps, the tides moving up the lower reaches of the Thames are increasing at a rate of 2 feet (0.6 m) per century. That is why the recently erected Thames Barrier will not provide protection enough, and another barrier is being proposed. The tide of course changes in relation to the alignment of earth, moon and sun. Every two weeks the high "spring" tides reach their maximum two days after a full moon, while the low "neap" tides occur at the time of the half-moon. 
The highest tides occur at the times of equinox; this is the period of maximum danger for those who live and work by the river. The spring tides of late autumn and early spring are also hazardous. It is no wonder that the earliest people by the Thames venerated and propitiated the river. The general riverscape of the Thames is varied without being in any sense spectacular, the paraphernalia of life ancient and modern clustering around its banks. It is in large part now a domesticated river, having been tamed and controlled by many generations. It is in that sense a piece of artifice, with some of its landscape deliberately planned to blend with the course of the water. It would be possible to write the history of the Thames as a history of a work of art. It is a work still in slow progress. The Thames has taken the same course for ten thousand years, after it had been nudged southward by the glaciation of the last ice age. The British and Roman earthworks by the Sinodun Hills still border the river, as they did two thousand years before. Given the destructive power of the moving waters, this is a remarkable fact. Its level has varied over the millennia--there is a sudden and unexpected rise at the time of the Anglo-Saxon settlement, for example--and the discovery of submerged forests testifies to incidents of overwhelming flood. Its appearance has of course also altered, having only recently taken the form of a relatively deep and narrow channel, but its persistence and identity through time are an aspect of its power. Yet of course every stretch has its own character and atmosphere, and every zone has its own history. Out of oppositions comes energy, out of contrasts beauty. There is the overwhelming difference of water within it, varying from the pure freshwater of the source through the brackish zone of estuarial water to the salty water in proximity to the sea. 
Given the eddies of the current, in fact, there is rather more salt by the Essex shore than by the Kentish shore. There are manifest differences between the riverine landscapes of Lechlade and of Battersea, of Henley and of Gravesend; the upriver calm is in marked contrast to the turbulence of the long stretches known as River of London and then London River. After New Bridge the river becomes wider and deeper, in anticipation of its change. The rural landscape itself changes from flat to wooded in rapid succession, and there is a great alteration in the nature of the river from the cultivated fields of Dorchester to the thick woods of Cliveden. From Godstow the river becomes a place of recreation, breezy and jaunty with the skiffs and the punts, the sports in Port Meadow and the picnic parties on the banks by Binsey. But then by some change of light it becomes dark green, surrounded by vegetation like a jungle river; and then the traveller begins to see the dwellings of Oxford, and the river changes again. Oxford is a pivotal point. From there you can look upward and consider the quiet source; or you can look downstream and contemplate the coming immensity of London. In the reaches before Lechlade the water makes its way through isolated pastures; at Wapping and Rotherhithe the dwellings seem to drop into it, as if overwhelmed by numbers. The elements of rusticity and urbanity are nourished equally by the Thames. That is why parts of the river induce calm and forgetfulness, and others provoke anxiety and despair. It is the river of dreams, but it is also the river of suicide. It has been called liquid history because within itself it dissolves and carries all epochs and generations. They ebb and flow like water. The River as Metaphor The river runs through the language, and we speak of its influence in every conceivable context. 
It is employed to characterise life and death, time and destiny; it is used as a metaphor for continuity and dissolution, for intimacy and transitoriness, for art and history, for poetry itself. In The Principles of Psychology (1890) William James first coined the phrase "stream of consciousness" in which "every definite image of the mind is steeped . . . in the free water that flows around it." Thus "it flows" like the river itself. Yet the river is also a token of the unconscious, with its suggestion of depth and invisible life. The river is a symbol of eternity, in its unending cycle of movement and change. It is one of the few such symbols that can readily be understood, or appreciated, and in the continuing stream the mind or soul can begin to contemplate its own possible immortality. In the poetry of John Denham's "Cooper's Hill" (1642), the Thames is a metaphor for human life. How slight its beginning, how confident its continuing course, how ineluctable its destination within the great ocean: Hasting to pay his tribute to the sea, Like mortal life to meet eternity. The poetry of the Thames has always emphasised its affiliations with human purpose and with human realities. So the personality of the river changes in the course of its journey from the purity of its origins to the broad reaches of the commercial world. The river in its infancy is undefiled, innocent and clear. By the time it is closely pent in by the city, it has become dank and foul, defiled by greed and speculation. In this regress it is the paradigm of human life and of human history. Yet the river has one great advantage over its metaphoric companions. It returns to its source, and its corruption can be reversed. That is why baptism was once instinctively associated with the river. The Thames has been an emblem of redemption and of renewal, of the hope of escaping from time itself. 
When Wordsworth observed the river at low tide, with the vista of the "mighty heart" of London "lying still," he used the imagery of human circulation. It is the image of the river as blood, pulsing through the veins and arteries of its terrain, without which the life of London would seize up. Sir Walter Raleigh, contemplating the Thames from the walk by his cell in the Tower, remarked that the "blood which disperseth itself by the branches or veins through all the body, may be resembled to these waters which are carried by brooks and rivers over all the earth." He wrote his History of the World (1610) from his prison cell, and was deeply imbued with the current of the Thames as a model of human destiny. It has been used as the symbol for the unfolding of events in time, and carries the burden of past events upon its back. For Raleigh the freight of time grew ever more complex and wearisome as it proceeded from its source; human life had become darker and deeper, less pure and more susceptible to the tides of affairs. There was one difference Raleigh noticed in his history, when he declared that "for this tide of man's life, after it once turneth and declineth, ever runneth with a perpetual ebb and falling stream, but never floweth again." The Thames has also been understood as a mirror of morality. The bending rushes and the yielding willows afford lessons in humility and forbearance; the humble weeds along its banks have been praised for their lowliness and absence of ostentation. And who has ventured upon the river without learning the value of patience, of endurance, and of vigilance? John Denham makes the Thames the subject of native discourse in a further sense: Though deep, yet clear; though gentle, yet not dull; Strong without rage; without o'erflowing, full. This suggests that the river represents an English measure, an aesthetic harmony to be sought or wished for, but in the same breath Denham seems to be adverting to some emblem of Englishness itself.
The Thames is a metaphor for the country through which it runs. It is modest and moderate, calm and resourceful; it is powerful without being fierce. It is not flamboyantly impressive. It is large without being too vast. It eschews extremes. It weaves its own course without artificial diversions or interventions. It is useful for all manner of purposes. It is a practical river. When Robert Menzies, an erstwhile Australian prime minister, was taken to Runnymede he was moved to comment upon the "secret springs" of the "slow English character." This identification of the land with the people, the characteristics of the earth and water with the temperament of their inhabitants, remains a poignant one. There is an inward and intimate association between the river and those who live beside it, even if that association cannot readily be understood. From the Hardcover edition.
Australian Fur Seal pup feeding The Australian fur seal is the world's fourth-rarest seal species. Hunted to the brink of extinction last century, its population recovery has been slow, and seals are now wholly protected. The Australian fur seal is found from the coast of NSW, down around Tasmania to Victoria and South Australia. It is the most common seal in Tasmanian waters and breeds on small isolated rocks in Bass Strait between October and January. It also hauls out at various rocky areas around the Tasmanian coastline, especially outside the breeding season, when many seals disperse from the breeding colonies. It isn't always easy to tell the sexes apart, although adult males are much bigger animals than adult females, with large heads and heavily muscled necks and chests. Adult females average 125-170 cm in length and weigh between 50 and 120 kg. Cows are slender, silvery-grey on the back, with a creamy-yellow throat and chest, and a chocolate-brown belly. Newborn pups are almost black on the back and grey/light-brown on the belly, moulting after three months. Adult male seals can grow to 200-225 cm and weigh 220 kg to 360 kg. Bulls are usually dark grey/brown, with a mane of coarse hair on the neck and shoulders. Young seals of both sexes have grey-brown backs and yellowish belly fur. The dense coat is made of woolly underfur and long, coarse outer hairs. This traps air, which waterproofs and insulates the seal. Like all seals, they moult each year, replacing their old fur with new growth. A layer of fat assists with warmth and streamlining. The Australian fur seal eats mainly fish and cephalopods (squid, octopus and cuttlefish). Of the nineteen fish species known to be consumed, Jack Mackerel, Redbait and Leatherjackets form the main prey items. Of the eleven known cephalopod species eaten, the most frequently consumed is Gould's Squid (Nototodarus gouldi). Females give birth to a single pup, which is fed on thick, rich milk.
Pups are born in November-December, and are usually weaned 10-11 months later, although some cows may suckle a pup for up to four years. Once a cow gives birth for the first time, she is practically in a continuous state of lactation for the rest of her life, with maybe only a few weeks off between weaning last season's pup and having another. Australian fur seals breed on five rocky Bass Strait islands, but because seals only come ashore to rest and breed, it is impossible to know exactly how many there are. Based on counts at the breeding colonies each year, scientists estimate there are about 5000 pups born in Tasmanian waters each year. However, not all pups will survive to become adults. In fact, in the first two months of life, 15% of pups will die. This natural mortality continues throughout the life of the seal, but at a lower level than that of the pups. Seal mortality also occurs as a result of human activities such as deliberate persecution through shooting, fisheries bycatch and entanglement in plastic, non-biodegradable materials.
The Neighbor Squirrel These busy fluffballs have lost their fear of most predators - and they help plant pecan trees. By Sheryl Smith-Rodgers Have you ever watched an eastern fox squirrel (Sciurus niger) bury an acorn or pecan? A nuzzle here, another there, then he hurriedly pushes the leaves and grass over the site before scampering up the closest tree. Minutes later, he's back with another nut. Over the course of three months, that industrious squirrel can bury several thousand pecans. Come winter, when food's scarce, he'll find them again with his excellent sense of smell. Some will escape his appetite, though, and sprout into saplings, which is how many native nut trees get planted. Eastern fox squirrels - the state's most common and wide-ranging squirrel and a popular game animal, too - occur in forests and riparian habitats. They also easily adapt to cities and neighborhoods, where they've lost most of their fear of natural predators. "Playing the call of a red-tailed hawk didn't faze squirrels on campus," reports Bob McCleery, a wildlife lecturer at Texas A&M University, who has studied urban squirrels in College Station. "When we played a coyote call in the Navasota river bottom, a squirrel immediately flattened itself in the crotch of a tree for a good five minutes." When agitated, fox squirrels - whose fur closely resembles that of a gray fox - bark and jerk their long, bushy tails, which they use for balance when scampering on utility lines and other high places. Tails provide warmth and protection, too. "In the summer, I've seen them lying down with their tails over their heads to block the sun," McCleery says.
Ragtime and blues fused ‘All That Jazz’ By Laura Szepesi Published: Sunday, March 17, 2013, 7:09 p.m. Updated: Monday, March 18, 2013 EDITOR'S NOTE: Thursday marks the 85th birthday of well-known Connellsville jazz trombonist Harold Betters. We salute him with this four-part series, starting today with a brief history of jazz music. In 1979, actor Roy Scheider brought the life of Broadway dancer/director Bob Fosse to the big screen in the film “All That Jazz.” “All” is the perfect way to describe jazz music. Jazz was born around 1900 in New Orleans — about the same time as the earliest music recordings became available to the public. It grew out of ragtime, which many sources claim is the first true American music. Like jazz, ragtime has Southern roots, but was also flavored by the southern Midwest. It was popular from the late 1800s to around 1920. It developed in African American communities as a mix of march music (from composers such as John Philip Sousa), black songs and dances, including the cakewalk. Ragtime: Dance on Eventually, ragtime spread across the United States via printed sheet music, but its roots were as live dance music in the red-light districts of large cities such as St. Louis and New Orleans. Ernest Hogan is considered ragtime's father. He named it ragtime because of the music's lively ragged syncopation. Ragtime faded as jazz's following grew. However, composers enjoyed major success in ragtime's early years. Scott Joplin's 1899 “Maple Leaf Rag” was a hit, as was his “The Entertainer,” which was resurrected as a Top 5 hit when it was featured in the 1974 movie “The Sting” starring Robert Redford and Paul Newman. Born of ragtime, jazz was also heavily influenced by the blues. Blues originated in the late 1800s, but in the Deep South. It is an amalgam of Negro spirituals, work songs, shouts, chants and narrative lyrics. Fused with blues Like jazz, the blues comes in many forms: delta, piedmont, jump and Chicago blues.
Its popularity grew after World War II when electric guitars — rather than acoustic guitars — became popular. By the early 1970s, blues had formed another hybrid: blues rock. While ragtime is jangly and spirited, the blues takes after its name: blue, or melancholy. Its name is traced to 1912, when Hart Wand copyrighted the first blues song, “Dallas Blues.” Jazz — as a mix of ragtime and blues — has fused into many styles since its emergence. In the 1910s, New Orleans jazz was the first to take off. In the 1930s and 1940s, Big Band swing, Kansas City jazz and bebop prevailed. Other forms include cool jazz and jazz rock; today, there's even cyber jazz. Jazz: Always changing The late jazz trombone player J.J. Johnson summed jazz up as restless. “It won't stay put ... and never will,” he was quoted as saying, according to various sources. Johnson's sentiment is heartily endorsed by Connellsville jazz trombonist Harold Betters. Betters turns 85 years old this week. He will share decades of his memories about music and growing up in Connellsville as his March 21 birthday approaches. Laura Szepesi is a freelance writer. Tuesday: Just how did Harold Betters decide to play the trombone?
High School Drop-out Rate at Record Low

A record seven-in-ten (69%) Hispanic high school graduates in the Class of 2012 enrolled in college that fall, two percentage points higher than the rate (67%) among their white counterparts. At the same time, the high school drop-out rate among Latino youths has come down by half – from 28% in 2000 to 14% in 2011. Despite improvements on these measures, Hispanics continue to lag behind other youth on a number of key higher education indicators, such as completion of four-year college degrees.

Two-thirds of Legal Mexican Immigrants Are Not U.S. Citizens

Nearly two-thirds of the 5.4 million legal immigrants from Mexico who are eligible to become citizens of the United States have not yet taken that step. Their naturalization rate—36%—is only half that of legal immigrants from all other countries combined, according to a Pew Hispanic Center analysis of federal government data. A nationwide survey of Hispanic immigrants by the Center finds that nearly all (93%) who have not yet naturalized say they would if they could. But barriers such as a lack of English proficiency and the financial cost of naturalization are identified as reasons why many legal immigrants have not yet done so.

Trends in migration flows, the characteristics of the foreign-born population and attitudes towards immigration policy issues. Reports and public opinion surveys examining the changing electoral participation and views of Latinos.

- Election Fact Sheets: Data on the size and social and economic characteristics of the Hispanic and non-Hispanic eligible voter populations (2012 | 2010 | 2008)
- Interactive: Mapping the Latino Electorate
- Latino Voters in the 2012 Election
- The Latino Vote in the 2010 Elections
- The Latino Electorate in 2010: More Voters, More Non-Voters

The Pew Hispanic Center recently published “When Labels Don’t Fit: Hispanics and their Views of Identity,” a report based on a nationwide survey that found most Hispanics don’t embrace the term “Hispanic.” And even fewer prefer the term “Latino.” We then invited journalists, scholars and civic leaders to share their views about identity.
What is “AP*”?

The letters “AP” stand for Advanced Placement. A course that has “AP” in the title is a high school course with…
- The level of difficulty of a first-year college course, with content that is rigorous and extensive in its subject area
- Content that meets college-level curriculum standards outlined by the College Board; every AP course goes through an approval process by the College Board for its AP designation
- The opportunity for students to take a subject exam provided by the College Board each May; qualifying AP exam scores enable students to earn credit or advanced placement at many colleges.

How does a course qualify to be called “AP”?

Before an AP course is offered by a school, it must go through—and pass—a course audit. “The AP Course Audit was created at the request of both secondary school and college members of the College Board... [to] provide AP teachers and administrators with clear guidelines on curricular and resource requirements that must be in place for AP courses.... [and] give colleges and universities confidence that [all] AP courses are designed to meet the same clearly articulated college-level criteria.... All schools wishing to label a course ‘AP’ must submit the subject-specific AP Course Audit form and the course syllabus for each teacher of that AP course.” (See this page for the reference and more information.)

When you see “AP” in a course’s name, you know that the course conforms to a college-level curriculum standard. All PHC Prep Academy courses pass the AP course audit before they are taught to students. (See our full list of AP-approved courses.)

Who takes AP courses?

Most students who take AP courses are juniors and seniors in high school. However, younger students can also take a course if they are ready, and adults older than 18 can take AP courses as well. If you want to learn more about what it takes to be ready for AP studies, go here.
To check out course information and see if your student is ready to take a specific course, you can find more details on our course listing page. If, after reading a course description, you still have questions regarding course content and your student’s readiness, simply call us at 540-338-8290 and we’ll get you what you need to decide well.

What do AP high school courses have to do with college?

Here are the top three reasons why AP courses help college-bound students:
- AP studies can give you a head start on your college degree. Many colleges give students credit toward graduation on the basis of high AP exam scores, or allow them to place out of lower-level classes.
- College admissions officers like to see AP courses on your high school transcript. AP courses demonstrate a high level of high school achievement that can give students an advantage in the college admissions process. According to a College Board study, “85 percent of selective colleges and universities report that a student’s AP experience favorably impacts admissions decisions.”
- AP courses give you a preview of what college will be like. AP-level work helps students learn essential college study skills. For students with AP experience, the challenges of rigorous college courses will be much more familiar and manageable.

What colleges and universities accept AP exam scores?

Specific college admissions offices can tell you their policies regarding AP score acceptance. We recommend that you go to the website of the college or university you are interested in, or call its admissions office, for specific details. Another helpful resource is a tool on the College Board’s website that lets you look up AP credit policies for individual colleges.

How does the scoring of the exams work?

You can find information about AP exams and scoring in our AP Score Results reports.

Why should my student take an AP course?

AP courses have many benefits for you and your student.
Click here now to read The Top 10 Reasons Your Teen Needs to Take an AP Course.
Introduction to principles of chemistry and fundamentals of inorganic and biochemistry. Structure and chemistry of carbohydrates, lipids, proteins, biochemistry of enzymes, metabolism, body fluids and radiation effects. On-line materials include the course syllabus, copies of the lecture slides and animations, interactive Periodic Table, chapter summaries and practice exams. This course is targeted towards Health Science Majors.

Introduction to principles of chemistry. This course is targeted towards Chemistry Majors.

Laboratory experiments to develop techniques in organic chemistry and illustrate principles. On-line materials include step-by-step prelabs for many of the experiments that students will be conducting.

Theoretical principles of quantitative and instrumental analysis. Emphasis is placed on newer analytical tools and equipment.

Intermediate level course. Includes a discussion of the structure, function and metabolism of proteins, carbohydrates and lipids. In addition, there is a review of enzymes, DNA and RNA.

This course stresses theory and application of modern chromatographic methods. On-line materials include the course syllabus, copies of course lecture slides and animations.

A 'short course' covering the use of a mass spectrometer as a GC detector. Basic instrumentation, data treatment and spectral interpretation methods will be discussed. On-line materials include copies of course lecture slides and tables to assist in the interpretation of mass spectra.

Coverage of statistical methods in Analytical Chemistry. Course includes basic statistics, experimental design, modeling, exploratory data analysis and other multivariate techniques. On-line materials include the course syllabus, homework problems and copies of the lecture slides.

A survey of the basic equipment, data and methodology of Analytical methods that rely on radioisotopic materials. On-line materials include the course syllabus, homework problems, copies of the lecture slides and animations.
Now that we’ve said a lot about individual operators on vector spaces, I want to go back and consider some other sorts of structures we can put on the space itself. Foremost among these is the idea of a bilinear form. This is really nothing but a bilinear function to the base field: $B: V \times V \to \mathbb{F}$. Of course, this means that it’s equivalent to a linear function from the tensor square: $B: V \otimes V \to \mathbb{F}$. Instead of writing this as a function, we will often use a slightly different notation. We write a bracket $\langle v, w \rangle$, or sometimes $\langle v, w \rangle_B$, if we need to specify which of multiple different inner products is under consideration.

Another viewpoint comes from recognizing that we’ve got a duality for vector spaces. This lets us rewrite our bilinear form as a linear transformation $B_1: V \to V^*$. We can view this as saying that once we pick one of the vectors $v$, the bilinear form reduces to a linear functional $\langle v, \_ \rangle$, which is a vector in the dual space $V^*$. Or we could focus on the other slot and define $B_2(w) = \langle \_, w \rangle$.

We know that the dual space of a finite-dimensional vector space has the same dimension as the space itself, which raises the possibility that $B_1$ or $B_2$ is an isomorphism from $V$ to $V^*$. If either one is, then both are, and we say that the bilinear form is nondegenerate.

We can also note that there is a symmetry on the category of vector spaces. That is, we have a linear transformation $\tau: V \otimes V \to V \otimes V$ defined by $\tau(v \otimes w) = w \otimes v$. This makes it natural to ask what effect this has on our form. Two obvious possibilities are that $\langle v, w \rangle = \langle w, v \rangle$ and that $\langle v, w \rangle = -\langle w, v \rangle$. In the first case we’ll call the bilinear form “symmetric”, and in the second we’ll call it “antisymmetric”.

In terms of the maps $B_1$ and $B_2$, we see that composing with the symmetry $\tau$ swaps the roles of these two functions. For symmetric bilinear forms, $B_1 = B_2$, while for antisymmetric bilinear forms we have $B_1 = -B_2$.

This leads us to consider nondegenerate bilinear forms a little more. If $B_1$ is an isomorphism it has an inverse $B_1^{-1}: V^* \to V$. Then we can form the composite $B_1^{-1} \circ B_2: V \to V$. If the form is symmetric then this composition is the identity transformation on $V$. On the other hand, if the form is antisymmetric then this composition is the negative of the identity transformation. Thus, the composite transformation measures how much the bilinear form diverges from symmetry. Accordingly, we call it the asymmetry of the form $B$.

Finally, if we’re working over a finite-dimensional vector space we can pick a basis $\{e_i\}$ for $V$, and get a matrix for $B$. We define the matrix entry $B_{ij} = \langle e_i, e_j \rangle$. Then if we have vectors $v = v^i e_i$ and $w = w^j e_j$ we can calculate

$$\langle v, w \rangle = B_{ij} v^i w^j$$

In terms of this basis and its dual basis $\{e^j\}$, we find the image of the linear transformation $B_1(v) = B_{ij} v^i e^j$. That is, the matrix also can be used to represent the partial maps $B_1$ and $B_2$. If $B$ is symmetric, then the matrix is symmetric: $B_{ij} = B_{ji}$, while if it’s antisymmetric then $B_{ij} = -B_{ji}$.
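The matrix calculation above is easy to check numerically. A minimal sketch in Python (NumPy assumed; the symmetric matrix `M` and the helper name `bilinear` are illustrative choices of mine, not part of the original post):

```python
import numpy as np

def bilinear(M, v, w):
    """Evaluate <v, w> = B_ij v^i w^j for the form whose matrix is M."""
    return v @ M @ w

# An arbitrary symmetric example matrix (M[i, j] == M[j, i]):
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

symmetric = np.allclose(M, M.T)                        # <v, w> == <w, v> for all v, w
nondegenerate = not np.isclose(np.linalg.det(M), 0.0)  # B_1 is an isomorphism
```

Here `bilinear(M, v, w)` and `bilinear(M, w, v)` agree, as they must for a symmetric form, and the nonzero determinant witnesses that the partial map $B_1$ is invertible, i.e. the form is nondegenerate.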
The Gram-Schmidt Process

Now that we have a real or complex inner product, we have notions of length and angle. This lets us define what it means for a collection of vectors to be “orthonormal”: each pair of distinct vectors is perpendicular, and each vector has unit length. In formulas, we say that the collection $\{e_1, \dots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = \delta_{ij}$.

These can be useful things to have, but how do we get our hands on them? It turns out that if we have a linearly independent collection of vectors $\{v_1, \dots, v_n\}$ then we can come up with an orthonormal collection $\{e_1, \dots, e_n\}$ spanning the same subspace of $V$. Even better, we can pick it so that the first $k$ vectors $\{e_1, \dots, e_k\}$ span the same subspace as $\{v_1, \dots, v_k\}$. The method goes back to Laplace and Cauchy, but gets its name from Jørgen Gram and Erhard Schmidt.

We proceed by induction on the number of vectors in the collection. If $n = 1$, then we simply set

$$e_1 = \frac{v_1}{\lVert v_1 \rVert}$$

This “normalizes” the vector to have unit length, but doesn’t change its direction. It spans the same one-dimensional subspace, and since it’s alone it forms an orthonormal collection.

Now, let’s assume the procedure works for collections of size $n - 1$ and start out with a linearly independent collection of $n$ vectors. First, we can orthonormalize the first $n - 1$ vectors using our inductive hypothesis. This gives a collection $\{e_1, \dots, e_{n-1}\}$ which spans the same subspace as $\{v_1, \dots, v_{n-1}\}$ (and so on down, as noted above). But $v_n$ isn’t in the subspace spanned by the first $n - 1$ vectors (or else the original collection wouldn’t have been linearly independent). So it points at least somewhat in a new direction.

To find this new direction, we define

$$w = v_n - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle e_i$$

This vector will be orthogonal to all the vectors from $e_1$ to $e_{n-1}$, since for any such $e_j$ we can check

$$\langle e_j, w \rangle = \langle e_j, v_n \rangle - \sum_{i=1}^{n-1} \langle e_i, v_n \rangle \langle e_j, e_i \rangle = \langle e_j, v_n \rangle - \langle e_j, v_n \rangle = 0$$

where we use the orthonormality of the collection to show that most of these inner products come out to be zero. So we’ve got a vector orthogonal to all the ones we collected so far, but it might not have unit length. So we normalize it:

$$e_n = \frac{w}{\lVert w \rVert}$$

and we’re done.
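The inductive procedure above translates directly into code. Here is a minimal Python sketch (NumPy assumed, real inner product; the function name `gram_schmidt` is my own label, not from the original post):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list of vectors (real inner product).

    Mirrors the inductive procedure: normalize the first vector, then
    subtract from each later vector its components along the vectors
    already collected, and normalize what remains.
    """
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for e in basis:
            w -= np.dot(e, w) * e          # remove the component along e
        norm = np.linalg.norm(w)
        if norm < 1e-12:                   # w = 0 means v depended on the others
            raise ValueError("vectors are not linearly independent")
        basis.append(w / norm)
    return basis

# The first k outputs span the same subspace as the first k inputs:
e = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
```

The returned vectors are pairwise orthogonal with unit length, and since each new vector is built only from the inputs seen so far, the nested-subspace property from the proof holds automatically.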
TAKING EVERY PRECAUTION

Japan Takes Measures to Prevent SARS (June 9, 2003)

As severe acute respiratory syndrome (SARS), a new type of pneumonia, rages in wide areas of Asia and other places, the Japanese government has been busy taking measures to prevent an outbreak from occurring in Japan. The government has urged people to take caution in traveling to affected areas, and it has been making every effort to prevent SARS from entering Japan. In addition, work is progressing on a system in which medical institutions, national and local governments, and corporations will act together to prevent the spread of SARS in the event of an outbreak in Japan. As a result of these efforts, as of June 9, there have been no confirmed or probable cases of SARS in Japan.

[Photo: Medical staff practice using an isolator. (Jiji)]

Plans Already Developed for Dealing with Patients

On May 1 the government brought the heads of the relevant ministries and agencies together for a first-ever meeting devoted to SARS in order to decide what measures should be taken in the event that someone in Japan is found to be infected with the virus. The group decided to call on people returning from China to stay at home for 10 days, which is believed to be the incubation period for the disease. Taking this into consideration, the Ministry of Health, Labor, and Welfare made plans for taking action in the event of an outbreak. It decided to give local governments the authority to direct people believed likely to be infected, or "probable patients," to hospitalize themselves. In the event that a patient refuses, the local governments are empowered to forcibly hospitalize the person.

Local governments are readying themselves to accept patients. According to a survey conducted by the Nihon Keizai Shimbun in early May, all of the nation's 47 prefectures had already completed action plans spelling out what measures would be taken in the event of an outbreak.
In addition, some 250 medical institutions around the country have made such preparations as setting up "negative air-pressure rooms" to prevent the virus from spreading within the hospital or to the outside. Local governments in such places as Kitakyushu City, Hokkaido, and Mie Prefecture have been purchasing capsules called isolators to be used when suspected SARS patients are moved, and they have conducted drills on how to use them with volunteers playing the role of patients.

In May a foreign traveler who had been to Japan was found to be infected with SARS. When this was discovered, the government and local authorities quickly implemented emergency measures, as a result of which no secondary infections occurred. According to a survey conducted by the Asahi Shimbun, 28 local governments out of the 47 prefectures and 13 major cities in Japan, nearly half the total, were rethinking their plans to cope with a potential SARS outbreak in light of this news. Fukushima Prefecture decided to check whether visitors from abroad have come from an area to which the World Health Organization recommends postponing travel. It will also make use of the local hotels association to determine the previous whereabouts of such guests. Kagawa Prefecture, meanwhile, which had previously only planned for people who had come in close contact with SARS patients, defined as having been within 2 meters, has created an action plan for checking on people who have had even a low possibility of coming in contact with a carrier.

Public and Private Sectors Taking Action

The Japanese government is stepping up its efforts to take rapid, nationwide measures to prevent SARS infection. The Ministry of Health, Labor, and Welfare has accelerated revision of the Infectious Disease Law, for example.
And while local governments are the first line of defense in tracking the path of infection and following up on people who may have been exposed, the national government will become directly involved in the event that infection spreads outside of a local area. Japan is also actively engaged in international cooperation aimed at preventing the spread of SARS.

The private sector has also been taking action to prevent the spread of SARS and to reassure travelers. West Japan Railway Co. (JR West) has set up a SARS-response headquarters and is considering disinfecting affected carriages in the event that an infected person is found to have been onboard a certain train at a certain time. The company also decided to publicly release information on the time and route traveled by any SARS patients. Orient Ferry, which runs a ferry route from Shimonoseki to China's Qingdao, has since late April requested that all passengers and crew fill out health questionnaires, and the company has trained staff for what to do in the event that a passenger falls ill with SARS while onboard. The terminal in Qingdao, the shuttle bus, and the inside of the ship are all disinfected every day. Meanwhile, some companies have taken the step of postponing scheduled business trips to affected areas, and, in response to requests by the government, airlines and ship operators whose vessels operate in Japan are distributing health questionnaires to their staff and passengers.

Japan has avoided SARS so far, and there is every reason to be confident that the country will remain free of the disease. Even if an outbreak did occur, the concerted efforts of local and national governments and private enterprises to prepare for such an eventuality suggest that it would be handled quickly and efficiently.

Note: The government's "Measures upon Entry/Return to Japan" for travelers heading to Japan can be found here.
(http://www.mofa.go.jp/policy/health_c/sars/measure0521.html)

Related Web Sites
- Ministry of Health, Labor, and Welfare
- World Health Organization
- West Japan Railway Co. (JR West)

Copyright (c) 2004 Web Japan. Edited by Japan Echo Inc. based on domestic Japanese news sources. Articles presented here are offered for reference purposes and do not necessarily represent the policy or views of the Japanese Government.
Edited by Malcolm Brynin, John Ermisch
Published February 23rd 2012 by Routledge – 244 pages
Series: Routledge Advances in Sociology

Some relationships are within the family -- such as between parents and children, grandparents and children and between siblings -- while others are between friends. In some cases, these distinctions are blurred (Are short-term partners family members? Are family members seen as such when relations become unfriendly? Does divorce, if amicable, replace a family with a friendship?). Using quantitative, cutting-edge statistical analysis, in conjunction with a multi-disciplinary approach, the contributors to this volume address the contemporary state of and dynamics in these various types of relationships, linking these to key rites of passage such as leaving home, marriage and childbirth, to see how these stand after a period of rapid social change. The book will be of interest to scholars in a broad range of disciplines, including sociology, social policy and economics.

Part I: Forming and Maintaining Relationships
1. Introduction: The Social Significance of Relationships (John Ermisch & Malcolm Brynin)
2. Living Apart Together (John Ermisch & Thomas Siedler)
3. Gender Differences in Close Friendship Networks over the Life Cycle (Michèle Belot)
4. Leaving Home (Maria Iacovou & Lavinia Parisi)
5. The Social Significance of Homogamy (Malcolm Brynin, Simonetta Longhi & Álvaro Martínez Pérez)
6. How Close are Couples? (Malcolm Brynin, Simonetta Longhi & Álvaro Martínez Pérez)

Part II: Relationships and Social Welfare
7. Young Child-Parent Relationships (John Ermisch)
8. Adult Child-Parent Relationships (John Ermisch)
9. Gender and Time Use over the Life Course (Man Yee Kan & Jonathan Gershuny)
10. Residential Mobility, Mobility Preferences and Psychological Health (Priscila Ferreira & Mark Taylor)
11. Early Labour Market Experience and the Timing of Family Formation (Emilia Del Bono)
12. Unemployment and Partnership Dissolution (Morten Blekesaune)
13. Marital Splits and Income Changes over the Longer Term (Stephen Jenkins)

Malcolm Brynin is a researcher at ISER, University of Essex. He undertakes research on education, employment, and the family. He has published articles on the returns to education and to skills, and on the gender implications of educational and technological change.

John Ermisch is a researcher at ISER, University of Essex. His latest book, An Economic Analysis of the Family (2003), and numerous articles in economic and demographic journals demonstrate how the standard analytical methods of microeconomics can help us understand resource allocation and the distribution of welfare within the family.
Archaeological Site of Rehman Dheri
Department of Archaeology and Museums
Property names are listed in the language in which they have been submitted by the State Party.

The archaeological site of Rehman Dheri consists of a rectangular-shaped mound covering some twenty-two hectares and standing 4.5 metres above the surrounding fields. The final occupational phase of the site is clearly visible on the surface of the mound by eye and also through air photographs. It consisted of a large walled rectangular area with a grid-iron network of streets and lanes dividing the settlement into regular blocks. Walls delineating individual buildings and street frontages are clearly visible in the early morning dew or after rain, and it is also possible to identify the location of a number of small-scale industrial areas within the site, marked as they are by eroding kilns and scatters of slag. The surface of the mound is littered with thousands of sherds and artefacts, slowly eroding out of room fills.

The archaeological sequence at the site of Rehman Dheri is over 4.5 metres deep and covers a sequence of over 1,400 years beginning at c. 3300 BC. The site represents the following periods:
I: c. 3300-2850 BC
II: c. 2850-2500 BC
III: c. 2500-1900 BC

It is generally accepted that the settlement received its formal plan in its earliest phases and that subsequent phases replicated the plan over time. Although its excavators have cut a number of deep trenches or soundings into the lower levels, the areas exposed have been too limited to undertake a study of change in layout and the spatial distribution of craft activities. It was abandoned at the beginning of the mature Indus phase by the middle of the third millennium BC and subsequent activities, greatly reduced, are only recorded on the neighbouring archaeological mound, Hisam Dheri.
The plan of the Early Harappan settlement is therefore undisturbed by later developments and, as such, represents the most exceptionally preserved example of the beginning of urbanisation in South Asia.
The basic requirements for a synthetic motor are repetitive 360° motion, the consumption of energy and unidirectional rotation. Two efforts in this direction were published in 1999 in the same issue of Nature. For the two reports below, it is unknown whether these molecules are capable of generating torque. It is expected that reports of more efforts in this field will increase as understanding of chemistry and physics at the nanolevel improves.

This rotation takes place in five steps. First, the amine group present on the triptycene moiety is converted to an isocyanate group by condensation with a phosgene molecule (a). Thermal or spontaneous rotation around the central bond then brings the isocyanate group into proximity with the hydroxyl group located on the helicene moiety (b), thereby allowing these two groups to react with each other (c). This reaction irreversibly traps the system as a strained cyclic urethane that is higher in energy and thus energetically closer to the rotational energy barrier than the original state. Further rotation of the triptycene moiety therefore requires only a relatively small amount of thermal activation in order to overcome this barrier, thereby releasing the strain (d). Finally, cleavage of the urethane group restores the amine and alcohol functionalities of the molecule (e).

The result of this sequence of events is a unidirectional 120° rotation of the triptycene moiety with respect to the helicene moiety. Additional forward or backward rotation of the triptycene rotor is inhibited by the helicene moiety, which serves a function similar to that of the pawl of a ratchet. The unidirectionality of the system results from both the asymmetric skew of the helicene moiety and the strain of the cyclic urethane formed in step (c). This strain can only be lowered by the clockwise rotation of the triptycene rotor in step (d), as both counterclockwise rotation and the reverse of step (d) are energetically unfavorable.
In this respect, the preference for the rotation direction is determined by both the positions of the functional groups and the shape of the helicene, and is thus built into the design of the molecule rather than dictated by external factors.

The motor by Kelly and co-workers is an elegant example of how chemical energy can be used to induce controlled, unidirectional rotational motion, a process which resembles the consumption of ATP in organisms in order to fuel numerous processes. However, it does suffer from a serious drawback: the sequence of events that leads to 120° rotation is not repeatable. Kelly and co-workers have therefore searched for ways to extend the system so that this sequence can be carried out repeatedly. Unfortunately, their attempts to accomplish this objective have not been successful and the project has currently been abandoned.

In 1999, the laboratory of Prof. Dr. Ben L. Feringa at the University of Groningen (The Netherlands) reported the creation of a unidirectional molecular rotor. Their 360° molecular motor system consists of a bis-helicene connected by an alkene double bond displaying axial chirality and having two stereocenters. One cycle of unidirectional rotation takes four reaction steps. The first step is a low-temperature endothermic photoisomerization of the trans (P,P) isomer 1 to the cis (M,M) isomer 2, where P stands for a right-handed helix and M for a left-handed helix. In this process, the two axial methyl groups are converted into two less sterically favorable equatorial methyl groups. By increasing the temperature to 20 °C, these methyl groups convert back exothermically to the (P,P) cis axial groups (3) in a helix inversion. Because the axial isomer is more stable than the equatorial isomer, reverse rotation is blocked. A second photoisomerization converts (P,P) cis 3 into (M,M) trans 4, again with accompanying formation of sterically unfavorable equatorial methyl groups.
A thermal isomerization process at 60 °C closes the 360° cycle back to the axial positions.

A major hurdle to overcome is the long reaction time for complete rotation in these systems, which does not compare to the rotation speeds displayed by motor proteins in biological systems. In the fastest system to date, with a fluorene lower half, the half-life of the thermal helix inversion is 0.005 seconds. This compound is synthesized using the Barton-Kellogg reaction. In this molecule, the slowest step in its rotation (the thermally induced helix inversion) is believed to proceed much more quickly because the larger tert-butyl group makes the unstable isomer even less stable than when the methyl group is used. Said differently, the unstable isomer is more destabilized than the transition state that leads to helix inversion. The different behaviour of the two molecules is illustrated by the fact that the half-life of the compound with a methyl group instead of a tert-butyl group is 3.2 minutes.

The Feringa principle has been incorporated into a prototype nanocar. The car synthesized thus far has a helicene-derived engine with an oligo(phenylene ethynylene) chassis and four carborane wheels, and is expected to be able to move on a solid surface under scanning tunneling microscopy monitoring, although so far this has not been observed. Interestingly, the motor does not perform with fullerene wheels because they quench the photochemistry of the motor moiety.
What exactly does "desecration" mean? Is it just flag burning — or does it also include smearing the flag with dirt? How about dropping it on the ground? And why should law enforcement get to decide who to arrest for such desecration?

Free expression and the right to dissent are among the core principles which the American flag represents. The First Amendment must be protected most when it comes to unpopular speech. Failure to do so fails the very notion of freedom of expression. Our democracy is strong because we tolerate all peaceful forms of expression, no matter how uncomfortable they make us feel, or how much we disagree. If we take away the right to dissent - no matter how unpopular - what freedom will be sacrificed next?

Make a Difference

Your support helps the ACLU defend free speech and a broad range of civil liberties.

Burn the Flag or Burn the Constitution? (2011 blog): Sadly, Congress is once again considering an amendment to the U.S. Constitution banning desecration of the American flag and, in doing so, testing our political leaders' willingness to defend what is arguably one of America's most sacred principles — protecting political speech.

Flag Amendment Defeated, First Amendment Stands Unscathed (2006): On June 27, 2006, the Senate voted down the proposed Flag Desecration Amendment by the slimmest margin ever. The vote was 66-34, just one vote short of the two-thirds needed to approve a constitutional amendment.

Reasons to Oppose the Flag Desecration Amendment (2004 resource): Talking Points on Opposing the Flag Desecration Amendment

Background on the Flag Desecration Amendment (2004 resource)

Fight for the Flag - Resources (2006 resource)
Pure water does not conduct electricity very well. However, when certain substances are dissolved in water, the solution does conduct electricity. You can make a simple device that shows how well a solution conducts electricity. This device uses a flashlight bulb to indicate how well the solution conducts electricity. The better the solution conducts electricity, the brighter the bulb will glow. To construct the conductivity tester you will need: - ● a 12-volt AC adapter - This converts the 110-volt electricity from a wall socket to safer 12-volts. It must be 12 volts AC, not DC, because DC will not work for this. You may have a suitable adapter around the house from an old device you're no longer using, or you may get one from an electronics store (e.g. Radio Shack, catalog number 273-1631). - ● an audio cable with a 1/4-inch or 1/8-inch monaural plug on one end - The plug will become the probe for testing conductivity. You may have an unused cable around the house. What is on the other end does not matter because it will be removed. You may also get a suitable plug-and-cable assembly from an electronics supply store (e.g., Radio Shack, catalog number 42-2381). - ● a 12-volt flashlight bulb and socket - The bulb will provide a visible indication of how well a material conducts electricity. You can get these from an electronics store (e.g., Radio Shack, catalog numbers 272-1143 for the bulb and 272-357 for the socket). - ● a block of wood about 4 by 4 by 1 inch - The electrical connections will be made on this block, and the lamp will be mounted on it, too. - ● two 1-inch wood screws - These hold the lamp socket to the block of wood. - ● one 3/4-inch round-headed screw and washer - These will be used to make an electrical connection. - ● wire cutter and wire stripper - These are used to prepare the electrical connections. - ● a screw driver Cut the plug from the end of the cord of the AC adapter. Separate about four inches of the cord into its two conductors. 
Remove about 1 inch of insulation from each of the conductors. Cut the cord of the audio cable about 2 feet from the plug. Remove about four inches of insulation from the cut end of the cable. This will expose bare stranded wire wrapped around insulation that covers a center wire. Unwrap the stranded wires from the insulation and twist the strands together to make a single bundle. Strip about 1 inch of the inner insulation from the center wire. Use wood screws to attach the lamp base (socket) to the block of wood. Put the washer on the round-head screw and screw it into the block next to the lamp base, but do not tighten the screw yet. Wrap one wire from the AC adapter (it doesn't matter which) around the screw above the washer. Wrap the end of the bundled wire from the audio plug around the same screw. Tighten the screw to fasten the two wires together. Attach the remaining wire from the AC adapter to one of the terminals of the lamp base. Attach the remaining wire from the audio plug to the other terminal of the lamp base. Screw the 12-volt flashlight lamp into the lamp base. To make the connections more secure, you can use a heavy staple to hold each of the two wires to the wooden block. The conductivity tester is now complete and ready to use. To test that it works properly, plug the AC adapter into an AC outlet. The lamp will not light. Touch the audio plug sideways to a piece of metal, such as a coin. When the two metal conductors of the plug are shorted by the coin, the lamp will glow brightly. The bright glow indicates that current is easily flowing through the piece of metal. Testing a solution Put some water into a cup. Insert the end of the audio plug into the water. If you use distilled water, the lamp will not glow. If you use tap water, the lamp may glow dimly, if at all. If it glows, it shows that the tap water conducts electricity only poorly. Add some table salt to the water and stir the mixture. 
The lamp will glow brightly when the plug is put into the solution, because salt solution conducts electricity very well, almost as well as metal. You can investigate different materials from around your house to see how well they conduct electricity when mixed with water. Some things to try, in addition to salt, are sugar, baking soda, shampoo, laundry detergent, rubbing alcohol, and antacid tablets. Anything that dissolves in water can be tested. In order to avoid mixing the materials you're testing, be sure to rinse the plug in water and dry it before testing a different substance. Do not put the plug in a solution for more than 10 to 15 seconds, because doing so will cause the plug to corrode rapidly. Keep a record of which substances conduct electricity well, which conduct poorly, and which do not conduct at all. Sometimes, mixtures of substances conduct differently than the separate substances. As an example, test the conductivity of vinegar. Then test the conductivity of laundry ammonia. Then, pour a little ammonia into the vinegar and test the mixture. You will see a big difference between the separate substances and the mixture! An electric current is a flow of electrical charge. When a metal conducts electricity, the charge is carried by electrons moving through the metal. Electrons are subatomic particles with a negative electrical charge. When a solution conducts electricity, the charge is carried by ions moving through the solution. Ions are atoms or small groups of atoms that have an electrical charge. Some ions have a negative charge and some have a positive charge. Pure water contains very few ions, so it does not conduct electricity very well. When table salt is dissolved in water, the solution conducts very well, because the solution contains ions. The ions come from the table salt, whose chemical name is sodium chloride. Sodium chloride contains sodium ions, which have a positive charge, and chloride ions, which have a negative charge. 
Because sodium chloride is made up of ions, it is called an ionic substance. Not all substances are made up of ions. Some are made of uncharged particles called molecules. Sugar is such a substance. When sugar is dissolved in water, the solution does not conduct electricity, because there are no ions in the solution. Some substances that are made of molecules form solutions that do conduct electricity. Ammonia is such a substance. When ammonia dissolves in water, it reacts with the water and forms a few ions. This is why laundry ammonia, which is a solution of ammonia in water, conducts electricity, but not very well. Sometimes, when two different solutions are mixed, the substances they contain react with each other and form ions. This is what happens when ammonia and vinegar are mixed. An ammonia solution contains only a few ions, and it conducts electricity only poorly. A vinegar solution also contains only a few ions and conducts only a little electricity. But when these solutions are mixed, the ammonia reacts with the acid in vinegar (acetic acid), and they form a lot of ions. This is why the mixture of ammonia and vinegar conducts electricity very well.
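The ion picture above maps directly onto bulb brightness through Ohm's law: the solution between the plug's two contacts and the bulb form a series circuit, so fewer ions mean higher solution resistance and less current. A minimal sketch, with all resistance values being rough illustrative assumptions rather than measurements:

```python
# Series-circuit model of the conductivity tester: the bulb and the solution
# between the plug's contacts share one 12-volt loop. Resistance values below
# are illustrative assumptions, not measured figures.

def bulb_current(solution_ohms, bulb_ohms=12.0, volts=12.0):
    """Ohm's law for the series circuit: I = V / (R_bulb + R_solution)."""
    return volts / (bulb_ohms + solution_ohms)

# Fewer ions -> higher resistance -> smaller current -> dimmer bulb.
for name, r in [("coin (metal)", 0.1),
                ("salt water", 50.0),
                ("tap water", 5000.0),
                ("distilled water", 1e6)]:
    print(f"{name:16s} {bulb_current(r) * 1000:10.2f} mA")
```

Shorting the plug with a coin leaves almost the full 12 volts across the bulb, while distilled water throttles the current to a fraction of a milliamp, which is why the bulb stays dark.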
Closed Circuit TeleVision or CCTV video storage is available in many different forms. Sometimes the type of storage device and medium is dependent on the type of camera. In the following article, we’ll take a look at some of the more common methods of CCTV Storage. Before we continue our discussion on CCTV video storage, let’s briefly review how a digital video security and surveillance system works. Normally the system consists of a digital video camera, a Digital Video Recorder or DVR, and a monitor (or several monitors). The camera captures the video footage and transfers it into an electronic form, sends it to the DVR for storage, and/or to a monitor for live viewing. If the video is stored on a DVR, it can be retrieved at a later date and viewed on the monitor at that time. Since CCTV became digital (it used to be a totally analog system several years ago), the system’s components seem to mimic a personal computer system in many ways. Although a CCTV DVR is not a personal computer, it has processors and other items, including a Hard Disk Drive or HDD just like a personal computer. The HDD is where the CCTV video storage for the system takes place. The CCTV video storage HDD is a non-volatile medium that utilizes random access digital storage. Non-volatile means that the recorded data remains even if the unit is switched off. Random access means the drive can write or read data at any location on the disk directly, without stepping through everything recorded before it, which makes saving and retrieving CCTV video as quick a process as possible. The drive contains read/write heads that float on a very thin cushion of air above the platters and save and retrieve the data magnetically. The DVR normally uses a utility program known as a CODEC which stands for COmpression/DECompression to shrink the size of the incredibly large digital video file it creates before storing it on the HDD. The CODEC reduces the size of the file while maintaining a high quality video image.
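The file-size pressure that makes a CODEC necessary is easy to quantify. A rough sketch of the arithmetic (the H.264 bitrate is an assumed ballpark figure, not a value from any specific product):

```python
# Why a CODEC matters: raw digital video is enormous compared with a
# compressed H.264 stream. The compressed bitrate is an assumed ballpark.

WIDTH, HEIGHT = 704, 480   # D1 resolution, common in CCTV
BITS_PER_PIXEL = 24        # uncompressed RGB
FPS = 30

raw_mbps = WIDTH * HEIGHT * BITS_PER_PIXEL * FPS / 1e6
h264_mbps = 2.0            # assumed typical H.264 bitrate for this resolution

print(f"raw: {raw_mbps:.0f} Mbit/s, H.264: {h264_mbps:.0f} Mbit/s, "
      f"compression ~{raw_mbps / h264_mbps:.0f}:1")
```

Even under these rough assumptions the compression ratio is on the order of 100:1, which is what makes multi-day recording on a single HDD practical at all.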
There are several CODECs available and as digital technology advances, quite often so too does CODEC technology. The most popular and latest CODEC technology used for CCTV video storage is the H.264 CODEC. This utility is used to prepare files for saving to the HDD as well as streaming them across a network or the Internet. Digital video files can become quite large; using a CODEC enhances the ability of the CCTV video storage medium tremendously. Once the HDD is full, it begins erasing old video and recording new video over it. The smaller the area taken up by the digital video files, or the bigger the HDD storage capacity, the longer the period of storage without the need for transferring footage to another storage medium for backup such as a DVD. There are other types of CCTV video storage besides DVRs and their corresponding HDDs, depending on the digital video system. For example, cameras are available today that are IP ready (Internet Protocol ready). These cameras can be plugged into the Internet and stream their files to any location in the world where there is broadband Internet service (this includes 3G and 4G smartphones). The device used to control and coordinate these types of cameras often looks like a desktop computer instead of the flatter DVR and is known as a Network Video Recorder or NVR. Using compatible cameras and an NVR, cameras can be in more than one physical location (i.e., large distances apart, such as in two different stores within the same city) and still record to the NVR. There are other types of digital video security and surveillance systems that may not meet the characteristics of conventional CCTV systems, but nonetheless technically use CCTV video storage. Oftentimes these systems are remote, portable, or hidden/disguised systems that contain everything (except the monitor) within one unit.
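The retention trade-off mentioned above (smaller files or a bigger HDD mean a longer recording window before the DVR overwrites old footage) can be sketched as a quick calculation. The bitrates are illustrative assumptions; real figures depend on resolution, frame rate, and scene activity:

```python
# Rough retention estimate for a DVR that overwrites its oldest footage
# once the drive fills. Bitrate figures are illustrative assumptions.

def retention_days(capacity_tb, cameras, mbps_per_camera):
    """Days of footage the drive holds before overwriting begins."""
    capacity_bits = capacity_tb * 1e12 * 8            # terabytes -> bits
    bits_per_day = cameras * mbps_per_camera * 1e6 * 86_400
    return capacity_bits / bits_per_day

# e.g. a 2 TB drive recording four cameras at ~2 Mbit/s each:
print(f"{retention_days(2, 4, 2.0):.1f} days")  # about 23 days
```

Halving the per-camera bitrate (a better CODEC setting) or doubling the drive capacity each doubles the retention window, which is exactly the trade-off the article describes.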
The CCTV video storage on these units is often saved to a variety of other non-volatile portable storage media such as CF cards, flash thumb drives, and SD media. These units save the digital video file in the same manner as full-size CCTV systems, but are much smaller and more compact for the purpose of their designated use. CCTV storage today has come a long way in a very short time, and HDDs have increased from holding megabytes of data to terabytes. If you have any additional questions about CCTV storage, contact one of our security experts today.
In preparation for Christmas, I read Stephen Nissenbaum's 1998 "The Battle for Christmas," a thorough exploration of this season. The book's title may be deceiving, because it has nothing to do with the recent sacred-vs.-secular Christmas quarrels. Nissenbaum explores the myriad ways that Christmas has evolved in our nation. It turns out we've been jockeying for more than 300 years over what this holiday means. In Colonial America our faith-filled ancestors banned Christmas altogether, outlawing it in some colonies. Until the 1760s, one could not even find an almanac that would print the word "Christmas" on the date Dec. 25. This opposition arose because Christmas had become a drunken spectacle where gangs of poor young men roamed the streets, making merry and engaging in acts of petty rowdyism, vaguely like today's New Year's Eve. It was customary and permissible for these gangs to knock on doors of strangers to demand gifts. ("So give us some figgy pudding....") Our nation's first "battle" for Christmas was the movement to domesticate the holiday, a battle that Nissenbaum suggests involved merchants, the middle and upper classes and the church. Merchants began linking Christmas and the purchase of manufactured gifts as early as the 1830s as society began to stress family celebrations in front of a tree and with Santa visiting every home. In case you think that your complaining will reverse the commercialism of this holiday, according to Nissenbaum that complaint first emerged in the 1830s. Complain if you must, but don't expect results. Nissenbaum so thoroughly explores Clement Moore's "'Twas the Night before Christmas" that one learns why Saint Nick touches the side of his nose and why his pipe is a short one. Nissenbaum contends that the ascendance of Santa Claus, the emergence of the Christmas tree and even the giving of gifts contribute to this gradual process of making Christmas a less revolutionary, more predictable holiday.
He explores Dickens and Scrooge, Christmas parties for poor children and even the complicated master-slave relationship at Christmas leading up to and immediately following the Civil War. If you prefer to maintain that Christmas was a pure season of private devotion and public worship until Sears, Roebuck, Wal-Mart and the Supreme Court got involved, don't read this book. Ditto if you enjoy lamenting that "They've taken Christmas away from us." Nissenbaum might say that a pure, simple Christmas never existed; rather, it has evolved since the first day the Colonists set foot on our shore, an evolution showing no sign of abating. Nissenbaum's scholarly, heavily footnoted book is enlightening and readable. But his analysis of Christmas reminds me of a scientist who thoroughly explains the rainbow but never grasps its beauty. And so as this season continues to evolve, I'll enjoy my Christmas tree, sing both "White Christmas" and "Joy to the World," and be grateful again for the mystery of Bethlehem, which properly understood, is the most revolutionary act of history. Contact columnist minister Creede Hinshaw at Wesley Monumental United Methodist Church in Savannah at email@example.com.
John Wesley's Death-Mask

1) From umc.org, the official site for the United Methodist Church: FAQ Belief

2) From The United Methodist Portal website: What happens immediately after a person dies?

Question: What happens immediately after a person dies? Do they go directly to heaven or hell or do they go to a holding place until Christ returns to earth for the final judgment? Answer: The basic beliefs of United Methodists can be found in the Book of Discipline in Our Doctrinal Standards and General Rules. However, mention of "hell" and "heaven" as serious afterlife issues cannot be found in this section or any other part of the Book of Discipline. Methodist Doctrine: The Essentials by Ted A. Campbell says, "The Methodist Articles of Religion, following the teachings of the Reformation, rejected the medieval Catholic idea of purgatory as a place where the souls of those who have died in Christ could be aided or helped by the prayers of the living. John Wesley himself believed in an intermediate state between [death] and the final judgment, where those who rejected Christ would be aware of their coming doom (not yet pronounced), and believers would share in the "bosom of Abraham" or "paradise," even continuing to grow in holiness there. This belief, however, is not formally affirmed in Methodist doctrinal standards, which reject the idea of purgatory but beyond that maintain silence on what lies between death and the last judgment." United Methodists have no official doctrine on “heaven” or “hell” except for this confessional statement: “We believe in the resurrection of the dead, the righteous to life eternal and the wicked to endless condemnation.” . . .

3) From John Henry Overton (1835-1903), John Wesley, Boston and New York: Houghton, Mifflin & Co. / The Riverside Press, Cambridge, 1891, p.
39: John Wesley believed in the intermediate state between death and the final judgment “where believers would share in the ‘bosom of Abraham’ or ‘paradise,’ even continuing to grow in holiness there,” writes Ted Campbell, a professor at Perkins School of Theology, in his 1999 book Methodist Doctrine: The Essentials (Abingdon). That view has not been officially affirmed by the church. ("Heavenly minded: It’s time to get our eschatology right, say scholars, authors," Robin Russell, 6 April 2009)

"1756, November 1, was a day of triumphant joy, as All Saints' Day generally is. How superstitious are they who scruple giving God solemn thanks for the lives and deaths of His saints!"

4) Letter to John Wesley (26 March 1770) from Calvinist Anglican Augustus Toplady (1740-1778):

"1767, November 1. Being All Saints' Day (a Festival I dearly love) . . . " . . . He always made a point of preaching on "The Communion of Saints" on All Saints' Day. He thoroughly realized the doctrine of the Intermediate State, and to his dying day used to speak of his departed Christian friends, not as "having gone to heaven," in the popular phraseology, but as being in Paradise, or in Abraham's bosom.

You affect to be deemed a minister of the national Church. Why then do you decry her doctrines, and, as far as in you lies, sap her discipline? That you decry her doctrines needs no proof: witness, for example, the wide discrepancy between her decisions and yours on the articles of freewill, justification, predestination, perseverance, and sinless perfection; to say nothing concerning your new-fangled doctrine of the intermediate state of departed souls.

5) Letter of John Wesley to Miss B (17 April 1776), from The Works of the Rev. John Wesley, Vol. X: Tracts and Letters on Various Subjects, New York: J. & J. Harper, 1827, p. 322:

But what is the essential part of heaven? Undoubtedly it is To see God: To know God: To love God.
We shall then know both his Nature, and his works of creation and providence, and of redemption. Even in paradise, in the intermediate state between death and the resurrection, we shall learn more concerning these in an hour, than we could in an age, during our stay in the body. We cannot tell indeed how we shall then exist, or what kind of organs we shall have: the soul will not be encumbered with flesh and blood; but probably it will have some sort of ethereal vehicle, even before God clothes us "with our nobler house of empyrean light."

6) Albert C. Outler (1908-1989), John Wesley: Folk-Theologian, Theology Today, Vol. 34, No. 2, July 1977: His lively discussions of "the intermediate state" are integral to his eschatology as a whole.

7) Karen B. Westerfield Tucker, American Methodist Worship, Oxford University Press, 2001, p. 202: [footnote: Cf., e.g., his sermon "Of Hell," 1.4; "The Trouble and Rest of Good Men," Proem., II.6; "The Rich Man and Lazarus," 1.3; "On Worldly Folly," II.6; "On Faith" (Heb. 11:1), 4.] Decisions made during life were therefore inseparably connected to what came after life. Upon death, according to Wesley, the souls of the deceased would enter an intermediate, penultimate state in which they would remain until reunited with the body at the resurrection of the dead. In that state variously identified as "the ante-chamber of heaven," "Abraham's bosom," and "paradise," . . .

8) Douglas P. Finkbeiner, "Interpreting Luke 16: Abraham, Lazarus, and the Rich Man -- Parable or History?": John Wesley on the parable -- But is the subsequent account merely a parable, or a real history? It has been believed by many, and roundly asserted, to be a mere parable, because of one or two circumstances therein, which are not easy to be accounted for. In particular, it is hard to conceive, how a person in hell could hold conversation with one in paradise.
But, admitting we cannot account for this, will it overbalance an express assertion of our Lord: "There was," says our Lord, "a certain rich man." -- Was there not? Did such a man never exist? "And there was a certain beggar named Lazarus." -- Was there, or was there not? Is it not bold enough, positively to deny what our blessed Lord positively affirms? Therefore, we cannot reasonably doubt, but the whole narration, with all its circumstances, is exactly true. And Theophylact (one of the ancient commentators on the Scriptures) observes upon the text, that, "according to the tradition of the Jews, Lazarus lived at Jerusalem."
Leakage-Delay Optimization Techniques The power consumed by leakage currents in advanced process nodes has become a large portion of the overall power budget and has driven designers to employ multiple techniques to reduce leakage power in the digital logic portion of complex SoC ASIC designs. The graph above shows the impact that different implants can have on the threshold voltage of devices and the resulting impact on sub-threshold or source-drain leakage. To take advantage of this, standard cell library variants with multiple threshold transistors can be used to insert slower, lower leakage cells on non-critical paths. Multi-Vt libraries provide a fairly “coarse” set of trade-offs in that there are large delay penalties for large reductions in leakage. More recently, varying the gate length of transistors in standard cell libraries has emerged as an additional technique to provide more, finer grained choices of cells with different leakage-delay trade-offs. As shown in the graph above there is an exponential relationship between source-drain leakage and gate length but only a linear impact on the delay. In addition, transistor gate lengths can be varied in small increments thereby providing more fine-grained trade-offs between leakage and delay. This can result in the ability to swap many more lower leakage cells on non-critical paths resulting in significantly lower leakage with no impact to overall speed of the IC. Speed is not impacted because no cells are swapped on critical paths. Varying the gate length of standard cell libraries is accomplished in two ways. Cells can be designed with longer than minimum channel length transistors. The cells of this library variant would be slower and lower leakage. Within the same footprint of these longer channel length cells, minimum gate length cells can be used to create faster, higher leakage cells. 
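The exponential-versus-linear relationship described above can be illustrated with a toy model. This is not foundry data; the constants are assumptions chosen only to show the shape of the trade-off:

```python
import math

# Illustrative model: sub-threshold leakage falls exponentially with gate
# length while delay grows only linearly. All constants are assumptions for
# illustration, not characterization data from any process.

NOMINAL_NM = 40.0  # assumed nominal gate length

def rel_leakage(gate_nm, k=0.15):
    """Leakage relative to the nominal-length cell (exponential in length)."""
    return math.exp(-k * (gate_nm - NOMINAL_NM))

def rel_delay(gate_nm, slope=0.02):
    """Delay relative to the nominal-length cell (linear in length)."""
    return 1.0 + slope * (gate_nm - NOMINAL_NM)

# Small gate-length biases buy large leakage savings for little delay cost:
for nm in (40, 42, 44, 46):
    print(f"L = {nm} nm: leakage {rel_leakage(nm):.2f}x, delay {rel_delay(nm):.2f}x")
```

Under these assumed constants, a 10 percent increase in gate length cuts leakage roughly in half while adding only a few percent of delay, which is why biased cells on non-critical paths reduce total leakage without slowing the chip.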
Since the cells have the same footprint, they can be swapped on non-critical paths just as cells with different Vt’s can be swapped. This methodology has an area impact as the longer gate length cells have a larger footprint than they otherwise would have if minimum gate lengths had been used. In addition to the above method of varying the gate length, a technique called gate length biasing (GLB) is also used to provide incremental gate length changes. Gate length biased cells typically are the same area as the base or nominal cell and therefore share the same footprint enabling them to be swappable. GLB library variants are characterized for delay and leakage characteristics associated with the gate lengths that will be manufactured. The actual physical gate length biasing occurs post tape out. Gate length biased transistors are marked with a layer in the GDSII database and are identified for either increases or decreases as part of the mask fabrication process. In this manner, transistors in a complex SoC ASIC can be biased selectively and by varying amounts. With the emergence of these multiple techniques to optimize designs for leakage and delay, it is important that the design flow and optimization tools select the most appropriate cells to produce the best quality of results. As can be seen in the graph above, there can be cell choices that are sub-optimal. Circled above are examples where a longer gate length is always better than a shorter channel, higher Vt choice. Coupled with the fact that each additional Vt results in incremental masks and process steps, and their associated costs, it is imperative that the optimization flow makes correct choices, not only from a parametric quality of results point of view but from a die cost perspective as well.
Given that Vt cell swapping was the first technique to become mainstream in advanced technology node design flows, there is the potential that some flows and tools have embedded algorithms that favor Vt swapping or swap Vt cells first. As shown above, this may not produce optimal results. Tela Innovations’ optimizer was developed with a fine grained algorithm from the start and has demonstrated superior results on over 50 production tape outs to date. For more information on Tela’s optimizer and design optimization services click here.
St. Anne, Patroness of Detroit St. Anne was named by the Vatican as the patron saint of the Archdiocese of Detroit. We honor the mother of the Blessed Virgin Mary and prayerfully ask for her intercession. One may pray to any saint for any intention, but a patron saint is seen as the particular advocate for a chosen place or activity. St. Anne is the mother of the Blessed Virgin Mary. Though she is not mentioned by name in the Bible, we know of her through early Christian writings, the most important of which is the Protoevangelium of James, written in about 150 A.D. We are told that Anne, the wife of Joachim, was advanced in years before her prayers for a child were answered. An angel appeared and told her she would conceive a child who "shall be spoken of in all the world." St. Anne's feast day is celebrated on July 26. She is known as the patron saint of equestrians, housewives, women in labor, cabinet-makers, and miners. Devotion to St. Anne became popular in the Christian East by the fourth century, and that tradition later spread to the Christian West. When the French began to colonize modern-day Quebec, they brought their devotion to St. Anne with them—asking for her protection in the New World. This devotion was planted on the banks of the Detroit River by the original French-Canadian settlers. Two days after Antoine de la Mothe Cadillac landed with 51 others in what is now downtown Detroit on July 24, 1701, they celebrated Mass and began construction of a church named after Saint Anne. Today, Ste. Anne de Detroit Church is the second oldest continually operating parish in the United States. As is now recognized by the Holy See, the church of Detroit was placed under St. Anne's protection from its very founding.
Fat fliers weigh down on airlines Heavy suitcases aren't the only things weighing down airplanes and requiring them to burn more fuel, pushing up the cost of flights. A new government study reveals that airlines increasingly have to worry more about the weight of their passengers. America's growing waistlines are hurting the bottom lines of airline companies as the extra kilos on passengers are causing a drag on planes. Heavier fliers have created heftier fuel costs, according to the government study, and the extra fuel burned also had an environmental impact, as an estimated 3.8 million extra tonnes of carbon dioxide were released into the air. Through the 1990s, the average weight of Americans increased by 4.5kg, according to the Centres for Disease Control and Prevention. The extra weight caused airlines to spend $US275 million ($A364 million) to burn an additional 1.4 billion litres of fuel in 2000 just to carry the additional weight of Americans, the federal agency estimated in a recent issue of the American Journal of Preventive Medicine. "The obesity epidemic has unexpected consequences beyond direct health effects," said Dr Deron Burton of the CDC. "Our goal was to highlight one area that had not been looked at before." The agency said its calculations are rough estimates, issued to highlight previously undocumented consequences of the ongoing obesity epidemic. The estimates were calculated by determining how much fuel the extra 4.5kg per passenger represented in Department of Transportation airline statistics, Burton said. Obesity is a life-or-death struggle in the United States, the underlying cause of 400,000 deaths in 2000, a 33 per cent jump from 1990. If current trends persist, it will become the nation's number one cause of preventable death, the CDC said earlier this year.
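The study's fuel and CO2 figures hang together, as a back-of-the-envelope check shows. The fuel density and emission factor below are standard approximations I am assuming, not numbers from the study itself:

```python
# Back-of-the-envelope check of the quoted figures. Density and emission
# factor are standard approximations (assumptions), not values from the study.

FUEL_LITRES = 1.4e9        # extra fuel burned in 2000, per the study
DENSITY_KG_PER_L = 0.8     # approximate density of jet fuel
CO2_PER_KG_FUEL = 3.16     # approx. kg of CO2 per kg of jet fuel burned

co2_tonnes = FUEL_LITRES * DENSITY_KG_PER_L * CO2_PER_KG_FUEL / 1000.0
print(f"{co2_tonnes / 1e6:.1f} million tonnes of CO2")
```

The result, about 3.5 million tonnes, is the same order as the study's 3.8 million tonnes; the gap is well within the uncertainty of the assumed density and emission factor.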
More than half - 56 per cent - of US adults were overweight or obese in the early 1990s, according to a CDC survey. That rose to 65 per cent in a similar survey done from 1999 to 2002.
The Nature Elephant The Karen people have always lived naturally in the forest, and, for many generations, have relied on elephants to help them. Because elephants are ideal for carrying heavy loads they are essential for transportation through rural areas, and, more recently, for carrying tourists. The Karen people simply would not survive without them. The Karen people have always used elephants to help carry them through dense parts of the jungle which would be difficult on foot, such as down steep hills to fetch water from the creek, or carrying heavy bags of rice from the fields to the barn. What is little effort for an elephant would be a huge amount of labour for humans. Because they are so important to the Karen people, elephants are their friends, and are treated with respect. To manage an elephant and gain its trust requires knowledge, love and understanding. This is why the Karen people look after their elephants so well, and only certain members of the Karen family are trained enough to do this. Some of them call elephant-care a kind of black magic, and this black magic is passed down through families. Part of the skill of caring for elephants is to ensure the elephant is listened to. Karen legend has it that if a female elephant is ignored, it is likely that her eggs will become infected, and therefore she will not be able to continue the elephant family. This serious consequence acts as a grave warning to those handling elephants. A sense of duty, honor and patience are as important to the elephant as they are to the Karen people as a whole. The legend of Chang Karen This is a story about how elephants became so important in the life of the Karen hill tribe. The legend goes that once upon a time, there were two brothers living in the forest. One day, their mother needed to leave home on business, so she instructed the two boys to look after the house, be good, and by no means split open the bamboo tree, as it contained many flies.
Being the mischievous boys they were, as soon as their mother was out of sight, they crept up to the forbidden bamboo and cracked it open, curious to see what would happen. Immediately, the room was filled with flies, two of which flew up into each of the boys' noses. Panicking, the boys didn't know what to do. Soon, they felt their bodies changing. Their legs began to itch, and grow longer and wider. Their heads began to swell, until they felt the size and shape of footballs. Their noses grew longer and their bodies became heavier and more clumsy. When their mother returned home, she was shocked to see what had happened to her sons. She offered them cooked rice, but they turned it down with a slow shake of their large heads, their noses swinging from side to side. They were still growing, and were too ashamed of their bad behaviour to eat. The mother offered them water, but they did not want to drink it. Soon, when the sons had grown too big for the house, and could now only walk on four legs, they left the house to find grass. This was all they felt like eating. Very soon the word spread, and people came from all over the valley to see the mutated boys. Their tongues had become too big for them to speak, so the sons had stopped talking. As if to compensate, their ears grew large so they could hear very, very well. They had become elephant-boys. One day, some workers came to see if the elephant-boys could help them carry heavy loads. They gave them wood and led them to their workshops, and the elephant-boys were calm and obedient. The workers realised that what was a huge job for them was little effort for these giant elephant boys. And life continued this way for many generations. This is the remarkable story of how elephants and humans came to work together in harmony, explaining how they can exist together in the forest.
Elephants and the Karen Hill Tribe people Deep in the rich forests of northern Thailand, in the bowl of a green valley, lies the Karen hill tribe community. Making the most of their natural surroundings, this tribe has managed to forge an incredibly simple life in the forest using no modern machinery or medicine. They need only the trees, plants, animals, and are especially reliant on the mighty elephant. The Karen people have a strong bond with elephants: their self-sufficient lifestyles are surprisingly similar, and intertwining. Wild elephants play a very important role in the Karen way of life, as well as the relationships of valley inhabitants, and the magic of the valley.
Why Man U? AskMen / Getty Images "Gathering together at Old Trafford must have given these people something of the sense of community that they had previously known in their villages." Visiting Manchester the other day, I was driving down a nondescript road past dreary shops and offices when I saw the top of a sports stadium poking into the gray sky. It was Old Trafford. Team buses carrying soccer players from more glamorous cities such as Barcelona have been known to echo with cries of disgust as they pull in here. The home of Manchester United is rainy and underwhelming. The estimated 333 million humans who consider themselves United fans don’t all know that Manchester is a city in England, but many of those who do would probably be surprised to find just how mid-ranking a city it is. Yet when United’s American ruling family, the Glazers, sold club shares in August, United was valued at $2.3 billion. That made it the world’s most valuable sports franchise, ahead of Real Madrid and baseball’s New York Yankees, according to Forbes. In short, United is bigger than Manchester. So why on earth did this global behemoth arise precisely here? And how, in the last 134 years, has United shaped soccer, in England and now the world? When a soccer club was created just by the newish railway line in 1878, the Manchester location actually helped. The city was then growing like no other on earth. In 1800 it had been a tranquil little place of 84,000 inhabitants, so insignificant that as late as 1832 it didn’t have a member of parliament. The Industrial Revolution changed everything. Workers poured in from English villages, from Ireland, from feeble economies everywhere (my own great-grandparents arrived on the boat from Lithuania). By 1900, Manchester was Europe’s sixth-biggest city, with 1.25 million inhabitants. 
The club by the railway line was initially called Newton Heath, because the players worked at the Newton Heath carriage works of the Lancashire and Yorkshire Railway Company. They played in work clogs against other work teams. Jim White’s Manchester United: The Biography nicely describes the L&YR workers as “sucked in from all over the country to service the growing need for locomotives and carriages.” Life in Manchester then was neither fun nor healthy, White writes. In some neighborhoods, average male life expectancy was just 17. This was still the same brutal city where a few decades before, Karl Marx’s pal Friedrich Engels had run his father’s factory. The conditions of the industrial city were so awful it inspired communism. (My own great-grandparents lost two of their children to scarlet fever in Manchester before moving on to much healthier southern Africa.) Inevitably, most of these desperate early Mancunians were rootless migrants. Unmoored in their new home, many embraced the local soccer clubs. Gathering together at Old Trafford must have given these people something of the sense of community that they had previously known in their villages. That’s how the world’s first great industrial city engendered the world’s greatest soccer brand.
Contrary to popular belief, El Niño is not Spanish for “bad ski season.” “It just tips the probabilities that way,” said Brad Colman of the National Weather Service. So, even though scientists are predicting an El Niño winter, ski resort staffs and 2010 Olympics planners aren’t panicking. El Niño is a weather pattern created by the warming of tropical Pacific Ocean waters. Typically an El Niño winter means warmer and drier winters in the Northwest. In 2004-05 there was so little snow, skier visits dropped from 1.9 million the previous year to less than 500,000 – the worst ski season on record. The El Niño of 2002-03 resulted in a drop from 2.2 million visits to 1.4 million. “We aren’t getting nervous at all,” said Tiana Enger of Crystal Mountain. “We’ve seen that an El Niño isn’t always a bad thing.” Enger is right. The 2006-07 El Niño couldn’t keep Crystal from opening two weeks early, and Whistler Blackcomb got 558 inches of snow, its second-snowiest season. During the 1998-99 El Niño winter, Mount Baker set a national record with 1,140 inches of snow. What type of ski season this El Niño brings will likely be decided early in the next few months, Colman said. “If it is a snowy, wet fall we can build up a good snowpack,” Colman said. The warm weather brought by El Niño usually comes in January, February and March. A good snow base early would keep skiers and snowboarders on the slopes even if there’s a shortage of fresh powder. NaTai Perdue isn’t worried about the forecast either. Perdue is in charge of snowmaking at Whistler Blackcomb, where Olympic skiers will be racing for gold medals Feb. 12-28. “The courses will be dialed without a doubt,” Perdue said. “We are tracking for normal.” Skiers have been able to ski all the way from the summit to the village by Christmas every winter, Perdue said – even in the especially bad winter of 2004-05. Whistler Blackcomb’s artillery includes 270 snowmaking guns and a 52 million-gallon reservoir. 
Whistler Mountain, where the Olympic races will be held, has a 20 million-gallon reservoir. Whistler Blackcomb typically turns 130 to 180 million gallons of water into snow each season. That’s enough to blanket 650 to 900 acres with a foot of snow. It would take 39 million gallons of water to cover the Olympic runs and training hills with man-made snow if the weather didn’t help. However, Perdue said, there was already snow on the upper mountain in late September. Whistler Blackcomb officials expect to have the Olympic runs open for the public to ski by the end of the year and will leave them open until Jan. 24 before they close for the Olympics. Perdue said the racecourses need only about a foot of snow to make them race-ready. The snow is injected with water for racing. Snow on a normal recreational ski run is about 30 percent water, Perdue said. An injected race course is 70 percent water. “It’s like skiing on an ice rink,” Perdue said. Snowmaking was scheduled to start at Whistler Blackcomb on Oct. 12, Perdue said. “Everything will be ready for the Olympics and regular recreational skiing,” Perdue said. Craig Hill: 253-597-8742
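The water-to-acreage figures above can be sanity-checked with a little arithmetic. A minimal sketch, assuming a standard 325,851 US gallons per acre-foot; the roughly 1.63x water-to-snow volume expansion used here is not stated in the article but is the factor implied by its own 130-180 million gallons and 650-900 acre figures:

```python
GALLONS_PER_ACRE_FOOT = 325_851  # US gallons in one acre-foot of water

def acres_covered(gallons_of_water, snow_depth_ft=1.0, expansion=1.63):
    """Acres blanketed to the given depth by snow made from this much water.

    `expansion` is the assumed snow-volume-to-water-volume ratio; man-made
    snow is denser than natural powder, so this is well below the ~3x of
    fluffy snow.
    """
    water_acre_ft = gallons_of_water / GALLONS_PER_ACRE_FOOT
    snow_acre_ft = water_acre_ft * expansion
    return snow_acre_ft / snow_depth_ft

low = acres_covered(130_000_000)   # ~650 acres at one foot deep
high = acres_covered(180_000_000)  # ~900 acres at one foot deep
print(f"{low:.0f} to {high:.0f} acres")
```

The match with the article's range suggests its writers used a similar density assumption.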
Catching fish at your favorite spot can be like a light switch: up it is on, down it is off. Fishing pressure, traffic, weather, and water conditions can all flip the bite on or off, causing a previously productive fishing pattern to fall apart. Change is not a bad thing if you can read the pattern and understand what is going on in that mysterious realm beneath the waves. Like a light switch, your hot hole may be a turn-on today and a dead spot tomorrow; the reason, in a word, is pressure. Understanding the fish-behavior puzzle means understanding the link between fish and the barometer. Any object in the water, including a fish, either sinks, floats to the surface, or suspends, and even the smallest change in barometric pressure is, to a degree, like a change in gravity. Because objects weigh less in water, the effect of a pressure change is far more pronounced beneath the surface than above it. Fish are more in tune with their environment than we humans are. They have an incredible array of pressure-sensing systems, such as the lateral line, which key them in to changes in barometric pressure and, in turn, signal feeding opportunities or foretell the arrival of major weather changes in the form of pressure gradients, which then determine when the bite is on or off. A drop in pressure, such as a cold front, can cause tiny particles of sediment, as well as zooplankton and phytoplankton, to float off the bottom, rising higher in the water column than they normally suspend. This in turn reduces water clarity, and a bite usually follows as baitfish from glass minnows to mullet frenzy into a feed. As these species move, so do the feeding habits of the predators around them, turning the bite on for everything. Pressure changes occur on the passage of any front or pressure gradient.
It does not have to be a cold front to trigger a bite. Many times on the water, I have used a cloud forming and moving toward the boat as a cue to switch to an aggressive angling pattern. As the cloud grows darker and drops begin to fall from aloft, the pressure beneath it drops as well. Usually the fish will turn on, if only for a moment, as it passes by. Stay alert to lightning and move toward safety, but work the summer clouds much as you would a pattern of diving birds. The key to great fishing is a good game plan based on forage, structure, and seasonal patterns, and on working weather patterns into your logistics.
Tuscaloosa, located at the falls of the Black Warrior River in west central Alabama, is the fifth-largest city in Alabama with a population of 90,468, and the seat of Tuscaloosa County. It is named for the Choctaw chieftain Tuskalusa (meaning Black Warrior), who battled and was defeated by Hernando de Soto in 1540 in the Battle of Mauvila. Best known as the home of the University of Alabama, Tuscaloosa is also the center of industry, commerce, healthcare, and education for the region commonly known as West Alabama. The area at the fall line of what would later be known as the Black Warrior River had long been well known to the various Indian tribes whose shifting fortunes brought them to West Alabama. The river shoals at Tuscaloosa represented the southernmost site on the river which could be forded under most conditions. Inevitably, a network of Indian trails converged upon the place, the same network which, in the first years of the 19th Century, began to lead a few white frontiersmen to the area. The pace of white settlement increased greatly after the War of 1812, and a small assortment of log cabins soon arose near the large Creek village at the fall line of the river, which the settlers named in honor of the legendary Chief Tuscaloosa. In 1817, Alabama became a territory, and on December 13, 1819, the territorial legislature incorporated the town of Tuscaloosa, exactly one day before the United States Congress admitted Alabama to the Union as a state. From 1826 to 1846 Tuscaloosa was the capital of Alabama. During this period, in 1831, the University of Alabama was established. The town's population and economy grew rapidly until the departure of the capital to Montgomery caused a rapid decline in population. Establishment of the Bryce State Hospital for the Insane in Tuscaloosa in the 1850s helped restore the city's fortunes.
During the Civil War following Alabama's secession from the Union, several thousand men from Tuscaloosa fought in the Confederate armies. During the last weeks of the War, a brigade of Union troops raiding the city burned the campus of the University of Alabama. Tuscaloosa, too, suffered much damage from the battle and shared fully in the South's economic sufferings which followed the defeat. The construction of a system of locks and dams on the Black Warrior River by the U.S. Army Corps of Engineers in the 1890s opened up an inexpensive link to the Gulf seaport of Mobile, stimulating especially the mining and metallurgical industries of the region. By the advent of the 20th Century, the growth of the University of Alabama and the mental health-care facilities in the city, along with a strong national economy, fueled a steady growth in Tuscaloosa which continued unabated for 100 years. Manufacturing plants of large firms such as Michelin and JVC located in town during the latter half of the 20th Century. However, it was the announcement of the addition of the Mercedes-Benz US International assembly plant in 1993 that best personified the new era of economic prosperity for Tuscaloosa. Geography and climate According to the U.S. Census Bureau, Tuscaloosa has a total area of 66.7 square miles. 56.2 mi² of it is land and 10.5 mi² of it (15.7%) is water. Most of the water within the city limits is in Lake Tuscaloosa, which is entirely in the city limits, and the Black Warrior River. Tuscaloosa lies approximately 60 miles southwest of Birmingham, at the fall line of the Black Warrior River on the boundary between the Appalachian Highland and the Gulf Coastal Plain, approximately 120 miles upriver from its confluence with the Tombigbee River in Demopolis. Consequently, the geography of the area around Tuscaloosa is quite diverse, being hilly and forested to the northeast and low-lying and marshy to the southwest.
The area experiences a typical Southern subtropical climate with four distinct seasons. The Gulf of Mexico heavily influences the climate by supplying the region with warm, moist air. During the fall, winter and spring seasons, the interaction of this warm, moist air with cooler, drier air from the North along fronts creates precipitation. Notable exceptions occur during hurricane season, when storms may move from due south to due north or even from east to west during land-falling hurricanes. The interaction between low- and high-pressure air masses is most pronounced during the severe weather seasons in the spring and fall. During the summer, the jet stream flows well to the north of the southeastern U.S., and most precipitation is consequently convectional, that is, caused by the warm surface heating the air above. Winter lasts from mid-December to late-February; temperatures range from the mid-20s to the mid-50s. On average, the low temperature falls at freezing or below about 50 days a year. While rain is abundant (an average 5.09 in. per month from Dec.-Feb.), measurable snowfall is rare; the average annual snowfall is about 0.6 inches. Spring usually lasts from late-February to mid-May; temperatures range from the mid-50s to the low-80s and monthly rainfall amounts average about 5.05 in. (128 mm) per month. Summers last from mid-May to mid-September; temperatures range from the upper-60s to the mid-90s, with temperatures above 100°F not uncommon, and average rainfall dips slightly to 3.97 in. per month. Autumn, which spans from mid-September to early-December, tends to be similar to spring in terms of temperature and precipitation. As of the census of 2000 there were 77,906 people, 31,381 households, and 16,945 families residing in the city. The population density was 1,385.2/mi². There were 34,857 housing units at an average density of 619.8/mi². The racial makeup of the city was 54% White and 43% Black or African American.
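The population and housing densities quoted above follow directly from the stated totals and land area. A quick check, using only numbers given in the text (small gaps from the quoted values are expected, since those were presumably computed from an unrounded land area):

```python
population = 77_906        # 2000 census population
housing_units = 34_857
land_area_sq_mi = 56.2     # land area from the geography section

pop_density = population / land_area_sq_mi        # text quotes 1,385.2/mi²
housing_density = housing_units / land_area_sq_mi # text quotes 619.8/mi²
print(f"{pop_density:.1f} people/mi², {housing_density:.1f} units/mi²")
```

Both results land within about one unit of the quoted figures, so the census numbers are internally consistent.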
1.40% of the population were Hispanic or Latino of any race. There were 31,381 households, out of which 23.9% had children under the age of 18 living with them, 35.0% were married couples living together, 15.7% had a female householder with no husband present, and 46.0% were non-families. 35.2% of all households were made up of individuals and 9.3% had someone living alone who was 65 years of age or older. The average household size was 2.22 and the average family size was 2.93. In the city the population was spread out with 19.8% under the age of 18, 24.5% from 18 to 24, 25.4% from 25 to 44, 18.5% from 45 to 64, and 11.8% who were 65 years of age or older. The median age was 28 years. For every 100 females there were 90.8 males. For every 100 females age 18 and over, there were 87.9 males. The median income for a household in the city was $27,731, and the median income for a family was $41,753. Males had a median income of $31,614 versus $24,507 for females. The per capita income for the city was $19,129. About 14.2% of families and 23.6% of the population were below the poverty line, including 25.3% of those under age 18 and 13.4% of those age 65 or over. Government and Politics Tuscaloosa has a strong-mayor variant, mayor-council form of government, led by a mayor and a seven-member city council. The mayor is elected by the city at-large and serves four-year terms. Council members are elected to single-member districts every four years as well. Neither the mayor nor the members of the city council are term-limited. All elected offices are nonpartisan. The mayor administers the day-to-day operations of the city, including overseeing the various city departments, over whom he has hiring and firing power. The mayor also acts as ambassador of the city. The mayor sits in city council meetings and has a tie-breaking vote. The current Mayor of Tuscaloosa is Walter Maddox, who was elected to office in September 2005. Prior to Maddox, Alvin A.
DuPont had served as mayor for 24 years. The city council is a legislative body that considers policy and passes law. The council also passes the budget for mayoral approval. Any resolution passed by the council is binding law. The majority of work in the council is done by committee, usually consisting of a chairman, two other council members, and relevant non-voting city employees. Among the council members are Cynthia Lee Almond (District 3, elected 2005) and William Tinker, III (District 7, elected 2005). Tuscaloosa, as the largest county seat in western Alabama, serves as a hub of state and federal government agencies. In addition to the customary offices associated with the county courthouse, namely two District Court Judges, six Circuit Court Judges, the District Attorney and the Public Defender, several Alabama state government agencies have regional offices in Tuscaloosa, such as the Alabama Department of Transportation and the Alabama State Troopers. Also, several federal agencies operate bureaus out of the Federal Courthouse in Tuscaloosa. Tuscaloosa is located partially in both the 6th and 7th Congressional Districts, which are represented by Spencer Bachus and Artur Davis respectively. On the state level, the city is split among the 5th, 21st, and 24th Senate districts and the 62nd, 63rd, and 70th House districts in the Alabama State Legislature. Despite its image as a college town, Tuscaloosa boasts a diversified economy based on all sectors of manufacturing and service. 25% of the labor force in the Tuscaloosa Metropolitan Statistical Area is employed by federal, state, and local government agencies. 16.7% is employed in manufacturing; 16.4% in retail trade and transportation; 11.6% in finance, information, and private enterprise; 10.3% in mining and construction; and 9.2% in hospitality. Education and healthcare account for only 7.2% of the area workforce, with the remainder employed in other services.
The city's industrial base includes Elk Corporation of Alabama, Nucor Steel Tuscaloosa, BF Goodrich Tire Manufacturing, JVC America, Phifer Incorporated, Gulf States Paper Corporation, and the Mercedes-Benz U.S. International, Inc., assembly plant. Health-care and education serve as the cornerstone of Tuscaloosa's service sector, which includes the University of Alabama, DCH Regional Medical Center, Bryce State Mental Hospital, the William D. Partlow Developmental Center, and the Tuscaloosa VA Medical Center. The University of Alabama is the dominant institution of higher learning. Enrolling approximately 24,000 students, UA has been a part of Tuscaloosa's identity since it opened its doors in 1831. Stillman College, which opened in 1875, is a historically Black liberal arts college which enrolls approximately 1,200 students. Additionally, Shelton State Community College, one of the largest in Alabama, is located in the city. The school enrolls 8,000 students from all backgrounds and income levels. The Tuscaloosa City School System serves the city. It is overseen by the Board of Education, which is composed of eight members elected by district and a chairman elected by a citywide vote. Operating with a $100 million budget, the system enrolls approximately 10,300 students. The system consists of 19 schools: 11 elementary schools, 3 middle schools, 3 high schools (Paul Bryant High School, Central High School, and Northridge High School), and 2 specialty schools (the Tuscaloosa Center for Technology and Oak Hill School for special needs students). In 2002, the system spent $6,313 per pupil, the 19th highest amount of the 120 school systems in the state. Tuscaloosa is home to a variety of cultural sites and events reflective of its historical and modern role in Alabama and the Southeast in general. Many of these cultural events are sponsored by the University of Alabama.
Numerous performing arts groups and facilities, historical sites, and museums dedicated to subjects as varied as American art and collegiate football dot the city. During football season the area known as "The Strip" pulsates with students, alumni, locals and visitors. The Tuscaloosa Public Library is a city/county agency with nearly 200,000 items on catalog. 46,857 registered patrons, roughly 28% of the population of the county, use the library on a regular basis. There are currently three branches: the Main Branch on Jack Warner Parkway, the Weaver-Bolden Branch, and the Brown Branch in Taylorville. Most of the museums in Tuscaloosa are found downtown or on the campus of the University. Downtown is the home of the Children’s Hands-On Museum of Tuscaloosa and the Murphy African-American Museum. The Alabama Museum of Natural History and the Paul Bryant Museum are located on the University campus. The Westervelt-Warner Museum of American Art is located in northern Tuscaloosa at Jack Warner's NorthRiver Yacht Club. Moundville Archaeological Park and the Jones Archaeological Museum are located 15 miles south of Tuscaloosa in Moundville. The University of Alabama also currently fields championship-caliber teams in football, men's baseball, men's and women's basketball, women's gymnastics, and women's softball. These teams play in athletics facilities on the University campus, including Bryant-Denny Stadium, Coleman Coliseum, Sewell-Thomas Baseball Stadium, the Alabama Softball Complex, and the Ol' Colony Golf Complex. Stillman College fields teams in football, basketball, and other sports. In the past decade, Stillman has gone through a renaissance of renovations, including a new football stadium. Shelton State fields men's and women's basketball, baseball, and softball teams, each with on-campus facilities. Tuscaloosa is part of the Birmingham-Tuscaloosa-Anniston television market, which is the 40th largest in the nation. All major networks have a presence in the market.
WBMA-LP is the ABC affiliate, WIAT-TV is the CBS affiliate, WBRC 6 is the Fox affiliate, WVTM-TV is the NBC affiliate, WBIQ 10 is the PBS affiliate, WTTO is the CW affiliate, and WABM is the MyNetworkTV affiliate. Additionally, WVUA-CA, an independent station, is operated by the University of Alabama. Health and medicine DCH Regional Medical Center is the main medical facility in Tuscaloosa. Other major medical centers in Tuscaloosa include the 702-bed VA Medical Center and the 422-bed Bryce State Mental Hospital. The city lies at the intersection of U.S. Highways 11, 43, and 82; Alabama State Routes 69, 215, and 216; and the duplexed (conjoined) I-20 and I-59. Interstate 359 spurs off from I-20/I-59 and heads northward, ending just shy of the Black Warrior River in downtown Tuscaloosa. Tuscaloosa is served by the Tuscaloosa Transit Authority, which operates the Tuscaloosa Trolley System. The Tuscaloosa Regional Airport is located on the north side of the Black Warrior River west of downtown Northport. Barge traffic routinely transports goods along the Black Warrior River from Birmingham and Tuscaloosa to the Alabama State Docks at Mobile, on the coast of the Gulf of Mexico. Via the Tennessee-Tombigbee Waterway, the city is connected to the Ohio River valley. "Tuscaloosa, Alabama." Wikipedia, The Free Encyclopedia. 26 April 2007, 02:03 UTC. Accessed 30 April 2007.
Rate of enzyme-mediated reactions
Enzymes can increase reaction rate. Michaelis-Menten kinetics describe the rate of enzyme-mediated reactions for many enzymes; to determine the maximum rate of an enzyme-mediated reaction, the substrate concentration ([S]) is increased while the other factors that affect the rate (pH, temperature, etc.) are held constant.
Active transport is the mediated transport of biochemicals, and other atomic/molecular substances, across membranes. Unlike passive transport, this process requires energy.
Facilitated diffusion is a process of passive transport (diffusion) by which molecules cross membranes with the help of transport proteins. Small uncharged molecules can easily diffuse across cell membranes, but many others cannot, due to the hydrophobic nature of the membrane lipids.
Action potentials do propagate back into the dendrites once initiated in the axon in most neurons. This backpropagating action potential is mediated by the activation of voltage-gated ion channels and can interact with synaptic input to alter synaptic activity.
Neurons communicate with one another across synapses. This communication is usually chemically mediated by rapid secretion of neurotransmitter molecules.
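The Michaelis-Menten kinetics referenced above boil down to the standard rate law v = Vmax·[S] / (Km + [S]). A minimal sketch (the Vmax and Km values below are illustrative, not from the source):

```python
def reaction_rate(s, v_max, k_m):
    """Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])."""
    return v_max * s / (k_m + s)

# At [S] equal to Km the rate is exactly half of Vmax, and as [S]
# grows far beyond Km the rate saturates toward Vmax.
print(reaction_rate(s=2.0, v_max=10.0, k_m=2.0))   # 5.0
print(reaction_rate(s=1e6, v_max=10.0, k_m=2.0))   # just under 10.0
```

This saturation behavior is why the maximum rate is found by raising [S] until the rate stops increasing.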
Puppy Leadership and Dog Psychology
A puppy's need for leadership, most especially in relation to dominance aggression, is a hot topic in today's climate. Some argue that dogs do not form hierarchical packs with humans, a view based on studies of dogs in the wild and of domestic dogs in both controlled and uncontrolled environments. In these circles, rank-reduction programmes are considered without merit and, in severe instances, thought to induce aggression in response to inappropriate correction (often disguised as training). While we agree that there are such rituals (or methods of puppy training) that are nothing more than meaningless motions, and in some cases detrimental to the dog's emotional stability, we believe there is something to be understood about balancing the relationship of a dog or puppy within its family unit. Consider that dogs share in excess of 97% of their genes with wolves; the fact that the two species can still interbreed indicates how closely related they remain. We naturally expect dogs and puppies to adapt to our human world (and many do without remark), and we treat them as equals. However, dogs, like wolves and other canine species, cannot live as equals; they either lead or are led in each circumstance. Dogs and puppies without boundaries, or with inconsistent boundaries, will live as opportunists, establishing their own rules and stirring conflict when otherwise accepted behaviour is denied. Furthermore, enforcing puppy obedience on a dog without clear boundaries is counter-productive and plagues the dog with unnecessary confusion, which translates to stress. With this said, we do assess every trainee puppy's role in their home environment and we do make recommendations based on the individuality of the situation.
In most cases, a form of leadership programme is necessary to complement the puppy training. Please know, however, that leadership need not be confrontational; rather, it defines the expectations for a harmonious relationship between human and dog.
George Catlett Marshall (December 31, 1880 – October 16, 1959) was an American military leader and statesman best remembered for his leadership in the Allied victory in World War II and for his work establishing the post-war reconstruction effort for Europe, which became known as the Marshall Plan. Marshall was born into a middle-class family in Uniontown, Pennsylvania. While attending the Virginia Military Institute he was initiated into the now dormant Beta ('01) chapter of Kappa Alpha Order. In 1948, he was awarded the Distinguished Achievement Award for his role and contributions during and after WWII. Marshall was instrumental in getting the U.S. Army and Army Air Corps reorganized and ready for war. Marshall wrote the document that would become the central strategy for all Allied operations in Europe, selected Dwight Eisenhower as Supreme Commander in Europe, and designed Operation Overlord, the invasion of Normandy. Throughout the remainder of World War II, Marshall coordinated all Allied operations in Europe and the Pacific. He was characterized as the organizer of Allied victory by Winston Churchill. Time Magazine named Marshall Man of the Year in 1944. After WWII he was sent to China to negotiate a truce and build a coalition government between the Nationalists and Communists fighting the Chinese Civil War. His efforts failed and he was recalled in January 1947. Marshall 'retired' in November 1945 and was named Secretary of State in 1947. As such, on June 5, 1947, in a speech at Harvard University, he outlined the U.S. government's preparedness to contribute to European recovery. The European Recovery Plan, which became known as the Marshall Plan, helped Europe quickly rebuild and earned Marshall the honor of being named TIME's Man of the Year in 1948 and the Nobel Peace Prize in 1953. In 1949 he resigned from the State Department and was named president of the American National Red Cross.
He was named Secretary of Defense in 1950, but retired from politics for good in 1951 after Senator Joseph McCarthy implied he was a traitor and denounced him for making decisions that "aided the Communist drive for world domination". Marshall died on October 16, 1959. He married Elizabeth Carter Cole of Lexington, Virginia, in 1902. She died in 1927. In 1930 he married Katherine Boyce Tupper Brown. After graduating from the Virginia Military Institute in 1901, he entered the U.S. Army, where he was to have a long and distinguished career. Until World War I, he was posted to various positions in the US and the Philippines, and was trained in modern warfare. During the war he had roles as a planner of both training and operations. Between WWI and WWII, he was a key planner and writer in the War Department, spent three years in China, and taught at the Army War College. He went to France in the summer of 1917 as the director of training and planning for the 1st Infantry Division. In mid-1918, he was promoted to American Expeditionary Forces headquarters, where he was a key planner of American operations. He was instrumental in the design and coordination of the Meuse-Argonne offensive, which forced Germany to sue for peace. In 1919 he became an aide-de-camp to General John J. Pershing. Between 1920 and 1924, while Pershing was Army Chief of Staff, Marshall worked in a number of positions in the US Army, focusing on training and teaching modern, mechanised warfare. He was promoted to Brigadier General in October 1936. In 1939 he was selected by Franklin D. Roosevelt to be Army Chief of Staff, a position he held until 1945.
Dates of rank
- Second Lieutenant, United States Army: February 2, 1902
- First Lieutenant, United States Army: March 7, 1907
- Captain, United States Army: July 1, 1916
- Major, National Army: August 5, 1917
- Lieutenant Colonel, National Army: January 5, 1918
- Colonel, National Army: August 27, 1918
- Major, Regular Army (reverted to permanent rank): July 1, 1920
- Lieutenant Colonel, Regular Army: August 21, 1923
- Colonel, Regular Army: September 1, 1933
- Brigadier General, Regular Army: October 1, 1936
- Major General, Regular Army: September 1, 1939
- General, Regular Army, for service as Army Chief of Staff: September 1, 1939
- General of the Army, Army of the United States: December 16, 1944
- General of the Army rank made permanent in the Regular Army: April 11, 1946
Notes about components:
- United States Army: Regular U.S. Armed Forces prior to World War I
- National Army: Combined conscript and regular United States forces during World War I
- Regular Army: Regular volunteer forces after 1930. Considered "career" professionals
- Army of the United States: Combined draft and regular forces of World War II
Awards and decorations
"We are determined that before the sun sets on this terrible struggle, Our Flag will be recognized throughout the World as a symbol of Freedom on the one hand and of overwhelming force on the other." -- George Marshall (May 29, 1942; Larry I. Bland and Sharon Ritenour Stevens, ed., The Papers of George Catlett Marshall, Vol. 3, pp. 212-14)
"I couldn't sleep nights, George, if you were out of Washington." - President Roosevelt, reported by Henry Stimson, 1943
"...what a joy it must be to [Marshall] to see how the armies he called into being by his own genius have won immortal renown. He is the true 'organizer of victory.'" - Winston Churchill, 1945
"A man devoted to the daily study of war on several continents with all the ardour of a certified public accountant."
- Alistair Cooke, 1959
"Hitherto I had thought of Marshall as a rugged soldier and a magnificent organizer and builder of armies - the American Carnot. But now I saw that he was a statesman with a penetrating and commanding view of the whole scene." - Winston Churchill
United States Secretary of Defense: preceded by Louis A. Johnson; succeeded by Robert A. Lovett
Axillary nerve dysfunction

Axillary nerve dysfunction is nerve damage that leads to a loss of movement or sensation in the shoulder.

Alternative names: Neuropathy - axillary nerve

Causes, incidence, and risk factors

Axillary nerve dysfunction is a form of peripheral neuropathy. It occurs when there is damage to the axillary nerve, which supplies the deltoid muscles of the shoulder and the skin around it. A problem with just one nerve, such as the axillary nerve, is called mononeuropathy. The usual causes are:
- Direct trauma
- Long-term pressure on the nerve
- Pressure on the nerve from nearby body structures
- Shoulder injury

Entrapment creates pressure on the nerve where it passes through a narrow structure. The damage may destroy the myelin sheath that covers the nerve, or part of the nerve cell (the axon). Damage of either type reduces or prevents the movement of impulses through the nerve.

Conditions that can lead to axillary nerve dysfunction include:
- Body-wide (systemic) disorders that cause nerve inflammation
- Deep infection
- Fracture of the upper arm bone (humerus)
- Pressure from casts or splints
- Improper use of crutches
- Shoulder dislocation

In some cases, no cause can be found.

Symptoms
- Numbness over part of the outer shoulder
- Shoulder weakness, especially when lifting the arm up and away from the body

Signs and tests

Your health care provider will examine your neck, arm, and shoulder. Weakness of the shoulder may cause difficulty moving your arm. The deltoid muscle of the shoulder may show signs of muscle atrophy.

Tests that may be used to evaluate axillary nerve dysfunction include:
- EMG and nerve conduction tests -- these will be normal right after the injury and should be performed several weeks after the injury or symptoms start
- MRI or x-rays of the shoulder

Treatment

Depending on the cause of the nerve disorder, some people do not need treatment. They will get better on their own. However, the rate of recovery can be different for everyone. It can take many months to recover.
Anti-inflammatory medications may be given if you have:
- Sudden symptoms
- Small changes in sensation or movement
- No history of injury to the area
- No signs of nerve damage

These medicines reduce swelling and pressure on the nerve. They may be injected directly into the area or taken by mouth.

Other medicines include:
- Over-the-counter pain medicines, which may be helpful for mild pain (neuralgia)
- Other medications (phenytoin, carbamazepine, gabapentin, pregabalin, duloxetine, or tricyclic antidepressants such as nortriptyline), which may reduce the stabbing pains that some people experience
- Opiate pain relievers, such as morphine or fentanyl, which may be needed to control severe pain

Whenever possible, avoid or reduce medication use to lessen the risk of side effects.

If your symptoms continue or get worse, you may need surgery. Surgery may be done to see if a trapped nerve is causing your symptoms. In this case, surgery to release the nerve may help you feel better. Physical therapy may help you maintain muscle strength. Job changes, muscle retraining, or other forms of therapy may be recommended.

Expectations (prognosis)

It may be possible to make a full recovery if the cause of the axillary nerve dysfunction can be identified and successfully treated.

Calling your health care provider

Call for an appointment with your health care provider if you have symptoms of axillary nerve dysfunction. Early diagnosis and treatment increase the chance of controlling symptoms.

Prevention

Preventive measures vary, depending on the cause. Avoid putting pressure on the underarm area for a long period of time. Make sure casts, splints, and other appliances fit properly. When you use crutches, learn how to avoid putting pressure on the underarm.

Last reviewed 2/5/2011 by David C. Dugdale, III, MD, Professor of Medicine, Division of General Medicine, Department of Medicine, University of Washington School of Medicine. Also reviewed by Joseph V. Campellone, MD, Division of Neurology, Cooper University Hospital, Camden, NJ.
Review provided by VeriMed Healthcare Network. Also reviewed by David Zieve, MD, MHA, Medical Director, A.D.A.M., Inc. - The information provided herein should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. - A licensed medical professional should be consulted for diagnosis and treatment of any and all medical conditions. - Call 911 for all medical emergencies. - Links to other sites are provided for information only -- they do not constitute endorsements of those other sites. Any duplication or distribution of the information contained herein is strictly prohibited.
What is bone cancer?

Bone is the framework that supports the body. Most bones are hollow. Bone marrow is the soft tissue inside hollow bones. The main substance of bone is made up of a network of fibrous tissue onto which calcium salts are laid down. This makes the bone very hard and strong. At each end of the bone is a softer bone-like tissue called cartilage that acts as a cushion between bones. The outside of the bone is covered with a layer of fibrous tissue.

The bone itself contains 2 kinds of cells. Osteoblasts are cells that form the bone. Osteoclasts are cells that dissolve bone. Although we think of bone as unchanging, the truth is that it is very active. New bone is always forming and old bone dissolving.

The marrow of some bones is only fatty tissue. In other bones the marrow is a mixture of fat cells and the cells that make blood cells. These blood-forming cells make red blood cells, white blood cells, and platelets.

Types of bone tumors

Most of the time when someone is told they have cancer in their bones, the doctor is talking about a cancer that started somewhere else and then spread to the bone. This is called metastatic cancer (not bone cancer). This can happen to people with many different types of advanced cancer, such as breast cancer, prostate cancer, lung cancer, and many others. Under a microscope, these cancer cells in the bone look like the cancer cells that they came from. If someone has lung cancer that has spread to the bone, the cells there will look and act like lung cancer cells and they will be treated the same way.

To learn more about cancer that has spread to bone, please see the American Cancer Society document Bone Metastasis, as well as the document on the place where the cancer started (Breast Cancer, Lung Cancer (Non-Small Cell), Prostate Cancer, etc.).

Other kinds of cancers that are sometimes called “bone cancers” start in the bone marrow – in the blood-forming cells – not the bone itself.
These are not true bone cancers. The most common of these is multiple myeloma. Certain lymphomas (which more often start in lymph nodes) and all leukemias start in bone marrow. To learn more about these cancers, refer to the document for each.

A primary bone tumor starts in the bone itself. True (or primary) bone cancers are called sarcomas. A sarcoma is a cancer that starts in bone, muscle, tendons, ligaments, fat tissue, or some other tissues in the body.

There are different types of bone tumors. Their names are based on the bone or tissue that is involved and the kind of cells that make up the tumor. Some are cancer (malignant). Others are not cancer (benign). Most bone cancers are called sarcomas. Benign bone tumors do not spread to other tissues and organs. They can usually be cured by surgery. The information here does not cover benign bone tumors.

Bone tumors that are cancer (malignant)

Osteosarcoma: Osteosarcoma (also called osteogenic sarcoma) is the most common true bone cancer. It is most common in young people between the ages of 10 and 30. But about 10% of cases are people in their 60s and 70s. This cancer is rare during middle age. More males than females get this cancer. These tumors start most often in bones of the arms, legs, or pelvis. This type of bone cancer is not discussed in this document, but is covered in detail in our document, Osteosarcoma.

Chondrosarcoma: This is cancer of the cartilage cells. Cartilage is a softer form of bone-like tissue. Chondrosarcoma is the second most common true bone cancer. It is rare in people younger than 20. After age 20, the risk of this cancer keeps on rising until about age 75. Women get this cancer as often as men. Chondrosarcomas can develop in any place where there is cartilage. It most often starts in cartilage of the pelvis, leg, or arm, but it can start in many other places, too. Chondrosarcomas are given a grade, which measures how fast they grow. The lower the grade, the slower the cancer grows.
When cancer grows slowly, the chance that it will spread is lower and the outlook is better. There are also some special types of chondrosarcoma that respond differently to treatment and have a different outlook for the patient. These special types look different when seen under a microscope.

Ewing tumor: This cancer is also called Ewing sarcoma. It is named after Dr. James Ewing, the doctor who first described it in 1921. It is the third most common bone cancer. Most Ewing tumors start in bones, but they can start in other tissues and organs. This cancer is most common in children and teenagers. It is rare in adults older than 30. This type of bone cancer is not discussed in this document, but is covered in detail in our document, Ewing Family of Tumors.

Malignant fibrous histiocytoma (MFH): This cancer more often starts in the soft tissues around bones (such as ligaments, tendons, fat, and muscle) rather than in the bone itself. If it starts in the bones, it most often affects the legs or arms. It usually occurs in older and middle-aged adults. MFH mostly tends to grow into nearby tissues, but it can spread to distant sites, like the lungs. (Another name for this cancer is pleomorphic undifferentiated sarcoma.)

Fibrosarcoma: This is another type of cancer that starts more often in “soft tissues” than it does in the bones. Fibrosarcoma usually occurs in older and middle-aged adults. Leg, arm, and jaw bones are most often affected.

Giant cell tumor of bone: This type of bone tumor has both benign (not cancer) and malignant forms. The benign form is most common. These don’t often spread to distant sites, but after surgery they tend to come back where they started. Each time they come back after surgery they are more likely to spread to other parts of the body. These tumors often affect the arm or leg bones of young and middle-aged adults.

Chordoma: This tumor usually occurs in the base of the skull and bones of the spine.
It is found most often in adults older than 30. It is about twice as common in men as in women. Chordomas tend to grow slowly and usually do not spread to other parts of the body. But they often come back in the same place if they are not removed completely. When they do spread, they tend to go to the lymph nodes, lungs, and liver.

Last Medical Review: 12/05/2012
Last Revised: 01/24/2013
ICJ to hear Chagos Islands case

The International Court of Justice has ruled that it can hear a case challenging the UK's decision to establish a marine protected area around the Chagos Islands. The process could also require the UK to justify its decisions to lease one of the islands to the United States for military purposes, and to remove and re-settle the Chagossians from the territory.

Initially dependencies of Mauritius, which was a British colony until 1968, the Islands were detached three years prior to that to become part of the British Indian Ocean Territory. In 1966, the UK leased Diego Garcia, the largest of the Islands, to the United States, and the territory has been governed since then by UK/US Agreements regulating its use for defence purposes. In 1967, the UK bought out the Islands' plantation owners, shut down the plantations and stopped regular ship supply. The Chagossians, numbering some 2,000 at the time, were evacuated and re-settled in Mauritius and the Seychelles over a period of six years. Advocacy groups maintain that Chagossians were not consulted during this process, nor properly compensated or re-settled. There are several reports - including by the UN Special Rapporteur on Indigenous Peoples - documenting discrimination, abuse and poverty suffered by Chagossians in Mauritius and the Seychelles.

Following publicity of their situation in the 1990s, the UK permitted Chagossians to return to parts of the territory in 2000. This decision was overturned in 2004 as Diego Garcia assumed new strategic importance. Since then, Chagossians have been pursuing their right to return through a series of legal challenges. Successive reports by UN human rights mechanisms and by parliamentary committees have recommended that this right be upheld.

In 2009, the situation was further complicated by the UK's decision to establish a 'marine protected area' around the Chagos Islands.
While many MPAs are inhabited, the designation of this particular area as a 'no-take' reserve would severely curtail human activity, from fishing to construction of houses. UNA-UK wrote to the UK government in May 2010, asking it to reconsider its position on the MPA:

UNA-UK is writing to express its concerns regarding the decision announced on 1 April 2010 to establish a ‘no-take’ Marine Protected Area (MPA) in the British Indian Ocean Territory, which includes the Chagos Islands. While UNA-UK welcomes the efforts to protect the Chagos archipelago and notes that in his announcement, former Foreign Secretary David Miliband described the decision as “without prejudice to the outcome of the current, pending proceedings before the European Court of Human Rights (ECHR)”, we are disappointed that neither his statement nor the consultation document made reference to the representations by members of the Chagossian diaspora for recognition of their ‘right to return’.

The UN Human Rights Committee, which monitors implementation of the UK’s obligations under the International Covenant on Civil and Political Rights, has twice recommended (most recently in 2008) that the UK should uphold the right of the Chagos Islanders to return. This position was echoed in a 2008 Foreign Affairs Select Committee Report on Overseas Territories, and reiterated several times by MPs and Lords from the three main political parties in the 6 April parliamentary debates.

The UK government has long recognised the plight of the Chagossians. Former FCO Minister Chris Bryant repeatedly stated during the 6 April Commons debate how much the UK regrets the ‘shameful’ treatment of the Chagossians in the 1960s and 1970s. In a letter to the UK Chagos Support Association just prior to the general election, the new Foreign Secretary, William Hague, said that “if elected to serve as the next British government, we will work to ensure a fair settlement of this long-standing dispute”.
Deputy Prime Minister Nick Clegg also made a strong statement, saying “[the Liberal Democrats] have actively supported [the Chagossian] cause in the past and we will continue to aid their campaign to see justice done. We have been appalled that the [previous] government has wasted time, money and effort defending the indefensible”.

UNA-UK believes that the environmental and human rights objectives pertaining to Chagos are not necessarily incompatible, and that it is vital to ensure that the Chagossians are fully involved in the process of fleshing out the details of the MPA. We therefore urge the new UK government to:

- work with the Chagossians (and all other relevant parties) on an MPA solution that takes into account their right to self-determination as set out in the two International Covenants and the Declaration of the Rights of Indigenous Peoples, and simultaneously provides adequate protection for a very precious marine ecosystem;
- consider, once the outcome of the ECHR decision is known, the recommendations made by the UN Human Rights Committee on the right of Chagossians to return; and
- report on the situation in the UK's next periodic report to the UN Human Rights Committee.
Common Career Technical Core

The Common Career Technical Core (CCTC) is a state-led initiative to establish a set of rigorous, high-quality standards for Career Technical Education (CTE) that states can adopt voluntarily. The standards have been informed by state and industry standards and developed by a diverse group of teachers, business and industry experts, administrators and researchers. The initiative is being coordinated by the National Association of State Directors of Career Technical Education Consortium (NASDCTEc), which represents the state and territory heads of secondary, postsecondary and adult CTE across the nation. Forty-two states, the District of Columbia and Palau participated in the development stage of the CCTC.

The development of the CCTC was a multi-step process that incorporated input from approximately 3,500 individuals representing K-12 education, business and industry and higher education from across the nation. The process for developing the CCTC was informed by:
• High-quality state and industry standards;
• Input and guidance from educators, business and industry and state leaders; and
• Feedback from the public.

The CCTC includes a set of standards for each of the 16 Career Clusters™ and their corresponding Career Pathways that define what students should know and be able to do after completing instruction in a program of study. The CCTC also includes an overarching set of Career Ready Practices that apply to all programs of study. The Career Ready Practices include 12 statements that address the knowledge, skills and dispositions that are important to becoming career ready.

In June, the NASDCTEc Board of Directors voted in full support and approved the CCTC, which defines common expectations for CTE organized by the National Career Clusters™ Framework.
Learn more about the CCTC or share information about the initiative with these resources:

An online database of the CCTC standards provides an opportunity to create reports specific to the needs of the user. In addition, resources including performance elements and sample indicators for the CCTC standards are provided as a tool for exploring and understanding the standards.

A public license has been created for the use of the CCTC standards and is available to review.

Representatives from organizations across the nation have expressed support for the CCTC. Learn who they are and share their statements with others as you work to gain support of and raise awareness about the CCTC in your state.

A summary of the process used to achieve the outcomes of the CCTC. The report highlights the steps used, the participation and recommendations for future revisions and engagements associated with the development of the CCTC.

A summary of the process, methodology and approach used to update the 2008 Knowledge and Skills Statements and prepare for transition for use by the Common Career Technical Core Working Groups in the development of the CCTC.

For additional information about the CCTC, contact email@example.com.
Brachial Plexus Injury in Infants

What is the brachial plexus?

The brachial plexus is a group of nerves that begins in the neck and provides feeling and movement to the shoulder, arm, forearm, and hand. Signs of damage in this area include a limp arm or an arm with no muscle control in the shoulder, arm, or hand. Infants may also lack feeling in their hand and arm. Brachial plexus injuries in infants are not painful.

What is a brachial plexus injury?

Brachial plexus injuries are caused when these nerves are stretched during the birth of a child. Damage to the nerves occurs in 0.38 to 3.6 per 1000 live births. Eight to twenty-three percent of these infants have nerve damage on both sides of the body. Ninety-three percent of these infants get much of their function back by 3 months of age when treated with therapy or when they are just watched. This is a good sign that these infants will do well in the future. These children likely will not need surgery.

How are brachial plexus injuries treated?

A small number of infants with this type of problem will need surgery. Therapy before and after surgery will improve long-term results.

How do you measure the extent of a brachial plexus injury?

The tests listed below may be done before, during, or after surgery to show the extent of your child's nerve damage.
- EMG (electromyography) measures how the nerve and muscle work together.
- SSEPs (somatosensory evoked potentials) measure how the nerve communicates between the spinal cord and the brain.
- NAPs (nerve action potentials) test for nerve conduction across the injured site.
- Myelogram CT (myelogram computed tomography) measures spinal cord and nerve root damage by taking x-rays after a dye is injected into the spinal cord.
- MRI (magnetic resonance imaging) provides a detailed picture of the spinal cord and nerve roots.

What type of brachial plexus injury can occur?

A stretch injury may cause three types of damage. Your child may have one type or a combined injury.
Avulsion
The nerve root separates from the spinal cord. This problem will not repair itself without surgery.

Neuroma-in-continuity with good conduction
This is from damage to the nerve, but a message still travels through it. The nerve will grow back over time.

Neuroma-in-continuity without conduction
There is damage to the nerve, and messages are not able to travel through it. The nerve will need to be repaired with surgery.

In most cases, it is only during surgery that we can tell if a message is able to travel through the damaged nerves or not.

Types of Repair
- The surgeon removes the scar tissue around the nerve.
- The damaged part of the nerve is removed or bypassed and replaced with a nerve graft. Nerve grafts are taken from the leg, arm, or neck at the time of surgery.
- A nerve from another place in the body, such as the diaphragm, the neck, or the chest wall, is used to repair the damaged nerve.

Your child's surgery may include cutting away scar tissue, reconnecting two ends of a nerve, or making a bypass or a graft around the injured nerve. An incision is made from the neck to the armpit. In some cases, an incision is made on the back near the shoulder blade.

You will be taught how to prepare your child for surgery at a clinic visit. After surgery, your child will stay in the hospital a few days. The surgical arm will be fastened to the chest with an Ace wrap or sling for a couple of weeks so that it cannot be moved. Therapy will begin in 2 weeks and will last for many months. The nerve recovery takes many months, up to a year. The nerve grows back about one inch per month.

When to Call Your Surgeon(s) or Nurse Practitioner

Call us if your child has any of these signs or symptoms:
- Redness, pain, swelling, or drainage at the incision site
- Fever greater than 101.5ºF
- Change in color, temperature, or feeling in the arm or hand

Please call with any questions or concerns.
Dr. Iskandar, Department of Neurosurgery: (608) 263-9651
Dr. Bentz, Department of Plastic Surgery: (608) 263-1367
Dr. Mark Kiehn, Department of Plastic Surgery: (608) 263-1367

The information provided should not be used during any medical emergency or for the diagnosis or treatment of any medical condition. A licensed physician should be consulted for diagnosis and treatment of any and all medical conditions. Call 911 for all medical emergencies. Any duplication or distribution of the information contained herein is strictly prohibited.

Last Updated: 02/26/2010
Copyright © 02/26/2010 University of Wisconsin Hospitals and Clinics Authority. All rights reserved. Produced by the Department of Nursing. HF#5470
A cat with bad breath has much more than a social problem. Unlike humans, kitties with halitosis are probably in considerable pain and their lives may be in danger. That’s because cats with unpleasant breath probably suffer from feline dental disease. This condition not only gives a cat bad breath and more than a little pain while eating, but it can also cause infections in the gums that may migrate to vital organs, such as the heart and the kidneys, possibly resulting in serious illness and even death.

Although the American Veterinary Dental Society notes that 70 percent of cats develop dental problems by the age of three, your cat doesn’t have to be one of them. All you need to do is brush its teeth regularly, following these six steps:

Step 1: Buy the right brush and paste

Experts agree that toothpaste for people shouldn’t be used to clean feline teeth. Since cats don’t spit out the toothpaste after brushing, regular fluoridated paste could upset your cat’s stomach. Instead, buy toothpaste specially designed for cats. Also purchase a child’s toothbrush with soft bristles. But don’t plan on using either right away. You need to first prep your cat for what lies ahead.

Step 2: Get in position

Introduce your cat to the place where you’ll be doing the toothbrushing. “Start by just putting your cat up on a counter top, or on your lap, or wherever you ultimately would like to do the toothbrushing, and give a reward,” suggests Valerie Creighton, DVM, president of the American Association of Feline Practitioners. “That can be a small food treat or scratching of the cheeks -- whatever makes the cat happy.”

Step 3: Try using a tantalizing lure

After a few days at the tooth-brushing place, begin to help your cat realize that getting its mouth worked on can be a good thing. Bring your cat to where you plan to brush the teeth, and “put some tuna fish juice on the edge of a cotton swab,” suggests Jan Bellows, DVM, a veterinary dentist in Weston, Fla.
“Then, apply it to the area where the gums meet the teeth.” Finish with a treat.

Step 4: Introduce the toothpaste

Next, help your cat discover the pleasures of toothpaste made just for the feline set. At the brushing zone, “introduce the toothpaste by letting the cat sniff at the uncapped tube, followed by a treat,” recommends Dr. Creighton, who practices in Thousand Oaks, Calif. After a day or two of sniffing the tube, put some toothpaste on your finger for your cat to sniff or lick each day.

Step 5: Bring on the toothbrush

After a few days of taking the toothpaste from your finger, your cat should be ready to accept it from a toothbrush. “Get the cat used to the presence of the brush without forcing it into the mouth,” Dr. Creighton suggests. Once your cat is used to seeing the brush, try placing your hands on its face and running your fingers along its lips as though you were getting ready to brush the teeth -- but stop short of actually doing so.

Step 6: Brush the teeth

If you detect that your feline friend is comfortable with the brush and having your hands near its mouth, then you’re ready to try actually brushing its teeth. If your cat still bristles at the bristles, don’t give up. Instead, try lifting your cat’s lips with your fingertips and gently brushing the outer surfaces of your pet’s teeth. If worst comes to worst, your cat can take care of the inside surfaces with the abrasive action of its tongue until you can take it to your veterinarian for a full, professional tooth cleaning, which should be done at least once a year and up to four times annually if gum disease has already set in.

You can check for dental and gum disease as you work with your cat. Look for symptoms including brown or yellow staining on the teeth, red or swollen gums and bleeding gums. Even if you regularly take your cat for professional toothbrushings, keep in mind that more than your feline’s teeth will be clean after each visit.
You’ll then have a clean slate upon which to start acclimating your cat to a home dental care routine. “This process may take several weeks, but by the end, you and your kitty will be far less traumatized by the idea of you brushing its teeth,” Dr. Creighton predicts. “And you’ll be well on your way to a healthy habit where your cat’s teeth are concerned.” Copyright (c) 2008 Studio One Networks. All rights reserved. *DISCLAIMER*: The information contained in or provided through this site section is intended for general consumer understanding and education only and is not intended to be and is not a substitute for professional advice. Use of this site section and any information contained on or provided through this site section is at your own risk and any information contained on or provided through this site section is provided on an "as is" basis without any representations or warranties.
Study promoter activity using the Living Colors Fluorescent Timer, a fluorescent protein that shifts color from green to red over time (1). This color change provides a way to visualize the time frame of promoter activity, indicating where in an organism the promoter is active and also when it becomes inactive. Easily detect the red and green emissions indicating promoter activity with fluorescence microscopy or flow cytometry.

Easily Characterize Promoter Activity

The Fluorescent Timer is a mutant form of the DsRed fluorescent reporter, containing two amino acid substitutions which increase its fluorescence intensity and endow it with a distinct spectral property: as the Fluorescent Timer matures, it changes color—in a matter of hours, depending on the expression system used. Shortly after its synthesis, the Fluorescent Timer begins emitting green fluorescence, but as time passes, the fluorophore undergoes additional changes that shift its fluorescence to longer wavelengths. When fully matured, the protein is bright red. The protein’s color shift can be used to follow the on and off phases of gene expression (e.g., during embryogenesis and cell differentiation).

Fluorescent Timer under the control of the heat shock promoter hsp16-41 in a transgenic C. elegans embryo. The embryo was heat-shocked in a 33°C water bath. Promoter activity was studied during the heat shock recovery period. Green fluorescence was observed in the embryo as early as two hr into the recovery period. By 50 hr after heat shock, promoter activity had ceased, as indicated by the lack of green color.

pTimer (left) is primarily intended to serve as a convenient source of the Fluorescent Timer cDNA. Use pTimer-1 (right) to monitor transcription from different promoters and promoter/enhancer combinations inserted into the MCS located upstream of the Fluorescent Timer coding sequence. Without the addition of a functional promoter, this vector will not express the Fluorescent Timer.
Detecting Timer Fluorescent Protein

You can detect the Fluorescent Timer with the DsRed Polyclonal Antibody. You can use the DsRed1-C Sequencing Primer to sequence wild-type DsRed1 C-terminal gene fusions, including Timer fusions.

1. Terskikh, A., et al. (2000) Science 290(5496):1585–1588.
Download source - 8 Kb

This tutorial is based on MSDN Article Q194873. But, for a beginner, following these MSDN articles can be intimidating, to say the least. One of the most often asked questions I see as a Visual C++ and Visual Basic programmer is how to call a VB DLL from VC++. Well, I am hoping to show you exactly that today. I am not going to go over the basic details of COM, as this would take too long, so I am assuming you have an understanding of VB, VC++ and a little COM knowledge. It's not too hard to learn; it just takes a little time. So let's get started.

The first thing you need to do is fire up Visual Basic 6 (VB 5 should work as well). With VB running, create a new "ActiveX DLL" project. Rename the project to "vbTestCOM" and the class to "clsTestClass". You can do this by clicking in the VB Project Explorer Window on the Project1 item (Step 1), then clicking in the Properties window and selecting the name property (Step 2). Do the same for the Class. Click on the class (Step 3), then the name property, and enter the name mentioned above (Step 4). Your project so far should look like the following right-hand side picture.

Ok, now we are ready to add some code to the VB Class. Click on the "Tools" menu, then select the "Add Procedure" menu item. The Add Procedure window will open up. In this window we need to add some information. First (Step 1), make sure the type is set to Function. Second (Step 2), enter a Function name called "CountStringLength". Finally, hit the Ok button and VB will generate the new function in the class. You should have an empty function with which to work.

The first thing we will do is specify a return type and an input parameter. Edit your code to look like this:

    Public Function CountStringLength(ByVal strValue As String) As Long

What are we doing here? We are taking one parameter, as a String type in this case, then returning the length through the return type, which is a Long.
We specify the input parameter as ByVal, meaning VB will make a copy of this variable and use the copy inside the function, rather than the default ByRef, which passes the variable by reference. This way we can be sure that we do not accidentally modify the string that was passed to us by the calling program. Let's add the code now:

Public Function CountStringLength(ByVal strValue As String) As Long
    ' An empty (null) string has a length of 0
    If strValue = vbNullString Then
        CountStringLength = 0
        Exit Function
    End If
    ' Otherwise return the string's length
    CountStringLength = Len(strValue)
End Function

In the first lines of code we check whether the calling program passed us an empty, or null, string. If so, we return 0 as the length. If the caller did pass something other than an empty string, then we count its length and return that length back to the calling program. Now would be a good time to save your project. Accept the default names and put it in a safe directory. We need to compile this project now. Go to the File menu and select the "Make vbTestCOM.dll..." menu item. The compiler will produce a file called, surprisingly enough, vbTestCOM.dll. The compiler will also do us the favor of registering this new DLL in the system registry. We have finished the VB side of this project, so let's start the VC++ side of it.

Fire up a copy of VC++, then select from the menu, "New Project". The New Project window should appear. Select a "Win32 Console Application" (Step 1), then give it a name of "TestVBCOM" (Step 2). Finally, enter a directory you want to build this project in (Step 3 - your directory will vary from what I have entered). Click on the "OK" button and the "Win32 Console Application - Step 1 of 1" window will appear. Leave everything on this page as the default, and click the "Finish" button. One final window will appear after this, titled "New Project Information". Simply click the "Ok" button here. You should now have an empty Win32 Console project. Press the "Ctrl" key and hit the "N" key. Another window titled "New" will appear.
Select the "C++ Source File" (Step 1), then enter the new name for this file, "TestVBCOM.cpp" (Step 2 - make sure the Add to Project checkbox is checked and the correct project name is in the drop-down combo box), then click the "Ok" button to finish.

Now we are going to get fancy! You need to go to your Start Menu in Windows and navigate to the "Visual Studio 6" menu and go into the "Microsoft Visual Studio 6.0 Tools" sub-menu. In there you will see an icon with the name "OLE View". Click on it. The OLE View tool will open up. You will see a window similar to this one:

Collapse all the trees, if they are not already collapsed. This will make it easier to navigate to where we want to go. Highlight "Type Libraries" (Step 1) and expand it. You should see a fairly massive listing. We need to locate our VB DLL. Now, remember what we named the project? Right - we need to look for vbTestCOM. Scroll down until you find it. Once you have found it, double click on it. A new window should appear - the "ITypeLib Viewer" window. We are only interested in the IDL (Interface Definition Language) code on the right side of the window. Select the entire IDL text and press the "Ctrl" and "C" keys to copy it to the clipboard. You can close this window and the OLE View window now, as we are done with the tool.

We need to add the contents of the IDL file into our VC++ project folder. Go to the folder you told VC++ to create your project in and create a new text file there (if you are in Windows Explorer, you can right click in the directory, select "New", then follow the arrow over and select "Text Document"). Rename the text document to "vbTestCOM.idl". Then double click on the new IDL file (VC++ should open it if you named it correctly with an .idl extension). Now paste the code into the file by pressing the "Ctrl" and "V" keys. The IDL text should be pasted into the file. So far, so good. Now, this IDL file is not going to do us much good until we compile it.
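As a point of reference before compiling, the IDL that OLE View shows for a simple VB class like ours generally has the following shape. This is only a hypothetical sketch: the uuid values below are placeholders (VB generates fresh GUIDs at compile time), and the dispid and attribute details will differ on your machine.

```idl
[
  uuid(00000000-0000-0000-0000-000000000001),  // placeholder - yours will differ
  version(1.0)
]
library vbTestCOM
{
    importlib("stdole2.tlb");

    // The default dual interface VB generates for clsTestClass
    [
      odl,
      uuid(00000000-0000-0000-0000-000000000002),  // placeholder
      dual,
      oleautomation
    ]
    interface _clsTestClass : IDispatch
    {
        [id(0x60030000)]
        HRESULT CountStringLength([in] BSTR strValue,
                                  [out, retval] long* lpLength);
    };

    // The coclass our VC++ client will instantiate
    [
      uuid(00000000-0000-0000-0000-000000000003)   // placeholder
    ]
    coclass clsTestClass
    {
        [default] interface _clsTestClass;
    };
};
```

The key things to notice are the interface name (the class name with a leading underscore) and the coclass name; the MIDL compiler turns these into the CLSID_ and IID_ constants that C++ client code uses.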
That way, VC++ can use the files it generates to talk to the VB DLL. Let's do that now. Open a DOS window and navigate to the directory you created your VC++ project in. Once in that directory, type the following at the prompt to invoke the MIDL compiler:

E:\VCSource\TestVBCOM\TestVBCOM>midl vbTestCOM.idl /h vbTestCOM.h

Hit the "Enter" key and let MIDL do its magic. You should see results similar to the following:

Close the DOS window and head back into VC++. We need to add the newly generated vbTestCOM.h and vbTestCOM_i.c files to the project. You can do this by going to the "Project" menu, then selecting the "Add to Project" item, and scrolling over to the "Files" menu item and clicking on it. A window titled "Insert Files into Project" will open. Select the two files highlighted in the next picture, then select the "Ok" button. These two files were generated by MIDL for us, and VC++ needs them in order to talk to the VB DLL (actually, VC++ does not need the "vbTestCOM_i.c" file in the project, but it is handy to have there to review).

We are going to add the following code to the "TestVBCOM.cpp" file now, so navigate to that file in VC++ using the "Workspace" window. Open the file by double clicking it and VC++ will display the empty file for editing. Now add the following code to the "TestVBCOM.cpp" file:

#include "vbTestCOM.h"   // MIDL-generated header for our VB DLL
#include <comdef.h>      // _bstr_t support
#include <iostream.h>    // cout support

void main(void)
{
    HRESULT hr;
    long ReturnValue = 0;
    _clsTestClass *IVBTestClass = NULL;

    // Initialize the COM library
    hr = CoInitialize(0);

    // Create an instance of the VB class and get its default interface
    hr = CoCreateInstance(CLSID_clsTestClass,
                          NULL,
                          CLSCTX_INPROC_SERVER,
                          IID__clsTestClass,
                          (void **)&IVBTestClass);

    if (SUCCEEDED(hr))
    {
        // _bstr_t wraps our C string in the BSTR that VB expects
        _bstr_t bstrValue("Hello World");

        // Call the VB function; the result comes back through ReturnValue
        hr = IVBTestClass->CountStringLength(bstrValue, &ReturnValue);

        cout << "The string is: " << ReturnValue
             << " characters in length." << endl;

        // Release the interface when we are done with it
        hr = IVBTestClass->Release();
    }
    else
    {
        cout << "CoCreateInstance Failed." << endl;
    }

    // Shut down COM
    CoUninitialize();
}

If all the code is entered correctly, press the "F7" key to compile the project. Once the project has compiled cleanly, press the "Ctrl" and "F5" keys to run it.
In the C++ code, we include the MIDL-created "vbTestCOM.h" file, the "comdef.h" file for the _bstr_t class support and the "iostream.h" file for the "cout" support. The rest of the comments should speak for themselves as to what's occurring. This simple tutorial shows how easily VB and VC++ applications can be integrated using COM. Not too tough, actually.
This workshop is taught by Kendra Markle and is broken down as follows:

Intro: Tools and techniques for persuading people quickly and inexpensively are here, and the platforms for persuasion are open even to those with limited technical skill. We’ll cover examples of persuasion on websites, mobile apps, texting, Facebook, videos and games, and reflect on the trends and upcoming opportunities we see at the time of the workshop.

Brain science: Our brains are wired to stereotype, follow the crowd, learn from example, react to triggers, etc. With careful design, our technology can exhibit the appropriate traits to persuade our brains and influence our attitudes and actions. For example, technology that volunteers ‘personal’ information before asking for the user’s info is likely to be more successful in obtaining it. Messages intended to stop the user from doing something are more effective when accompanied by a picture of a person of the same gender. We’ll talk briefly about why these associations exist in our brains and lead up to how they translate into web development.

Persuasive techniques: We’ll describe at least five of the most broadly applicable principles of persuasive technology, and the pros and cons of each. These include tailoring the experience, surveillance, operant conditioning, reduction, tunneling and self-monitoring, among others. We’ll talk about using technology in the role of a social actor that creates a relationship with the user, as a tool that increases the user’s capability, and as a medium that provides an experience, and how each of these can be persuasive for different types of behaviors. We’ll do a short exercise for each technique to give attendees a chance to apply the principle to their own work (and keep people awake and learning by doing).
Case Studies: Here we’ll delve into the strengths and weaknesses of existing persuasive platforms, such as mobile phones and Facebook, as well as some emerging platforms, such as persuasive video. We’ll look at good and bad examples of each and discuss ways that certain popular websites and services could be more influential. We’ll give participants a chance to analyze an example that we provide with their neighbor, to encourage them to think.

Design process and exercises: Our 8-step design process starts with determining the exact behavior you’re targeting and understanding it with our grid of 35 behavior types. Next, choose a receptive audience and identify barriers. Then, choose the appropriate technology channel, find relevant examples, imitate successful examples, test and iterate quickly, and lastly, expand on success. We’ll also show a model for behavior change that increases motivation using brain science, removes barriers by breaking the behavior down into smaller, achievable pieces, and finally sparks action by sending a trigger (see more at our website: behaviormodel.org). Next, we’ll apply this process as a group to a few potential applications, such as a virtual coach for a health condition, a trustworthy website of resources, etc. We’ll guide the audience through the design process, letting them decide on the features and platform for the app.

Kendra builds persuasive technology tools for healthy behavior change. She works with the Persuasive Technology Lab at Stanford University on mobile persuasion, the psychology of Facebook, and social networking for health. She does research at Kaiser Permanente using technology tools to help patients manage obesity and chronic conditions. Her company AlterActions.org produces tools for mental health, including recovering from depression, learning mindfulness, synthesizing happiness and building willpower.
You can sign up to play with her new tools when they get released into the wild as pilots at AlterActions.org. Check out twitter.com/alteractions to learn more about brain science and persuasive behavior change.
1854-89 THREE DOLLARS INDIAN HEAD

In 1853 the United States negotiated the "Gadsden Purchase," a settlement of a boundary dispute with Mexico that resulted in the U.S. acquiring what would become the southern portions of Arizona and New Mexico for ten million dollars. The following year Commodore Matthew Perry embarked upon his famed expedition to re-open Japan to the Western world and establish trade. Spreading beyond its borders in many ways, the United States had a few years earlier joined the worldwide move to uniform postage rates and printed stamps when the Congressional Act of March 3, 1845 authorized the first U.S. postage stamps and set the local prepaid letter rate at five cents. This set the stage for a close connection between postal and coinage history. Exactly six years later, the postage rate was reduced to three cents when New York Senator Daniel S. Dickinson fathered legislation that simultaneously initiated coinage of the tiny silver three-cent piece as a public convenience. The large cents then in circulation were cumbersome and unpopular, and the new denomination was designed to facilitate the purchase of stamps without using the hated "coppers." This reasoning was carried a step further when the Mint Act of February 21, 1853 authorized a three-dollar gold coin. Congress and Mint Director Robert Maskell Patterson were convinced that the new coin would speed purchases of three-cent stamps by the sheet and of the silver three-cent coins in roll quantities. Unfortunately, at no time during the 35-year span of this denomination did public demand justify these hopes. Chief Engraver James Barton Longacre chose an "Indian Princess" for his obverse: not a Native American profile, but a profile modeled after the Greco-Roman Venus Accroupie statue then in a Philadelphia museum. Longacre used this distinctive sharp-nosed profile on his gold dollar of 1849 and would employ it again on the Indian Head cent of 1859.
On the three-dollar coin Liberty is wearing a feathered headdress of equal-sized plumes with a band bearing LIBERTY in raised letters. She's surrounded by the inscription UNITED STATES OF AMERICA. Such a headdress dates back to the earliest known drawings of American Indians: French artist Jacques Le Moyne de Morgues' sketches of the Florida Timucua tribe, who lived near the tragic French colony of Fort Caroline in 1562. It was accepted by engravers and medalists of the day as the design shorthand for "America." Longacre's reverse depicted a wreath of tobacco, wheat, corn and cotton with a plant at top bearing two conical seed masses. The original wax models of this wreath still exist on brass discs in a Midwestern collection and show how meticulous Longacre was in preparing his design. Encircled by the wreath is the denomination 3 DOLLARS and the date. There are two boldly different reverse types, the small DOLLARS appearing only in 1854 and the large DOLLARS on coins of 1855-89. Many dates show bold "outlining" of letters and devices, resembling a double strike but probably the result of excessive forcing of the design punches into the die steel, causing a hint of their sloping "shoulders" to appear as part of the coin's design. The high points of the obverse design that first show wear are the cheek and hair above the eye; on the reverse, check the bow knot and leaves. A total of just over 535,000 pieces were issued, along with 2,058 proofs. The first coins struck were the 15 proofs of 1854. Regular coinage began on May 1, and that first year saw 138,618 pieces struck at Philadelphia (no mintmark), 1,120 at Dahlonega (D), and 24,000 at New Orleans (O). These two branch mints would strike the denomination only in 1854. San Francisco produced the three-dollar denomination in 1855, 1856, and 1857, again in 1860, and apparently one final piece in 1870. Mintmarks are found below the wreath. Every U.S. denomination boasts a number of major rarities.
The three-dollar gold coinage of 1854-1889 is studded with so many low-mintage dates that the entire series may fairly be called rare. In mint state 1878 is the most common date, followed by the 1879, 1888, 1854 and 1889 issues. Every other date is very rare in high grade, particularly 1858, 1865, 1873 Closed 3 and all the San Francisco issues. Minuscule mintages were the rule in the later years. Proof coins prior to 1859 are extremely rare and more difficult to find than the proof-only issues of 1873 Open 3, 1875 and 1876, but many dates are even rarer in the higher Mint State grades. This is because at least some proofs were saved by well-heeled collectors, while few lower-budget collectors showed any interest in higher-grade business strikes of later-date gold. Counterfeits are known for many dates; any suspicious piece should be authenticated. The rarest date of all is the unique 1870-S, of which only one example was struck for inclusion in the new Mint's cornerstone. Either the coin escaped, or a second was struck as a pocket piece for San Francisco Mint Coiner J.B. Harmstead. In any event, one coin showing traces of jewelry use surfaced in the numismatic market in 1907. It was sold to prominent collector William H. Woodin, and when Thomas L. Elder sold the Woodin collection in 1911, the coin went to Baltimore's Waldo C. Newcomer. Later owned by Virgil Brand, it was next sold by Ted and Carl Brandts of Ohio's Celina Coin Co. and Stack's of New York to Louis C. Eliasberg in 1946 for $11,500. In Bowers and Merena's October 1982 sale of the U.S. Gold Collection, this famous coin sold for a record $687,500. The three-dollar denomination quietly expired in 1889 along with the gold dollar and nickel three-cent piece. America's coinage was certainly more prosaic without this odd-denomination gold piece, but its future popularity with collectors would vastly outstrip the lukewarm public reception it enjoyed during its circulating life.
In terms of ecologically friendly flooring, bamboo is one of the top contenders. Not only is bamboo flooring made from totally renewable resources, but it also is available in a wide variety of design options. For those who desire the look of hardwood flooring but are concerned about the environmental consequences of harvesting trees, bamboo offers the perfect solution. While bamboo is not technically wood flooring, its appearance is close enough to fool even the most discerning eye.

Why is Bamboo Flooring Considered Environmentally Friendly?

Although bamboo is actually a type of grass, it is harder than red oak. It reaches full maturity in just 3 to 5 years rather than several decades, and re-growth appears naturally without the need for replanting. Harvesting bamboo is actually somewhat required because it is so hardy that leaving it to its own devices would put a strain on the environment. It would be a terrible shame to waste the harvested material, so people have designed many ways to put it to good use, from thatched roofs to flooring material. Bamboo also has no requirements for irrigation, fertilizers, or pesticides when grown in its natural environment. Bamboo is naturally resistant to insects and pests. The lack of need for harsh chemicals during its growth only does more to keep the carbon footprint down.

How is Bamboo Flooring Manufactured?

There are several steps involved in creating a material suitable for flooring from bamboo. Upon harvest, the bamboo is boiled to remove its natural starches and moisture, which could become a wonderful environment for termites if not remedied. The outer skin is then removed and the stalk is cut into strips for flooring. These strips are then boiled again to make them even harder, or carbonized; the longer the carbonization process, the darker the color of the final product.
When the strips are ready they are formed into flooring either by gluing strips together or by gluing a single layer of bamboo strips on top of another solid surface, resulting in either solid bamboo flooring or engineered bamboo flooring, respectively. The flooring also goes through other processes to strengthen it further, such as applying laminate materials to increase scratch resistance.

What are the design options with bamboo flooring?

- Bamboo flooring is available in widths ranging from 3¾ inches to 7 inches and thicknesses of 5/8 inch and 9/16 inch.
- Finish options range from unfinished and natural, as in the FSC Unfinished Bamboo collection, to nearly black, with a choice of either horizontal or vertical graining, as found in the FSC Designer collection.
- The two edge types of bamboo flooring are micro-beveled edges and square edges. Bamboo is also available in floating floor styles and nail-down or glue-down styles.

How Durable is Bamboo Flooring?

Bamboo is naturally hard and durable, and the process it goes through during the manufacture of flooring only increases this strength. While one should avoid sliding furniture across the floor or allowing water to stand, bamboo flooring will do well in most any low-moisture room. Bamboo flooring, from the EcoBamboo Collection to the FSC Prestige Collection, will be an investment in beauty and durability that is sure to add value to any home with minimal environmental impact.