| id | url | text | source | categories | token_count |
|---|---|---|---|---|---|
5,233,930 | https://en.wikipedia.org/wiki/Jennifer%20A.%20Lewis | Jennifer A. Lewis (born 1964) is an American materials scientist and engineer, best known for her research on colloidal assembly of ceramics and 3D printing of functional, structural, and biological materials.
In 2017, Lewis was elected as a member of the National Academy of Engineering for the development of materials and processes for 3-dimensional direct fabrication of multifunctional structures.
Education and early career
Lewis graduated with a B.S. degree from the University of Illinois at Urbana-Champaign with high honors in ceramic engineering in 1986 and earned a Sc.D. in ceramic science from Massachusetts Institute of Technology in 1991 under the direction of Michael J. Cima. The title of her dissertation is Binder Distribution Processes in Ceramic Green Tapes During Thermolysis. From 1990 to 1997 she was an assistant professor at the University of Illinois, and was also affiliated as a research professor with the Beckman Institute for Advanced Science and Technology.
Later career
Lewis was promoted to associate professor in 1997 and to professor in 2003. In 2002, she co-edited the book Polymers in Particulate Systems: Properties and Applications to which she also contributed a chapter titled "Colloid-filled Polymer Gels: a Novel Approach to Ceramics Fabrication". In 2006 Lewis was named interim director and subsequently became director of UIUC's Frederick Seitz Materials Research Laboratory in 2007.
In 2013 she moved to Harvard University as Hansjörg Wyss Professor of Biologically Inspired Engineering in Harvard's School of Engineering and Applied Sciences.
Research
Lewis's laboratory works on the directed assembly of soft functional materials. This work involves microfluidics, materials synthesis, complex fluids, and robotic assembly to design functional materials. She develops novel materials that can find potential application as printed electronics, waveguides, and 3D scaffolds and microvascular architectures for cell culture and tissue engineering. She co-leads the Wyss Institute's 3D Organ Engineering Initiative.
In 2013, Lewis's team released the world's first 3D printed battery, made from two different electrode inks.
Lewis is the author of more than 160 papers and holds 12 patents, including patents for inventions as varied as methods to 3D print functional human tissue and microbattery cells.
She is a founder of Voxel8, a company that manufactures a 3D printing platform capable of printing new functional materials, whose investors include In-Q-Tel and Braemar Energy Ventures. Voxel8 has created the world's first multi-material 3D electronics printer. In January 2015, Lewis told Business Wire: "Voxel8 is leveraging over a decade of research, which has led to 17 patents (10 issued) on functional materials, printheads, and other processes for 3D printing, from my lab.… Our work provides the foundation for Voxel8’s effort to revolutionize multi-material 3D printing."
Lewis is also a co-founder of Electroninks, Inc., a company that produces a reactive silver ink used in the printed electronics market, as well as in biomedical and electronic circuitry markets. The company launched a Kickstarter campaign on November 20, 2013, with the goal of raising $85,000 to help with the production of a pen called Circuit Scribe that can create electronic circuits. After only fifteen days into the campaign, backers had pledged $451,698 towards the product. When the Kickstarter campaign closed on December 31, 2013, a total of $674,425 was raised for Circuit Scribe by 12,277 backers.
Her publications have been cited more than 64,000 times by other scholars.
Awards and honors
Lewis is a member of the American Academy of Arts and Sciences (elected 2012), the National Academy of Engineering (elected 2017), and the National Academy of Sciences (elected 2018). She is also a Fellow of the American Ceramic Society, the American Physical Society, the Materials Research Society, and the National Academy of Inventors.
She has received the National Science Foundation Presidential Faculty Fellow Award (1994), the Schlumberger Foundation Award (1995), the Brunauer Award and Robert B. Sossman Award from the American Ceramic Society (2003; 2016), the Materials Research Society Medal (2012), and the Langmuir Lecture award from the American Chemical Society (2009).
In 2014, she was named by Foreign Policy magazine as one of the year's "100 Leading Global Thinkers".
In 2017, Lewis was awarded the Lush Science Prize for her team's work on developing a multi-material bioprinting platform for fabricating 3D human organ-on-chip models, which could eliminate the use of animal testing by the pharmaceutical and cosmetic industries.
In 2018, Lewis was named as the Jianming Yu Professor of Arts and Sciences by the Harvard Stem Cell Institute, in recognition of her "excellence in research, leadership, teaching". The five-year chair will support Lewis and her team's research to advance progress in stem cell and regenerative medicine.
In 2019, Lewis was awarded an honorary Doctorate of Science by the University of Edinburgh.
In September 2020, Lewis was honored with one of three Genius Awards presented by the Liberty Science Center at their annual Genius Gala.
Lewis was the 2020–2021 Distinguished Lecturer for the Hagler Institute for Advanced Study at Texas A&M University.
References
External links
Jennifer A. Lewis website at Harvard SEAS
Lewis Lab website at Harvard SEAS
1964 births
Living people
American bioengineers
American materials scientists
Fellows of the American Academy of Arts and Sciences
Harvard University faculty
Massachusetts Institute of Technology alumni
Fellows of the American Physical Society
Members of the United States National Academy of Engineering
Members of the United States National Academy of Sciences
3D printing specialists
University of Illinois alumni
University of Illinois faculty
Women materials scientists and engineers
20th-century American engineers
21st-century American engineers
21st-century American women engineers
20th-century American women engineers
21st-century American inventors
American women inventors
Fellows of the American Ceramic Society
American women academics
Women roboticists | Jennifer A. Lewis | Materials_science,Technology | 1,209 |
49,144,314 | https://en.wikipedia.org/wiki/Sarcodon%20aglaosoma | Sarcodon aglaosoma is a species of tooth fungus in the family Bankeraceae. Found in Papua New Guinea, it was described as new to science in 1976 by Dutch mycologist Rudolph Arnold Maas Geesteranus. It is quite similar to H. joeides and S. ianthinus, both also from New Guinea.
References
External links
Fungi described in 1976
Fungi of New Guinea
aglaosoma
Fungus species | Sarcodon aglaosoma | Biology | 88 |
9,481,277 | https://en.wikipedia.org/wiki/Control%20of%20chaos | In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.
Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.
Several techniques have been devised for chaos control, but most are developments of two basic approaches: the Ott–Grebogi–Yorke (OGY) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed.
OGY method
Edward Ott, Celso Grebogi and James A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the Ott–Grebogi–Yorke (OGY) method of achieving stabilization of a chosen unstable periodic orbit. In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.
To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems.
The weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability.
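To make the procedure concrete, the following is a minimal numerical sketch (not from the source) of OGY-style control applied to the logistic map $x_{n+1} = r x_n (1 - x_n)$, whose fixed point $x^* = 1 - 1/r$ is an unstable period-1 orbit in the chaotic regime. Linearizing about $(x^*, r_0)$ gives the once-per-cycle parameter kick; the kick is applied only when the freely wandering state comes close to the orbit, and it stays small, as the method requires. The choice of map, parameter values, and thresholds are illustrative, not prescribed by the method itself.

```python
import numpy as np

r0 = 3.8                     # nominal parameter (chaotic regime)
x_star = 1.0 - 1.0 / r0      # unstable fixed point: the target periodic orbit
lam = 2.0 - r0               # df/dx at x_star (the orbit's instability multiplier)
g = x_star * (1.0 - x_star)  # df/dr at x_star (sensitivity to the parameter)
eps_on, dr_max = 0.02, 0.2   # activation window and perturbation cap

x = 0.3
for n in range(300):
    dr = 0.0
    if abs(x - x_star) < eps_on:  # kick only when the state is near the orbit
        dr = float(np.clip(-lam * (x - x_star) / g, -dr_max, dr_max))
    x = (r0 + dr) * x * (1.0 - x)
    if n >= 295:
        print(n, f"x={x:.6f}", f"dr={dr:+.2e}")  # typically x locks onto x_star, dr -> 0
```

In a typical run, a chaotic transient eventually brings the state into the activation window, after which the trajectory is captured and the applied kicks decay toward zero.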
Pyragas method
In the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. Both the Pyragas and OGY methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained through solely observing the behavior of the system as a whole over a suitable period of time. The method was proposed by the Lithuanian physicist Kęstutis Pyragas.
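A discrete-time analogue (again a sketch under simplifying assumptions, not from the source) illustrates the key property: the feedback $K(x_{n-\tau} - x_n)$ vanishes identically on the target orbit, so the stabilized motion is a genuine orbit of the uncontrolled system. Here $\tau = 1$ targets the fixed point of the chaotic logistic map, the perturbation is saturated to keep it small, and the gain $K$ is a hypothetical value chosen so that the linearized closed loop is stable.

```python
import numpy as np

r = 3.8                 # logistic-map parameter (chaotic regime)
K = -0.6                # feedback gain; chosen so the linearized loop is stable
u_max = 0.1             # saturation keeping the perturbation small
x_prev, x = 0.30, 0.40  # x_{n-1}, x_n

for n in range(500):
    u = float(np.clip(K * (x_prev - x), -u_max, u_max))  # vanishes on the orbit
    x_prev, x = x, float(np.clip(r * x * (1.0 - x) + u, 0.0, 1.0))
    if n >= 495:
        print(n, f"x={x:.6f}", f"u={u:+.2e}")  # typically x -> 1 - 1/r, u -> 0
```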
Applications
Experimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators and cardiac tissues. Researchers have also attempted the control of chaotic bubbling with the OGY method, using electrostatic potential as the primary control variable.
Forcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.
References
External links
Chaos control bibliography (1997–2000)
Chaos theory
Nonlinear systems | Control of chaos | Mathematics | 826 |
34,750,899 | https://en.wikipedia.org/wiki/Aviation%20and%20Railway%20Accident%20Investigation%20Board | The Aviation and Railway Accident Investigation Board (ARAIB, ) is an agency of the South Korean government that investigates aviation and railway accidents, subservient to the Ministry of Land, Infrastructure and Transport (MOLIT) and headquartered in Sejong City.
The ARAIB opened on July 10, 2006. It was a merger of the Korea Aviation Accident Investigation Board and the Railway Accident Investigation Board.
Facilities
Its headquarters is in the MOLIT offices in the Sejong Government Office in Sejong City. Its FDR/CVR Analysis and Wreckage Laboratory is on the property of Gimpo International Airport in Gwahae-dong, Gangseo District, Seoul.
Previously the headquarters of the ARAIB was in Gonghang-dong, Gangseo District, in proximity to Gimpo Airport.
Accidents investigated by the ARAIB
Asiana Airlines Flight 991
Jeju Air Flight 2216
Also the ARAIB has a support role in the following investigations:
Asiana Airlines Flight 214
See also
Korean Maritime Safety Tribunal (maritime accident investigation agency)
Korea Office of Civil Aviation (South Korean civil aviation agency)
The Korea Transport Institute (South Korean transport research institute)
References
External links
Aviation and Railway Accident Investigation Board
Aviation and Railway Accident Investigation Board
Organizations investigating aviation accidents and incidents
Government agencies of South Korea
2006 establishments in South Korea
Transport organizations based in South Korea
Rail accident investigators
Government agencies established in 2006 | Aviation and Railway Accident Investigation Board | Technology | 276 |
41,183,702 | https://en.wikipedia.org/wiki/Hypersensitive%20Narcissism%20Scale | The Hypersensitive Narcissism Scale (HSNS) is a self-report measure of covert narcissism. It was developed by Holly M. Hendin and Jonathan M. Cheek in 1997. It consists of ten items rated on a five-point scale.
It has a near zero correlation with the Narcissistic Personality Inventory, which measures overt narcissism.
The unidimensionality of the HSNS has been questioned.
References
Narcissism | Hypersensitive Narcissism Scale | Biology | 102 |
48,732,326 | https://en.wikipedia.org/wiki/Vickers%20VR180%20Vigor | The Vickers VR180 Vigor was a British crawler tractor, built from 1951 to 1958 by Vickers-Armstrongs. Since the 1920s, the company gained substantial experience in the design and construction of tanks and continuous track vehicles. After the war they developed a civilian crawler tractor that could be sold for use in peacetime reconstruction work. It was notable for the unusual sophistication of its chassis.
The Vigor was built at the Scotswood, Newcastle-upon-Tyne works.
Design
The tractor's most distinctive feature was its running gear: four full height roadwheels also acting as rear drive sprocket, front idler and track return rollers. This was the same layout as the Tetrarch light tank, which Vickers-Armstrongs had developed in the 1930s.
In common with tanks of this period, but in contrast to crawler tractors, the suspension had considerable articulation and permitted high speeds. The Vigor was capable of nearly 10 mph, while the comparable Caterpillar D8 could only reach 5 mph. The four roadwheels were linked as two bogies on each side. Pairs of bogies on opposite sides were linked by an articulated beam with a centre tilt pivot. This suspension required a flexible track, developed by Vickers, with rubber sealing washers between the moving parts. Wear to the rubber in service could make the track floppy, a drawback of the design; combined with the considerable suspension articulation, this meant that a poorly maintained Vigor was prone to throwing tracks when run at speed.
Vehicle chassis were fabricated in two pieces, mainly from large iron or steel castings. The nose of early models was distinctively solid and sloped, while other makers had a vertical radiator grille. Later models, after experience in Australia, also gained a lightweight steel grille for better ventilation. The chassis unbolted relatively quickly, two hours being cited, into front (engine) and rear (transmission) components.
The engine was a Rolls-Royce C6SFL, a 12.17-litre six-cylinder supercharged diesel. This engine was more powerful than those of other tractors of this age and weight, with 190 hp at the engine and 150 drawbar hp. However, the fuel consumption was high, at 9.8 gallons per hour at full rate. The transmission was initially a three-speed manual and high-low-reverse splitter gearbox with a relatively small Borg & Beck 18 inch single plate dry clutch. This small clutch plate had a reputation for early failure if worked hard. An improvement from mid-production was the option of a Rolls-Royce hydraulic torque converter, the same converter Rolls-Royce was supplying with the same engines for the new British Rail DMU fleet.
A full range of accessories was offered: front dozer blades with cable or hydraulic lifts, rear ripper blades for breaking virgin bush into farmland in Australia, and a rear logging winch of 30,000 to 50,000 lbf pull.
Production of the VR180 ran from 1952 to 1958, with approximately 1,500 built. The design continued until 1961 as the Vickers VR110 Vikon with a smaller 142 hp C4SFL four-cylinder engine. Around 20 Vikon were built, most going to New Zealand.
Sales
As with most capital equipment of the early 1950s, the main market was in the Commonwealth: the UK home market could not afford it, particularly with the Austerity Purchase Tax of the time, while the US market preferred domestic products, such as the Caterpillar.
Many Vigors went to Australia, where rough ground capability was appreciated, even though the purchase cost was considerably more than the popular option of war-surplus tanks with turret and roof armour cut away.
A major UK seller was Jack Olding of Hatfield, who relinquished their pre-war Caterpillar dealership in favour of Vickers.
Vigors were also used by the Royal Engineers. One is on display today in their museum at Gillingham, Kent.
In popular culture
The Vigor chassis became a distinctive feature of Gerry Anderson's Thunderbirds TV series. Many of the pod-carried land vehicles used by Thunderbird 2, such as Firefly and the carrier vehicle for The Mole, were based on the Vigor. A 1/16 scale plastic model kit of the period by Victor Models of Guildford represented the Vigor; in typical Gerry Anderson style, it was a kit that was available and affordable, yet rare enough to look different and not be obviously recognisable.
References
External links
Tractors
Vickers
Bulldozers | Vickers VR180 Vigor | Engineering | 911 |
47,763,726 | https://en.wikipedia.org/wiki/Thaxterogaster%20occidentalis | Thaxterogaster occidentalis is a species of fungus in the family Cortinariaceae.
Taxonomy
It was described in 1939 by the American mycologist Alexander H. Smith who classified it as Cortinarius occidentalis.
In 2022 the species was transferred from Cortinarius and reclassified as Thaxterogaster occidentalis based on genomic data.
Habitat and distribution
It is native to the Northern Hemisphere.
References
occidentalis
Fungi described in 1939
Taxa named by Alexander H. Smith
Fungus species | Thaxterogaster occidentalis | Biology | 111 |
40,434,786 | https://en.wikipedia.org/wiki/Semi-Permanent | Semi Permanent is a creative experience company, best known for hosting annual design festivals. Semi Permanent provides creative and production services via SP Studio and SP Productions. Semi Permanent produces innovative creative experiences for clients, including Google, Dropbox and National Geographic.
The first Semi Permanent event was hosted in Sydney, Australia in 2003. Semi Permanent hosts events throughout Australia and New Zealand, as well as international events in the United States, United Kingdom and China, and curates and hosts new events around the world each year.
Some of the executive attendants of the event have been Industrial Light & Magic, Ed Templeton, Wieden+Kennedy, Shepard Fairey, Jeffrey Deitch, Paula Scher, Oliver Stone, Radiohead and Paul Pope.
In 2020 Semi Permanent announced their partnership with digital publisher Highsnobiety.
Since 2013 Semi Permanent has been part of the Vivid Ideas calendar. Previous Semi Permanent festival line-ups have included presentations from Francesco Zizola, Brian Roettinger, Numskull, and many more.
Background
Founded in 2002 by Murray Bell and Andrew Johnstone, Semi Permanent encompasses three core business pillars: Semi Permanent, SP Brand Studio and SP Productions.
The Semi Permanent office is located in Sydney, Australia.
Semi Permanent is the company's namesake brand, encompassing a global design festival, annual book, website and more. Semi Permanent have been running creativity festivals since 2003 in cities across the globe, including Sydney, Auckland, Singapore, Abu Dhabi, London, New York, Los Angeles, Hong Kong and Portugal. Past speakers and collaborators include iconic filmmakers Oliver Stone, Michel Gondry, and the Coppola family; artists Tom Sachs, CJ Hendry and Alex Israel; and the world's most renowned creatives including Nike's Tinker Hatfield, Pentagram's Paula Scher, Pro Skater Tony Hawk, architect Bjarke Ingels and many more. Previous festivals have included the collaborations with Google Tilt Brush and National Geographic. Previous festival themes include "Truth" in 2019, and "Restless" in 2020.
Semi Permanent publishes an annual book, featuring interviews with creative thought leaders. Previously featured interviews, conversations and work include talent such as Alicia Keys, Cara Stricker, Takashi Murakami, Nadia Lee Cohen and Kevin Parker.
SP Brand Studio co-created a two-day live sport & creativity experience in 2021, and created an immersive retrospective with Stanley Donwood and Radiohead with a bespoke soundscape by Thom Yorke.
SP Productions is Semi Permanent's 'white label' events and production agency.
SP Productions worked with Google to curate, program and produce their global diversity and inclusion program, Rare.
Past speakers and collaborators (selection)
Radiohead
Roman Coppola
Alicia Keys
Namila Benson
Jeffrey Deitch
Kevin Parker, Tame Impala
Takashi Murakami
Olivier Zahm (Purple)
Michael Leon
Nicholas Felton
Kelly Slater
Gia Coppola
Carl Lewis
Oliver Stone
Chris Burkard
Shaun White
Carli Lloyd
Perks and Mini (PAM)
Willo Perron
Vince Frost
Platon
Danny Yount
Scott Dadich (Wired)
United Visual Artists
Wieden+Kennedy
Jonathan Zawada
Cory Arnold
The Monkeys
Industrial Light & Magic
Banksy
Seb Lester
The Talks
Fafi
Michael Muller
Jill Greenberg
Ron English
Droga 5
Kris Moyes
Nabil
Paul Pope
Floria Sigismondi
WETA Digital
Taj Burrow
Shepard Fairey
March Studio
AKQA
R/GA
Tara McPherson
Moving Picture Company (MPC)
Aaron Rose
Ed Templeton
Reg Mombassa
Pixar
Stefan Sagmeister
References
Design events | Semi-Permanent | Engineering | 727 |
9,385,138 | https://en.wikipedia.org/wiki/Association%20for%20Retail%20Technology%20Standards | The Association for Retail Technology Standards (ARTS) is an international standards organization dedicated to reducing the costs of technology through standards. Since 1993, ARTS has been delivering application standards exclusively to the retail industry. ARTS has four standards
The Standard Relational Data Model, UnifiedPOS, ARTS XML and the Standard RFPs. It is a division of the National Retail Federation. These standards enable the rapid implementation of technology within the retail industry by developing standards to ease integration of software applications and hardware devices. ARTS offers testing services to verify that applications accurately incorporate these standards.
Hundreds of leading retailers and vendors worldwide contribute to shaping the ARTS Data Model. The ARTS Data Model is known as the information standard in the retail industry and provides a comprehensive design document containing all data elements and definitions required to support retail applications.
UnifiedPOS is a platform-neutral specification for connecting POS peripherals such as printers, scanners, and scales to the POS terminal, allowing retailers freedom of choice in the selection of hardware integration.
ARTS XML (formerly IXRetail) builds on the ARTS Data Model to develop standard XML schemas and message sets to ease application-to-application integration within a retail enterprise. There are currently 11 schemas available.
Standard RFPs (Requests for Proposal) were developed to help retailers choose the right applications for their specific business requirements. There are currently seven standardized template RFPs available for download.
Membership is open to all members of the international technology community, retailers from all industry segments, application developers and hardware companies. Membership requires a small fee, which is waived to those already members of the National Retail Federation, and an agreement to adhere to policies and standards regarding the licensing of any ARTS property.
Notes
External links
Association for Retail Technology Standards
National Retail Federation
UnifiedPOS
Retail point of sale systems
Retailing organizations | Association for Retail Technology Standards | Technology | 363 |
71,506,226 | https://en.wikipedia.org/wiki/Sean%20Smith%20%28chemist%29 | Sean Smith is the director of NCI Australia with a conjoint position of professor of computational nanomaterials science and technology at the Australian National University (ANU).
Education and research
Smith received a BSc and PhD in chemistry at University of Canterbury in Christchurch, New Zealand before postdoctoral work at University of California, Berkeley (1991-1993) and the University of Göttingen (Humboldt Fellow 1989–1991).
In 1993 he started work at the University of Queensland, eventually heading up the Computational Reaction Dynamics Group before moving on to Oak Ridge National Laboratory in 2011 where he was the director of the Center for Nanophase Materials Sciences. After leaving Oak Ridge in 2017 Smith moved to the University of New South Wales where he founded the Integrated Materials Design Centre.
Smith left the University of New South Wales at the end of 2017 to become director of NCI Australia and professor of computational nanomaterials science and technology at Australian National University.
Awards and honours
Fellow of the Institution of Chemical Engineers (2015)
Bessel Research Award of the German Alexander von Humboldt Foundation (2006)
Fellow of the American Association for the Advancement of Science (2012)
Fellow of the Royal Australian Chemical Institute (1998)
Le Fevre Memorial Prize of the Australian Academy of Science (1998)
Rennie Memorial Medal of the Royal Australian Chemical Institute (1994)
Selected publications
References
External links
Academic profile at ANU
Computational chemists
University of Canterbury alumni
21st-century Australian chemists | Sean Smith (chemist) | Chemistry | 289 |
78,396,435 | https://en.wikipedia.org/wiki/Operation%3A%20Tango | Operation: Tango is a cooperative first-person video game, created by the Canadian studio Clever Plays. Two players take control of either an Agent or a Hacker and must work together to solve puzzles to bring down a hi-tech global menace.
Premise and gameplay
Announced in 2020, Operation: Tango is an asymmetrical co-operative game that puts the player in control of either Angel, a field operative, or Alistair, a top hacker, tasked with saving the world by investigating and locating a global cyber-criminal named Cypher.
Reception
Critical response
The Xbox Series X version of Operation: Tango received generally favorable reviews according to review aggregator Metacritic. Jerome Joffard of Jeuxvideo.com generally praised Operation: Tango for its unique and innovative gameplay, particularly the asymmetric cooperative experience where players assume the roles of Agent and Hacker. However, he criticized the game's repetitiveness and unclear objectives in later missions, which he felt could frustrate players. In their review of Operation: Tango, Zheng Yi of Geek Culture praised the futuristic world, comparing it with the cyberpunk aesthetic of Keiichi Matsuda's Hyper-Reality and the distinctive style of Robert Valley. Thomas Heath of TheGamer applauded the game for its engaging gameplay and globe-trotting setting, which he believes make it a thrilling co-op experience. He does however note that the game's relatively short length might leave some players wanting more.
Awards
Operation: Tango has won a number of awards since its release, including "Best Game Design Award Winner" at the Tokyo Game Show 2021 Sense of Wonder Night, "Best Multiplayer Game Winner" at the Gamescom 2021 Indie Arena Booth, and "Grand Winner PC Game - Multiplayer Game" at the NYX Game Awards 2021.
References
External links
Asymmetrical multiplayer video games
Cooperative video games
First-person video games
Puzzle video games
2021 video games
Indie games
Windows games
PlayStation 4 games
Xbox One games | Operation: Tango | Physics | 395 |
78,164,251 | https://en.wikipedia.org/wiki/Integral%20of%20a%20correspondence | In mathematics, the integral of a correspondence is a generalization of the integration of single-valued functions to correspondences.
The first notion of the integral of a correspondence is due to Aumann in 1965, with a different approach by Debreu appearing in 1967. Integrals of correspondences have applications in general equilibrium theory in mathematical economics, random sets in probability theory, partial identification in econometrics, and fuzzy numbers in fuzzy set theory.
Preliminaries
Correspondences
A correspondence is a function $\varphi : X \to 2^Y$, where $2^Y$ is the power set of $Y$. That is, $\varphi$ assigns to each point $x \in X$ a set $\varphi(x) \subseteq Y$.
Selections
A selection of a correspondence $\varphi : X \to 2^Y$ is a function $f : X \to Y$ such that $f(x) \in \varphi(x)$ for every $x \in X$.
If $X$ can be seen as a measure space $(X, \Sigma, \mu)$ and $Y$ as a Banach space, then one can define a measurable selection as a $\Sigma$-measurable function $f : X \to Y$ such that $f(x) \in \varphi(x)$ for $\mu$-almost all $x \in X$.
Definitions
The Aumann integral
Let $(X, \Sigma, \mu)$ be a measure space and $E$ a Banach space. If $\varphi : X \to 2^E$ is a correspondence, then the Aumann integral of $\varphi$ is defined as
$$\int_X \varphi \, d\mu = \left\{ \int_X f \, d\mu : f \text{ is an integrable selection of } \varphi \right\},$$
where the integrals are Bochner integrals.
Example: let the underlying measure space be the unit interval with Lebesgue measure, $([0,1], \mathcal{B}, \lambda)$, and let a correspondence be defined as $\varphi(x) = \{-1, 1\}$ for all $x \in [0,1]$. Then the Aumann integral of $\varphi$ is $[-1, 1]$.
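A small numerical sketch of this example (illustrative only, not part of the source): discretizing $[0,1]$ into $n$ equal cells and choosing $-1$ or $+1$ on each cell approximates the measurable selections, and the attainable integrals are the corresponding cell averages, which become dense in $[-1, 1]$ as $n$ grows.

```python
import itertools

n = 10  # number of equal cells discretizing [0, 1]
# Each selection of phi(x) = {-1, 1} is approximated by an independent
# choice of -1 or +1 on every cell; its integral is the mean of the choices.
integrals = sorted({sum(choice) / n
                    for choice in itertools.product((-1, 1), repeat=n)})
print(integrals)  # [-1.0, -0.8, ..., 0.8, 1.0]: a grid filling out [-1, 1]
```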
The Debreu integral
Debreu's approach to the integration of a correspondence is more restrictive and cumbersome, but directly yields extensions of usual theorems from the integration theory of functions to the integration of correspondences, such as Lebesgue's Dominated convergence theorem. It uses Rådström's embedding theorem to identify convex and compact valued correspondences with subsets of a real Banach space, over which Bochner integration is straightforward.
Let $(X, \Sigma, \mu)$ be a measure space, $E$ a Banach space, and $\mathcal{K}(E)$ the set of all its convex and compact subsets. Let $\varphi : X \to \mathcal{K}(E)$ be a convex and compact valued correspondence from $X$ to $E$. By Rådström's embedding theorem, $\mathcal{K}(E)$ can be isometrically embedded as a convex cone $C$ in a real Banach space $Y$, in such a way that addition and multiplication by nonnegative real numbers in $\mathcal{K}(E)$ induce the corresponding operations in $C$.
Let be the "image" of under the embedding defined above, in the sense that is the image of under this embedding for every . For each pair of -simple functions , define the metric .
Then we say that $\varphi$ is integrable if $\bar{\varphi}$ is integrable in the following sense: there exists a sequence $(f_n)$ of $\mu$-simple functions from $X$ to $C$ which are Cauchy in the metric $\Delta$ and converge in measure to $\bar{\varphi}$. In this case, we define the integral of $\bar{\varphi}$ to be
$$\int_X \bar{\varphi} \, d\mu = \lim_{n \to \infty} \int_X f_n \, d\mu,$$
where the integrals are again simply Bochner integrals in the space $Y$, and the result still belongs to $C$ since $C$ is a convex cone. We then uniquely identify the Debreu integral of $\varphi$ as the set $\int_X \varphi \, d\mu \in \mathcal{K}(E)$ whose image under the embedding is $\int_X \bar{\varphi} \, d\mu$. Since every embedding is injective and surjective onto its image, the Debreu integral is unique and well-defined.
Notes
References
Functional analysis
Mathematical economics | Integral of a correspondence | Mathematics | 589 |
15,964,820 | https://en.wikipedia.org/wiki/European%20Chemical%20Society | The European Chemical Society (EuChemS) is a European non-profit organisation which promotes collaboration between non-profit scientific and technical societies in the field of chemistry.
Based in Brussels, Belgium, the association took over the role and responsibilities of the Federation of European Chemical Societies and Professional Institutions (FECS) founded in 1970. It currently has 50 Member Societies and supporting members, with a further 19 divisions and working parties. It represents more than 160,000 chemists from more than 30 countries in Europe.
On 26 August 2022, the EuChemS General Assembly voted Angela Agostiano, Professor at the University of Bari Aldo Moro, Italy, as EuChemS President-Elect. Her term as President began in January 2023. Nineta Hrastelj is Secretary General. Floris Rutjes of Radboud University, is Vice-President of EuChemS.
Aims and function
The European Chemical Society has two major aims. By bringing together national chemical societies from across Europe, it aims to foster a community of scientists from different countries and provide opportunities for them to exchange ideas, communicate, cooperate on work projects and develop their networks. EuChemS in turn relies on the knowledge of this community to provide sound scientific advice to policymakers at the European level, in order to better inform their decision-making work. EuChemS is an officially accredited stakeholder of the European Food Safety Authority (EFSA) and the European Chemicals Agency (ECHA). EuChemS also relies on quality science communication to better inform citizens, decision-makers and scientists of the latest research developments in the chemical sciences, and their role in tackling major societal, environmental and economic challenges.
Because the field of chemistry is particularly vast with many different disciplines within it, EuChemS provides advice and knowledge on a broad range of subjects including:
EU Research Framework Programmes, such as Horizon 2020 and Horizon Europe
Open Science
Education and STEM
Environmental issues and climate change
Circular Economy
Renewable Energy
Food safety
Science literacy
Health
Ethics and scientific integrity
Cultural Heritage
Chemical and nuclear safety
EuChemS is a signatory of the EU Transparency register. The register number is: 03492856440-03.
Divisions and Working Parties
The EuChemS scientific divisions and working parties are networks in their own fields of expertise and promote collaboration with other European and international organisations. They organise high quality scientific conferences in chemical and molecular sciences and interdisciplinary areas.
Division of Analytical Chemistry
Division of Chemical Education
Division of Chemistry and the Environment
Division of Chemistry in Life Sciences
Division of Computational Chemistry
Division of Food Chemistry
Division of Green and Sustainable Chemistry
Division of Inorganic Chemistry
Division of Nuclear and Radiochemistry
Division of Organic Chemistry
Division of Organometallic Chemistry
Division of Physical Chemistry
Division of Solid State Chemistry
Division of Chemistry and Energy
Working Party on Chemistry for Cultural Heritage
Working party on Ethics in Chemistry
Working Party on the History of Chemistry
The European Young Chemists' Network (abbreviated to EYCN) is the younger members' division of EuChemS.
Events
EuChemS organises a variety of different events, including policy workshops with the European Institutions, specialised academic conferences, as well as the biennial EuChemS Chemistry Congress (ECC). There have been 8 Congresses so far since the first in 2006, held in Budapest, Hungary.
The congresses have taken place in: Turin, Italy (2008); Nuremberg, Germany (2010); Prague, Czechia (2012); Istanbul, Turkey (2014); Seville, Spain (2016); Liverpool, UK (2018); and Lisbon, Portugal (2022). The next ECC is set to be held in Dublin, Ireland in 2024. The ECCs usually attract some 2,000 chemists from more than 50 countries across the world.
Awards
EuChemS proposes several awards including the European Chemistry Gold Medal Award, awarded in 2018 to Nobel Laureate Bernard Feringa and in 2020 to Michele Parrinello; the EuChemS Award for Service; the EuChemS Lecture Award; the European Young Chemists' Award; the EuChemS EUCYS Award; the EuChemS Historical Landmarks Award, as well as several Divisional Awards.
In 2020, EuChemS implemented the EuChemS Chemistry Congress fellowship scheme. The aim of the fellowship scheme is to support early-career researchers (bachelor's, master's and PhD students) actively attending the EuChemS Chemistry Congresses.
EuChemS Gold Medal
The EuChemS Gold medal is awarded to reflect the exceptional achievements of scientists working in the field of chemistry in Europe.
2022
Dame Carol Robinson
2020
Michele Parrinello
2018
Bernard L. Feringa
EuChemS Historical Landmarks Awards
The EuChemS Historical Landmarks Award recognize sites important in the history of chemistry in Europe:
2020
Prague, Czech Republic (50th anniversary of the foundation of EuChemS).
Giessen, Germany, Justus Liebig’s Laboratory.
2019
Almadén mines in Spain (producing mercury for Spain and the Spanish empire) and Edessa Cannabis Factory Museum, Greece (a preserved factory producing ropes and twine from hemp).
2018
The Ytterby mine in Sweden (linked to the discovery of 8 chemical elements) and ABEA in Crete, Greece (a factory processing olive oil).
Projects and activities
In light of the UN-declared International Year of the Periodic Table of Chemical Elements in 2019, EuChemS published a Periodic Table depicting the scarcity and abundance of the chemical elements, to raise awareness of the need to develop better recycling capacities, to manage waste, and to find alternative materials for elements at risk of becoming unavailable.
Members & Supporting Members
Austrian Chemical Society
Austrian Society of Analytical Chemistry
Royal Flemish Chemical Society
Walloon Royal Society of Chemistry
Union of Chemists in Bulgaria
Croatian Chemical Society
Pancyprian Union of Chemists
Czech Chemical Society
Danish Chemical Society
Estonian Chemical Society
Finnish Chemical Society
French Chemical Society
German Chemical Society
German Bunsen Society for Physical Chemistry
Association of Greek Chemists
Hungarian Chemical Society
Institute of Chemistry of Ireland
Israel Chemical Society
Italian Chemical Society
Lithuanian Chemical Society
Association of Luxembourgish Chemists
Society of Chemists and Technologists of Macedonia
Chemical Society of Montenegro
Royal Dutch Chemical Society
Norwegian Chemical Society
Polish Chemical Society
Portuguese Chemical Society
Portuguese Electrochemical Society
Romanian Chemical Society
Mendeleev Russian Chemical Society
Russian Scientific Council on Analytical Chemistry
Serbian Chemical Society
Slovak Chemical Society
Slovenian Chemical Society
Royal Spanish Chemical Society
Spanish Society of Analytical Chemistry (SEQA)
Catalan Chemical Society
Swedish Chemical Society
Swiss Chemical Society
Turkish Chemical Society
Royal Society of Chemistry
Supporting members:
European Nanoporous Materials Institute of Excellence (ENMIX)
European Chemistry Thematic Network Association (ECTN)
European Federation of Managerial Staff in the Chemical and Allied Industries (FECCIA)
European Research Institute of Catalysis (ERIC)
European Federation for Medicinal Chemistry (EFMC)
International Sustainable Chemistry Collaborative Centre (ISC3)
ChemPubSoc Europe
Italian National Research Council (CNR)
See also
European Chemist
European Physical Society
Timeline of chemistry
European Research Council
Marie Skłodowska-Curie Actions
References
External links
EuChemS
EuChemS Newsletter
Brussels News Updates
1st EuChemS Chemistry Congress 2006
2nd EuChemS Chemistry Congress 2008
3rd EuChemS Chemistry Congress 2010
4th EuChemS Chemistry Congress 2012
5th EuChemS Chemistry Congress 2014
6th EuChemS Chemistry Congress 2016
7th EuChemS Chemistry Congress 2018
8th EuChemS Chemistry Congress 2022
Chemistry societies
International scientific organizations based in Europe
Organizations established in 1970 | European Chemical Society | Chemistry | 1,496 |
20,759,739 | https://en.wikipedia.org/wiki/Sesamodil | Sesamodil is a calcium channel blocker.
Calcium channel blockers
Benzodioxoles
Benzothiazines
Lactams
Phenol ethers | Sesamodil | Chemistry,Biology | 35 |
51,975,023 | https://en.wikipedia.org/wiki/Mailvelope | Mailvelope is free software for end-to-end encryption of email traffic inside of a web browser (Firefox, Chromium or Edge) that integrates itself into existing webmail applications ("email websites"). It can be used to encrypt and sign electronic messages, including attached files, without the use of a separate, native email client (like Thunderbird) using the OpenPGP standard.
The name is a portmanteau of the words "mail" and "envelope". It is published together with its source code under the terms of version 3 of the GNU Affero General Public License (AGPL). The company Mailvelope GmbH runs the development using a public code repository on GitHub. Development is sponsored by the Open Technology Fund and Internews.
Similar alternatives have included Mymail-Crypt and WebPG.
Features
Mailvelope equips webmail applications with OpenPGP functionality. Support for several popular providers like Gmail, Yahoo, Outlook on the web and others are preconfigured.
The webmail software Roundcube detects and supports Mailvelope as of version 1.2, released in May 2016, as do most (self-hosted) webmail clients. For Chromium/Chrome, the extension can be installed from an authenticated source using the integrated extension manager, the Chrome Web Store. In addition, Mailvelope is also available as an add-on for Firefox and Microsoft Edge.
Mailvelope is written in JavaScript and works according to the OpenPGP standard, a public-key cryptosystem first standardized in 1998. On preset or user-authorized web pages it overlays the page with its control elements, which are visually distinguished from the web application by a surrounding security background. This background can be customized to help detect impersonations. For encryption it relies on the functionality of the program library OpenPGP.js, a free JavaScript implementation of the OpenPGP standard. By running inside a separate inline frame, its code is executed separately from the web application, which should prevent the latter from accessing clear-text message contents.
The integration of Mailvelope via an API, developed in collaboration with United Internet, allows deeper integration between the webmail service and Mailvelope components. Thus, the setup and generation of a key pair can be done directly in the webmailer using a wizard. Mailvelope manages all OpenPGP keys locally in the browser. Since version 3.0, a local GnuPG installation can be included in Mailvelope's key management, allowing users to use native applications if desired.
History and usage
Thomas Oberndörfer started developing Mailvelope in spring 2012, with the first public version 0.4.0.1 released on August 24. The global surveillance disclosures of 2013 raised questions about the security of private and business email communication. At the time, email encryption with OpenPGP was considered too complicated to use. Moreover, the webmail services that were particularly popular with private individuals did not offer any end-to-end encryption functions. This led to various mentions of Mailvelope in the press as a possible solution to this problem.
Mario Heiderich and Krzysztof Kotowicz of Cure53 did a security audit on an alpha version from 2012/2013. Among other things, the separation from the web application and its data structures was improved based on its findings. In February 2014, the same group analysed the library OpenPGP.js which Mailvelope is based on. Version 0.8.0, released the following April, adopted the resulting fixes and added support for message signing. In May 2014, iSEC Partners published an analysis of the Firefox extension. Version 1.0.0 was published on August 18, 2015.
In April 2015, De-Mail providers equipped their services with a default disabled option for end-to-end encryption based on Mailvelope, but it could only be used in combination with Mobile TAN or the German electronic identity card. The new version of the extension was released in May 2015.
In August 2015, the email services of Web.de and GMX introduced support for OpenPGP encryption and integrated Mailvelope into their webmail applications for that. According to the company's own information, this option to encrypt e-mails in this way was available to around 30 million users.
A 2015 study examined the usability of Mailvelope as an example of a modern OpenPGP client and deemed it unsuitable for the masses. The authors recommended integrating assistant functionality, sending instructive invitation messages to new communication partners, and publishing basic explanatory texts. The Mailvelope-based OpenPGP system of United Internet integrates such functionality, and its usability earned some positive mentions in the press, particularly the offered key synchronization feature. A usability analysis from 2016 still found it "worthy of improvement" ("verbesserungswürdig"), though, and mentioned "confusing wording" ("irritierende Formulierungen"), missing communication of the concept, bad password recommendations, missing negative dissociation from the more prominent mode that features only transport encryption, plus insufficient support for checking key authenticity (to thwart man-in-the-middle attacks).
Mailvelope was enhanced in 2018/19 as part of a BSI initiative. Overall, the "key management was simplified, and security of the software improved." All security vulnerabilities in the Mailvelope source code, as well as in the OpenPGP.js program library used, brought to light by a security audit conducted by SEC Consult were closed. According to the BSI, one goal of the project was also to enable website operators to offer contact forms in the future to securely encrypt messages from the user's browser to the recipient. The import of new keys would be HTTPS-encrypted using the WKD (Web Key Directory) protocol.
References
External links
Software add-ons
Cryptographic software
Free software programmed in JavaScript
Free Firefox WebExtensions | Mailvelope | Mathematics | 1,278 |
2,077,667 | https://en.wikipedia.org/wiki/Knuckle-walking | Knuckle-walking is a form of quadrupedal walking in which the forelimbs hold the fingers in a partially flexed posture that allows body weight to press down on the ground through the knuckles. Gorillas and chimpanzees use this style of locomotion, as do anteaters and platypuses.
Knuckle-walking helps with actions other than locomotion on the ground. Gorillas use their fingers for the manipulation of food, whereas chimpanzees use them for the manipulation of food and for climbing. In anteaters and pangolins, the fingers have large claws for opening the mounds of social insects. Platypus fingers have webbing that extends past the fingers to aid in swimming, thus knuckle-walking is used to prevent stumbling. Gorillas move around by knuckle-walking, although they sometimes walk bipedally for short distances while carrying food or in defensive situations. Mountain gorillas use knuckle-walking plus other forms of hand support: fist-walking, which bears weight on the backs of the hands rather than the knuckles, and walking on the palms.
Anthropologists once thought that the common ancestor of chimpanzees and humans engaged in knuckle-walking, and humans evolved upright walking from knuckle-walking, a view thought to be supported by reanalysis of overlooked features on hominid fossils. Since then, scientists discovered Ardipithecus ramidus, a human-like hominid descended from the common ancestor of chimpanzees and humans. Ar. ramidus engaged in upright walking, but not knuckle-walking. This leads to the conclusion that chimpanzees evolved knuckle-walking after they split from humans six million years ago, and humans evolved upright walking without knuckle-walking. This would imply that knuckle-walking evolved independently in the African great apes, which would mean a homoplasic (independent) evolution of this locomotor behaviour in gorillas and chimpanzees. However, other studies have argued the opposite by pointing out that the differences in knuckle-walking between gorillas and chimpanzees can be explained by differences in positional behaviour, kinematics, and the biomechanics of weight-bearing.
Apes
Chimpanzees and gorillas engage in knuckle-walking. This form of hand-walking posture allows these tree-climbers to use their hands for terrestrial locomotion while retaining long fingers for gripping and climbing. It may also allow small objects to be carried in the fingers while walking on all fours. This is the most common type of movement for gorillas, although they also practice bipedalism.
Their knuckle-walking involves flexing the tips of their fingers and carrying their body weight down on the dorsal surface of their middle phalanges. The outer fingers are held clear off the ground. The wrist is held in a stable, locked position during the support phase of knuckle-walking by means of strongly flexed interphalangeal joints and extended metacarpophalangeal joints. The palm, as a result, is positioned perpendicular to the ground and in line with the forearm. The wrist and elbow are extended throughout the phase in which the knuckle-walker's hand carries body weight.
Differences exist between knuckle-walking in chimpanzees and gorillas; juvenile chimpanzees engage in less knuckle-walking than juvenile gorillas. Another difference is that the hand bones of gorillas lack key features that were once thought to limit the extension of the wrist during knuckle-walking in chimpanzees. For example, the ridges and concavities features of the capitate and hamate bones have been interpreted to enhance stability of weight-bearing; on this basis, they have been used to identify knuckle-walking in fossils. These are found in all chimpanzees, but in only two out of five gorillas. They are also less prominent when found in gorillas. They are, however, found in primates that do not knuckle-walk.
Chimpanzee knuckle-walking and gorilla knuckle-walking have been suggested to be biomechanically and posturally distinct. Gorillas use a form of knuckle-walking that is "columnar". In this forelimb posture, the hand and wrist joints are aligned in a relatively straight, neutral posture. In contrast, chimpanzees use an extended wrist posture. These differences underlie the different characteristics of their hand bones.
The difference has been attributed to the greater locomotion of chimpanzees in trees, compared to gorillas. The former frequently engage in both knuckle-walking and palm-walking branches. As a result, to preserve their balance in trees, chimpanzees, like other primates in trees, often extended their wrists. This need has produced different wrist bone anatomy, and through this, a different form of knuckle-walking.
Knuckle-walking has been reported in some baboons. Fossils attributed to Australopithecus anamensis and Au. afarensis also may have had specialized wrist morphology that was retained from an earlier knuckle-walking ancestor.
Gorillas
Gorillas walk on all fours with the fingers of the two forelimbs folded inward. A gorilla's forearm and wrist bones lock together to sustain the weight of the animal and create a strong supporting structure. Gorillas use this form of walking because their hips are attached differently from humans', so standing on two legs for long periods would eventually become painful. Gorillas sometimes do walk upright when dangers are present.
Other mammals
Giant anteaters and platypuses are also knuckle-walkers. Pangolins also sometimes walk on their knuckles. Some members of the extinct ungulate family Chalicotheriidae with gorilla-like forelimbs are suggested to have knuckle-walked. Some ground sloths may have also walked on their knuckles.
Advantages
Knuckle-walking tends to evolve when the fingers of the forelimb are specialized for tasks other than locomotion on the ground. In the gorilla, the fingers are used for the manipulation of food, and in chimpanzees, for the manipulation of food and climbing. In anteaters and pangolins, the fingers have large claws for opening the mounds of social insects. Platypus fingers have webbing that extends past the fingers to aid in swimming, thus knuckle-walking is used to prevent stumbling.
Knuckle-walking of chimpanzees and gorillas, arguably, originally started from fist-walking as found in orangutans. African apes most likely diverged from ancestral arboreal apes (similar to orangutans) that were adapted to distribute their weight among tree branches and forest canopies. Adjustments made for terrestrial locomotion early on may have involved fist-walking, later evolving into knuckle-walking.
Evolution of knuckle-walking
Competing hypotheses are given as to how knuckle-walking evolved as a form of locomotion, stemming from comparisons between African apes. High magnitudes of integration would indicate homoplasy of knuckle-walking in gorillas and chimpanzees, in which a trait is shared or similar between two species, but is not derived from a common ancestor. However, results show that they are not characterized by such high magnitudes, which does not support independent evolution of knuckle-walking. Similarities between gorillas and chimpanzees have been suggested to support a common origin for knuckle-walking, such as manual pressure distribution when practicing this form of locomotion. On the other hand, their behavioral differences have been hypothesized to suggest convergent evolution, or homoplasy.
Another hypothesis proposes that African apes came from a bipedal ancestor, as no differences in hemoglobin are seen between Pan and Homo, suggesting that their divergence occurred relatively recently. Examining protein sequence changes suggests that Gorilla diverged before the clade Homo-Pan, meaning that ancestral bipedalism would require parallel evolution of knuckle-walking in separate chimpanzee and gorilla radiations. The fact that chimpanzees practice both arboreal and knuckle-walking locomotion implies that knuckle-walking evolved from an arboreal ancestor as a solution for terrestrial travel, while still maintaining competent climbing skills.
Not all features associated with knuckle-walking are identical to the beings that practice it, as it suggests possible developmental differences. For example, brachiation and suspension are almost certainly homologous between siamangs and gibbons, yet they differ substantially in the relative growth of their locomotor skeletons. Differences in carpal growth are not necessarily a consequence of their function, as they could be related to differences in body mass, growth, etc. It is important to keep this in mind when examining similarities and differences between African apes themselves, as well as knuckle-walkers and humans, when developing hypotheses on locomotive evolution.
Human evolution
One theory of the origins of human bipedality is that it evolved from a terrestrial knuckle-walking ancestor. This theory is opposed to the theory that such bipedalism arose from a more generalized arboreal ape ancestor. The terrestrial knuckle-walking theory argues that early hominin wrist and hand bones retain morphological evidence of early knuckle-walking. The argument is not that they were knuckle-walkers themselves, but that it is an example of "phylogenetic 'lag'". "The retention of knuckle-walking morphology in the earliest hominids indicates that bipedalism evolved from an ancestor already adapted for terrestrial locomotion. ... Pre-bipedal locomotion is probably best characterized as a repertoire consisting of terrestrial knuckle-walking, arboreal climbing and occasional suspensory activities, not unlike that observed in chimpanzees today". See Vestigiality. Crucial to the knuckle-walking ancestor hypothesis is the role of the os centrale in the hominoid wrist, since the fusion of this bone with the scaphoid is among the clearest morphological synapomorphies of hominins and African apes. It has been shown that fused scaphoid-centrales display lower stress values during simulated knuckle-walking as compared to non-fused morphologies, hence supporting a biomechanical explanation for the fusion as a functional adaptation to this locomotor behavior. This suggests that this wrist morphology was probably retained from a recent common ancestor that showed knuckle-walking as part of its locomotor repertoire and that was probably later exapted for other functions (e.g. to withstand the shear stress during power-grip positions). Nevertheless, it is relevant to keep in mind that extant knuckle-walkers display diverse positional behaviors, and that knuckle-walking does not preclude climbing or exclude the possible importance of arboreality in the evolution of bipedalism in the hominin lineage.
Knuckle-walking, though has been suggested to have evolved independently and separately in Pan and Gorilla, so was not present in the human ancestors. This is supported by the evidence that gorillas and chimpanzees differ in their knuckle-walking-related wrist anatomy and in the biomechanics of their knuckle-walking. Kivell and Schmitt note "Features found in the hominin fossil record that have traditionally been associated with a broad definition of knuckle-walking are more likely reflecting the habitual Pan-like use of extended wrist postures that are particularly advantageous in an arboreal environment. This, in turn, suggests that human bipedality evolved from a more arboreal ancestor occupying a generalized locomotor and ecological niche common to all living apes". Arguments for the independent evolution of knuckle-walking have not gone without criticism, however. Another study of morphological integration in human and great ape wrists suggests that knuckle-walking did not evolve independently in gorillas and chimpanzees, which "places the emergence of hominins and the evolution of bipedalism in the context of a knuckle-walking background."
Related forms of hand-walking
Primates can walk on their hands in ways other than on their knuckles. They can walk on their fists, as orangutans do. In this form, body weight is borne on the back of the proximal phalanges.
Quadrupedal primate walking can be done on the palms. This occurs in many primates when walking on all fours on tree branches. It is also the method used by human infants when crawling on their knees or engaged in a "bear-crawl" (in which the legs are fully extended and weight is taken by the ankles). A few older children and some adults retain the ability to walk quadrupedally, even after acquiring bipedalism. A BBC2 and NOVA episode, "The Family That Walks on All Fours", reported on the Ulas family, in which five individuals grew up walking normally upon the palms of their hands and fully extended legs due to a recessive genetic mutation that causes a nonprogressive congenital cerebellar ataxia that impairs the balance needed for bipedality. Not only did they walk on the palms of their hands, but they could also do so while holding objects in their fingers.
Primates can also walk on their fingers. In olive baboons, rhesus macaques, and patas monkeys, such finger-walking turns to palm-walking when animals start to run. This has been suggested to spread the forces better across the wrist bones to protect them.
References
Ethology
Primatology
Walking | Knuckle-walking | Biology | 2,905 |
5,119,865 | https://en.wikipedia.org/wiki/HD%20116243 | HD 116243 is a single star in the southern constellation of Centaurus. It has the Bayer designation m Centauri, while HD 116243 is the identifier from the Henry Draper catalogue. This star has a yellow hue and is faintly visible to the naked eye with an apparent visual magnitude of +4.52. It is located at a distance of approximately 244 light years from the Sun based on parallax, and it has an absolute magnitude of 0.01. It is drifting further away with a radial velocity of +13.3 km/s.
This object is an aging bright giant star with a stellar classification of G6IIb. With the supply of hydrogen at its core exhausted, it has expanded to 12 times the radius of the Sun. The star is radiating 89 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 5,197 K.
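These quoted values can be cross-checked with the distance modulus and the Stefan–Boltzmann relation. A minimal sketch (the small residuals against the quoted absolute magnitude and luminosity reflect rounding and the neglect of interstellar extinction):

<syntaxhighlight lang="python">
import math

# Distance modulus: M = m - 5*log10(d_pc / 10)
m = 4.52                      # apparent visual magnitude
d_pc = 244 / 3.2616           # 244 light years in parsecs (~74.8 pc)
M = m - 5 * math.log10(d_pc / 10)
print(f"absolute magnitude ~ {M:+.2f}")   # ~ +0.15, near the quoted +0.01

# Stefan-Boltzmann: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
R, T, T_sun = 12.0, 5197.0, 5772.0
L = R**2 * (T / T_sun)**4
print(f"luminosity ~ {L:.0f} solar")      # ~ 95, near the quoted 89
</syntaxhighlight>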
References
G-type bright giants
Centauri, m
Centaurus
Durchmusterung objects
116243
065387
5041 | HD 116243 | Astronomy | 214 |
1,165,697 | https://en.wikipedia.org/wiki/Non-recurring%20engineering | Non-recurring engineering (NRE) cost refers to the one-time cost to research, design, develop and test a new product or product enhancement. When budgeting for a new product, NRE must be considered to determine whether the new product will be profitable. Even though a company will pay for NRE on a project only once, NRE costs can be prohibitively high and the product will need to sell well enough to produce a return on the initial investment. NRE is unlike production costs, which must be paid constantly to maintain production of a product. It is a form of fixed cost in economics terms. Once a system is designed, any number of units can be manufactured without increasing NRE cost.
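As an illustration of this fixed-cost arithmetic, a minimal break-even sketch (all figures below are hypothetical, invented for the example):

<syntaxhighlight lang="python">
# Break-even: units that must sell to recover the one-time NRE cost.
nre_cost = 500_000.0   # one-time research/design/test cost (hypothetical)
unit_price = 40.0      # revenue per unit sold (hypothetical)
unit_cost = 25.0       # recurring production cost per unit (hypothetical)

contribution_per_unit = unit_price - unit_cost
break_even_units = nre_cost / contribution_per_unit
print(f"break-even at {break_even_units:,.0f} units")   # 33,333 units
</syntaxhighlight>

Selling fewer units than this leaves the NRE unrecovered; every unit beyond it contributes to profit.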
NRE can also be structured and paid as a royalty fee. The royalty fee could be a percentage of sales revenue, of profit, or a combination of the two, incorporated into a mid- to long-term agreement between the technology supplier and the OEM.
In a project-type (manufacturing) company, large parts (possibly all) of the project represent NRE. In this case the NRE costs are likely to be included in the first project's costs; this can also be called research and development (R&D). If the firm cannot recover these costs, it must consider funding part of them from reserves, possibly taking a loss on the project, in the hope that the investment can be recovered from profit on future projects.
The concept of full-product NRE as described above may lead readers to believe that NRE expenses are unnecessarily high. However, focused NRE, in which small amounts of NRE money yield large returns by changing an existing product, is an option to consider as well. A small adjustment to an existing assembly may be considered in order to use a less expensive or improved subcomponent, or to replace a subcomponent which is no longer available. In the world of embedded firmware, NRE may be invested in code development to fix problems or to add features where the cost to implement is a very small percentage of the immediate return. Chrysler, for example, repaired a transmission problem by investing trivial NRE dollars in computer firmware, saving tens of millions of dollars in mechanical repairs to transmissions in the field.
Treated as financial investments, such focused NRE efforts serve as loss-control tools and are considered part of manufacturing profit enhancement.
References
External links
Non-recurring engineering costs by Daniel Shefer - a short explanation of NRE
Product lifecycle management
Engineering concepts | Non-recurring engineering | Engineering | 513 |
12,514,554 | https://en.wikipedia.org/wiki/Toadstone | The toadstone, also known as bufonite (from Latin bufo, "toad"), is a mythical stone or gem that was thought to be found in the head of a toad. It was supposed to be an antidote to poison and in this it is like batrachite, supposedly formed in the heads of frogs. Toadstones were actually the button-like fossilised teeth of Scheenstia (previously Lepidotes), an extinct genus of ray-finned fish from the Jurassic and Cretaceous periods. They appeared to be "stones that are perfect in form" and were set by European jewellers into magical rings and amulets from Medieval times until the 18th century.
Beliefs
From ancient times people associated the fossils with jewels that were set inside the heads of toads. The toad has poison glands in its skin, so it was naturally assumed that they carried their own antidote and that this took the form of a magical stone. They were first recorded by Pliny the Elder in the first century.
Like the fossilised shark teeth known as tonguestones, toadstones were thought to be antidotes for poison and were also used to treat epilepsy. As early as the 14th century, people began to adorn jewelry with toadstones for their supposed magical abilities. According to folklore, a toadstone had to be removed from an old toad while the creature was still alive; the 17th-century naturalist Edward Topsell wrote that this could be done by setting the toad on a piece of red cloth.
The true toadstone was taken by contemporary jewellers to be no bigger than the nail of a hand, and they varied in colour from a whitish brown through green to black, depending on where they were buried. They were supposedly most effective against poison when worn against the skin, on which occasion they were thought to heat up, sweat and change colour. If a person were bitten by a venomous creature, a toadstone would be touched against the affected part to effect a cure. Alternatively, Johannes de Cuba, in his book Gart der Gesundheit of 1485, claimed that the toadstone would help with kidney disease and earthly happiness.
Loose toadstones were discovered among other gemstones in the Elizabethan Cheapside Hoard and there are surviving toadstone rings in the Ashmolean Museum and the British Museum.
Allusions in literature
The toadstone is alluded to by Duke Senior in Shakespeare's As You Like It (1599), in Act 2, Scene 1, lines 12 to 14:
Sweet are the uses of adversity;
Which, like the toad, ugly and venomous,
Wears yet a precious jewel in his head.
In James Branch Cabell's short story "Balthazar's Daughter" (collected in The Certain Hour) and its subsequent play adaptation The Jewel Merchants, Alessandro de Medici attempts to seduce Graciosa by listing various precious jewels in his possession, including "jewels cut from the brain of a toad".
Jewelry
Some toadstones were used in jewelry, including on a crown held at Aachen Cathedral used to coronate Charles IV, Holy Roman Emperor.
See also
Bezoar
Biomineralization
References
Further reading
New Oxford American Dictionary, under the entry "toadstone".
The Complete Works of William Shakespeare by Crown Publishers Inc
External links
"Toadstones: A note to Pseudodoxia Epidemica, Book III, chapter 13"
A collection of notes maintained by James Eason of the University of Chicago comprising excerpts from Thomas Nicols and other authors
New York Times reference, October 1890
Whitehurst, John (1713–1788). An inquiry into the original state and formation of the earth, pp. 184–5, 190 ff.
Folklore
Toads
Mythological substances
Lepisosteiformes
Magic items | Toadstone | Physics,Chemistry | 757 |
2,086,023 | https://en.wikipedia.org/wiki/Lysogeny%20broth | Lysogeny broth (LB) is a nutritionally rich medium primarily used for the growth of bacteria. Its creator, Giuseppe Bertani, intended LB to stand for lysogeny broth, but LB has also come to colloquially mean Luria broth, Lennox broth, life broth or Luria–Bertani medium. The formula of the LB medium was published in 1951 in the first paper of Bertani on lysogeny. In this article he described the modified single-burst experiment and the isolation of the phages P1, P2, and P3. He had developed the LB medium to optimize Shigella growth and plaque formation.
LB medium formulations have been an industry standard for the cultivation of Escherichia coli as far back as the 1950s. These media have been widely used in molecular microbiology applications for the preparation of plasmid DNA and recombinant proteins. LB continues to be one of the most common media used for maintaining and cultivating laboratory recombinant strains of Escherichia coli. For physiological studies, however, the use of LB medium is discouraged.
There are several common formulations of LB. Although they are different, they generally share a somewhat similar composition of ingredients used to promote growth, including the following:
Peptides and casein peptones
Vitamins (including B vitamins)
Trace elements (e.g. nitrogen, sulfur, magnesium)
Minerals
Sodium ions for transport and osmotic balance are provided by sodium chloride. Tryptone is used to provide essential amino acids such as peptides and peptones to the growing bacteria, while the yeast extract is used to provide a plethora of organic compounds helpful for bacterial growth. These compounds include vitamins and certain trace elements.
In his original 1951 paper, Bertani used 10 grams of NaCl and 1 gram of glucose per 1 L of solution; Luria in his "L broth" of 1957 copied Bertani's original recipe exactly. Recipes published later have typically left out the glucose.
Formula
The formulations generally differ in the amount of sodium chloride, thus providing selection of the appropriate osmotic conditions for the particular bacterial strain and desired culture conditions. The low salt formulations, Lennox and Luria, are ideal for cultures requiring salt-sensitive antibiotics.
LB (Miller) (10 g/L NaCl)
LB (Lennox) (5 g/L NaCl)
LB (Luria) (0.5 g/L NaCl)
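Since the three formulations differ only in their salt content, scaling a batch is simple arithmetic. A minimal sketch, assuming the tryptone (10 g/L) and yeast extract (5 g/L) amounts common to the standard formulations:

<syntaxhighlight lang="python">
# Grams of each ingredient for a given batch volume of LB medium.
NACL_G_PER_L = {"Miller": 10.0, "Lennox": 5.0, "Luria": 0.5}

def lb_recipe(formulation: str, volume_l: float) -> dict:
    return {
        "tryptone_g": 10.0 * volume_l,       # common to all formulations
        "yeast_extract_g": 5.0 * volume_l,   # common to all formulations
        "nacl_g": NACL_G_PER_L[formulation] * volume_l,
    }

print(lb_recipe("Lennox", 0.5))
# {'tryptone_g': 5.0, 'yeast_extract_g': 2.5, 'nacl_g': 2.5}
</syntaxhighlight>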
Adjusting the pH
Prior to autoclaving, some laboratories adjust the pH of LB to 7.5 or 8 with sodium hydroxide. However, sodium hydroxide does not provide any buffering capacity to the media, and this results in rapid changes to the pH during bacteria cultivation. To get around this some laboratories prefer to adjust the pH with 5-10 mmol/L TRIS buffer, diluted from 1 mol/L TRIS stock at the desired pH. However, it is not absolutely necessary to adjust the pH for most situations. Some laboratories adjust the pH to 7.0 merely as a precaution.
See also
Agar plate
Salvador Luria
SOC medium—another widely used medium for culture of Escherichia coli in molecular biology work
References
Microbiological media | Lysogeny broth | Biology | 670 |
13,008,203 | https://en.wikipedia.org/wiki/Immunization%20registry | An immunization registry or immunization information system (IIS) is an information system that collects vaccination data about all persons within a geographic area. It consolidates the immunization records from multiple sources for each person living in its jurisdiction.
Introduction
Immunization information systems (IIS) are an important tool to increase and sustain high vaccination coverage by consolidating vaccination records of children and adults from multiple providers; forecasting which doses are past due, due, or due next, to support the generation of reminder and recall vaccination notices for each individual; and providing official vaccination forms and vaccination coverage assessments. One of the national health objectives is to increase to 95% the proportion of children aged <6 years who participate in fully operational population-based IIS.
A "fully operational" IIS includes 95% enrollment or higher of all catchment area children less than 6 years of age with 2 or more immunization encounters administered according to ACIP recommendations.
In a population-based IIS, children are entered into the IIS at birth, often through a linkage with electronic birth records. An IIS record also can be initiated by a health care provider at the time of a child's first immunization. If an IIS includes all children in a given geographical area and all providers are reporting immunization information, it can provide a single data source for all community immunization partners. Such a population-based IIS can make it easier to carry out the demonstrably effective immunization strategies (e.g., reminder/recall, AFIX, and WIC linkages) and thereby decrease the resources needed to achieve and maintain high levels of coverage. IIS can also be used to enhance adult immunization services and coverage. Pharmacy immunizations are reported to state IIS, allowing for a complete lifetime immunization history.
The concept of IIS is not new. Many individual practices and health plans administer immunizations to their patients. Records of these immunizations often are based on computerized information systems designed for other purposes, such as billing. There is also a growing movement toward the development of totally computerized patient medical records. Although an IIS includes all immunizations administered by health care providers participating in it, only population-based IIS are capable of providing information on all children and all adult doses of vaccines administered by all providers.
See also
Vaccination schedule
References
CDC Immunization Information System
American Immunization Registry Association (AIRA)
Electronic health record software
Vaccination | Immunization registry | Biology | 525 |
27,868,407 | https://en.wikipedia.org/wiki/How%20the%20Universe%20Works | How The Universe Works is a science documentary television series that provides scientific explanations about the inner workings of the universe and everything it encompasses. With the use of computer-generated imagery (CGI) and visual effects, each episode presents and narrates a topic about the universe (e.g.: the origin of the universe, the formation and the evolution of the Solar System, and the origin and behavior of life), which then are complemented with scientific insights from leading scientists of organizations such as NASA and CERN.
The series originally aired on the Discovery Channel in 2010. All but the second, third and eighth seasons were narrated by Mike Rowe. The second, third and eighth seasons, as well as episodes of the fifth and sixth seasons, were narrated by Erik Todd Dellums.
The first season, broadcast from April 25 to May 24, 2010, was released on Blu-ray on February 28, 2012. Since its second season, consisting of eight episodes broadcast between July 11 and August 29, 2012, the show has aired on the Science Channel. The third season aired between July 9 and September 3, 2014. The fourth season premiered on July 14, 2015, as part of the Science Channel's "Space Week," in honor of New Horizons' flyby of Pluto that day; the season ran through September 1, 2015. The show's fifth season aired from November 22, 2016, through February 7, 2017. The sixth season premiered on January 9, 2018, and ran through March 13, 2018. The seventh season premiered on January 8, 2019. On December 30, 2019, it was announced that the eighth season would premiere on January 2, 2020. The ninth season premiered on March 24, 2021. The tenth season premiered on March 6, 2022. The eleventh season premiered on March 5, 2023.
Episode list
Season 1 (2010)
Season 2 (2012)
Season 3 (2014)
Season 4 (2015)
Season 5 (2016–17)
Season 6 (2018)
Season 7 (2019)
Season 8 (2020)
In the UK version of the series, there are two episodes of Season 8 which are not included in its US counterpart. These episodes are as follows:
Season 9 (2021)
Season 10 (2022)
Season 11 (2023)
See also
Alien Planet
Cosmos: A Spacetime Odyssey
Extreme Universe
Into the Universe with Stephen Hawking
Killers of the Cosmos
Mars: The Secret Science
The Planets and Beyond
Space's Deepest Secrets
Strip the Cosmos
Through the Wormhole
The Universe
References
External links
by Science Channel (Discovery Channel)
2010 American television series debuts
2010s American documentary television series
2020s American documentary television series
Discovery Channel original programming
Documentary television series about astronomy
Science Channel original programming | How the Universe Works | Astronomy | 548 |
1,051,589 | https://en.wikipedia.org/wiki/Pyraminx | The Pyraminx is a regular tetrahedron puzzle in the style of Rubik's Cube. It was made and patented by Uwe Mèffert after the original 3-layered Rubik's Cube by Ernő Rubik, and introduced by Tomy Toys of Japan (then the 3rd largest toy company in the world) in 1981.
Optimal solutions
The maximum number of twists required to solve the Pyraminx is 11. There are 933,120 different positions (disregarding the trivial rotation of the tips), a number that is sufficiently small to allow a computer search for optimal solutions. The table below summarizes the result of such a search, stating the number p of positions that require n twists to solve the Pyraminx:
{| class="wikitable"
!n
|0||1||2 ||3 ||4 ||5 ||6 ||7 ||8 ||9 ||10 ||11
|-
!p
|1||8||48||288||1728||9896||51808||220111||480467||166276||2457||32
|}
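The 933,120 total itself follows from counting the movable pieces: the 4 axial pieces each have 3 orientations, the 6 edges admit only even permutations (6!/2), and the orientation of the last edge is forced (2^5). A quick sketch confirming that this product matches both the stated total and the distribution in the table above:

<syntaxhighlight lang="python">
from math import factorial

axials = 3**4                   # 4 axial pieces, 3 orientations each
edge_perms = factorial(6) // 2  # only even permutations of the 6 edges
edge_flips = 2**5               # the last edge's orientation is determined
total = axials * edge_perms * edge_flips
print(total)                    # 933120

by_depth = [1, 8, 48, 288, 1728, 9896, 51808, 220111, 480467, 166276, 2457, 32]
assert sum(by_depth) == total   # the table's distribution sums correctly
</syntaxhighlight>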
Records
The world record single solve is 0.73 seconds, set by Simon Kellum of the United States at Middleton Meetup Thursday 2023. The world record average of five solves (excluding fastest and slowest) is 1.27 seconds, set by Sebastian Lee of Australia at Maitland Spring 2024.
Top 5 solvers by single solve
Top 5 solvers by Olympic average of 5 solves
Methods
There are many methods for solving a Pyraminx. They can be split up into two main groups.
1) V First Methods - In these methods, two or three edges are solved first, and a set of algorithms, also called LL (last layer) algorithms, is used to solve the remainder of the puzzle.
2) Top First Methods - In these methods, three edges around a center piece are solved first, and the remainder of the puzzle is solved using a set of algorithms.
Common V first methods-
a) Layer by Layer - In this method, a face with all edges permuted is solved, and then the remaining puzzle is solved by a single algorithm from a set of 5.
b) Algorithmic L4E and Intuitive L4E - L4E or last 4 edges is somewhat similar to Layer by Layer. The only difference is that only two edges are solved around three centers. Both of these methods solve the last four edges in the same step, hence the name. The difference is that Intuitive L4E requires a lot of visualization and "intuition" to solve the last four edges while algorithmic L4E uses algorithms. Algorithmic L4E is generally used more at higher levels, although there are very fast Intuitive L4E users. It is also easy to transition between Intuitive L4E and Algorithmic L4E.
Common top first methods-
a) One Flip - This method uses two edges around one center solved and the third edge flipped. There are a total of six cases after this step, for which algorithms are memorized and executed. The third step involves using a common set of algorithms for all top first methods, also called Keyhole last layer, which involves 5 algorithms, four of them being the mirrors of each other.
b) Keyhole - This method uses two edges in the right place around one center, and the third edge placed elsewhere on the puzzle. The centers of the fourth color are then solved using the slot formed by the non-permuted edge. The last step is solved using Keyhole last layer algorithms.
c) OKA - In this method, one edge is oriented around two edges in the wrong place, but one of the edges that is in the wrong place belongs to the block itself. The last edge is found on the bottom layer, and a very simple algorithm is executed to get it in the right place, followed by keyhole last layer algorithms.
Some other common top first methods are WO and Nutella.
Many top Pyraminx speedsolvers use only V-first methods, as top-first methods are generally considered clunky and outdated with modern hardware.
Variations
There are several variations of the puzzle. The simplest, the Tetraminx, is equivalent to the (3x) Pyraminx but without the tips, resembling a truncated tetrahedron. There also exist "higher-order" versions, such as the 4x Master Pyraminx and the 5x Professor's Pyraminx.
The Master Pyraminx has 4 layers and 16 triangles-per-face (compared to 3 layers and 9 triangles-per-face of the original), and is based on the Skewb Diamond mechanism. This version has about 2.6817 × 10^15 combinations. The Master Pyraminx has
4 "tips" (same as the original Pyraminx)
4 "middle axials" (same as the original Pyraminx)
4 "centers" (similar to Rubik's Cube, none in the original Pyraminx)
6 "inner edges" (similar to Rubik's Cube, none in the original Pyraminx)
12 "outer edges" (2-times more than the 6 of the original Pyraminx)
In summary, the Master Pyraminx has 30 "manipulable" pieces. However, like the original, 8 of the pieces (the tips and middle axials) are fixed in position (relative to each other) and can only be rotated in place. Also, the 4 centers are fixed in position and can only rotate (like the Rubik's Cube). So there are only 18 (30-8-4) "truly movable" pieces; since this is 10% fewer than the 20 "truly movable" pieces of the Rubik's Cube, it should be no surprise that the Master Pyraminx has about 10,000 times fewer combinations than a Rubik's Cube (43 quintillion in the short scale or 43 trillion in the long scale). The Master Pyraminx can be solved in numerous ways: one is layer by layer, like the original; another is reducing it to a Jing's pyraminx.
Reviews
Games
See also
Pyraminx Duo
Pyramorphix and Master Pyramorphix, two regular tetrahedron puzzles which resemble the Pyraminx but are mechanically very different from it
Pocket Cube
Rubik's Cube
Rubik's Revenge
Rubik's Triamid
Professor's Cube
V-Cube 6
V-Cube 7
V-Cube 8
Skewb
Skewb Diamond
Megaminx
Dogic
Combination puzzles
Tower Cube
References
External links
Jaap's Pyraminx and related puzzles page, with solution
Pyraminx solution from PuzzleSolver
Pyraminx - ruwix.com (how to solve)
A solution to the Pyraminx by Jonathan Bowen
An efficient and easy to follow solution favoured by speed solvers
Patterns A collection of pretty patterns for the Pyraminx
1980s toys
Mechanical puzzles
Combination puzzles
Rubik's Cube
Tetrahedra | Pyraminx | Mathematics | 1,502 |
13,606,566 | https://en.wikipedia.org/wiki/Rotolock%20valve | A service valve is a valve used to separate one piece of equipment from another in any system where liquids or gases circulate. Two types of service valves are marketed: the Schrader-type valve and the stem-type service valve. Specialized versions are made for specific purposes, such as the Rotolock valve (a stem-type valve also called a Rotalock valve), which is a special refrigeration valve with a teflon ring seated against a machined surface enclosed by a threaded fitting; this valve allows the technician to remove all refrigerant from the compressor without requiring removal of the system charge.
References
Valves
Heating, ventilation, and air conditioning | Rotolock valve | Physics,Chemistry | 140 |
7,272,050 | https://en.wikipedia.org/wiki/Rangekeeper | Rangekeepers were electromechanical fire control computers used primarily during the early part of the 20th century. They were sophisticated analog computers whose development reached its zenith following World War II, specifically with the Computer Mk 47 in the Mk 68 Gun Fire Control system. During World War II, rangekeepers directed gunfire on land, at sea, and in the air. While rangekeepers were widely deployed, the most sophisticated rangekeepers were mounted on warships to direct the fire of long-range guns.
These warship-based computing devices needed to be sophisticated because the problem of calculating gun angles in a naval engagement is very complex. In a naval engagement, both the ship firing the gun and the target are moving with respect to each other. In addition, the ship firing its gun is not a stable platform, because it will roll, pitch, and yaw due to wave action, changes of course, and the firing of its own guns. The rangekeeper also performed the required ballistics calculations associated with firing a gun. This article focuses on US Navy shipboard rangekeepers, but the basic principles of operation are applicable to all rangekeepers regardless of where they were deployed.
Function
A rangekeeper is defined as an analog fire control system that performed three functions:
Target tracking
The rangekeeper continuously computed the current target bearing. This is a difficult task because both the target and the ship firing (generally referred to as "own ship") are moving. This requires knowing the target's range, course, and speed accurately. It also requires accurately knowing the own ship's course and speed.
Target position prediction
When a gun is fired, it takes time for the projectile to arrive at the target. The rangekeeper must predict where the target will be at the time of projectile arrival. This is the point at which the guns are aimed.
Gunfire correction
Directing the fire of a long-range weapon to deliver a projectile to a specific location requires many calculations. The projectile point of impact is a function of many variables, including: gun azimuth, gun elevation, wind speed and direction, air resistance, gravity, latitude, gun/sight parallax, barrel wear, powder load, and projectile type.
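Of these functions, the prediction step is inherently iterative: the time of flight depends on the range to the predicted position, which in turn depends on the time of flight. A minimal sketch of this fixed-point iteration, assuming straight-line target motion and a hypothetical, made-up time-of-flight function (real rangekeepers read this relationship from ballistic data for the specific gun and projectile):

<syntaxhighlight lang="python">
import math

def time_of_flight(range_yd: float) -> float:
    # Hypothetical placeholder; in practice this depended on the gun,
    # projectile type, and powder charge.
    return range_yd / 800.0   # seconds

def predict_intercept(tx, ty, vx, vy, iterations=10):
    """Aim point: where the target will be when the projectile arrives.
    Own ship is at the origin; target course and speed assumed constant."""
    px, py = tx, ty
    for _ in range(iterations):
        tof = time_of_flight(math.hypot(px, py))
        px, py = tx + vx * tof, ty + vy * tof
    return px, py

# Target 15,000 yd due north, crossing east at 15 yd/s (~27 knots):
print(predict_intercept(0.0, 15000.0, 15.0, 0.0))   # ~ (281, 15000)
</syntaxhighlight>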
History
Manual fire control
The early history of naval fire control was dominated by the engagement of targets within visual range (also referred to as direct fire). In fact, most naval engagements before 1800 were conducted at very close range.
Even during the American Civil War, the famous engagement between the USS Monitor and the CSS Virginia was often conducted at ranges of less than 100 yards (90 m).
With time, naval guns became larger and had greater range. At first, the guns were aimed using the technique of artillery spotting. Artillery spotting involved firing a gun at the target, observing the projectile's point of impact (fall of shot), and correcting the aim based on where the shell was observed to land, which became more and more difficult as the range of the gun increased.
Predecessor fire control tools and systems
Between the American Civil War and 1905, numerous small improvements were made in fire control, such as telescopic sights and optical rangefinders. There were also procedural improvements, like the use of plotting boards to manually predict the position of a ship during an engagement. Around 1905, mechanical fire control aids began to become available, such as the Dreyer Table, Dumaresq (which was also part of the Dreyer Table), and Argo Clock, but these devices took a number of years to become widely deployed. These devices were early forms of rangekeepers.
The issue of directing long-range gunfire came into sharp focus during World War I with the Battle of Jutland. While the British were thought by some to have the finest fire control system in the world at that time, during the Battle of Jutland only 3% of their shots actually struck their targets. At that time, the British primarily used a manual fire control system. The one British ship in the battle that had a mechanical fire control system turned in the best shooting results. This experience contributed to rangekeepers becoming standard issue.
Power drives and Remote Power Control (RPC)
The US Navy's first deployment of a rangekeeper was on the USS Texas in 1916. Because of the limitations of the technology at that time, the initial rangekeepers were crude. During World War I, the rangekeepers could generate the necessary angles automatically, but sailors had to manually follow the directions of the rangekeepers (a task called "pointer following" or "follow the pointer"). Pointer following could be accurate, but the crews tended to make inadvertent errors when they became fatigued during extended battles. During World War II, servomechanisms (called "power drives" in the U.S. Navy and RPC in the Royal Navy) were developed that allowed the guns to automatically steer to the rangekeeper's commands with no manual intervention. The Mk. 1 and Mk. 1A computers contained approximately 20 servomechanisms, mostly position servos, to minimize torque load on the computing mechanisms. The Royal Navy first installed RPC, experimentally, aboard HMS Champion in 1928. In the 1930s RPC was used for naval searchlight control, and during WW2 it was progressively installed on pom-pom mounts and directors, 4-inch, 4.5-inch and 5.25-inch gun mounts.
During their long service life, rangekeepers were updated often as technology advanced, and by World War II they were a critical part of an integrated fire control system. The incorporation of radar into the fire control system early in World War II provided ships with the ability to conduct effective gunfire operations at long range in poor weather and at night.
Service in World War II
During World War II, rangekeeper capabilities were expanded to the extent that the name "rangekeeper" was deemed to be inadequate. The term "computer," which had been reserved for human calculators, came to be applied to the rangekeeper equipment. After World War II, digital computers began to replace rangekeepers. However, components of the analog rangekeeper system continued in service with the US Navy until the 1990s.
The performance of these analog computers was impressive. The battleship USS North Carolina during a 1945 test was able to maintain an accurate firing solution on a target during a series of high-speed turns. It is a major advantage for a warship to be able to maneuver while engaging a target.
Night naval engagements at long range became feasible when radar data could be input to the rangekeeper. The effectiveness of this combination was demonstrated in November 1942 at the Third Battle of Savo Island, when the USS Washington engaged the Japanese battlecruiser Kirishima at a range of approximately 8,400 yards (7.7 km) at night. The Kirishima was set aflame, suffered a number of explosions, and was scuttled by her crew. She had been hit by nine rounds out of 75 fired (12% hit rate).
The wreck of the Kirishima was discovered in 1992 and showed that the entire bow section of the ship was missing.
The Japanese during World War II did not develop radar or automated fire control to the level of the US Navy and were at a significant disadvantage.
The Royal Navy began to introduce gyroscopic stabilization of their director gunsights in World War I, and by the start of World War II all warships fitted with director control had gyroscopically controlled gunsights.
The last combat action for the analog rangekeepers, at least for the US Navy, was in the 1991 Persian Gulf War, when the rangekeepers on the Iowa-class battleships directed their last rounds in combat.
Construction
Rangekeepers were very large, and the ship designs needed to make provisions to accommodate them. For example, the Ford Mk 1A Computer weighed approximately 3,150 pounds (1,430 kg).
The Mk. 1/1A's mechanism support plates, some of them quite thick, were made of aluminum alloy, but the computer was nevertheless very heavy. On at least one refloated museum ship, the destroyer USS Cassin Young (now in Boston), the computer and Stable Element more than likely still are below decks, because they are so difficult to remove.
The rangekeepers required a large number of electrical signal cables for synchro data transmission links over which they received information from the various sensors (e.g. gun director, pitometer, rangefinder, gyrocompass) and sent commands to the guns.
These computers also had to be formidably rugged, partly to withstand the shocks created by firing the ship's own guns, and also to withstand the effects of hostile enemy hits to other parts of the ship. They not only needed to continue functioning, but also stay accurate.
The Ford Mark 1/1A mechanism was mounted into a pair of large, approximately cubical castings with very wide openings, the latter covered by gasketed castings. Individual mechanisms were mounted onto thick aluminum-alloy plates and, along with interconnecting shafts, were progressively installed into the housing. Progressive assembly meant that future access to much of the computer required progressive disassembly.
The Mk 47 computer was a radical improvement in accessibility over the Mk 1/1A. It was more akin to a tall, wide storage cabinet in shape, with most or all dials on the front vertical surface. Its mechanism was built in six sections, each mounted on very heavy-duty pull-out slides. Behind the panel were typically a horizontal and a vertical mounting plate, arranged in a tee.
Mechanisms
The problem of rangekeeping
Long-range gunnery is a complex combination of art, science, and mathematics. There are numerous factors that affect the ultimate placement of a projectile and many of these factors are difficult to model accurately. As such, the accuracy of battleship guns was ≈1% of range (sometimes better, sometimes worse). Shell-to-shell repeatability was ≈0.4% of range.
Accurate long-range gunnery requires that a number of factors be taken into account:
Target course and speed
Own ship course and speed
Gravity
Coriolis effect: Because the Earth is rotating, there is an apparent force acting on the projectile.
Internal ballistics: Guns do wear, and this aging must be taken into account by keeping an accurate count of the number of projectiles sent through the barrel (this count is reset to zero after the installation of a new liner). There are also shot-to-shot variations due to barrel temperature and interference between guns firing simultaneously.
External ballistics: Different projectiles have different ballistic characteristics. Also, air conditions have an effect as well (temperature, wind, air pressure).
Parallax correction: In general, the guns and the target-spotting equipment (e.g. the radar antenna mounted on the gun director, or a pelorus) are in different locations on the ship. This creates a parallax error for which corrections must be made.
Projectile characteristics (e.g. ballistic coefficient)
Powder charge weight and temperature
The calculations to predict and compensate for all these factors are complicated, frequent and error-prone when done by hand. Part of the complexity came from the amount of information that must be integrated from many different sources. For example, information from the following sensors, calculators, and visual aids must be integrated to generate a solution:
Gyrocompass: This device provides an accurate true north own ship course.
Rangefinders: Optical devices for determining the range to a target.
Pitometer Logs: These devices provided an accurate measurement of the own ship's speed.
Range clocks: These devices provided a prediction of the target's range at the time of projectile impact if the gun was fired now. This function could be considered "range keeping".
Angle clocks: This device provided a prediction of the target's bearing at the time of projectile impact if the gun was fired now.
Plotting board: A map of the gunnery platform and target that allowed predictions to be made as to the future position of a target. (The compartment ("room") where the Mk. 1 and Mk. 1A computers were located was called "Plot" for historical reasons.)
Various slide rules: These devices performed the various calculations required to determine the required gun azimuth and elevation.
Meteorological sensors: Temperature, wind speed, and humidity all have an effect on the ballistics of a projectile. U.S. Navy rangekeepers and analog computers did not consider different wind speeds at differing altitudes.
To increase speed and reduce errors, the military felt a dire need to automate these calculations. To illustrate the complexity, Table 1 lists the types of input for the Ford Mk 1 Rangekeeper (ca 1931).
{| class="wikitable"
|+Table 1: Manual Inputs Into Pre-WWII Rangekeeper
|-
| style="width:80pt"|Variable
| style="width:200pt"|Data Source
|-
| Range
| Phoned from range finder
|-
|Own ship course
|Gyrocompass repeater
|-
|Own ship speed
|Pitometer log
|-
|Target course
|Initial estimates for rate control
|-
|Target speed
|Initial estimates for rate control
|-
|Target bearing
|Automatically from director
|-
|Spotting data
|Spotter, by telephone
|}
However, even with all this data, the rangekeeper's position predictions were not infallible. The rangekeeper's prediction characteristics could be used against it. For example, many captains under long-range gun attack would make violent maneuvers to "chase salvos" or "steer for the fall of shot," i.e., maneuver to the position of the last salvo splashes. Because the rangekeepers are constantly predicting new positions for the target, it was unlikely that subsequent salvos would strike the position of the previous salvo. Practical rangekeepers had to assume that targets were moving in a straight-line path at a constant speed, to keep complexity within acceptable limits. A sonar rangekeeper was built to track a target circling at a constant radius of turn, but that function was disabled.
General technique
The data were transmitted by rotating shafts. These were mounted in ball-bearing brackets fastened to the support plates. Most corners were at right angles, facilitated by miter gears in 1:1 ratio.
The Mk. 47, which was modularized into six sections on heavy-duty slides, connected the sections together with shafts in the back of the cabinet. Shrewd design meant that the data carried by these shafts required no manual zeroing or alignment; only their movement mattered. The aided-tracking output from an integrator roller is one such example. When the section was slid back into normal position, the shaft couplings mated as soon as the shafts rotated.
Common mechanisms in the Mk. 1/1A included many miter-gear differentials, a group of four 3-D cams, some disk-ball-roller integrators, and servo motors with their associated mechanisms; all of these had bulky shapes. However, most of the computing mechanisms were thin stacks of wide plates of various shapes and functions. A given mechanism might be quite thin, while more than a few were notably wide. Space was at a premium, but for precision calculations, more width permitted a greater total range of movement to compensate for slight inaccuracies stemming from looseness in sliding parts.
The Mk. 47 was a hybrid, doing some computing electrically, and the rest mechanically. It had gears and shafts, differentials, and totally enclosed disk-ball-roller integrators. However, it had no mechanical multipliers or resolvers ("component solvers"); these functions were performed electronically, with multiplication carried out using precision potentiometers.
In the Mk. 1/1A, however, excepting the electrical drive servos, all computing was mechanical.
Implementations of mathematical functions
The implementation methods used in analog computers were many and varied. The fire control equations implemented during World War II on analog rangekeepers are the same equations implemented later on digital computers. The key difference is that the rangekeepers solved the equations mechanically. While mathematical functions are not often implemented mechanically today, mechanical methods exist to implement all the common mathematical operations. Some examples include:
Addition and subtraction
Differential gears, usually referred to by technicians simply as "differentials", were often used to perform addition and subtraction operations. The Mk. 1A contained approximately 160 of them. The history of this gearing for computing dates to antiquity (see Antikythera mechanism).
Multiplication by a constant
Gear ratios were very extensively used to multiply a value by a constant.
Multiplication of two variables
The Mk. 1 and Mk.1A computer multipliers were based on the geometry of similar triangles.
Sine and cosine generation (polar-to-rectangular coordinate conversion)
These mechanisms would be called resolvers, today; they were called "component solvers" in the mechanical era. In most instances, they resolved an angle and magnitude (radius) into sine and cosine components, with a mechanism consisting of two perpendicular Scotch yokes. A variable crankpin radius handled the magnitude of the vector in question.
Integration
Ball-and-disk integrators performed the integration operation. As well, four small Ventosa integrators in the Mk. 1 and Mk. 1A computers scaled rate-control corrections according to angles.
The integrators had rotating discs and a full-width roller mounted in a hinged casting, pulled down toward the disc by two strong springs. Twin balls permitted free movement of the radius input with the disk stopped, something done at least daily for static tests. Integrators were made with discs of 3, 4 and 5 inch (7.6, 10 and 12.5 cm) diameters, the larger being more accurate. Ford Instrument Company integrators had a clever mechanism for minimizing wear when the ball-carrier carriage was in one position for extended periods.
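Numerically, the ball-and-disk integrator is easy to model: the output roller's rate is the disc's constant rate scaled by the ball carriage's radial position, so the output shaft accumulates the integral of the radial input. A discrete-time sketch with made-up scaling:

<syntaxhighlight lang="python">
import math

# Ball-and-disk integrator: the disc spins at a constant rate, the ball's
# radial position encodes the integrand, and the output accumulates it.
dt = 0.01
disc_rate = 1.0           # constant-speed disc drive
output = 0.0
for step in range(1000):  # integrate y = sin(t) over t in [0, 10)
    t = step * dt
    y = math.sin(t)       # radial ball position (the integrand)
    output += y * disc_rate * dt
print(round(output, 3))   # ~ 1 - cos(10) ~ 1.84
</syntaxhighlight>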
Component integrators
Component integrators were essentially Ventosa integrators, all enclosed. Think of a traditional heavy-ball computer mouse and its pickoff rollers at right angles to each other. Underneath the ball is a roller that turns to rotate the mouse ball. However, the shaft of that roller can be set to any angle you want. In the Mk. 1/1A, a rate-control correction (keeping the sights on target) rotated the ball, and the two pickoff rollers at the sides distributed the movement appropriately according to angle. That angle depended upon the geometry of the moment, such as which way the target was heading.
Differentiation
Differentiation was performed by using an integrator in a feedback loop.
Functions of one variable
Rangekeepers used a number of cams to generate function values. Many face cams (flat discs with wide spiral grooves) were used in both rangekeepers. For surface fire control (the Mk. 8 Range Keeper), a single flat cam was sufficient to define ballistics.
Functions of two variables
In the Mk. 1 and Mk 1A computers, four three-dimensional cams were needed. These used cylindrical coordinates for their inputs, one being the rotation of the cam, and the other being the linear position of the ball follower. The radial displacement of the follower yielded the output.
The four cams in the Mk. 1/1A computer provided mechanical time fuse setting, time of flight (this time is from firing to bursting at or near the target), time of flight divided by predicted range, and superelevation combined with vertical parallax correction. (Superelevation is essentially the amount the gun barrel needs to be raised to compensate for gravity drop.)
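Superelevation can be illustrated with the vacuum (no-drag) approximation: for muzzle velocity v and horizontal range R, the required elevation is θ = ½·arcsin(gR/v²). The real cams also encoded air resistance, which this sketch deliberately ignores; the numbers below are representative, not those of any particular gun:

<syntaxhighlight lang="python">
import math

def superelevation_deg(range_m: float, muzzle_velocity: float, g: float = 9.81) -> float:
    # Vacuum approximation; actual ballistic cams also included drag effects.
    return 0.5 * math.degrees(math.asin(g * range_m / muzzle_velocity**2))

# A shell at 760 m/s reaching out to 15 km needs roughly:
print(f"{superelevation_deg(15_000, 760):.1f} degrees")   # ~ 7.4 (vacuum)
</syntaxhighlight>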
Servo speed stabilization
The Mk.1 and Mk.1A computers were electromechanical, and many of their mechanical calculations required drive movements of precise speeds. They used reversible two-phase capacitor-run induction motors with tungsten contacts. These were stabilized primarily by rotary magnetic drag (eddy-current) slip clutches, similar to classical rotating-magnet speedometers, but with a much higher torque. One part of the drag was geared to the motor, and the other was constrained by a fairly stiff spring. This spring offset the null position of the contacts by an amount proportional to motor speed, thus providing velocity feedback. Flywheels mounted on the motor shafts, but coupled by magnetic drags, prevented contact chatter when the motor was at rest. Unfortunately, the flywheels must also have slowed down the servos somewhat.
A more elaborate scheme, which placed a rather large flywheel and differential between the motor and the magnetic drag, eliminated velocity error for critical data, such as gun orders.
The Mk. 1 and Mk. 1A computer integrator discs required a particularly elaborate system to provide constant and precise drive speeds. They used a motor with its speed regulated by a clock escapement, cam-operated contacts, and a jeweled-bearing spur-gear differential. Although the speed oscillated slightly, the total inertia made it effectively a constant-speed motor. At each tick, contacts switched on motor power, then the motor opened the contacts again. It was in effect slow pulse-width modulation of motor power according to load. When running, the computer had a unique sound as motor power was switched on and off at each tick—dozens of gear meshes inside the cast-metal computer housing spread out the ticking into a "chunk-chunk" sound.
Assembly
A detailed description of how to dismantle and reassemble the system was contained in the two-volume Navy Ordnance Pamphlet OP 1140 with several hundred pages and several hundred photographs. When reassembling, shaft connections between mechanisms had to be loosened and the mechanisms mechanically moved so that an output of one mechanism was at the same numerical setting (such as zero) as the input to the other. Fortunately these computers were especially well-made, and very reliable.
Related targeting systems
During WWII, all the major warring powers developed rangekeepers to different levels.
Rangekeepers were only one member of a class of electromechanical computers used for fire control during World War II. Related analog computing hardware used by the United States included:
Norden bombsight
US bombers used the Norden bombsight, which used similar technology to the rangekeeper for predicting bomb impact points.
Torpedo Data Computer (TDC)
US submarines used the TDC to compute torpedo launch angles. This device also had a rangekeeping function that was referred to as "position keeping." This was the only submarine-based fire control computer during World War II that performed target tracking. Because space within a submarine hull is limited, the TDC designers overcame significant packaging challenges in order to mount the TDC within the allocated volume.
M-9/SCR-584 Anti-Aircraft System
This equipment was used to direct air defense artillery. It made a particularly good account of itself against the V-1 flying bombs.
See also
Director (military)
Gun data computer
Fire-control system
Kerrison Predictor
Notes
Bibliography
External links
Appendix one, Classification of Director Instruments
USN Report on IJN Technology
Excellent article on the performance of long-range gunnery between the World Wars.
British fire control
British fire control expert
Ford Instrument Company museum site. Ford built rangekeepers for the US Navy during World Wars I and II
OP1140, a superb Navy manual. Chapter 2 has many fine illustrations and clearly written text.
Basic Mechanisms in Fire Control Computers. United States Navy Training Film. MN-6783a and MN-6783b. 1953. 2 parts of 4.
Military computers
Electro-mechanical computers
Analog computers
Artillery operation
Artillery components
Naval artillery
Fire-control computers of World War II | Rangekeeper | Technology | 4,775 |
31,834,828 | https://en.wikipedia.org/wiki/Bert%20Schiettecatte | Bert Schiettecatte (born January 1, 1979) is a Belgian entrepreneur who created the Audiocubes.
Biography
Bert Schiettecatte was born in Ghent, Belgium. He has a background in electronic music production. He holds an MSc in computer science from the University of Brussels (VUB) and a Master of Arts in Music, Science and Technology from CCRMA, a department at Stanford University (BAEF grant).
While studying at CCRMA, Bert Schiettecatte developed a strong interest in hardware engineering, electronics, and human-computer interaction. Together with Eto Otitigbe and Luigi Castelli, he created a customized dance pad and a laser harp (such as the one used by Jean-Michel Jarre) at CCRMA.
After taking several research positions, Bert founded his company Percussa in 2004 to develop tangible user interface technology for music making. Percussa's first product, Audiocubes, was launched in January 2007.
Awards
2009: He received the Qwartz Max Mathews award for the invention of the Audiocubes
2010: He was invited to give a talk at TEDx Mediterranean in Cannes.
Publications
References
1979 births
Living people
Electronics engineers
Stanford University alumni
Audio engineering | Bert Schiettecatte | Engineering | 258 |
13,113,563 | https://en.wikipedia.org/wiki/Plutonium%28III%29%20chloride | Plutonium(III) chloride is a chemical compound with the formula PuCl3. This ionic plutonium salt can be prepared by reacting the metal with hydrochloric acid.
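A balanced equation consistent with this preparation (assuming hydrogen gas is evolved, as is typical when electropositive metals dissolve in hydrochloric acid) would be:

 2 Pu + 6 HCl → 2 PuCl3 + 3 H2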
Structure
Plutonium atoms in crystalline PuCl3 are 9-coordinate, and the structure is tricapped trigonal prismatic. It crystallizes as the trihydrate and forms lavender-blue solutions in water.
Safety
As with all plutonium compounds, it is subject to control under the Nuclear Non-Proliferation Treaty. Due to the radioactivity of plutonium, all of its compounds, PuCl3 included, are warm to the touch. Such contact is not recommended, since touching the material may result in serious injury.
References
Plutonium(III) compounds
Nuclear materials
Chlorides
Actinide halides | Plutonium(III) chloride | Physics,Chemistry | 162 |
5,817,943 | https://en.wikipedia.org/wiki/European%20Organisation%20for%20Civil%20Aviation%20Equipment | The European Organisation for Civil Aviation Equipment (EUROCAE) is an international organisation that deals exclusively with aviation standardisation, for both airborne and ground systems and equipment. It was created in 1963 in Lucerne, Switzerland by a decision of the European Civil Aviation Conference as a European forum focusing on electronic equipment for air transport. The organisation's offices are based in Saint-Denis, France near Paris.
History and operations
EUROCAE has now been operating for more than 50 years as a non-profit organisation whose membership comprises aviation stakeholders: regulators, manufacturers, service providers, users (such as airlines and airports) and academia. The membership is not limited to the European region.
From the outset, EUROCAE has developed performance specifications and other documents exclusively dedicated to the aviation community. EUROCAE documents are widely referenced by the International Civil Aviation Organisation (ICAO) as Guidance Material and by the European Aviation Safety Agency (EASA) as means of compliance to European Technical Standard Orders (ETSOs) and other regulatory documents.
To achieve the desired global harmonisation of aviation standards, EUROCAE has a close cooperation with RTCA, Inc. and SAE International. About 50% of the EUROCAE Working Groups (WG) work jointly with RTCA, and another 10% with SAE. The joint development of standards and the subsequent reference of those standards by EASA and the FAA as Acceptable Means of Compliance allows for a globally harmonised implementation of specific applications or systems based on the state of the art technology. This includes aircraft but also satellites.
EUROCAE documents are also produced in the context of the applicable ICAO standards and are coherent with existing ARINC specifications to ensure global interoperability.
Organisation
EUROCAE currently has more than 450 members, according to its own statements.
EUROCAE documents (ED) are developed by the Working Groups (WG), composed of voluntary experts from the member organisations of EUROCAE and – in the case of joint activities – RTCA and SAE. Before publication, EDs undergo a rigorous internal and external scrutiny process (Open Consultation) to ensure the high quality of the approved standards. Since its creation, EUROCAE has published more than 250 EDs.
The EUROCAE governance is led by the Council, which is composed of senior staff from full members of the Association who are elected by the annual General Assembly. A Technical Advisory Committee consisting of technical experts in various aviation domains advises the Council in technical decisions. The day-to-day work of the organisation is carried out by the EUROCAE Secretariat, a collective term for the Secretary General, Programme Managers and administrative staff.
References
External links
EUROCAE
Avionics
International aviation organizations
Safety organizations
Business organizations based in Europe | European Organisation for Civil Aviation Equipment | Technology | 548 |
11,845,078 | https://en.wikipedia.org/wiki/1972%20Chicago%20commuter%20rail%20crash | A collision between two commuter trains in Chicago occurred during the cloudy morning rush hour on October 30, 1972, and was the worst such crash in Chicago's history. Illinois Central Gulf Train 416, made up of newly purchased Highliners, overshot the 27th Street station on what is now the Metra Electric Line, and the engineer asked for and received permission from the train's conductor to back the train to the platform. This move was then made without the flag protection required by the railroad's rules. The train's crew had not used a flagman before; while flag protection was a prescribed practice, it had fallen out of use. Instead, the conductor and the engineer worked in concert to back up the train, with the curve in the track partially blocking the view.
Train 416 passed the automatic block signals, which cleared express Train 720, made up of more heavily constructed single level cars, to continue at full speed on the same track. The engineer of the express train did not see the bilevel train backing up until it was too late. When the trains collided, the front car of the express train telescoped the rear car of the bilevel train, killing 45 people and injuring 332. The death toll could have been higher, but the accident occurred near Michael Reese Hospital (which later moved) and Mercy Hospital.
Later investigations showed that Train 720 likely could have seen the red signal protecting Train 416 in time to avoid a collision had it been traveling more slowly (30 mph). It is estimated that Train 720 hit Train 416 at about 44–50 mph.
References
External links
Final accident report in full - NTSB - On ROSAP website
Collision of Illinois Central Gulf Railroad Commuter Trains: Investigation Summary and Recommendations
List of crash dead identified - Clipping from Chicago Tribune - Newspapers.com
Works cited
Railway accidents in 1972
Railway accidents and incidents in Illinois
1970s in Chicago
Accidents and incidents involving Illinois Central Railroad
1972 in Illinois
October 1972 events in the United States | 1972 Chicago commuter rail crash | Technology | 395 |
30,669 | https://en.wikipedia.org/wiki/Tunguska%20event | The Tunguska event was a large explosion of between 3 and 50 megatons that occurred near the Podkamennaya Tunguska River in Yeniseysk Governorate (now Krasnoyarsk Krai), Russia, on the morning of 30 June 1908. The explosion over the sparsely populated East Siberian taiga flattened an estimated 80 million trees over an area of 2,150 km2 (830 sq mi) of forest, and eyewitness accounts suggest up to three people may have died. The explosion is generally attributed to a meteor air burst, the atmospheric explosion of a stony asteroid about 50–60 metres (160–200 feet) wide. The asteroid approached from the east-south-east, probably with a relatively high speed of about 27 km/s (~Mach 80). Though the incident is classified as an impact event, the object is thought to have exploded at an altitude of 5 to 10 kilometres (3 to 6 miles) rather than hitting the Earth's surface, leaving no impact crater.
The Tunguska event is the largest impact event on Earth in recorded history, though much larger impacts occurred in prehistoric times. An explosion of this magnitude would be capable of destroying a large metropolitan area. The event has been depicted in numerous works of fiction. The equivalent Torino scale rating for the impactor is 8: a certain collision with local destruction.
Description
On 30 June 1908 N.S. (cited as 17 June 1908 O.S. before the implementation of the Soviet calendar in 1918), at around 7:17 AM local time, Evenki natives and Russian settlers in the hills northwest of Lake Baikal observed a bluish light, nearly as bright as the Sun, moving across the sky and leaving a thin trail. Closer to the horizon, there was a flash producing a billowing cloud, followed by a pillar of fire that cast a red light on the landscape. The pillar split in two and faded, turning to black. About ten minutes later, there was a sound similar to artillery fire. Eyewitnesses closer to the explosion reported that the source of the sound moved from the east to the north of them. The sounds were accompanied by a shock wave that knocked people off their feet and broke windows hundreds of kilometres away.
The explosion registered at seismic stations across Eurasia, and air waves from the blast were detected in Germany, Denmark, Croatia, and the United Kingdom – and as far away as Batavia, Dutch East Indies, and Washington, D.C. It is estimated that, in some places, the resulting shock wave was equivalent to an earthquake measuring 5.0 on the Richter scale. Over the next few days, night skies in Asia and Europe were aglow. There are contemporaneous reports of brightly lit photographs being successfully taken at midnight (without the aid of flashbulbs) in Sweden and Scotland. It has been theorized that this sustained glowing effect was due to light passing through high-altitude ice particles that had formed at extremely low temperatures as a result of the explosion – a phenomenon reproduced decades later by Space Shuttle exhaust plumes. In the United States, a Smithsonian Astrophysical Observatory program at the Mount Wilson Observatory in California observed a months-long decrease in atmospheric transparency consistent with an increase in suspended dust particles.
Selected eyewitness reports
Though the region of Siberia in which the explosion occurred was very sparsely populated in 1908, there are accounts of the event from eyewitnesses, and regional newspapers reported the event shortly after it occurred.
The testimony of S. Semenov, as recorded by Russian mineralogist Leonid Kulik's expedition in 1930:
Testimony of Chuchan of the Shanyagir tribe, as recorded by I. M. Suslov in 1926:
Sibir newspaper, 2 July 1908:
Siberian Life newspaper, 27 July 1908:
Krasnoyaretz newspaper, 13 July 1908:
Scientific investigation
Since the 1908 event, an estimated 1,000 scholarly papers (most in Russian) have been published about the Tunguska explosion. Owing to the site's remoteness and the limited instrumentation available at the time of the event, modern scientific interpretations of its cause and magnitude have relied chiefly on damage assessments and geological studies conducted many years after the event. Estimates of its energy have ranged from 3 to 30 megatons of TNT (13 to 126 petajoules).
Only more than a decade after the event did any scientific analysis of the region take place, in part due to the area's isolation and significant political upheaval affecting Russia in the 1910s. In 1921, the Russian mineralogist Leonid Kulik led a team to the Podkamennaya Tunguska River basin to conduct a survey for the Soviet Academy of Sciences. Although they never visited the central blast area, the many local accounts of the event led Kulik to believe that a giant meteorite impact had caused the event. Upon returning, he persuaded the Soviet government to fund an expedition to the suspected impact zone, based on the prospect of salvaging meteoric iron.
Kulik led a scientific expedition to the Tunguska blast site in 1927. He hired local Evenki hunters to guide his team to the centre of the blast area, where they expected to find an impact crater. To their surprise, there was no crater at ground zero. Instead they found a zone, roughly 8 kilometres (5 miles) across, where the trees were scorched and devoid of branches, but still standing upright. Trees farther from the centre had been partly scorched and knocked down away from the centre, creating a large radial pattern of downed trees.
In the 1960s, it was established that the zone of levelled forest occupied an area of 2,150 square kilometres (830 square miles), its shape resembling a gigantic spread-eagled butterfly with a "wingspan" of 70 kilometres (43 miles) and a "body length" of 55 kilometres (34 miles). Upon closer examination, Kulik found holes that he erroneously concluded were meteorite holes; he did not have the means at that time to excavate the holes.
During the next 10 years, there were three more expeditions to the area. Kulik found several dozen little "pothole" bogs, each 10 to 50 metres (33 to 164 feet) in diameter, that he thought might be meteoric craters. After a laborious exercise in draining one of these bogs (the so-called "Suslov's crater", 32 metres in diameter), he found an old tree stump on the bottom, ruling out the possibility that it was a meteoric crater. In 1938, Kulik arranged for an aerial photographic survey of the area covering the central part of the leveled forest. The original negatives of these aerial photographs (1,500 negatives, each 18 by 18 centimetres) were burned in 1975 by order of Yevgeny Krinov, then Chairman of the Committee on Meteorites of the USSR Academy of Sciences, as part of an initiative to dispose of flammable nitrate film. Positive prints were preserved for further study in Tomsk.
Expeditions sent to the area in the 1950s and 1960s found microscopic silicate and magnetite spheres in siftings of the soil. Similar spheres were predicted to exist in the felled trees, although they could not be detected by contemporary means. Later expeditions did identify such spheres in the resin of the trees. Chemical analysis showed that the spheres contained high proportions of nickel relative to iron, which is also found in meteorites, leading to the conclusion they were of extraterrestrial origin. The concentration of the spheres in different regions of the soil was also found to be consistent with the expected distribution of debris from a meteor air burst. Later studies of the spheres found unusual ratios of numerous other metals relative to the surrounding environment, which was taken as further evidence of their extraterrestrial origin.
Chemical analysis of peat bogs from the area also revealed numerous anomalies considered consistent with an impact event. The isotopic signatures of carbon, hydrogen, and nitrogen at the layer of the bogs corresponding to 1908 were found to be inconsistent with the isotopic ratios measured in the adjacent layers, and this abnormality was not found in bogs outside the area. The region of the bogs showing these anomalous signatures also contains an unusually high proportion of iridium, similar to the iridium layer found in the Cretaceous–Paleogene boundary. These unusual proportions are believed to result from debris from the falling body that deposited in the bogs. The nitrogen is believed to have been deposited as acid rain, a suspected fallout from the explosion.
Other scientists disagree: "Some papers report that hydrogen, carbon and nitrogen isotopic compositions with signatures similar to those of CI and CM carbonaceous chondrites were found in Tunguska peat layers dating from the TE (Kolesnikov et al. 1999, 2003) and that iridium anomalies were also observed (Hou et al. 1998, 2004). Measurements performed in other laboratories have not confirmed these results (Rocchia et al. 1990; Tositti et al. 2006)."
Researcher John Anfinogenov has suggested that a boulder found at the event site, known as John's stone, is a remnant of the meteorite, but oxygen isotope analysis of the quartzite suggests that it is of hydrothermal origin, and probably related to Permian-Triassic Siberian Traps magmatism.
In 2013, a team of researchers published the results of an analysis of micro-samples from a peat bog near the centre of the affected area, which show fragments that may be of extraterrestrial origin.
Earth impactor model
The leading scientific explanation for the explosion is a meteor air burst by an asteroid above the Earth's surface.
Meteoroids enter Earth's atmosphere from outer space every day, travelling at a speed of at least 11 km/s (about 7 mi/s). The heat generated by compression of air in front of the body (ram pressure) as it travels through the atmosphere is immense, and most meteoroids burn up or explode before they reach the ground. Early estimates of the energy of the Tunguska air burst ranged from 10–15 megatons of TNT up to 30 megatons of TNT (130 PJ), depending on the exact height of the burst as estimated when the scaling laws from the effects of nuclear weapons are employed. More recent calculations that include the effect of the object's momentum find that more of the energy was focused downward than would be the case from a nuclear explosion, and estimate that the air burst had an energy range from 3 to 5 megatons of TNT (13 to 21 PJ). The 15-megaton (Mt) estimate represents an energy about 1,000 times greater than that of the Trinity nuclear test, roughly equal to that of the United States' Castle Bravo nuclear test in 1954 (which measured 15.2 Mt), and one third that of the Soviet Union's Tsar Bomba test in 1961. A 2019 paper suggests the explosive power of the Tunguska event may have been around 20–30 megatons.
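As a check on the paired figures above, the standard conversion of 1 megaton of TNT to about 4.184 petajoules can be applied to each estimate; a minimal sketch:

    # 1 megaton of TNT is conventionally defined as ~4.184 petajoules.
    MT_TNT_IN_PJ = 4.184
    for mt in (3, 5, 15, 30):
        print(f"{mt} Mt of TNT ~ {mt * MT_TNT_IN_PJ:.0f} PJ")

This reproduces the 13 and 21 PJ values quoted for the 3–5 Mt range and about 126 PJ (rounded to 130 in the text) for 30 Mt.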
Since the second half of the 20th century, close monitoring of Earth's atmosphere through infrasound and satellite observation has shown that asteroid air bursts with energies comparable to those of nuclear weapons routinely occur, although Tunguska-sized events, on the order of 5–15 megatons, are much rarer. Eugene Shoemaker estimated that 20-kiloton events occur annually and that Tunguska-sized events occur about once every 300 years. More recent estimates place Tunguska-sized events at about once every thousand years, with 5-kiloton air bursts averaging about once per year. Most of these are thought to be caused by asteroid impactors, as opposed to mechanically weaker cometary materials, based on their typical penetration depths into the Earth's atmosphere. The largest asteroid air burst observed with modern instrumentation was the 500-kiloton Chelyabinsk meteor in 2013, which shattered windows and produced meteorites.
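If Tunguska-sized events are treated as a Poisson process at the quoted once-per-millennium rate (an illustrative assumption; the rate estimates themselves vary widely), the chance of at least one such event in a given interval follows directly:

    import math

    RATE_PER_YEAR = 1 / 1000   # assumed rate: one Tunguska-sized event per millennium
    for years in (100, 300, 1000):
        p = 1 - math.exp(-RATE_PER_YEAR * years)
        print(f"P(at least one event in {years} years) = {p:.1%}")

Under this assumption, the probability is about 9.5% per century, rising to about 63% over a full millennium.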
Glancing impact hypothesis
In 2020, a group of Russian scientists used a range of computer models to calculate the passage of asteroids with diameters of 200, 100, and 50 metres at oblique angles across Earth's atmosphere. They used a range of assumptions about the object's composition, modelling it as if it were made of iron, rock, or ice. The model that most closely matched the observed event was an iron asteroid up to 200 metres in diameter, travelling at 11.2 km per second, that glanced off the Earth's atmosphere and returned into solar orbit.
Blast pattern
The explosion's effect on the trees near the explosion's hypocentre was similar to the effects of the conventional Operation Blowdown. These effects are caused by the blast wave produced by large air-burst explosions. The trees directly below the explosion are stripped as the blast wave moves vertically downward, but remain standing upright, while trees farther away are knocked over because the blast wave is travelling closer to horizontal when it reaches them.
Soviet experiments performed in the mid-1960s, with model forests (made of matches on wire stakes) and small explosive charges slid downward on wires, produced butterfly-shaped blast patterns similar to the pattern found at the Tunguska site. The experiments suggested that the object had approached at an angle of roughly 30 degrees from the ground and 115 degrees from north and had exploded in midair.
Asteroid or comet
In 1930, the British meteorologist and mathematician F. J. W. Whipple suggested that the Tunguska body was a small comet. A comet is composed of dust and volatiles, such as water ice and frozen gases, and could have been completely vaporised by the impact with Earth's atmosphere, leaving no obvious traces. The comet hypothesis was further supported by the glowing skies (or "skyglows" or "bright nights") observed across Eurasia for several evenings after the impact, which are possibly explained by dust and ice that had been dispersed from the comet's tail across the upper atmosphere. The cometary hypothesis gained general acceptance among Soviet Tunguska investigators by the 1960s.
In 1978, Slovak astronomer Ľubor Kresák suggested that the body was a fragment of Comet Encke, a periodic comet with a period of just over three years that stays entirely within Jupiter's orbit. It is also responsible for the Beta Taurids, an annual meteor shower with a maximum activity around 28–29 June. The Tunguska event coincided with that shower's peak activity, the Tunguska object's approximate trajectory is consistent with what would be expected from a fragment of Comet Encke, and a hypothetical risk corridor has been calculated demonstrating that if the impactor had arrived a few minutes earlier, it would have exploded over the US or Canada. It is now known that bodies of this kind explode at frequent intervals tens to hundreds of kilometres above the ground. Military satellites have been observing these explosions for decades. In 2019, astronomers searched for hypothesized asteroids ~100 metres in diameter from the Taurid swarm between 5–11 July and 21 July – 10 August. To date, there have been no reports of discoveries of any such objects.
In 1983, astronomer Zdeněk Sekanina published a paper criticising the comet hypothesis. He pointed out that a body composed of cometary material, travelling through the atmosphere along such a shallow trajectory, ought to have disintegrated, whereas the Tunguska body apparently remained intact into the lower atmosphere. Sekanina also argued that the evidence pointed to a dense rocky object, probably of asteroidal origin. This hypothesis was further boosted in 2001, when Farinella, Foschini, et al. released a study calculating the probabilities based on orbital modelling extracted from the atmospheric trajectories of the Tunguska object. They concluded with a probability of 83% that the object moved on an asteroidal path originating from the asteroid belt, rather than on a cometary one (probability of 17%). Proponents of the comet hypothesis have suggested that the object was an extinct comet with a stony mantle that allowed it to penetrate the atmosphere.
The chief difficulty in the asteroid hypothesis is that a stony object should have produced a large crater where it struck the ground, but no such crater has been found. It has been hypothesised that the asteroid's passage through the atmosphere caused pressures and temperatures to build up to a point where the asteroid abruptly disintegrated in a huge explosion. The destruction would have to have been so complete that no remnants of substantial size survived, and the material scattered into the upper atmosphere during the explosion would have caused the skyglows. Models published in 1993 suggested that the stony body would have been about 60 metres (200 feet) across, with physical properties somewhere between an ordinary chondrite and a carbonaceous chondrite. Typical carbonaceous chondrite material tends to dissolve in water rather quickly unless it is frozen.
Christopher Chyba and others have proposed a process whereby a stony asteroid could have exhibited the Tunguska impactor's behaviour. Their models show that when the forces opposing a body's descent become greater than the cohesive force holding it together, it blows apart, releasing nearly all its energy at once. The result is no crater, with damage distributed over a fairly wide radius, and all the damage resulting from the thermal energy the blast releases.
During the 1990s, Italian researchers, coordinated by the physicist Giuseppe Longo from the University of Bologna, extracted resin from the core of the trees in the area of impact to examine trapped particles that were present during the 1908 event. They found high levels of material commonly found in rocky asteroids and rarely found in comets.
Kelly et al. (2009) contend that the impact was caused by a comet because of the sightings of noctilucent clouds following the impact, a phenomenon caused by massive amounts of water vapour in the upper atmosphere. They compared the noctilucent cloud phenomenon to the exhaust plume from NASA's Space Shuttle Endeavour. A team of Russian researchers led by Edward Drobyshevski in 2009 suggested that the near-Earth asteroid 2005 NB56 may be a possible candidate for the Tunguska object's parent body, as the asteroid made a close approach to Earth on 27 June 1908, three days before the Tunguska impact. The team suspected that 2005 NB56's orbit likely fits with the Tunguska object's modelled orbit, even with the effects of weak non-gravitational forces. In 2013, analysis of fragments from the Tunguska site by a joint US-European team was consistent with an iron meteorite.
The February 2013 Chelyabinsk bolide event provided ample data for scientists to create new models for the Tunguska event. Researchers used data from both Tunguska and Chelyabinsk to perform a statistical study of over 50 million combinations of bolide and entry properties that could produce Tunguska-scale damage when breaking apart or exploding at similar altitudes. Some models focused on combinations of properties which created scenarios with similar effects to the tree-fall pattern as well as the atmospheric and seismic pressure waves of Tunguska. Four different computer models produced similar results; they concluded that the likeliest candidate for the Tunguska impactor was a stony body between 50 and 80 metres (164 and 262 feet) in diameter, entering the atmosphere at roughly 55,000 km/h (34,000 mph), exploding at an altitude of 10 to 14 kilometres (6 to 9 miles), and releasing explosive energy equivalent to between 10 and 30 megatons. This is similar to the blast energy equivalent of the 1980 volcanic eruption of Mount St. Helens. The researchers also concluded that impactors of this size hit the Earth only at an average interval on the scale of millennia.
Lake Cheko
In June 2007, scientists from the University of Bologna identified a lake in the Tunguska region as a possible impact crater from the event. They do not dispute that the Tunguska body exploded in midair, but believe that a fragment survived the explosion and struck the ground. Lake Cheko is a small bowl-shaped lake about 8 kilometres (5 miles) north-northwest of the hypocentre.
The hypothesis has been disputed by other impact crater specialists. A 1961 investigation had dismissed a modern origin of Lake Cheko, saying that the presence of metres-thick silt deposits on the lake bed suggests an age of at least 5,000 years, but more recent research suggests that only a metre or so of the sediment layer on the lake bed is "normal lacustrine sedimentation", a depth consistent with an age of about 100 years. Acoustic-echo soundings of the lake floor support the hypothesis that the Tunguska event formed the lake. The soundings revealed a conical shape for the lake bed, which is consistent with an impact crater. Magnetic readings indicate a possible metre-sized chunk of rock below the lake's deepest point that may be a fragment of the colliding body. Finally, the lake's long axis points to the Tunguska explosion's hypocentre, about 7 kilometres (4.3 miles) away. Work is still being done at Lake Cheko to determine its origins.
In 2017, new research by Russian scientists pointed to a rejection of the theory that the Tunguska event created Lake Cheko. They used soil research to determine that the lake is 280 years old or even much older; in any case clearly older than the Tunguska event. In analyzing soils from the bottom of Lake Cheko, they identified a layer of radionuclide contamination from mid-20th century nuclear testing at Novaya Zemlya. The depth of this layer gave an average annual sedimentation rate of between 3.6 and 4.6 mm a year. These sedimentation values are less than half of the 1 cm/year calculated by Gasperini et al. in their 2009 publication on their analysis of the core they took from Lake Cheko in 1999. The Russian scientists in 2017 counted at least 280 such annual varves in the 1260 mm long core sample pulled from the bottom of the lake, representing an age older than the Tunguska event.
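The age argument reduces to simple arithmetic: dividing the core length by the measured sedimentation rates yields ages far exceeding the roughly 109 years between 1908 and the 2017 study. A minimal sketch of that calculation:

    CORE_LENGTH_MM = 1260          # length of the core pulled from Lake Cheko
    for rate_mm_per_year in (3.6, 4.6):
        age_years = CORE_LENGTH_MM / rate_mm_per_year
        print(f"At {rate_mm_per_year} mm/yr: about {age_years:.0f} years of sediment")

Even at the faster rate, the core spans about 274 years of deposition, consistent with the at least 280 varves counted and well beyond the age of the Tunguska event.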
Additionally, there are problems with impact physics: It is unlikely that a stony meteorite in the right size range would have the mechanical strength necessary to survive atmospheric passage intact while retaining a velocity high enough to excavate a crater that size on reaching the ground.
Geophysical hypotheses
Though scientific consensus is that the Tunguska explosion was caused by the impact of a small asteroid, there are some dissenters. Astrophysicist Wolfgang Kundt has proposed that the Tunguska event was caused by the release and subsequent explosion of 10 million tons of natural gas from within the Earth's crust. The basic idea is that natural gas leaked out of the crust and then rose to its equal-density height in the atmosphere; from there, it drifted downwind, forming a sort of wick that eventually found an ignition source such as lightning. Once the gas was ignited, the fire streaked along the wick to the source of the leak in the ground, whereupon there was an explosion.
The similar verneshot hypothesis has also been proposed as a possible cause of the Tunguska event. Other research has proposed a geophysical mechanism for the event.
Similar event
A smaller air burst occurred over a populated area on 15 February 2013, at Chelyabinsk in the Ural district of Russia. The exploding meteoroid was determined to have been an asteroid that measured about 17–20 metres (56–66 feet) across. It had an estimated initial mass of 11,000 tonnes and exploded with an energy release of approximately 500 kilotons. The air burst inflicted over 1,200 injuries, mainly from broken glass falling from windows shattered by its shock wave.
In fiction
In fiction, many alternative explanations for the event appear. The notion that it was caused by an alien spaceship is a popular one that gained prominence following the publication of Russian science fiction writer Alexander Kazantsev's 1946 short story "Explosion". The idea that the cause was the impact of a micro black hole has also appeared.
See also
Asteroid Day, annual global event held on June 30
Patomskiy crater, located to the east-southeast
Sikhote-Alin meteorite, 1947 impact
Tunguska Nature Reserve, protected area covering a portion of the site; ongoing scientific study of forest recovery
References
Bibliography
Furneaux, Rupert. The Tungus Event: The Great Siberian Catastrophe of 1908. (New York) Nordon Publications, 1977; (St. Albans) Panther, 1977.
Gallant, Roy A. The Day the Sky Split Apart: Investigating a Cosmic Mystery. (New York) Atheneum Books for Children, 1995.
Krinov, E. L. Giant Meteorites, trans. J. S. Romankiewicz (Part III: The Tunguska Meteorite), (Oxford and New York) Pergamon Press, 1966.
Rubtsov, Vladimir. The Tunguska Mystery. (Dordrecht and New York) Springer, 2009; 2012.
Stoneley, Jack; with Lawton, A. T. Cauldron of Hell: Tunguska. (New York) Simon & Schuster, 1977.
Stoneley, Jack; with Lawton, A. T. Tunguska, Cauldron of Hell, (London) W. H. Allen, 1977.
Verma, Surendra. The Tunguska Fireball: Solving One of the Great Mysteries of the 20th Century. (Cambridge) Icon Books Ltd., 2005.
Verma, Surendra. The Mystery of the Tunguska Fireball. (Cambridge) Icon Books Ltd., 2006; also (Crows Nest, NSW, Australia) Allen & Unwin Pty Ltd., 2006, with the same ISBN. Index has "Lake Cheko" as "Ceko, Lake", without "h".
External links
Tunguska pictures – many Tunguska-related pictures with comments in English
Preliminary results from the 1961 combined Tunguska meteorite expedition
1908 Siberia Explosion. Reconstruction by William K. Hartmann.
"Mystery space blast 'solved'" (BBC News)
Sound of the Tunguska event (reconstruction)
The Tunguska Event 100 Years Later – NASA
1908 in the environment
1908 in the Russian Empire
1908 natural disasters
Explosions in 1908
Explosions in Russia
Fires in Russia
History of Krasnoyarsk Krai
History of Siberia
Holocene Asia
Holocene events
June 1908
Evenkiysky District
Yeniseysk Governorate
Modern Earth impact events
Natural disasters in Siberia
Unsolved problems in physics
Meteorite falls
20th-century astronomical events | Tunguska event | Physics,Astronomy | 5,338 |
25,137,399 | https://en.wikipedia.org/wiki/Alphanumeric%20grid | An alphanumeric grid (also known as atlas grid) is a simple coordinate system on a grid in which each cell is identified by a combination of a letter and a number.
An advantage over numeric coordinates such as easting and northing, which use two numbers instead of a number and a letter to refer to a grid cell, is that there can be no confusion over which coordinate refers to which direction. As an easy example, consider the game Battleship: match the letter along one edge of the grid with the number along the other, then follow the corresponding row and column until they meet in a single cell.
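A minimal sketch of how such a reference can be resolved to a cell, assuming a conventional zero-based row/column indexing (the function name and conventions are illustrative, not from any standard):

    def parse_grid_reference(ref):
        """Convert a reference like 'C7' into zero-based (row, column) indices."""
        letter, number = ref[0].upper(), int(ref[1:])
        row = ord(letter) - ord("A")   # 'A' -> 0, 'B' -> 1, ...
        col = number - 1               # grid numbering typically starts at 1
        return row, col

    print(parse_grid_reference("C7"))  # -> (2, 6)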
Algebraic chess notation uses an alphanumeric grid to refer to the squares of a chessboard.
Some kinds of geocode also use letters and numbers, typically several of each in order to specify many more locations over much larger regions.
References
Coordinate systems | Alphanumeric grid | Mathematics | 171 |
76,617,124 | https://en.wikipedia.org/wiki/Ancient%20Near%20Eastern%20cosmology | Ancient Near Eastern (ANE) cosmology refers to the plurality of cosmological beliefs in the Ancient Near East, covering the period from the 4th millennium BC to the formation of the Macedonian Empire by Alexander the Great in the second half of the 1st millennium BC. These beliefs include the Mesopotamian cosmologies from Babylonia, Sumer, and Akkad; the Levantine or West Semitic cosmologies from Ugarit and ancient Israel and Judah (the biblical cosmology); the Egyptian cosmology from Ancient Egypt; and the Anatolian cosmologies from the Hittites. This system of cosmology went on to have a profound influence on views in early Greek cosmology, later Jewish cosmology, patristic cosmology, and Islamic cosmology (including Quranic cosmology). Until the modern era, variations of ancient near eastern cosmology survived with Hellenistic cosmology as the main competing system.
Summary
Ancient near eastern cosmology can be divided into its cosmography, the physical structure and features of the cosmos; and cosmogony, the creation myths that describe the origins of the cosmos in the texts and traditions of the ancient near eastern world. The cosmos and the gods were also related: cosmic bodies like heaven, earth, and the stars were believed to be gods or were personified as gods, and the sizes of the gods were frequently described as being of cosmic proportions.
Cosmography
Ancient near eastern civilizations held to a fairly uniform conception of cosmography. This cosmography remained remarkably stable in the context of the expansiveness and longevity of the ancient near east, but changes were also to occur. Widely held components of ancient near eastern cosmography included:
a flat earth and a solid heaven (firmament), both of which are disk-shaped
a primordial cosmic ocean. When the firmament is created, it separates the cosmic ocean into two bodies of water:
the heavenly upper waters located on top of the firmament, which act as a source of rain
the lower waters that the earth is above and that the earth rests on; they act as the source of rivers, springs, and other earthly bodies of water
the region above the upper waters, namely the abode of the gods
the netherworld, the furthest region in the direction downwards, below the lower waters
Keyser, categorizing ancient near eastern cosmology as belonging to a larger and more cross-cultural set of cosmologies he describes as a "cradle cosmology," offers a longer list of shared features. Some cosmographical features have been misattributed to Mesopotamian cosmologies, including the idea that ziggurats represented cosmic objects reaching up to heaven or the idea of a dome- or vault-shaped (as opposed to a flat) firmament.
Another controversy concerns whether the ancient near eastern cosmography was purely observational or phenomenological. However, a number of lines of evidence, including descriptions from the cosmological texts themselves, presumptions of this cosmography in non-cosmological texts (such as incantations), anthropological studies of contemporary primitive cosmologies, and the cognitive expectation that humans construct mental models to explain observation, support the conclusion that the ancient near eastern cosmography was not merely phenomenological.
Cosmogony
Ancient near eastern cosmogony also included a number of common features that are present in most if not all creation myths from the ancient near east. Widespread features included:
Creatio ex materia from a primordial state of chaos; that is, the organization of the world from pre-existing, unordered and unformed (hence chaotic) elements, represented by a primordial body of water
the presence of a divine creator
the Chaoskampf motif: a cosmic battle between the protagonist and a primordial sea monster
the separation of undifferentiated elements (to create heaven and earth)
the creation of mankind
Lisman uses the broader category of "Beginnings" to encompass three separate though inter-related categories: the beginning of the cosmos (cosmogony), the beginning of the gods (theogony), and the beginning of humankind (anthropogeny).
There is evidence that Mesopotamian creation myths reached as far as Pre-Islamic Arabia.
Structure of the cosmos
Overview of the whole cosmos
The Mesopotamian cosmos can be imagined along a vertical axis, with parallel planes of existence layered above each other. The uppermost plane of existence was heaven, being the residence of the god of the sky Anu. Immediately below heaven was the atmosphere. The atmosphere extended from the bottom of heaven (or the lowermost firmament) to the ground. This region was inhabited by Enlil, who was also the king of the gods in Sumerian mythology. The cosmic ocean below the ground was the next plane of existence, and this was the domain of the sibling deities Enki and Ninhursag. The lowest plane of existence was the underworld. Other deities inhabited these planes of existence even if they did not reign over them, such as the sun and moon gods. In later Babylonian accounts, the god Marduk alone ascends to the top rank of the pantheon and rules over all domains of the cosmos. The three-tiered cosmos (sky-earth-underworld) is found in Egyptian artwork on coffin lids and burial chambers.
Descriptions
A variety of terms or phrases were used to refer to the cosmos as a whole, acting as rough equivalents to contemporary terms like "cosmos" or "universe". This included phrases like "heaven and earth" or "heaven and underworld". Terms like "all" or "totality" similarly connoted the entire universe. These motifs are found in temple hymns and royal inscriptions located in temples. The temples symbolized cosmic structures that reached heaven at their height and the underworld at their depths/foundations. Surviving evidence does not specify the exact physical bounds of the cosmos or what lies beyond the region described in the texts.
Three heavens and earths
In Mesopotamian cosmology, heaven and earth both had a tripartite structure: a Lower Heaven/Earth, a Middle Heaven/Earth, and an Upper Heaven/Earth. The Upper Earth was where humans existed. Middle Earth, corresponding to the Abzu (primeval underworldly ocean), was the residence of the god Enki. Lower Earth, the Mesopotamian underworld, was where the 600 Anunnaki gods lived, associated with the land of the dead ruled by Nergal. As for the heavens: the highest level was populated by 300 Igigi (great gods), the middle heaven belonged to the Igigi and also contained Marduk's throne, and the lower heaven was where the stars and constellations were inscribed. The extent of the Babylonian universe therefore corresponded to a total of six layers spanning across heaven and Earth. Notions of the plurality of heaven and earth are no later than the 2nd millennium BC and may be elaborations of earlier and simpler cosmographies. One text (KAR 307) describes the cosmos in the following manner, with each of the three floors of heaven being made of a different type of stone:

30 “The Upper Heavens are Luludānītu stone. They belong to Anu. He (i.e. Marduk) settled the 300 Igigū (gods) inside. 31 The Middle Heavens are Saggilmud stone. They belong to the Igīgū (gods). Bēl (i.e. Marduk) sat on the high throne within, 32 the lapis lazuli sanctuary. And made a lamp? of electrum shine inside (it). 33 The Lower Heavens are jasper. They belong to the stars. He drew the constellations of the gods on them. 34 In the … …. of the Upper Earth, he lay down the spirits of mankind. 35 [In the …] of the Middle earth, he settled Ea, his father. 36 […..] . He did not let the rebellion be forgotten. 37 [In the … of the Lowe]r earth, he shut inside 600 Anunnaki. 38 […….] … […. in]side jasper.”

Another text (AO 8196) offers a slightly different arrangement, with the Igigi in the upper heaven instead of the middle heaven, and with Bel placed in the middle heaven. Both agree on the placement of the stars in the lower heaven. Exodus 24:9–10 identifies the floor of heaven as being like sapphire, which may correspond to the blue lapis lazuli floor in KAR 307, chosen potentially for its correspondence to the visible color of the sky. One hypothesis holds that the belief that the firmament is made of stone (or a metal, such as iron in Egyptian texts) arises from the observation that meteorites, which are composed of these substances, fall from the firmament.
Seven heavens and earths
Some texts describe seven heavens and seven earths, but within the Mesopotamian context, this is likely to refer to a totality of the cosmos with some sort of magical or numerological significance, as opposed to a description of the structural number of heavens and Earth. Israelite texts do not mention the notion of seven heavens or earths.
Unity of the cosmos
Mythical bonds, akin to ropes or cables, played the role of cohesively holding the entire world and all its layers of heaven and Earth together. These are sometimes called the "bonds of heaven and earth". They can be referred to with terminology like durmāhu (typically referring to a strong rope made of reeds), markaṣu (referring to a rope or cable, of a boat, for example), or ṣerretu (a lead-rope passed through an animal's nose). A deity can hold these ropes as a symbol of their authority, such as the goddess Ishtar, "who holds the connecting link of all heaven and earth (or netherworld)". This motif extended to descriptions of great cities like Babylon, which was called the "bond of [all] the lands," or Nippur, which was the "bond of heaven and earth," and some temples as well.
Center of the cosmos
The idea of a center to the cosmos played a role in elevating the status of whichever place was chosen as the cosmic center and in reflecting beliefs of the finite and closed nature of the cosmos. Babylon was described as the center of the Babylonian cosmos. In parallel, Jerusalem became "the navel of the earth" (Ezekiel 38:12). The finite nature of the cosmos was also suggested to the ancients by the periodic and regular movements of the heavenly bodies in the visible vicinity of the Earth.
Firmament
The firmament was believed to be a solid boundary above the Earth, separating it from the upper or celestial waters. In the Book of Genesis, it is called the raqia. In ancient Egyptian texts, and in texts across the near east generally, the firmament was described as having special doors or gateways on the eastern and western horizons to allow for the passage of heavenly bodies during their daily journeys. These were known as the windows of heaven or the gates of heaven. Canaanite texts describe Baal as exerting his control over the world by controlling the passage of rainwater through the heavenly windows in the firmament. In Egyptian texts particularly, these gates also served as conduits between the earthly and heavenly realms through which righteous people could ascend. The gateways could be blocked by gates to prevent entry by the deceased as well. As such, funerary texts included prayers enlisting the help of the gods to enable the safe ascent of the dead. Ascent to the celestial realm could also be done by a celestial ladder made by the gods. Multiple stories exist in Mesopotamian texts whereby certain figures ascend to the celestial realm and are given the secrets of the gods.
Four different Egyptian models of the firmament and/or the heavenly realm are known. The first was that it was the shape of a bird: the firmament above represented the underside of a flying falcon, with the sun and moon representing its eyes, and its flapping causing the wind that humans experience. The second was a cow, as per the Book of the Heavenly Cow: the cosmos is a giant celestial cow represented by the goddess Nut or Hathor. The cow consumed the sun in the evening and rebirthed it the next morning. The third was a celestial woman, also represented by Nut. The heavenly bodies would travel across her body from east to west. The midriff of Nut was supported by Shu (the air god), and Geb (the earth god) lay outstretched between the arms and feet of Nut. Nut consumed the celestial bodies in the west and gave birth to them again the following morning. The stars were inscribed across the belly of Nut, and one needed to identify with one of them, or a constellation, in order to join them after death. The fourth model was a flat (or slightly convex) celestial plane which, depending on the text, was thought to be supported in various ways: by pillars, staves, scepters, or mountains at the extreme ends of the Earth. The four supports give rise to the motif of the "four corners of the world".
Earth
Topography of the earth
The ancient near eastern earth was a single-continent disk resting on a body of water sometimes compared to a raft. An aerial view of the cosmography of the earth is pictorially elucidated by the Babylonian Map of the World. Here, the city Babylon is near the Earth's center and it is on the Euphrates river. Other kingdoms and cities surround it. The north is covered by an enormous mountain range, akin to a wall. This mountain range was traversed in some hero myths, such as the Epic of Gilgamesh where Gilgamesh travels past it to an area only accessible by gods and other great heroes. The furthest and most remote parts of the earth were believed to be inhabited by fantastic creatures. In the Babylonian Map, the world continent is surrounded by a bitter salt-water Ocean (called marratu, or "salt-sea") akin to Oceanus described by the poetry of Homer and Hesiod in early Greek cosmology, as well as the statement in the Bilingual Creation of the World by Marduk that Marduk created the first dry land surrounded by a cosmic sea. Egyptian cosmology appears to have also shared this view, as one of the words used for sea, shen-wer, means "great encircler". World-encircling oceans are also found in the Fara tablet VAT 12772 from the 3rd millennium BC and the Myth of Etana.
Four corners of the earth
A common honorific that many kings and rulers ascribed to themselves was that they were the rulers of the four quarters (or corners) of the Earth. For example, Hammurabi (ca. 1810–1750 BC) received the title of "King of Sumer and Akkad, King of the Four Quarters of the World". Monarchs of the Assyrian empire like Ashurbanipal also took on this title. (Although the title implies a square or rectangular shape, in this case it is taken to refer to the four quadrants of a circle which is joined at the world's center.) Likewise, the 'four corners' motif would also appear in some biblical texts, such as Isaiah 11:12.
Cosmic mountain
According to iconographic and literary evidence, the cosmic mountain, known as Mashu in the Epic of Gilgamesh, was thought to be located at or extend to both the westernmost and easternmost points of the earth, where the sun could rise and set respectively. As such, the model may be called a bi-polar model of diurnal solar movement. The gates for the rising and setting of the sun were also located at Mashu. Some accounts have Mashu as a tree growing at the center of the earth with roots descending into the underworld and a peak reaching to heaven. The cosmic mountain is also found in Egyptian cosmology, as Pyramid Text 1040c says that the mountain ranges on the eastern and western sides of the Nile act as the "two supports of the sky." In the Baal Cycle, two cosmic mountains exist at the horizon acting as the point through which the sun rises from and sets into the underworld (Mot). The tradition of the twin cosmic mountains may also lie behind Zechariah 6:1.
Heavenly bodies
Sun
The sun god (represented by the god Utu in Sumerian texts or Shamash in Akkadian texts) rises in the day and passes over the earth. Then, the sun god falls beneath the earth at night and comes to a resting point. This resting point is sometimes localized to a designated structure, such as the chamber within a house in the Old Babylonian Prayer to the Gods of the Night. To complete the cycle, the sun comes out the next morning. Likewise, the moon was thought to rest in the same facility when it was not visible. A similar system was maintained in Egyptian cosmology, where the sun travelled beneath the surface of the earth through the underworld (known among ancient Egyptians as Duat) to rise from the same eastern location each day. These images result from anthropomorphizing the sun and other astral bodies, which were also conceived as gods. For the sun to exit beneath the earth, it had to cross the solid firmament: this was thought possible by the existence of opening ways or corridors in the firmament (variously illustrated as doors, windows, or gates) that could temporarily open and close to allow astral bodies to pass across them. The firmament was conceived as a gateway, with the entry/exit points as the gates; other opening and closing mechanisms were also imagined in the firmament, like bolts, bars, latches, and keys. During the sun's movement beneath the earth, into the netherworld, the sun would cease to flare. This enabled the netherworld to remain dark. But when it rose, it would flare up and again emit light. This model of the course of the sun had an inconsistency that later models evolved to address: how the sun, if it came to a resting point beneath the earth, could also travel beneath the earth the same distance that it was observed to cross above it during the day, so that it would rise periodically in the east. One solution that some texts arrived at was to reject the idea that the sun had a resting point; instead, it remained unceasing in its course.
Overall, the sun god's activities in night according to Sumerian and Akkadian texts proceeds according to a regular and systematic series of events: (1) The western door of heaven opens (2) The sun passes through the door into the interior of heaven (3) Light falls below the western horizon (4) The sun engages in certain activities in the netherworld like judging the dead (5) The sun enters a house, called the White House (6) The sun god eats the evening meal (7) The sun god sleeps in the chamber agrun (8) The sun emerges from the chamber (9) The eastern door opens and the sun passes through as it rises. In many ancient near eastern cultures, the underworld had a prominent place in descriptions of the sun journey, where the sun would carry out various roles including judgement related to the dead.
In legend, many hero journeys followed the daily course of the sun god. These have been attributed to Gilgamesh, Odysseus, the Argonauts, Heracles and, in later periods, Alexander the Great. In the Epic of Gilgamesh, Gilgamesh reaches the cosmic mountain Mashu, which is either two mountains or a single twin-peaked mountain. Mashu acts as the sun-gate, from where the sun sets in its path to and out from the netherworld. In some texts, the mountain is called the mountain of sunrise and sunset. According to the Epic:
The name of the mountain was Mashu. When [he] arrived at Mount Mashu, which daily guards the rising [of the sun,] – their tops [abut] the fabric of the heavens, their bases reach down to Hades – there were scorpion-men guarding its gate, whose terror was dread and glance was death, whose radiance was terrifying, enveloping the uplands – at both sunrise and sunset they guard the sun…
Another text describing the relationship between the sun and the cosmic mountain reads:
O Shamash, when you come forth from the great mountain, When you come forth from the great mountain, the mountain of the deep, When you come forth from the holy hill where destinies are ordained, When you [come forth] from the back of heaven to the junction point of heaven and earth…

A number of additional texts share descriptions like these.
Moon
Mesopotamians believed the moon to be a manifestation of the moon god, known as Nanna in Sumerian texts or Sîn in Akkadian texts, a high god of the pantheon, subject to cultic devotion, and father of the sun god Shamash and the Venus god Inanna. The path of the moon in the night sky and its lunar phases were also of interest. At first, Mesopotamia had no common calendar, but around 2000 BC, the semi-lunar calendar of the Sumerian center of Nippur became increasingly prevalent. Hence, the moon god was responsible for ordering perceivable time. The lunar calendar was divided into twelve months of thirty days each. New months were marked by the appearance of the moon after a phase of invisibility. The Enuma Elish creation myth describes Marduk as arranging the paths of the stars and then spends considerable space on Marduk's ordering of the moon:

12 He made Nannaru (=the moon-god) appear (and) entrusted the night to him. 13 He assigned him as the jewel of the night to determine the days. 14 Month by month without cease, he marked (him) with a crown: 15 “At the beginning of the month, while rising over the land, 16 you shine with horns to reveal six days. 17 On the seventh day, (your) disc shall be halved. 18 On the fifteenth day, in the middle of each month, you shall stand in opposition. 19 As soon as Šamaš (= the sun-god) sees you on the horizon, 20 reach properly your full measure and form yourself back. 21 At the day of disappearance, approach the path of Šamaš. 22 [… 3]0. day you shall stand in conjunction. You shall be equal to Šamaš.”

The ideal course of the moon was thought to form one month every thirty days. However, the precise lunar month is 29.53 days, leading to variations that made the lunar month counted as 29 or 30 days in practice. The mismatch between the predictions and the reality of the course of the moon gave rise to the idea that the moon could act according to its expected course as a good omen or deviate from it as a bad omen. In the 2nd millennium BC, Mesopotamian scholars composed the Enūma Anu Enlil, a collection of at least seventy tablets concerned with omens. The first fourteen (1–14) relate to the appearance of the moon, and the next eight (15–22) deal with lunar eclipses.
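The mismatch described here is easy to quantify: twelve ideal 30-day months give 360 days, while twelve mean synodic months give roughly 354.4, so months tracked against the real moon must alternate between 29 and 30 days. A minimal sketch:

    SYNODIC_MONTH = 29.53              # mean length of a lunation in days

    ideal_year = 12 * 30               # 360 days under the ideal scheme
    lunar_year = 12 * SYNODIC_MONTH    # ~354.4 days of actual lunations
    print(ideal_year, round(lunar_year, 1))   # 360 354.4
    # Alternating 29- and 30-day months tracks the real moon closely:
    print(6 * 29 + 6 * 30)                    # 354

The roughly half-day shortfall per month relative to the ideal scheme is what made observed months run to 29 or 30 days, and deviations from the expected course could then be read as omens.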
The moon was also assigned other functions, such as providing illumination during the night, and already in this period it was known to influence the tides. During the day, when the moon was not visible, it was thought to descend beneath the flat disk of the earth and, like the sun, undergo a voyage through the underworld. The cosmic voyage and motion of the moon also allowed it to exert influence over the world; this belief naturally allowed for the practice of divination to arise.
Stars and planets
Mesopotamian cosmology differed from the practice of astronomy in terminology: astronomers did not use the word "firmament" but instead "sky" to describe the domain in which heavenly affairs were visible. The stars were located on the firmament. The earliest texts attribute to Anu, Enlil, and Enki (Ea) the ordering of perceivable time by creating and ordering the courses of the stars. Later, according to the Enuma Elish, the stars were arranged by Marduk into constellations representing the images of the gods. The year was fixed by organizing it into twelve months and by assigning (the rising of) three stars to each of the twelve months. The moon and zenith were also created. Other phenomena introduced by Marduk included the lunar phases and lunar scheme, the precise paths that the stars would take as they rose and set, the stations of the planets, and more. Another account of the creation of the heavenly bodies is offered in the Babyloniaca of Berossus, where Bel (Marduk) creates stars, sun, moon, and the five (known) planets; the planets here do not help guide the calendar (a lack of concern for the planets also shared in the Book of the Courses of the Heavenly Luminaries, a subsection of 1 Enoch). Concern for the establishment of the calendar by the creation of heavenly bodies as visible signs is shared in at least seven other Mesopotamian texts. A Sumerian inscription of Kudur-Mabuk, for example, reads "The reliable god, who interchanges day and night, who establishes the month, and keeps the year intact." Another example is to be found in the Exaltation of Inanna.
The word "star" (mul in Sumerian; kakkabu in Akkadian) was inclusive of all celestial bodies: stars, constellations, and planets. A more specific term for planets existed, however (udu.idim in Sumerian; bibbu in Akkadian, literally "wild sheep"), to distinguish them from other stars (of which they were a subcategory): unlike the stars, thought to be fixed in their locations, the planets were observed to move. By the 3rd millennium BC, the planet Venus was identified as the astral form of the goddess Inanna (or Ishtar), and motifs such as the morning and evening star were applied to her. Jupiter became Marduk (hence the name "Marduk Star", also called Nibiru), Mercury was the "jumping one" (in reference to its comparatively fast motion and low visibility), associated with the gods Ninurta and Nabu, and Mars was the god of pestilence Nergal and thought to portend evil and death. Saturn was also sinister. The most obvious characteristic of the stars was their luminosity, and their study for the purposes of divination, calendrical calculations, and predictions of the appearances of planets led to the discovery of their periodic motion. From 600 BC onwards, the relative periodicity between them began to be studied.
Upper waters
Above the firmament was a large, cosmic body of water which may be referred to as the cosmic ocean or celestial waters. In the Tablet of Shamash, the throne of the sun god Shamash is depicted as resting above the cosmic ocean. The waters are above the solid firmament that covers the sky. In the Enuma Elish, the upper waters represented the waters of Tiamat, contained by Tiamat's stretched out skin. Canaanite mythology in the Baal Cycle describes the supreme god Baal as enthroned above the freshwater ocean. Egyptian texts depict the sun god sailing across these upper waters. Some also convey that this body of water is the heavenly equivalent of the Nile River.
Lower waters
Both Babylonian and Israelite texts describe one of the divisions of the cosmos as the underworldly ocean. In Babylonian texts, this coincided with the region/god Abzu. In Sumerian mythology, this realm was created by Enki. It was also where Enki lived and ruled over. Due to the connection with Enki, the lower waters were associated with wisdom and incantational secret knowledge. In Egyptian mythology, the personification of this subterranean body of water was instead Nu. The notion of a cosmic body of water below the Earth was inferred from the realization that much water used for irrigation came from under the ground, from springs, and that springs were not limited to any one part of the world. Therefore, a cosmic body of water acting as a common source for the water coming out of all these springs was conceived.
Underworld
The Underworld/Netherworld (kur or erṣetu in Sumerian) was the lowest region in the direction downwards, below even Abzu (the primeval ocean/lower waters). It was geographically parallel with the plane of human existence, but was so low that both demons and gods could not descend to it. One of its names was "Earth of No Return". It was, however, inhabited by beings such as ghosts, demons, and local gods. The land was depicted as dark and distant: because it was the opposite of the human world, it did not have light, water, fields, and so forth. According to KAR 307, line 37, Bel cast 600 Anunnaki into the underworld. They were locked away there, unable to escape, analogous to the enemies of Zeus who were confined to the underworld (Tartarus) after their rebellion during the Titanomachy. During and after the Kassite period, the Anunnaki were largely depicted as underworld deities; a hymn to Nergal praises him as the "Controller of the underworld, Supervisor of the 600".
In Canaanite religion, the underworld was personified as the god Mot. In Egyptian mythology, the underworld was known as Duat and was ruled by Osiris, the god of the afterlife. It was also the region where the sun (manifested by the god Ra) made its journey from west (where it sets) to the east (from where it would rise again the next morning).
Origins and development of the cosmos
Origins of the cosmos
The world was thought to be created ex materia, that is, out of pre-existing, unformed, eternal matter. This is in contrast to the later notion of creation ex nihilo, which asserts that all the matter of the universe was created out of nothing. The primeval substance had always existed, was unformed and divine, and was envisioned as an immense, cosmic, chaotic mass of water or ocean (a representation that still existed in the time of Ovid). In the Mesopotamian theogonic process, the gods would ultimately be generated from this primeval matter, although a distinct process is found in the Hebrew Bible, where God is initially distinct from the primeval matter. For the cosmos and the gods to ultimately emerge from this formless cosmic ocean, the idea emerged that it had to be separated into distinct parts and thereby be formed or organized. This event can be imagined as the beginning of time. Furthermore, the process of the creation of the cosmos is coincident with, or equivalent to, the beginning of the creation of new gods. In the 3rd millennium BC, the goddess Nammu was the one and singular representation of the original cosmic ocean in Mesopotamian cosmology. From the 2nd millennium BC onwards, this cosmic ocean came to be represented by two gods, Tiamat and Abzu, who would be separated from each other to mark the cosmic beginning. The Ugaritic god Yam from the Baal Cycle may also represent the primeval ocean.
Sumerian and Akkadian sources understand the matter of the primordial universe out of which the cosmos emerges in different ways. Sumerian thought distinguished between the inanimate matter that the cosmos was made of and the animate and living matter that permeated the gods and went on to be transmitted to humans. In Akkadian sources, the cosmos is originally alive and animate, but the deaths of Abzu (male deity of the fresh waters) and Tiamat (female sea goddess) give rise to inanimate matter, and all inanimate matter is derived from the dead remains of these deities.
Origins of the gods
The core Mesopotamian myth to explain the gods' origins begins with the primeval ocean, personified by Nammu, containing Father Sky and Mother Earth within her. In the god-list TCL XV 10, Nammu is called 'the mother, who gave birth to heaven and earth'. The conception of Nammu as mother of Sky-Earth is first attested in the Ur III period (early 2nd millennium BC), though it may go back to an earlier Akkadian era. Earlier in the 3rd millennium BC, Sky and Earth were the starting point with little apparent question about their own origins.
The representation of Sky as male and Earth as female may come from the analogy between the generative power of the male sperm and the rain that comes from the sky, which respectively fertilize the female to give rise to newborn life or the Earth to give rise to vegetation. In the desert-dweller milieu, life depended on pastureland. Sky and Earth are in a union. Because they are the opposite sex, they inevitably reproduce and their offspring are successive pairs or generations of gods known as the Enki-Ninki deities. The name comes from Enki and Ninki ("Lord and Lady Earth") being the first pair in all versions of the story. The only other consistent feature is that Enlil and Ninlil are the last pair. In each pair, one member is male (indicated by the En- prefix) and the other is female (indicated by the Nin- prefix). The birth of Enlil results in the separation of heaven and earth as well as the division of the primordial ocean into the upper and lower waters. Sky, now Anu, can mate with other deities after being separated from Earth: he mates with his mother Nammu to give birth to Enki (different from the earlier Enki) who takes dominion over the lower waters. The siblings Enlil and Ninlil mate to give birth to Nanna (also known as Sin), the moon god, and Ninurta, the warrior god. Nanna fathers Utu (known as Shamash in Akkadian texts), the sun god, and Inanna (Venus). By this point, the main features of the cosmos had been created/born.
A variation of this myth existed in Egyptian cosmology. Here, the primordial ocean is represented by the god Nu. Unlike in Mesopotamian cosmology, the creation act neither takes its materials from Nu nor eliminates Nu.
Separation of heaven and earth
3rd millennium BC texts speak of the cosmic marriage or union of Heaven and Earth. Only one text towards the end of this era, the Song of the hoe, mentions their separation. By contrast, 2nd millennium texts shift their focus entirely to the separation. The tradition spread into Sumerian, Akkadian, Phoenician, Egyptian, and early Greek mythology. The cause of the separation involves either the agency of Enlil or takes place as a spontaneous act. One recovered Hittite text states that there was a time when they "severed the heaven from the earth with a cleaver", and an Egyptian text refers to "when the sky was separated from the earth" (Pyramid Text 1208c). OIP 99 113 ii and 136 iii say Enlil separated Earth from Sky and separated Sky from Earth. Enki and Ninmah 1–2 also states that Sky and Earth were separated in the beginning. The introduction of Gilgamesh, Enkidu, and the Netherworld says that heaven is carried off from the earth by the sky god Anu to become the possession of the wind god Enlil. Several other sources also present this idea.
There are two strands of Mesopotamian creation myths regarding the original separation of the heavens and earth. The first and older one is attested in Sumerian-language texts from the 3rd millennium BC and the first half of the 2nd millennium BC. In these sources, the heavens and Earth are separated from an original solid mass. In the younger tradition from Akkadian texts, such as the Enuma Elish, the separation occurs from an original water mass. The former usually has the leading gods of the Sumerian pantheon, the King of Heaven Anu and the King of Earth Enlil, separating the mass over a time-frame of "long days and nights", similar to the total timeframe of the Genesis creation narrative (six days and nights). The Sumerian texts do not mention the creation of the cosmic waters, but it may be surmised that water was one of the primordial elements.
Stretching out the heavens
The idiom of the heavens and earth being stretched out plays both a cultic and cosmic role in the Hebrew Bible, where it appears repeatedly in the Book of Isaiah (40:22; 42:5; 44:24; 45:12; 48:13; 51:13, 16), with related expressions in the Book of Job (26:7) and the Psalms (104:2). One example reads "The one who stretched out the heavens like a curtain / And who spread them out like a tent to dwell in" (Is 40:22). The idiom is used in these texts to identify the creative element of Yahweh's activities, and the expansion of the heavens signifies its vastness, acting as Yahweh's celestial shrine. In the Psalmic tradition, the "stretching" of the heavens is analogous to the stretching out of a tent. The Hebrew verb for the "stretching" of the heavens is also the regular verb for "pitching" a tent. The heavens, in other words, may be depicted as a cosmic tent (a motif found in many ancient cultures). This finds architectural analogy in descriptions of the tabernacle, which is itself a heavenly archetype, over which a tent is supposed to have been spread. The phrase is frequently followed by an expression of God sitting enthroned above and ruling the world, paralleling descriptions of God being seated in the Holy of Holies of the Tabernacle, where he is stated to exercise rule over Israel. Biblical references to stretching the heavens typically occur in conjunction with statements that God made or laid the foundations of the earth.
Similar expressions may be found elsewhere in the ancient Near East. A text from the 2nd millennium BC, the Ludlul Bēl Nēmeqi, says "Wherever the earth is laid, and the heavens are stretched out", though the text does not identify the creator of the cosmos. The Enuma Elish also describes the phenomenon, in IV.137–140:

137 He split her into two like a dried fish:
138 One half of her he set up and stretched out as the heavens.
139 He stretched the skin and appointed a watch.
140 With the instruction not to let her waters escape.

In this text, Marduk takes the body of Tiamat, whom he has killed, and stretches out her skin to create the firmamental heavens, which in turn come to play the role of preventing the cosmic waters above the firmament from escaping and being unleashed onto the earth. Whereas the Masoretic Text of the Hebrew Bible states that Yahweh stretched out heaven like a curtain in Psalm 104:2, the equivalent passage in the Septuagint instead uses the analogy of stretching out a "skin", which could represent a relic of Babylonian cosmology from the Enuma Elish. Nevertheless, the Hebrew Bible never identifies the material out of which the firmament was stretched. Numerous theories about what the firmament was made of sprang up across ancient cultures.
Origins of humanity
Many stories emerged to explain the creation of humanity and the birth of civilization. Earlier Sumerian-language texts from the 3rd and 2nd millennia BC can be divided into two traditions: those from the city of Nippur and those from Eridu. The Nippur tradition asserts that Heaven (An) and Earth (Ki) were coupled in a cosmic marriage. After they are separated by Enlil, Ki receives semen from An and gives rise to the gods, animals, and man. The Eridu tradition says that Enki, the offspring of An and Namma (in this tradition, the freshwater goddess), is the one who creates everything. Periodic relations between Enki and Ninhursaga (in this tradition, the personification of Earth) give rise to vegetation. With the help of Namma, Enki creates man from clay. A famous work of the Eridu tradition is the Eridu Genesis. A minority tradition in Sumerian texts, distinct from the Nippur and Eridu traditions, is known from KAR 4, where the blood of a slaughtered deity is used to create humanity for the purpose of making them build temples for the gods.
The later Akkadian-language tradition can be divided into various minor cosmogonies; the cosmogonies of significant texts like the Enuma Elish and the Epic of Atrahasis; and, finally, the Dynasty of Dunnum, placed in its own category. In the Atrahasis Epic, the Anunnaki gods force the Igigi gods to do their labor. However, the Igigi become fed up with this work and rebel. To solve the problem, Enlil and Mami create humanity by mixing the blood of gods with clay, and humans are assigned the gods' work in the stead of the Igigi. In the Enuma Elish, divine blood alone is used to make man.
Main texts
Overview and limitations
The Hebrew Bible, especially the Genesis creation narrative, underlies what is known about biblical cosmology in ancient Israel and Judah. From Mesopotamia, cosmological evidence has fragmentarily survived in cuneiform literature, especially in the Sumerian and Akkadian languages, like the Enuma Elish. Cosmogonic information has been sourced from Enki-Ninki god lists. Cosmogonic prologues preface texts falling in the genre of the Sumerian and Akkadian disputation poems, as well as individual works like the Song of the hoe, Gilgamesh, Enkidu, and the Netherworld, and Lugalbanda I. Evidence is also available in Ugaritic (Ritual Theogony of the Gracious Gods) and Hittite (Song of Emergence) sources. Egyptian papyri and inscriptions, like the Memphite Theology, and later works such as the Babyloniaca of Berossus, offer additional evidence. A less abundant source is pictorial/iconographic representation, especially the Babylonian Map of the World.
A limitation of these types of texts (papyri, cuneiform, etc.) is that the majority are administrative and economic in nature, saying little about cosmology. Detailed descriptions are unknown before the first millennium BC. As such, reconstructions from that time depend on gleaning information from surviving creation myths and etiologies.
Enuma Elish
The Enuma Elish is the most famous Mesopotamian creation story and was widely known among learned circles across Mesopotamia, influencing both art and ritual. It is also the only complete cosmogony, whereas others must be reconstructed from disparate sources. The story was, in many ways, an original work, and as such is not a general representative of ancient near eastern or even Babylonian cosmology as a whole; its survival as the most complete creation account appears to be a product of its having been composed in the milieu of Babylonian literature that happened to survive and be discovered in the present day. On the other hand, recent evidence suggests that after its composition, it played an important role in Babylonian scribal education. The story is preserved foremost in seven clay tablets discovered in the Library of Ashurbanipal in Nineveh. The creation myth describes how the god Marduk created the world from the body of the sea monster Tiamat after defeating her in battle, after which Marduk ascends to the top of the heavenly pantheon. The Enuma Elish is one of a broader set of near eastern traditions describing the cosmic battle between the storm and sea gods, but only Israelite cosmogonies share with it the act of creation that follows the storm god's victory.
The following is a synopsis of the account. The primordial universe is alive and animate, made of Abzu, commonly identified as a male deity of the fresh waters, and Tiamat, the female sea goddess of the salt waters. The waters mingle to create the next generations of deities. However, the younger gods are noisy, and this noise eventually incenses Abzu so much that he tries to kill them. In trying to do so, however, he is killed by Ea (the Akkadian equivalent of the Sumerian Enki). This eventually leads to a battle between Tiamat and the son of Ea, Marduk. Marduk kills Tiamat and fashions the cosmos, including the heavens and Earth, from Tiamat's corpse. Tiamat's breasts are used to make the mountains, and her eyes are used to open the sources of the Tigris and Euphrates rivers. Parts of her watery body are used to create parts of the world, including its wind, rain, mist, and rivers. Marduk forms the heavenly bodies, including the sun, moon, and stars, to produce the periodic astral activity that is the basis for the calendar, before finally setting up the cosmic capital at Babylon. Marduk attains universal kingship and the Tablet of Destinies. Tiamat's helper Kingu is also slain, and his life force is used to animate the first human beings.
The Enuma Elish is in continuity with other texts like the Myth of Anzû, the Labbu Myth, and KAR 6. In both the Enuma Elish and the Myth of Anzu, a dragon (Anzu or Tiamat) steals the Tablet of Destinies from Enlil, the chief god, and in response the chief god looks for someone to slay the dragon. Then, in both stories, a champion among the gods is chosen to fight the dragon (Ninurta or Marduk) after two or three others before them reject the offer to fight. The champion wins, after which he is acclaimed and given many names. The Enuma Elish may have also drawn from the myth of Ninurta and the dragon Kur. The dragon had been responsible for holding back the primordial waters; upon its being killed, the waters begin to rise, a problem solved by Ninurta heaping stones upon them until the waters are held back. One of the most significant differences between the Enuma Elish and earlier creation myths is its exaltation of Marduk as the highest god. In prior myths, Ea was the chief god and creator of mankind.
Genesis creation narrative
The Genesis creation narrative, composed perhaps in the 7th or 6th century BC, spans Gen 1:1–2:3 and covers a one-week (seven-day) period. In each of the first three days there is an act of division: day one divides the darkness from light, day two the "waters above" from the "waters below", and day three the sea from the land. In each of the next three days these divisions are populated: day four populates the darkness and light with "greater light" (Sun), "lesser light" (Moon) and stars; day five populates seas and skies with fish and fowl; and finally land-based creatures and mankind populate the land. According to Victor Hamilton, most scholars agree that the choice of "greater light" and "lesser light", rather than the more explicit "Sun" and "Moon", is anti-mythological rhetoric intended to contradict widespread contemporary beliefs that the Sun and the Moon were deities themselves.
In 1895, Hermann Gunkel related this narrative to the Enuma Elish via an etymological relationship between Tiamat and təhôm ("the deep" or "the abyss") and a sharing of the Chaoskampf motif. Today, another view rejects these connections and groups the Genesis creation narrative with other West Semitic cosmologies like those of Ugarit.
Other biblical creation narratives
Other prominent biblical cosmogonies include Psalm 74:12–17; Psalm 89:6–13; and Job 26:7–13, with a variety of additional briefer passages expounding on subsections of these lengthier passages (like Isaiah 51:9–10). Like traditions from Babylon, Egypt, Anatolia, and Canaan and the Levant, these cosmogonies describe a cosmic battle (on the part of Yahweh in the biblical versions) with a sea god (named Leviathan or Rahab), but among these traditions only the Babylonian versions, like the Enuma Elish, share the act of creation that follows the victory over the sea god. The seemingly well-known cosmogony proceeded as follows: Yahweh fights and subdues the sea god while portrayed as holding a weapon and fighting with meteorological forces; the Sea that previously covered the earth is forced to make way for dry land, and parts of it are confined behind the seashore, in the clouds, and in storehouses below the earth; Mount Zaphon is established and a temple for Yahweh is erected; finally, Yahweh is enthroned above all the gods.
An alternative cosmogony appears in the doxologies of Amos (4:13; 5:8; 9:5–6). Instead of the earth being already covered by a primal sea, the earth is originally in a dry state, and only later is the sea stretched over it. No cosmic battle takes place. The winds and mountains, which in other accounts exist primordially, are here both created. Like some passages in Deutero-Isaiah, these doxologies appear to present a view of creation ex nihilo.
These cosmogonies are relatively mythologized compared to the Genesis cosmogony. In addition, the Genesis cosmogony differs by describing the separation of the upper and lower waters by the creation of a firmament, whereas here, they are typically assembled into clouds. The closest cosmogony to Genesis is the one in Psalm 104.
Babyloniaca of Berossus
The first book of the Babyloniaca of the Babylonian priest Berossus, composed in the third century BC, offers a variant (or, perhaps, an interpretation) of the cosmogony of the Enuma Elish. This work is not extant but survives in later quotations and abridgements. Berossus' account begins with a primeval ocean. Unlike in the Enuma Elish, where sea monsters are generated for combat with other gods, in Berossus' account they emerge by spontaneous generation, and they are described in a different manner from the 11 monsters of the Enuma Elish, expanding beyond the purely mythical creatures of that account in a potential case of influence from Greek zoology. The fragments of Berossus preserved by Syncellus and in the Armenian translation, describing his cosmogony, are as follows:

Syncellus: There was a time, he says, when everything was [darkness and] water and that in it fabulous beings with peculiar forms came to life. For men with two wings were born and some with four wings and two faces, having one body and two heads, male and female, and double genitalia, male and female ... [a list of monstrous beings follows]. Over all these a woman ruled named Omorka. This means in Chaldaean Thalatth, in Greek it is translated as ‘Sea’ (Thalassa) ... When everything was arranged in this way, Belos rose up and split the woman in two. Of one half of her he made earth, of the other half sky; and he destroyed all creatures in her ... For when everything was moist, and creatures had come into being in it, this god took off his own head and the other gods mixed the blood that flowed out with earth and formed men. For this reason they are intelligent and share in divine wisdom.

Armenian: Belos, whom they translate as Zeus, cut the darkness in half and separated earth and sky from each other and ordered the universe. The creatures could not endure the power of the light and were destroyed. When Belos saw the land empty and barren, he ordered one of the gods to cut off his own head and to mix the blood that flowed out with earth and to form men and wild animals that were capable of enduring the air. Belos also completed the stars and the sun and the moon and the five planets. Alexander Polyhistor says that Berossus asserts these things in his first book.
Syncellus: They say that in the beginning all was water, which was called Sea (Thalassa). Bel made this one by assigning a place to each, and he surrounded Babylon with a wall.
Armenian: All, he said, was from the start water, which was called Sea. And Bel placed limits on them (the waters) and assigned a place to each, allocated their lands, and fortified Babylon with an enclosing wall.

The conclusion of the account states that Belus then created the stars, sun, moon, and five planets. The account of Berossus agrees largely with the Enuma Elish, such as in its reference to the splitting of the woman whose halves are used to create heaven and earth, but it also contains a number of differences, such as the statement about allegorical exegesis, the self-decapitation of Belus in order to create humans, and the statement that it is the divine blood which has made humans intelligent. Some debate has ensued about which of these elements may or may not go back to the original account of Berossus. Some of the information Berossus used for his account of the creation myth may have come from the Enuma Elish and the Babylonian Dynastic Chronicle.
Other
Additional minor texts also present varying cosmogonical details. The Bilingual Creation of the World by Marduk describes the construction of the Earth as a raft built by Marduk over the cosmic waters. An Akkadian text called The Worm describes a series of creation events: first Heaven creates Earth, Earth creates the Rivers, and eventually the worm is created at the end of the series; it goes to live in the root of the tooth, which is removed during dental surgery.
Influence
Survival
Copies from the Sippar Library indicate the Enuma Elish was still being copied in Seleucid times. One Hellenistic-era Babylonian priest, Berossus, wrote a Greek text about Mesopotamian traditions called the Babyloniaca (History of Babylon). The text survives mainly in fragments, especially quotations by Eusebius in the fourth century. The first book contains an account of Babylonian cosmology and, though concise, contains a number of echoes of the Enuma Elish. The creation account of Berossus is attributed to the divine messenger Oannes in the period after the global flood and is derivative of the Enuma Elish, though with significant variants. Babylonian cosmology also received treatment in the lost works of Alexander Polyhistor and Abydenus. The last known evidence for reception of the Enuma Elish is in the writings of Damascius (462–538), who had a well-informed source. As such, some learned circles in late antiquity continued to know the Enuma Elish. Echoes of Mesopotamian cosmology continue into the 11th century.
Early Greek cosmology
Early Greek cosmology was closely related to the broader domain of ancient Near Eastern cosmology, as reflected in 8th-century BC works like the Theogony of Hesiod and the works of Homer, prior to the emergence of an independent and systematic Hellenistic cosmology, represented by figures such as Aristotle and the astronomer Ptolemy, which began with the Ionian School of philosophy at the city of Miletus in the 6th to 4th centuries BC. In early Greek cosmology, the Earth was conceived of as flat, encircled by a cosmic ocean known as Oceanus, with heaven as a solid firmament held above the Earth by pillars. Many believe that a Hurro-Hittite work from the 13th century BC, the Song of Emergence (CTH 344), was directly used by Hesiod, on the basis of extensive similarities between their accounts.
The notion of heaven and earth originally being in unity followed by their separation continues to be attested in later Greek cosmological texts, such as in the descriptions of Orphic cosmology according to the Wise Melanippe of Euripides in the 5th century BC and the Argonautica by Apollonius of Rhodes in the 3rd century BC, as well as in other and still later Greek accounts, such as the writings of Diodorus Siculus in the 1st century BC.
Zoroastrian cosmology
The earliest Zoroastrian sources describe a tripartite sky, with an upper heaven where the sun exists, a middle heaven where the moon exists, and a lower heaven where the stars exist and are fixed. Significant work has been done comparing this cosmography to Mesopotamian, Greek, and Indian parallels. In light of evidence that has emerged in recent decades, the present view is that this idea entered Zoroastrian thought through Mesopotamian channels of influence. Another influence is seen in the name one of the planets took on in Middle Persian literature, Kēwān (Saturn), which was derived from the Akkadian language.
Jewish cosmology
Mesopotamian cosmology, especially as it manifested in the biblical Genesis creation narrative, exerted continued substantial influence on Jewish cosmology, especially as it is described in the rabbinic literature. Not all influence appears to have been mediated through the Bible. The dome-shaped firmament was described in Hebrew as a kippah, which has been related to its Akkadian cognate kippat šamê, though the latter only refers to flat objects. The Jewish belief in seven heavens, as it is absent from the Hebrew Bible, has often been interpreted as being taken from early interactions with Mesopotamian cosmologies.
Christian cosmology
Christian texts were familiar with ancient near eastern cosmology insofar as it had shaped the Genesis creation narrative. A genre of literature emerged among Jews and Christians dedicated to the composition of texts commenting precisely on this narrative to understand the cosmos and its origins: these works are called Hexaemeron. The first extant example is the De opificio mundi ("On the Creation of the World") by the first-century Jewish philosopher Philo of Alexandria. Philo preferred an allegorical form of exegesis, in line with that of the School of Alexandria, and so was partial to a Hellenistic cosmology as opposed to an ancient near eastern one. The Hexaemeral genre was revived and popularized by Basil of Caesarea, who composed his Hexaemeron in 378; it subsequently inspired numerous works, including among Basil's own contemporaries. Basil was much more literal in his interpretation than Philo, closer instead to the exegesis of the School of Antioch. Christian authors would heavily dispute the correct degree of literal or allegorical exegesis in later writings. Among Syrian authors, Jacob of Serugh was the first to produce his own Hexaemeron in the early sixth century, and he was followed by Jacob of Edessa's Hexaemeron in the first years of the eighth century. The most literal approach was that of the Christian Topography by Cosmas Indicopleustes, which presented a cosmography very similar to the traditional Mesopotamian one; in turn, John Philoponus wrote a harsh rebuttal to Cosmas in his own De opificio mundi. Syrian Christian texts also shared cosmographic features like the cosmic ocean surrounding the earth.
Cosmographies were also described in works outside the Hexaemeral genre. For example, in the genre of novels, the Alexander Romance portrays a mythologized picture of the journeys and conquests of Alexander the Great, ultimately inspired by the Epic of Gilgamesh. The influence is evident in the text's cosmography, as Alexander reaches an outer ocean circumscribing the Earth which cannot be passed. Both in the Alexander Romance and in later texts like the Syriac Alexander Legend (Neshana), Alexander journeys to the ends of the Earth, which is surrounded by an ocean. Unlike in the story of Gilgamesh, however, this ocean is an impassable boundary that marks the extent to which Alexander can go. The Neshana also aligns with a Mesopotamian cosmography in its description of the path of the sun: as the sun sets in the west, it passes through a gateway in the firmament, cycles to the other side of the earth, and rises in the east in its passage through another celestial gateway. Alexander, like Gilgamesh, follows the path of the sun during his journey. These elements of Alexander's journey are also described as part of the journey of Dhu al-Qarnayn in the Quran. Gilgamesh's journey takes him to a great cosmic mountain, Mashu; likewise, Alexander reaches a cosmic mountain known as Musas. The cosmography depicted in this text greatly resembles that outlined by the Babylonian Map of the World.
Quranic cosmology
The Quran retains the primary elements of the ancient near eastern cosmography, such as the division of the cosmos into the heavens and the Earth, a solid firmament, upper waters, a flat Earth, and seven heavens. As with rabbinic cosmology, however, these elements were not directly transmitted from ancient near eastern civilization. Instead, work in the field of Quranic studies has identified the primary historical context for the reception of these ideas as the Christian and Jewish cosmologies of late antiquity. This conception of the cosmos was carried on into the traditionalist cosmologies held in the caliphate, though with a few nuances that appear to have emerged.
See also
Ancient Mesopotamian religion
Hexaemeron
Pre-Islamic Arabian inscriptions
King of the Universe
Mandaean cosmology
Panbabylonism
Quranic studies
References
Notes
Citations
Sources
Further reading
Assmann, Jan. The Search for God in Ancient Egypt, Cornell University Press, 2001, pp. 53–82.
Clifford, Richard. Creation Accounts in the Ancient Near East and in the Bible, Wipf and Stock Publishers, 1994.
Dalley, Stephanie. Myths from Mesopotamia: Creation, the Flood, Gilgamesh, and Others, Oxford University Press, 1998.
George, A. Babylonian Topographical Texts, Peeters, 1992.
Hetherington, Norriss S (ed.). Encyclopedia of Cosmology, Routledge, 2014.
Hunger, Hermann, and John Steele, The Babylonian Astronomical Compendium MUL.APIN, Routledge, 2018.
Jacobsen, Thorkild. "The Cosmos as a State" in The Intellectual Adventure of Ancient Man: An Essay of Speculative Thought in the Ancient Near East, University of Chicago Press, 1977.
Keel, Othmar & Silvia Schroer, Creation: Biblical Theologies in the Context of the Ancient Near East, Eisenbrauns, 2015.
Sjöberg, Å. "In the beginning" in Riches Hidden in Secret Places: Ancient Near Eastern Studies in Memory of Thorkild Jacobsen, Eisenbrauns, 2002, pp. 229–247.
Wiggermann, F. "Mythological foundations of nature" in Natural Phenomena: Their Meaning, Depiction and Description in the Ancient Near East, 1992.
Zago, Silvia. A Journey through the Beyond: The Development of the Concept of Duat and Related, Lockwood Press, 2022.
External links
Mesopotamian Creation Myths (Metropolitan Museum of Art)
Ancient Near East
Ancient astronomy
Biblical cosmology
Obsolete scientific theories
Religious cosmologies | Ancient Near Eastern cosmology | Astronomy | 13,251 |
7,626,796 | https://en.wikipedia.org/wiki/Pseudoelasticity | In materials science, pseudoelasticity, sometimes called superelasticity, is an elastic (reversible) response to an applied stress, caused by a phase transformation between the austenitic and martensitic phases of a crystal. It is exhibited in shape-memory alloys.
Overview
Pseudoelasticity arises from the reversible motion of domain boundaries during the phase transformation, rather than just bond stretching or the introduction of defects in the crystal lattice (thus it is not true superelasticity but rather pseudoelasticity). Even if the domain boundaries do become pinned, they may be reversed through heating. Thus, a pseudoelastic material may return to its previous shape (hence, shape memory) after the removal of even relatively high applied strains. One special case of pseudoelasticity is called the Bain correspondence, which involves the austenite/martensite phase transformation between a face-centered cubic (FCC) crystal lattice and a body-centered tetragonal (BCT) crystal structure.
Superelastic alloys belong to the larger family of shape-memory alloys. When mechanically loaded, a superelastic alloy deforms reversibly to very high strains (up to 10%) by the creation of a stress-induced phase. When the load is removed, the new phase becomes unstable and the material regains its original shape. Unlike shape-memory alloys, no change in temperature is needed for the alloy to recover its initial shape.
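The loading and unloading behavior described above traces a flag-shaped hysteresis loop in stress-strain space, which can be sketched with a simple piecewise model. The modulus, plateau stresses, and transformation strain below are round illustrative numbers, not measured values for Nitinol or any particular alloy:

```python
import numpy as np

# Idealized flag-shaped superelastic stress-strain loop. All parameter
# values are illustrative round numbers, not measured data for any alloy.
E = 40e3            # elastic modulus, MPa (40 GPa)
SIG_LOAD = 400.0    # forward-transformation plateau stress, MPa
SIG_UNLOAD = 200.0  # reverse-transformation plateau stress, MPa
EPS_TR = 0.06       # strain accommodated by the stress-induced phase

def stress_loading(eps):
    """Stress on the loading branch at total strain eps."""
    eps_onset = SIG_LOAD / E                     # transformation onset strain
    if eps <= eps_onset:
        return E * eps                           # elastic parent phase (austenite)
    if eps <= eps_onset + EPS_TR:
        return SIG_LOAD                          # plateau: stress-induced transformation
    return SIG_LOAD + E * (eps - eps_onset - EPS_TR)  # elastic stress-induced phase

def stress_unloading(eps):
    """Stress on the unloading branch at total strain eps."""
    eps_onset = SIG_UNLOAD / E
    if eps >= eps_onset + EPS_TR:
        return SIG_UNLOAD + E * (eps - eps_onset - EPS_TR)
    if eps >= eps_onset:
        return SIG_UNLOAD                        # plateau: reverse transformation
    return E * eps                               # fully recovered parent phase

for eps in np.linspace(0.0, 0.08, 9):
    print(f"strain {eps:.3f}: load {stress_loading(eps):6.1f} MPa, "
          f"unload {stress_unloading(eps):6.1f} MPa")
```

Because the unloading plateau sits below the loading plateau, the loop encloses area: the strain is fully recovered, but energy proportional to that area is dissipated on each cycle.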
Superelastic devices take advantage of their large, reversible deformation and include antennas, eyeglass frames, and biomedical stents.
Nickel titanium (Nitinol) is an example of an alloy exhibiting superelasticity.
Size effects
Recently, there has been interest in discovering materials exhibiting superelasticity at the nanoscale for MEMS (microelectromechanical systems) applications. The ability to control the martensitic phase transformation has already been reported. However, superelastic behavior has been observed to exhibit size effects at the nanoscale.
Qualitatively speaking, superelasticity is reversible deformation by phase transformation. It therefore competes with irreversible plastic deformation by dislocation motion. At the nanoscale, the dislocation density and the number of possible Frank–Read source sites are greatly reduced, so the yield stress increases with decreasing size. Therefore, materials exhibiting superelastic behavior at the nanoscale have been found to sustain long-term cycling with little detrimental evolution. On the other hand, the critical stress for the martensitic phase transformation also increases, because of the reduced number of possible sites for nucleation to begin. Nucleation usually begins near dislocations or at surface defects, but for nanoscale materials the dislocation density is greatly reduced and the surface is usually atomically smooth. Therefore, the phase transformation in nanoscale materials exhibiting superelasticity is usually homogeneous, resulting in a much higher critical stress. Specifically, for zirconia, which has three phases, the competition between phase transformation and plastic deformation has been found to be orientation dependent, indicating the orientation dependence of the activation energies for dislocation motion and nucleation. Therefore, for nanoscale materials suitable for superelasticity, one should investigate the optimal crystal orientation and surface roughness for the most enhanced superelastic effect.
See also
Shape-memory alloy
Elasticity (physics)
References
External links
DoITPoMS Teaching and Learning Package: "Superelasticity and Shape Memory Alloys"
Materials science | Pseudoelasticity | Physics,Materials_science,Engineering | 732 |
45,558,734 | https://en.wikipedia.org/wiki/COX5A | Cytochrome c oxidase subunit 5a is a protein that in humans is encoded by the COX5A gene. Cytochrome c oxidase 5A is a subunit of the cytochrome c oxidase complex, also known as Complex IV, the last enzyme in the mitochondrial electron transport chain.
Structure
The COX5A gene, located on the q arm of chromosome 15 in position 24.1, is made up of 5 exons and is 17,880 base pairs in length. The COX5A protein weighs 17 kDa and is composed of 150 amino acids. The protein is a subunit of Complex IV, which consists of 13 mitochondrial- and nuclear-encoded subunits.
Function
Cytochrome c oxidase (COX) is the terminal enzyme of the mitochondrial respiratory chain. It is a multi-subunit enzyme complex that couples the transfer of electrons from cytochrome c to molecular oxygen and contributes to a proton electrochemical gradient across the inner mitochondrial membrane to drive ATP synthesis via the protonmotive force. The mitochondrially-encoded subunits perform the electron transfer and proton pumping activities. The functions of the nuclear-encoded subunits are unknown, but they may play a role in the regulation and assembly of the complex.
Summary reaction:
4 Fe²⁺-cytochrome c + 8 H⁺(in) + O₂ → 4 Fe³⁺-cytochrome c + 2 H₂O + 4 H⁺(out)
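A conventional way to read this stoichiometry (standard complex IV proton bookkeeping, not something specific to the COX5A subunit) is to split the eight matrix protons into four consumed chemically in reducing O₂ to water and four pumped across the inner membrane:

```latex
% Decomposition of the summary reaction into chemical and pumped protons
\begin{align*}
\text{chemical:} \quad & \mathrm{O_2} + 4e^- + 4\,\mathrm{H^+_{in}} \longrightarrow 2\,\mathrm{H_2O} \\
\text{pumped:}   \quad & 4\,\mathrm{H^+_{in}} \longrightarrow 4\,\mathrm{H^+_{out}}
\end{align*}
```

Both halves contribute to the electrochemical gradient: the chemical protons are removed from the matrix, while the pumped protons are added to the intermembrane space.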
Clinical significance
COX5A (this gene) and COX5B are involved in the regulation of cancer cell metabolism by Bcl-2. COX5A interacts specifically with Bcl-2, but not with other members of the Bcl-2 family, such as Bcl-xL, Bax or Bak.
The Trans-activator of transcription protein (Tat) of human immunodeficiency virus (HIV) inhibits cytochrome c oxidase (COX) activity in permeabilized mitochondria isolated from both mouse and human liver, heart, and brain samples.
References
Further reading
External links
Mass spectrometry characterization of COX5A at COPaKB
Proteins | COX5A | Chemistry | 437 |
73,851,158 | https://en.wikipedia.org/wiki/HD%2027563 | HD 27563, also known by the Bayer designation d Eridani, is a single star in Eridanus, in the direction of the Orion–Eridanus Superbubble, that is faintly visible to the naked eye at a magnitude of about 5.84. classifies this star as spectral type B5III, but catalog it as B7II.
It was used as a comparison star for 46 Eridani in four separate observing runs at La Silla Observatory in the 1970s and 1980s, but in 1989 both stars were found to be variable with similar periods of about four days, and HD 27563 was assigned the designation EM Eridani. Despite the lack of reliable comparison stars for EM Eridani, the power spectrum of its light curve was found to be remarkably noisy, with two or four prominent oscillation periods, and the star is classified as a slowly pulsating B-type star.
References
Eridanus (constellation)
Eridani, d
Eridani, 210
B-type giants
B-type bright giants
Slowly pulsating B-type stars
020271
027563
1363
BD–07 798
Eridani, EM
J04204283−0735329 | HD 27563 | Astronomy | 257 |
9,264,025 | https://en.wikipedia.org/wiki/Terrain%20awareness%20and%20warning%20system | In aviation, a terrain awareness and warning system (TAWS) is generally an on-board system aimed at preventing unintentional impacts with the ground, termed "controlled flight into terrain" accidents, or CFIT. The specific systems currently in use are the ground proximity warning system (GPWS) and the enhanced ground proximity warning system (EGPWS). The U.S. Federal Aviation Administration (FAA) introduced the generic term TAWS to encompass all terrain-avoidance systems that meet the relevant FAA standards, which include GPWS, EGPWS and any future system that might replace them.
As of 2007, 5% of the world's commercial airlines still lacked a TAWS. A study by the International Air Transport Association examined 51 accidents and incidents and found that pilots did not adequately respond to a TAWS warning in 47% of cases.
Several factors can still place aircraft at risk for CFIT accidents: older TAWS systems, deactivation of the EGPWS system, or ignoring TAWS warnings when an airport is not in the TAWS database.
History
Beginning in the early 1970s, a number of studies looked at the occurrence of CFIT accidents, where a properly functioning airplane under the control of a fully qualified and certificated crew is flown into terrain (or water or obstacles) with no apparent awareness on the part of the crew. In the 1960s and 70s, there was an average of one CFIT accident per month, and CFIT was the single largest cause of air travel fatalities during that time.
C. Donald Bateman, an engineer at Honeywell, is credited with developing the first ground proximity warning system (GPWS); in an early test, conducted after the 1971 crash of Alaska Airlines Flight 1866, the device provided sufficient warning for a small plane to avoid the terrain, but not enough for the larger Boeing 727 jetliner involved. Bateman's earliest devices, developed in the 1960s, used radio waves to measure altitude and triggered an alarm when the aircraft was too low, but it was not aimed forward and could not provide sufficient warning of steeply rising terrain ahead.
Early GPWS mandates
Findings from these early studies indicated that many such accidents could have been avoided if a GPWS had been used. As a result of these studies and recommendations from the U.S. National Transportation Safety Board (NTSB), in 1974 the FAA required all (Part 121) certificate holders (that is, those operating large turbine-powered airplanes) and some (Part 135) certificate holders (that is, those operating large turbojet airplanes) to install TSO-approved GPWS equipment.
In 1978, the FAA extended the GPWS requirement to Part 135 certificate holders operating smaller airplanes: turbojet-powered airplanes with ten or more passenger seats. These operators were required to install TSO-approved GPWS equipment or alternative ground proximity advisory systems that provide routine altitude callouts whether or not there is any imminent danger. This requirement was considered necessary because of the complexity, size, speed, and flight performance characteristics of these airplanes. The GPWS equipment was considered essential in helping the pilots of these airplanes to regain altitude quickly and avoid what could have been a CFIT accident.
Installation of GPWS or alternative FAA-approved advisory systems was not required on turbo-propeller powered (turboprop) airplanes operated under Part 135 because, at that time, the general consensus was that the performance characteristics of turboprop airplanes made them less susceptible to CFIT accidents. For example, it was thought that turboprop airplanes had a greater ability to respond quickly in situations where altitude control was inadvertently neglected, as compared to turbojet airplanes. However, later studies, including investigations by the NTSB, analyzed CFIT accidents involving turboprop airplanes and found that many of these accidents could have been avoided if GPWS equipment had been used.
Some of these studies also compared the effectiveness of the alternative ground proximity advisory system to the GPWS. GPWS was found to be superior in that it would warn only when necessary, provide maximum warning time with minimal unwanted alarms, and use command-type warnings.
Based on these reports and NTSB recommendations, in 1992 the FAA amended §135.153 to require GPWS equipment on all turbine-powered airplanes with ten or more passenger seats.
Evolution to EGPWS & TAWS
After these rules were issued, advances in terrain mapping technology permitted the development of a new type of ground proximity warning system that provides greater situational awareness for flight crews. The FAA has approved certain installations of this type of equipment, known as the enhanced ground proximity warning system (EGPWS). However, in the proposed final rule, the FAA is using the broader term "terrain awareness and warning system" (TAWS) because the FAA expects that a variety of systems may be developed in the near future that would meet the improved standards contained in the proposed final rule. The breakthrough that enabled successful EGPWS came after the dissolution of the Soviet Union in 1991; the USSR had created detailed terrain maps of the world, and Bateman convinced his director of engineering to purchase them after the political chaos made them available, enabling earlier terrain warnings.
The TAWS improves on existing GPWS systems by providing the flight crew much earlier aural and visual warning of impending terrain, forward looking capability, and continued operation in the landing configuration. These improvements provide more time for the flight crew to make smoother and gradual corrective action. United Airlines was an early adopter of the EGPWS technology. The CFIT of American Airlines Flight 965 in 1995 convinced that carrier to add EGPWS to all its aircraft; although the Boeing 757 was equipped with the earlier GPWS, the terrain warning was issued only 13 seconds before the crash.
In 1998, the FAA issued Notice No. 98-11, Terrain Awareness and Warning System, proposing that all turbine-powered U.S.-registered airplanes type certificated to have six or more passenger seats (exclusive of pilot and copilot seating), be equipped with an FAA-approved terrain awareness and warning system.
On March 23, 2000, the FAA issued Amendments 91–263, 121–273, and 135-75 (Correction 135.154). These amendments amended the operating rules to require that all U.S. registered turbine-powered airplanes with six or more passenger seats (exclusive of pilot and copilot seating) be equipped with an FAA-approved TAWS. The mandate only affects aircraft manufactured after March 29, 2002.
By 2006, aircraft upset accidents had overtaken CFIT as the leading cause of aircraft accident fatalities, credited to the widespread deployment of TAWS. On March 7, 2006, the NTSB called on the FAA to require all U.S.-registered turbine-powered helicopters certified to carry at least 6 passengers to be equipped with a terrain awareness and warning system. The technology had not yet been developed for the unique flight characteristics of helicopters in 2000. A fatal helicopter crash in the Gulf of Mexico, involving an Era Aviation Sikorsky S-76A++ helicopter with two pilots transporting eight oil service personnel, was one of many crashes that prompted the decision.
President Barack Obama awarded the National Medal of Technology and Innovation to Bateman in 2010 for his invention of GPWS and its later evolution into EGPWS/TAWS.
Workings
A modern TAWS works by using digital elevation data and aircraft instrument values to predict whether a likely future position of the aircraft intersects with the ground. The flight crew is thus provided with "earlier aural and visual warning of impending terrain, forward looking capability, and continued operation in the landing configuration."
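As a toy illustration of this look-ahead principle (a simplified flat-earth geometry with an invented one-dimensional terrain profile, not any certified alerting algorithm; real systems use global terrain databases, aircraft performance models, and the alerting envelopes defined in TSO-C151), the predicted flight path can be checked each second against terrain elevation plus a required clearance:

```python
# Minimal sketch of forward-looking terrain alerting. The terrain profile,
# look-ahead horizon, and clearance value are invented for illustration.
TERRAIN_M = [120, 150, 300, 620, 900, 1150, 1100, 950]  # elevation (m) at 1 km steps ahead
LOOKAHEAD_S = 60      # warning horizon, seconds
CLEARANCE_M = 150     # required clearance above terrain, meters

def terrain_alert(alt_m, ground_speed_mps, vert_speed_mps):
    """Extrapolate the flight path second by second and check for conflict."""
    for t in range(1, LOOKAHEAD_S + 1):
        dist_km = ground_speed_mps * t / 1000.0
        cell = min(int(dist_km), len(TERRAIN_M) - 1)   # terrain cell under the aircraft
        predicted_alt = alt_m + vert_speed_mps * t
        if predicted_alt < TERRAIN_M[cell] + CLEARANCE_M:
            return True, t
    return False, None

# Level flight at 800 m toward rising terrain at 120 m/s groundspeed:
alert, seconds = terrain_alert(alt_m=800, ground_speed_mps=120, vert_speed_mps=0.0)
print("TERRAIN, PULL UP" if alert else "clear", "| predicted conflict in", seconds, "s")
```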
TAWS types
Class A TAWS includes all the requirements of Class B TAWS, below, and adds the following additional three alerts and display requirements of:
Excessive closure rate to terrain alert
Flight into terrain when not in landing configuration alert
Excessive downward deviation from an ILS glideslope alert
Required: Class A TAWS installations shall provide a terrain awareness display that shows either the surrounding terrain or obstacles relative to the airplane, or both.
Class B TAWS is defined by the U.S. FAA as:
A class of equipment that is defined in TSO-C151b and RTCA DO-161A. As a minimum, it will provide alerts for the following circumstances:
Reduced required terrain clearance
Imminent terrain impact
Premature descent
Excessive rates of descent
Negative climb rate or altitude loss after takeoff
Descent of the airplane to 500 feet above the terrain or nearest runway elevation (voice callout "Five Hundred") during a non-precision approach.
Optional: Class B TAWS installation may provide a terrain awareness display that shows either the surrounding terrain or obstacles relative to the airplane, or both.
Class C defines voluntary equipment intended for small general aviation airplanes that are not required to install Class B equipment. This includes minimum operational performance standards intended for piston-powered and turbine-powered airplanes, when configured with fewer than six passenger seats, excluding any pilot seats. Class C TAWS equipment shall meet all the requirements of a Class B TAWS with the small aircraft modifications described by the FAA. The FAA has developed Class C to make voluntary TAWS usage easier for small aircraft.
Effects and statistics
Prior to the development of GPWS, large passenger aircraft were involved in 3.5 fatal CFIT accidents per year, falling to 2 per year in the mid-1970s. A 2006 report stated that from 1974, when the U.S. FAA made it a requirement for large aircraft to carry such equipment, until the time of the report, there had not been a single passenger fatality in a CFIT crash by a large jet in U.S. airspace.
After 1974, there were still some CFIT accidents that GPWS was unable to help prevent, due to the "blind spot" of those early GPWS systems. More advanced systems were developed.
Older TAWS units, deactivation of the EGPWS, or ignoring its warnings when the destination airport is not in its database still leave aircraft vulnerable to possible CFIT incidents. In April 2010, a Polish Air Force Tupolev Tu-154M aircraft crashed near Smolensk, Russia, in a possible CFIT accident, killing all passengers and crew, including the Polish President. The aircraft was equipped with a TAWS made by Universal Avionics Systems of Tucson. According to the Russian Interstate Aviation Committee, the TAWS was turned on; however, the airport where the aircraft was going to land (Smolensk (XUBS)) was not in the TAWS database. In January 2008, a Polish Air Force CASA C-295M crashed in a CFIT accident near Mirosławiec, Poland, despite being equipped with EGPWS; the investigation found that the EGPWS warning sounds had been disabled and that the pilot-in-command was not properly trained in its use.
See also
Index of aviation articles
List of aviation, avionics, aerospace and aeronautical abbreviations
Airborne collision avoidance system
Controlled flight into terrain (CFIT)
Digital fly-by-wire
Ground proximity warning system / enhanced GPWS
Runway Awareness and Advisory System
References
External links
Honeywell Enhanced Ground Proximity Warning System (EGPWS)
FAR Sec. 121.354 – Terrain awareness and warning system
Terrain Awareness and Warning System; Final Rule
TSO-C151b Terrain Avoidance and Warning System PDF , TSO-C151b Web Page
TAWS - FAA Mandates A New Proximity to Safety by Gary Picou
Avionics
Aircraft collision avoidance systems | Terrain awareness and warning system | Technology | 2,314 |
69,228,425 | https://en.wikipedia.org/wiki/Grand%20Antique%20marble | Grand Antique marble (also Celtic marble (), Grand Antique of Aubert, and known in Roman times as Marmor Aquitanicum), is a prestigious marble, composed of clasts of black limestone and white calcite, quarried near Aubert-Moulis in France. The fault breccia from which it is extracted was formed at the end of the Cretaceous period, following the corrugation that affected the Northern Pyrenean area about 65 million years ago.
The marble was first quarried by the Romans in the third or fourth century and was exported in large quantities to Rome and Constantinople, primarily for decorative columns. Roman examples include the ciborium in Santa Cecilia and the candelabra of the Paschal candle in Santa Maria Maggiore. In Byzantium, the marble was used for decorative panels in Hagia Sophia.
The quarry was subsequently closed, and the blocks already extracted were utilized for several churches, including St Peter's Basilica in Rome, St Mark's Basilica in Venice, and Westminster in London. The marble was widely used by Émile-Jacques Ruhlmann for fireplaces. Examples are also found in Les Invalides, in the columns on the altar in the chapel and the tomb of Joseph Napoleon, and at Versailles. More recently, this marble was also extensively used in New York during the 1920s on the exterior facade of the Roosevelt Hotel, as well as in the recent extension of the Museum of Modern Art for interior decoration.
Exploited intermittently and then closed in 1948, the quarry was reopened in 2012 when the Italian company Escavamar purchased the operating rights with the goal of providing high-quality marble in measured quantities to a luxury and high-end clientele. In 2015, Escavamar officially registered the trademark "Grand Antique d'Aubert".
References
Building materials
Types of marble
Hagia Sophia | Grand Antique marble | Physics,Engineering | 387 |
70,812,248 | https://en.wikipedia.org/wiki/Chandni%20Chowk%2C%20Kolkata | Chandni Chowk is a neighbourhood of North Kolkata in Kolkata district in the Indian state of West Bengal. It is famous for its old and cheap market of computer software products and hardwares, and had been listed as a notorious market in 2009 and 2010 by the USTR for selling counterfeit software, media and goods.
History
Chandni Chowk, an economic hub and marketplace in North Kolkata, existed as early as 1784. A. Upjohn's 1784 map of Calcutta labels it as "Chandney Choke Bazar", "Chandney Bazaar ka rastah", "Chandnee Choke" or "Goreeamar Lane". Kolkata Municipal Corporation records show that before 1937 one of the lanes was locally called Guriama Lane. In February 1937, the name was officially changed to Chandney Approach, and in 1938 it was renamed again as Chandni Chowk Street. It has been speculated that the name was given after Delhi's Chandni Chowk as an acknowledgement of the Mughals, whose capital was at Delhi at that time.
W.H. Carey speculates that the name came from the canopies or semi-permanent roofs (chadna, in Bengali) above shops that had sprung up over the years. The market existed in the 19th century. R.J. Minney's Round about Calcutta (1922) says that Chandni Chowk contained garment markets, cycle shops, camera shops, pigeon stalls, and stalls for cigarettes and sherbet. Hawkers and shopkeepers with semi-permanent structures continue to occupy every square inch of the space. In 1907, the KMC bought land to widen Chandni Chowk Street, but that did not help passersby.
In October 1994, Chandni Chowk came under the metro corridor when the stretch from Esplanade to Chandni Chowk was authorised for construction.
Market
Chandni Chowk is known for being the oldest and biggest software and hardware market in West Bengal. Multiple technology companies have had their main offices and service centres in Chandni Chowk due to its cheap and accessible facilities.
Transport
Central Avenue, one of the main road connectors between South and North Calcutta, passes through Chandni Chowk. Lenin Sarani also passes through Chandni Chowk, connecting it directly to Sealdah. Chandni Chowk and Esplanade are the nearest metro stations on the North–South metro corridor; Esplanade also serves as a station on the East–West metro corridor of Kolkata.
See also
Bidhan Sarani
Boubazar
College Street
Dharmatala
References
Neighbourhoods in Kolkata
Notorious markets
High-technology business districts in India
Information technology places
Information technology in India
Electronics districts
Electronics industry in India | Chandni Chowk, Kolkata | Technology | 571 |
1,550,921 | https://en.wikipedia.org/wiki/Comparison%20of%20wiki%20software | The following tables compare general and technical information for many wiki software packages.
General information
Systems listed on a light purple background are no longer in active development.
Target audience
Features 1
Features 2
Installation
See also
Comparison of
wiki farms
notetaking software
text editors
HTML editors
word processors
wiki hosting services
List of
wikis
wiki software
personal information managers
text editors
outliners for
desktops
mobile devices
web-based
Footnotes
Comparison
Wiki software
Text editor comparisons
Wiki software | Comparison of wiki software | Technology | 94 |
21,141,486 | https://en.wikipedia.org/wiki/4-1000A | The 4-1000A/8166 is a radial-beam tetrode designed for use in radio transmitters. The 4-1000A is the largest of a series of tubes including the 4-65A, 4-125A, 4-250A, and the 4-400A. These tubes share a common naming convention in which the first number identifies the number of elements contained within the tube; i.e., the number "4" identifies the tube as a tetrode which contains four elements (filament, control grid, screen grid, and anode), and the second number indicates the maximum continuous power dissipation of the anode in Watts. The entire family of tubes can be used as oscillators, modulators, and amplifiers.
Specifications
The 4-1000A is a relatively large glass tube with an overall height of 9.25 inches and a diameter of 5 inches. It is designed to operate with its plate (anode) glowing an orange-red color, because the "getter" is a zirconium compound on the anode structure that requires a great deal of heat to be effective. The cathode is a directly heated, thoriated-tungsten filament rated at 7.5 volts at 21 amperes. Connections to the filament and grids are made via a special 5-pin socket, and the anode connection is at the top of the tube.
The tube may be operated as a class C amplifier, in which a single tube can provide up to 3,340 watts of RF power. A pair of tubes may be operated as an audio-frequency modulator for an AM transmitter; in this case, a pair of tubes will provide up to 3,840 watts of audio power.
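These two ratings are consistent with typical class C anode efficiency. As a rough back-of-the-envelope check (assuming the anode runs at its full 1,000-watt dissipation rating and neglecting filament and drive power):

```latex
% Anode efficiency implied by the single-tube class C ratings
P_{\mathrm{in}} = P_{\mathrm{out}} + P_{\mathrm{diss}} = 3340\,\mathrm{W} + 1000\,\mathrm{W} = 4340\,\mathrm{W},
\qquad
\eta_{\mathrm{anode}} = \frac{P_{\mathrm{out}}}{P_{\mathrm{in}}} = \frac{3340}{4340} \approx 77\%
```

An anode efficiency in the 75–80% range is characteristic of class C operation, which is why that class is favored for RF power amplification despite its nonlinearity.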
Internal construction
The 4-1000A is of radial construction; the most obvious feature is the large, roughly cylindrical, blackish anode suspended from the top of the tube. The anode, which may be constructed out of metal or graphite, is finned for increased heat dissipation. The filament and grids are supported from the lower section/base of the tube.
Sources
The 4-1000A was available from multiple manufacturers including RCA, EIMAC, AMPEREX, and Triton.
Cooling
A complete cooling system is required to operate the tube at rated values. A centrifugal fan pressurizes the equipment chassis and provides a continuous stream of cooling air. A specially designed socket is used to direct the air over the filament and grid connections, and a glass chimney (Pyrex) then directs the air around the tube's glass envelope; chassis-mounted metal clips are used to center the chimney around the tube. Finally, a finned, cylindrical heat sink is attached to the anode connection (top of the tube) to provide additional cooling for the anode's glass-to-metal seal.
Variants
The 4-1000A was modified for operation in pulsed applications; the modified tube, identified as the 4PR1000A/8189, can be operated with anode voltages as high as 30 kV DC and peak plate current of 8.0 Amperes, but at a reduced duty cycle.
References
Vacuum tubes | 4-1000A | Physics | 665 |
35,330,140 | https://en.wikipedia.org/wiki/Convergence%20%28relationship%29 | The convergence hypothesis suggests that spouses and romantic partners tend to become more alike over time due to their shared environment, repeated interactions, and synchronized routines. For example, partners who often laugh and joke with each other, may experience less stress, which, over the years, may improve their health and social interactions. Yet, as detailed below, this hypothesis was not confirmed by empirical studies.
The convergence hypothesis became popular among social scientists and was widely used to explain the high levels of observed similarity between spouses and romantic partners in physical, physiological, demographic, and psychological characteristics, such as social class, religion, height, intelligence, and education. Yet empirical research shows that couples do not become more similar over time, but are similar from the outset. The similarity between spouses and romantic partners is explained by homogamy (i.e., being socially and geographically surrounded by similar others) and homophily (i.e., a preference for similar others).
Studies
The study by Zajonc et al. found that the faces of spouses become more similar over time and that this similarity is positively correlated with couples' satisfaction in their marriage. The researchers suggest that this may be due to couples sharing similar environments and experiences, leading to similar facial features as a result. For example, couples who smile frequently may develop similar wrinkles around their eyes as a result.
More recent studies have called into question the hypothesis that spouses' faces become more similar over time, as suggested by Zajonc et al. For example, Stanford University psychologists Tea-makorn and Kosinski conducted a study on a sample of 517 married couples using photographs taken at the beginning of their marriages and 20 to 69 years later. They used two independent approaches to measure the similarity of the spouses' faces: human judges and a modern facial recognition algorithm. Their findings demonstrated that while spouses have similar facial features at the start of their marriage, these features do not continue to become more alike over time.
Hinsz found that couples married for 25 years were no more similar in appearance than recently engaged couples. Additionally, Griffith and Kunz found that while student raters were able to match spouses' faces at a level above chance, there was no significant trend of spouses growing to look alike as they lived together.
Research carried out by psychologists from Michigan State University and the University of Minnesota (M. Brent Donnellan, Mikhila N. Humbad, William G. Iacono, Matthew McGue, and S. Alexandra Burt), based on a database of 1,296 couples who had been married for an average of 19.8 years, suggested that only the degree of aggressiveness actually tended to converge. They also found that couples who had been married for up to 39 years were no more alike in fundamental traits than newlyweds. They concluded that personalities do not grow more similar as the years pass; rather, the couples most likely looked for specific traits during courtship and ended up with partners similar to themselves.
References
Interpersonal relationships | Convergence (relationship) | Biology | 617 |
487,507 | https://en.wikipedia.org/wiki/Orange%20box | An orange box is a piece of hardware or software that generates caller ID frequency-shift keying (FSK) signals to spoof caller ID information on the target's caller ID terminal.
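In the common Bellcore standard, the caller ID data is sent as Bell 202 audio FSK between the first and second rings: 1200 Hz for a mark (1), 2200 Hz for a space (0), at 1200 baud. The following is a minimal sketch of that kind of FSK generation only; it deliberately omits the actual caller ID message format (message type, length, and checksum bytes):

```python
import math
import struct
import wave

# Bell 202 FSK, as used by the common (Bellcore) caller ID standard:
# logical 1 ("mark") = 1200 Hz, logical 0 ("space") = 2200 Hz, at 1200 baud.
MARK_HZ, SPACE_HZ, BAUD, RATE = 1200, 2200, 1200, 48000
SAMPLES_PER_BIT = RATE // BAUD  # 40 samples per bit at 48 kHz

def byte_to_uart_bits(byte):
    """Frame one byte as 8N1 serial: start bit 0, eight data bits LSB-first, stop bit 1."""
    return [0] + [(byte >> i) & 1 for i in range(8)] + [1]

def fsk_samples(bits):
    """Yield 16-bit PCM samples for the bit stream, keeping the phase continuous."""
    phase = 0.0
    for bit in bits:
        step = 2 * math.pi * (MARK_HZ if bit else SPACE_HZ) / RATE
        for _ in range(SAMPLES_PER_BIT):
            phase += step
            yield int(12000 * math.sin(phase))

# Demo: a channel-seizure-style preamble of alternating bits, then one byte.
bits = [0, 1] * 150 + byte_to_uart_bits(ord("A"))
with wave.open("fsk_demo.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(RATE)
    wav.writeframes(b"".join(struct.pack("<h", s) for s in fsk_samples(bits)))
```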
See also
Blue box
References
Caller ID
Phreaking boxes | Orange box | Engineering | 52 |
1,621,152 | https://en.wikipedia.org/wiki/Spiritual%20materialism | Spiritual materialism is a term coined by Chögyam Trungpa in his book Cutting Through Spiritual Materialism. The book is a compendium of his talks explaining Buddhism given while opening the Karma Dzong meditation center in Boulder, Colorado. He expands on the concept in later seminars that became books such as Work, Sex, Money. He uses the term to describe mistakes spiritual seekers commit which turn the pursuit of spirituality into an ego-building and confusion-creating endeavor, based on the idea that ego development is counter to spiritual progress.
Conventionally, it is used to describe capitalist and spiritual narcissism, such as commercial efforts by "new age" bookstores and wealthy lecturers on spirituality; it might also mean the attempt to build up a list of credentials or accumulate teachings in order to present oneself as a more realized or holy person. Author Jorge Ferrer equates the terms "spiritual materialism" and "spiritual narcissism", though others draw a distinction: spiritual narcissism is the belief that one deserves love and respect, or is better than others, because one has accumulated spiritual training, rather than the belief that accumulating training will bring an end to suffering.
Lords of Materialism
In Trungpa's presentation, spiritual materialism can fall into three categories — what he calls the three "Lords of Materialism" (Tibetan: lalo literally "barbarian") — in which a form of materialism is misunderstood as bringing long-term happiness but instead brings only short-term entertainment followed by long-term suffering:
Physical materialism is the belief that possessions can bring release from suffering. In Trungpa's view, they may bring temporary happiness but then more suffering in the endless pursuit of creating one's environment to be just right. Or on another level it may cause a misunderstanding like, "I am rich because I have this or that" or "I am a teacher (or whatever) because I have a diploma (or whatever)."
Psychological materialism is the belief that a particular philosophy, belief system, or point of view will bring release from suffering. So seeking refuge by strongly identifying with a particular religion, philosophy, political party or viewpoint, for example, would be psychological materialism. From this the conventional usage of spiritual materialism arises, by identifying oneself as Buddhist or some other label, or by collecting initiations and spiritual accomplishments, one further constructs a solidified view of ego. Trungpa characterizes the goal of psychological materialism as using external concepts, pretexts, and ideas to prove that the ego-driven self exists, which manifests in a particular competitive attitude.
Spiritual materialism is the belief that a certain temporary state of mind is a refuge from suffering. An example would be using meditation practices to create a peaceful state of mind, or using drugs or alcohol to remain in a numbed out or a euphoric state. According to Trungpa, these states are temporary and merely heighten the suffering when they cease. So attempting to maintain a particular emotional state of mind as a refuge from suffering, or constantly pursuing particular emotional states of mind like being in love, will actually lead to more long-term suffering.
Ego
The underlying source of these three approaches to finding happiness is based, according to Trungpa, on the mistaken notion that one's ego is inherently existent and a valid point of view. He claims that is incorrect, and therefore the materialistic approaches have an invalid basis to begin with. The message in summary is, "Don't try to reinforce your ego through material things, belief systems like religion, or certain emotional states of mind." In his view, the point of religion is to show you that your ego doesn't really exist inherently. Ego is something you build up to make you think you exist, but it is not necessary and in the long run causes more suffering.
References
Carson, Richard David (2003) Taming Your Gremlin: A Surprisingly Simple Method for Getting Out of Your Own Way
Ferrer, Jorge Noguera (2001) Revisioning Transpersonal Theory: A Participatory Vision of Human Spirituality
Hart, Tobin (2004) The Secret Spiritual World of Children
Potter, Richard and Potter, Jan (2006) Spiritual Development for Beginners: A Simple Guide to Leading a Purpose Filled Life
Trungpa, Chögyam (1973). Cutting Through Spiritual Materialism. Boston, Massachusetts: Shambhala Publications, Inc.
Trungpa, Chögyam (2011). Work, Sex, Money: Real Life on the Path of Mindfulness. Boston, Massachusetts: Shambhala Publications, Inc. Based on a series of talks given between 1971 and 1981.
External links
Cutting Through Spiritual Materialism excerpts
Work, Sex, Money excerpts
Spiritual Finances
Video of Boulder talks on the subject by Chögyam Trungpa
Materialism
Spiritual philosophy
Tibetan Buddhist philosophical concepts | Spiritual materialism | Physics | 992 |
657,862 | https://en.wikipedia.org/wiki/Shared%20Source%20Common%20Language%20Infrastructure | The Shared Source Common Language Infrastructure (SSCLI), previously codenamed Rotor, is Microsoft's shared source implementation of the CLI, the core of .NET. Although the SSCLI is not suitable for commercial use due to its license, it does make it possible for programmers to examine the implementation details of many .NET libraries and to create modified CLI versions. Microsoft provides the Shared Source CLI as a reference CLI implementation suitable for educational use.
History
Beginning in 2001, Microsoft announced it would release part of the .NET Framework infrastructure source code as shared source, as part of the ECMA C# and CLI standardization process.
In March 2002, Microsoft released version 1.0 of the Shared Source Common Language Infrastructure, also called Rotor. The Shared Source CLI was initially pre-configured to run on Windows, but could also be built on FreeBSD (version 4.7 or newer), and Mac OS X 10.2. It was designed such that the only thing that needed to be customized to port the Shared Source CLI to a different platform was a thin Platform Abstraction Layer (PAL).
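To make the porting claim concrete, the pattern looks roughly like the toy below, written in Python for brevity rather than the C/C++ the SSCLI actually uses, and with invented class and method names: all platform-specific behavior is funneled through one small interface, so a port reimplements only that layer.

```python
import os
import time

# A toy "platform abstraction layer": portable code above this layer never
# calls the OS directly, so porting means rewriting only these classes.
class PosixPal:
    def native_path(self, parts):
        return "/".join(parts)
    def ticks_ms(self):
        return int(time.monotonic() * 1000)

class WindowsPal:
    def native_path(self, parts):
        return "\\".join(parts)
    def ticks_ms(self):
        return int(time.monotonic() * 1000)

# Select the one platform-specific implementation at startup.
PAL = WindowsPal() if os.name == "nt" else PosixPal()
print(PAL.native_path(["clr", "src", "vm"]), PAL.ticks_ms())
```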
The last 2.0 version of SSCLI was released in March 2006 and contains most of the classes and features of version 2.0 of the .NET Framework. SSCLI 2.0 can be downloaded directly from Microsoft downloads and requires Perl and Visual Studio 2005 running on Windows XP SP2 to compile. Microsoft has not updated the source and build requirements since 2006. Even Microsoft MVPs, an important part of the Microsoft community ecosystem, complained about the lack of support for other Visual Studio versions and operating systems. However, an unofficial patch for Visual Studio 2008 was provided by a Microsoft employee on the MSDN Blog, and another for Visual Studio 2010 was released by the community.
Later versions of .NET, originally known as .NET Core and now referred to simply as .NET, have been open sourced under the more permissive MIT license.
License
The Shared Source CLI uses the non-free Microsoft Shared Source Common Language Infrastructure license. This license allows modification and redistribution of the code for personal or academic use, but not for commercial products.
See also
Microsoft and open source
Common Language Runtime
.NET
Mono
DotGNU
References
External links
Shared Source Common Language Infrastructure 1.0 Release
Shared Source Common Language Infrastructure 2.0 Release
Introduction to Shared Source CLI
Shared Source Common Language Infrastructure
2002 software
Computing platforms | Shared Source Common Language Infrastructure | Technology | 505 |
16,782,427 | https://en.wikipedia.org/wiki/HD%20156846%20b | HD 156846 b is an extrasolar planet located approximately 160 light-years away in the constellation of Ophiuchus, orbiting the star HD 156846. It has one of the most eccentric planetary orbits known. The high eccentricity of this planet's orbit is probably attributable to the presence of a red dwarf companion star. The average distance of the planet from HD 156846 is 0.99 AU, nearly identical to the distance of Earth from the Sun. The distance ranges from 0.15 AU to 1.83 AU over a 360-day period, also very close to the period of the Earth. The planet is also very massive, with at least 10.45 Jupiter masses. The mass is a lower limit because the inclination of the orbit is not known; if the inclination, and hence the true mass, were determined, the object could prove to be a brown dwarf or even a red dwarf.
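As a consistency check, the quoted extremes of the orbit fix its geometry directly:

$$a = \frac{r_{\min} + r_{\max}}{2} = \frac{0.15 + 1.83}{2}\ \text{AU} = 0.99\ \text{AU}, \qquad e = \frac{r_{\max} - r_{\min}}{r_{\max} + r_{\min}} = \frac{1.68}{1.98} \approx 0.85,$$

matching the stated mean distance and placing the orbit among the most eccentric known.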
References
External links
Ophiuchus
Giant planets
Exoplanets discovered in 2007
Exoplanets detected by radial velocity | HD 156846 b | Astronomy | 212 |
3,400,834 | https://en.wikipedia.org/wiki/Acylsilane | Acylsilanes are a group of chemical compounds sharing a common functional group with the general structure RC(O)-SiR3.
Synthesis
Acylsilanes can be synthesized by treating acyl anion equivalents with silyl halides (typically trimethylsilyl chloride, TMSCl). One classic route involves silylation of 2-lithio-1,3-dithiane, followed by hydrolysis of the dithioacetal group with mercury(II) chloride. Analogous methods have also been used to produce acylgermanes.
Several approaches to acylsilanes start from carboxylic acid derivatives. Esters undergo reductive silylation en route to acylsilanes.
Tertiary amides react with silyl lithium reagents, the silyl anion displacing the amide group (schematically): RC(O)NR'2 + Me3SiLi → RC(O)SiMe3 + LiNR'2
Acid chlorides are converted using hexamethyldisilane (schematically): RC(O)Cl + Me3Si-SiMe3 → RC(O)SiMe3 + Me3SiCl
Some acylsilanes are prepared by oxidation of suitable silanes.
Reactions
Acylsilanes are starting compounds in the Brook rearrangement: reaction with vinyllithium compounds gives silyl enol ethers.
Acylsilanes and aryl bromides are coupling partners in Pd-catalyzed cross-coupling reactions.
Further reading
Degl'Innocenti, Alessandro; Capperucci, Antonella; Scafato, Patrizia; Telesca, Antonella. "The reactivity of α- and β-iodo propenoylsilanes: an alternative access to polyunsaturated acylsilanes", Arkivoc, 2000.
References
Functional groups
Organosilicon compounds | Acylsilane | Chemistry | 328 |
11,421,637 | https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20psi28S-2876 | In molecular biology, Small nucleolar RNA psi28S-2876 (also known as snoRNA psi28S-2876) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and also often referred to as a 'guide RNA'.
This Drosophila-specific snoRNA is a member of the H/ACA box class of snoRNA and is predicted to be responsible for guiding the modification of uridines 2876 and 2956 to pseudouridine in Drosophila 28S rRNA.
References
External links
Small nuclear RNA | Small nucleolar RNA psi28S-2876 | Chemistry | 184 |
377,112 | https://en.wikipedia.org/wiki/George%20Romanes | George John Romanes (20 May 1848 – 23 May 1894) was a Canadian-Scots evolutionary biologist and physiologist who laid the foundation of what he called comparative psychology, postulating a similarity of cognitive processes and mechanisms between humans and other animals.
He was the youngest of Charles Darwin's academic friends, and his views on evolution are historically important. He is credited with coining the term neo-Darwinism, which in the late 19th century denoted a theory of evolution that focuses on natural selection as the main evolutionary force; however, Samuel Butler had used the term with a similar meaning in 1880. Romanes' early death was a loss to the cause of evolutionary biology in Britain. Within six years Mendel's work was rediscovered, and a whole new agenda opened up for debate.
Early life
George Romanes was born in Kingston, Canada West (now Ontario), in 1848, the youngest of three children, all boys, in a well-to-do and intellectually cultivated family. His father was Rev George Romanes (1805–1871), a Scottish Presbyterian minister. Two years after his birth, his parents moved to Cornwall Terrace in London, United Kingdom, which would set Romanes on the path to a fruitful and lasting relationship with Charles Darwin. During his youth, Romanes resided temporarily in Germany and Italy, developing a fluency in both German and Italian. His early education was inconsistent, undertaken partly in public schools, and partly at home. He developed an early love for poetry and music, at which he excelled. However, his true passion resided elsewhere, and the young Romanes decided to study science, abandoning a prior ambition to become a clergyman like his father.
Adulthood
Although he came from an educated home, his school education was erratic. He entered university half-educated and with little knowledge of the ways of the world. He studied medicine and physiology, graduating from Gonville and Caius College, Cambridge with the degree of BA in 1871, and is commemorated there by a stained glass window in the chapel.
It was at Cambridge that he came first to the attention of Charles Darwin: "How glad I am that you are so young!" said Darwin.
Forging a relationship with Darwin was not difficult for Romanes, who reputedly inherited a "sweetness of temper and calmness of manner" from his father. The two remained friends for life. Guided by Michael Foster, Romanes continued to work on the physiology of invertebrates at University College London under William Sharpey and Burdon-Sanderson. In 1879, at 31, Romanes was elected a Fellow of the Royal Society on the basis of his work on the nervous systems of medusae. However, Romanes' tendency to support his claims by anecdotal evidence rather than empirical tests prompted Lloyd Morgan's warning known as Morgan's Canon:
"In no case is an animal activity to be interpreted in terms of higher psychological processes, if it can be fairly interpreted in terms of processes which stand lower in the scale of psychological evolution and development".
As a young man, Romanes was a Christian, and some, including his religious wife, later said that he regained some of that belief during his final illness. In fact, he became an agnostic due to the influence of Darwin. In a manuscript left unfinished at the end of his life he said that the theory of evolution had caused him to abandon religion.
Romanes founded a series of free public lectures, the Romanes Lectures, which continue to this day. He was a friend of Thomas Henry Huxley, who gave the second Romanes lecture.
Towards the end of his life, he returned to Christianity.
He died in Oxford on 23 May 1894. A memorial to Romanes exists in the north west corner of Greyfriars Kirkyard in Edinburgh on the grave of his parents.
Professional life
Romanes's relationship with Darwin developed quickly, and they became close friends. It began when Romanes became Darwin's research assistant during the last eight years of Darwin's life, an association essential to Darwin's later works: Darwin entrusted him with volumes of unpublished material, which Romanes later used to publish papers. Like Darwin's, Romanes's theories were initially met with scepticism. The majority of Romanes's work attempted to make a connection between animal consciousness and human consciousness. Some problems encountered in this research he addressed with the development of physiological selection, his answer to three objections to Darwin's isolation theory of speciation: species characteristics that have no evolutionary purpose; the widespread fact of inter-specific sterility; and the need for varieties to escape the swamping effects of inter-crossing before permanent species are established. At the end of his career, the majority of his work was directed towards establishing a relationship between intelligence and placement on an evolutionary tree. Romanes believed that the further along an organism was from an evolutionary standpoint, the more likely it was to possess a higher level of functioning.
Family
Romanes was the youngest of the three children of George Romanes and Isabella Cair Smith. Most of his immediate and extended family were descended from Scottish Highland clans. His father, Reverend George Romanes, was a professor at Queen's College in Kingston, Canada, where he taught Greek until the family moved back to England. Romanes and his wife, Ethel Duncan, were married on 11 February 1879; they were happily married and studied together, and Romanes was said to be an "ideal father" to their six children. Both of Romanes's parents were associated with the Protestant and Anglican Church during his childhood, and Romanes was baptised Anglican and heavily exposed to Anglican teachings during his youth, even though his parents were not deeply committed to any religion.
It has been speculated that Romanes viewed Darwin as a father figure. Darwin did not accept the teachings of the Catholic Church because its fundamental doctrines were not supported by his scientific findings at the time, and this could help explain Romanes's conversion to agnosticism.
Philosophical and political views
When Romanes attended Gonville and Caius College, Cambridge, he entered an essay contest on the topic of "Christian Prayer considered in relation to the belief that the Almighty governs the world by general laws". Romanes did not have much hope of winning, but much to his surprise he took first place and received the Burney Prize. After winning the prize, Romanes came to the conclusion that he could no longer be faithful to his Christian religion, owing to his love of and commitment to science, a striking turn given that his father was a clergyman. In his book Thoughts on Religion, Romanes accordingly went into great detail about religion and how all aspects of the mind must be engaged for one to be faithfully committed to it; he believed that an extremely strong will was required to be dedicated to God or Christ. He had earlier published a book on the subject in general, A Candid Examination of Theism, in which he concluded that God's existence was not supported by the evidence, while stating his unhappiness with that conclusion.
Romanes on evolution
Romanes tackled the subject of evolution frequently. For the most part he supported Darwinism and the role of natural selection. However, he perceived three problems with Darwinian evolution:
The difference between natural species and domesticated varieties in respect to fertility. [this problem was especially pertinent to Darwin, who used the analogy of change in domesticated animals so frequently]
Structures which serve to distinguish allied species are often without any known utilitarian significance. [taxonomists choose the most visible and least changeable features to identify a species, but there may be a host of other differences which though not useful to the taxonomist are significant in survival terms]
The swamping influence upon an incipient species-split of free inter-crossing. [Here we strike the problem which most perplexed Darwin, with his ideas of blending inheritance. It was solved by the rediscovery of Mendelian genetics and by the modern synthesis, which showed that particulate inheritance could underlie continuous variation.] Romanes also made the acute point that Darwin had not actually shown how natural selection produced species, despite the title of his famous book (On the Origin of Species by Means of Natural Selection). Natural selection could be the 'machine' for producing adaptation, but the mechanism for splitting species remained in question.
Romanes' own solution to this was called 'physiological selection'. His idea was that variation in reproductive ability, caused mainly by the prevention of inter-crossing with parental forms, was the primary driving force in the production of new species. The majority view then (and now) was instead that geographical separation (allopatry) is the primary force in species splitting, with the increased sterility of crosses between incipient species secondary.
Influenced by Darwin, Romanes was a proponent of both natural selection and the inheritance of acquired characteristics. The latter was denied by Alfred Russel Wallace, a strict selectionist, and Romanes entered into a dispute with Wallace over the definition of Darwinism.
Published works
When Charles Darwin died, Romanes defended Darwin's theories by attempting to rebut criticisms and attacks leveled by other psychologists against the Darwinian school of thought. Romanes expanded on Darwin's theories of evolution and natural selection by advancing a theory of behaviour based on comparative psychology. In Animal Intelligence, Romanes demonstrated similarities and dissimilarities between cognitive and physical functions of various animals. In Mental Evolution in Animals, Romanes illustrated the evolution of the cognitive and physical functions associated with animal life. Romanes believed that animal intelligence evolves through behavioural conditioning, or positive reinforcement. Romanes then published Mental Evolution in Man, which focused on the evolution of human cognitive and physical functions.
Beginning in 1892, Romanes published Darwin, and After Darwin, an exposition of Darwinian theory and of post-Darwinian questions. His notes on the relationship between science and religion were left to Charles Gore, who used them in preparing Thoughts on Religion, published under Romanes' name. The Life and Letters of George Romanes offers a semi-autobiographical account of Romanes's life.
Accomplishments
1879: Romanes was elected a Fellow of the Royal Society.
1886–1890: Romanes was a professor at University of Edinburgh.
1892: When he was a professor at the University of Oxford, Romanes created a series of lectures known as Romanes Lectures.
These lectures are currently still held once a year in memory of Romanes.
Romanes is also known for creating the following words and meanings:
Anthropomorphism: attributing human-like qualities to other animals.
Anecdotal method: the use of observational methods to collect data on animal behaviour.
He also developed a "stepping stairs" model of cognitive development.
References
Further reading
Lesch, John E. "The Role of Isolation in Evolution: George J. Romanes and John T. Gulick," Isis, Vol. 66, No. 4, Dec. 1975.
McGrew, Timothy. “A Pilgrim's Regress: George John Romanes and the Search for Rational Faith,” The Christendom Review, Vol. II (2), 2009.
Morganti, Federico. "Intelligence as the Plasticity of Instinct: George J. Romanes and Darwin's Earthworms," Theoretical Biology Forum", Vol. 104, N°. 2, 2011.
Romanes, Ethel Duncan. The Life and Letters of George John Romanes, Longmans, Green and co., 1896.
Schwartz, Joel S. "George John Romanes's Defense of Darwinism: The Correspondence of Charles Darwin and His Chief Disciple," Journal of the History of Biology, Vol. 28, No. 2, Summer, 1995.
Schwartz, Joel S. "Out from Darwin's Shadow: George John Romanes's Efforts to Popularize Science in 'Nineteenth Century' and Other Victorian Periodicals," Victorian Periodicals Review, Vol. 35, No. 2, Summer, 2002.
Schwartz, Joel S. Darwin's Disciple: George John Romanes, A Life In Letters, Diane Publishing Company, 2010.
Schultz, D., & Schultz, S. A History of Modern Psychology, Harcourt College Publishers, 2000.
Tollemache, Lionel A. Mr. Romanes's Catechism, C.F. Hodgson & Son, 1887.
Zeller, Peter. Romanes. Un discepolo di Darwin alla ricerca delle origini del pensiero, Armando Editore, 2007.
Publications
The Scientific Evidences of Organic Evolution, Macmillan and Co., 1882 [1st Pub. 1877].
Candid Examination of Theism, Trübner & Co., 1878 [pseudonymously published as Physicus].
Animal Intelligence, D. Appleton and Company, 1892 [1st Pub. 1882].
Mental Evolution in Animals, with a Posthumous Essay on Instinct by Charles Darwin, Kegan Paul, Trench & Co., 1883.
Jelly-Fish, Star-Fish and Sea Urchins, Being a Research on Primitive Nervous Systems, K. Paul, Trench & Co., 1885.
Physiological Selection: an Additional Suggestion on the Origin of Species, The Journal of the Linnean Society, Vol. 19, 1886.
Mental Evolution in Man, Kegan Paul, Trench & Co., 1888.
Darwin, and after Darwin (1892–97, a work of significance for historians of evolution theory):
The Darwinian Theory, The Open Court Publishing Company, 1910 [1st Pub. 1892].
Post-Darwinian Questions: Heredity and Utility, The Open Court Publishing Company, 1906 [1st Pub. 1895].
Post-Darwinian Questions: Isolation and Physiological Selection, The Open Court Publishing Company, 1914 [1st Pub. 1897].
Mind and Motion and Monism, Longmans, Green, and Co., 1895.
An Examination of Weismannism, The Open Court Publishing Company, 1893 (August Weismann was the leading evolutionary theoretician at the turn of the 19th century).
Thoughts on Religion, Longmans, Green & Co., 1895.
Essays, Longmans, Green & Co., 1897.
Articles
"Christian Prayer and General Laws: Being the Burney Prize Essay for the Year 1873," Macmillan & Co., 1874.
"Fetichism in Animals," Nature, 27 December 1877.
"Recreation," The Nineteenth Century, Vol. VI, July/December 1879.
"Suicide," Nature, December 1881.
"American Ants," Nature, 2 March 1882.
"Nature and Thought," The Contemporary Review, Vol. XLIII, June 1883.
"Man and Brute," The North American Review, Vol. 139, No. 333, Aug. 1884.
"Mind in Men and Animals," The North American Review, March 1885.
"Physiological Selection," The Nineteenth Century, Vol. XXI, January/June 1887.
"Mental Differences Between Men and Women," The Nineteenth Century, Vol. XXI, January/June 1887.
"Concerning Women," The Forum, Vol. IV, 1887.
"Recent Critics of Darwinism," The Contemporary Review, Vol. LIII, January/June 1888.
"Mr. Wallace on Darwinism," The Contemporary Review, Vol. LVI, July/December 1889.
"Weismann's Theory of Heredity," The Contemporary Review, Vol. LVII, January/June 1890.
"Mr. A. R. Wallace on Physiological Selection," The Monist, Vol. I, N°. 1, October 1890.
"Origin of Human Faculty," Brain; a Journal of Neurology, Vol. XII, 1890.
"The Psychic Life of Micro-Organisms," The Open Court, Vol. IV, 1890–1891.
"Aristotle as a Naturalist," The Contemporary Review, Vol. LIX, January/June 1891.
"Thought and Language," Part II, The Monist, Vol. 2, No. 1, October 1891; No. 3, April 1892.
"Critical Remarks on Weismannism," The Open Court, Vol. VII, N°. 313, August 1893.
"Weismann and Galton," The Open Court, Vol. VII, N°. 315, September 1893.
"A Note on Panmixia," The Contemporary Review, Vol. LXIV, July/December 1893.
"Longevity and Death," The Monist, Vol. V, N°. 2, January 1895.
"The Darwinism of Darwin, and of the Post-Darwinian Schools," The Monist, Vol. VI, N°. 1, October 1895.
"Isolation in Organic Evolution," The Monist, Vol. VIII, 1898.
Miscellany
Observations on the Locomotor System of Echinodermata, Philosophical Transactions of the Royal Society of London, Vol. 172, Part III, 1882.
Darwinism Illustrated: Wood-engravings Explanatory of the Theory of Evolution, The Open Court Publishing Company, 1892.
A Selection from the Poems of George John Romanes, Longmans, Green & Co., 1896.
External links
Works by George Romanes at Wikisource
Catalogue of the Papers of George John Romanes, 1867–1927
Genealogy, Background and Works of G. J. Romanes
Psyography George John Romanes
Pilgrim's Regress by George Romanes
George Romanes' procedures for compiling anecdotes about the intelligence of animals
Evolution by Romanes
1848 births
1894 deaths
19th-century British people
19th-century English people
Academics of University College London
Alumni of Gonville and Caius College, Cambridge
British zoologists
English zoologists
Ethologists
Evolutionary biologists
Fellows of the Royal Society
Fullerian Professors of Physiology
Pre-Confederation Ontario people | George Romanes | Biology | 3,682 |
40,326,332 | https://en.wikipedia.org/wiki/Pablo%20Rodriguez%20%28computer%20scientist%29 | Pablo Rodriguez (born 17 April 1972) is a Spanish computer scientist and researcher, best known for his research in the mid-2000s on peer-to-peer file sharing and user-generated content. After working for the technology and communications companies AT&T and Microsoft Research, Rodriguez returned to Spain in 2006 to become the research director for the telecommunications provider Telefónica. In 2010 he took a position as an adjunct professor at Columbia University in New York.
Rodriguez has been a frequent guest speaker at technology conferences in Europe, such as the International World Wide Web Conference, TEDx Barcelona, and the Wired Conference in London. He has collaborated with chef Ferran Adrià of the restaurant elBulli to develop Bullipedia, and in 2014 with football team FC Barcelona to analyze their strategies.
Early life and education
Rodriguez was born in Oviedo, in the Asturias region of Spain. After studying for his Bachelor and Master of Science in Telecommunications Engineering at the Universidad Pública de Navarra (1990–1995), Rodriguez continued with a Master's in computational physics at King's College London, studying electro-optical sensors and collaborating on the research paper Advances in high-resolution distributed sensing using a time-resolved photon counting technique.
Travelling to Switzerland and France, Rodriguez studied communication systems at a postgraduate level, and gained a PhD in Computer Science in 2000 from the École Polytechnique Fédérale de Lausanne. During his doctoral studies, Rodriguez worked as an intern at AT&T Labs in New Jersey and researched scalability at the Institut Eurécom. At AT&T, Rodriguez filed his first patents, on transparent TCP proxies. His 2002 dissertation, Scalable content distribution in the Internet, focused on scaling the existing Internet architecture to distribute content to millions of users. As part of his doctorate, he designed parallel download algorithms to improve download times and resilience in peer-to-peer file swarming systems.
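The idea behind such schemes can be sketched briefly. The fragment below is an illustration, not Rodriguez's actual algorithm: a file is split into byte ranges that are fetched concurrently from several sources, so a slow or failed source delays only its own chunks. The mirror URLs and chunk size are hypothetical.

```python
import concurrent.futures
import urllib.request

# Hypothetical mirror URLs; any servers honoring HTTP Range requests would do.
MIRRORS = ["http://mirror-a.example/file.bin", "http://mirror-b.example/file.bin"]
CHUNK = 1 << 20  # fetch the file in 1 MiB ranges

def fetch_range(task):
    url, start, end = task
    req = urllib.request.Request(url, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return start, resp.read()

def parallel_download(size):
    # Assign consecutive byte ranges to mirrors round-robin and fetch them
    # concurrently; a slow or failed source only delays the chunks it owns.
    tasks = [(MIRRORS[i % len(MIRRORS)], start, min(start + CHUNK, size) - 1)
             for i, start in enumerate(range(0, size, CHUNK))]
    buf = bytearray(size)
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        for start, data in pool.map(fetch_range, tasks):
            buf[start:start + len(data)] = data
    return bytes(buf)
```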
Career
In the early 2000s, Rodriguez worked as a software architect for Silicon Valley companies such as the search engine Inktomi and the network equipment company Tahoe Networks. In 2002, Rodriguez returned to AT&T to work at Bell Labs, where he researched many of the early concepts of peer-to-peer networks and mobile computing. Following this, Rodriguez returned to England to work at Microsoft Research Cambridge in its systems and networking research group. By 2004, Rodriguez already held ten patents. In 2005, Rodriguez co-designed Avalanche, a peer-to-peer client for legal file distribution proposed to improve download efficiency and copy protection, which was released in 2007 as Microsoft Secure Content Distribution. In addition to Avalanche, Rodriguez researched content distribution, wireless systems, and complex networks, while conducting studies assessing Windows Update, FolderShare, and Xbox Live. Rodriguez further researched low-power datacasting with Julian Chesterfield of the University of Cambridge.
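Avalanche's distinguishing technique was network coding: peers exchange random linear combinations of file blocks rather than the blocks themselves, so any sufficiently large set of linearly independent coded packets reconstructs the file, regardless of which peers supplied them. The sketch below is a toy version over GF(2) for illustration only (real systems use larger finite fields), not Microsoft's implementation:

```python
import random

# Toy random linear network coding over GF(2): each coded packet is the XOR of
# a random subset of the k original blocks, tagged with its coefficient vector.
# Any k linearly independent packets suffice to recover the file.

def encode(blocks):
    k = len(blocks)  # all blocks assumed equal length
    coeffs = [random.getrandbits(1) for _ in range(k)]
    if not any(coeffs):
        coeffs[random.randrange(k)] = 1  # avoid the useless all-zero packet
    payload = bytearray(len(blocks[0]))
    for c, block in zip(coeffs, blocks):
        if c:
            payload = bytearray(x ^ y for x, y in zip(payload, block))
    return coeffs, payload

def decode(packets, k):
    """Gaussian elimination over GF(2); returns the blocks, or None if rank < k."""
    rows = [(list(c), bytearray(p)) for c, p in packets]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
        if pivot is None:
            return None  # need more independent packets
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][0][col]:
                rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                           bytearray(x ^ y for x, y in zip(rows[r][1], rows[col][1])))
    return [bytes(rows[i][1]) for i in range(k)]

blocks = [b"AAAA", b"BBBB", b"CCCC"]
packets, recovered = [], None
while recovered is None:  # collect coded packets until the system has full rank
    packets.append(encode(blocks))
    recovered = decode(packets, len(blocks))
print(recovered)  # [b'AAAA', b'BBBB', b'CCCC']
```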
In November 2006, Rodriguez left the Avalanche project at Microsoft to work at Telefónica Catalunya in Barcelona, a center separate from Telefónica's main Madrid offices that was created in March 2006. There, he worked as the head of Telefónica's Barcelona research and development team, leading research on highly scalable distributed systems, next generation social networks and advanced wireless systems. In 2008, the team began working on BeWifi, a technology that employs ideas from peer-to-peer networks to gather additional bandwidth for Wi-Fi connections, using additional routers in the user's area. Initially employed as the Internet scientific director, in 2013 he became the center's director of research and innovation, focusing on big data concepts, until Telefónica Digital was merged into the company's Global Corporate Centre.
Rodriguez has been collaborating with chef Ferran Adrià of the former Michelin 3-star restaurant elBulli to develop Bullipedia: a Wiki format culinary repository of information about Spanish cuisine, which was first announced in early 2012. In 2014, Rodriguez collaborated with football team FC Barcelona to develop new strategies for football, by analyzing the team using network theory techniques.
Rodriguez is a member of several advisory boards for companies and associations, including the scientific journal IEEE/ACM Transactions on Networking, the IMDEA Networks Institute (since 2010), and the art and science exhibition centre LABoral Centro de Arte y Creación Industrial in Gijón (since 2013), and he has served as a trustee board member of the Catalan Institution for Research and Advanced Studies since 2014.
University and educating
In the mid-2000s, Rodriguez collaborated on research papers dealing mainly with peer-to-peer content distribution. Two of these were highly influential in computer science: Network coding for large scale content distribution (2005) and Should internet service providers fear peer-assisted content distribution? (2005). Network coding for large scale content distribution, as well as a paper analyzing YouTube networks, I tube, you tube, everybody tubes: analyzing the world's largest user generated content video system (2007), have been cited by thousands of papers and studies.
In 2010 he joined the computer science department of Columbia University as an adjunct professor, where he taught about social networks and next generation system architectures. He held this position until 2012.
In 2009, Rodriguez was one of four keynote speakers at the International World Wide Web Conference, held in Madrid. Rodriguez has spoken at TEDx Barcelona in 2011 and 2014, discussing distributed programming and later net neutrality. At the 2013 Internet Measurement Conference, Rodriguez delivered the keynote speech, while receiving an award for a paper he collaborated on, entitled Follow the Money: Understanding Economics of Online Aggregation and Advertising. In 2013 he was a part of a panel on Re-architecting the Internet for the Institute of Electrical and Electronics Engineers' conference Infocom 2013, and in 2014 attended the Wired Conference as a guest speaker, discussing his research on FC Barcelona's strategies.
Awards and honors
In 2015 he was named a fellow of the Association for Computing Machinery "for contributions to content distribution architectures in peer-to-peer networks."
See also
Hamed Haddadi
Cha Meeyoung
References
Papers
External links
Official website
Columbia University Course Page
TEDx Talk: My Data Soul
1972 births
Alumni of King's College London
AT&T people
Computer systems researchers
École Polytechnique Fédérale de Lausanne alumni
Information systems researchers
Living people
People from Oviedo
Researchers in distributed computing
Spanish academics
Spanish computer scientists
Telefónica
Columbia University faculty
Public University of Navarre alumni | Pablo Rodriguez (computer scientist) | Technology | 1,307 |
14,039,665 | https://en.wikipedia.org/wiki/Decene | Decene is an organic compound with the chemical formula . Decene contains a chain of ten carbon atoms with one double bond, making it an alkene. There are many isomers of decene depending on the position and geometry of the double bond. Dec-1-ene is the only isomer of industrial importance. As an alpha olefin, it is used as a comonomer in copolymers and is an intermediate in the production of epoxides, amines, oxo alcohols, synthetic lubricants, synthetic fatty acids and alkylated aromatics.
The industrial processes used in the production of dec-1-ene are oligomerization of ethylene by the Ziegler process or by the cracking of petrochemical waxes.
In ethenolysis, methyl oleate, the methyl ester of oleic acid, converts to 1-decene and methyl 9-decenoate:
CH3(CH2)7CH=CH(CH2)7CO2CH3 + CH2=CH2 → CH3(CH2)7CH=CH2 + CH2=CH(CH2)7CO2CH3
Dec-1-ene has been isolated from the leaves and rhizome of the plant Farfugium japonicum and has been detected as the initial product in the microbial degradation of n-decane.
References
External links
Alkenes | Decene | Chemistry | 248 |
47,186,759 | https://en.wikipedia.org/wiki/Desoutter%20Tools | Desoutter Industrial Tools, founded in Great Britain in 1914 and now headquartered in France, is an industrial manufacturer providing process control and data analysis software as well as electric (and formerly pneumatic) assembly tools. Products and services are sold in more than 170 countries through 20 business units. Desoutter Tools is active in fields such as aerospace, the automotive industry, light assembly, heavy vehicles, off-road equipment, and general industry.
Several companies have been integrated into Desoutter Tools over the years: the French Georges Renault (1989) and Seti-Tec (2011), the British Pivotware (2015), the German Nexonar (2022), the American Tech-motive (2005), and the Swedish Scan Rotor (2004).
History
Origin
Marcel Desoutter, one of the five Desoutter brothers, was an aviator. When he lost a leg in an aircraft crash, he was fitted with an "uncomfortable wooden replacement". His brother Charles helped him regain his mobility by designing a prototype for a new artificial leg made of duralumin, the first ever metal leg. Lighter and easier to manoeuvre than a wooden one, the new limb had Marcel flying again by the following year.
This innovation was met with interest from others needing a lighter artificial leg, and it resulted in the founding of the Desoutter Company, headed by Marcel Desoutter.
Product lines
From the outset, Desoutter needed to develop specific pneumatic tools to ensure that the aluminium components of the artificial limbs were drilled efficiently.
Adjusting to the numerous developments in its production, the company acquired such expertise in this field that in the 1950s it decided to make pneumatic tools its sole business.
Logo
The original idea for this symbol was credited to Charles Cunliffe, who headed Desoutter's advertising department for many years after the Second World War. This was a period of growth, particularly owing to the development of a new range of products, whose launch was accompanied by a novel advertising campaign presenting diminutive figures in workers' overalls, but with the heads of horses.
This horsepower concept was developed in many of the brand's advertisements for about twenty years. The managing board at the time even decided that it was the embodiment of the company's identity.
In 1973, the horse's head was combined with the Desoutter logo script, which was a facsimile of Louis Albert Desoutter's signature, one of the company's founders.
To mark the centenary of the brand, the emblem recently adopted a more contemporary graphic design.
Products
References
Further reading
Flight magazine, 29 March 1913
Flight magazine, 2 May 1929
Flight magazine, 25 April 1952 (Obituary)
Flight magazine, 13 January 1955
Jackson, A J. British Civil Aircraft since 1919 Volume 2. Putnam, 1973
Oxford Dictionary of National Biography, Volume15. Oxford University Press, 2004
External links
Mechanization in Industry, Harry Jerome, 1934
Design for Industry, Volumes 48–49
Machinery & Production Engineering Volume 77, Issue 2
Aeroplane and Commercial Aviation News, Volume 97
Pneumatic tool manufacturers
Automotive tool manufacturers
Tool manufacturing companies of the United Kingdom
Power tool manufacturers
Manufacturing companies established in 1914
Manufacturing companies of France
Tool manufacturing companies of the United States
Industrial machine manufacturers
1914 establishments in England | Desoutter Tools | Engineering | 653 |
14,308 | https://en.wikipedia.org/wiki/Hoover%20Dam | Hoover Dam is a concrete arch-gravity dam in the Black Canyon of the Colorado River, on the border between the U.S. states of Nevada and Arizona. Constructed between 1931 and 1936, during the Great Depression, it was dedicated on September 30, 1935, by President Franklin D. Roosevelt. Its construction was the result of a massive effort involving thousands of workers, and cost over 100 lives. Bills passed by Congress during its construction referred to it as Hoover Dam (after President Herbert Hoover), but the Roosevelt administration named it Boulder Dam. In 1947, Congress restored the name Hoover Dam.
Since about 1900, the Black Canyon and nearby Boulder Canyon had been investigated for their potential to support a dam that would control floods, provide irrigation water, and produce hydroelectric power. In 1928, Congress authorized the project. The winning bid to build the dam was submitted by a consortium named Six Companies, Inc., which began construction in early 1931. Such a large concrete structure had never been built before, and some of the techniques used were unproven. The torrid summer weather and lack of facilities near the site also presented difficulties. Nevertheless, Six Companies turned the dam over to the federal government on March 1, 1936, more than two years ahead of schedule.
Hoover Dam impounds Lake Mead and is located near Boulder City, Nevada, a municipality originally constructed for workers on the construction project, southeast of Las Vegas, Nevada. The dam's generators provide power for public and private utilities in Nevada, Arizona, and California. Hoover Dam is a major tourist attraction, drawing 7 million tourists a year. The heavily traveled U.S. Route 93 (US 93) ran along the dam's crest until October 2010, when the Hoover Dam Bypass opened.
Background
Search for resources
As the United States developed the Southwest, the Colorado River was seen as a potential source of irrigation water. An initial attempt at diverting the river for irrigation purposes occurred in the late 1890s, when land speculator William Beatty built the Alamo Canal just north of the Mexican border; the canal dipped into Mexico before running to a desolate area Beatty named the Imperial Valley. Though water from the Alamo Canal allowed for the widespread settlement of the valley, the canal proved expensive to operate. After a catastrophic breach that caused the Colorado River to fill the Salton Sea, the Southern Pacific Railroad spent $3 million in 1906–07 to stabilize the waterway, an amount it hoped, in vain, to have reimbursed by the federal government. Even after the waterway was stabilized, it proved unsatisfactory because of constant disputes with landowners on the Mexican side of the border.
As the technology of electric power transmission improved, the Lower Colorado was considered for its hydroelectric-power potential. In 1902, the Edison Electric Company of Los Angeles surveyed the river in the hope of building a rock dam that could generate hydroelectric power. However, at the time, the distance over which electric power could be transmitted was limited, and there were few customers (mostly mines) within range. Edison allowed land options it held on the river to lapse, including an option for what became the site of Hoover Dam.
In the following years, the Bureau of Reclamation (BOR), known as the Reclamation Service at the time, also considered the Lower Colorado as the site for a dam. Service chief Arthur Powell Davis proposed using dynamite to collapse the walls of Boulder Canyon, north of the eventual dam site, into the river. The river would carry off the smaller pieces of debris, and a dam would be built incorporating the remaining rubble. In 1922, after considering it for several years, the Reclamation Service finally rejected the proposal, citing doubts about the unproven technique and questions as to whether it would, in fact, save money.
Planning and agreements
In 1922, the Reclamation Service presented a report calling for the development of a dam on the Colorado River for flood control and electric power generation. The report was principally authored by Davis and was called the Fall-Davis report after Interior Secretary Albert Fall. The Fall-Davis report cited use of the Colorado River as a federal concern because the river's basin covered several states, and the river eventually entered Mexico. Though the Fall-Davis report called for a dam "at or near Boulder Canyon", the Reclamation Service (which was renamed the Bureau of Reclamation the following year) found that canyon unsuitable. One potential site at Boulder Canyon was bisected by a geologic fault; two others were so narrow there was no space for a construction camp at the bottom of the canyon or for a spillway. The Service investigated Black Canyon and found it ideal; a railway could be laid from the railhead in Las Vegas to the top of the dam site. Despite the site change, the dam project was referred to as the "Boulder Canyon Project".
With little guidance on water allocation from the Supreme Court, proponents of the dam feared endless litigation. Delph Carpenter, a Colorado attorney, proposed that the seven states which fell within the river's basin (California, Nevada, Arizona, Utah, New Mexico, Colorado and Wyoming) form an interstate compact, with the approval of Congress. Such compacts were authorized by Article I of the United States Constitution but had never been concluded among more than two states. In 1922, representatives of seven states met with then-Secretary of Commerce Herbert Hoover. Initial talks produced no result, but when the Supreme Court handed down the Wyoming v. Colorado decision undermining the claims of the upstream states, they became anxious to reach an agreement. The resulting Colorado River Compact was signed on November 24, 1922.
Legislation to authorize the dam was introduced repeatedly by two California Republicans, Representative Phil Swing and Senator Hiram Johnson, but representatives from other parts of the country considered the project hugely expensive and one that would mostly benefit California. The 1927 Mississippi flood made Midwestern and Southern congressmen and senators more sympathetic toward the dam project. On March 12, 1928, the failure of the St. Francis Dam, constructed by the city of Los Angeles, caused a disastrous flood that killed up to 600 people. As that dam was a curved-gravity type, similar in design to the arch-gravity type proposed for the Black Canyon dam, opponents claimed that the Black Canyon dam's safety could not be guaranteed. Congress authorized a board of engineers to review plans for the proposed dam. The Colorado River Board found the project feasible, but warned that should the dam fail, every downstream Colorado River community would be destroyed, and that the river might change course and empty into the Salton Sea. The Board cautioned: "To avoid such possibilities, the proposed dam should be constructed on conservative if not ultra-conservative lines."
On December 21, 1928, President Coolidge signed the bill authorizing the dam. The Boulder Canyon Project Act appropriated $165 million for the project along with the downstream Imperial Dam and All-American Canal, a replacement for Beatty's canal entirely on the U.S. side of the border. It also permitted the compact to go into effect when at least six of the seven states approved it. This occurred on March 6, 1929, with Utah's ratification; Arizona did not approve it until 1944.
Design, preparation and contracting
Even before Congress approved the Boulder Canyon Project, the Bureau of Reclamation was considering what kind of dam should be used. Officials eventually decided on a massive concrete arch-gravity dam, the design of which was overseen by the Bureau's chief design engineer, John L. Savage. The monolithic dam would be thick at the bottom and thin near the top, and would present a convex face towards the water above the dam. The curving arch of the dam would transmit the water's force into the abutments, in this case the rock walls of the canyon. The wedge-shaped dam would narrow as it rose, its crest leaving room for a highway connecting Nevada and Arizona.
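The structural logic follows from hydrostatics: water pressure grows linearly with depth. Taking a round water depth of 180 m behind the dam (an assumed figure, for illustration only),

$$p = \rho g h \approx 1000\ \mathrm{kg/m^3} \times 9.81\ \mathrm{m/s^2} \times 180\ \mathrm{m} \approx 1.8\ \mathrm{MPa},$$

roughly 17 atmospheres at the base, which is why a gravity section must be massive near the bottom while the arch sheds part of the load into the canyon walls.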
On January 10, 1931, the Bureau made the bid documents available to interested parties, at five dollars a copy. The government was to provide the materials, and the contractor was to prepare the site and build the dam. The dam was described in minute detail, covering 100 pages of text and 76 drawings. A $2 million bid bond was to accompany each bid; the winner would have to post a $5 million performance bond. The contractor had seven years to build the dam, or penalties would ensue.
The Wattis Brothers, heads of the Utah Construction Company, were interested in bidding on the project, but lacked the money for the performance bond. They lacked sufficient resources even in combination with their longtime partners, Morrison-Knudsen, which employed the nation's leading dam builder, Frank Crowe. They formed a joint venture to bid for the project with Pacific Bridge Company of Portland, Oregon; Henry J. Kaiser & W. A. Bechtel Company of San Francisco; MacDonald & Kahn Ltd. of Los Angeles; and the J.F. Shea Company of Portland, Oregon. The joint venture was called Six Companies, Inc., as Bechtel and Kaiser were counted as one company for the purposes of the "Six" in the name. The name was descriptive and was an inside joke among the San Franciscans in the bid, as "Six Companies" was also a Chinese benevolent association in the city. There were three valid bids, and Six Companies' bid of $48,890,955 was the lowest, within $24,000 of the confidential government estimate of what the dam would cost to build, and five million dollars less than the next-lowest bid.
The city of Las Vegas had lobbied hard to be the headquarters for the dam construction, closing its many speakeasies when the decision maker, Secretary of the Interior Ray Wilbur, came to town. Instead, Wilbur announced in early 1930 that a model city was to be built in the desert near the dam site. This town became known as Boulder City, Nevada. Construction of a rail line joining Las Vegas and the dam site began in September 1930.
Construction
Labor force
Soon after the dam was authorized, increasing numbers of unemployed people converged on southern Nevada. Las Vegas, then a small city of some 5,000, saw between 10,000 and 20,000 unemployed descend on it. A government camp was established for surveyors and other personnel near the dam site; this soon became surrounded by a squatters' camp. Known as McKeeversville, the camp was home to men hoping for work on the project, together with their families. Another camp, on the flats along the Colorado River, was officially called Williamsville, but was known to its inhabitants as "Ragtown". When construction began, Six Companies hired large numbers of workers, with more than 3,000 on the payroll by 1932 and with employment peaking at 5,251 in July 1934. "Mongolian" (Chinese) labor was barred by the construction contract, while the number of black people employed by Six Companies never exceeded thirty, mostly lowest-pay-scale laborers in a segregated crew who were issued separate water buckets.
As part of the contract, Six Companies, Inc. was to build Boulder City to house the workers. The original timetable called for Boulder City to be built before the dam project began, but President Hoover ordered work on the dam to begin in March 1931 rather than in October. The company built bunkhouses, attached to the canyon wall, to house 480 single men at what became known as River Camp. Workers with families were left to provide their own accommodations until Boulder City could be completed, and many lived in Ragtown. The site of Hoover Dam endures extremely hot weather, and the summer of 1931 was especially torrid. Sixteen workers and other riverbank residents died of heat prostration between June 25 and July 26, 1931.
The Industrial Workers of the World (IWW or "Wobblies"), though much-reduced from their heyday as militant labor organizers in the early years of the century, hoped to unionize the Six Companies workers by capitalizing on their discontent. They sent eleven organizers, several of whom were arrested by Las Vegas police. On August 7, 1931, the company cut wages for all tunnel workers. Although the workers sent the organizers away, not wanting to be associated with the "Wobblies", they formed a committee to represent them with the company. The committee drew up a list of demands that evening and presented them to Crowe the following morning. He was noncommittal. The workers hoped that Crowe, the general superintendent of the job, would be sympathetic; instead, he gave a scathing interview to a newspaper, describing the workers as "malcontents".
On the morning of the 9th, Crowe met with the committee and told them that management refused their demands, was stopping all work, and was laying off the entire work force, except for a few office workers and carpenters. The workers were given until 5 p.m. to vacate the premises. Concerned that a violent confrontation was imminent, most workers took their paychecks and left for Las Vegas to await developments. Two days later, the remainder were talked into leaving by law enforcement. On August 13, the company began hiring workers again, and two days later, the strike was called off. While the workers received none of their demands, the company guaranteed there would be no further reductions in wages. Living conditions began to improve as the first residents moved into Boulder City in late 1931.
A second labor action took place in July 1935, as construction on the dam wound down. When a Six Companies manager altered working times to force workers to take lunch on their own time, workers responded with a strike. Emboldened by Crowe's reversal of the lunch decree, workers raised their demands to include a $1-per-day raise. The company agreed to ask the Federal government to supplement the pay, but no money was forthcoming from Washington. The strike ended.
River diversion
Before the dam could be built, the Colorado River needed to be diverted away from the construction site. To accomplish this, four diversion tunnels were driven through the canyon walls, two on the Nevada side and two on the Arizona side. These tunnels were 56 ft in diameter, and their combined length was nearly 16,000 ft, more than three miles. The contract required these tunnels to be completed by October 1, 1933, with a $3,000-per-day fine to be assessed for any delay. To meet the deadline, Six Companies had to complete work by early 1933, since only in late fall and winter was the water level in the river low enough to safely divert.
Tunneling began at the lower portals of the Nevada tunnels in May 1931. Shortly afterward, work began on two similar tunnels in the Arizona canyon wall. In March 1932, work began on lining the tunnels with concrete. First the base, or invert, was poured. Gantry cranes, running on rails through the entire length of each tunnel, were used to place the concrete. The sidewalls were poured next, using movable sections of steel forms. Finally, using pneumatic guns, the overheads were filled in. The concrete lining is 3 ft thick, reducing the finished tunnel diameter to 50 ft. The river was diverted into the two Arizona tunnels on November 13, 1932; the Nevada tunnels were kept in reserve for high water. This was done by exploding a temporary cofferdam protecting the Arizona tunnels while at the same time dumping rubble into the river until its natural course was blocked.
Following the completion of the dam, the entrances to the two outer diversion tunnels were sealed at the opening and halfway through the tunnels with large concrete plugs. The downstream halves of the tunnels following the inner plugs are now the main bodies of the spillway tunnels. The inner diversion tunnels were plugged at approximately one-third of their length, beyond which they now carry steel pipes connecting the intake towers to the power plant and outlet works. The inner tunnels' outlets are equipped with gates that can be closed to drain the tunnels for maintenance.
Groundworks, rock clearance and grout curtain
To protect the construction site from the Colorado River and to facilitate the river's diversion, two cofferdams were constructed. Work on the upper cofferdam began in September 1932, even though the river had not yet been diverted. The cofferdams were designed to protect against the possibility of the river's flooding a site at which two thousand men might be at work, and their specifications were covered in the bid documents in nearly as much detail as the dam itself. The upper cofferdam was 96 ft (29 m) high and 750 ft (230 m) thick at its base, thicker than the dam itself, and contained some 650,000 cubic yards (500,000 m3) of material.
When the cofferdams were in place and the construction site had been drained of water, excavation for the dam foundation began. For the dam to rest on solid rock, it was necessary to remove accumulated erosion soils and other loose materials in the riverbed until sound bedrock was reached. Work on the foundation excavations was completed in June 1933; approximately 1,500,000 cubic yards (1,100,000 m3) of material was removed. Since the dam was an arch-gravity type, the side-walls of the canyon would bear the force of the impounded lake. Therefore, the side-walls were also excavated to reach virgin rock, as weathered rock might provide pathways for water seepage. Shovels for the excavation came from the Marion Power Shovel Company.
The men who removed this rock were called "high scalers". While suspended from the top of the canyon with ropes, the high scalers climbed down the canyon walls and removed the loose rock with jackhammers and dynamite. Falling objects were the most common cause of death on the dam site; by stripping away loose rock, the high scalers' work helped reduce that hazard. One high scaler was able to save a life in a more direct manner: when a government inspector lost his grip on a safety line and began tumbling down a slope towards almost certain death, a high scaler intercepted him and pulled him into the air. The construction site had become a magnet for tourists, and the high scalers were prime attractions who showed off for the watchers. The high scalers received considerable media attention, with one worker dubbed the "Human Pendulum" for swinging co-workers (and, at other times, cases of dynamite) across the canyon. To protect themselves against falling objects, some high scalers dipped cloth hats in tar and allowed them to harden. When workers wearing such headgear were struck hard enough to inflict broken jaws, they sustained no skull damage. Six Companies ordered thousands of what initially were called "hard boiled hats" (later "hard hats") and strongly encouraged their use.
The cleared, underlying rock foundation of the dam site was reinforced with grout, forming a grout curtain. Holes were driven into the walls and base of the canyon, as deep as 150 feet (46 m) into the rock, and any cavities encountered were to be filled with grout. This was done to stabilize the rock, to prevent water from seeping past the dam through the canyon rock, and to limit "uplift", the upward pressure from water seeping under the dam. The workers were under severe time constraints due to the beginning of the concrete pour, and when they encountered hot springs or cavities too large to readily fill, they moved on without resolving the problem. A total of 58 of the 393 holes were incompletely filled. After the dam was completed and the lake began to fill, large numbers of significant leaks prompted the Bureau of Reclamation to examine the situation. It found that the work had been incompletely done, and had been based on less than a full understanding of the canyon's geology. New holes were drilled from inspection galleries inside the dam into the surrounding bedrock, and it took nine years (1938–47), under relative secrecy, to complete the supplemental grout curtain.
Concrete
The first concrete was poured into the dam on June 6, 1933, 18 months ahead of schedule. Since concrete heats as it cures and contracts as it cools, the potential for uneven cooling and contraction posed a serious problem. Bureau of Reclamation engineers calculated that if the dam were built in a single continuous pour, the concrete would take 125 years to cool, and the resulting stresses would cause the dam to crack and crumble. Instead, the ground where the dam would rise was marked with rectangles, and concrete blocks in columns were poured, some as large as 50 ft (15 m) square and 5 ft (1.5 m) high. Each five-foot form contained a set of steel pipes; cool river water was first circulated through the pipes, followed by ice-cold water from a refrigeration plant. When an individual block had cured and had stopped contracting, the pipes were filled with grout. Grout was also used to fill the hairline spaces between columns, which were grooved to increase the strength of the joints.
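The century-scale figure can be sanity-checked with the standard conductive-cooling estimate t ~ L²/α. The sketch below is a back-of-the-envelope illustration only; the diffusivity and the effective length scale are assumed round numbers, not the Bureau's actual inputs.

```python
# Order-of-magnitude check on the single-pour cooling time: t ~ L^2 / alpha.
# Both numbers below are assumed illustrative values, not Reclamation data.
alpha = 1.0e-6                 # thermal diffusivity of concrete, m^2/s (typical)
L = 65.0                       # assumed effective diffusion length of the pour, m
t_seconds = L ** 2 / alpha
print(t_seconds / 3.156e7, "years")   # ~130 years, hence the embedded cooling pipes
```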
The concrete was delivered in huge steel buckets 7 ft (2.1 m) high and almost 7 feet in diameter; Crowe was awarded two patents for their design. These buckets, which weighed 20 short tons (18 t) when full, were filled at two massive concrete plants on the Nevada side and were delivered to the site in special railcars. The buckets were then suspended from aerial cableways, which were used to deliver each bucket to a specific column. As the required grade of aggregate in the concrete differed depending on placement in the dam (from pea-sized gravel to 9-inch or 23 cm stones), it was vital that the bucket be maneuvered to the proper column. When the bottom of the bucket opened up, disgorging 8 cu yd (6.1 m3) of concrete, a team of men worked it throughout the form. Although there are myths that men were caught in the pour and are entombed in the dam to this day, each bucket deepened the concrete in a form by only an inch, and Six Companies engineers would not have permitted a flaw caused by the presence of a human body.
A total of 3,250,000 cubic yards (2,480,000 m3) of concrete was used in the dam before pouring ceased on May 29, 1935, with more concrete used in the power plant and other works. More than 582 miles (937 km) of cooling pipes were placed within the concrete. Overall, there is enough concrete in the dam to pave a two-lane highway from San Francisco to New York. Concrete cores were removed from the dam for testing in 1995; they showed that "Hoover Dam's concrete has continued to slowly gain strength" and the dam is composed of a "durable concrete having a compressive strength exceeding the range typically found in normal mass concrete". Hoover Dam concrete is not subject to alkali–silica reaction (ASR), as the Hoover Dam builders happened to use nonreactive aggregate, unlike that at downstream Parker Dam, where ASR has caused measurable deterioration.
Dedication and completion
With most work finished on the dam itself (the powerhouse remained uncompleted), a formal dedication ceremony was arranged for September 30, 1935, to coincide with a western tour being made by President Franklin D. Roosevelt. On the morning of the dedication, the ceremony was moved forward three hours, from 2 p.m. Pacific time to 11 a.m., because Secretary of the Interior Harold L. Ickes had reserved a radio slot for the President for 2 p.m., and officials did not realize until the day of the ceremony that the slot was for 2 p.m. Eastern time. Despite the change in the ceremony time and temperatures of 102 °F (39 °C), 10,000 people were present for the President's speech, in which he avoided mentioning the name of former President Hoover, who was not invited to the ceremony. To mark the occasion, a three-cent stamp was issued by the United States Post Office Department, bearing the name "Boulder Dam", the official name of the dam between 1933 and 1947. After the ceremony, Roosevelt made the first visit by any American president to Las Vegas.
Most work had been completed by the dedication, and Six Companies negotiated with the government through late 1935 and early 1936 to settle all claims and arrange for the formal transfer of the dam to the Federal Government. The parties came to an agreement and on March 1, 1936, Secretary Ickes formally accepted the dam on behalf of the government. Six Companies was not required to complete work on one item, a concrete plug for one of the bypass tunnels, as the tunnel had to be used to take in irrigation water until the powerhouse went into operation.
Construction deaths
There were 112 deaths reported as associated with the construction of the dam. The first was Bureau of Reclamation employee Harold Connelly, who died on May 15, 1921, after falling from a barge while surveying the Colorado River for an ideal spot for the dam. The second was surveyor John Gregory ("J.G.") Tierney, who drowned on December 20, 1922, in a flash flood while looking for an ideal spot for the dam. The official list's final death occurred on December 20, 1935, when Patrick Tierney, electrician's helper and the son of J.G. Tierney, fell from one of the two Arizona-side intake towers. Included in the fatality list are three workers who took their own lives on site, one in 1932 and two in 1933. Of the 112 fatalities, 91 were Six Companies employees, three were Bureau of Reclamation employees, and one was a visitor to the site; the remainder were employees of various contractors not part of Six Companies.
Ninety-six of the deaths occurred during construction at the site. Not included in the official number of fatalities were deaths recorded as pneumonia. Workers alleged that this diagnosis was a cover for death from carbon monoxide poisoning (brought on by the use of gasoline-fueled vehicles in the diversion tunnels) and a classification used by Six Companies to avoid paying compensation claims. The site's diversion tunnels frequently reached 140 °F (60 °C) and were enveloped in thick plumes of vehicle exhaust gases. A total of 42 workers were recorded as having died from pneumonia and were not included in the above total; none were listed as having died from carbon monoxide poisoning. No deaths of non-workers from pneumonia were recorded in Boulder City during the construction period.
Architectural style
The initial plans for the facade of the dam, the power plant, the outlet tunnels and ornaments clashed with the modern look of an arch dam. The Bureau of Reclamation, more concerned with the dam's functionality, adorned it with a Gothic-inspired balustrade and eagle statues. This initial design was criticized by many as being too plain and unremarkable for a project of such immense scale, so Los Angeles-based architect Gordon B. Kaufmann, then the supervising architect to the Bureau of Reclamation, was brought in to redesign the exteriors. Kaufmann greatly streamlined the design and applied an elegant Art Deco style to the entire project. He designed sculpted turrets rising seamlessly from the dam face and clock faces on the intake towers set for the time in Nevada and Arizona—both states are in different time zones, but since Arizona does not observe daylight saving time, the clocks display the same time for more than half the year.
At Kaufmann's request, Denver artist Allen Tupper True was hired to handle the design and decoration of the walls and floors of the new dam. True's design scheme incorporated motifs of the Navajo and Pueblo tribes of the region. Although some were initially opposed to these designs, True was given the go-ahead and was officially appointed consulting artist. With the assistance of the National Laboratory of Anthropology, True researched authentic decorative motifs from Indian sand paintings, textiles, baskets and ceramics. The images and colors are based on Native American visions of rain, lightning, water, clouds, and local animals—lizards, serpents, birds—and on the Southwestern landscape of stepped mesas. In these works, which are integrated into the walkways and interior halls of the dam, True also reflected on the machinery of the operation, making the symbolic patterns appear both ancient and modern.
With the agreement of Kaufmann and the engineers, True also devised for the pipes and machinery an innovative color-coding which was implemented throughout all BOR projects. True's consulting artist job lasted through 1942; it was extended so he could complete design work for the Parker, Shasta and Grand Coulee dams and power plants. True's work on the Hoover Dam was humorously referred to in a poem published in The New Yorker, part of which read, "lose the spark, and justify the dream; but also worthy of remark will be the color scheme".
Complementing Kaufmann and True's work, sculptor Oskar J. W. Hansen designed many of the sculptures on and around the dam. His works include the monument of dedication plaza, a plaque to memorialize the workers killed and the bas-reliefs on the elevator towers. In his words, Hansen wanted his work to express "the immutable calm of intellectual resolution, and the enormous power of trained physical strength, equally enthroned in placid triumph of scientific accomplishment", because "[t]he building of Hoover Dam belongs to the sagas of the daring." Hansen's dedication plaza, on the Nevada abutment, contains a sculpture of two winged figures flanking a flagpole.
Surrounding the base of the monument is a terrazzo floor embedded with a "star map". The map depicts the Northern Hemisphere sky at the moment of President Roosevelt's dedication of the dam. This is intended to help future astronomers, if necessary, calculate the exact date of dedication. The bronze figures, dubbed Winged Figures of the Republic, were both formed in a continuous pour. To put such large bronzes into place without marring the highly polished bronze surface, they were placed on ice and guided into position as the ice melted. Hansen's bas-relief on the Nevada elevator tower depicts the benefits of the dam: flood control, navigation, irrigation, water storage, and power. The bas-relief on the Arizona elevator depicts, in his words, "the visages of those Indian tribes who have inhabited mountains and plains from ages distant."
Operation
Power plant and water demands
Excavation for the powerhouse was carried out simultaneously with the excavation for the dam foundation and abutments. The excavation of this U-shaped structure, located at the downstream toe of the dam, was completed in late 1933, with the first concrete placed in November 1933. Filling of Lake Mead began February 1, 1935, even before the last of the concrete was poured that May. The powerhouse was one of the projects uncompleted at the time of the formal dedication on September 30, 1935; a crew of 500 men remained to finish it and other structures. To make the powerhouse roof bombproof, it was constructed of layers of concrete, rock, and steel with a total thickness of about 3.5 ft (1.1 m), topped with layers of sand and tar.
In the latter half of 1936, water levels in Lake Mead were high enough to permit power generation, and the first three Allis-Chalmers-built Francis turbine-generators, all on the Nevada side, began operating. In March 1937, one more Nevada generator went online, and the first Arizona generator followed by August. By September 1939, four more generators were operating, and the dam's power plant became the largest hydroelectric facility in the world. The final generator was not placed in service until 1961, bringing the maximum generating capacity to 1,345 megawatts at the time. Original plans called for 16 large generators, eight on each side of the river, but two smaller generators were installed instead of one large one on the Arizona side, for a total of 17. The smaller generators were used to serve smaller communities at a time when the output of each generator was dedicated to a single municipality, before the dam's total power output was placed on the grid and made arbitrarily distributable.
Before water from Lake Mead reaches the turbines, it enters the intake towers and then four gradually narrowing penstocks which funnel the water down towards the powerhouse. The intakes provide a maximum hydraulic head (water pressure) of 590 ft (180 m) as the water reaches a speed of about 85 mph (140 km/h). The entire flow of the Colorado River usually passes through the turbines. The spillways and outlet works (jet-flow gates) are rarely used. The jet-flow gates, located in concrete structures above the river and also at the outlets of the inner diversion tunnels at river level, may be used to divert water around the dam in emergency or flood conditions, but have never done so, and in practice are used only to drain water from the penstocks for maintenance. Following an uprating project from 1986 to 1993, the total gross power rating for the plant, including two 2.4 megawatt Pelton turbine-generators that power Hoover Dam's own operations, is a maximum capacity of 2,080 megawatts. The annual generation of Hoover Dam varies: the maximum net generation was 10.348 TWh in 1984, and the minimum since 1940 was 2.648 TWh in 1956. The average power generated was 4.2 TWh/year for 1947–2008. In 2015, the dam generated 3.6 TWh.
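The relationship between head, flow, and output can be illustrated with the standard hydropower formula P = η·ρ·g·Q·H. The sketch below uses assumed round numbers for efficiency, head, and flow, not official Reclamation figures.

```python
# Back-of-the-envelope hydropower estimate: P = eta * rho * g * Q * H.
# Efficiency, head, and flow below are assumed round numbers, not official data.
eta = 0.90       # assumed overall turbine/generator efficiency
rho = 1000.0     # density of water, kg/m^3
g = 9.81         # gravitational acceleration, m/s^2
H = 180.0        # assumed hydraulic head, m (the order of the intake head)
Q = 1300.0       # assumed total flow through the turbines, m^3/s
P = eta * rho * g * Q * H          # power, watts
print(f"{P / 1e6:.0f} MW")         # ~2070 MW, the order of the plant's 2,080 MW rating
```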
The amount of electricity generated by Hoover Dam has been decreasing along with the falling water level in Lake Mead, a result of the prolonged drought since 2000 and high demand for the Colorado River's water. By 2014 its generating capacity was downrated by 23% to 1,592 MW, and it was providing power only during periods of peak demand. Lake Mead fell to a record low elevation of 1,071.61 ft (326.63 m) on July 1, 2016, before beginning to rebound slowly. Under its original design, the dam would no longer be able to generate power once the water level fell below 1,050 ft (320 m), which might have occurred in 2017 had water restrictions not been enforced. To lower the minimum power pool elevation from 1,050 to 950 ft (320 to 290 m), five wide-head turbines, designed to work efficiently with less flow, were installed. Water levels were maintained well above the minimum in 2018 and 2019, but fell to a new record low on June 10, 2021, and were projected to fall further by the end of 2021.
Control of water was the primary concern in the building of the dam. Power generation has allowed the dam project to be self-sustaining: proceeds from the sale of power repaid the 50-year construction loan, and those revenues also finance the multimillion-dollar yearly maintenance budget. Power is generated in step with and only with the release of water in response to downstream water demands.
Lake Mead and downstream releases from the dam also provide water for both municipal and irrigation uses. Water released from Hoover Dam eventually reaches several canals: the Colorado River Aqueduct and the Central Arizona Project branch off Lake Havasu, while the All-American Canal is supplied by Imperial Dam. In total, water from Lake Mead serves 18 million people in Arizona, Nevada, and California and irrigates over 1,000,000 acres (400,000 ha) of land.
In 2018, the Los Angeles Department of Water and Power (LADWP) proposed a $3 billion pumped-storage hydroelectricity project—a "battery" of sorts—that would use wind and solar power to recirculate water back up to Lake Mead from a pumping station downriver.
Power distribution
Electricity from the dam's powerhouse was originally sold pursuant to a fifty-year contract, authorized by Congress in 1934, which ran from 1937 to 1987. In 1984, Congress passed a new statute which set power allocations to southern California, Arizona, and Nevada from the dam from 1987 to 2017. The powerhouse was run under the original authorization by the Los Angeles Department of Water and Power and Southern California Edison; in 1987, the Bureau of Reclamation assumed control. In 2011, Congress enacted legislation extending the current contracts until 2067, after setting aside 5% of Hoover Dam's power for sale to Native American tribes, electric cooperatives, and other entities. The new arrangement began on October 1, 2017.
The Bureau of Reclamation reports that the energy generated under the contracts ending in 2017 was allocated among utilities in Arizona, Nevada, and southern California.
Spillways
The dam is protected against over-topping by two spillways. The spillway entrances are located behind each dam abutment, running roughly parallel to the canyon walls. The spillway entrance arrangement forms a classic side-flow weir, with each spillway containing four 100-foot-long (30 m), 16-foot-wide (4.9 m) steel-drum gates. Each gate weighs 5,000,000 pounds (2,300 t) and can be operated manually or automatically. Gates are raised and lowered depending on water levels in the reservoir and flood conditions. The gates cannot entirely prevent water from entering the spillways but can maintain an extra 16 ft (4.9 m) of lake level.
Water flowing over the spillways falls dramatically into 600-foot-long (180 m), 50-foot-wide (15 m) spillway tunnels before connecting to the outer diversion tunnels and reentering the main river channel below the dam. This complex spillway entrance arrangement, combined with the roughly 700-foot (210 m) elevation drop from the top of the reservoir to the river below, posed a difficult engineering problem and numerous design challenges. Each spillway's capacity of 200,000 cu ft/s (5,700 m3/s) was empirically verified in post-construction tests in 1941.
The large spillway tunnels have only been used twice, for testing in 1941 and because of flooding in 1983. Both times, when inspecting the tunnels after the spillways were used, engineers found major damage to the concrete linings and underlying rock. The 1941 damage was attributed to a slight misalignment of the tunnel invert (or base), which caused cavitation, a phenomenon in fast-flowing liquids in which vapor bubbles collapse with explosive force. In response to this finding, the tunnels were patched with special heavy-duty concrete and the surface of the concrete was polished mirror-smooth. The spillways were modified in 1947 by adding flip buckets, which both slow the water and decrease the spillway's effective capacity, in an attempt to eliminate conditions thought to have contributed to the 1941 damage. The 1983 damage, also due to cavitation, led to the installation of aerators in the spillways. Tests at Grand Coulee Dam showed that the technique worked, in principle.
Roadway and tourism
There are two lanes for automobile traffic across the top of the dam, which formerly served as the Colorado River crossing for U.S. Route 93. In the wake of the September 11 terrorist attacks, authorities expressed security concerns and the Hoover Dam Bypass project was expedited. Pending the completion of the bypass, restricted traffic was permitted over Hoover Dam: some types of vehicles were inspected prior to crossing, while semi-trailer trucks, buses carrying luggage, and enclosed-box trucks over 40 ft (12 m) long were not allowed on the dam at all and were diverted to U.S. Route 95 or Nevada State Routes 163/68. The four-lane Hoover Dam Bypass opened on October 19, 2010. It includes a composite steel and concrete arch bridge, the Mike O'Callaghan–Pat Tillman Memorial Bridge, located downstream from the dam.
With the opening of the bypass, through traffic is no longer allowed across Hoover Dam; dam visitors are allowed to use the existing roadway to approach from the Nevada side and cross to parking lots and other facilities on the Arizona side.
Hoover Dam opened for tours in 1937 after its completion, but following Japan's attack on Pearl Harbor on December 7, 1941, it was closed to the public when the United States entered World War II, during which time only authorized traffic, in convoys, was permitted. After the war, it reopened on September 2, 1945, and by 1953, annual attendance had risen to 448,081. The dam closed on November 25, 1963, and March 31, 1969, days of mourning in remembrance of Presidents Kennedy and Eisenhower. In 1995, a new visitors' center was built, and the following year, visits exceeded one million for the first time. The dam closed again to the public on September 11, 2001; modified tours were resumed in December, and a new "Discovery Tour" was added the following year. Today, nearly a million people per year take the tours of the dam offered by the Bureau of Reclamation. Increased security concerns have led the government to exclude visitors from most of the interior structures; as a result, few of True's decorations can now be seen. Visitors can only purchase tickets on-site and may choose between a guided tour of the whole facility and one covering the power plant area only. The only self-guided option is the visitor center itself, where visitors can view various exhibits and enjoy a 360-degree view of the dam.
Environmental impact
The changes in water flow and use caused by Hoover Dam's construction and operation have had a large impact on the Colorado River Delta. The construction of the dam has been implicated in causing the decline of this estuarine ecosystem. For six years after the construction of the dam, while Lake Mead filled, virtually no water reached the mouth of the river. The delta's estuary, which once had a freshwater-saltwater mixing zone stretching south of the river's mouth, was turned into an inverse estuary where the level of salinity was higher close to the river's mouth.
The Colorado River experienced natural seasonal flooding before the construction of Hoover Dam. By eliminating this flooding, the dam threatened many species adapted to it, both plants and animals. The construction of the dam devastated the populations of native fish in the river downstream. Four species of fish native to the Colorado River (the bonytail chub, Colorado pikeminnow, humpback chub, and razorback sucker) are listed as endangered.
Naming controversy
During the years of lobbying leading up to the passage of legislation authorizing the dam in 1928, the press generally referred to the dam as "Boulder Dam" or as "Boulder Canyon Dam", even though the proposed site had shifted to Black Canyon. The Boulder Canyon Project Act of 1928 (BCPA) never mentioned a proposed name or title for the dam. The BCPA merely allows the government to "construct, operate, and maintain a dam and incidental works in the main stream of the Colorado River at Black Canyon or Boulder Canyon".
When Secretary of the Interior Ray Wilbur spoke at the ceremony starting the building of the railway between Las Vegas and the dam site on September 17, 1930, he named the dam "Hoover Dam", citing a tradition of naming dams after Presidents, though none had been so honored during their terms of office. Wilbur justified his choice on the ground that Hoover was "the great engineer whose vision and persistence ... has done so much to make [the dam] possible". One writer complained in response that "the Great Engineer had quickly drained, ditched, and dammed the country."
After Hoover's election defeat in 1932 and the accession of the Roosevelt administration, Secretary Ickes ordered on May 13, 1933, that the dam be referred to as Boulder Dam. Ickes stated that Wilbur had been imprudent in naming the dam after a sitting president, that Congress had never ratified his choice, and that it had long been referred to as Boulder Dam. Unknown to the general public, Attorney General Homer Cummings informed Ickes that Congress had indeed used the name "Hoover Dam" in five different bills appropriating money for construction of the dam. The official status this conferred on the name "Hoover Dam" had been noted on the floor of the House of Representatives by Congressman Edward T. Taylor of Colorado on December 12, 1930, but was likewise ignored by Ickes.
When Ickes spoke at the dedication ceremony on September 30, 1935, he was determined, as he recorded in his diary, "to try to nail down for good and all the name Boulder Dam." At one point in the speech, he spoke the words "Boulder Dam" five times within thirty seconds. Further, he suggested that if the dam were to be named after any one person, it should be for California Senator Hiram Johnson, a lead sponsor of the authorizing legislation. Roosevelt also referred to the dam as Boulder Dam, and the Republican-leaning Los Angeles Times, which at the time of Ickes' name change had run an editorial cartoon showing Ickes ineffectively chipping away at an enormous sign "HOOVER DAM", reran it showing Roosevelt reinforcing Ickes, but having no greater success.
In the following years, the name "Boulder Dam" failed to fully take hold, with many Americans using both names interchangeably and mapmakers divided as to which name should be printed. Memories of the Great Depression faded, and Hoover to some extent rehabilitated himself through good works during and after World War II. In 1947, a bill passed both Houses of Congress unanimously restoring the name "Hoover Dam." Ickes, who was by then a private citizen, opposed the change, stating, "I didn't know Hoover was that small a man to take credit for something he had nothing to do with."
Recognition
Hoover Dam was recognized as a National Historic Civil Engineering Landmark in 1984. It was listed on the National Register of Historic Places in 1981 and was designated a National Historic Landmark in 1985, cited for its engineering innovations.
See also
Ralph Luther Criswell, lobbyist on behalf of the dam
Glen Canyon Dam
Hoover Dam Police
List of dams in the Colorado River system
List of largest hydroelectric power stations
List of largest hydroelectric power stations in the United States
List of National Historic Landmarks in Arizona
List of National Historic Landmarks in Nevada
St. Thomas, Nevada, ghost town with site now under Lake Mead.
Water in California
Hoover Dam in popular culture
Citations
Bibliography
Cited works
Other sources
Arrigo, Anthony F. (2014). Imaging Hoover Dam: The Making of a Cultural Icon. Reno, NV: University of Nevada Press.
External links
Hoover Dam – Visitors Site
Historic Construction Company Project – Hoover Dam
Hoover Dam – An American Experience Documentary
Boulder City/Hoover Dam Museum official site
1936 establishments in Arizona
1936 establishments in Nevada
Arch-gravity dams
Art Deco architecture in Arizona
Presidential memorials in the United States
Buildings and structures in Clark County, Nevada
Buildings and structures in Mohave County, Arizona
Dams completed in 1936
Dams in Arizona
Dams in Nevada
Dams on the Colorado River
Dams on the National Register of Historic Places in Nevada
Energy infrastructure completed in 1939
Energy infrastructure on the National Register of Historic Places
Engineering projects
Historic American Engineering Record in Arizona
Historic American Engineering Record in Nevada
Historic Civil Engineering Landmarks
Hydroelectric power plants in Arizona
Hydroelectric power plants in Nevada
Industrial buildings and structures on the National Register of Historic Places in Arizona
Lake Mead National Recreation Area
Lake Mead
Naming controversies
National Historic Landmarks in Arizona
National Historic Landmarks in Nevada
National Register of Historic Places in Mohave County, Arizona
PWA Moderne architecture
Tourist attractions in Clark County, Nevada
Tourist attractions in Mohave County, Arizona
U.S. Route 93
United States Bureau of Reclamation dams | Hoover Dam | Engineering | 9,404 |
307,155 | https://en.wikipedia.org/wiki/Primitive%20equations | The primitive equations are a set of nonlinear partial differential equations that are used to approximate global atmospheric flow and are used in most atmospheric models. They consist of three main sets of balance equations:
A continuity equation: Representing the conservation of mass.
Conservation of momentum: Consisting of a form of the Navier–Stokes equations that describe hydrodynamical flow on the surface of a sphere under the assumption that vertical motion is much smaller than horizontal motion (hydrostasis) and that the fluid layer depth is small compared to the radius of the sphere
A thermal energy equation: Relating the overall temperature of the system to heat sources and sinks
The primitive equations may be linearized to yield Laplace's tidal equations, an eigenvalue problem from which the analytical solution to the latitudinal structure of the flow may be determined.
In general, nearly all forms of the primitive equations relate the five variables u, v, ω, T, W, and their evolution over space and time.
The equations were first written down by Vilhelm Bjerknes.
Definitions
$u$ is the zonal velocity (velocity in the east–west direction tangent to the sphere)
$v$ is the meridional velocity (velocity in the north–south direction tangent to the sphere)
$\omega$ is the vertical velocity in isobaric coordinates
$T$ is the temperature
$\Phi$ is the geopotential
$f$ is the term corresponding to the Coriolis force, and is equal to $2\Omega\sin\varphi$, where $\Omega$ is the angular rotation rate of the Earth ($\pi/12$ radians per sidereal hour) and $\varphi$ is the latitude
$R$ is the gas constant
$p$ is the pressure
$\rho$ is the density
$c_p$ is the specific heat on a constant pressure surface
$J$ is the heat flow per unit time per unit mass
$W$ is the precipitable water
$\pi$ is the Exner function
$\theta$ is the potential temperature
$\eta$ is the absolute vorticity
Forces that cause atmospheric motion
Forces that cause atmospheric motion include the pressure gradient force, gravity, and viscous friction. Together, these are the forces that accelerate the atmosphere.
The pressure gradient force causes an acceleration forcing air from regions of high pressure to regions of low pressure. Mathematically, this can be written per unit mass as:
$$\frac{\vec{F}}{m} = -\frac{\nabla p}{\rho}$$
The gravitational force accelerates objects at approximately 9.8 m/s2 directly towards the center of the Earth.
The force per unit mass due to viscous friction can be approximated as:
$$\frac{\vec{F}}{m} \approx \nu \nabla^{2}\vec{v},$$
where $\nu$ is the kinematic viscosity.
Using Newton's second law, these forces (referenced in the equations above as the accelerations due to these forces) may be summed to produce an equation of motion that describes this system. This equation can be written in the form:
$$\frac{D\vec{v}}{Dt} = -\frac{\nabla p}{\rho} - g\hat{k} + \nu \nabla^{2}\vec{v}$$
Therefore, to complete the system of equations and obtain 6 equations and 6 variables, the momentum equations are supplemented by the continuity equation, the thermodynamic energy equation, and the ideal gas law $p = nT$, where $n$ is the number density in mol and $T := RT$ is the temperature-equivalent value in joules per mol.
Forms of the primitive equations
The precise form of the primitive equations depends on the vertical coordinate system chosen, such as pressure coordinates, log pressure coordinates, or sigma coordinates. Furthermore, the velocity, temperature, and geopotential variables may be decomposed into mean and perturbation components using Reynolds decomposition.
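As a concrete illustration of Reynolds decomposition, the sketch below splits a zonal-wind field into its zonal mean and perturbation parts; the grid shape and values are assumed purely for illustration.

```python
import numpy as np

# Reynolds decomposition of a field: u = u_bar + u_prime.
# The (latitude, longitude) grid and values below are assumed for illustration.
rng = np.random.default_rng(0)
u = 10.0 + 2.0 * rng.standard_normal((64, 128))   # zonal wind, m/s
u_bar = u.mean(axis=1, keepdims=True)             # zonal-mean component
u_prime = u - u_bar                               # perturbation (eddy) component
assert np.allclose(u_prime.mean(axis=1), 0.0)     # perturbations average to zero
```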
Pressure coordinate in vertical, Cartesian tangential plane
In this form pressure is selected as the vertical coordinate and the horizontal coordinates are written for the Cartesian tangential plane (i.e. a plane tangent to some point on the surface of the Earth). This form does not take the curvature of the Earth into account, but is useful for visualizing some of the physical processes involved in formulating the equations due to its relative simplicity.
Note that the capital D time derivatives are material derivatives. Five equations in five unknowns comprise the system.
the inviscid (frictionless) momentum equations:
$$\frac{Du}{Dt} - fv = -\frac{\partial \Phi}{\partial x}, \qquad \frac{Dv}{Dt} + fu = -\frac{\partial \Phi}{\partial y}$$
the hydrostatic equation, a special case of the vertical momentum equation in which vertical acceleration is considered negligible:
$$\frac{\partial \Phi}{\partial p} = -\frac{RT}{p}$$
the continuity equation, connecting horizontal divergence/convergence to vertical motion under the hydrostatic approximation:
$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial \omega}{\partial p} = 0$$
and the thermodynamic energy equation, a consequence of the first law of thermodynamics:
$$\frac{DT}{Dt} - \frac{RT}{c_{p}\,p}\,\omega = \frac{J}{c_{p}}$$
When a statement of the conservation of water vapor substance is included, these six equations form the basis for any numerical weather prediction scheme.
Primitive equations using sigma coordinate system, polar stereographic projection
According to the National Weather Service Handbook No. 1 – Facsimile Products, the primitive equations can be simplified into the following equations:
Zonal wind:
Meridional wind:
Temperature:
The first term is equal to the change in temperature due to incoming solar radiation and outgoing longwave radiation, which changes with time throughout the day. The second, third, and fourth terms are due to advection. Each subscripted T denotes the temperature on its respective vertical plane. The difference in temperature is divided by the distance between grid points to get the change in temperature with distance; when multiplied by the wind velocity on that plane, the units kelvins per metre and metres per second give kelvins per second. The sum of all the changes in temperature due to motions in the x, y, and z directions gives the total change in temperature with time.
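That bookkeeping (temperature difference per grid spacing, multiplied by the wind speed, summed over directions) is exactly what a numerical scheme performs at every grid point. The following one-dimensional upwind sketch is a minimal illustration; the grid spacing, time step, wind speed, and initial profile are all assumed values, not the Handbook's actual scheme.

```python
import numpy as np

# Minimal 1-D upwind advection sketch: dT/dt = -u * dT/dx.
# Grid spacing, time step, wind speed, and the initial profile are assumed.
nx, dx, dt, u = 100, 100e3, 600.0, 10.0                 # points, m, s, m/s
x = np.arange(nx)
T = 280.0 + 10.0 * np.exp(-((x - 30.0) ** 2) / 50.0)    # initial temperature, K
for _ in range(200):
    dTdx = (T - np.roll(T, 1)) / dx    # K per metre (upwind difference for u > 0)
    T = T - dt * u * dTdx              # (m/s) * (K/m) * s -> K
print(int(T.argmax()))                 # warm anomaly has moved ~12 cells downstream
```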
Precipitable water:
This equation and its notation work in much the same way as the temperature equation. It describes the motion of water from one place to another at a point, without taking into account water that changes form. Inside a given system, the total change in water with time is zero; however, concentrations are allowed to move with the wind.
Pressure thickness:
These simplifications make it much easier to understand what is happening in the model. Things like the temperature (potential temperature), precipitable water, and to an extent the pressure thickness simply move from one spot on the grid to another with the wind. The wind is forecast slightly differently. It uses geopotential, specific heat, the Exner function π, and change in sigma coordinate.
Solution to the linearized primitive equations
The analytic solution to the linearized primitive equations involves a sinusoidal oscillation in time and longitude, modulated by coefficients related to height and latitude; the fields vary as
$$e^{i(s\lambda - \sigma t)},$$
where $s$ and $\sigma$ are the zonal wavenumber and angular frequency, respectively. The solution represents atmospheric waves and tides.
When the coefficients are separated into their height and latitude components, the height dependence takes the form of propagating or evanescent waves (depending on conditions), while the latitude dependence is given by the Hough functions.
This analytic solution is only possible when the primitive equations are linearized and simplified. Unfortunately, many of these simplifications (e.g., no dissipation, an isothermal atmosphere) do not correspond to conditions in the actual atmosphere. As a result, a numerical solution that takes these factors into account is often calculated using general circulation models and climate models.
See also
Barometric formula
Climate model
Euler equations
Fluid dynamics
General circulation model
Numerical weather prediction
References
Beniston, Martin. From Turbulence to Climate: Numerical Investigations of the Atmosphere with a Hierarchy of Models. Berlin: Springer, 1998.
Firth, Robert. Mesoscale and Microscale Meteorological Model Grid Construction and Accuracy. LSMSA, 2006.
Thompson, Philip. Numerical Weather Analysis and Prediction. New York: The Macmillan Company, 1961.
Pielke, Roger A. Mesoscale Meteorological Modeling. Orlando: Academic Press, Inc., 1984.
U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Weather Service. National Weather Service Handbook No. 1 – Facsimile Products. Washington, DC: Department of Commerce, 1979.
External links
National Weather Service – NCSU
Collaborative Research and Training Site, Review of the Primitive Equations.
Partial differential equations
Equations of fluid dynamics
Numerical climate and weather models
Atmospheric dynamics | Primitive equations | Physics,Chemistry,Environmental_science | 1,550 |
14,504,484 | https://en.wikipedia.org/wiki/Walking%20fern | Walking fern may refer to two species of fern in the genus Asplenium, which are occasionally placed in a separate genus Camptosorus. The name "walking fern" derives from the fact that new plantlets grow wherever the arching leaves of the parent touch the ground, creating a walking effect. Both have evergreen, undivided, slightly leathery leaves that are triangular and taper to a thin point. On the bottom of the leaves, sori, or spore-bearing structures, cluster along the veins. These hardy plants can be found in shady spots on limestone ledges and in other limy, rocky places.
Asplenium rhizophyllum (syn: Camptosorus rhizophyllum), native to North America
Asplenium ruprechtii (syn: Camptosorus sibiricus), native to East Asia
It may also refer to:
Adiantum caudatum, a species of maidenhair fern
References
"walking fern." Encyclopædia Britannica. . Encyclopædia Britannica Online. <http://www.britannica.com/eb/article-9075948>.
Ferns | Walking fern | Biology | 244 |
42,202,892 | https://en.wikipedia.org/wiki/Geophysical%20Service%20of%20the%20Russian%20Academy%20of%20Sciences | Geophysical Service of the Russian Academy of Sciences is a research body responsible for geophysical and seismological research.
History
Geophysical Service was established on the basis of Experimental Methodical Expedition of the Joint Institute of Physics of the Earth in accordance with the Decree of the Presidium of the Russian Academy of Sciences No.107 on May 31, 1994, pursuant to Order of the Government of Russia No.444 of May 11, 1993.
Functions
The main activities of the service are basic and applied research in seismology and geophysics, together with continuous seismic monitoring of the Russian Federation and of the world as a whole, tsunami warning in the Russian Far East, monitoring of volcanic activity in Kamchatka, and monitoring of slow geodynamic processes in the Earth's crust and of ground deformation.
In addition, in compliance with orders of the Government of Russia and the Russian Academy of Sciences, the service:
Together with Federal Service for Hydrometeorology and Environmental Monitoring of Russia, maintains the Russian Tsunami Warning System in the Far East;
Together with the Ministry of Defence of the Russian Federation (under Government Resolution No. 537 of August 25, 2005), maintains the Russian facilities (nine seismic stations operated through RAS) of the International Monitoring System, which was created for continuous verification of compliance with the Comprehensive Nuclear-Test-Ban Treaty (CTBT).
References
Institutes of the Russian Academy of Sciences
Geophysics organizations
Seismology
Earthquake and seismic risk mitigation | Geophysical Service of the Russian Academy of Sciences | Engineering | 300 |
64,487,875 | https://en.wikipedia.org/wiki/Thomas%20Whitwell | Thomas Whitwell (24 October 1837 – 5 August 1878) was a British engineer, inventor and metallurgist.
Known as Tom, he was the third son of William and Sarah Whitwell of Kendal. Initially educated at home by private tutors, he was sent to the Quaker-run York school at the age of 10. In 1858 he travelled with his elder brother William to Darlington, where, as an apprentice in Alfred Kitching's locomotive building shop, he learned engineering and metallurgy. From there he continued to build his skills, working with Robert Stephenson & Co in Newcastle.
In 1859 he and William started iron-smelting at Thornaby; iron ore had been discovered in the area four years previously. The brothers designed and built large-scale hot-blast firebrick stoves that were much larger and more efficient than anything built in the area until that point. By 1873 the three rebuilt blast furnaces were 80 feet high and 22 feet in diameter, and the works had over 750 employees.
In 1878 Tom died in an accident at his works: a steam explosion caught him and his foreman, John Thompson, while they were investigating a problem with the rolling mill furnace.
The works continued to run under family ownership, under the chairmanship of Tom's nephew William Fry Whitwell until 1922 when they were eventually closed due to a global glut of pig iron.
Key inventions
Tom filed at least five patents in the UK and two in the US. He invented and patented the hot-blast stove technology used at Thornaby as the Whitwell heating stove; over two hundred of the stoves were installed in over 70 furnaces around the globe. He also patented a continuous brick-burning kiln and a more efficient fire grate.
US influence
The city of Whitwell, Tennessee, is named in his honour. Tom was a founder and chairman of the Southern States Coal, Iron and Land Company, which developed coal mining in Whitwell and iron smelting in nearby South Pittsburg. He visited the area at least twice, hosting a banquet for five hundred workers and guests of 'all classes'. After his death, the company was acquired by the Tennessee Coal and Iron Company.
Influence at home
Throughout his life, Tom was a committed Christian and contributed to wider society, helping to form over thirty Young Men's Christian Associations across the North of England. He was also captain of his local fire brigade. One of his lasting legacies is the Cleveland Institution of Engineers, one of the oldest such engineering bodies in the world. Tom hosted the inaugural meeting at his home on Church Road in Stockton and was the organisation's first secretary. There were 12 members at that first meeting, but by the time of his death (when he was president) the ranks had grown to over 460.
Death and funeral
Thousands of residents assembled to pay their respects at Tom's funeral, filling the south end of Stockton High Street and the entire length of Bridge Road. His funeral procession, four deep, numbered about two thousand people – an unusual turnout for a 40-year-old industrialist and engineer.
References
British inventors
British metallurgists
Metallurgists
1837 births
1878 deaths | Thomas Whitwell | Chemistry,Materials_science | 639 |
68,772,421 | https://en.wikipedia.org/wiki/Alicante%2010 | Alicante 10, also known as RSGC6 (Red Supergiant Cluster 6), is a young massive open cluster belonging to the Milky Way galaxy. It was discovered in 2012 in the 2MASS survey data. Currently, eight red supergiants have been identified in this cluster. Alicante 10 is located in the constellation Scutum at the distance of about 6000 pc from the Sun. It is likely situated at the intersection of the northern end of the Long Bar of the Milky Way and the inner portion of the Scutum–Centaurus Arm—one of the two major spiral arms.
The age of Alicante 10 is estimated to be around 16–20 million years. The observed red supergiants are type II supernova progenitors. The cluster is heavily obscured and has not been detected in visible light. It lies close to other groupings of red supergiants known as RSGC1, Stephenson 2 (RSGC2), RSGC3, Alicante 8 (RSGC4), and Alicante 7 (RSGC5). Alicante 10 is located 16′ south of RSGC3. The red supergiant clusters RSGC3, Alicante 7, and Alicante 10 seem to be part of the RSGC3 complex. The mass of the open cluster is estimated at 10–20 thousand solar masses, which makes it one of the most massive open clusters in the Galaxy.
References
Alicante 10
Scutum (constellation)
Scutum–Centaurus Arm | Alicante 10 | Astronomy | 318 |
71,937,299 | https://en.wikipedia.org/wiki/HD%2032820 | HD 32820, also known as HR 1651, is a yellowish-white hued star located in the southern constellation Caelum, the chisel. It has an apparent magnitude of 6.3, placing it near the limit of naked-eye visibility. The object is located relatively close, at a distance of 103 light years based on parallax measurements from Gaia DR3, and is receding from the Sun.
HD 32820 has a stellar classification of F8 V, indicating that it is an ordinary F-type main-sequence star that is generating energy via hydrogen fusion at its core. It has 125% of the mass of the Sun and 133% of its radius, and it radiates double the luminosity of the Sun from its photosphere. HD 32820 is estimated to be 3.46 billion years old, slightly younger than the Sun, and has a near-solar iron abundance. The star spins modestly and is chromospherically inactive.
References
F-type main-sequence stars
32820
23555
1651
CD-41 01690
Caeli, 27
Caelum | HD 32820 | Astronomy | 239 |
1,084,655 | https://en.wikipedia.org/wiki/Von%20Staudt%E2%80%93Clausen%20theorem | In number theory, the von Staudt–Clausen theorem is a result determining the fractional part of Bernoulli numbers, found independently by Karl von Staudt (1840) and Thomas Clausen (1840).
Specifically, if $n$ is a positive integer and we add $1/p$ to the Bernoulli number $B_{2n}$ for every prime $p$ such that $p-1$ divides $2n$, then we obtain an integer; that is,
$$B_{2n} + \sum_{(p-1)\mid 2n} \frac{1}{p} \in \mathbb{Z}.$$
This fact immediately allows us to characterize the denominators of the non-zero Bernoulli numbers $B_{2n}$ as the product of all primes $p$ such that $p-1$ divides $2n$; consequently, the denominators are square-free and divisible by 6.
These denominators are
6, 30, 42, 30, 66, 2730, 6, 510, 798, 330, 138, 2730, 6, 870, 14322, 510, 6, 1919190, 6, 13530, ... .
The sequence of integers $B_{2n} + \sum_{(p-1)\mid 2n} \frac{1}{p}$ is
1, 1, 1, 1, 1, 1, 2, -6, 56, -528, 6193, -86579, 1425518, -27298230, ... .
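The theorem is easy to check numerically with exact rational arithmetic. The sketch below is illustrative: it computes $B_{2n}$ from the explicit double-sum formula quoted in the proof section and verifies that $B_{2n}+\sum_{(p-1)\mid 2n}1/p$ is an integer, reproducing the denominators listed above.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_n via the explicit double-sum formula used in the proof below."""
    return sum(Fraction((-1) ** m * comb(j, m) * m ** n, j + 1)
               for j in range(n + 1) for m in range(j + 1))

def is_prime(q):
    return q > 1 and all(q % d for d in range(2, int(q ** 0.5) + 1))

for n in range(2, 21, 2):
    b = bernoulli(n)
    correction = sum(Fraction(1, p) for p in range(2, n + 2)
                     if is_prime(p) and n % (p - 1) == 0)
    total = b + correction
    assert total.denominator == 1        # von Staudt-Clausen: the sum is an integer
    print(n, b.denominator, int(total))  # denominators: 6, 30, 42, 30, 66, 2730, ...
```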
Proof
A proof of the von Staudt–Clausen theorem follows from an explicit formula for the Bernoulli numbers:
$$B_{2n}=\sum_{j=0}^{2n}\frac{1}{j+1}\sum_{m=0}^{j}(-1)^{m}\binom{j}{m}m^{2n}$$
and as a corollary:
$$B_{2n}=\sum_{j=0}^{2n}\frac{j!}{j+1}(-1)^{j}S(2n,j),$$
where $S(n,j)$ are the Stirling numbers of the second kind.
Furthermore, the following lemmas are needed.
Let $p$ be a prime number; then:
1. If $p-1$ divides $2n$, then $\sum_{m=0}^{p-1}(-1)^{m}\binom{p-1}{m}m^{2n}\equiv -1 \pmod{p}.$
2. If $p-1$ does not divide $2n$, then $\sum_{m=0}^{p-1}(-1)^{m}\binom{p-1}{m}m^{2n}\equiv 0 \pmod{p}.$
Proof of (1) and (2): One has from Fermat's little theorem,
$$m^{p-1}\equiv 1 \pmod{p}$$
for $m=1,2,\ldots,p-1$.
If $p-1$ divides $2n$, then one has
$$m^{2n}\equiv 1 \pmod{p}$$
for $m=1,2,\ldots,p-1$. Thereafter, one has
$$\sum_{m=1}^{p-1}(-1)^{m}\binom{p-1}{m}m^{2n}\equiv \sum_{m=1}^{p-1}(-1)^{m}\binom{p-1}{m} \pmod{p},$$
from which (1) follows immediately, since the full alternating sum $\sum_{m=0}^{p-1}(-1)^{m}\binom{p-1}{m}=(1-1)^{p-1}$ vanishes.
If $p-1$ does not divide $2n$, then after Fermat's theorem one has
$$m^{2n}\equiv m^{2n-(p-1)} \pmod{p}.$$
If one lets $\wp=\left\lfloor\frac{2n}{p-1}\right\rfloor$, then after iteration one has
$$m^{2n}\equiv m^{2n-\wp(p-1)} \pmod{p}$$
for $m=1,2,\ldots,p-1$ and $0<2n-\wp(p-1)<p-1$.
Thereafter, one has
$$\sum_{m=0}^{p-1}(-1)^{m}\binom{p-1}{m}m^{2n}\equiv \sum_{m=0}^{p-1}(-1)^{m}\binom{p-1}{m}m^{2n-\wp(p-1)} \pmod{p}.$$
Lemma (2) now follows from the above and the fact that $S(n,j)=0$ for $j>n$.
(3). It is easy to deduce that for $a>2$ and $b>2$, $ab$ divides $(ab-1)!$.
(4). Stirling numbers of the second kind are integers.
Now we are ready to prove the theorem.
If $j+1$ is composite and $j>3$, then from (3), $j+1$ divides $j!$.
For $j=3$, one has $S(2n,3)=\frac{1}{6}\left(3^{2n}-3\cdot 2^{2n}+3\right)$, and since
$$3^{2n}-3\cdot 2^{2n}+3\equiv 9^{n}-3\cdot 4^{n}+3\equiv 1-0+3\equiv 0 \pmod{4},$$
the $j=3$ term $\frac{3!}{4}(-1)^{3}S(2n,3)=-\frac{3^{2n}-3\cdot 2^{2n}+3}{4}$ is an integer.
If $j+1$ is prime, then we use (1) and (2), and if $j+1$ is composite, then we use (3) and (4), to deduce
$$B_{2n}=\sum_{j=0}^{2n}\frac{j!}{j+1}(-1)^{j}S(2n,j)=I_n-\sum_{(p-1)\mid 2n}\frac{1}{p},$$
where $I_n$ is an integer, as desired.
See also
Kummer's congruence
References
External links
Theorems in number theory | Von Staudt–Clausen theorem | Mathematics | 502 |
71,560,254 | https://en.wikipedia.org/wiki/HD%20208741 | HD 208741, also known as HR 8380, is a yellowish-white hued star located in the southern circumpolar constellation Octans. It has an apparent magnitude of 5.91, making it faintly visible to the naked eye. Parallax measurements place it at a distance of 211 light years, and it is currently receding from the Sun.
HD 208741 has a 10th-magnitude K-type main-sequence companion at a wide separation. Together, they make up a wide binary system designated collectively as CPD−76°1542. Sir John Herschel, the discoverer of the pair, noted the primary to be a probable spectroscopic binary.
This object has a stellar classification of F3 III, indicating that it is a slightly evolved F-type star. Gaia Data Release 3 models it as a dwarf that is 81.3% of the way through its main-sequence lifetime. At present it has 1.52 times the mass of the Sun and a slightly enlarged radius due to its evolved state, and it radiates 12.9 times the luminosity of the Sun from its photosphere. HD 208741 has a metallicity twice the Sun's, making it metal-enriched. It is estimated to be 1.1 billion years old and is spinning with a modest projected rotational velocity.
References
F-type giants
K-type main-sequence stars
208741
PD-76 01542
108849
8380
Octantis, 66
Double stars
Octans | HD 208741 | Astronomy | 319 |
22,269,179 | https://en.wikipedia.org/wiki/Aster%20%28cell%20biology%29 | An aster is a cellular structure shaped like a star, consisting of a centrosome and its associated microtubules during the early stages of mitosis in an animal cell. Asters do not form during mitosis in plants. Astral rays, composed of microtubules, radiate from the centrosphere and look like a cloud. Astral rays are one variant of microtubule which comes out of the centrosome; others include kinetochore microtubules and polar microtubules.
During mitosis, there are five stages of cell division: prophase, prometaphase, metaphase, anaphase, and telophase. During prophase, two aster-covered centrosomes migrate to opposite sides of the nucleus in preparation for mitotic spindle formation. During prometaphase, the nuclear envelope fragments and the mitotic spindle forms. During metaphase, the kinetochore microtubules extending from each centrosome connect to the centromeres of the chromosomes. Next, during anaphase, the kinetochore microtubules pull the sister chromatids apart into individual chromosomes and pull them towards the centrosomes, located at opposite ends of the cell. This allows the cell to divide properly, with each daughter cell containing a full set of chromosomes. In some cells, the orientation of the asters determines the plane of division upon which the cell will divide.
Astral microtubules
Astral microtubules are a subpopulation of microtubules which exist only during and immediately before mitosis. They are defined as any microtubule originating from the centrosome that does not connect to a kinetochore. Astral microtubules develop in the actin skeleton and interact with the cell cortex to aid in spindle orientation. They are organized into radial arrays around the centrosomes. The turn-over rate of this population of microtubules is higher than that of any other population.
The role of astral microtubules is assisted by dyneins specific to this role. These dyneins have their light chains (static portion) attached to the cell membrane, and their globular parts (dynamic portions) attached to the microtubules. The globular chains attempt to move towards the centrosome, but as they are bound to the cell membrane, this results in pulling the centrosomes towards the membrane, thus assisting cytokinesis.
Astral microtubules are not required for the progression of mitosis, but they are required to ensure the fidelity of the process. The function of astral microtubules can be generally considered as determination of cell geometry. They are absolutely required for correct positioning and orientation of the mitotic spindle apparatus, and are thus involved in determining the cell division site based on the geometry and polarity of the cells.
The maintenance of astral microtubules is dependent on centrosomal integrity. It is also dependent on several microtubule-associated proteins such as EB1 and adenomatous polyposis coli (APC).
Growth of Microtubules
Asters grow through nucleation and polymerization. At the minus ends, the centrosome nucleates (seeds) the microtubules and anchors them, while polymerization occurs at the free plus ends. Cortical dynein, a motor protein, moves along the microtubules of the cell and plays a key role in the growth and inhibition of astral microtubules; barrier-attached dynein can both inhibit and trigger growth.
References
Ishihara, Keisuke, et al. "Physical basis of large microtubule aster growth." eLife, vol. 5, 2016. Gale OneFile: Health and Medicine, link.gale.com/apps/doc/A476395269/HRCA?u=cuny_hunter&sid=HRCA&xid=5e6ad228. Accessed 28 Apr. 2021.
Laan, Liedewij et al. “Cortical Dynein Controls Microtubule Dynamics to Generate Pulling Forces That Position Microtubule Asters.” Cell (Cambridge) 148.3 (2012): 502–514. Web.
See also
Mitosis
Centrosome
Centriole
Chromosome
Cell biology
Cell cycle
Mitosis | Aster (cell biology) | Biology | 905 |
1,293,780 | https://en.wikipedia.org/wiki/Fomitopsis%20betulina | Fomitopsis betulina (previously Piptoporus betulinus), commonly known as the birch polypore, birch bracket, or razor strop, is a common bracket fungus and, as the name suggests, grows almost exclusively on birch trees. The brackets burst out from the bark of the tree, and these fruit bodies can last for more than a year.
Taxonomy
The fungus was originally described by Jean Bulliard in 1788 as Boletus betulinus. It was transferred to the genus Piptoporus by Petter Karsten in 1881. Molecular phylogenetic studies suggested that the species was more closely related to Fomitopsis than to Piptoporus, and the fungus was reclassified to Fomitopsis in 2016.
The specific epithet betulina refers to the genus of the host plant (Betula). Common names for the fungus include birch bracket, birch polypore, and razorstrop fungus.
Description
The fruit bodies (basidiocarps) are pale, with a smooth greyish-brown top surface, while the creamy white underside has hundreds of pores that contain the spores. The fruit body has a rubbery texture, becoming corky with age. Wood decayed by the fungus, and cultures of its mycelium, often smell distinctly of green apples. The spores are cylindrical to ellipsoidal in shape, and measure 3–6 by 1.5–2 μm.
Fomitopsis betulina has a bipolar mating system where monokaryons or germinating spores can only mate and form a fertile dikaryon with an individual that possesses a different mating-type factor. There are at least 33 different mating-type factors within the British population of this fungus. These factors are all variants or alleles of a single gene, as opposed to the tetrapolar mating system of some other basidiomycete species, which involves two genes.
It is considered inedible.
Range and ecology
Fomitopsis betulina is one of the most common species of brown-rot fungi. Its geographic distribution appears to be restricted to the Northern Hemisphere, including North America, Europe, and Asia. It is only found on birch trees, including Betula pendula, B. pubescens, B. papyrifera, and B. obscura. There is some doubt about the ability of isolates from the European continent, North America, and the British Isles to interbreed.
It is a necrotrophic parasite on weakened birches, and will cause brown rot and eventually death, being one of the most common fungi visible on dead birches. It is likely that the birch bracket fungus becomes established in small wounds and broken branches and may lie dormant for years, compartmentalised into a small area by the tree's own defence mechanisms, until something occurs to weaken the tree. Fire, drought and suppression by other trees are common causes of such stress.
In most infections there is only one fungal individual present, but occasionally several individuals may be isolated from a single tree, and in these cases it is possible that the birch bracket fungus entered after something else killed the tree. These fungal "individuals" can sometimes be seen if a slice of brown-rotted birch wood is incubated in a plastic bag for several days. This allows the white mycelium of the fungus to grow out of the surface of the wood. If more than one individual dikaryon is present, lines of intraspecific antagonism form as the two individual mycelia interact and repel each other.
The fungus can harbor a large number of species of insects that depend on it for food and as breeding sites. In a large-scale study of over 2600 fruit bodies collected in eastern Canada, 257 species of arthropods, including 172 insects and 59 mites, were found. The fungus is eaten by the caterpillars of the fungus moth Nemaxera betulinella. Old fruit bodies that have survived the winter are often colonized by the white to pale yellow fungus Hypocrea pulmonata.
Research on chemical constituents
Fomitopsis betulina has been widely used in traditional medicines, and has been extensively researched for its phytochemistry and pharmacological activity. Phytochemicals include phenolic acids, indole compounds, sterols, and triterpenes, especially betulin and betulinic acid.
Agaric acid, found in the fruit body of the fungus, is poisonous to the parasitic whipworm Trichuris trichiura. The fungus was carried by "Ötzi the Iceman" – the 5,300-year-old mummy found in Tyrol – with speculation that the fungus may have been used as a laxative to expel whipworm.
Uses
The velvety cut surface of the fruit body was traditionally used as a strop for finishing the edges on razors, and as a mounting material for insect collections. It has also been used as tinder and anesthetic.
See also
Fomes fomentarius, another fungus carried by Ötzi the Iceman
References
Fomitopsidaceae
Fungi described in 1788
Medicinal fungi
Fungi of Asia
Fungi of Europe
Fungi of North America
Inedible fungi
Parasitic fungi
Taxa named by Jean Baptiste François Pierre Bulliard
Fungus species
Ötzi | Fomitopsis betulina | Biology | 1,088 |
41,993 | https://en.wikipedia.org/wiki/Intensity%20%28physics%29 | In physics and many other areas of science and engineering the intensity or flux of radiant energy is the power transferred per unit area, where the area is measured on the plane perpendicular to the direction of propagation of the energy. In the SI system, it has units watts per square metre (W/m2), or kg⋅s−3 in base units. Intensity is used most frequently with waves such as acoustic waves (sound), matter waves such as electrons in electron microscopes, and electromagnetic waves such as light or radio waves, in which case the average power transfer over one period of the wave is used. Intensity can be applied to other circumstances where energy is transferred. For example, one could calculate the intensity of the kinetic energy carried by drops of water from a garden sprinkler.
The word "intensity" as used here is not synonymous with "strength", "amplitude", "magnitude", or "level", as it sometimes is in colloquial speech.
Intensity can be found by taking the energy density (energy per unit volume) at a point in space and multiplying it by the velocity at which the energy is moving. The resulting vector has the units of power divided by area (i.e., surface power density). The intensity of a wave is proportional to the square of its amplitude. For example, the intensity of an electromagnetic wave is proportional to the square of the wave's electric field amplitude.
Mathematical description
If a point source is radiating energy in all directions (producing a spherical wave), and no energy is absorbed or scattered by the medium, then the intensity decreases in proportion to the distance from the object squared. This is an example of the inverse-square law.
Applying the law of conservation of energy, if the net power emanating is constant,
\[ P = \oint_A \mathbf{I} \cdot \mathrm{d}\mathbf{A}, \]
where
$P$ is the net power radiated;
$\mathbf{I}$ is the intensity vector as a function of position;
the magnitude $|\mathbf{I}|$ is the intensity as a function of position;
$\mathrm{d}\mathbf{A}$ is a differential element of a closed surface that contains the source.
If one integrates a uniform intensity, $|\mathbf{I}| = \mathrm{const.}$, over a surface that is perpendicular to the intensity vector, for instance over a sphere centered around the point source, the equation becomes
\[ P = |\mathbf{I}| \cdot A_{\mathrm{surf}} = |\mathbf{I}| \cdot 4\pi r^2, \]
where
$|\mathbf{I}|$ is the intensity at the surface of the sphere;
$r$ is the radius of the sphere;
$A_{\mathrm{surf}} = 4\pi r^2$ is the expression for the surface area of a sphere.
Solving for $|\mathbf{I}|$ gives
\[ |\mathbf{I}| = \frac{P}{A_{\mathrm{surf}}} = \frac{P}{4\pi r^2}. \]
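As a worked illustration (not part of the original text; it uses standard reference values for the solar luminosity, $P \approx 3.85 \times 10^{26}\,\mathrm{W}$, and the Earth–Sun distance, $r \approx 1.50 \times 10^{11}\,\mathrm{m}$), this inverse-square relation reproduces the solar constant:
\[ |\mathbf{I}| = \frac{P}{4\pi r^2} = \frac{3.85 \times 10^{26}\,\mathrm{W}}{4\pi \left(1.50 \times 10^{11}\,\mathrm{m}\right)^2} \approx 1.4 \times 10^{3}\ \mathrm{W/m^2}, \]
close to the measured mean intensity of sunlight at the top of Earth's atmosphere, about $1361\ \mathrm{W/m^2}$.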
If the medium is damped, then the intensity drops off more quickly than the above equation suggests.
Anything that can transmit energy can have an intensity associated with it. For a monochromatic propagating electromagnetic wave, such as a plane wave or a Gaussian beam, if $E$ is the complex amplitude of the electric field, then the time-averaged energy density of the wave, travelling in a non-magnetic material, is given by:
\[ \langle U \rangle = \frac{n^2 \varepsilon_0}{2} |E|^2, \]
and the local intensity is obtained by multiplying this expression by the wave velocity, $c/n$:
\[ I = \frac{c\, n\, \varepsilon_0}{2} |E|^2, \]
where
$n$ is the refractive index;
$c$ is the speed of light in vacuum;
$\varepsilon_0$ is the vacuum permittivity.
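As a numerical cross-check (a minimal sketch, not from the original article; the function name and the 1 kV/m example amplitude are chosen purely for illustration), the local-intensity formula can be evaluated with standard CODATA constants:

from scipy.constants import c, epsilon_0  # speed of light, vacuum permittivity

def em_intensity(e_amplitude, n=1.0):
    # Time-averaged intensity of a plane wave: I = (c * n * eps0 / 2) * |E|^2
    return 0.5 * c * n * epsilon_0 * e_amplitude ** 2

# An amplitude of about 1 kV/m in vacuum (n = 1) gives roughly 1.3e3 W/m^2,
# comparable to the intensity of bright sunlight computed above.
print(em_intensity(1.0e3))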
For non-monochromatic waves, the intensity contributions of different spectral components can simply be added. The treatment above does not hold for arbitrary electromagnetic fields. For example, an evanescent wave may have a finite electrical amplitude while not transferring any power. The intensity should then be defined as the magnitude of the Poynting vector.
Electron beams
For electron beams, intensity is the probability of electrons reaching some particular position on a detector (e.g. a charge-coupled device) which is used to produce images that are interpreted in terms of both microstructure of inorganic or biological materials, as well as atomic scale structure. The map of the intensity of scattered electrons or x-rays as a function of direction is also extensively used in crystallography.
Alternative definitions
In photometry and radiometry intensity has a different meaning: it is the luminous or radiant power per unit solid angle. This can cause confusion in optics, where intensity can mean any of radiant intensity, luminous intensity or irradiance, depending on the background of the person using the term. Radiance is also sometimes called intensity, especially by astronomers and astrophysicists, and in heat transfer.
See also
Field strength
Sound intensity
Magnitude (astronomy)
Footnotes
References
Optical quantities
Radiometry
Physical quantities | Intensity (physics) | Physics,Mathematics,Engineering | 858 |
45,252,803 | https://en.wikipedia.org/wiki/List%20of%20OpenCL%20applications | The following list contains a list of computer programs that are built to take advantage of the OpenCL or WebCL heterogeneous compute framework.
Graphics
ACDSee
Adobe Photoshop
Affinity Photo
Capture One
Blurate
darktable
FAST: medical imaging framework
GIMP
HALCON by MVTec
Helicon Focus
ImageMagick
Musemage
Pathfinder, GPU-based font rasterizer
PhotoScan
seedimg
CAD and 3D modelling
Autodesk Maya
Blender GPU rendering with NVIDIA CUDA and OptiX & AMD OpenCL
Houdini
LuxRender
Mandelbulber
Audio, video, and multimedia
AlchemistXF
CUETools
DaVinci Resolve by Blackmagic Design
FFmpeg has a number of OpenCL filters
gr-fosphor GNU Radio block for RTSA-like spectrum visualization
HandBrake
Final Cut Pro X
KNLMeansCL: Denoise plugin for AviSynth
Libav
OpenCV
RealFlow Hybrido2
Sony Catalyst
Vegas Pro by Magix Software GmbH
vReveal by MotionDSP
Total Media Theatre by ArcSoft
x264
x265 (H.265/HEVC possible)
Web (including WebCL)
Google Chrome (experimental)
Mozilla Firefox (experimental)
Office
Collabora Online
LibreOffice Calc
Games
Military Operations, operational level real-time strategy game where the complete army is simulated in real-time using OpenCL
Planet Explorers is using OpenCL to calculate the voxels.
BeamNG.drive plans to use OpenCL for its physics engine.
Leela Zero, open source replication of Alpha Go Zero using OpenCL for neural network computation.
Scientific computing
Advanced Simulation Library (ASL)
AMD Compute Libraries
clBLAS, complete set of BLAS level 1, 2 & 3 routines
clSPARSE, routines for sparse matrices
clFFT, FFT routines
clRNG, random number generators MRG31k3p, MRG32k3a, LFSR113, and Philox-4×32-10
ArrayFire: parallel computing with an easy-to-use API with JIT compiler (open source)
BEAGLE, Bayesian and Maximum Likelihood phylogenetics library
BigDFT
BOINC
Bolt, STL-compatible library for creating accelerated data parallel applications
Bullet
CLBlast: tuned clBlas
clMAGMA, OpenCL port of the MAGMA project, a linear algebra library similar to LAPACK
CP2K: molecular simulations
GROMACS: chemical simulations; OpenCL deprecated in version 2021 in favor of SYCL
HiFlow3: Open source finite elements CFD
HIP, CUDA-to-portable C++ compiler
LAMMPS
MDT (Microstructure Diffusion Toolbox): MRI analysis in Python and OpenCL
MOT (Multi-threaded Optimization Toolbox): OpenCL accelerated non-linear optimization and MCMC sampling
OCCA
Octopus
OpenMM: Part of Omnia Suite, biomolecular simulations
PARALUTION
pyFAI, Fast Azimuthal Integration in Python
Random123, library of counter-based random number generators
SecondSpace, simulation software for waves in 2D space
StarPU, task programming library
Theano: Python array library
UFO, data processing framework
VexCL, vector expression template library
ViennaCL and PyViennaCL, linear algebra library developed at TU Wien
Cryptography
BFGMiner
Hashcat, password recovery tool
John the Ripper
Scallion, GPU-based Onion hash generator
Pyrit, WPA key recovery software
Language bindings
ClojureCL: parallel OpenCL 2.0 with Clojure
dcompute: native execution of D
Erlang OpenCL binding
OpenCLAda: Binding Ada to OpenCL
OpenCL.jl: Julia bindings
PyOpenCL, Python interface to OpenCL API (a minimal usage sketch follows this list)
Project Coriander: converts CUDA to OpenCL 1.2 with CUDA-on-CL
Lightweight Java Game Library (LWJGL) contains low-lag Java bindings for OpenCL
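To make the role of these bindings concrete, here is a minimal PyOpenCL sketch (not from the original list; it assumes a working OpenCL runtime and follows the standard vector-addition example from the PyOpenCL documentation):

import numpy as np
import pyopencl as cl

a = np.random.rand(50_000).astype(np.float32)
b = np.random.rand(50_000).astype(np.float32)

ctx = cl.create_some_context()   # pick any available OpenCL device
queue = cl.CommandQueue(ctx)

mf = cl.mem_flags
a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
res_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# The kernel itself is ordinary OpenCL C, compiled at run time.
prg = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *res) {
    int gid = get_global_id(0);
    res[gid] = a[gid] + b[gid];
}
""").build()

prg.vadd(queue, a.shape, None, a_g, b_g, res_g)

res = np.empty_like(a)
cl.enqueue_copy(queue, res, res_g)
assert np.allclose(res, a + b)   # host-side check of the device result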
Miscellaneous
clinfo
clpeak, peak device capability profiler
OCLToys, collection of OpenCL examples
opencl-stream, OpenCL implementation of the STREAM benchmark
SNU NPB, benchmark
mixbench, benchmark tool for evaluating GPUs on mixed operational intensity kernels
See also
List of OpenGL programs
References
Heterogeneous computing
Lists of software | List of OpenCL applications | Technology | 902 |
37,092,860 | https://en.wikipedia.org/wiki/International%20Conference%20on%20Cold%20Fusion | The International Conference on Cold Fusion (ICCF) (also referred to as Annual Conference on Cold Fusion in 1990-1991 and mostly as International Conference on Condensed Matter Nuclear Science since 2007) is an annual or biennial conference on the topic of cold fusion. An international conference on cold fusion was held in Santa Fe, New Mexico US in 1989. However, the first ICCF conference (ICCF1) took place in 1990 in Salt Lake City, Utah, under the title "First Annual Conference on Cold Fusion". Its location has since rotated between Russia, US, Europe, and Asia. It was held in India for the first time in 2011.
The conferences have been criticized as events which attract "crackpots" and "pseudo-scientists".
Reception
The First Annual Conference on Cold Fusion was held in March 1990 in Salt Lake City, Utah, United States. Robert L. Park of the American Physical Society derisively referred to it as a "seance of true believers." The conference was attended by more than 200 researchers from the United States, Italy, Japan, India and Taiwan and dozens of reporters from all over the U.S. and abroad.
The Third International Conference on Cold Fusion was held in 1992 in Nagoya, Japan. It was described by The New York Times, "depending on one's point of view" as "either a turning point in which evidence was presented that will convince the skeptics that cold fusion exists or a religious revival where claims of miracles were lapped up by ardent believers." The conference was sponsored by seven Japanese scientific societies, it was attended by 200 Japanese scientists and more than 100 from abroad. Tomohiro Taniguchi, then director of the Electric Power Technology Division at Japan's Ministry of International Trade and Industry, reportedly said that the Ministry of International Trade and Industry was willing to finance research in the field in view of "encouraging evidence, especially after the conference." The conference was also covered by the Associated Press.
A journalist for Wired magazine attended the 1998 conference in Vancouver—apparently the only mainstream journalist who attended—and reported that he found there "about 200 extremely conventional-looking scientists, almost all of them male and over 50" with some apparently over 70. He then inferred that "[the] younger ones had bailed years ago, fearing career damage from the cold fusion stigma." He reported seeing "highly technical presentations" and "was amazed by the quantity of the work, its quality, and the credentials of the people pursuing it", whereas "[a] few obvious pseudoscientists, promoting their ideas in an adjoining room used for poster sessions, were politely ignored."
By 1999, attendance by researchers at the ICCF meetings drew comment from the field of science studies. Although scientific debate over cold fusion had effectively ended in 1990, attendance at the ICCF meetings for the next 8 years had been relatively stable at between 100 and 300. Sociologist Bart Simon of Concordia University described the state of the field as "undead", and considered that the conference evidenced that "as far as normal science is concerned, [cold fusion] is of interest to crackpots, pseudo-scientists, frauds and a few sociologists of science".
David Goodstein has written that although an ICCF event had "all the trappings of a normal scientific meeting", it was in fact "no normal scientific conference" since "cold fusion was a pariah field, cast out by the scientific establishment". It was an environment, he added, "...in which crackpots flourished, and this made matters worse for those who were at least willing to entertain the notion that there might have been some serious science going on."
Conferences
The conference is organized by The International Society for Condensed Matter Nuclear Science. Conference attendees include "a mix of professional scientists, along with retired, semi-retired and amateur scientists, engineers and technicians, and a number of entrepreneurs, inventors, and interested lay people."
ICCF24 - July 2022, Mountain View, California
ICCF25 - August 2023, Szczecin, Poland
ICCF26 - 26-30 May 2025, Morioka, Japan
References
External links
The International Society for Condensed Matter Nuclear Science home page
History of science
International conferences
Technology conferences
Cold fusion | International Conference on Cold Fusion | Physics,Chemistry,Technology | 916 |
913,935 | https://en.wikipedia.org/wiki/Mac%20NC | The Mac NC, or Macintosh NC, is a prototype networked thin client from Apple, expected for release by April 1998. The device was widely promoted by then-Apple director Larry Ellison, apparently as part of his Oracle Network Computer initiative. Mac NC was canceled, and its key technology components were inherited by the iMac G3, which was released in August 1998.
History
On May 21, 1996, Oracle Corporation, along with 30 hardware and software vendors, announced an intent to design computers around the "network computer platform". Products based on the Network Computer Reference Profile include diskless nodes, applications coded in cross-platform languages such as Java, and Internet connectivity using common software such as Netscape Navigator.
In May 1996, Apple partnered in the network computing effort, with the Pippin as its flagship.
On July 9, 1997, Gil Amelio was removed as CEO of Apple by its board of directors. Steve Jobs became interim CEO, to begin a critical restructuring of the company's product line. He remained CEO until August 2011, shortly before his death.
Oracle Corporation CEO and Apple board member Larry Ellison announced in December 1997, while talking to the Harvard Computer Society, that Apple would release a product called the Macintosh NC in April 1998. He suggested the network computer would have a "near-300 MHz" processor, a 17-inch screen, a price under , and a hard disk drive option at $100.
Steve Jobs did not agree, stating via email, "Unfortunately, [Ellison] is pretty far off base [...] Maybe he is trying to deflect interest from what we are really doing." While at Oracle, Ellison had overseen the development of a business alliance that produced several Network Computer-branded devices from companies such as Sun and IBM. Apple never manufactured any devices under the Oracle alliance, but did endorse the Network Computer Reference Profile.
Jobs had already stopped all Macintosh clone efforts, including the Pippin concept and any prospects of the Mac NC.
Ultimately the technology shipped as NetBoot with the release of Mac OS X Server 1.0 in January 1999.
References
External links
Birth of the iMac article at The Mac Observer
Apple Inc. hardware
Macintosh platform
Network computer (brand) | Mac NC | Technology | 450 |
12,469,280 | https://en.wikipedia.org/wiki/TAMA%20300 | TAMA 300 is a gravitational wave detector located at the Mitaka campus of the National Astronomical Observatory of Japan. It is a project of the gravitational wave studies group at the Institute for Cosmic Ray Research (ICRR) of the University of Tokyo. The ICRR was established in 1976 for cosmic ray studies, and is currently developing the Kamioka Gravitational Wave Detector (KAGRA).
TAMA 300 was preceded in Mitaka by a 20 m prototype, TAMA 20, in the years 1991–1994. Later the prototype was moved underground to the Kamioka mine and renamed LISM. It operated from 2000 to 2002 and established the seismic quietness of the underground location.
Construction of the TAMA project started in 1995. Data were collected from 1999 to 2004. It adopted a Fabry–Pérot Michelson interferometer (FPMI) with power recycling. It is officially known as the 300m Laser Interferometer Gravitational Wave Antenna because of its 300-meter-long optical arms.
The goal of the project was to develop advanced techniques needed for a future kilometer sized interferometer and to detect gravitational waves that may occur by chance within the Local Group.
Observation of TAMA has been terminated, and work moved to the 100 m Cryogenic Laser Interferometer Observatory (CLIO) prototype in Kamioka mine.
As of 2020, the modified TAMA 300 is used as a testbed to develop new technologies.
See also
CLIO, a prototype interferometric gravitational wave detector operating in Japan.
KAGRA, a state-of-the-art interferometric gravitational wave detector under development in Japan
References
Interferometers
Gravitational-wave telescopes
Astronomical observatories in Japan
University of Tokyo | TAMA 300 | Technology,Engineering | 341 |
1,873,971 | https://en.wikipedia.org/wiki/Virotherapy | Virotherapy is a treatment using biotechnology to convert viruses into therapeutic agents by reprogramming viruses to treat diseases. There are three main branches of virotherapy: anti-cancer oncolytic viruses, viral vectors for gene therapy and viral immunotherapy. These branches use three different types of treatment methods: gene overexpression, gene knockout, and suicide gene delivery. Gene overexpression adds genetic sequences that compensate for low to zero levels of needed gene expression. Gene knockout uses RNA methods to silence or reduce expression of disease-causing genes. Suicide gene delivery introduces genetic sequences that induce an apoptotic response in cells, usually to kill cancerous growths. In a slightly different context, virotherapy can also refer more broadly to the use of viruses to treat certain medical conditions by killing pathogens.
History
Chester M. Southam, a researcher at Memorial Sloan Kettering Cancer Center, pioneered the study of viruses as potential agents to treat cancer.
Oncolytic virotherapy
Oncolytic virotherapy is not a new idea – as early as the mid-1950s doctors were noticing that cancer patients who suffered an unrelated viral infection, or who had been vaccinated recently, showed signs of improvement; this has been largely attributed to the production of interferon and tumour necrosis factors in response to viral infection, but oncolytic viruses are now being designed that selectively target and lyse only cancerous cells.
In the 1940s and 1950s, studies were conducted in animal models to evaluate the use of viruses in the treatment of tumours. In the 1940s–1950s some of the earliest human clinical trials with oncolytic viruses were started.
Mechanism
It is believed that oncolytic viruses achieve their goals by two mechanisms: selective killing of tumor cells and recruitment of the host immune system. One of the major challenges in cancer treatment is finding treatments that target tumor cells while ignoring non-cancerous host cells. Viruses are chosen because they can target specific receptors expressed by cancer cells that allow for virus entry. One example of this is the targeting of CD46 on multiple myeloma cells by measles virus. The expression of these receptors is often increased in tumor cells. Viruses can also be engineered to target specific receptors on tumor cells. Once viruses have entered the tumor cell, the rapid growth and division of tumor cells, as well as their decreased ability to fight off viruses, make them advantageous for viral replication compared to non-tumorous cells. The replication of viruses in tumor cells causes the cells to lyse, killing them, and also releases signals that activate the host's own immune system, overcoming immunosuppression. This occurs through the disruption of the tumor microenvironment that otherwise prevents recognition by host immune cells. Tumor antigens and danger-associated molecular patterns are also released during the lysis process, which helps recruit host immune cells. Currently, there are many viruses being used and tested, all differing in their ability to lyse cells, activate the immune system, and transfer genes.
Clinical development
As of 2019, there are over 100 clinical trials looking at different viruses, cancers, doses, routes, and administrations. Most of the work has been done on herpesvirus, adenovirus, and vaccinia virus, but other viruses include measles virus, coxsackievirus, poliovirus, Newcastle disease virus, and more. Methods of delivery tested include intratumoral, intravenous, and intraperitoneal routes. Types of tumor currently being studied with oncolytic viruses include CNS tumors, renal cancer, head and neck cancer, ovarian cancer, and more. Oncolytic virotherapy has been tested both as a monotherapy and in combination with other therapies, including chemotherapy, radiotherapy, surgery, and immunotherapy.
Approved for clinical use
In 2015 the FDA approved the marketing of talimogene laherparepvec, a genetically engineered herpes virus, to treat melanoma lesions that cannot be operated on; as of 2019, it is the only oncolytic virus approved for clinical use. It is injected directly into the lesion. As of 2016 there was no evidence that it extends the life of people with melanoma, or that it prevents metastasis. Two genes were removed from the virus – one that shuts down an individual cell's defenses, and another that helps the virus evade the immune system – and a gene for human GM-CSF was added. The drug works by replicating in cancer cells, causing them to burst; it was also designed to stimulate an immune response but as of 2016, there was no evidence of this. The drug was created and initially developed by BioVex, Inc. and was continued by Amgen, which acquired BioVex in 2011. It was the first oncolytic virus approved in the West.
Others
RIGVIR is a virotherapy drug that was approved by the State Agency of Medicines of the Republic of Latvia in 2004. It is wild-type ECHO-7, a member of the echovirus family. The potential use of echovirus as an oncolytic virus to treat cancer was discovered by Latvian scientist Aina Muceniece in the 1960s and 1970s. The data used to register the drug in Latvia were not sufficient to obtain approval for its use in the US, Europe, or Japan. As of 2017 there was no good evidence that RIGVIR is an effective cancer treatment. On March 19, 2019, the manufacturer of ECHO-7, SIA LATIMA, announced the drug's removal from sale in Latvia, citing financial and strategic reasons and insufficient profitability. However, several days later an investigative TV show revealed that the State Agency of Medicines had run laboratory tests on the vials and found that the amount of ECHO-7 virus was much smaller than claimed by the manufacturer. In March 2019, the distribution of ECHO-7 in Latvia was stopped.
Challenges and future prospective
Although oncolytic viruses are engineered to specifically target tumor cells, there is always the potential for off-target effects leading to symptoms usually associated with that virus. The most commonly reported symptoms have been flu-like. The HSV virus used as an oncolytic virus retains its native thymidine kinase gene, which allows it to be targeted with antiviral therapy in the event of unwarranted side effects.
Other challenges include developing an optimal method of delivery either directly to the tumor site or intravenously and allowing for target of multiple sites. Clinical trials include the tracking of viral replication and spread using various laboratory techniques in order to find the optimal treatment.
Another major challenge with using oncolytic viruses as therapy is avoiding the host's natural immune system, which will prevent the virus from infecting the tumor cells. Once the oncolytic virus is introduced to the host system, a healthy host's immune system will naturally try to fight off the virus. Because of this, if less virus is able to reach the target site, the efficacy of the oncolytic virus is reduced. This leads to the idea that inhibiting the host's immune response may be necessary early in the treatment, but doing so raises safety concerns. Due to these safety concerns of immunosuppression, clinical trials have excluded patients who are immunocompromised or have active viral infections.
Viral gene therapy
Viral gene therapy uses genetically engineered viral vectors to deliver therapeutic genes to cells with genetic malfunctions.
Mechanism
The use of viral material to deliver a gene starts with the engineering of the viral vector. Though the molecular mechanism of the viral vector differ from vector to vector, there are some general principles that are considered.
In diseases that are secondary to a genetic mutation that causes the lack of a gene, the gene is added back in. In diseases that are due to the overexpression of a gene, viral genetic engineering may be introduced to turn off the gene.
Viral gene therapy may be done in vivo or ex vivo. In the former, the viral vector is delivered directly to the organ or the tissue of the patient. In the latter, the desired tissue is first retrieved, genetically modified, and then transferred back to the patient. The molecular mechanisms of gene delivery and/or integration into cells vary based on the viral vector that is used.
Rather than requiring the multiple and continuous treatments typical of drug delivery, delivery of a gene has the potential to create a long-lasting cell population that can continuously produce the gene product.
Clinical development
There have been a few successful clinical uses of viral gene therapy since the 2000s, specifically with adeno-associated virus vectors and chimeric antigen receptor T-cell therapy.
Approved for clinical use
Adeno-associated virus
Vectors made from adeno-associated virus are among the most established products used in clinical trials today. AAV was initially attractive for use in gene therapy because it is not known to cause any disease, along with several other favorable features. It has also been engineered so that it does not replicate after delivering the gene.
In 2017, the FDA approved Spark Therapeutics' Luxturna, an AAV vector-based gene therapy product for the treatment of RPE65 mutation-associated retinal dystrophy in adults. Luxturna is the first gene therapy approved in the US for the treatment of a monogenetic disorder. It has been authorized for use in the EU since 2018.
In 2019, the FDA approved Zolgensma, an AAV vector-based gene therapy product for the treatment of spinal muscular atrophy in children under the age of two. As of August 2019, it is the world's most expensive treatment, at a cost of over two million USD. Novartis is still seeking marketing approval for the drug in the EU as of 2019.
In addition, other clinical trials involving AAV gene therapy aim to treat diseases such as haemophilia along with various neurological, cardiovascular, and muscular diseases.
Chimeric antigen receptor T cells
Chimeric antigen receptor T cells (CAR T cells) are a type of immunotherapy that makes use of viral gene editing. CAR T-cell therapy uses an ex vivo method in which T lymphocytes are extracted and engineered with a virus, typically a gammaretrovirus or lentivirus, to recognize specific proteins on cell surfaces. This causes the T lymphocytes to attack the cells that express the undesired protein. Currently two therapies, tisagenlecleucel and axicabtagene ciloleucel, are FDA-approved to treat acute lymphoblastic leukemia and diffuse large B-cell lymphoma, respectively. Clinical trials are underway to explore the potential benefits of this approach in solid malignancies.
Others
In 2012 the European Commission approved Glybera, an AAV vector-based gene therapy product for the treatment of lipoprotein lipase deficiency in adults. It was the first gene therapy approved in the EU. The drug never received FDA approval in the US and was discontinued by its manufacturer in 2017 due to profitability concerns. It is no longer authorized for use in the EU.
Challenges and future prospective
Currently, there are still many challenges in viral gene therapy. Immune responses to viral gene therapies pose a challenge to successful treatment. However, responses to viral vectors at immune-privileged sites such as the eye may be reduced compared to other sites of the body. As with other forms of virotherapy, prevention of off-target genome editing is a concern. In addition to viral gene editing, other genome-editing technologies such as CRISPR gene editing have been shown to be more precise, with more control over the delivery of genes. As genome editing becomes a reality, it is also necessary to consider the ethical implications of the technology.
Viral immunotherapy
Viral immunotherapy is the use of virus to stimulate the body's immune system. Unlike traditional vaccines, in which attenuated or killed virus/bacteria is used to generate an immune response, viral immunotherapy uses genetically engineered viruses to present a specific antigen to the immune system. That antigen could be from any species of virus/bacteria or even human disease antigens, for example cancer antigens.
Vaccines are another method of virotherapy that use attenuated or inactivated viruses to develop immunity to disease. An attenuated virus is a weakened virus that incites a natural immune response in the host that is often undetectable. The host also develops potentially life-long immunity due to the attenuated virus's similarity to the actual virus. Inactivated viruses are killed viruses that present a form of the antigen to the host. However, long-term immune response is limited.
Cancer treatment
Viral immunotherapy in the context of cancer stimulates the body's immune system to better fight against cancer cells. Rather than preventing causes of cancer, as one would traditionally think in the context of vaccines, vaccines against cancer are used to treat cancer. The mechanism is dependent upon the virus and the treatment. Oncolytic viruses, as discussed in the previous section, stimulate the host immune system through the release of tumor-associated antigens upon lysis, as well as through disruption of the tumor microenvironment that otherwise helps cancer cells avoid the host immune system. CAR T cells, also mentioned in the previous section, are another form of viral immunotherapy that uses viruses to genetically engineer immune cells to kill cancer cells.
Other projects and products
Protozoal virotherapy
Viruses have been explored as a means to treat infections caused by protozoa. One such protozoa that potential virotherapy treatments have explored is Naegleria fowleri, which causes primary amebic meningoencephalitis (PAM). With a mortality rate of 95%, this disease-causing eukaryote has one of the highest pathogenic fatality rates known. Chemotherapeutic agents that target this amoeba for treating PAM have difficulty crossing blood-brain barriers. However, the driven evolution of virulent viruses of protozoal pathogens (VVPPs) may be able to develop viral therapies that can more easily access this eukaryotic disease organism by crossing the blood-brain barrier in a process analogous to bacteriophages. These VVPPs would also be self-replicating and therefore require infrequent administration, with lower doses, thus potentially reducing toxicity. While these treatment methods for protozoal disease may show great promise in a manner similar to bacteriophage viral therapy, a notable hazard is the evolutionary consequence of using viruses capable of eukaryotic pathogenicity. VVPPs will have evolved mechanisms of DNA insertion and replication that manipulate eukaryotic surface proteins and DNA editing proteins. VVPP engineering must therefore control for viruses that may be able to mutate and thereby bind to surface proteins and manipulate the DNA of the infected host.
See also
Cancer
Gene therapy
Oncolytic virus
Vector
Virosome, using modified viruses for drug delivery
References
Further reading
Experimental cancer treatments
Biotechnology | Virotherapy | Biology | 3,115 |
684,143 | https://en.wikipedia.org/wiki/Wedge%20sum | In topology, the wedge sum is a "one-point union" of a family of topological spaces. Specifically, if X and Y are pointed spaces (i.e. topological spaces with distinguished basepoints and ) the wedge sum of X and Y is the quotient space of the disjoint union of X and Y by the identification
where is the equivalence closure of the relation
More generally, suppose is a indexed family of pointed spaces with basepoints The wedge sum of the family is given by:
where is the equivalence closure of the relation
In other words, the wedge sum is the joining of several spaces at a single point. This definition is sensitive to the choice of the basepoints unless the spaces are homogeneous.
The wedge sum is again a pointed space, and the binary operation is associative and commutative (up to homeomorphism).
Sometimes the wedge sum is called the wedge product, but this is not the same concept as the exterior product, which is also often called the wedge product.
Examples
The wedge sum of two circles is homeomorphic to a figure-eight space. The wedge sum of $n$ circles is often called a bouquet of circles, while a wedge sum of arbitrary spheres is often called a bouquet of spheres.
A common construction in homotopy is to identify all of the points along the equator of an $n$-sphere $S^n$. Doing so results in two copies of the sphere, joined at the point that was the equator:
\[ S^n / {\sim} = S^n \vee S^n. \]
Let $\Psi$ be the map $\Psi : S^n \to S^n \vee S^n$, that is, the map identifying the equator down to a single point. Then addition of two elements $f, g \in \pi_n(X, x_0)$ of the $n$-dimensional homotopy group of a space $X$ at the distinguished point $x_0$ can be understood as the composition of $f$ and $g$ with $\Psi$:
\[ f + g = (f \vee g) \circ \Psi. \]
Here, $f, g : S^n \to X$ are maps which take a distinguished point $s_0 \in S^n$ to the point $x_0 \in X$. Note that the above uses the wedge sum of two functions, which is possible precisely because they agree at the point common to the wedge sum of the underlying spaces.
Categorical description
The wedge sum can be understood as the coproduct in the category of pointed spaces. Alternatively, the wedge sum can be seen as the pushout of the diagram $X \leftarrow \{\bullet\} \rightarrow Y$ in the category of topological spaces (where $\{\bullet\}$ is any one-point space).
Properties
Van Kampen's theorem gives certain conditions (which are usually fulfilled for well-behaved spaces, such as CW complexes) under which the fundamental group of the wedge sum of two spaces $X$ and $Y$ is the free product of the fundamental groups of $X$ and $Y$.
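For instance (a standard application spelled out here for concreteness, not taken from the original text), applying this to the figure-eight space of the Examples section gives
\[ \pi_1\left(S^1 \vee S^1\right) \cong \pi_1\left(S^1\right) * \pi_1\left(S^1\right) \cong \mathbb{Z} * \mathbb{Z}, \]
the free group on two generators, one for each circle.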
See also
Smash product
Hawaiian earring, a topological space resembling, but not the same as, a wedge sum of countably many circles
References
Rotman, Joseph. An Introduction to Algebraic Topology, Springer, 2004, p. 153.
Topology
Operations on structures
Homotopy theory | Wedge sum | Physics,Mathematics | 543 |
77,414,579 | https://en.wikipedia.org/wiki/Cactus%20alkaloids | Cactus alkaloids are alkaloids that occur in cactus. Structurally, they are tetrahydroisoquinolines and phenylethylamines.
Occurrence and representatives
Cactus alkaloids are found in the cactus family, particularly in the genus Lophophora, which alone contains over 40 known compounds. The alkaloids can be categorized into two groups, derived from phenethylamine and tetrahydroisoquinoline, respectively. The primary alkaloid in Lophophora williamsii (in terms of quantity) is mescaline, followed by pellotine. In the species Lophophora diffusa and Lophophora fricii, the primary alkaloid is pellotine, followed by anhalonidine in L. fricii and anhalamine in L. diffusa. In species outside the genus Lophophora, the content and variety of cactus alkaloids are significantly lower, but some contain compounds such as hordenine, N-methyltyramine, mescaline, or pellotine. A number of psychoactive cacti are found in the genus Echinopsis, such as the San Pedro cactus.
Biosynthesis
The biosynthesis of cactus alkaloids starts from the amino acid tyrosine and proceeds initially via tyramine and dopamine. By introducing a third hydroxy group and methylation of all three hydroxy groups, mescaline is formed. A tetrahydroisoquinoline scaffold can also be constructed from the intermediates of mescaline biosynthesis by a ring closure, resulting in anhalamine and anhalonidine. Anhalonidine is the biosynthetic precursor of anhalonine, in which a benzodioxole unit is formed from a hydroxyl and a methoxy group. Further branching of the biosynthetic pathways occurs through the methylation of the amino group from dopamine. This pathway leads to pellotine and subsequently to lophophorin.
Synthesis
Various synthetic methods for cactus alkaloids are known. In the case of tetrahydroisoquinoline compounds, the ring system is usually built up by a Bischler–Napieralski reaction. Among other routes, anhalamine, anhalidine, anhalonidine, and pellotine can be synthesized starting from mescaline.
Pharmacological effect
Mescaline is a psychedelic and is responsible for the hallucinogenic properties of Lophophora williamsii (peyote). The other alkaloids predominantly exhibit much less pronounced pharmacological effects and may have anticonvulsant properties. Pellotine was briefly used as a sedative in the early 20th century.
References
Alkaloids | Cactus alkaloids | Chemistry | 583 |
73,603,655 | https://en.wikipedia.org/wiki/Perth%20Museum | Perth Museum is a museum which opened on 30 March 2024. The museum is housed within Perth City Hall and aims showcase the city’s important collections to tell the tale of Scotland through the prism of Perth, the former capital of Scotland. The museum faces King Edward Street but is accessed from St John's Place.
History
In January 2019, BAM Construction began work on a £30 million programme of works to convert the Perth City Hall into a new heritage and arts attraction based on a design by Mecanoo. The new attraction would incorporate displays on the Stone of Destiny and the Kingdom of Alba.
A competition to name the building's forthcoming museum section was launched in March 2022, with the winning name being "Perth Museum", with 60% of the votes.
The collection
The centrepiece of the museum is the Stone of Destiny. The museum is also home to the mummified remains of an Egyptian or Kushite woman named Ta-Kr-Hb.
The collection also includes the 3,000-year-old Carpow Logboat, excavated from the Firth of Tay in 2001; the 17th century Glovers Incorporation dancing dress; and a cast of the record-breaking salmon caught in the Tay by Georgina Ballantine in 1922.
The museum, in collaboration with the British Museum and Māori advisors, has restored a rare cloak from New Zealand made entirely of kākāpō feathers. Dating from the 1810s–1820s, it is thought to be the only one in existence.
References
Museums in Perth, Scotland
Local museums in Scotland
Natural history museums in Scotland
Decorative arts museums in Scotland
Glass museums and galleries
Category B listed buildings in Perth and Kinross
Listed buildings in Perth, Scotland | Perth Museum | Materials_science,Engineering | 341 |
232,386 | https://en.wikipedia.org/wiki/Motor%20skill | A motor skill is a function that involves specific movements of the body's muscles to perform a certain task. These tasks could include walking, running, or riding a bike. In order to perform this skill, the body's nervous system, muscles, and brain have to all work together. The goal of motor skill is to optimize the ability to perform the skill at the rate of success, precision, and to reduce the energy consumption required for performance. Performance is an act of executing a motor skill or task. Continuous practice of a specific motor skill will result in a greatly improved performance, which leads to motor learning. Motor learning is a relatively permanent change in the ability to perform a skill as a result of continuous practice or experience.
A fundamental movement skill is a developed ability to move the body in coordinated ways to achieve consistent performance at demanding physical tasks, such as found in sports, combat or personal locomotion, especially those unique to humans, such as ice skating, skateboarding, kayaking, or horseback riding. Movement skills generally emphasize stability, balance, and a coordinated muscular progression from prime movers (legs, hips, lower back) to secondary movers (shoulders, elbow, wrist) when conducting explosive movements, such as throwing a baseball. In most physical training, development of core musculature is a central focus. In the athletic context, fundamental movement skills draw upon human physiology and sport psychology.
Types of motor skills
Motor skills are movements and actions of the muscles. There are two major groups of motor skills:
Gross motor skills – require the use of large muscle groups in our legs, torso, and arms to perform tasks such as walking, balancing, and crawling. The skill required is not extensive, and these skills are therefore usually associated with continuous tasks. Much of the development of these skills occurs during early childhood. We use our gross motor skills on a daily basis without putting much thought or effort into them. The performance level of gross motor skills remains unchanged after periods of non-use. Gross motor skills can be further divided into two subgroups: locomotor skills, such as running, jumping, sliding, and swimming; and object-control skills, such as throwing, catching, dribbling, and kicking.
Fine motor skills – require the use of smaller muscle groups to perform smaller movements. These muscles include those found in our wrists, hands, fingers, feet, and toes. These tasks are precise in nature, such as playing the piano, tying shoelaces, brushing your teeth, and flossing. Some fine motor skills may be susceptible to retention loss over a period of time if not in use. The phrase "if you don't use it, you lose it" is an apt way to describe these skills: they need to be continuously used. Discrete tasks such as shifting gears in an automobile, grasping an object, or striking a match usually require more fine motor skill than gross motor skill.
Both gross and fine motor skills can become weakened or damaged. Some reasons for these impairments could be caused by an injury, illness, stroke, congenital deformities (an abnormal change in the size or shape of a body part at birth), cerebral palsy, and developmental disabilities. Problems with the brain, spinal cord, peripheral nerves, muscles, or joints can also have an effect on these motor skills, and decrease control over them.
Development
Motor skills develop in different parts of a body along three principles:
Cephalocaudal – the principle that development occurs from head to tail. For example, infants first learn to lift their heads on their own, followed by sitting up with assistance, then sitting up by themselves. Followed by scooting, crawling, pulling up, and then walking.
Proximodistal – the principle that movement of limbs that are closer to the body develop before the parts that are further away. For example, a baby learns to control their upper arm before their hands and fingers. Fine movements of the fingers are the last to develop in the body.
Gross to specific – a pattern in which larger muscle movements develop before finer movements. For example, a child will go from only being able to pick up large objects, to then being able to pick up an object that is small, between the thumb and fingers. The earlier movements involve larger groups of muscles, but as the child grows, finer movements become possible and specific tasks can be achieved. An example of this would be a young child learning to grasp a pencil.
In children, a critical period for the development of motor skills is the preschool years (ages 3–5), as fundamental neuroanatomic structures show significant development, elaboration, and myelination over the course of this period. Many factors contribute to the rate at which children develop their motor skills. Unless afflicted with a severe disability, children are expected to develop a wide range of basic movement abilities and motor skills around a certain age. Motor development progresses in seven stages throughout an individual's life: reflexive, rudimentary, fundamental, sports skill, growth and refinement, peak performance, and regression. Development is age-related but is not age dependent. With regard to age, typically developing children are expected to attain the gross motor skills used for postural control and vertical mobility by 5 years of age.
There are six aspects of development:
Qualitative – changes in movement process result in changes in movement outcome.
Sequential – certain motor patterns precede others.
Cumulative – current movements are built on previous ones.
Directional – cephalocaudal or proximodistal
Multifactorial – numerous-factors impact
Individual – dependent on each person
In the childhood stages of development, gender differences can greatly influence motor skills. In the article "An Investigation of Age and Gender Differences in Preschool Children's Specific Motor Skills", girls scored significantly higher than boys on visual motor and graphomotor tasks. The results from this study suggest that girls attain manual dexterity earlier than boys. Variability of results in the tests can be attributed towards the multiplicity of different assessment tools used. Furthermore, gender differences in motor skills are seen to be affected by environmental factors. In essence, "parents and teachers often encourage girls to engage in [quiet] activities requiring fine motor skills, while they promote boys' participation in dynamic movement actions". In the journal article "Gender Differences in Motor Skill Proficiency From Childhood to Adolescence" by Lisa Barrett, the evidence for gender-based motor skills is apparent. In general, boys are more skillful in object control and object manipulation skills. These tasks include throwing, kicking, and catching skills. These skills were tested and concluded that boys perform better with these tasks. There was no evidence for the difference in locomotor skill between the genders, but both are improved in the intervention of physical activity. Overall, the predominance of development was on balance skills (gross motor) in boys and manual skills (fine motor) in girls.
Components of development
Growth – increase in the size of the body or its parts as the individual progresses toward maturity (quantitative structural changes)
Maturation – refers to qualitative changes that enable one to progress to higher levels of functioning; it is primarily innate
Experience or learning – refers to factors within the environment that may alter or modify the appearance of various developmental characteristics through the process of learning
Adaptation – refers to the complex interplay or interaction between forces within the individual (nature) and the environment (nurture)
Influences on development
Stress and arousal – stress and anxiety are the result of an imbalance between the demand of a task and the capacity of the individual. In this context, arousal defines the amount of interest in the skill. The optimal performance level is moderate stress or arousal.
Fatigue – the deterioration of performance when a stressful task is continued for a long time, similar to the muscular fatigue experienced when exercising rapidly or over a long period. Fatigue is caused by over-arousal. Fatigue impacts an individual in many ways: perceptual changes in which visual acuity or awareness drops, slowing of performance (reaction times or movements speed), irregularity of timing, and disorganization of performance. A study conducted by Meret Branscheidt concluded that fatigue interferes with the learning of new motor skills. In the experiment, participants were split into two different groups. One group worked the muscles in their hands until they were physically fatigued and then had to learn a new motor task, while the second group learned the task without being fatigued. Those that were fatigued had a harder time learning these new motor skills compared to those who were not. Even in the days following, after the fatigue had subsided, they still had difficulty learning those same tasks.
Vigilance – the ability to maintain attention over time and respond appropriately to relevant stimuli. When vigilance is lost, it can result in slower responses or the failure to respond to stimuli all together. Some tasks include actions that require little work and high attention.
Gender – gender plays an important role in the development of the child. Girls are more likely to be seen performing fine stationary visual motor-skills, whereas boys predominantly exercise object-manipulation skills. While researching motor development in preschool-aged children, girls were more likely to be seen performing skills such as skipping, hopping, or skills with the use of hands only. Boys were seen to perform gross skills such as kicking or throwing a ball or swinging a bat. There are gender-specific differences in qualitative throwing performance, but not necessarily in quantitative throwing performance. Male and female athletes demonstrated similar movement patterns in humerus and forearm actions but differed in trunk, stepping, and backswing actions.
Stages of motor learning
Motor learning is a change, resulting from practice. It often involves improving the accuracy of movements both simple and complex as one's environment changes. Motor learning is a relatively permanent skill as the capability to respond appropriately is acquired and retained.
The stages of motor learning are the cognitive phase, the associative phase, and the autonomous phase.
Cognitive phase – When a learner is new to a specific task, the primary thought process starts with, "What needs to be done?" Considerable cognitive activity is required so that the learner can determine appropriate strategies to adequately reflect the desired goal. Good strategies are retained and inefficient strategies are discarded. The performance is greatly improved in a short amount of time.
Associative phase – The learner has determined the most-effective way to do the task and starts to make subtle adjustments in performance. Improvements are more gradual and movements become more consistent. This phase can last for a long time. The skills in this phase are fluent, efficient, and aesthetically pleasing.
Autonomous phase – This phase may take several months to years to reach. The phase is dubbed "autonomous" because the performer can now "automatically" complete the task without having to pay any attention to performing it. Examples include walking and talking or sight reading while doing simple arithmetic.
Law of effect
Motor-skill acquisition has long been defined in the scientific community as an energy-intensive form of stimulus-response (S-R) learning that results in robust neuronal modifications. In 1898, Edward Thorndike proposed the law of effect, which states that the association between some action (R) and some environmental condition (S) is enhanced when the action is followed by a satisfying outcome (O). For instance, if an infant moves his right hand and left leg in just the right way, he can perform a crawling motion, thereby producing the satisfying outcome of increasing his mobility. Because of the satisfying outcome, the association between being on all fours and these particular arm and leg motions are enhanced. Further, a dissatisfying outcome weakens the S-R association. For instance, when a toddler contracts certain muscles, resulting in a painful fall, the child will decrease the association between these muscle contractions and the environmental condition of standing on two feet.
Feedback
During the learning process of a motor skill, feedback is the positive or negative response that tells the learner how well the task was completed.
Inherent feedback: after completing the skill, inherent feedback is the sensory information that tells the learner how well the task was completed. A basketball player will note that he or she made a mistake when the ball misses the hoop. Another example is a diver knowing that a mistake was made when the entry into the water is painful and undesirable.
Augmented feedback: in contrast to inherent feedback, augmented feedback is information that supplements or "augments" the inherent feedback. For example, when a person is driving over a speed limit and is pulled over by the police. Although the car did not do any harm, the policeman gives augmented feedback to the driver in order for him to drive more safely. Another example is a private tutor for a new student in a field of study. Augmented feedback decreases the amount of time to master the motor skill and increases the performance level of the prospect.
Transfer of motor skills: the gain or loss in the capability for performance in one task as a result of practice and experience on some other task. An example would be comparing the initial skill of a tennis player and a non-tennis player when playing table tennis for the first time. An example of negative transfer is an experienced typist taking longer than a new typist to adjust to a keyboard with randomly reassigned letters.
Retention: the performance level of a particular skill after a period of no use.
The type of task can have an effect on how well the motor skill is retained after a period of non-use:
Continuous tasks – activities like swimming, bicycling, or running; the performance level retains proficiency even after years of non-use.
Discrete tasks – an instrument, video game, or a sport; the performance level drops significantly but will be better than a new learner. The relationship between the two tasks is that continuous tasks usually use gross motor skills and discrete tasks use fine motor skills.
Brain structures
The regions of the frontal lobe responsible for motor skill include the primary motor cortex, the supplemental motor area, and the premotor cortex. The primary motor cortex is located in the precentral gyrus and is often visualized as the motor homunculus. By stimulating certain areas of the motor strip and observing where it had an effect, Penfield and Rassmussen were able to map out the motor homunculus. Areas on the body that have complex movements, such as the hands, have a bigger representation on the motor homunculus.
The supplemental motor area, which is just anterior to the primary motor cortex, is involved with postural stability and adjustment as well as coordinating sequences of movement. The premotor cortex, which is just below the supplemental motor area, integrates sensory information from the posterior parietal cortex and is involved with the sensory-guided planning of movement and begins the programming of movement.
The basal ganglia are an area of the brain where gender differences in brain physiology are evident. The basal ganglia are a group of nuclei in the brain that are responsible for a variety of functions, some of which include movement. The globus pallidus and putamen are two nuclei of the basal ganglia which are both involved in motor skills. The globus pallidus is involved with voluntary motor movement, while the putamen is involved with motor learning. Even after controlling for the naturally larger volume of the male brain, it was found that males have a larger volume of both the globus pallidus and putamen.
The cerebellum is an additional area of the brain important for motor skills. The cerebellum controls fine motor skills as well as balance and coordination. Although women tend to have better fine motor skills, the cerebellum has a larger volume in males than in females, even after correcting for the fact that males naturally have a larger brain volume.
Hormones are an additional factor that contributes to gender differences in motor skill. For instance, women perform better on manual dexterity tasks during times of high estradiol and progesterone levels, as opposed to when these hormones are low such as during menstruation.
An evolutionary perspective is sometimes drawn upon to explain how gender differences in motor skills may have developed, although this approach is controversial. For instance, it has been suggested that men were the hunters and provided food for the family, while women stayed at home taking care of the children and doing domestic work. Some theories of human development suggest that men's tasks involved gross motor skill such as chasing after prey, throwing spears and fighting. Women, on the other hand, used their fine motor skills the most in order to handle domestic tools and accomplish other tasks that required fine motor-control.
See also
Muscle memory
Motor control
Motor skill consolidation
Motor system
Sensorimotor stage
References
External links
Section about motor learning and control in the Wikibook "Stuttering"
What's the difference between fine motor and gross motor skills?
Motor control
Skills | Motor skill | Biology | 3,460 |
3,876,831 | https://en.wikipedia.org/wiki/NGC%202547 | NGC 2547 is a southern open cluster in Vela, discovered by Nicolas Louis de Lacaille in 1751 from South Africa. The star cluster is young with an age of 20-30 million years.
Observations with the Spitzer Space Telescope showed a shell around the B3 III/IV-type star HD 68478. This could be a sign of recent mass loss in this star.
A study using Gaia DR2 data showed that NGC 2547 formed about 30 million years ago together with a newly discovered star cluster, called [BBJ2018] 6. The star cluster NGC 2547 has an age similar to those of Trumpler 10, NGC 2451B, Collinder 135 and Collinder 140. It was suggested that all these clusters formed in a single event of triggered star formation.
NGC 2547 shows evidence for mass segregation down to 3 .
Cluster members with debris disks
Observations with the Spitzer Space Telescope have shown that ≤1% of the stars in NGC 2547 have infrared excess at 8.0 μm and 30–45% of the B- to F-type stars have infrared excess at 24 μm.
The system 2MASS J08090250-4858172, also called ID8 is located in NGC 2547 and showed substantial brightening of the debris disk at a wavelength of 3 to 5 micrometers, followed by a decay over a year. This was interpreted as a violent impact on a planetary body in this system.
NGC 2547 contains nine M-dwarfs with 24 μm excess. These could be debris disks, and the material could be orbiting close to the snow line of these stars, indicating that planet formation is underway in these systems. Later it was suggested that these M-dwarfs might host Peter Pan disks. 2MASS 08093547-4913033, one of the M-dwarfs with a debris disk in NGC 2547, was observed with the Spitzer Infrared Spectrograph. In this system the first detection of silicate from a debris disk around an M-type star was made.
Gallery
References
External links
SEDS – NGC 2547
2547
Open clusters
Vela (constellation)
Articles containing video clips
? | NGC 2547 | Astronomy | 449 |
39,083,294 | https://en.wikipedia.org/wiki/Synnefo | Synnefo is a complete open-source cloud stack written in Python that provides Compute, Network, Image, Volume and Storage services, similar to the ones offered by AWS. Synnefo manages multiple Google Ganeti clusters at the backend that handle low-level VM operations and uses Archipelago to unify cloud storage. To boost 3rd-party compatibility, Synnefo exposes the OpenStack APIs to users.
Synnefo is being developed by GRNET (Greek Research and Technology Network), and is powering two of its public cloud services: the ~okeanos service, which is aimed towards the Greek academic community, and the ~okeanos global service, which is open to all members of the GÉANT network.
History
In November 2006, in an effort to provide advanced cloud services for the Greek academic and research community, GRNET decides to launch a cloud storage service, similar to Amazon's S3, called Pithos. The project is outsourced and opens for public beta to the members of the Greek academic and research community in May 2009.
In June 2010, GRNET decides on the next step in this course: to create a complete, AWS-like cloud service (Compute/Network/Volume/Image/Storage). This service, called ~okeanos, aims to provide the Greek academic and research community with access to a virtualized infrastructure that various projects can take advantage of, e.g. experiments, simulations and labs. Given the non-ephemeral nature of the resources that the service provides, the need arises for persistent cloud servers. In search of a solution, in October 2010 GRNET decides to base the service on Google Ganeti and to design and implement all missing parts in-house.
In May 2011, the older Pithos service is rewritten from scratch in-house, with the intention of being integrated to ~okeanos as its storage service. Moreover, the new Pithos adds support for Dropbox-like syncing.
In July 2011, ~okeanos reaches its public alpha stage. This version (v0.5.2.1) includes the Identity, Compute, Network and a primitive Image service. The alpha release of the new, rewritten Pithos follows shortly after, in November 2011. It is marketed as Pithos+ and the old Pithos remains as a separate service. The new Pithos+, though not integrated to ~okeanos yet, provides syncing and sharing capabilities for files, as well as native syncing clients for Mac OS X, iPhone, iPad and Windows.
In March 2012, ~okeanos enters the public alpha2 phase. This version (v0.9) includes a complete integration of the new Pithos as part of ~okeanos and now acts as the unified store for Images and Files. Around this point, in April 2012, the ~okeanos team decides to refer to the whole software stack as Synnefo and starts writing the first version of the Synnefo documentation.
In December 2012, due to interest from other parties to the Synnefo stack, GRNET decides to conceptually separate the ~okeanos and Synnefo projects. Synnefo starts to become a branding-neutral, IaaS cloud computing software, while ~okeanos becomes its real-world application, an IaaS for the Greek academic and research community.
In April 2013, a new Synnefo version (v0.13) gets released after a huge cleanup and code refactoring. All separate components are merged under the single Synnefo repository. This is the first release as a unified project, containing all parts (Compute/Network/Volume/Image/Storage).
In June 2013, Synnefo v0.14 gets released. Since this version, Synnefo is branding-neutral (all remaining ~okeanos references are removed). It also gets a branding mechanism and the corresponding documentation, so that others can adapt it to their branding identity.
Overview
Synnefo has been designed to be deployed in any environment.
Components
Synnefo is modular in nature and consists of the following components:
Astakos (Identity/Account services)
Astakos is the Identity management component which provides a common user base to the rest of Synnefo. Astakos handles user creation, user groups, resource accounting, quotas, projects, and issues authentication tokens used across the infrastructure. It supports multiple authentication methods.
Pithos (File/Object Storage services)
Pithos is the Object/File Storage component of Synnefo. Users upload files on Pithos using either the Web UI, the command-line client, or native syncing clients. It is a thin layer mapping user-files to content-addressable blocks which are then stored on a storage backend. Files are split in blocks of fixed size, which are hashed independently to create a unique identifier for each block, so each file is represented by a sequence of block names (a hashmap). This way, Pithos provides deduplication of file data; blocks shared among files are only stored once.
The current implementation uses 4MB blocks hashed with SHA256. Content-based addressing also enables efficient two-way file syncing that can be used by all Pithos clients (e.g. the kamaki command-line client or the native Windows/Mac OS clients). Whenever someone wishes to upload an updated version of a file, the client hashes all blocks of the file and then requests the server to create a new version for this block sequence. The server will return an error reply with a list of the missing blocks. The client may then upload each block one by one, and retry file creation. Similarly, whenever a file has been changed on the server, the client can ask for its list of blocks and only download the modified ones.
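The hashmap mechanism described above can be illustrated with a short sketch. The following Python fragment (Python being the language Synnefo itself is written in) is a minimal, hypothetical illustration — the function names and the server-side set of known hashes are assumptions, not the actual Pithos client API — but the 4 MB block size and SHA256 hashing follow the description above.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks, per the Pithos design described above


def file_hashmap(path):
    """Split a file into fixed-size blocks and hash each one.

    The resulting list of block hashes (the "hashmap") represents the
    file's content independently of where the blocks are stored.
    """
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes


def blocks_to_upload(local_hashmap, server_known_hashes):
    """Return only the block hashes the server does not already store.

    Blocks already on the server (shared with other files or with older
    versions) are skipped, which is what gives Pithos its deduplication
    and efficient two-way syncing.
    """
    return [h for h in local_hashmap if h not in server_known_hashes]
```

A client would first ask the server to create a new version from the hashmap, then upload only the blocks reported missing, mirroring the retry protocol described above.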
Pithos runs at the cloud layer and exposes the OpenStack Object Storage API to the outside world, with custom extensions for syncing. Any client speaking to OpenStack Swift can also be used to store objects in a Pithos deployment. The process of mapping user files to hashed objects is independent from the actual storage backend, which is selectable by the administrator using pluggable drivers. Currently, Pithos has drivers for two storage backends:
files on a shared filesystem, e.g., NFS, Lustre, GPFS or GlusterFS
objects on a Ceph/RADOS cluster.
Whatever the storage backend, it is responsible for storing objects reliably, without any connection to the cloud APIs or to the hashing operations.
Cyclades (Compute/Network/Image/Volume services)
Cyclades is the Synnefo component that implements the Compute, Network, Image and Volume services. It exposes the associated OpenStack REST APIs: OpenStack Compute, Network, Glance and soon also Cinder. Cyclades is the part which manages multiple Ganeti clusters at the backend. Cyclades issues commands to a Ganeti cluster using Ganeti's Remote API (RAPI). The administrator can expand the infrastructure dynamically by adding new Ganeti clusters to reach datacenter scale. Cyclades knows nothing about low-level VM management operations, e.g., handling of VM creations, migrations among physical nodes, and handling of node downtimes; the design and implementation of the end-user API is orthogonal to VM handling at the backend.
There are two distinct, asynchronous paths in the interaction between Synnefo and Ganeti. The effect path is activated in response to a user request; Cyclades issues VM control commands to Ganeti over RAPI. The update path is triggered whenever the state of a VM changes, due to Synnefo- or administrator-initiated actions happening at the Ganeti level. In the update path, Synnefo monitors Ganeti's job queue to produce notifications to the rest of the infrastructure over a message queue.
Users have full control over their VMs: they can create new ones, start them, shutdown, reboot, and destroy them. For the configuration of their VMs they can select number of CPUs, size of RAM and system disk, and operating system from pre-defined Images including popular Linux distros (Debian, Ubuntu, CentOS, Fedora, Gentoo, Archlinux, OpenSuse), MS-Windows Server 2008 R2 and 2012 as well as FreeBSD.
The REST API for VM management, being OpenStack compatible, can interoperate with 3rd party tools and client libraries.
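Because the API is OpenStack Compute compatible, a generic HTTP client can create a VM without any Cyclades-specific tooling. The sketch below uses the standard OpenStack Compute request shape; the endpoint URL, token and flavor/image identifiers are placeholders — in a real deployment they would come from the site's published API documentation and from the identity service (Astakos).

```python
import requests

# Hypothetical values; a real deployment publishes its own compute URL,
# and the token is issued by the identity service (Astakos).
COMPUTE_URL = "https://cloud.example.org/compute/v2.0"
TOKEN = "user-auth-token"


def create_server(name, flavor_id, image_id):
    """Create a VM via the OpenStack Compute API that Cyclades exposes."""
    payload = {"server": {"name": name,
                          "flavorRef": flavor_id,
                          "imageRef": image_id}}
    resp = requests.post(f"{COMPUTE_URL}/servers",
                         json=payload,
                         headers={"X-Auth-Token": TOKEN})
    resp.raise_for_status()  # surface HTTP errors instead of failing silently
    return resp.json()["server"]
```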
The Cyclades UI is written in Javascript/jQuery and runs entirely on the client side for maximum responsiveness. It is just another API client; all UI operations happen with asynchronous calls over the API.
The networking functionality includes dual IPv4/IPv6 connectivity for each VM and easy, platform-provided firewalling, either through an array of pre-configured firewall profiles or through a roll-your-own firewall inside the VM. Users may create multiple private, virtual L2 networks, so that they can construct arbitrary network topologies, e.g., to deploy VMs in multi-tier configurations. The networking functionality is exported all the way from the backend to the API and the UI.
See also
OpenStack
Ganeti
Xen
KVM
References
External links
~okeanos website
~okeanos global website
Cloud infrastructure
Free software programmed in Python
Free software for cloud computing
Virtualization software for Linux | Synnefo | Technology | 1,979 |
61,243,323 | https://en.wikipedia.org/wiki/Helen%20Pearson%20%28science%20journalist%29 | Helen Pearson is a science journalist, author and Chief Magazine Editor for the journal Nature, where she oversees the journalism and opinion content. She is the author of The Life Project, a book about the British birth cohort studies, a series of longitudinal studies which have tracked thousands of people since their birth.
Education
Pearson obtained a Bachelor of Arts degree in Natural Sciences (Genetics) from the University of Cambridge in 1996. She was awarded her PhD in 1999 from the University of Edinburgh, for research completed at the Medical Research Council Human Genetics Unit. Her PhD thesis was on the role of the gene Pax6 in development of the cortex.
Career
Pearson joined Nature in 2001 as a reporter. She has interviewed and written about many high-profile scientists and academics for Nature including Robert Langer, Lawrence Summers and Joe Thornton. She has written freelance articles in outlets including The Guardian and The Independent.
Pearson's book, The Life Project: The extraordinary story of our ordinary lives was published by Allen Lane in 2016 and is about the British birth cohort studies. The oldest of these studies, the National Survey of Health and Development (NSHD), started in 1946, and the series includes the National Child Development Study, established in 1958, the 1970 British Cohort Study and the Millennium Cohort Study of babies born in 2000-2001. Pearson also included in her book the Avon Longitudinal Study of Parents and Children, also known as Children of the 90s.
Appearances
Pearson has given public lectures and talks at academic venues and literary and science festivals including the Edinburgh International Book Festival, the RSA, London School of Economics and Dartington Way with Words Literary Festival. She gave the keynote public lecture at the British Society for the History of Science conference in 2017.
She has appeared on national and international radio including Radio 4’s Start the Week, and has written about science writing and journalism as a career option for scientists.
In 2017, she gave a TED talk based on her book, The Life Project.
Awards
Pearson’s book The Life Project was named best science book of the year by The Observer, was a book of the year for The Economist, was longlisted for the Orwell Prize, was Highly Commended at the 2017 British Medical Association medical book awards, and was Highly Commended in the 2016 UK Medical Journalists’ Association Awards.
2013 Winner, Medical Journalists’ Association Award, for the feature article Coming of Age
2012 Winner, Best Feature, Association of British Science Writers Award, for the feature article Study of a Lifetime
2010 Winner, Best Feature, Association of British Science Writers Award, for the feature article One Gene, Twenty Years
2010 Winner, Wistar Science Journalism Award, for the feature article One Gene, Twenty Years
Published works
The Life Project
What makes some people happy, healthy and successful – and others not?
Britain’s birth cohort studies are the envy of the world
Lab Girl by Hope Jahren – what a life in science is really like
The lab that knows where your time really goes
Prehistoric proteins: Raising the Dead
Children of the 90s: Coming of Age
Study of a Lifetime
One Gene, Twenty Years
Protein engineering: The fate of fingers
At-Home DNA Tests Are Here, But Doctors Aren't Celebrating
External links
Official website
References
Living people
British science journalists
British women journalists
Women science writers
1973 births | Helen Pearson (science journalist) | Technology | 649 |
3,992,145 | https://en.wikipedia.org/wiki/HAZWOPER | Hazardous Waste Operations and Emergency Response (HAZWOPER; ) is a set of guidelines produced and maintained by the Occupational Safety and Health Administration which regulates hazardous waste operations and emergency services in the United States and its territories. With these guidelines, the U.S. government regulates hazardous wastes and dangerous goods from inception to disposal.
HAZWOPER applies to five groups of employers and their employees. This includes employees who are exposed (or potentially exposed) to hazardous substances (including hazardous waste) and who are engaged in one of the following operations as specified by OSHA regulations 1910.120(a)(1)(i-v) and 1926.65(a)(1)(i-v):
Cleanup operations required by a governmental body (federal, state, local or other) involving hazardous substances conducted at uncontrolled hazardous-waste sites
Corrective actions involving clean-up operations at sites covered by the Resource Conservation and Recovery Act of 1976 (RCRA) as amended (42 U.S.C. 6901 et seq.)
Voluntary cleanup operations at sites recognized by a federal, state, local, or other governmental body as uncontrolled hazardous-waste sites
Operations involving hazardous waste which are conducted at treatment, storage and disposal facilities regulated by Title 40 of the Code of Federal Regulations, parts 264 and 265 pursuant to the RCRA, or by agencies under agreement with the U.S. Environmental Protection Agency to implement RCRA regulations
Emergency response operations for releases of, or substantial threats of release of, hazardous substances (regardless of the hazard's location)
The most commonly used manual for HAZWOPER activities is Department of Health and Human Services Publication 85–115, Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities. Written for government contractors and first responders, the manual lists safety requirements for cleanups and emergency-response operations.
History
Although its acronym predates OSHA, HAZWOPER describes OSHA-required regulatory training. Its relevance dates to World War II, when waste accumulated during construction of the atomic bomb at the Hanford Site. Years later, high-profile environmental mishaps (such as Love Canal in 1978 and the attempted 1979 Valley of the Drums cleanup) spurred federal legislative action, awakening the U.S. to the need to control and contain hazardous waste. Two programs—CERCLA, the Comprehensive Environmental Response, Compensation, and Liability Act and the Resource Conservation and Recovery Act (RCRA) of 1976—were implemented to deal with these wastes. CERCLA (the Superfund) was designed to deal with existing waste sites, and RCRA addressed newly generated waste. The acronym HAZWOPER originally derived from the Department of Defense's Hazardous Waste Operations (HAZWOP), implemented on military bases slated for the disposal of hazardous waste left on-site after World War II. In 1989 production ended at the Hanford Site, and work shifted to the cleanup of portions of the site contaminated with hazardous substances including radionuclides and chemical waste. OSHA created HAZWOPER, with input from the Coast Guard, the National Institute for Occupational Safety and Health and the Environmental Protection Agency (EPA). In 1984, the combined-agency effort resulted in the Hazardous Waste Operations and Emergency Response Guidance Manual. On March 6, 1990, OSHA published Hazardous Waste Operations and Emergency Response 1910.120, the HAZWOPER standard codifying the health-and-safety requirements companies must meet to perform hazardous-waste cleanup or respond to emergencies.
Scope
Hazardous waste, as defined by the standard, is a waste (or combination of wastes) according to 40 CFR §261.3 or substances defined as hazardous wastes in 49 CFR §171.8.
Training levels
OSHA recognizes several levels of training, based on the work the employee performs and the degree of hazard faced. Each level requires a training program, with OSHA-specified topics and minimum training time.
General site workers initially require 40 hours of instruction, three days of supervised hands-on training and eight hours refresher training annually.
Workers limited to a specific task, or workers on fully characterized sites with no hazards above acceptable levels, require HAZWOPER 24-Hour initial training, one day of supervised hands-on training and eight hours of refresher training annually.
Managers and supervisors require the same level of training as those they supervise, plus eight hours.
Workers at a treatment, storage or disposal (TSD) facility handling RCRA waste require 24 hours of initial training, two days of supervised hands-on training as a best practice, and eight hours of refresher training annually. Under 1910.120(p)(8)(iii)(B), employee members of TSD facility emergency response organizations shall be trained to a level of competence in the recognition of health and safety hazards to protect themselves and other employees. This includes training in the methods used to minimize the risk from safety and health hazards; in the safe use of control equipment; in the selection and use of appropriate personal protective equipment; in the safe operating procedures to be used at the incident scene; in the techniques of coordination with other employees to minimize risks; in the appropriate response to overexposure from health hazards or injury to themselves and other employees; and in the recognition of subsequent symptoms which may result from overexposures.
The First Responder Awareness Level requires sufficient training to demonstrate competence in assigned duties.
The First Responder Operations Level requires Awareness-Level training plus eight hours.
Hazardous Materials Technicians require 24 hours training plus additional training to achieve competence in specialized areas.
Hazardous Materials Specialist requires 24 hours training at the Technician level, plus additional specialized training.
On-scene Incident Commander requires 24 hours training plus additional training to achieve competence in designated areas.
In some instances, training levels overlap; other levels are not authorized by OSHA because their training is not sufficiently specific. A site safety supervisor (or officer) and a competent industrial hygienist or other technically qualified, HAZWOPER-trained person should be consulted.
Training and certification sources
An employer must ensure that the training provider covers the areas of knowledge required by the standard and provides certification to students that they have passed the training. Since the certification is for the student, not the employer, the trainer must cover all aspects of HAZWOPER operations and not only those at the current site. OSHA training requires cleanup workers to focus on personal protective equipment separately from emergency-response equipment. HAZWOPER training covers the four levels of PPE, Level A through Level D, which vary in the degree of skin, respiratory, and eye protection they provide.
See also
Dangerous goods
Firefighter
References
External links
Department of Health and Human Services Publication 85–115, "Occupational Safety and Health Guidance Manual for Hazardous Waste Site Activities"
OSHA HAZWOPER FAQ
OSHA Federal Registers: Hazardous Waste Operations
The Centers for Disease Control and Prevention
Occupational Safety and Health Administration
Hazardous materials
Rules | HAZWOPER | Physics,Chemistry,Technology | 1,401 |
34,625,143 | https://en.wikipedia.org/wiki/NGC%206642 | NGC 6642 is a globular cluster located 26,700 light-years from Earth, in the constellation Sagittarius. Many "blue stragglers" (stars which seemingly lag behind in their rate of aging) have been spotted in this globular, and it is known to be lacking in low-mass stars.
References
External links
Globular clusters
6642
Sagittarius (constellation) | NGC 6642 | Astronomy | 87 |
17,119,866 | https://en.wikipedia.org/wiki/PacketTrap | PacketTrap Networks, Inc., later known as just PacketTrap, was a provider of network management and traffic analysis software for midsize companies.
History
PacketTrap was founded in 2006 and headquartered in San Francisco, California. It received $5 million in Series A venture capital from August Capital in 2007.
The company was purchased by Quest Software in 2009. PacketTrap was then bought by Dell in 2012 as part of a buyout of Quest. Dell discontinued development of the software in 2013. PacketTrap was established enough that other companies continued support of the legacy product through licensing agreements with Dell.
Features
PacketTrap software features included desktop management, server management, cloud asset monitoring, and others.
See also
Comparison of network monitoring systems
References
External links
PacketTrap's Products and Services
Startup City - InformationWeek
Network World, Denise Dubie, Nov, 2008. "10 IT Management Companies to Watch"
Information Week, John Foley, Aug, 2008. "PacketTrap Challenges CA and IBM"
Microsoft TechNet, Greg Steen, July, 2008. "New Products for IP Pros"
PC Magazine, Jamie Bernstein, May, 2008. "Review: PacketTrap pt360"
Network World, Denise Dubie, March, 2008 "Kicking the Tires of Management Software"
What PC, Tony Luke, Feb, 2008. "A Tough Nut to Crack"
Information Week, John Foley, Feb, 2008. "The Demise Of Commercial Open Source"
Software companies established in 2006
Networking companies of the United States
System administration
File transfer software
Port scanners
Network analyzers
2006 establishments in California
Software companies based in the San Francisco Bay Area
Defunct software companies of the United States
Software companies disestablished in 2013
2009 mergers and acquisitions
2012 mergers and acquisitions
Quest Software | PacketTrap | Technology | 358 |
70,383,921 | https://en.wikipedia.org/wiki/Oceanic%20freshwater%20flux | Oceanic freshwater fluxes are defined as the transport of non saline water between the oceans and the other components of the Earth's system (the lands, the atmosphere and the cryosphere). These fluxes have an impact on the local ocean properties (on sea surface salinity, temperature and elevation), as well as on the large scale circulation patterns (such as the thermohaline circulation).
Introduction
Freshwater fluxes in general describe how freshwater is transported between and stored in the Earth's systems: oceans, land, the atmosphere and the cryosphere. While the total amount of water on Earth has remained virtually constant over human timescales, the relative distribution of that total mass between the four reservoirs has been influenced by past climate states, such as glacial cycles. Since the oceans account for 71% of the Earth's surface area, and 86% of evaporation (E) and 78% of precipitation (P) occur over the ocean, oceanic freshwater fluxes represent a large part of the world's freshwater fluxes.
There are five major freshwater fluxes into and out of the ocean, namely:
Precipitation
Evaporation
Riverine discharge
Ice freezing or melting (Sea ice freezing or melting, ice shelf melting, iceberg melting)
Groundwater discharge
whereby precipitation (1), riverine discharge (3) and groundwater discharge (5) are inputs that add freshwater to the ocean, evaporation (2) is an output, i.e. a negative freshwater flux, and ice (4) can represent either a freshwater loss (freezing) or a gain (melting).
The quantity and the spatial distribution of those fluxes determine the ocean salinity (the salt concentration of the ocean water). A positive freshwater flux leads to mixing of water with low to zero salinity into the salty ocean water, resulting in a decrease of the water salinity. This is, for example, the case in regions where precipitation is greater than evaporation. Conversely, if evaporation becomes greater than precipitation, the ocean salinity increases, since only water (H2O) evaporates, but not the ions (e.g. Na+, Cl−) which make up salt.
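As a back-of-envelope illustration of this dilution effect: salt is conserved while the water volume grows, so adding a depth of freshwater to a mixed layer lowers its salinity proportionally. The following Python sketch uses purely illustrative numbers — the mixed-layer depth and freshwater input are assumptions, not values from this article:

```python
def diluted_salinity(salinity, mixed_layer_depth_m, net_freshwater_m):
    """Salinity after mixing a depth of freshwater into a surface layer.

    Salt content is conserved while the water column grows, so
    S_new = S_old * h / (h + freshwater_depth).
    """
    return salinity * mixed_layer_depth_m / (mixed_layer_depth_m + net_freshwater_m)


# Illustrative numbers: 1 m of net precipitation mixed into a 50 m
# surface layer of salinity 35 lowers the salinity by roughly 0.7:
print(diluted_salinity(35.0, 50.0, 1.0))  # ~34.31
```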
Estimates of the annual mean freshwater fluxes into the ocean are for precipitation (88% of the total freshwater input), for riverine discharge from land (9%), for ice discharge from land (<1%) and for saline and fresh groundwater discharge respectively (<1%). The annual mean freshwater flux out of the ocean via evaporation is estimated to be .
The salinity, along with temperature and pressure, determines the density of the water. Higher salinity and cooler water results in a higher water density (see also spiciness of ocean water). Since differences in water density drive large-scale ocean circulation, freshwater fluxes are most important for ocean circulation patterns like the Thermohaline Circulation (THC).
Freshwater fluxes into the ocean
Evaporation and precipitation
There are large spatial and temporal variations in precipitation and evaporation patterns. The dominant cause of precipitation is the adiabatic cooling of rising moist air, whose water vapor becomes supersaturated above a certain altitude and condenses out. Areas of large precipitation are therefore areas of convection, which is most prominent in the Intertropical Convergence Zone (ITCZ), a band of latitudes around the equator.
Evaporation describes the process by which surface water changes its phase from liquid to gaseous. This process requires a large amount of energy, due to the strong hydrogen bonds between water molecules. This results in a global evaporation pattern in which high evaporation rates are observed mostly in warm tropical and subtropical regions, where the surface is heated by solar radiation, which provides the necessary energy. At higher latitudes the evaporation rate decreases. Additionally, the evaporation rate is influenced by the relative humidity of the air overlying the water surface. As the air approaches saturation with water vapour, the evaporation rate decreases, i.e. a lower air–sea humidity gradient decreases evaporation.
The actual freshwater flux that the ocean experiences in a certain timeframe is the net amount of precipitation and evaporation in this time interval. This means, if evaporation minus precipitation (E-P) is positive, the ocean experiences a net loss of freshwater, while the opposite is true for a negative value for E-P. On a global scale, the subtropical gyres and western boundary currents of the Atlantic, Pacific and Indian Oceans are regions where evaporation exceeds precipitation. In contrast, the ITCZ as well as high latitudes (> 40° N/S) are regions of net precipitation, although the ITCZ exceeds the high latitudes in terms of quantity of rainfall. The equatorial region of net precipitation is centered north of the equator in the Atlantic and Pacific Oceans but is broader and extends further south in the Indian Ocean. An additional center of strong net precipitation is located over the western Pacific-Indonesian region.
Both Atlantic subtropical gyres are net evaporative, as well as the Pacific subtropical gyres, although they show an east–west transition with increased evaporation near the eastern boundaries. This spatial pattern can be attributed to the fact that the overlying air becomes saturated in humidity, subsequently leading to decreasing evaporation rates as the air is driven westward by the trade winds.
An estimation of the annual mean freshwater flux into the ocean is for precipitation, while annual mean freshwater flux out of the ocean via evaporation is estimated to be .
When considering all the ocean basins, the only ocean basin which experiences net precipitation averaged over the year is the North Pacific. The other ocean basins, namely the South Pacific, the North and South Atlantic and the Indian Ocean, are areas of net evaporation, albeit with varying strength. The net evaporation over the South Pacific Ocean is distinctly smaller than over the other ocean basins, although the South Pacific Ocean covers an area as large as the whole Atlantic Ocean and one third larger than the Indian Ocean.
It is very likely that the energy increase (heat flux) observed in the upper 700 m of the global oceans can be attributed to anthropogenic climate change and increased radiative forcing due to greenhouse gas emissions. Although observed trends in evaporation minus precipitation suggest that the Atlantic Ocean will become saltier, while the Indian Ocean will become fresher in the coming decades, it is easier to project global patterns of air-sea flux based on changes in heat content and salinity while regional trends are rarely robust.
Seasonal cycle
The amount and even the sign of the net total freshwater flux E-P from an ocean basin can change throughout the year.
The net evaporation over much of the subtropics is most pronounced during the winter season due to the increased strength of the easterly trades in winter. This applies to both hemispheres. The wind impacts evaporation in two ways. Firstly directly, whereby a greater wind speed carries water vapour faster away from the evaporating surface, leading to a faster re-establishment of the air–sea humidity gradient, which evaporation had previously reduced and which is necessary for high evaporation rates. Secondly indirectly, since enhanced surface wind strengthens the wind-driven subtropical gyre. Since the subtropical gyres drive a northward heat transport via the western boundary currents, the sea surface temperatures warm up along the paths of the currents and cause more evaporation by providing more energy and enlarging the air–sea humidity gradients. In the extratropics the net precipitation is not explainable by a simple seasonal cycle. In the Atlantic and Pacific Oceans the net mid-latitude precipitation reaches its peak during June–August synchronously in the northern and the southern hemisphere, i.e. in different seasons.
In the North and South Atlantic Oceans and in the North Pacific Ocean, evaporation exceeds precipitation in winter and spring. During summer and autumn the sign of E-P changes for all ocean basins but the South Atlantic Ocean, which is always net evaporative. When considering the Atlantic as a whole, the constant net loss of freshwater in the South Atlantic Ocean determines the sign of the total freshwater flux and cancels out the net precipitation from the North Atlantic in summer. This means the Atlantic in total is net evaporative during the whole year due to the prominent influence of the South Atlantic Ocean. The opposite can be stated about the Pacific Ocean as a whole, which shows an excess of precipitation over evaporation in every season. This pattern of evaporation minus precipitation is consistent with the observed higher salinity in the Atlantic compared to the Pacific Ocean. In the Indian Ocean net evaporation dominates most of the year, except during December–February.
Changes due to climate change
Past
The report from Working Group 1 in the IPCC 2021 AR6 concluded that patterns of evaporation minus precipitation (E-P) over the ocean have enhanced the present mean pattern of wetting and drying. In general, saline surface waters had become saltier (especially in the Atlantic Ocean) while relatively fresh surface waters had become fresher (especially in the Indian Ocean). However, AR6 assessed only low confidence in globally averaged trends in E-P over the 20th century due to observational uncertainty, with a spatial pattern dominated by evaporation increases over the ocean. Even coarse-resolution models show that mean SST and variability in SST are sensitive to changes in flux forcing.
Future
Based on the assessment of Coupled Model Intercomparison Project 6 (CMIP6) models, AR6 concluded that it is very likely that, in the long term, global mean ocean precipitation will increase with increasing Global Surface Air Temperature. Annual mean and global mean precipitation will very likely increase by 1–3% per °C warming. Hereby, the precipitation patterns will also change and exhibit substantial regional and seasonal differences. Following the general trend ‘wet-gets-wetter-dry-gets-drier’, precipitation will very likely increase over high latitudes and the tropical ocean and likely increase in large parts of the monsoon regions, but likely decrease over the subtropics, including the Mediterranean, southern Africa and southwest Australia, in response to greenhouse-gas induced warming. Although these are the expected general trends there can be distinct deviations from those pattern changes on a local scale. One possible impact of the corresponding trend in ocean salinity is an altering of the Thermohaline Circulation, which is explained below.
Continental discharge
Another source of freshwater discharge into the ocean is runoff from continents, through river estuaries. The average yearly freshwater discharge from continents is estimated around .
Compared to other ocean basins, the discharge is relatively high into the western tropical Atlantic, led by the Amazon and the Orinoco river estuaries. This causes some local effects as well as adjustments to the large-scale thermohaline circulation, as discussed in the "Influence on thermohaline circulation" section.
Seasonal cycles
Most rivers exhibit some sort of seasonal cycle in their discharge, often (but not always) related to seasonal variation in the precipitation. The figure on the right shows the seasonal cycle of the runoff from the 10 largest rivers (Amazon, Mississippi, Congo, Yenisey, Paraná, Orinoco, Lena, Changjiang, Mekong, Brahmaputra/Ganges), compared with the local precipitation cycle and two different P-E estimates.
In several rivers, the runoff peak follows the precipitation peak, with different delays reflecting the time needed for the surface runoff to travel to the river mouth. For shorter rivers such as the Changjiang, Mekong and Brahmaputra/Ganges, the lag between the precipitation and the runoff peaks is about a month or less, while in the Amazon and the Orinoco rivers the lag is two or more months.
Other large rivers at higher latitudes, such as the Yenisey, Lena and Mississippi, seem to experience a runoff cycle decoupled from the precipitation cycle. The sharp June peak of the Lena and Yenisey cycle is likely due to snowmelt, as is the less prominent peak between March and May in the Mississippi river.
The Paraná and the Congo rivers do not experience significant seasonal runoff cycles despite their precipitation cycles; this is probably due to human intervention through river damming.
Multi-annual cycles and climate change impacts
River runoff is also affected by other meteorological cycles that span over several years.
In particular, a significant correlation with El Niño-Southern Oscillation (ENSO) phase and strength has been observed for several major rivers, as well as a correlation with Interdecadal Pacific Oscillation (IPO).
These irregular cycles, and other possible factors of internal variation which are yet to be researched fully, make it difficult to identify the changes in river runoff that can be ascribed to human induced climate change.
However, climate simulations under a moderate emission scenario (RCP4.5) show significant changes in river runoff by the end of the century, with decreased runoff in Central America, Mexico, the Mediterranean Basin, Southern Africa and much of South America, and increased runoff in the rest of Eurasia and North America.
These changes are consistent with the expected precipitation changes, but a component of earlier snowmelt and permafrost thawing will also have to be considered.
A 2018 study has recorded the variation in river runoff into each oceanic basin from 1986 to 2016, showing an increased discharge into the Arctic Ocean and a decreased discharge into the Indian Ocean over the last decade.
Local impacts of river runoff
The input of freshwater from river runoff may seem negligible compared to that of precipitation, but several studies have shown that its impact cannot be neglected.
The largest annual changes in surface salinity have been observed in the western tropical Atlantic, peaking between spring and summer, when the precipitation peak in the ITCZ coincides with the peak in the Amazon discharge. The low-salinity water influx has also been shown to follow the seasonal variation of currents on the Brazilian coast (northwestward in spring, eastward in summer).
As a direct consequence of the freshwater discharge, rivers have an impact on the local Sea Surface Temperature (SST). This effect is theoretically present at all river mouths, but it has been possible to measure it only for very large rivers.
The freshwater placed on top of the saline water stabilizes the stratification, restricting the vertical mixing of colder water from greater depths and hence increasing the local SST. Simulations have shown a large SST anomaly especially close to the mouth of the Congo river between July and April, of up to +1 °C; a similar anomaly has also been simulated near the mouth of the Amazon river between May and October.
River discharge also has an impact on the local sea level, through two different processes, which are roughly represented by the first two terms of the following equation:

$\eta = \dfrac{p_b}{\rho_0 g} - \dfrac{1}{\rho_0} \int_{-H}^{0} \rho' \, dz - \dfrac{p_a}{\rho_0 g}$

in which $\eta$ is the sea surface variation from the mean, $p_b$ is the variation of bottom sea pressure, $\rho'$ is the variation of sea water density from the mean ($\rho' = \rho - \rho_0$, with $\rho_0$ a reference density, $g$ the gravitational acceleration and $H$ the water depth), and $p_a$ is the atmospheric pressure at sea level.
The first term represents the simple increase of the ocean mass: the importance of this contribution can be established through a "hosing experiment", which entails simulating the same water input as the river but with the same salinity as ocean water. While it has been shown that the sea level increase caused by this contribution is carried away by barotropic waves on a timescale of days, it can still have an impact when the water basin is semi-enclosed (as in the Arctic) or the water input is particularly large. Durand et al. simulated a "hosing experiment" with seasonally variable input in the Bay of Bengal, which showed sea level oscillations of the order of 0.1 m.
Further, since the freshwater from the river runoff has a lower density than the seawater, the density anomaly $\rho'$ is negative across the first water layer, which results in a positive contribution to the sea level from the second term. This phenomenon is called the halosteric effect. The contribution of the halosteric effect generally lasts longer than the ocean mass contribution, while being of the same order of magnitude (0.1–0.2 m).
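A rough numeric check of this order of magnitude, approximating the integral in the second term by a single uniform fresh surface layer; the layer depth and density anomaly below are illustrative assumptions, not values from the studies cited above:

```python
RHO_0 = 1025.0  # reference sea water density, kg/m^3


def halosteric_rise(density_anomaly, layer_depth_m):
    """Approximate the second (steric) term for one uniform fresh layer.

    With rho' constant over a layer of thickness h, the term
    -(1/rho_0) * integral(rho' dz) reduces to -(rho'/rho_0) * h,
    which is positive when the layer is fresher (rho' < 0).
    """
    return -density_anomaly / RHO_0 * layer_depth_m


# Assumed values: a -2 kg/m^3 anomaly over a 50 m fresh layer gives
# about 0.1 m of sea level rise, consistent with the 0.1-0.2 m range:
print(halosteric_rise(-2.0, 50.0))  # ~0.098 m
```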
The third term of the equation represents the dependency on atmospheric pressure, which is unaffected by river runoff.
Other oceanic freshwater fluxes
Groundwater
The total flux of groundwater to the ocean can be divided into three different fluxes: fresh submarine groundwater discharge, near-shore terrestrial groundwater discharge and recirculated sea water. The contribution of fresh groundwater accounts for less than 1% of the total freshwater input into the ocean and is therefore negligible on a global scale. However, due to a high variability of groundwater discharge there can be an important contribution to coastal ecosystems on a local scale.
Ice freezing and melting
Two categories of ice have to be considered in context of oceanic freshwater fluxes: sea ice and (recently) grounded ice like ice shelfs and icebergs.
Sea ice is considered part of the oceanic water budget; therefore, its melting or freezing does not constitute an input or output of water in general. However, on a regional scale and an intra-annual timescale, it can be an important determinant of ocean salinity, by adding freshwater during the melting process or by rejecting salt during the freezing process. For example, over the Arctic Ocean evaporation and precipitation rates are quite low, respectively about 5–10 cm/yr and 20–30 cm/yr in liquid water equivalent. The freshwater cycle in the Arctic Ocean is, therefore, significantly determined by the freezing and melting of sea ice, for which characteristic rates are about 100 and 50 cm/yr, respectively. If the ice drifts during the long intervals between the phase changes (frozen and liquid), the result is a net local distillation where the sea ice was formed and a net local freshening where the sea ice melts. This freezing and melting of sea ice, with their accompanying salinity changes, supply local buoyancy forcing that influences ocean circulation.
The calving of a previously grounded ice sheet into the ocean as an iceberg, as well as the melting of ice shelves by warm ocean water, constitutes a net freshwater influx, not only on a local but on a whole-ocean scale. Although the total input from the cryosphere is small compared to the total input of precipitation and riverine discharge (less than 1% on a global scale), on a local scale it can be an important contributor of freshwater and can influence ocean circulation.
Influence on thermohaline circulation (THC)
The Thermohaline Circulation is part of the global ocean circulation. Although this phenomenon is not fully understood yet, it is known that its driving processes are thermohaline forcing and turbulent mixing. Thermohaline forcing refers to density-gradient-driven motions, whereby density is determined by the temperature ('thermo') and salt concentration ('haline') of the water. Heat and freshwater fluxes at the ocean's surface therefore play a key role in forming ocean currents. Those currents exert a major effect on regional and global climate.
The Atlantic Meridional Overturning Circulation (AMOC) is the Atlantic branch of the THC. Hereby, northward-moving surface water releases heat and water to the atmosphere and therefore becomes colder, more saline and consequently denser. This leads to the formation of cold deep water in the North Atlantic. This cold deep water flows back to the south at a depth of 2–3 km until it joins the Antarctic Circumpolar Current.
The described differences in net precipitation-evaporation patterns between the Atlantic and the Pacific, with the Atlantic being net evaporative and the Pacific experiencing net precipitation, leads to a distinct difference in salinity contrast, with the Atlantic being more saline than the Pacific. This freshwater flux driven salinity contrast is the main reason that the Atlantic supports a meridional overturning circulation and the Pacific does not. The lower surface salinity of the North Pacific, due to high precipitation rates, inhibits deep convection in the Pacific. AR6 concluded from model simulations from the Climate Model Intercomparison Project 6 (CMIP6) that the AMOC will very likely weaken in the 21st century, but there is low confidence in the models’ projected timing and magnitude of AMOC decline. The projected AMOC weakening can be explained by the CMIP6 projection of an increase in high-latitude temperature and precipitation, along with freshwater input from increased melting of the Greenland Ice Sheet, which cause high-latitude North Atlantic surface waters to become less dense and more stable, preventing overturning and weakening AMOC.
Impacts of river runoff on the large-scale thermohaline circulation
While evaporation and precipitation processes are the main cause of the salinity anomalies that drive the THC, large rivers seem to have a non-negligible impact as well. In particular, a 2017 study simulated the shutdown of the Amazon runoff and measured its impact on the AMOC. It was found that the Amazon shutdown could cause a strengthening of the AMOC, increased upwelling and lower SST at the equator and in the southern tropics. The cooler SST over the equator could consequently cause a reduction of the rainfall in the ITCZ and a weakening of the meridional atmospheric cells and the westerly winds in the extratropics. North America and the Arctic would then experience warmer winters (with anomalies up to 1.3 °C), while Northern Eurasia would have cooler and drier conditions. In the southern hemisphere, the Amazonia region could also experience drier conditions, possibly causing a positive feedback. The paper concluded by advising caution in the building of dams over the Amazon river (more than a hundred new dams are being considered for construction in the next few decades).
In what could be seen as a small-scale case study, the damming of the Nile river in 1964 (Aswan High Dam) has been shown to have had an impact on the THC of the Mediterranean Sea. A steady increase in the surface and intermediate waters' salinity has been recorded in the West Mediterranean over the last 40 years. This is connected to a growth of the activity in the deep water formation sites in the South Adriatic. The damming of the Nile has been found to be responsible for about 40% of this salinity increase (and hence of the increase in deep water formation).
References
Oceanography | Oceanic freshwater flux | Physics,Environmental_science | 4,613 |
26,835,066 | https://en.wikipedia.org/wiki/Millbank%20bag | A Millbank bag is a portable water filtration device made of tightly woven canvas for outdoor use. They are light, compact, and easy but slow to use. The bag is filled with water, which filters through the canvas by gravity. It is useful for removing sediment and organic matter but the water may require further sterilisation before being drunk.
See also
LifeSaver bottle
References
Water filters
Drinking water | Millbank bag | Chemistry | 84 |
10,883,673 | https://en.wikipedia.org/wiki/NGC%206242 | NGC 6242 is an open cluster of stars in the southern constellation Scorpius. It can be viewed with binoculars or a telescope at about 1.5° to the south-southeast of the double star Mu Scorpii. This cluster was discovered by French astronomer Nicolas-Louis de Lacaille in 1752 from South Africa. It is located at a distance of approximately from the Sun, just to the north of the Sco OB 1 association. The cluster has an estimated age of 77.6 million years.
A microquasar with the designation GRO J1655-40 is located in the vicinity of NGC 6242 and is moving away from the cluster with a runaway space velocity of . It may have originated in the cluster during a supernova explosion ago.
References
External links
SEDS – NGC 6242
Open clusters
Scorpius
6242 | NGC 6242 | Astronomy | 175 |
14,330,135 | https://en.wikipedia.org/wiki/Symmetrical%20double-sided%20two-way%20ranging | In radio technology, symmetrical double-sided two-way ranging (SDS-TWR) is a ranging method that uses two delays that naturally occur in signal transmission to determine the range between two stations:
Signal propagation delay between two wireless devices
Processing delay of acknowledgements within a wireless device
This method is called symmetrical double-sided two-way ranging because:
It is symmetrical in that the measurements from station A to station B are a mirror-image of the measurements from station B to station A (ABA to BAB).
It is double-sided in that only two stations are used for the ranging measurement: station A and station B.
It is two-way in that a data packet (called a test packet) and an ACK packet are used.
Signal propagation delay
A special type of packet (a test packet) is transmitted from station A (node A) to station B (node B). Since the time the packet needs to travel through space per meter is known (from physical laws), the difference between the time it was sent by the transmitter and the time it was received at the receiver can be converted into a distance. This time delay is known as the signal propagation delay.
Processing delay
Station A now expects an acknowledgement from station B. A station takes a known amount of time to process the incoming test packet, generate an acknowledgement (ACK packet), and prepare it for transmission. The total time taken to process this acknowledgement is known as the processing delay.
Calculating the range
The acknowledgement sent back to station A includes in its header those two delay values – the signal propagation delay and the processing delay. A further signal propagation delay can be calculated by station A for the received acknowledgement, just as this delay was calculated for the test packet. These three values can then be used by an algorithm to calculate the range between the two stations.
Verifying the range calculation
To verify that the range calculation was accurate, the same procedure is repeated by station B sending a test packet to station A and station A sending an acknowledgement to station B. At the end of this procedure, two range values are determined and an average of the two can be used to achieve a fairly accurate distance measurement between these two stations.
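A minimal sketch of the resulting arithmetic is shown below. The variable names are hypothetical, and the formula is the commonly used symmetric estimate in which each station measures only its own round-trip and reply times, so that averaging the two exchanges cancels most of the error caused by clock offsets between the stations:

```python
C = 299_792_458.0  # signal propagation speed (speed of light), m/s


def sds_twr_range(round_a, reply_b, round_b, reply_a):
    """Estimate the A-B range from the two symmetric exchanges.

    round_a -- time station A measures between sending its test packet
               and receiving station B's acknowledgement
    reply_b -- station B's processing delay, reported in the ACK header
    round_b, reply_a -- the same quantities for the mirrored B->A exchange
    """
    time_of_flight = ((round_a - reply_b) + (round_b - reply_a)) / 4.0
    return C * time_of_flight


# Example: about 333 ns of one-way propagation corresponds to ~100 m.
print(sds_twr_range(2.000666e-3, 2.0e-3, 1.500666e-3, 1.5e-3))  # ~99.8 m
```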
See also
Multilateration
Real-time locating system
References
Radio technology
Wireless locating | Symmetrical double-sided two-way ranging | Technology,Engineering | 454 |
16,620,919 | https://en.wikipedia.org/wiki/Network%20Centric%20Product%20Support | Network Centric Product Support (NCPS) is an early application of an Internet of Things (IoT) computer architecture developed to leverage new information technologies and global networks to assist in managing maintenance, support and supply chain of complex products made up of one or more complex systems, such as in a mobile aircraft fleet or fixed location assets such as in building systems. This is accomplished by establishing digital threads connecting the physical deployed subsystem with its design Digital Twins virtual model by embedding intelligence through networked micro-web servers that also function as a computer workstation within each subsystem component (i.e. Engine control unit on an aircraft) or other controller and enabling 2-way communications using existing Internet technologies and communications networks - thus allowing for the extension of a product lifecycle management (PLM) system into a mobile, deployed product at the subsystem level in real time. NCPS can be considered to be the support flip side of Network-centric warfare, as this approach goes beyond traditional logistics and aftermarket support functions by taking a complex adaptive system management approach and integrating field maintenance and logistics in a unified factory and field environment. Its evolution began out of insights gained by CDR Dave Loda (USNR) from Network Centric Warfare-based fleet battle experimentation at the US Naval Warfare Development Command (NWDC) in the late 1990s, who later lead commercial research efforts of NCPS in aviation at United Technologies Corporation. Interaction with the MIT Auto-ID Labs, EPCglobal, the Air Transport Association of America ATA Spec 100/iSpec 2200 and other consortium pioneering the emerging machine to machine Internet of Things (IoT) architecture contributed to the evolution of NCPS.
Purpose
Simply put, this architecture extends the existing World Wide Web infrastructure of networked web servers down into the product at its subsystem's controller level using a Systems Engineering "system of systems" nested approach. Its core is an embedded dual function webserver/computer workstation connected to the product controller's test ports (as used in retrofit applications, or integrated directly into the controller for new products), hence providing access to operational cycles, sensor and other information in a clustered, internet addressable node that allows for local or remote access, and the ability to host remotely reconfigurable software that can collect and process data from its mated subsystem controller onboard and pull in other computing resources throughout the network. It can then establish a localized wireless World Wide Web in and around the product that can be securely connected to by a mechanic with any web browser-equipped handheld independent of the greater World Wide Web, as well as seamlessly integrate into global networks when external wireless communications is available - thus creating data Digital Twins at the factory, connecting deployed product usage in the Product lifecycle with constantly updated digital threads. This allows for an integrated approach which enables both offline and online updates to occur. Legacy systems usually require a human to physically connect a laptop to the system controller or a telematic solution to manually collect data and carry it back to a location where it can be later transferred to the factory or to restricted webserver-based download sites for offboard analysis.
The architecture also enables communications with other micro-webservers in its computer cluster (e.g., within an aircraft), or with higher-level clusters (such as an internet portal for fleet and flight operations managers), giving access to data resources, personnel and factory engineers at remote office computers through the World Wide Web. As stated previously, the system operates asynchronously: it does not have to be connected to the World Wide Web at all times to function. Rather, it operates locally and then synchronizes two-way information relevant to the subsystem, acting as an onboard gateway (telecommunications) that connects with other gateways in the network - airborne or on the ground - on an as-needed basis when communications are available. This can be accomplished through a wireless LAN, satellite, cellular network or other wireless or wired communications capability.
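The asynchronous, store-and-forward behavior described above can be sketched as follows. This is a minimal illustration, assuming a gateway-upload callable and a link-availability flag supplied by the host system; neither is specified in the source.

```python
import json, time

class StoreAndForwardNode:
    """Logs subsystem data locally; synchronizes only when a link exists."""

    def __init__(self):
        self.outbox = []                      # records awaiting upload

    def record(self, reading):
        self.outbox.append({"t": time.time(), "data": reading})

    def try_sync(self, link_up, send):
        # 'send' stands in for an upload to an airborne or ground gateway;
        # 'link_up' reflects whether any LAN/satellite/cellular path works.
        if not link_up:
            return 0                          # keep operating locally
        sent = 0
        while self.outbox:
            send(json.dumps(self.outbox[0]))
            self.outbox.pop(0)                # drop only after a good send
            sent += 1
        return sent
```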
Security of the network is critical, and the architecture can utilize standard web security protocols, from Public-key cryptography to embedded hardware encryption devices.
Typical Usage
The extension of the World Wide Web architecture into the product matters because all decisions about manufacturing spare parts, scheduling flights, and other factory OEM and airline operator functions are driven primarily by what happens to the product in the field (chiefly its rate of wear and impending failures). Predicting the rate of wear - and hence the impact on operations and the forecast for producing spare parts - is critical for optimizing operations for all involved. A complex system such as a fleet of aircraft, vehicles or fixed-location products can be managed in this manner. For example, coupled with technologies such as RFID, the system could track parts from the factory to the aircraft, continue to read the configuration of the subsystem's replaceable tagged parts on board, map that configuration to hours run and duty cycles, then process and communicate the projected wear rate through the World Wide Web back to the operator or factory. In this way mechanical wear rates and future failures can be predicted more accurately, and the forecasting of spare parts manufacturing and shipment can be significantly improved. This is called Prognostics Health Monitoring (PHM), which has become possible in recent years with the advent of electronic controllers; it is the latest evolutionary step in aircraft support and maintenance management, which began as individual processes before World War II and solidified into a manual tracking system to support aircraft fleets in the Korean War. Support for the mechanic comes in the form of local wireless access to technical information stored, and remotely updated, on the micro-webserver component relevant to that product: service bulletins, factory updates, fault-code-driven intelligent 3D game-like maintenance procedures, and social media applications for sharing product issues and maintenance procedure improvements in the field, including collaborative two-way voice, text and image communications. Note that this architecture can be applied to any system that requires monitoring and trending, including mobile medical applications that monitor human physiology when the subject is equipped with data sensors.
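By way of illustration, here is a toy PHM-style projection of remaining part life from hours run and duty cycles; the linear wear model and every constant are assumptions made for the sketch, not figures from the source.

```python
def remaining_hours(hours_run, duty_cycles,
                    wear_per_hour=1.0, wear_per_cycle=0.5,
                    wear_limit=10_000.0):
    # Accumulated wear so far, in arbitrary wear units.
    wear = hours_run * wear_per_hour + duty_cycles * wear_per_cycle
    # Wear rate per flight hour, assuming the observed
    # cycles-per-hour ratio continues.
    rate = wear_per_hour + wear_per_cycle * (duty_cycles / max(hours_run, 1e-9))
    return max(wear_limit - wear, 0.0) / rate

# A part with 3,000 h and 4,000 cycles: about 3,000 h to projected limit.
print(round(remaining_hours(3000, 4000), 1))
```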
Background and Other Examples
The original micro-webserver component (i.e., the onboard unit) that is key to enabling the NCPS architecture was first prototyped and demonstrated in 2001 by David Loda, Enzo Macchia, Sam Quadri and Bjorn Stickling at United Technologies' Pratt & Whitney division and initially tested on board a Fairchild-Dornier 328 (later AvCraft 328) regional jet in January 2002. It was introduced to the public and demonstrated at the Farnborough Air Show in July 2002 in prototype form, and again in 2004 as a flight-certified product offering marketed by United Technologies as the DTU and later FAST data management units, for service in a number of aircraft and helicopter fleets.
A similar complex-systems approach in a completely different application is successfully embodied in the Eisenhower Interstate Highway System, though what is transported in NCPS is information, not cars and trucks. Network Centric Product Support, or net-centric product support, is an architectural concept: it connects the major avenues already existing in global communications and the internet down into the mobile product, extending maintenance and supply chain processes into an integrated, product-centric system with a real-time feedback loop to the designers, factory and maintainers on product performance and reliability. For example, to gain information about a particular engine on a mobile aircraft, it is most efficient to send the inquiry to the engine directly and host all information generated by and relevant to that system there, as well as to synchronize a twin remote database for access and queueing when the engine system is out of communications. Other examples where this can be applied include shipping containers, automobiles, spacecraft, appliances, human medical monitoring, or any other complex product with sensors and subsystems that require maintenance support and monitoring.
Many organizations are beginning to see the value of a netcentric (also spelled "net-centric") approach to managing complex systems, including the Network Centric Operations Industry Consortium (NCOIC), an association of leading aerospace and defense contractors in the Network Centric Warfare arena. Network-centric thinking for aircraft operations, including Network Enabled Operations (NEO) demonstrations, also figures prominently in the commercial realm, notably in the Next Generation Air Transportation System (NextGen) approach being taken by the US Government to revamp air transportation management in the 21st century.
References
Flight Global Article, Farnborough Air Show, July 2002: Server is Like Having an Onboard Engineer
Aviation Week Article, Farnborough Air Show, July 2004: Onboard Internet Microserver
Aviation Today Article, Nov 2004: Right Hemisphere Pioneers "Just in Time" Training
Desktop Engineering Article, April 2005: Interactive 3D Visualization Heats Up
BNET Article, Oct 2006: Data Transmission Units on Falcon 2000EX and Falcon 7X Business Jets
Network Centric Operations Industry Consortium (NCOIC)
AutoID Labs, Cambridge University June 2005: Networked RFID Research at Pratt & Whitney
RFID Aerospace Alignment Minutes, Nov 2006
Consensus Software Awards, Right Hemisphere, 2006: Product Graphics Management
NCOIC Report: Comparison of SESAR & NEXTGEN Concept of Operations
FAA CRADA Award Oct 2008: Network Centric Airborne Web Server Test Capability on an FAA Technical Center Aircraft for use in NextGen
Report to FAA May 2010: SWIMLINK Secure Airborne HTTP Data Communications
Aviation International News Article, May 2014: Pratt & Whitney Canada's FAST Systems Earns STC
Related US Patents, filings/issued 2001-2014
Related European Patents, filings/issued 2001-2014
Computer architecture | Network Centric Product Support | Technology,Engineering | 1,966 |
16,921,836 | https://en.wikipedia.org/wiki/F.%20Gordon%20A.%20Stone | Francis Gordon Albert Stone CBE, FRS, FRSC (19 May 1925 – 6 April 2011), always known as Gordon, was a British chemist who was a prolific and decorated scholar. He specialized in the synthesis of main group and transition metal organometallic compounds. He was the author of more than 900 academic publications resulting in an h-index of 72 in 2011.
Early life
Gordon Stone was born in Exeter, Devon in 1925, the only child of Sidney Charles Stone, a civil servant, and Florence Beatrice Stone (née Coles). He received his B.A. in 1948 and Ph.D. in 1951, both from Christ's College, Cambridge (Cambridge University), England, where he studied under Harry Julius Emeléus.
Academic life
After graduating from Cambridge, he was a Fulbright Scholar at the University of Southern California for two years, before becoming an instructor in the Chemistry Department at Harvard University, where he was appointed assistant professor in 1957. He was the Robert A. Welch Distinguished Professor of Chemistry at Baylor University, Texas until 2010, but his most productive period was as Professor of Inorganic Chemistry at Bristol University, England (1963–1990), where he published hundreds of papers over the course of 27 years. In research he competed with his contemporary Geoffrey Wilkinson.
Elected to the Royal Society of Chemistry in 1970, and to the Royal Society in 1976, he was awarded the Davy Medal "In recognition of his many distinguished contributions to organometallic chemistry, including the discovery that species containing carbon-metal or metal-metal multiple bonds are versatile reagents for synthesis of cluster compounds with bonds between different transition elements" in 1989.
Among the many foci of his studies were complexes of fluorocarbon, isocyanide, polyolefin, alkylidene and alkylidyne ligands. At Baylor, he maintained a research program on boron hydrides, a lifelong interest.
In 1988 he chaired the Review Committee commissioned by the British Government (the now-dissolved University Grants Committee) to carry out a review of chemistry in UK academia ("University Chemistry — The Way Forward", "The Stone Report"). His main recommendation, "that the UGC [...] fund properly not fewer than 30 chemistry departments" and that "at least 20 of these departments have 30 or more academic staff [...] to compete successfully at the international level" was never implemented.
His autobiography Leaving No Stone Unturned, Pathways in Organometallic Chemistry, was published in 1993. With Wilkinson, he edited the influential series Comprehensive Organometallic Chemistry. With Robert West, he edited the series Advances in Organometallic Chemistry.
The Gordon Stone Lecture series at the University of Bristol is named in his honour.
Annual Stone Symposiums are also held at Baylor University in his honor.
Awards
Fellow of the Royal Society of Chemistry (1970)
Fellow of the Royal Society (1976), Vice-President 1987-1988
Chugaev Medal of the Kurnakov Institute (Russian Academy of Sciences) (1978)
Royal Society of Chemistry’s Ludwig Mond Award (1983)
American Chemical Society’s award in Inorganic Chemistry (1985)
Royal Society of Chemistry’s Sir Edward Frankland Prize Lectureship (1988)
Royal Society's Davy Medal (1989)
Royal Society of Chemistry’s Longstaff Prize (1990)
Commander of the Order of the British Empire (1990)
Personal life
He married Judith Hislop (1928–2008) of Sydney, Australia, in 1956; they had three sons.
References
Further reading
F. Gordon A. Stone, (1993) Leaving No Stone Unturned, Pathways in Organometallic Chemistry, American Chemical Society. Autobiography.
20th-century British chemists
Inorganic chemists
Fellows of the Royal Society
Fellows of the Royal Society of Chemistry
Commanders of the Order of the British Empire
Academics of the University of Bristol
Baylor University faculty
1925 births
2011 deaths
Scientists from Exeter
Alumni of Christ's College, Cambridge | F. Gordon A. Stone | Chemistry | 811 |
40,997,391 | https://en.wikipedia.org/wiki/Puente%20%28holiday%29 | A puente (Spanish for "bridge") is a holiday in Spain: a day off taken to bridge the time between a weekend and a holiday, thereby creating a long weekend. A puente typically occurs when a holiday falls on a Tuesday or Thursday; workers will then take the Monday or Friday as a puente, a day off. Some businesses will close down altogether.
In 2012, the Spanish government led by Mariano Rajoy, faced with the eurozone crisis, initiated measures to move public holidays to Mondays and Fridays. The aim of the measure was to avoid puentes. Gayle Allard, an economist at IE Business School, has said that the measure can improve productivity. The Spanish Catholic Church opposed the measure, which would move the day on which the Feast of the Immaculate Conception is observed.
In some years, such as 2022, December 6 (Constitution Day) falls on a Tuesday and December 8 (feast of the Immaculate Conception) falls on a Thursday. Thus, a period of 9 consecutive days has only three work days.
Some workers take a very long weekend by taking just one, two or three extra days off.
Such multiple puentes are sometimes called acueductos ("aqueducts", keeping the metaphor) or macropuentes ("macro-bridges").
References
Public holidays in Spain
Metaphors referring to objects
Bridges | Puente (holiday) | Engineering | 256 |
49,503,671 | https://en.wikipedia.org/wiki/Future%20Anterior | Future Anterior is a biannual peer-reviewed academic journal published by the University of Minnesota Press. The editor-in-chief is Jorge Otero-Pailos (Columbia Graduate School of Architecture, Planning and Preservation).
History
The journal was established in 2004 by Jorge Otero-Pailos and is dedicated to the "critical examination of historic preservation." The journal's title is a reference to the grammatical tense, futur antérieur, and is an allusion to the field of historic preservation as "concerned both with what has not yet happened (future) and what has already happened (anterior)."
Scope
In addition to its primary focus on historic preservation history, theory, and criticism, it also includes essays on various topics including "art, philosophy, law, geography, archeology, planning, materials science, cultural anthropology, and conservation." Each issue contains articles, an exhibition review, a feature piece, a book review, and an artist intervention.
Impact
At its establishment, Otero-Pailos said that the journal "signals the maturation of the field of preservation and a shift…towards an active involvement in the understanding and creative transformation of human environments." Future Anterior also provides a forum for discussion of the field of preservation and architecture and influenced the 2006-2007 student architecture competition, Preservation as Provocation: Re-designing Saarinen's Cranbrook Academy of Art in Michigan, which "called on students to address a complex set of criteria that roughly broke down as engagement, program, time, and technology, prompted by the theory of historic provocation suggested by Otero-Pailos's writings in the journal that he edits, Future Anterior."
The journal is abstracted and indexed in the Arts & Humanities Citation Index, Current Contents/Arts & Humanities, and Scopus.
References
External links
Online index
Journal page at Columbia University
Academic journals published by university presses of the United States
Biannual journals
Academic journals established in 2004
English-language journals
Architecture journals
Historic preservation | Future Anterior | Engineering | 413 |
28,498,504 | https://en.wikipedia.org/wiki/Project%20Space%20Track | Project Space Track was a research and development project of the US Air Force, to create a tracking system for all artificial satellites of the Earth and space probes, domestic and foreign.
Project Space Track was started in 1957 at the Air Force Research Laboratory at Laurence G. Hanscom Field, now Hanscom Air Force Base, in Bedford, Massachusetts shortly after the launch of Sputnik I. Observations were obtained from some 150 sensors worldwide by 1960 and regular orbital predictions were issued to the sensors and interested parties.
Space Track was the only organization that used observations from all types of sources: radar, optical, radio, and visual. All unclassified observations were shared with the Smithsonian Astrophysical Observatory. In 1961, the system was declared operational and assigned to the new 1st Aerospace Surveillance and Control Squadron until 1976, as part of NORAD's Space Detection and Tracking System (SPADATS).
Establishment
On 29 November 1957, shortly after the launch of Sputnik I on 4 October, two German expatriates, Dr. G. R. Miczaika (from Prussia) and Dr. Eberhart W. Wahl (from Berlin) formed Project Space Track (originally called Project Harvest Moon). It was established in Building 1535 of the Geophysics Research Directorate (GRD), Air Force Cambridge Research Center, Laurence G. Hanscom Field, Massachusetts. Both scientists had backgrounds in astronomy, although Dr. Wahl's PhD was in meteorology.
The mission of Space Track was to create a tracking system to track and compute orbits for all artificial satellites of the Earth, including both US and Soviet payloads, booster rockets, and debris. With the Soviet launch of Luna 1 on 2 January 1959, Space Track also started tracking space probes. The first major tracking effort was Sputnik II, which was launched on 3 November 1957 and contained the dog Laika.
An Electronic Support System Program Office, 496L, had been established in February 1959, with the program office at Waltham, Massachusetts under the direction of Col Victor A. Cherbak, Jr. By late 1959, the SPO had received additional responsibilities under the DoD Advanced Research Projects Agency (ARPA) to develop techniques and equipment for military surveillance of satellites. Continuing development of Space Track was an integral part of this effort.
Since December 1958, Space Track had been the interim National Space Surveillance Control Center. In December 1959, Space Track was moved to a new building, the National Space Surveillance Control Center (NSSCC), which was formally dedicated on 9 February 1960. The NSSCC was part of the Air Force Command and Control Development Division (known informally as C²D²), Air Research and Development Command. Dr. Harold O. Curtis of Lincoln Laboratory was the Director of the NSSCC. The name Space Track continued in use.
By 1960, there were about 70 people in the NSSCC involved in operations.
Space Track continued tracking satellites and space probes until 1961. In late 1960, USAF Vice Chief of Staff General Curtis E. LeMay decided that the research and development system was ready to become operational.
Eleven officers and one Senior Master Sergeant were selected to be the initial cadre of what became the 1st Aerospace Surveillance and Control Squadron. The initial cadre came to Space Track for training that started 7 November 1960. (The cadre was assigned to the new squadron on 6 March 1961.)
On 1 July 1961, the new squadron became operational under the USAF Air Defense Command at Ent AFB, Colorado Springs, part of NORAD's Space Detection and Tracking System (SPADATS). The first Squadron Commander was Colonel Robert Miller. The Space Track organization at Hanscom Field assumed a backup role for squadron operations.
In cavalier disregard of the Air Force Regulation on the subject, which specified clearly that unclassified nicknames, such as Space Track, should be two words (while codewords, such as CORONA, which were then themselves classified, should be only one word), ADC immediately decided to rename Space Track as SPACETRACK and the name has stuck since – although the web site of the 614th Air & Space Operations Center, which currently performs the mission, has returned to two words. The 614th is part of the Joint Space Operations Center at Vandenberg AFB, California.
Sensors
The Department of Defense had decided that the US Air Force should develop a command and control system for tracking satellites and that the US Army and US Navy should develop sensors for the purpose. US Navy development was at Dahlgren, Virginia and the US Army's program was at the Aberdeen Proving Ground, Maryland.
Drs. Miczaika and Wahl had assembled a list of facilities that could track satellites, either by monitoring telemetry or by using radar. The latter were mostly astronomical radio telescopes equipped with radars used in studying the moon (e.g., Jodrell Bank Observatory in England directed by Sir Bernard Lovell, Millstone Hill of Lincoln Laboratory in Massachusetts directed by Dr. Gordon Pettingill, and a radar at the Stanford Research Institute in California, directed by Walter Jaye). Two USAF radars, one on Shemya Island in the Aleutians and the other at Diyarbakır, Turkey, had been built to observe Soviet missile launches and became valuable for satellite tracking as well. BMEWS prototype radars on Trinidad also participated. Normally, the first radar reports of a new satellite launch from Tyuratam (Baikonur) came from Shemya and the first of a new launch from Kapustin Yar came from Diyarbakır. A USAF radar at the Laredo Test Site in Texas and one at Moorestown, New Jersey also participated later. Observations were received from the Royal Canadian Air Force research radar at Prince Albert, Saskatchewan, Canada. The Goldstone facility of the Jet Propulsion Laboratory was exceptionally helpful with radio observations of Soviet space probes.
In general, observations were in the form of time, azimuth and elevation (and range, from radars) as measured at the site or, in some cases, such as at Goldstone, in astronomical form (Right Ascension and Declination). Some early observations were very primitive, such as a report that a satellite passed near a star that could be identified.
On rare occasions, the observations were purely verbal. For example, individuals on ships, planes, and islands in the Caribbean reported sightings of the decay of satellite 1957 β, and one aircraft was able to provide a detailed observation because the navigator happened to be completing a celestial fix at the exact time.
Some sites could record the Doppler shift of satellite transmissions or, in a few cases, the Doppler shift of their own transmissions reflected from the orbiting object. One Doppler site was the Space Track Doppler Field Site at Billerica, Massachusetts. The observations obtained by this technique gave the time of closest approach to the station.
The Navy program was operated as NAVSPASUR and is now operated by the US Air Force. The Army program, although achieving accurate tracking results with doppler techniques and furnishing observations to Space Track, did not achieve funding for deployment.
One of SPASUR's contributions to satellite tracking was the invention of a map of the earth that showed both poles, so that the position of all satellites, including those in polar orbits, could be shown. This was not possible with Mercator or other projections, which do not show the entire earth. The map was, of course, very distorted at the poles (the North pole was the entire top line of the long map) but the concept proved very useful.
Optical sensors included the twelve Baker-Nunn satellite tracking cameras operated for NASA by the Smithsonian Astrophysical Observatory (SAO), three Baker-Nunn cameras operated by the USAF, and the Boston University camera at Patrick Air Force Base operated by Walter Manning.
SAO cameras were at Woomera, Australia; Jupiter, Florida; Organ Pass, New Mexico; Olifantsfontein, Union of South Africa; Cadiz, Spain; Mitaka, Japan; Naini Tal, India; Arequipa, Peru; Shiraz, Iran; Curaçao, Netherlands West Indies; Villa Dolores, Argentina; and Haleakala, Maui, Hawaii. USAF cameras were at Oslo, Norway; Edwards AFB, California; and Santiago, Chile. Two additional cameras were later added to the USAF inventory – one of the USAF cameras was transferred to the Royal Canadian Air Force at Cold Lake, Alberta, Canada in 1961.
Volunteer amateur astronomers as part of the SAO Moonwatch Team also contributed observations. Among these many volunteers was Arthur S. Leonard of Davis, California, leader of the Sacramento, California team.
By 1960, Space Track had about 150 cooperating sensors. Space Track was the only US organization that used all methods of observation to track satellites.
The observations were recorded on IBM punched cards for computer processing. All unclassified observations were exchanged daily with the Smithsonian Astrophysical Observatory, Cambridge, Massachusetts.
Space Track maintained close contact with the US National Security Agency, the CIA Foreign Missile and Space Analysis Center (FMSAC), and Headquarters USAF Intelligence (Major Harry Holeman).
It was helpful that the USSR press service, TASS, always announced new Soviet satellite or space probe launches promptly, so Space Track was free to discuss the new objects without worrying about compromising sources. Translations of the Russian announcements were provided by the Foreign Broadcast Information Service (FBIS).
Orbital computations
Dr. Wahl had been computing all the satellite ephemerides by hand using a Friden Square Root Calculator, the most advanced mechanical calculator then available.
The method for computing ephemerides (documented in detail in a 1960 report by P.M. Fitzpatrick and G.B. Findley) was originally developed by Dr. Wahl, based on historic astronomical methods.
In late August 1958, Space Track obtained its first computer, an IBM 610, used in conjunction with the Cambridge Research Center IBM 650. The IBM 610 was a very primitive machine, the programming of which was done with a plug board (similar to the ones used for IBM accounting machines in the early 1950s) and a punched paper tape.
The new NSSCC building was equipped with an IBM 709 and, a few months later, with an IBM 7090. Major programming of the new computers was done by the Aeronutronic Division of the Ford Motor Company, Newport Beach CA. The Wolf Corporation also supported the NSSCC.
The ephemeris computations were issued in what was called a bulletin. The bulletin listed each equatorial crossing of the satellite and described the path between crossings. Space Track also furnished "look angles" - azimuth and elevation (altitude) directions - so that specific sensors could point in the correct direction to acquire the satellite. Special versions of the look angles were tailored for specific sites, such as the Army and Navy sensor development projects. At the NSSCC, these computations were transmitted by the Duty Controller.
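For illustration, here is a simplified look-angle computation of the kind those bulletins supported. It assumes a spherical Earth and a satellite position already known in Earth-centered (ECEF) coordinates - the NSSCC's actual programs worked from full orbital models - and the site coordinates are merely illustrative.

```python
from math import radians, degrees, sin, cos, atan2, asin, sqrt

R_EARTH = 6371.0  # km, spherical-Earth simplification

def look_angles(lat_deg, lon_deg, sat_ecef_km):
    """Azimuth, elevation and range from a ground site to a satellite."""
    lat, lon = radians(lat_deg), radians(lon_deg)
    obs = (R_EARTH * cos(lat) * cos(lon),
           R_EARTH * cos(lat) * sin(lon),
           R_EARTH * sin(lat))
    rho = tuple(s - o for s, o in zip(sat_ecef_km, obs))
    # Rotate the site-to-satellite vector into local East-North-Up axes.
    e = -sin(lon) * rho[0] + cos(lon) * rho[1]
    n = (-sin(lat) * cos(lon) * rho[0]
         - sin(lat) * sin(lon) * rho[1] + cos(lat) * rho[2])
    u = (cos(lat) * cos(lon) * rho[0]
         + cos(lat) * sin(lon) * rho[1] + sin(lat) * rho[2])
    rng = sqrt(e * e + n * n + u * u)
    return degrees(atan2(e, n)) % 360.0, degrees(asin(u / rng)), rng

# Check: a satellite 800 km directly above a site gives elevation ~90 deg.
lat, lon = 42.5, -71.3                     # illustrative coordinates
f = (R_EARTH + 800.0) / R_EARTH
sat = (f * R_EARTH * cos(radians(lat)) * cos(radians(lon)),
       f * R_EARTH * cos(radians(lat)) * sin(radians(lon)),
       f * R_EARTH * sin(radians(lat)))
print(look_angles(lat, lon, sat))          # (~0.0, ~90.0, ~800.0)
```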
Space Track also issued public catalogues listing all the satellites, including ones no longer in orbit, called Satellite Situation Reports, which gave basic orbital elements for each piece. At first, this took less than a page of type. The Smithsonian Astrophysical Observatory also issued a similar document but, in 1961, NASA's Goddard Space Flight Center assumed responsibility for both reports, combining them into one document.
In October 1960, George Westrum presented a short college-level course in Celestial Mechanics for those NSSCC personnel who wished to participate.
Operations
By international agreement under the International Astronomical Union, satellites and space probes were initially named with Greek letters, following the system for naming stars in constellations. The year of launch was included in the name, so Sputnik I was 1957 Alpha. The payload, when known, was called Alpha I and the other pieces were also numbered, so the carrier rocket was usually Alpha II - in the case of Sputnik I, it was not initially clear which object was the payload, so the payload became Alpha II. The 24 Greek letters were soon used up, so the next sequence started Alpha Alpha and so forth. By 1962 Beta Psi had been launched and it was clear that the Greek alphabet system would no longer work. Thereafter, launches were numbered, starting with 1963-1, with the payload normally being 1963-1A, and so on.
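The Greek-letter scheme behaves like a bijective base-24 numbering over the Greek alphabet. The following small sketch reproduces it (an illustration only, not a historical program; piece numbers such as Alpha I, Alpha II were appended separately per object).

```python
GREEK = ["Alpha", "Beta", "Gamma", "Delta", "Epsilon", "Zeta", "Eta",
         "Theta", "Iota", "Kappa", "Lambda", "Mu", "Nu", "Xi", "Omicron",
         "Pi", "Rho", "Sigma", "Tau", "Upsilon", "Phi", "Chi", "Psi",
         "Omega"]

def designation(year, n):
    """Letter designation for the n-th launch (1-based) of a year."""
    letters = []
    while n > 0:
        n, r = divmod(n - 1, 24)   # bijective base-24 digit extraction
        letters.append(GREEK[r])
    return f"{year} " + " ".join(reversed(letters))

print(designation(1957, 1))    # 1957 Alpha (Sputnik I)
print(designation(1961, 25))   # 1961 Alpha Alpha (25th launch of the year)
```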
As soon as a new satellite or space probe was launched, Space Track alerted the primary sensors and processed observations as they came in, issuing a preliminary tracking bulletin promptly and updating it after about 24 hours when additional observations from around the world had been obtained. Routine bulletins continued to be issued regularly as needed to keep up with the changing orbits, some of which decayed fairly rapidly in the atmosphere. There was another flurry of activity when the last revolutions occurred, as it was difficult to forecast the exact reentry path.
The NSSCC had a room dedicated as a filter center for monitoring communications and obtaining observations. The filter center had displays listing the orbiting and decayed satellites and a projector system that could show the motion of one satellite over the earth. The displays were devised by A/3C Peter P. Kamrowski. The center was manned by a Duty Controller and his assistants. The center was designed by Senior Controller 1st Lt Cotter, based on his earlier experience as a volunteer member of the USAF Ground Observer Corps (the Ground Observer Corps filter centers were in turn based on the United Kingdom aircraft tracking centers developed during World War II to track Nazi aircraft).
By 1960, the position of Duty Analyst was established. Once observations had been reduced, the duty analyst reviewed them and decided which orbits needed to be recomputed to bring them up to date. In the case of new launches or decaying satellites, one analyst was dedicated to processing observations for that satellite.
As with many other activities in the dawning space age, Space Track operations often involved doing things for which no precedent existed.
Unusual Space Track operations
On 2 January 1959, the Soviets launched Luna 1 (aka Mechta (Dream)), their first lunar probe. Tracking data was obtained for Space Track by the Goldstone site of the California Institute of Technology, which verified that the probe had headed for the moon. Dr. Curtis used a plot of this data in a presentation to a committee of the US House of Representatives. His presentation helped influence President Kennedy to establish the Apollo Program. Kenneth E. Kissell later published a Project Space Track analysis of the trajectory.
At this period, the 6594th Aerospace Test Wing was trying to achieve a successful launch in the Discoverer satellite program. The satellites, launched from Vandenberg AFB, were all in polar orbits. They were controlled by the 6594th at Palo Alto (later the Air Force Satellite Control Facility at Sunnyvale CA). Lt Cotter was the liaison officer between Space Track and the 6594th. The first 12 launch attempts were failures; the first success was Discoverer 1 (1959 Beta). Lockheed Corporation, the development contractor, won its bonus payment because telemetry showed the satellite had achieved orbit, but it was never seen again, despite massive efforts by Space Track and others to find it.
By this time Space Track had contacts with many sensors around the world. One of them was at the South Pole, associated with the International Geophysical Year. One of their ninety observations of Discoverer 2 (1959 Gamma) was sent from Byrd Station saying that the satellite had passed to the left of the zenith at 2.25 degrees, implying an orbital inclination of 89.9 degrees. This report is probably the only direct observation of the inclination of a satellite's orbit that has ever been made.
Because the Discoverer satellites carried payloads that were deorbited and recovered from parachutes by aircraft of the 6594th Aerospace Test Wing based in Hawaii, the timing of deorbit was critical. (The deorbit attempt of Discoverer 2's payload went seriously wrong: the payload landed on Spitsbergen instead of coming down over the Pacific Ocean. It was recovered by Russian miners, a recovery likely very helpful to Russian intelligence and the Russian space program in general.) Later, to improve the accuracy of the deorbit commands, orbital analysts Lt Algimantas Šimoliūnas, Lawrence Cuthbert, or Ed Casey would update the Space Track ephemeris for each Discoverer at the last minute and send the update to the 6594th. The 6594th had a global network of tracking stations (including Alaska, Hawaii, Seychelles, Guam, and the UK), used for command and on-orbit control of the satellites. However, that tracking data was derived from telemetry monitoring and was not as precise as the Space Track data, which was based in major part on radar and optical tracking.
Lockheed decided to put a small light on Discoverer XI (1960 Delta). Space Track acted as liaison between the 6594th and the Smithsonian Astrophysical Observatory, to use their Baker-Nunn camera at Cadiz, Spain, to photograph the light. This would give Lockheed valuable information about the accuracy of their orbit computations. The experiment worked very well and was not repeated.
Discoverer XIX (1960 Tau) had a payload called MIDAS, the developmental version of what later became the Defense Support Program. The Air Force decided that the MIDAS orbit should be classified, which meant that Space Track sensor observations had to be classified also. This led to a surreptitious midnight data transfer in central Concord, Massachusetts between Dr. Gordon Pettingill of Millstone Hill and Lt Cotter, as there was no secure teletypewriter or telephone available.
Perhaps causing inadvertent fireworks in celebration of the activation of the 1st Aerospace Surveillance and Control Squadron, the Ablestar stage for the Navy's Transit 4A satellite, 1961 Omicron, which was launched on 29 June 1961, exploded about 77 minutes after attaining orbit, at 0608Z. The NORAD Ballistic Missile Early Warning System (BMEWS) made early radar observations and Mr. Leonard of the Sacramento, California Moonwatch team alerted Space Track when he saw many fragments where only a few satellites were expected from the launch. In the next few days, this gave Project Space Track its first major effort as a backup for the new squadron. Lawrence W. Cuthbert, 1st Lt Algimantas Šimoliūnas, and Ed Casey achieved a landmark in satellite tracking, plotting observations by hand and identifying orbits for 296 of the fragments. Orbital Analysts at 1st Aero were also heavily involved in the achievement. Observations from the SPASUR fence were very helpful in tracking the fragments (SPASUR had initially refused to send Space Track individual observations, sending instead only orbital parameters, but this policy had fortunately been changed by 1961).
The technique used to identify multiple objects orbiting in the same orbital plane was refined by Lawrence Cuthbert and published as an automated program by the Wolf Corporation. (Later, Cuthbert worked with Bob Morris, Chief Orbital Analyst at Colorado Springs, to develop a program to derive orbital elements for all unknown radar tracks; the methodology worked and became known as the Cuthbert-Morris Algorithm. The resulting program was called "Breakup, Lost and Decay" and, along with subsequent improvements, it has found thousands of the objects in the Space Satellite Catalog. It is still the Air Force Astrodynamic Standard for Uncorrelated Target (UCT) processing.)
Communications
Most Space Track communication was by teletypewriter or, in some cases, by telephone, mail, or messenger.
The bulletins and look angles were initially typed by hand by airmen in the communications office and sent by teletypewriter to all the participating sensors. The teletypewriter machines used punched paper tape, before the invention of chadless tape.
Eventually, Roy Norris and Lt Cotter inveigled the IBM 610 into cutting paper tapes for the satellite bulletins, so that the airmen in the communication department would not have to type all the data by hand. This was not part of the IBM 610 design and was a surprise to IBM personnel. Later computers would also prepare the bulletin and look angle data tapes automatically.
There was some limited secure communication. One method approved for sending classified information was a pair of one-time pads. Each pad was made of twin sets of pages; the top sheet of each pair, on carbonless copy paper, had all the letters and numbers on each line, perhaps 40 lines to a page. To use the sheets, one circled each letter or number of the message, row by row, on the top sheet. This marked the second sheet, on which all the letters and numbers were scrambled. The scrambled version could then be transmitted by teletypewriter or telephone to the recipient who, using the matching set of one-time pads, could reverse the process and read the secure message.
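The pads described above performed a one-time substitution mechanically on printed sheets. The sketch below shows the same one-time-pad principle in arithmetic form (modular character addition); this arithmetic form, the alphabet and the message are assumptions for illustration, not how the paper pads worked.

```python
import secrets, string

ALPHABET = string.ascii_uppercase + string.digits   # 36 symbols

def make_pad(length):
    # One random key symbol per message symbol; the pad is used once.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def encrypt(msg, pad):
    return "".join(ALPHABET[(ALPHABET.index(m) + ALPHABET.index(k))
                            % len(ALPHABET)] for m, k in zip(msg, pad))

def decrypt(ct, pad):
    return "".join(ALPHABET[(ALPHABET.index(c) - ALPHABET.index(k))
                            % len(ALPHABET)] for c, k in zip(ct, pad))

msg = "SATELLITE1961OMICRON"
pad = make_pad(len(msg))
assert decrypt(encrypt(msg, pad), pad) == msg
```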
Another method Space Track later had was a secure teletypewriter machine that had a pre-punched paper tape attached. The tape served to garble each letter typed, which could then be decrypted by a reverse procedure at the other end of the teletypewriter line. This system was used to communicate with Air Force Intelligence at the Pentagon. More sophisticated cryptographic equipment was available later.
In addition to data communications, Space Track published a series of technical reports (e.g., see References).
Dr. Wahl presented detailed descriptions of Space Track activity at the first two International Symposia on Rockets and Astronautics in Tokyo, 1959 and 1960. Dr. Curtis and Lt Cotter made a similar presentation in 1960.
Contractors
In 1960, Aeronutronic, a division of the Ford Motor Company, had a contract with Space Track to develop improved methods of predicting the orbits of decaying satellites, a computer program called Spiral Decay, and other software for the new computers in the new building. (Aeronutronic had been hired to do a system analysis of the control center on 1 October 1959. Detailed reports of this and other Aeronutronic support of Project Space Track are on file at the offices of Lockheed Martin (formerly Loral Corporation) in Colorado Springs, Colorado. An index of the reports is at the National Museum of the Air Force.)
Another very important group was the employees of Wolf R&D Corporation (Concord, Massachusetts), which did programming and had the contract for operating computers at the NSSCC, including the IBM 7090 mainframe.
Further reading
- includes coverage of Project Space Track era
External links
Jodrell Bank
Millstone Hill
RCAF participation
References
Except as noted, all documents referenced are in the archives of the National Museum of the United States Air Force, Wright-Patterson AFB, Ohio. For JPEG copies of the references, see the Talk Page.
Cuthbert, Lawrence W.: Ballbuster in Orbit. The Official History of Spacetrack. [Humor] Project Space Track: Bedford MA: June 1965.
1957 establishments in Massachusetts
Scientific organizations established in 1957
20th-century history of the United States Air Force
Projects of the United States Air Force
Aerospace
Military units and formations in Massachusetts
Lockheed Corporation
Satellites
Space probes
Tracking
Bedford, Massachusetts
Waltham, Massachusetts | Project Space Track | Physics,Astronomy,Technology | 4,711 |
8,801,791 | https://en.wikipedia.org/wiki/Hawking%20%28birds%29 | Hawking is a feeding strategy in birds involving catching flying insects in the air. The term usually refers to a technique of sallying out from a perch to snatch an insect and then returning to the same or a different perch, though it also applies to birds that spend almost their entire lives on the wing. This technique is called "flycatching" and some birds known for it are several families of "flycatchers": Old World flycatchers, monarch flycatchers, and tyrant flycatchers; however, some species known as "flycatchers" use other foraging methods, such as the grey tit-flycatcher. Other birds, such as swifts, swallows, and nightjars, also take insects on the wing in continuous aerial feeding. The term "hawking" comes from the similarity of this behavior to the way hawks take prey in flight, although, whereas raptors may catch prey with their feet, hawking is the behavior of catching insects in the bill. Many birds have a combined strategy of both hawking insects and gleaning them from foliage.
Flycatching
The various methods of taking insects have been categorized as: gleaning (perched bird takes prey from branch or tree trunk), snatching (flying bird takes prey from ground or branch), hawking (bird leaves perch and takes prey from air), pouncing (bird drops to ground and takes prey) and pursuing (flying bird takes insects from air).
In hawking behavior, a bird will watch for prey from a suitable perch. When it spies potential prey, the bird will fly swiftly from its perch to catch the insect in its bill, then return to the perch or sometimes to a different perch. This maneuver is also called a "sally". Prey that is very small relative to the bird, such as gnats, may be consumed immediately while in flight, but larger prey, such as bees or moths, are usually brought back to a perch before being eaten. Sometimes the prey will attempt to escape and this can result in a fluttering pursuit before returning to the perch. Depending on the species of bird, there are observable variations on this behavior. Some species, such as the olive-sided flycatcher of North America and the ashy drongo of the Indian Subcontinent, tend to choose an exposed perch, such as a dead tree branch overlooking a clearing, whereas others, such as the North American Acadian flycatcher and the Asian small niltava perch within the cover of foliage deep in a forest or woodland habitat.
Many birds make use of a variety of tactics. A study of feeding behaviors in the family Tyrannidae categorized the following moves as ways of taking insect prey: aerial hawking (i.e. flycatching), perch-to-ground sallying, ground feeding (chasing after insects on the ground), perch-to-water sallying, sally-gleaning (which can involve hover-gleaning or a rapid strike), and gleaning while perched. Some tyrant flycatchers, such as those that choose a prominent perch from which to hawk insects, have more of a tendency to return to the same perch after each sally, while others, particularly those of the forest interior, show less of this tendency. A similar pattern is seen in Great Britain, where there are but two flycatchers, the spotted flycatcher and the pied flycatcher. The spotted flycatcher is the specialist, and tends to return to the same perch after each sally. The pied flycatcher is more of a generalist, gleaning as well as flycatching, and changes perches often.
Birds with the name "flycatcher" are not the only ones to engage in flycatching behavior. For example, Lewis's woodpecker feeds by flycatching. Some honeyeaters of Australasia employ hawking and gleaning as feeding tactics. Bee-eaters catch bees in a similar manner and return to the perch to remove the sting before consuming. Furthermore, many small owls take insect prey on the wing; examples include the western screech owl of North America and the brown boobook of Asia.
Sustained-flight feeding
Continuous aerial feeding is a different way of hawking insects. It requires long wings and skillful flying, as in nightjars, swallows, and swifts. Swifts are the masters of aerial feeding; several species spend virtually their entire lives in the air (some non-mating common swifts have spent as much as 10 months in the air without landing), and have come to rely on insects as their main source of food. Swallows, though visually similar to swifts, are unrelated to them; they feed in a similar manner, but less continuously, as they don't glide as much and they stop to perch for a while between bouts of aerial feeding. This has to do with their prey: swifts fly higher in pursuit of smaller, lighter insects that are scattered by rising air currents, while swallows generally chase after medium-sized insects that are lower to the ground, such as flies. When swallows fly higher to go after smaller insects, they adjust their flight style to glide more, like a swift. Birds of the nightjar family employ a variety of moves for catching insects. The common nighthawk of North America flies in swift-like fashion on its long, slender, pointed wings. The common poorwill, on the other hand, flies low and perches low to the ground and will sally up into the air after insects.
Opportunistic feeding
Many other birds are known to engage in hawking as an opportunistic feeding technique or a supplemental source of nutrition: among these are the cedar waxwing, which mostly eats fruit but is also often observed hawking insects over streams; terns of the genus Chlidonias, such as the black tern, fly in search of insects, sometimes chasing after dragonflies in flight; and even large owls that normally feed on rodents will snatch flying insects when the opportunity arises.
Physical adaptations
Hawking insects, like any feeding strategy, must provide a bird with sufficient nourishment to make the expenditure of energy worthwhile. The strategies and tactics for feeding on airborne insects are inextricably related to the adaptations and lifestyles of the birds that employ them.
Flight, especially flight driven by the muscle-powered flapping of wings, is a strenuous physical activity. Although a sally from a perch may look like a single, rapid movement to the human eye, actually the bird must perform several moves: it begins its take-off by pushing with its feet to get into the air, it flaps its wings to generate forward motion (thrust), pursues the prey item, turns in the air, flies back, and, with a final flurry of wings, lands on its perch. When a bird hawks insects, the prey must be substantial enough to pay off in terms of a biological energy budget. In other words, the bird must take in more energy in food than it is using up in the pursuit of food. Therefore, flycatchers tend to prefer insect prey of moderate size, such as flies, over smaller insects like gnats.
For birds that live in a forest habitat or other setting where short bursts of flight are used in sallies or for getting from tree branch to tree branch, their short, rounded wings are suitable for the rapid flapping required to maneuver in tight spaces. Birds in more open settings that sally after larger insects like bees, such as kingbirds and bee-eaters, benefit from longer, more pointed wings, which are more efficient because they generate more lift and less drag. Swallows and swifts, which glide about in totally open spaces, have even longer wings. Another function of long, pointed wings is to enable these birds to turn quickly and smoothly in mid-glide. The wingtips create little vortices of air, within which the low air pressure creates additional lift on the wingtips. Furthermore, long, forked tails provide additional lift, stability, and steering ability, which is important for flying at slower speeds (swifts, though capable of flying very fast, actually must fly relatively slowly to intercept airborne insects). In fact, swifts have bodies so well adapted for flying that they are unable to perch on branches or land on the ground, and so they nest and roost on precipices such as rocky cliffs, behind waterfalls (as the black swift of North America and the great dusky swift of South America are known to do) or in chimneys, as in the case of the chimney swift.
Bill size and shape is also important. Compared to the bills of birds specialized for gleaning, a relatively larger, broader bill is ideal for catching sizeable insects such as bees and flies. The presence of bristles near the bill (rictal bristles) in some flycatchers may be an adaptation for hawking insects; scientists are not sure of the function but they may help protect the eyes or they might actually help provide the bird sensory information as to the location of the prey. Swallows, swifts, and nightjars do not have large bills, but they have wide-gaping mouths. Some nightjars also have bristles around the bill (the common poorwill does, the common nighthawk does not).
When different kinds of birds have the same adaptations, such similarities are not necessarily indicative of any familial relationship between bird species. Rather, they are the result of convergent evolution. Consider, for example, the marked resemblance in body size, shape, and coloration between flycatchers of several families, though these species are not closely related: the Asian brown flycatcher (of the Muscicapidae or Old World flycatcher family), Acadian flycatcher (of the Tyrannidae or tyrant flycatcher family) of the New World, and slaty monarch (of the Monarchidae or monarch flycatcher family), endemic to Fiji. All three use flycatching to acquire some or all of their food. But these three families belong to separate branches of the evolutionary tree of songbirds, which diverged in two branching events some 60 and 90 million years ago and continued to evolve independently in different parts of the world. Likewise, the similarities of swifts and swallows once led naturalists to conclude they were related, but it is now established that they are unrelated, and that the same lifestyle has led to the same adaptations.
Ecological implications
In temperate climates, the availability of flying insects as a food source is seasonal, and this is probably why many birds that rely on this food source during the breeding season migrate in winter. Migration is timed to the availability of the birds' preferred food. For instance, it has been observed in Great Britain that migrating swallows arrive earlier in the spring than swifts; this correlates with the later profusion of the small insects that swifts feed on. Weather also affects the availability of flying insects. Swallows, for example, are obliged to go where the insects are, and depending on the weather they may adjust their choice of prey or be forced to seek out prey in different locations.
The preference for certain kinds of aerial insect as a food source seems to correlate with gregarious or colonial behavior versus territoriality. For birds that take advantage of swarming insects, which are by nature found in local concentrations, colonial breeding can be a successful strategy. An example is the cliff swallow of western North America. Its relative the barn swallow hunts larger, non-swarming insects, and is more solitary.
Certain neotropical tyrant flycatchers will join mixed-species foraging flocks, as will some Asian drongos. Such flocks stir up flying insects, which can then be picked off in quick sallies.
References
Bird behavior
Bird feeding | Hawking (birds) | Biology | 2,439 |
18,738,884 | https://en.wikipedia.org/wiki/Oxidized%20cellulose | Oxidized cellulose is a water-insoluble derivative of cellulose. It can be produced from cellulose by the action of an oxidizing agent, such as chlorine, hydrogen peroxide, peracetic acid, chlorine dioxide, nitrogen dioxide, persulfates, permanganate, dichromate-sulfuric acid, hypochlorous acid, hypohalites or periodates and a variety of metal catalysts. Oxidized cellulose may contain carboxylic acid, aldehyde, and/or ketone groups, in addition to the original hydroxyl groups of the starting material, cellulose, depending on the nature of the oxidant and reaction conditions.
It is an antihemorrhagic. It works both by absorbing the blood (similar to a cotton ball) and by triggering the contact activation system. It is poorly absorbed and may cause healing complications postoperatively.
See also
Regenerated cellulose
References
Antihemorrhagics
Polysaccharides | Oxidized cellulose | Chemistry | 227 |
30,864,881 | https://en.wikipedia.org/wiki/Jacob%20Israelachvili | Jacob Nissim Israelachvili, (19 August 1944 – 20 September 2018) was an Israeli physicist who was a professor at the University of California, Santa Barbara (UCSB).
Personal life
He was born in Tel Aviv, Israel and sent to an English boarding school at the age of 7. After completing his secondary education he returned to Israel to carry out his military service before moving back to England to study the Natural Sciences Tripos at the University of Cambridge. He received his Ph.D. in Physics from Christ's College, Cambridge in 1972 under the supervision of David Tabor. He then became a Research Fellow at the Biophysics Institute, University of Stockholm and at the Karolinska Institute, Sweden until 1974.
He moved to Australia to take a post as fellow in the Research School of Physical Science and the Research School of Biological Sciences at the Institute of Advanced Studies, Australian National University in Canberra from 1974 to 1977. He was then appointed senior fellow in the Department of Applied Mathematics and Department of Neurobiology at the Institute of Advanced Studies, Australian National University in Canberra.
He relocated to California to join UCSB in 1986, where he worked until his death on 20 September 2018.
Research
His research has involved study of molecular and interfacial forces. His work is applicable to a wide range of industrial and fundamental science problems. In particular, he has contributed significantly to the understanding of colloidal dispersions, biological systems, and polymer engineering applications. He has studied interfacial phenomena, the physics of thin films, and fundamental questions in rheology and tribology of surfaces.
Israelachvili has developed numerous techniques for the static and dynamic measurement of material and molecular properties of vapors, liquids, and surfaces. In particular, he pioneered a sensitive interfacial force-sensing technique known as the surface forces apparatus (SFA). This instrument involves carefully approaching two surfaces (usually immersed in a solvent, such as water), and measuring the force of attraction and repulsion between them. Using piezoelectric positional movement and optical interferometry for position sensing, this instrument can resolve distances to within 0.1 nanometer, and forces at the 10⁻⁸ N level. This technique is similar to measuring the force of interaction between an atomic force microscope (AFM) and a sample surface, except that the specialized SFA can measure much longer-range forces and is intended for surface-surface interaction measurements (as opposed to tip-surface or molecule-surface measurements). The results of SFA experiments can be used to characterize the nature of intermolecular potentials and other molecular properties.
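Schematically, this kind of force measurement reduces to Hooke's law: the difference between the displacement applied by the piezo and the separation change measured interferometrically is the deflection of the force-measuring spring, and the force is that deflection times the spring constant. The sketch below is a toy calculation, not SFA software; the spring constant and displacements are invented.

```python
def force_newtons(applied_move_nm, measured_sep_change_nm, k_n_per_m=100.0):
    # Spring deflection = displacement the piezo applied minus the
    # separation change the interferometer actually measured.
    deflection_m = (applied_move_nm - measured_sep_change_nm) * 1e-9
    return k_n_per_m * deflection_m   # Hooke's law: F = k * x

# Piezo moves the surfaces 5.0 nm, separation changes only 4.9 nm:
# the spring took up 0.1 nm, so F = 100 N/m * 1e-10 m = 1e-8 N,
# the force scale quoted above.
print(force_newtons(5.0, 4.9))
```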
Israelachvili is also well known as the author of the textbook "Intermolecular and Surface Forces," published by Academic Press. This authoritative book describes the fundamental concepts and equations applicable to all intermolecular and interfacial science disciplines.
Israelachvili was also founder of SurForce, LLC. The company specializes in researching surface force interactions and producing SFA systems.
Appointments, honors and awards
Tribology Gold Medal, Institution of Mechanical Engineers (2013)
ACS National Award in Colloid and Surface Chemistry (2009)
Named by the AIChE as one of the “One Hundred Chemical Engineers of the Modern Era” (2008)
Honorary Degree of Doctor of Engineering – University of South Florida (2007)
Honorary Degree of Doctor sc. h.c. - ETH Zurich (2006)
Schlumberger Visiting Professor – University of Oxford, UK (2005)
MRS Medal, awarded for recent work on adhesion and friction (2004)
Elected to the US National Academy of Sciences in the area of Engineering Science (2004)
Elected Fellow of the American Physical Society in the area of Biological Physics (2004)
Adhesion Society Award for “excellence in adhesion science” (2003)
Fellow of the Royal Society (1988)
Matthew Flinders Medal and Lecture (1986)
David Syme Research Prize (1983)
Elected a Fellow of the Australian Academy of Science (FAA) (1982)
Pawsey Medal (1977)
References
Israeli people of Georgian descent
1944 births
2018 deaths
Scientists from Tel Aviv
Alumni of Christ's College, Cambridge
Israeli physicists
Israeli chemical engineers
Rheologists
Intermolecular forces
University of California, Santa Barbara faculty
Fellows of the Royal Society
Fellows of the Australian Academy of Science
Members of the United States National Academy of Sciences
Members of the United States National Academy of Engineering
Fellows of the American Physical Society
Tribologists | Jacob Israelachvili | Chemistry,Materials_science,Engineering | 906 |
1,721,778 | https://en.wikipedia.org/wiki/Spitting | Spitting is the act of forcibly ejecting saliva, sputum, nasal mucus and/or other substances from the mouth. The act is often done to get rid of unwanted or foul-tasting substances in the mouth, or to get rid of a large buildup of mucus. Spitting of small saliva droplets can also happen unintentionally during talking, especially when articulating ejective and implosive consonants.
Spitting in public is considered rude and a social taboo in many parts of the world including the West, while in some other parts of the world it is considered more socially acceptable.
Spitting upon another person, especially onto the face, is a global sign of anger, hatred, disrespect or contempt. It can represent a "symbolical regurgitation" or an act of intentional contamination.
Cultural attitudes
Western world
Social attitudes towards spitting have changed greatly in Western Europe since the Middle Ages. Then, frequent spitting was part of everyday life, and at all levels of society it was thought ill-mannered to suck back saliva to avoid spitting. By the early 1700s, spitting had come to be seen as something that should be concealed, and by 1859 it had progressed to being described by at least one etiquette guide as "at all times a disgusting habit." Sentiments against spitting gradually moved from adult conduct books, to guides for children only, and finally out of conduct literature altogether, "because most [Western] children have the spitting ban internalized well before learning how to read."
Spittoons (also known as cuspidors) were used openly during the 19th century to provide an acceptable outlet for spitters. Spittoons became far less common after the influenza epidemic of 1918, and their use has since virtually disappeared, though each justice of the Supreme Court of the United States continues to be provided with a personal one.
In the first half of the 20th century the National Association for the Study and Prevention of Tuberculosis, the precursor to the American Lung Association, and its state affiliates ran educational campaigns against spitting to reduce the chance of spreading tuberculosis. According to the World Health Organization, coughing, sneezing, or spitting can spread tuberculosis. The chance of catching a contagious disease from being spat on is, however, low.
After coffee cupping, tea tasting, and wine tasting, the sample is spit into a 'spit bucket' or spittoon.
Public spitting is still reported in the United States, particularly among men. In Minnesota, it has been noted among some young people, and in Canada it has been reported in cities such as Ottawa and Winnipeg.
Other regions
In certain nations, spitting is an accepted part of everyday life.
Spitting has been attributed to some people from Asia-Pacific countries such as Bangladesh, China, India, Indonesia, Myanmar, Papua New Guinea, Philippines, South Korea, United Arab Emirates, and Vietnam. The practice is often linked to betel chewing in many of those regions. Spitting has also been reported in some parts of Africa, such as Ghana.
In India and Indonesia, spitting is often associated with the juices produced by chewing betel and similar preparations.
According to Ross Coomber, a professor of sociology at Plymouth University, spitting is perceived as a cleansing practice for the body by many individuals in China.
Competitions
There are some places where spitting is a competitive sport, with or without a projectile in the mouth. For example, there are Guinness World Records for cherry pit spitting and cricket spitting, and there are world championships in kudu dung spitting.
Spitting as a protection against evil
In rural parts of North India, it was customary in former times for mothers to lightly spit at their children (usually to the side of the child rather than directly at them) to imply a sense of disparagement and imperfection, which was believed to protect them from the evil eye (or nazar). Excessive admiration, even from well-meaning people, is believed to attract the evil eye, so this was thought to shield children even from the nazar that could be caused by their own mothers' "excessive" love of them. However, because of hygiene, disease transmission and social taboos, this practice has waned; instead, a black mark of kohl or kajal is put on the forehead or cheek of the child to ward off the evil eye. Adults may wear an amulet containing alum or chillies on the body for the same purpose. Sometimes this is also done with brides and others by their loved ones to protect them from nazar.
Shopkeepers in the region used to sometimes make a spitting gesture over the cash proceeds from the first sale of the day (called bohni), a custom believed to ward off nazar from the business.
Such a habit also existed in some Eastern European countries such as Romania and Moldova, although it is no longer widely practiced. People would gently spit in the face of younger people (often younger relatives such as grandchildren or nephews) they admire in order to avoid deochi, an involuntary curse on the individual being admired or "strangely looked upon", which is claimed to be the cause of bad fortune and sometimes malaise or various illnesses. In Greece, it is customary to "spit" three times after making a compliment to someone; the spitting is done to protect against the evil eye. This applies to all people, not just between mothers and children.
A similar-sounding expression for verbal spitting occurs in modern Hebrew as "Tfu, tfu" (here, only twice), which some say Hebrew speakers borrowed from Russian.
Anti-spitting hoods
When a suspect in a criminal case is arrested, they will sometimes try to spit at their captors, raising fears of infection with hepatitis C and other diseases. Spit hoods are meant to prevent this.
Gleeking
Gleeking is the projection of saliva from the submandibular gland under the tongue. It can happen deliberately or accidentally, particularly when yawning.
In other animals
Camel
Llama
Spitting cobra
Spitting spider
See also
Drooling
Spit-take
References
External links
Habits
Excretion
Saliva | Spitting | Biology | 1,241 |
390,562 | https://en.wikipedia.org/wiki/Tomasulo%27s%20algorithm | Tomasulo's algorithm is a computer architecture hardware algorithm for dynamic scheduling of instructions that allows out-of-order execution and enables more efficient use of multiple execution units. It was developed by Robert Tomasulo at IBM in 1967 and was first implemented in the IBM System/360 Model 91’s floating point unit.
The major innovations of Tomasulo’s algorithm include register renaming in hardware, reservation stations for all execution units, and a common data bus (CDB) on which computed values are broadcast to all reservation stations that may need them. These developments allow for improved parallel execution of instructions that would otherwise stall under the use of scoreboarding or other earlier algorithms.
Robert Tomasulo received the Eckert–Mauchly Award in 1997 for his work on the algorithm.
Implementation concepts
The following are the concepts necessary to the implementation of Tomasulo's algorithm:
Common data bus
The Common Data Bus (CDB) connects reservation stations directly to functional units. According to Tomasulo it "preserves precedence while encouraging concurrency". This has two important effects:
Functional units can access the result of any operation without involving a floating-point register, allowing multiple units that are waiting on a result to proceed without contending for register file read ports.
Hazard detection and execution control are distributed: the reservation stations control when an instruction can execute, rather than a single dedicated hazard unit.
Instruction order
Instructions are issued sequentially so that the effects of a sequence of instructions, such as exceptions raised by these instructions, occur in the same order as they would on an in-order processor, regardless of the fact that they are being executed out-of-order (i.e. non-sequentially).
Register renaming
Tomasulo's algorithm uses register renaming to correctly perform out-of-order execution. All general-purpose and reservation station registers hold either a real value or a placeholder value. If a real value is unavailable to a destination register during the issue stage, a placeholder value is initially used. The placeholder value is a tag indicating which reservation station will produce the real value. When the unit finishes and broadcasts the result on the CDB, the placeholder will be replaced with the real value.
Each functional unit has a single reservation station. Reservation stations hold information needed to execute a single instruction, including the operation and the operands. The functional unit begins processing when it is free and when all source operands needed for an instruction are real.
Exceptions
Practically speaking, there may be cases in which not enough status information about an exception is available, in which case the processor may raise a special exception, called an imprecise exception. Imprecise exceptions cannot occur in in-order implementations, as processor state is changed only in program order.
Programs that experience precise exceptions, where the specific instruction that took the exception can be determined, can restart or re-execute at the point of the exception. However, those that experience imprecise exceptions generally cannot restart or re-execute, as the system cannot determine the specific instruction that took the exception.
Instruction lifecycle
The three stages listed below are the stages through which each instruction passes from the time it is issued to the time its execution is complete.
Legend
RS - Reservation Stations; contains information about the reservation stations.
RegisterStat - Register Status; contains information about the registers.
regs[x] - Value of register x
Mem[A] - Value of memory at address A
rd - destination register number
rs, rt - source register numbers
imm - sign extended immediate field
r - reservation station or buffer that the instruction is assigned to
Reservation Station Fields
Op - represents the operation being performed on operands
Qj, Qk - the reservation station that will produce the relevant source operand (0 indicates the value is in Vj, Vk)
Vj, Vk - the value of the source operands
A - used to hold the memory address information for a load or store
Busy - 1 if occupied, 0 if not occupied
Register Status Fields
Qi - the reservation station whose result should be stored in this register (if blank or 0, no values are destined for this register)
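The legend above maps naturally onto a pair of small record types. The following minimal Python sketch is illustrative only (it is not from Tomasulo's paper, and the class names are hypothetical); the fields mirror the legend:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class ReservationStation:
        name: str                    # tag broadcast on the CDB, e.g. "Add1"
        busy: bool = False           # Busy: 1 if occupied, 0 if not
        op: Optional[str] = None     # Op: operation to perform on Vj, Vk
        vj: Optional[float] = None   # Vj: value of the first source operand
        vk: Optional[float] = None   # Vk: value of the second source operand
        qj: Optional[str] = None     # Qj: tag of the station producing Vj (None = ready)
        qk: Optional[str] = None     # Qk: tag of the station producing Vk (None = ready)
        a: Optional[int] = None      # A: effective address for a load or store

    @dataclass
    class RegisterStatus:
        qi: Optional[str] = None     # Qi: tag of the station whose result lands here
        value: float = 0.0           # regs[x]: the committed architectural value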
Stage 1: issue
In the issue stage, instructions are issued for execution if all operands and reservation stations are ready or else they are stalled. Registers are renamed in this step, eliminating WAR and WAW hazards.
Retrieve the next instruction from the head of the instruction queue. If the instruction operands are currently in the registers, then
If a matching functional unit is available, issue the instruction.
Else, as there is no available functional unit, stall the instruction until a station or buffer is free.
Otherwise, the operands are not yet in the registers: the instruction is issued with placeholder (tag) values for those operands, and the reservation station records which functional units will produce the missing values, as illustrated in the sketch below.
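As a concrete illustration of this step, here is a self-contained Python sketch of issue for a register-register operation (all names are hypothetical; loads, stores and the instruction queue are omitted):

    def issue(instr, stations, reg_stat, regs):
        """instr = (op, rd, rs, rt); returns True if issued, False if stalled."""
        op, rd, rs, rt = instr
        r = next((s for s in stations if not s["busy"]), None)
        if r is None:
            return False                   # no free reservation station: stall
        for src, v, q in ((rs, "Vj", "Qj"), (rt, "Vk", "Qk")):
            if reg_stat[src] is None:      # operand is already in the register file
                r[v], r[q] = regs[src], None
            else:                          # operand in flight: copy the producer's tag
                r[v], r[q] = None, reg_stat[src]
        r["busy"], r["Op"] = True, op
        reg_stat[rd] = r["name"]           # rename the destination: a tag, not a value
        return True

    stations = [{"name": "Add1", "busy": False}, {"name": "Add2", "busy": False}]
    reg_stat = {"F0": None, "F2": None, "F4": None}   # Qi entry per register
    regs = {"F0": 1.0, "F2": 2.0, "F4": 0.0}          # regs[x]
    issue(("ADD", "F4", "F0", "F2"), stations, reg_stat, regs)
    assert reg_stat["F4"] == "Add1"        # F4 now carries a tag, not a value

Overwriting the destination's register status with a tag rather than a value is the renaming step that eliminates the WAR and WAW hazards mentioned above.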
Stage 2: execute
In the execute stage, the instruction operations are carried out. Instructions are delayed in this step until all of their operands are available, eliminating RAW hazards. Program correctness is maintained through effective address calculation to prevent hazards through memory.
If one or more of the operands is not yet available then: wait for operand to become available on the CDB.
When all operands are available, then: if the instruction is a load or store
Compute the effective address when the base register is available, and place it in the load/store buffer
If the instruction is a load then: execute as soon as the memory unit is available
Else, if the instruction is a store then: wait for the value to be stored before sending it to the memory unit
Else, the instruction is an arithmetic logic unit (ALU) operation: execute it at the corresponding functional unit
Stage 3: write result
In the write result stage, ALU operation results are written back to registers and store operations are written back to memory.
If the instruction was an ALU operation
If the result is available, then: write it on the CDB and from there into the registers and any reservation stations waiting for this result
Else, if the instruction was a store then: write the data to memory during this step
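Stages 2 and 3 can be sketched together: a reservation station fires once both of its operand tags are clear, and its result is then broadcast once on the CDB to every waiting station and register. The following self-contained Python sketch is illustrative only (hypothetical names, ALU operations only); note that it collapses what the hardware would spread over several cycles, one CDB broadcast per cycle, into a single sweep:

    OPS = {"ADD": lambda a, b: a + b, "MUL": lambda a, b: a * b}

    def execute_and_write(stations, reg_stat, regs):
        for rs in stations:
            if rs["busy"] and rs["Qj"] is None and rs["Qk"] is None:
                result, tag = OPS[rs["Op"]](rs["Vj"], rs["Vk"]), rs["name"]
                # Write result: one broadcast on the common data bus.
                for other in stations:           # stations waiting on this tag
                    if other.get("Qj") == tag:
                        other["Vj"], other["Qj"] = result, None
                    if other.get("Qk") == tag:
                        other["Vk"], other["Qk"] = result, None
                for reg, q in reg_stat.items():  # registers waiting on this tag
                    if q == tag:
                        regs[reg], reg_stat[reg] = result, None
                rs["busy"] = False               # free the reservation station

    stations = [
        {"name": "Add1", "busy": True, "Op": "ADD",
         "Vj": 1.0, "Vk": 2.0, "Qj": None, "Qk": None},
        {"name": "Mul1", "busy": True, "Op": "MUL",
         "Vj": 4.0, "Vk": None, "Qj": None, "Qk": "Add1"},  # waits on Add1
    ]
    reg_stat, regs = {"F4": "Mul1"}, {"F4": 0.0}
    execute_and_write(stations, reg_stat, regs)
    assert regs["F4"] == 12.0                    # MUL(4.0, ADD(1.0, 2.0))

Because the broadcast delivers Add1's result directly to Mul1's reservation station, the dependent multiply never reads the register file: this is the data-driven forwarding that the common data bus provides.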
Algorithm improvements
The concepts of reservation stations, register renaming, and the common data bus in Tomasulo's algorithm present significant advancements in the design of high-performance computers.
Reservation stations take on the responsibility of waiting for operands in the presence of data dependencies and other inconsistencies, such as varying storage access times and circuit speeds, thus freeing up the functional units. This helps hide the long latencies of floating-point operations and memory accesses; in particular, the algorithm is more tolerant of cache misses. Additionally, programmers are freed from implementing hand-optimized code. This is a result of the common data bus and reservation stations working together to preserve dependencies while encouraging concurrency.
By tracking operands for instructions in the reservation stations and renaming registers in hardware, the algorithm minimizes read-after-write (RAW) hazards and eliminates write-after-write (WAW) and write-after-read (WAR) hazards. This improves performance by reducing the time that would otherwise be wasted on stalls.
An equally important improvement is that the design is not limited to a specific pipeline structure. This allows the algorithm to be more widely adopted by multiple-issue processors. Additionally, the algorithm is easily extended to enable branch speculation.
Applications and legacy
Tomasulo's algorithm was implemented in the System/360 Model 91 architecture. Outside of IBM, it went unused for several years, but it saw a vast increase in usage during the 1990s for three reasons:
Once caches became commonplace, the algorithm's ability to maintain concurrency during unpredictable load times caused by cache misses became valuable in processors.
Dynamic scheduling and branch speculation from the algorithm enables improved performance as processors issued more and more instructions.
Proliferation of mass-market software meant that programmers would not want to compile for a specific pipeline structure. The algorithm can function with any pipeline architecture and thus software requires few architecture-specific modifications.
Many modern processors implement dynamic scheduling schemes that are variants of Tomasulo's original algorithm, including popular Intel x86-64 chips.
See also
Re-order buffer (ROB)
Instruction-level parallelism (ILP)
References
Further reading
External links
HASE Java applet simulation of the Tomasulo's algorithm
Algorithms
Instruction processing | Tomasulo's algorithm | Mathematics | 1,679 |
76,780,183 | https://en.wikipedia.org/wiki/CYB210010 | CYB210010, also known as 2C-T-TFM, is a lesser-known psychedelic drug of the phenethylamine family related to compounds such as 2C-T and 2C-T-21.
Alexander Shulgin attempted to synthesise this compound in the 1990s, and mentions it in his book PiHKAL under the entry for 2C-T-21, but was unsuccessful in producing a key intermediate and never assigned it a 2C-T number. This compound was ultimately first synthesised by Geoffrey Varty and colleagues at Irish biopharmaceutical company Cybin in 2023.
It has a Ki of 0.35 nM at the serotonin 5-HT2A receptor, and an EC50 of 4.1 nM at the serotonin 5-HT2A receptor and 7.3 nM at the serotonin 5-HT2C receptor, compared to 88 nM at the serotonin 5-HT2B receptor. It is a potent, selective, long-acting, and orally active agonist of the serotonin 5-HT2A and 5-HT2C receptors, and it produces psychedelic-like responding in several different animal species. As of 2024 it has only been studied preclinically and is not known to have been tested in humans.
Related drugs include the deuterated phenethylamine CYB005 and the deuterated substituted tryptamines CYB003 and CYB004.
See also
2C (psychedelics)
2C-T-21.5
2C-T-28
2C-TFM
2C-TFE
3C-DFE
DOPF
Tiflorex
Trifluoromescaline
References
5-HT2A agonists
5-HT2B agonists
5-HT2C agonists
2C (psychedelics)
Amines
Entheogens
Experimental hallucinogens
Methoxy compounds
Thioethers
Trifluoromethylthio compounds | CYB210010 | Chemistry | 432 |
25,086,315 | https://en.wikipedia.org/wiki/Dynamic%20hydrogen%20electrode | A dynamic hydrogen electrode (DHE) is a reference electrode, more specific a subtype of the standard hydrogen electrodes for electrochemical processes by simulating a reversible hydrogen electrode with an approximately 20 to 40 mV more negative potential.
Principle
A separator in a glass tube connects two electrolytes, and a small current is driven between the cathode and the anode.
Applications
In-situ reference electrode for direct methanol fuel cells
Proton-exchange membrane fuel cells
See also
Palladium-hydrogen electrode
References
Electrodes
Hydrogen technologies | Dynamic hydrogen electrode | Chemistry | 111 |
11,421,092 | https://en.wikipedia.org/wiki/Snake%20H/ACA%20box%20small%20nucleolar%20RNA | In molecular biology, Snake H/ACA box small nucleolar RNA refers to a number of very closely related non-coding RNA (ncRNA) genes identified in snakes which have been predicted to be small nucleolar RNAs (snoRNAs). This type of ncRNA is involved in the biogenesis of other small nuclear RNAs and are often referred to as 'guide' RNAs. They are usually located in the nucleolus of the eukaryotic cell which is a major site of snRNA biogenesis.
These snoRNA genes were initially identified in the introns of the cardiotoxin 4 and cobrotoxin genes of the Taiwan cobra (Naja atra) and the Taiwan banded krait (Bungarus multicinctus) during sequencing of these genes. These snoRNAs are predicted to act as H/ACA box-type pseudouridylation guides, as they have the predicted hairpin-hinge-hairpin-tail structure and extended regions of complementarity to 5S ribosomal RNA (rRNA).
References
External links
Small nuclear RNA | Snake H/ACA box small nucleolar RNA | Chemistry | 229 |
31,148,481 | https://en.wikipedia.org/wiki/Out%20of%20the%20Ordinary%20Festival | The Out of the Ordinary Festival was an annual family- and eco-friendly music festival near Hailsham in the Sussex countryside. From 2007 to 2013 it celebrated the autumn equinox in England with a variety of live music, talks and workshops, performances, activities for children with green and ethical businesses, many powered by solar panels and wind generators. It started as a development of The Antiquarian Society in Brighton. The festival took place for three days in the autumn and had a capacity of 5,000 people. The festival was held at Knockhatch Park, a setting which comprises an ex-landfill site).
Festival details
The Out of the Ordinary business was started by Stuart Mason and his partner Emily in 2007 on the Sussex Downs near the vale of the Long Man of Wilmington.
Workshops and talks covered such topics as prehistoric culture, ancient knowledge and earth mysteries. Activities included yoga, meditation, alternative healing and the chance to use a variety of telescopes, a planetarium and laser-guided tours of the constellations. Notable speakers included Professor Gordon Hillman, Jonathan Cainer, Andy Thomas, Edmund Marriage, Leo Rutherford and Robert Bauval.
The festival was divided into seven areas named after the seven chakras. The music area was appropriately named the throat chakra, with two stages, the Ootopia and Peace stages, as well as an indoor bar and the Conscious Cabaret. In 2011, the site had a new, simplified layout based on the four elements, though still paying tribute to the seven chakras that defined its inner workings. The festival had an eclectic mix of music ranging from folk to reggae and electronica, along with DJs. Sussex Downs College co-sponsored a solar-powered music stage at the festival in 2010.
Music
Out of the Ordinary Festival has featured many acts that are well known in the alternative scene. Notable performers include:
2007 (21–23 September): Small White Light, Maya, The Drookit Dogs.
2008 (12–14 September): Banco de Gaia, Katharine Blake, The Burlettes
2009 (18–20 September): Zub Zub (Zia Geelani, formerly of the Ozric Tentacles), Eat Static, Celt Islam, The Dolmen, Banco de Gaia, DJ Paygan, Turiya.
2010 (17–19 September): Dr. Alex Patterson from The Orb, Andy Barlow from Lamb, Seize The Day, Nucleus Roots, King Porta Stomp.
2011 (23–25 September): System 7, Fujiya & Miyagi, Zub Zub (Zia Geelani, formerly of the Ozric Tentacles), Orchid Star.
See also
List of electronic music festivals
References
External links
Out of the Ordinary Festival official site
Out of the Ordinary press dispensary
The Antiquarian Society – Archaeoastronomy and Geomancy
Folk festivals in the United Kingdom
Jazz festivals in the United Kingdom
World music festivals
Music festivals in East Sussex
Archaeoastronomy
Electronic music festivals in the United Kingdom
Music festivals established in 2007 | Out of the Ordinary Festival | Astronomy | 616 |
62,812,480 | https://en.wikipedia.org/wiki/PHL-16 | The PHL-16, also known as PCL-191, is a truck-mounted self-propelled multiple rocket launcher (MRL) system developed by the People's Republic of China.
Development
It is based on the AR-3 MRL developed by Norinco. The AR-3 was marketed in 2010. The PHL-16 was unveiled during the Chinese National Day Parade in 2019; unlike other rocket systems in the parade, the vehicles were unlabelled.
Design
The launcher vehicles operate as part of a firepower battery, though the system is also capable of autonomous operation. A typical battery includes six launcher vehicles, several reloading vehicles, a command post vehicle, a meteorological survey vehicle, and other service support vehicles.
Rockets
Unlike the earlier PHL-03, which is loaded with a fixed type of ammunition, the PHL-16 has two modularized launch cells, which can carry different types of ammunition. Each launch cell can carry either five 300 mm rockets or four 370 mm rockets. The export version, the AR-3, can even switch to the 750 mm Fire Dragon 480 tactical ballistic missile or the 380 mm TL-7B anti-ship missile, a capability that has possibly been carried over to the PLA variants.
The configuration displayed during the 2019 National Day Parade carried eight 370 mm rockets.
Chassis
The vehicle chassis is based on the 45 ton WS2400 8×8 special wheeled vehicle chassis.
Operational history
In February 2023, the PHL-16 was observed deployed with the 73rd Group Army of the Eastern Theatre Command, which is responsible for the Taiwan Strait area.
Variants
AR-3 – baseline export version; first marketed in 2010.
PHL-16 – version developed for the People's Liberation Army.
Operators
People's Liberation Army Ground Force – 50+ units as of 2021.
References
Military vehicles introduced in the 2010s
Modular rocket launchers
Multiple rocket launchers
Norinco
Self-propelled artillery of the People's Republic of China
Wheeled self-propelled rocket launchers | PHL-16 | Engineering | 401 |
610,773 | https://en.wikipedia.org/wiki/Per-unit%20system | In the power systems analysis field of electrical engineering, a per-unit system is the expression of system quantities as fractions of a defined base unit quantity. Calculations are simplified because quantities expressed as per-unit do not change when they are referred from one side of a transformer to the other. This can be a pronounced advantage in power system analysis where large numbers of transformers may be encountered. Moreover, similar types of apparatus will have the impedances lying within a narrow numerical range when expressed as a per-unit fraction of the equipment rating, even if the unit size varies widely. Conversion of per-unit quantities to volts, ohms, or amperes requires a knowledge of the base that the per-unit quantities were referenced to. The per-unit system is used in power flow, short circuit evaluation, motor starting studies etc.
The main idea of a per unit system is to absorb large differences in absolute values into base relationships. Thus, representations of elements in the system with per unit values become more uniform.
A per-unit system provides units for power, voltage, current, impedance, and admittance. With the exception of impedance and admittance, any two units are independent and can be selected as base values; power and voltage are typically chosen. All quantities are specified as multiples of selected base values. For example, the base power might be the rated power of a transformer, or perhaps an arbitrarily selected power which makes power quantities in the system more convenient. The base voltage might be the nominal voltage of a bus. Different types of quantities are labeled with the same symbol (pu); it should be clear whether the quantity is a voltage, current, or other unit of measurement.
Purpose
There are several reasons for using a per-unit system:
Similar apparatus (generators, transformers, lines) will have similar per-unit impedances and losses expressed on their own rating, regardless of their absolute size. Because of this, per-unit data can be checked rapidly for gross errors. A per unit value out of normal range is worth looking into for potential errors.
Manufacturers usually specify the impedance of apparatus in per unit values.
Use of the constant √3 is reduced in three-phase calculations.
Per-unit quantities are the same on either side of a transformer, independent of voltage level.
By normalizing quantities to a common base, both hand and automatic calculations are simplified.
It improves numerical stability of automatic calculation methods.
Per unit data representation yields important information about relative magnitudes.
The per-unit system was developed to make manual analysis of power systems easier. Although power-system analysis is now done by computer, results are often expressed as per-unit values on a convenient system-wide base.
Base quantities
Generally base values of power and voltage are chosen. The base power may be the rating of a single piece of apparatus such as a motor or generator. If a system is being studied, the base power is usually chosen as a convenient round number such as 10 MVA or 100 MVA. The base voltage is chosen as the nominal rated voltage of the system. All other base quantities are derived from these two base quantities. Once the base power and the base voltage are chosen, the base current and the base impedance are determined by the natural laws of electrical circuits. The base value should only be a magnitude, while the per-unit value is a phasor. The phase angles of complex power, voltage, current, impedance, etc., are not affected by the conversion to per unit values.
The purpose of using a per-unit system is to simplify conversion between different transformers. Hence, it is appropriate to illustrate the steps for finding per-unit values for voltage and impedance. First, the base power (S_base) is chosen to be the same on both sides of the transformer. Once every side is on the same power base, the base voltage and base impedance for every transformer can easily be obtained. Then, the actual impedances and voltages can be substituted into the per-unit definition to obtain the per-unit values. If the per-unit values are known, the real values can be recovered by multiplying by the base values.
By convention, the following two rules are adopted for base quantities:
The base power value is the same for the entire power system of concern.
The ratio of the voltage bases on either side of a transformer is selected to be the same as the ratio of the transformer voltage ratings.
With these two rules, a per-unit impedance remains unchanged when referred from one side of a transformer to the other. This allows the ideal transformer to be eliminated from a transformer model.
Relationship between units
The relationship between units in a per-unit system depends on whether the system is single-phase or three-phase.
Single-phase
Assuming that the independent base values are power and voltage, we have:

    P_base = 1 pu
    V_base = 1 pu

Alternatively, the base value for power may be given in terms of reactive or apparent power, in which case we have, respectively,

    Q_base = 1 pu   or   S_base = 1 pu

The rest of the units can be derived from power and voltage using the equations S = VI, P = S cos φ, Q = S sin φ and V = IZ (Ohm's law), impedance being represented by Z. We have:

    I_base = S_base / V_base = 1 pu
    Z_base = V_base / I_base = V_base² / S_base = 1 pu
    Y_base = 1 / Z_base = 1 pu
Three-phase
Power and voltage are specified in the same way as single-phase systems. However, due to differences in what these terms usually represent in three-phase systems, the relationships for the derived units are different. Specifically, power is given as total (not per-phase) power, and voltage is line-to-line voltage.
In three-phase systems the equations S = VI and V = IZ also hold. The apparent power now equals S_base = √3 × V_base × I_base, so that

    I_base = S_base / (√3 × V_base)
    Z_base = V_base / (√3 × I_base) = V_base² / S_base
    Y_base = 1 / Z_base
Example of per-unit
As an example of how per-unit is used, consider a three-phase power transmission system that deals with powers of the order of 500 MW and uses a nominal voltage of 138 kV for transmission. We arbitrarily select S_base = 500 MVA, and use the nominal voltage 138 kV as the base voltage V_base. We then have:

    I_base = S_base / (√3 × V_base) ≈ 2.09 kA
    Z_base = V_base² / S_base ≈ 38.1 Ω
    Y_base = 1 / Z_base ≈ 26.3 mS

If, for example, the actual voltage at one of the buses is measured to be 136 kV, we have:

    V_pu = V / V_base = 136 kV / 138 kV ≈ 0.986 pu
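The arithmetic of this example is simple enough to script. Here is a minimal Python sketch using the base choices above (the function name is illustrative):

    import math

    def three_phase_bases(s_base_va, v_base_ll):
        """Derive base current and impedance from base power and line-to-line voltage."""
        i_base = s_base_va / (math.sqrt(3) * v_base_ll)   # base current, in A
        z_base = v_base_ll ** 2 / s_base_va               # base impedance, in ohms
        return i_base, z_base

    S_BASE, V_BASE = 500e6, 138e3                  # 500 MVA, 138 kV
    i_base, z_base = three_phase_bases(S_BASE, V_BASE)
    print(f"I_base = {i_base:.0f} A")              # ~2092 A
    print(f"Z_base = {z_base:.2f} ohm")            # ~38.09 ohm
    print(f"136 kV = {136e3 / V_BASE:.4f} pu")     # 0.9855 pu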
Per-unit system formulas
The following tabulation of per-unit system formulas is adapted from Beeman's Industrial Power Systems Handbook.
In transformers
It can be shown that voltages, currents, and impedances in a per-unit system will have the same values whether they are referred to primary or secondary of a transformer.
For instance, for voltage, we can prove that the per-unit voltages of the two sides of the transformer, side 1 and side 2, are the same. Here, the per-unit voltages of the two sides are E1pu and E2pu respectively:

    E1pu = E1 / Vbase1
    E2pu = E2 / Vbase2 = (E1 × N2 / N1) / (Vbase1 × N2 / N1) = E1 / Vbase1 = E1pu
(source: Alexandra von Meier Power System Lectures, UC Berkeley)
E1 and E2 are the voltages of sides 1 and 2 in volts. N1 is the number of turns the coil on side 1 has. N2 is the number of turns the coil on side 2 has. Vbase1 and Vbase2 are the base voltages on sides 1 and 2.
For current, we can prove in the same way that the per-unit currents of the two sides are equal:

    I1pu = I1 / Ibase1
    I2pu = I2 / Ibase2 = (I1 × N1 / N2) / (Ibase1 × N1 / N2) = I1 / Ibase1 = I1pu
(source: Alexandra von Meier Power System Lectures, UC Berkeley)
where I1,pu and I2,pu are the per-unit currents of sides 1 and 2 respectively. Here, the base currents Ibase1 and Ibase2 are related in the opposite way that Vbase1 and Vbase2 are related, in that

    Ibase1 × Vbase1 = Ibase2 × Vbase2

The reason for this relation is conservation of power:

    Sbase1 = Sbase2
The full-load copper loss of a transformer in per-unit form is equal to the per-unit value of its resistance, because the full-load current is 1 pu by definition:

    P_cu,FL (pu) = I_FL(pu)² × R(pu) = 1² × R(pu) = R(pu)
Therefore, it may be more useful to express the resistance in per-unit form as it also represents the full-load copper loss.
As stated above, there are two degrees of freedom within the per-unit system that allow the engineer to specify any per-unit system. The degrees of freedom are the choice of the base voltage (V_base) and the base power (S_base). By convention, a single base power (S_base) is chosen for both sides of the transformer, equal to the rated power of the transformer. By convention, there are actually two different base voltages, V_base,1 and V_base,2, equal to the rated voltages of either side of the transformer. By choosing the base quantities in this manner, the transformer can be effectively removed from the circuit, as described above. For example:
Take a transformer rated at 10 kVA and 240/100 V. The secondary side has an impedance of 1∠0° Ω. The base impedance on the secondary side is equal to:

    Zbase2 = Vbase2² / Sbase = (100 V)² / 10 kVA = 1 Ω

This means that the per-unit impedance on the secondary side is 1∠0° Ω / 1 Ω = 1∠0° pu. When this impedance is referred to the other side, the impedance becomes:

    Z1 = Z2 × (240 / 100)² = 1∠0° Ω × 5.76 = 5.76∠0° Ω

The base impedance for the primary side is calculated the same way as the secondary:

    Zbase1 = Vbase1² / Sbase = (240 V)² / 10 kVA = 5.76 Ω

This means that the per-unit impedance is 5.76∠0° Ω / 5.76 Ω = 1∠0° pu, which is the same as when calculated from the other side of the transformer, as would be expected.
Another useful tool for analyzing transformers is the base-change formula, which allows the engineer to go from a base impedance with one set of base voltage and base power to another base impedance for a different set of base voltage and base power. This becomes especially useful in real-life applications, where a transformer with a secondary-side voltage of 1.2 kV might be connected to the primary side of another transformer whose rated voltage is 1 kV. The formula is as shown below:

    Z_pu,new = Z_pu,old × (V_base,old / V_base,new)² × (S_base,new / S_base,old)
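A minimal Python sketch of this base change (the function name and example ratings are illustrative):

    def rebase_impedance(z_pu_old, v_old, s_old, v_new, s_new):
        """Re-express a per-unit impedance on new voltage and power bases."""
        return z_pu_old * (v_old / v_new) ** 2 * (s_new / s_old)

    # e.g. 0.05 pu on 1.2 kV / 5 MVA bases, re-expressed on 1 kV / 10 MVA bases:
    z_new = rebase_impedance(0.05, 1.2e3, 5e6, 1.0e3, 10e6)
    print(f"{z_new:.3f} pu")   # 0.05 * 1.44 * 2 = 0.144 pu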
References
Electrical engineering
Electric power
Power engineering | Per-unit system | Physics,Engineering | 1,980 |
73,708,520 | https://en.wikipedia.org/wiki/HD%20192827 | HD 192827, also known as HR 7745 or rarely 83 G. Telescopii, is a solitary red hued star located in the southern constellation Telescopium. It has an apparent magnitude of 6.28, placing it near the limit for naked eye visibility. The object is located relatively far at a distance of 1,320 light years based on Gaia DR3 parallax measurements, but it is approaching with a heliocentric radial velocity of . At its current distance, HD 192827's brightness is diminished by 0.19 magnitudes due to interstellar dust and it has an absolute magnitude of −1.07.
HD 192827 has a stellar classification of M1 III, indicating that it is an evolved red giant. It is currently on the asymptotic giant branch, generating energy by fusing hydrogen and helium in shells around an inert carbon core. Having exhausted the hydrogen at its core, HD 192827 has expanded to 119 times the radius of the Sun and now radiates 1,242 times the luminosity of the Sun from its enlarged photosphere, at the cool effective temperature characteristic of an M-type giant. It has a mass comparable to the Sun's and a metallicity of [Fe/H] = −0.24, making it metal-deficient.
HD 192827 was first suspected to be variable in 1997, based on Hipparcos satellite photometry. It fluctuates between magnitudes 6.34 and 6.40 in the Hipparcos passband. As of 2004, however, HD 192827 had not been confirmed to be variable.
See also
HD 192886, an F-type main-sequence star located 519.4" away.
References
M-type giants
Asymptotic-giant-branch stars
Suspected variables
Telescopium
Telescopii, 83
CD-48 13509
192827
100151
7745 | HD 192827 | Astronomy | 383 |
1,239,940 | https://en.wikipedia.org/wiki/REFSMMAT | REFSMMAT is a term used by guidance, navigation, and control system flight controllers during the Apollo program, which carried over into the Space Shuttle program. REFSMMAT stands for "Reference to Stable Member Matrix". It is a numerical definition of a fixed orientation in space and is usually (but not always) defined with respect to the stars. It was used by the Apollo Primary Guidance, Navigation and Control System (PGNCS) as a reference to which the gimbal-mounted platform at its core should be oriented. Every operation within the spacecraft that required knowledge of direction was carried out with respect to the orientation of the guidance platform, itself aligned according to a particular REFSMMAT.
During an Apollo flight, the REFSMMAT being used, and therefore the orientation of the guidance platform, would change as operational needs required it, but never during a guidance process—that is, one REFSMMAT might be in use from launch through Trans-Lunar Injection, another from TLI to Midpoint, but would not change during the middle of a burn or set of maneuvers.
One consideration in choosing each respective REFSMMAT was to avoid taking the spacecraft near the gimbal lock zone of its Inertial Measurement Unit during any expected spacecraft maneuvers, since the exact orientation of the "forbidden" range of spacecraft attitudes would depend on the current REFSMMAT.
Additionally, it was considered good practice to have the spacecraft displays show some meaningful attitude value that would be easy to monitor during an important engine burn. Flight controllers at mission control in Houston would calculate what attitude the spacecraft had to be at for that burn and would devise a REFSMMAT that matched it in some way. Then, when it came time for the burn, if the spacecraft was in its correct attitude, the crew would see their 8-ball display a simple attitude that would be easy to interpret, allowing errors to be easily tracked and corrected.
In the hallowed halls of mission control, Captain Refsmmat was a Kilroy-type character, conceived as a joke spoken to a 'Flight Dynamics Branch' rookie by Flight Controller RETRO John Llewellyn, and first drawn by flight controller FIDO Ed Pavelka as the "ideal mission controller". 'Capt. Refsmmat' served during the Apollo and Skylab years as an aid to the esprit de corps within the mission control team.
See also
Apollo program
References
Aerospace engineering | REFSMMAT | Astronomy,Engineering | 490 |