The Robinson–Dadson curves are one of many sets of equal-loudness contours for the human ear, determined experimentally by D. W. Robinson and R. S. Dadson. [1] Until recently, it was common to see the term Fletcher–Munson used to refer to equal-loudness contours generally, even though the re-determination carried out by Robinson and Dadson in 1956 became the basis for the ISO standard ISO 226, which was revised only recently. It is now better to use "equal-loudness contours" as the generic term, especially as a recent survey by ISO redefined the curves in a new standard, ISO 226:2003. According to the ISO report, the Robinson–Dadson results were the odd one out, differing more from the current standard than the Fletcher–Munson curves did. It comments that it is fortunate that the 40-phon Fletcher–Munson curve, on which the A-weighting standard was based, turns out to be in good agreement with modern determinations. The report also comments on the large differences apparent in the low-frequency region, which remain unexplained; several possible explanations have been proposed.
https://en.wikipedia.org/wiki/Robinson–Dadson_curves
The Robinson–Gabriel synthesis is an organic reaction in which a 2-acylamino ketone reacts intramolecularly, followed by a dehydration, to give an oxazole. A cyclodehydrating agent is needed to catalyze the reaction. [1] [2] [3] It is named after Sir Robert Robinson and Siegmund Gabriel, who described the reaction in 1909 and 1910, respectively. The 2-acylamino ketone starting material can be synthesized using the Dakin–West reaction. Protonation of the keto moiety (1) is followed by cyclization (2) and dehydration (3); the oxazole ring is less basic than the starting 2-acylamido ketone and so may be readily neutralized (4). [4] Labeling studies have determined that the amide oxygen is the more Lewis basic and is therefore the one incorporated into the oxazole. [5] Recently, a solid-phase version of the Robinson–Gabriel synthesis has been described. The reaction requires trifluoroacetic anhydride as the cyclodehydrating agent in an ethereal solvent, and the 2-acylamido ketone must be linked through its nitrogen atom to a benzhydrylic-type linker. [6] A one-pot diversity-oriented synthesis has been developed via a Friedel–Crafts/Robinson–Gabriel sequence using a general oxazolone template. The combination of aluminum chloride as the Friedel–Crafts Lewis acid and trifluoromethanesulfonic acid as the Robinson–Gabriel cyclodehydrating agent was found to generate the desired products. [7] A popular extension of the Robinson–Gabriel cyclodehydration has been reported by Wipf et al. that allows the synthesis of substituted oxazoles from readily available amino acid derivatives. This is achieved through side-chain oxidation with the Dess–Martin reagent, followed by cyclodehydration of the intermediate β-keto amides with triphenylphosphine, iodine, and triethylamine. [8] Additionally, a coupled Ugi and Robinson–Gabriel synthesis has been reported, beginning with the Ugi reagents and ending with an oxazole core within the molecule.
The oxazole is formed from the Ugi intermediate, which is well suited to undergo Robinson–Gabriel cyclodehydration with sulfuric acid. [9] Many cyclodehydrating agents have been found useful in the Robinson–Gabriel synthesis. Historically, the dehydration agent was concentrated sulfuric acid. To date, the reaction has been shown to proceed with a variety of other agents, including phosphorus pentachloride, phosphorus pentoxide, phosphoryl chloride, thionyl chloride, phosphoric acid–acetic anhydride, polyphosphoric acid, and hydrogen fluoride, among others. [10] Oxazoles are common substructures in many naturally isolated compounds and have thus garnered attention within the chemical and pharmaceutical community. The Robinson–Gabriel synthesis has been used in multiple studies dealing with molecules that incorporate an oxazole, among them diazonamide A, [11] diazonamide B, [12] bis-phosphine platinum(II) complexes, [13] mycalolide A, [14] and (−)-muscoride A. [15] Eric Biron et al. developed a solid-phase synthesis of 1,3-oxazole-based peptides from dipeptides by oxidation of the side chain, followed by Wipf and Miller's cyclodehydration of β-keto amides described above. [16] Lilly Research Laboratories has disclosed the structure of a dual PPARα/γ agonist with possible beneficial impact on type 2 diabetes. The Robinson–Gabriel cyclodehydration is the final step of the synthesis of the agonist: aspartic acid β-esters undergo acylation to introduce the first substituent, linked to carbon-2; Dakin–West conversion to the keto-amide introduces the second substituent; and the sequence ends with the Robinson–Gabriel cyclodehydration at 90 °C for 30 minutes with either phosphorus oxychloride in dimethylformamide or catalytic sulfuric acid in acetic anhydride. [17]
https://en.wikipedia.org/wiki/Robinson–Gabriel_synthesis
In mathematics, the Robinson–Schensted–Knuth correspondence, also referred to as the RSK correspondence or RSK algorithm, is a combinatorial bijection between matrices A with non-negative integer entries and pairs (P, Q) of semistandard Young tableaux of equal shape, whose size equals the sum of the entries of A. More precisely, the weight of P is given by the column sums of A, and the weight of Q by its row sums. It is a generalization of the Robinson–Schensted correspondence, in the sense that taking A to be a permutation matrix, the pair (P, Q) will be the pair of standard tableaux associated to the permutation under the Robinson–Schensted correspondence. The Robinson–Schensted–Knuth correspondence extends many of the remarkable properties of the Robinson–Schensted correspondence, notably its symmetry: transposition of the matrix A results in interchange of the tableaux P, Q. The Robinson–Schensted correspondence is a bijective mapping between permutations and pairs of standard Young tableaux, both having the same shape. This bijection can be constructed using an algorithm called Schensted insertion, starting with an empty tableau and successively inserting the values σ_1, ..., σ_n of the permutation σ; these form the second line when σ is given in two-line notation σ = (1 2 … n / σ_1 σ_2 … σ_n). The first standard tableau P is the result of the successive insertions; the other standard tableau Q records the successive shapes of the intermediate tableaux during the construction of P. The Schensted insertion easily generalizes to the case where σ has repeated entries; in that case the correspondence will produce a semistandard tableau P rather than a standard tableau, but Q will still be a standard tableau.
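The Robinson–Schensted construction described above can be sketched in a few lines of code (an illustrative implementation, not from any particular library; tableaux are represented as lists of rows, and the function names are my own):

```python
def schensted_insert(P, Q, value, label):
    """Row-insert `value` into tableau P; record `label` in Q at the new square.

    Inserting a value into a row bumps the leftmost entry strictly greater
    than it down to the next row, and so on, until a value lands at the end
    of a row (or starts a new row at the bottom)."""
    row = 0
    while True:
        if row == len(P):                 # fell off the bottom: start a new row
            P.append([value])
            Q.append([label])
            return
        r = P[row]
        for i, x in enumerate(r):
            if x > value:                 # bump x down to the next row
                r[i], value = value, x
                break
        else:                             # no larger entry: append here
            r.append(value)
            Q[row].append(label)
            return
        row += 1

def robinson_schensted(perm):
    """Map a permutation (given as a sequence of values) to its (P, Q) pair."""
    P, Q = [], []
    for step, v in enumerate(perm, start=1):
        schensted_insert(P, Q, v, step)
    return P, Q
```

For example, the permutation 3, 1, 2 yields P = [[1, 2], [3]] and Q = [[1, 3], [2]]; its inverse 2, 3, 1 yields the swapped pair, illustrating the symmetry property.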
The definition of the RSK correspondence reestablishes symmetry between the P and Q tableaux by producing a semistandard tableau for Q as well. The two-line array (or generalized permutation) w_A corresponding to a matrix A is defined [1] as follows: for any pair (i, j) that indexes an entry A_{i,j} of A, there are A_{i,j} columns equal to (i / j), and all columns are arranged in lexicographic order. For example, the two-line array corresponding to the matrix with rows (1 0 2), (0 2 0), (1 1 0) is w_A = (1 1 1 2 2 3 3 / 1 3 3 2 2 1 2). By applying the Schensted insertion algorithm to the bottom line of this two-line array, one obtains a pair consisting of a semistandard tableau P and a standard tableau Q_0, where the latter can be turned into a semistandard tableau Q by replacing each entry b of Q_0 by the b-th entry of the top line of w_A. One thus obtains a bijection from matrices A to ordered pairs (P, Q) of semistandard Young tableaux of the same shape, [2] in which the set of entries of P is that of the second line of w_A, and the set of entries of Q is that of the first line of w_A. The number of entries j in P is therefore equal to the sum of the entries in column j of A, and the number of entries i in Q is equal to the sum of the entries in row i of A. In the above example, applying the Schensted insertion to successively insert 1, 3, 3, 2, 2, 1, 2 into an initially empty tableau results in a tableau P and an additional standard tableau Q_0 recording the successive shapes; after replacing the entries 1, 2, 3, 4, 5, 6, 7 in Q_0 successively by 1, 1, 1, 2, 2, 3, 3 one obtains the pair of semistandard tableaux (P, Q). The above definition uses the Schensted algorithm, which produces a standard recording tableau Q_0, and modifies it to take into account the first line of the two-line array and produce a semistandard recording tableau; this makes the relation to the Robinson–Schensted correspondence evident.
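The full correspondence — unrolling the matrix into its two-line array and recording the top-line entry directly in Q at each new square — can be sketched as follows (an illustrative implementation; the name `rsk` and the list-of-rows tableau representation are my own):

```python
def rsk(A):
    """RSK: matrix of non-negative integers -> (P, Q) semistandard tableaux.

    Builds the two-line array w_A in lexicographic order, Schensted-inserts
    the bottom line into P, and records the corresponding top-line entry in Q
    at each newly created square."""
    top, bottom = [], []
    for i, row in enumerate(A, start=1):
        for j, a in enumerate(row, start=1):
            top.extend([i] * a)           # A_{i,j} copies of the column (i / j)
            bottom.extend([j] * a)
    P, Q = [], []
    for i_val, j_val in zip(top, bottom):
        value, row = j_val, 0
        while True:
            if row == len(P):             # new row at the bottom
                P.append([value]); Q.append([i_val]); break
            r = P[row]
            k = next((k for k, x in enumerate(r) if x > value), None)
            if k is None:                 # value goes at the end of this row
                r.append(value); Q[row].append(i_val); break
            r[k], value = value, r[k]     # bump to the next row
            row += 1
    return P, Q
```

For the example in the text, with bottom line 1, 3, 3, 2, 2, 1, 2 and top line 1, 1, 1, 2, 2, 3, 3, this produces P and Q of the same shape, with weight of P equal to the column sums (2, 3, 2) of A and weight of Q equal to the row sums (3, 2, 2); feeding in the transpose of A swaps P and Q, as the symmetry property predicts.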
It is natural, however, to simplify the construction by modifying the shape-recording part of the algorithm to directly take into account the first line of the two-line array; it is in this form that the algorithm for the RSK correspondence is usually described. This simply means that after every Schensted insertion step, the tableau Q is extended by adding, as entry of the new square, the b-th entry i_b of the first line of w_A, where b is the current size of the tableaux. That this always produces a semistandard tableau follows from the property (first observed by Knuth [2]) that for successive insertions with an identical value in the first line of w_A, each successive square added to the shape is in a column strictly to the right of the previous one. Here is a detailed example of this construction of both semistandard tableaux, starting from the two-line array w_A = (2 2 3 4 5 6 6 8 / 4 6 4 7 5 3 4 1). The following table shows the construction of both tableaux for this example. If A is a permutation matrix then RSK outputs standard Young tableaux (SYT) P, Q of the same shape λ. Conversely, if P, Q are SYT having the same shape λ, then the corresponding matrix A is a permutation matrix. As a result of this property, by simply comparing the cardinalities of the two sets on the two sides of the bijective mapping we get the following corollary: Corollary 1: For each n ≥ 1 we have Σ_{λ ⊢ n} (t_λ)² = n!, where λ ⊢ n means that λ varies over all partitions of n, and t_λ is the number of standard Young tableaux of shape λ.
Let A be a matrix with non-negative entries. If the RSK algorithm maps A to (P, Q), then it maps the transpose A^T to (Q, P). [1] In particular, for permutation matrices one recovers the symmetry of the Robinson–Schensted correspondence: [3] Theorem 2: If the permutation σ corresponds to a triple (λ, P, Q), then the inverse permutation σ⁻¹ corresponds to (λ, Q, P). This leads to the following relation between the number of involutions on S_n and the number of tableaux that can be formed from S_n (an involution is a permutation that is its own inverse): [3] Corollary 2: The number of tableaux that can be formed from {1, 2, 3, …, n} is equal to the number of involutions on {1, 2, 3, …, n}. Proof: If π is an involution corresponding to (P, Q), then π = π⁻¹ corresponds to (Q, P); hence P = Q. Conversely, if π is any permutation corresponding to (P, P), then π⁻¹ also corresponds to (P, P); hence π = π⁻¹. So there is a one-to-one correspondence between involutions π and tableaux P. The number of involutions on {1, 2, 3, …, n} is given by the recurrence a(n) = a(n−1) + (n−1)·a(n−2), where a(1) = 1 and a(2) = 2.
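The recurrence above is easy to compute iteratively (a minimal sketch; the name `involutions` is my own, and a(0) = 1 is taken as the base case consistent with a(2) = a(1) + 1·a(0) = 2):

```python
def involutions(n):
    """Number of involutions on {1, ..., n} via a(n) = a(n-1) + (n-1)*a(n-2).

    The element n is either a fixed point (a(n-1) ways) or swapped with one
    of the other n-1 elements (a(n-2) ways for each choice)."""
    if n <= 1:
        return 1
    prev2, prev1 = 1, 1          # a(0), a(1)
    for k in range(2, n + 1):
        prev2, prev1 = prev1, prev1 + (k - 1) * prev2
    return prev1
```

The first few values are 1, 2, 4, 10, 26, which by Corollary 2 also count the standard Young tableaux on 1 through 5 cells.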
By solving this recurrence one can compute the number of involutions on {1, 2, 3, …, n}. Let A = A^T and let the RSK algorithm map the matrix A to the pair (P, P), where P is an SSYT of shape α. [1] Let α = (α_1, α_2, …), where the α_i are non-negative integers with Σ_i α_i < ∞. Then the map A ⟼ P establishes a bijection between symmetric matrices with row(A) = α and SSYTs of weight α. The Robinson–Schensted–Knuth correspondence provides a direct bijective proof of the celebrated Cauchy identity for symmetric functions, Π_{i,j} (1 − x_i y_j)⁻¹ = Σ_λ s_λ(x) s_λ(y), where the s_λ are Schur functions. Fix partitions μ, ν ⊢ n; then Σ_λ K_{λμ} K_{λν} = N_{μν}, where K_{λμ} and K_{λν} denote Kostka numbers and N_{μν} is the number of matrices A, with non-negative entries, satisfying row(A) = μ and column(A) = ν.
https://en.wikipedia.org/wiki/Robinson–Schensted–Knuth_correspondence
RoboTurb is a welding robot developed at the Universidade Federal de Santa Catarina to repair turbine blades. It is a redundant robot mounted on a flexible rail. [1] The RoboTurb project started in 1998 at the Universidade Federal de Santa Catarina, initially with the support of the Brazilian government and the public power utility company COPEL (Companhia Paranaense de Energia Elétrica). Three phases followed, and the project is now mainly maintained by another public power utility company, FURNAS (Furnas Centrais Elétricas).
https://en.wikipedia.org/wiki/RoboTurb
Robocasting (also known as robotic material extrusion [1]) is an additive manufacturing technique analogous to direct ink writing and other extrusion-based 3D-printing techniques, in which a filament of a paste-like material is extruded from a small nozzle while the nozzle is moved across a platform. [2] The object is thus built by printing the required shape layer by layer. The technique was first developed in the United States in 1996 as a method to allow geometrically complex ceramic green bodies to be produced by additive manufacturing. [3] In robocasting, a 3D CAD model is divided up into layers in a similar manner to other additive manufacturing techniques. The material (typically a ceramic slurry) is then extruded through a small nozzle as the nozzle's position is controlled, drawing out the shape of each layer of the CAD model. The material exits the nozzle in a liquid-like state but retains its shape immediately, exploiting the rheological property of shear thinning. It is distinct from fused deposition modelling in that it does not rely on solidification or drying to retain its shape after extrusion. Robocasting begins with a software process. One method is to import an STL file and slice the shape into layers of thickness similar to the nozzle diameter. The part is produced by extruding a continuous filament of material in the shape required to fill the first layer. Next, either the stage is moved down or the nozzle is moved up, and the next layer is deposited in the required pattern. This is repeated until the 3D part is complete. Numerically controlled mechanisms are typically used to move the nozzle in a calculated tool-path generated by a computer-aided manufacturing (CAM) software package. Stepper motors or servo motors are usually employed to move the nozzle with precision as fine as nanometers. [4] The part is typically very fragile and soft at this point.
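The slicing step described above can be illustrated with a simplified sketch (the helper functions are hypothetical, not from any robocasting package; a real slicer would also compute the exact triangle/plane intersection contours and the fill toolpaths):

```python
def slice_levels(z_min, z_max, nozzle_diameter):
    """Z-heights of the slice planes, one layer per nozzle diameter,
    starting half a diameter above the bottom of the part."""
    levels, z = [], z_min + nozzle_diameter / 2
    while z < z_max:
        levels.append(z)
        z += nozzle_diameter
    return levels

def layers_crossed(triangles, levels):
    """For each slice plane, collect the mesh triangles it intersects.
    `triangles` is a list of triangles, each a tuple of three (x, y, z)
    vertices, as would be read from an STL file."""
    result = {z: [] for z in levels}
    for tri in triangles:
        lo = min(v[2] for v in tri)
        hi = max(v[2] for v in tri)
        for z in levels:
            if lo <= z <= hi:       # the plane passes through this triangle
                result[z].append(tri)
    return result
```

Intersecting each bucketed triangle with its plane would then yield the closed contours that the CAM step turns into a continuous extrusion path for that layer.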
Drying, debinding and sintering usually follow to give the part the desired mechanical properties. Depending on the material composition, printing speed and printing environment, robocasting can typically deal with moderate overhangs and large spanning regions many times the filament diameter in length, where the structure is unsupported from below. [ 5 ] This allows intricate periodic 3D scaffolds to be printed with ease, a capability which is not possessed by other additive manufacturing techniques. These parts have shown extensive promise in fields of photonic crystals , bone transplants, catalyst supports, and filters. Furthermore, supporting structures can also be printed from a "fugitive material" which is easily removed. This allows almost any shape to be printed in any orientation. One key advantage of the robocasting additive manufacturing technique is its ability to utilize a wide range of feedstock “inks,” as shear-thinning ability is the only inherently required material property. As such, robocasting has seen diverse application among many disparate materials classes such as metallic foams , [ 6 ] pre-ceramic polymers , [ 7 ] and biological tissues . [ 8 ] This allows for a wide range of mechanical characteristics to be accessible through this technique, with additional tailoring possible through the use of ink fillers and varying extrusion parameters. Micro- and nano-scale filler materials are commonly used to create composite feedstocks for robocasting and are available in a wide range of compositions, with morphologies typically falling into the broad categories of spheres, platelets, and filaments/tubes. Both composition and morphology play significant roles in the mechanical characteristics imparted by the filler. 
For example, the inclusion of stiff boron nitride nanobarbs within epoxy feedstock has been demonstrated to anisotropically increase overall composite strength and stiffness along the direction of fiber orientation due to their shape asymmetry, [ 9 ] while the inclusion of hollow glass microspheres within the same epoxy feedstock has been demonstrated to isotropically improve specific strength by significantly reducing total density of the composite. [ 10 ] In addition to shape, differing size regimes within fillers of the same morphology have been demonstrated to yield significant changes in mechanical properties. For epoxy-carbon fiber composite systems of identical composition, flexural strength has been shown to generally decrease with decreasing fiber length. However, shorter fibers have also been demonstrated to produce better overall printing behavior during the robocasting process as increasing length also increases the likelihood of jamming within the extruder; higher print fidelity as seen for the shorter fibers generally results in greater reproducibility of mechanical behavior. In addition, very long fibers have exhibited a tendency to break during extrusion, essentially imparting a de facto size cap on filament-type fillers used in robocasting. [ 11 ] Extrusion phenomena inherently tied into the robocasting technique have been shown to have appreciable effects on the mechanical behavior of resulting parts. One of the most significant is the alignment of filler materials within composite feedstocks during deposition, which is enhanced as filler anisotropy increases. This alignment phenomenon also becomes more pronounced with decreasing nozzle diameter and increasing ink deposition speed, as these factors increase the effective shearing experienced by fillers suspended within the feedstock in accordance with Jeffrey-Hamel flow theory . 
Fillers are thus driven to align parallel to the extrusion pathway, imparting significant anisotropic character to the finished part. This anisotropy can be further enhanced by prescribing extrusion pathways that remain parallel throughout the manufacturing process; conversely, prescribing extrusion pathways that exhibit differing orientations, such as 90° "logpile" rotation between layers, can mitigate this effect. [12] Selection of deposition pathing can also be exploited to alter the mechanical characteristics of robocast products, as in the case of non-dense and graded components. The creation of open lattice-type structures via robocasting is widespread and enables optimization of specific strength and stiffness by reducing the cross-sectional footprint of a given feedstock material while retaining much of its bulk mechanical integrity. [13] [14] [15] In addition, the creation of unique deposition pathing via finite element analysis of a desired structure can generate dynamically graded geometries optimized for specific applications. [16] The technique can produce non-dense ceramic bodies which can be fragile and must be sintered before they can be used for most applications, analogous to a wet clay ceramic pot before it is fired. A wide variety of different geometries can be formed with the technique, from solid monolithic parts [2] to intricate microscale "scaffolds" [17] and tailored composite materials. [18] A heavily researched application for robocasting is the production of biologically compatible tissue implants. "Woodpile" stacked lattice structures can be formed quite easily, allowing bone and other tissues in the human body to grow into and eventually replace the transplant. Using various medical scanning techniques, the precise shape of the missing tissue is established, input into 3D modelling software, and printed.
Calcium phosphate glasses and hydroxyapatite have been extensively explored as candidate materials due to their biocompatibility and structural similarity to bone. [ 19 ] Other potential applications include the production of specific high surface area structures, such as catalyst beds or fuel cell electrolytes. [ 20 ] Advanced metal matrix- and ceramic matrix- load bearing composites can be formed by infiltrating woodpile bodies with molten glasses, alloys or slurries. Robocasting has also been used to deposit polymer and sol-gel inks through much finer nozzle diameters (less than 2 μm) than is possible with ceramic inks. [ 4 ]
https://en.wikipedia.org/wiki/Robocasting
The RoboCrane is a kind of manipulator resembling a Stewart platform, but using an octahedral assembly of cables instead of struts. Like the Stewart platform, the RoboCrane has six degrees of freedom (x, y, z, pitch, roll, and yaw). It was developed by Dr. James S. Albus of the US National Institute of Standards and Technology (NIST) using the Real-Time Control System, a hierarchical control system. Given its unusual ability to "fly" tools around a work site, it has many possible applications, including stone carving, shipbuilding, bridge construction, inspection, pipe or beam fitting, and welding. Albus invented and developed a new generation of robot cranes based on six cables and six winches configured as a Stewart platform. The NIST RoboCrane has the capacity to lift and precisely manipulate heavy loads over large volumes with fine control in all six degrees of freedom. Laboratory RoboCranes have demonstrated the ability to manipulate tools such as saws, grinders, and welding torches, and to lift and precisely position heavy objects such as steel beams and cast iron pipe. In 1992, the RoboCrane was selected by Construction Equipment magazine as one of the 100 most significant new products of the year for construction and related industries. It was also selected by Popular Science magazine for the "Best of What's New" award as one of the 100 top products, technologies, and scientific achievements of 1992. [1] A version of the RoboCrane has been commercially developed for the United States Air Force to enable rapid paint stripping, inspection, and repainting of very large military aircraft such as the C-5 Galaxy. RoboCrane is expected to save the United States Air Force $8 million annually at each of its maintenance facilities. This project was recognized in 2008 by a National Laboratories Award for technology transfer.
Potential future applications of the RoboCrane include ship building, construction of high rise buildings, highways, bridges, tunnels, and port facilities; cargo handling, ship-to-ship cargo transfer on the high seas, radioactive and toxic waste clean-up; and underwater applications such as salvage, drilling, cable maintenance, and undersea waste site management. [ 1 ] This article incorporates public domain material from the National Institute of Standards and Technology
https://en.wikipedia.org/wiki/Robocrane
Beijing Roborock Technology Co. Ltd., branded as Roborock, is a Chinese consumer goods company known for its robotic sweeping and mopping devices [1] and handheld cordless stick vacuums. Xiaomi played a key role in the company's founding. [2] Beijing Roborock Technology Co. Ltd. was founded in 2014 in Beijing, China. [3] Its launch was largely supported by Xiaomi. [2] The company raised about $640 million in its February 2020 IPO, [3] and had annual revenue of approximately CNY 4.5 billion as of August 2021. [1] Roborock currently trades on the Shanghai Stock Exchange's STAR Market. [4] Newer models in Roborock's "S" line of robotic floor cleaning devices have an obstacle avoidance system which uses dual cameras and a microprocessor to discern objects as small as 5 cm wide by 3 cm high. [5] As the cleaners move about a space they create a schematic map, marking objects to be avoided later. Roborock has previously claimed that its floor cleaning devices do not store images or upload them to the cloud, and that all captured images are immediately deleted after processing. [5] Roborock introduced ReactiveAI 2.0 with the release of the Roborock S7 MaxV. It has an RGB camera and 3D structured light scanning with a new neural processor for improved object recognition regardless of lighting conditions. [6] In addition to their front-mounted cameras, newer Roborock floor cleaning devices use top-mounted LIDAR to map rooms. Using an app, users can set off-limits areas to ensure the device does not clean there. Users can also set "no-mop" areas where the device may vacuum but not mop. [5] The Roborock Q7 Max, [7] released in 2022, [8] generates 4,200 Pa of suction and can be controlled by Alexa, Siri, or Google Assistant. [9] In 2023, Roborock released the S8, S8 Plus and S8 Pro Ultra. The main difference between the models is the docking station each includes.
The S8 has a standard charging base, whereas the S8 Plus includes an Auto-Empty Dock. [10] The S8 Pro Ultra ships with the RockDock Ultra, the most advanced dock Roborock offers. In addition to emptying the S8's dustbin and charging the robot, the dock also manages the S8's mopping system, including refilling its water and drying its mop pad. The S8 Pro Ultra is the first Roborock robot vacuum with lifting dual brushrolls. [11] The S8 and S8 Plus have dual brushrolls, but they do not lift. All models which precede the S8 have a single brushroll. The Roborock S7 MaxV Ultra has 5,100 Pa of suction and a livestreaming camera. [12] [13] [14] The Roborock S7, [15] which debuted at CES 2021, [16] uses the trademarked VibraRise mopping system. [17] The S7 can detect the type of floor and use either its mop or its vacuum accordingly. [18] The Roborock S6 MaxV operates at 67 dB and generates a maximum suction of 2,500 Pa. [5] [19] Its dustbin measures 460 mL at full capacity. [20] It can vacuum approximately 250 square meters between charges, and its mop can cover about 200 square meters of hard flooring on the same charge. The Roborock S4 does not mop. In 2022, Roborock released the Q5, which replaces the S models and is similar to the S4 Max and the S5. The Q5 has higher suction power but lacks the mop feature. [21]
https://en.wikipedia.org/wiki/Roborock
The Robot Constitution is a set of safety rules, part of the AutoRT system introduced by Google DeepMind in January 2024 for its AI products. The rules are inspired by Asimov's Three Laws of Robotics and are applied to the large language models underlying the helper robots. [1] [2] [3] [4] [5] [6] Rule number 1 is that a robot "may not injure a human being".
https://en.wikipedia.org/wiki/Robot_Constitution
The Robot Interaction Language (ROILA) is the first spoken language created specifically for talking to robots. [1] ROILA is being developed by the Department of Industrial Design at Eindhoven University of Technology. The major goals of ROILA are that it should be easily learnable by the user and optimized for efficient recognition by robots. ROILA has a syntax that allows it to be useful for many different kinds of robots, including the Roomba and Lego Mindstorms NXT. ROILA is free for anybody to use and contribute to, as the team has released all documentation and tools under a Creative Commons license. [2] ROILA was developed in response to the need for a unified language for humans to speak to robots. The designers performed research into the ability of robots to recognize and interpret natural languages. They discovered that natural languages can be very confusing for robots to interpret, due to elements such as homophones and tenses. Based on this research, the team set out to create a genetic algorithm that would generate an artificial vocabulary in a way that would be easy for a human to pronounce. The algorithm used the most common phonemes from the most popular natural languages and created easy-to-pronounce words. The team took the results of this algorithm and formed the ROILA vocabulary. [3] ROILA has an isolating grammar, meaning that it doesn't have suffixes or prefixes added to words to change their meanings. Instead, these changes are expressed by adding word markers that specify what the changes are, such as the tense of the preceding verb. For example, in English the suffix "ed" is added to a word to show that it is in the past tense, but in ROILA the marker word "jifi" is placed after the verb. [4] Below is the list of all letters and sounds used in ROILA: [5] Of the 26 letters of the English alphabet, c, d, g, h, q, r, v, x, y, and z are not used.
The vocabulary of ROILA was generated by an algorithm designed to create a vocabulary with the least confusion amongst words. Each word generated by this algorithm was assigned a basic meaning, taken from Basic English. The words from Basic English that are used most frequently are assigned to the shortest ROILA words generated by the algorithm. A short list of words in ROILA is included below, along with their English meanings. ROILA was designed to have a completely regular grammar, with no exceptions: all rules apply to all words in a part of speech. Due to ROILA's simple isolating grammar, whole-word markers are added after parts of speech to show the grammatical category. For example, a word marker placed after a verb applies a tense, while a word marker placed after a noun applies plurality. ROILA has five parts of speech: nouns, verbs, adverbs, adjectives, and pronouns. The only pronouns are I, you, he, and she. [7] Sentences follow a subject–verb–object word order. The following examples attempt to show what the syntax of the language looks like in various uses. ROILA is currently only available for the Lego Mindstorms NXT. It uses the CMU Sphinx speech recognition library to interpret spoken commands to the NXT and transform them into ROILA commands.
https://en.wikipedia.org/wiki/Robot_Interaction_Language
Robot ethics , sometimes known as " roboethics ", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic (such as in healthcare or as " killer robots " in war), and how robots should be designed such that they act "ethically" (this last concern is also called machine ethics ). Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. [ 1 ] Robot ethics is a sub-field of the ethics of technology , specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race. [ 2 ] While the issues are as old as the word robot , serious academic discussions started around the year 2000. Robot ethics requires the combined commitment of experts of several disciplines, who have to adjust laws and regulations to the problems resulting from the scientific and technological achievements in robotics and AI. The main fields involved in robot ethics are: robotics , computer science , artificial intelligence , philosophy , ethics , theology , biology , physiology , cognitive science , neurosciences , law , sociology , psychology , and industrial design . [ 3 ] Some of the central discussion concerns the ethics of treating non-human or non-biological things and their potential "spirituality"; another central topic has to do with the development of machinery and, eventually, robots, and how this philosophy was applied to robotics. One of the first publications directly addressing and setting the foundation for robot ethics was " Runaround ", a science fiction short story written by Isaac Asimov in 1942, which featured his well-known Three Laws of Robotics .
These three laws were continuously altered by Asimov, and a fourth – or "zeroth" – law was eventually added to precede the first three, in the context of his science fiction works. The term "roboethics" was most likely coined by Gianmarco Veruggio. [ 4 ] An important event that propelled the concern of roboethics was the First International Symposium on Roboethics in 2004, a collaborative effort of Scuola di Robotica, the Arts Lab of Scuola Superiore Sant'Anna, Pisa, and the Theological Institute of Pontificia Accademia della Santa Croce, Rome. [ 5 ] This symposium on roboethics grew out of the activities of the School of Robotics, a non-profit organization whose mission is to promote knowledge of the science of robotics among students and the general public. In discussions with students and non-specialists, Gianmarco Veruggio and Fiorella Operto thought that it was necessary to spread correct conceptions among the general public about the alleged dangers in robotics. They thought that a productive debate based on accurate insights and real knowledge could push people to take an active part in the education of public opinion, make them comprehend the positive uses of the new technology, and prevent its abuse. After two days of intense debating, anthropologist Daniela Cerqui identified three main ethical positions emerging from this debate: These are some important events and projects in robot ethics. Further events in the field are announced by the euRobotics ELS topics group , and by RoboHub : Computer scientist Virginia Dignum noted in a March 2018 issue of Ethics and Information Technology that the general societal attitude toward artificial intelligence (AI) has, in the modern era, shifted away from viewing AI as a tool and toward viewing it as an intelligent "team-mate".
In the same article, she assessed that, with respect to AI, ethical thinkers have three goals, each of which she argues can be achieved in the modern era with careful thought and implementation. [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] The three ethical goals are as follows: Roboethics as a science or philosophical topic has begun to be a common theme in science fiction literature and films. One film ingrained in pop culture that depicts the dystopian future use of robotic AI is The Matrix , depicting a future where humans and conscious sentient AI struggle for control of planet Earth, resulting in the destruction of most of the human race. An animated film based on The Matrix , the Animatrix , focused heavily on the potential ethical issues and insecurities between humans and robots. The movie is broken into short stories. The Animatrix's animated shorts are also named after Isaac Asimov's fictional stories. Another facet of roboethics is specifically concerned with the treatment of robots by humans, and has been explored in numerous films and television shows. One such example is Star Trek: The Next Generation , which has a humanoid android, named Data , as one of its main characters. For the most part, he is trusted with mission-critical work, but his ability to fit in with the other living beings is often in question. [ 20 ] More recently, the movie Ex Machina and the TV show Westworld have taken on these ethical questions quite directly by depicting hyper-realistic robots that humans treat as inconsequential commodities. [ 21 ] [ 22 ] The questions surrounding the treatment of engineered beings have also been a key component of Blade Runner for over 50 years. [ 23 ] Films like Her have distilled the human relationship with robots even further by removing the physical aspect and focusing on emotions.
Although not a part of roboethics per se , the ethical behavior of robots themselves has also been a recurring issue in roboethics in popular culture. The Terminator series focuses on robots run by a conscious AI program with no restraint on the termination of its enemies. This series shares the same archetype as The Matrix series, where robots have taken control. Another famous pop culture case of robots or AI without programmed ethics or morals is HAL 9000 in the Space Odyssey series, where HAL (a computer with advanced AI capabilities who monitors and assists humans on a space station) kills all the humans on board to ensure the success of the assigned mission after his own life is threatened. [ 24 ] Lethal autonomous weapon systems (LAWS), often called “killer robots,” are theoretically able to target and fire without human supervision and interference. In 2014, the Convention on Conventional Weapons (CCW) held two meetings. The first was the Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS). This meeting concerned the special mandate on LAWS and sparked intense discussion. [ 25 ] National delegations and many non-governmental organizations (NGOs) expressed their opinions on the matter. Numerous NGOs and certain states such as Pakistan and Cuba are calling for a preventive prohibition of LAWS. They based their positions on deontological and consequentialist reasoning. On the deontological side, certain philosophers such as Peter Asaro and Robert Sparrow, most NGOs, and the Vatican all argue that granting too many rights to machines violates human dignity, and that people have the “right not to be killed by a machine.” To support their standpoint, they repeatedly cite the Martens Clause .
At the end of this meeting, the most important consequentialist objection was that LAWS would never be able to respect international humanitarian law (IHL), as believed by NGOs, many researchers, and several states ( Pakistan , Austria , Egypt , Mexico ). According to the International Committee of the Red Cross (ICRC), “there is no doubt that the development and use of autonomous weapon systems in armed conflict is governed by international humanitarian law.” [ 26 ] States recognize this: those who participated in the first UN Expert Meeting in May 2014 recognized respect for IHL as an essential condition for the implementation of LAWS. Predictions diverge: certain states believe LAWS will be unable to meet this criterion, while others underline the difficulty of adjudicating at this stage without knowing the weapons' future capabilities ( Japan , Australia ). All insist equally on the ex-ante verification of the systems' conformity to IHL before they are put into service, by virtue of the first additional protocol to the Geneva Conventions. Three classifications of the degree of human control of autonomous weapon systems were laid out by Bonnie Docherty in a 2012 Human Rights Watch report. [ 27 ] In 2015, the Campaign Against Sex Robots (CASR) was launched to draw attention to the sexual relationship of humans with machines. The campaign claims that sex robots are potentially harmful and will contribute to inequalities in society, and that an organized approach and ethical response against the development of sex robots is necessary. [ 28 ] In the article Should We Campaign Against Sex Robots? , published by the MIT Press , researchers pointed out some flaws in this campaign and did not support a complete ban on sex robots.
Firstly, they argued that the particular claims advanced by the CASR were "unpersuasive," partly because of a lack of clarity about the campaign's aims and partly because of substantive defects in the main ethical objections put forward by the campaign's founders. Secondly, they argued that it would be very difficult to endorse a general campaign against sex robots unless one embraced a highly conservative attitude towards the ethics of sex. Drawing upon the example of the campaign to stop killer robots, they thought that there were no inherently bad properties of sex robots that give rise to similarly serious levels of concern, the harm caused by sex robots being speculative and indirect. Nonetheless, the article concedes that there are legitimate concerns that can be raised about the development of sex robots. [ 29 ] With contemporary technological issues emerging as society pushes on, one topic that requires thorough thought is robot ethics concerning the law. Academics have been debating how a government could go about creating legislation informed by robot ethics and law. Two scholars who have been asking these questions are Neil M. Richards, Professor of Law at Washington University School of Law , and William D. Smart, Associate Professor of Computer Science at McKelvey School of Engineering . In their paper "How Should Robots Think About Law" they make four main claims concerning robot ethics and law. [ 30 ] The groundwork of their argument lies in their definition of robots as "non-biological autonomous agents that we think captures the essence of the regulatory and technological challenges that robots present, and which could usefully be the basis of regulation." Second, the pair explores the future advanced capacities of robots within around a decade's time. Their third claim argues that there is a relation between the legal issues experienced by robot ethics and law and the legal experiences of cyber-law.
This means that robot ethics laws can look towards cyber-law for guidance. The "lesson" learned from cyber-law is the importance of the metaphors through which we understand emerging issues in technology: if we get the metaphor wrong, the legislation surrounding the emerging technological issue is most likely wrong as well. The fourth claim argues against a metaphor that the pair defines as "The Android Fallacy", the claim that humans and non-biological entities are "just like people". There is mixed evidence as to whether people judge robot behavior similarly to humans or not. Some evidence indicates that people view bad behavior negatively and good behavior positively regardless of whether the agent of the behavior is a human or a robot; however, robots receive less credit for good behavior and more blame for bad behavior. [ 31 ] Other evidence suggests that malevolent behavior by robots is seen as more morally wrong than benevolent behavior is seen as morally right; malevolent robot behavior is seen as more intentional than benevolent behavior. [ 32 ] In general, people's moral judgments of both robots and humans are based on the same justifications and concepts, but people have different moral expectations when judging humans and robots. [ 33 ] Research has also found that when people try to interpret and understand how robots decide to behave in a particular way, they may see robots as using rules of thumb (advance the self, do what is right, advance others, do what is logical, and do what is normal) that align with established ethical doctrines (egotism, deontology, altruism, utilitarianism, and normative). [ 34 ]
https://en.wikipedia.org/wiki/Robot_ethics
Robot Research Initiative (RRI) is a research institute dedicated to advanced robotics research. It is an affiliated organization of Chonnam National University in Gwangju , Republic of Korea . Prof. Jong Oh Park moved from the Korea Institute of Science and Technology to Chonnam National University in early 2005 and established RRI in March 2008, where he is still actively in charge. RRI is currently a leading institute in the medical robotics field, especially in the area of biomedical micro/nano robotics. RRI is one of the largest institutions among university robotics laboratories in Korea and competes globally. [ 1 ] The current research focuses of RRI are biomedical micro/nano robotics, surgery robotics, cable robotics, and related areas. The Korean government invests roughly 200 million USD annually in the Korean robotics industry, and almost 90% of this budget is designated for R&D. RRI has been actively involved in government-funded R&D projects. After an over 10-year investment in personal service robotics, as well as IT-based ubiquitous robotics, the government has been strategically investing in medical robotics for the past 6 years. The biomedical micro/nano robotics field in Korea was initiated by RRI, whose reputation and status in the field are well established. The global networking of RRI is mostly focused on biomedical micro/nano robotics, covering engineering and scientific approaches. Prof. Park and his staff have led both government- and industry-funded R&D robotics projects; industry partners include Samsung Electronics , Hyundai Motors , Daewoo Motors , and DSME . The Robot Research Initiative was established in Engineering Building 1A of Chonnam National University in March 2008. [ 2 ] Professor Jong Oh Park was appointed director of the Robot Research Initiative a month later.
In 2008, the Robot Research Initiative signed MOUs for mutual cooperation with KIST Europe, [ 3 ] the Dario Lab at Scuola Superiore Sant'Anna in Italy, and the Sitti Lab at Carnegie Mellon University. In 2010, the Robot Research Initiative announced the 'Development of Biomedical Microrobot for Intravascular Therapy' [ 4 ] to the public; the 'Pioneer research center for bacteriobot' and the 'Space Robot Research Center' were also opened. In March 2011, Prof. Jong Oh Park signed MOUs for mutual cooperation with the Center for Micro-Nano Mechatronics, Nagoya University (CMM), Japan, and the Fondazione Istituto Italiano di Tecnologia (IIT). In 2012, MOUs for mutual cooperation were signed with Fraunhofer-Gesellschaft and Fraunhofer-IPA, and with the National Science Foundation -funded Materials Research Science and Engineering Center of Brandeis University. In March 2013, an MOU with Yanbian University of Science & Technology (Mechanical Material Automation Engineering) was made. In June 2013, a cooperation agreement with Fraunhofer-Gesellschaft and Fraunhofer-IPA was made, a year after the MOU. Most recently, the Robot Research Initiative signed an MOU with Daewoo Shipbuilding & Marine Engineering Co., Ltd. in April 2014. [ 5 ] [ 8 ] Prof. Jong Oh Park , director of RRI, transferred the following technologies developed during his time at KIST.
https://en.wikipedia.org/wiki/Robot_research_initiative
The Robotech Defenders are a line of scale model kits released by Revell during the early 1980s with an accompanying limited comic series published by DC Comics. Contrary to what their name seems to imply, the "Robotech Defenders" are not part of the Robotech anime universe adapted by Carl Macek and released by Harmony Gold USA , but they did adopt the same moniker and logo. The "Robotech Defenders" were one of two "Robotech" lines released by Revell , the other being the "Robotech Changers". The "Robotech Changers" line initially consisted of three models based on the Valkyrie variable fighter designs from Macross , and the NEBO model, based upon the Orguss of Super Dimension Century Orguss . The "Robotech Defenders" model line was tied into a two-issue limited series of the same name, published by DC Comics. It shares many common themes with other science fiction series of that time, including invading aliens and giant mechanical war machines. Seeking to capitalize on the mecha craze of the early 1980s, model company Revell went to Japan to look for suitable mecha models prior to 1984. They eventually licensed a number of Takara's Fang of the Sun Dougram models for the "Defenders" line. These models were repackaged with the "Robotech" moniker and released in North America and Europe . The humanoid mech models had an average size of 30 cm; the in-scale humans were about 2 cm. One of the features of the models was that they were not static, but had fully movable joints and removable equipment. Because of their complexity, details, and number of parts they could be challenging and required adult skill even though they were sold with "ages 12 and up" on their packaging; even experienced modelers found them challenging. In the North American market, the model kits met with much success, appealing to both fans of Robotech and players of the BattleTech tabletop strategy game.
In Europe, however, model sales were disappointing, possibly due to the non-existent background story included with the models and the relatively high prices. Listed below are the Revell Robotech Defenders model kits by number and the source of the model (as well as the corresponding BattleTech name, if known): The Warhammer 1150 "Thoren" and 1151 "Zoltek" models are 1/48 scale, though marked on the box as 1/72. The 1152 "Condar" model kit was boxed in two versions, one stating the scale as 1/72 (incorrect) and one as 1/48 (correct), though both contained the same kit. Listed below are the Revell Robotech Changers model kits by number and the source of the model (as well as the corresponding BattleTech name, if known): Listed below are the Revell Robotech model kits by number and the source of the model (as well as the corresponding BattleTech name, if known): The Revell Robotech models from the Fang of the Sun Dougram line appear to be repacks of model kits made by Takara. The models from the Super Dimension Fortress Macross line appear to be repacks of model kits made by Imai. The models from the Super Dimension Century Orguss line appear to be repacks of model kits made by Arii. Release of the "Robotech Defenders" and "Robotech Changers" model lines caused problems for media company Harmony Gold USA , who had licensed the North American video rights to the Japanese Macross anime series, combining it with two other series to produce an 85-episode series they hoped to market direct to video. Since Revell was already distributing the models, Harmony Gold could not support the show with merchandising. In the end, both companies decided to enter into a co-licensing agreement, and the name Robotech was eventually adopted for the syndicated television show that the home video line had transformed into. Players of FASA's BattleTech tabletop strategy game will instantly recognize many of the Revell models as 'Mechs from the original role-playing game sourcebooks.
The reason for this is that all of the original edition's 'Mech visuals were based on designs from a variety of anime series, including Macross , Dougram and Crusher Joe , from some of which the Revell kits are sourced. FASA eventually became embroiled in a lawsuit with Harmony Gold regarding the use of Macross images, [ 1 ] after which FASA removed all Macross -related images, along with any other images not created in house, from their sourcebooks. Those 'Mechs would later be known by BattleTech fans as ' The Unseen '. The eponymous comic book , a two-issue mini-series, was published by DC Comics in 1984. It was originally intended to be a trilogy, but was reduced to the first normal-sized issue and a 32-page second issue with no advertisements. The universe of the "Robotech Defenders" comic book series bears no resemblance at all to the Robotech universe adapted by Harmony Gold USA . The Robotech Defenders comic predates the conception of the original Robotech cartoon show by about a year. The story followed the battles of a team of pilots who fight a savage race of aliens, called "Grelons", who have invaded all planets of a star system using superior technology. They plan to colonize the planets, using their titanic war machines to eliminate all resistance. The heroes, a small combat unit, are losing badly when their leader accidentally activates one of the Robotech Defenders. She then learns of the existence of the other machines, which are scattered on the other pilots' home planets. Each of these units has a unique range of abilities and environmental specialties (e.g., Aqualo was capable of diving and sea-based activities, Ziyon's element was cold and snow, Thoren's heat and magma, Gartan's urban combat). By the end of the first issue, the team has managed to recover all the robots and engage the enemy in battle, but is still defeated and captured.
They escape by pushing a big red button which releases the Defenders' minds, unleashing the latter's full combat capabilities. The pilots then track down the controllers of the savage aliens and defeat them by causing the evil aliens' energy siphon to suck energy from the sun, which makes their spaceship explode. Revell's division in West Germany, Revell Plastic GmbH, published a one-shot promotional issue of Robotech Defenders with a subtitle translating to "The Defenders of the Cosmos". Written by W. Spiegel with artwork by W. Neugebauer , this original comic was not a reprint of the DC Comics series and was not connected to its continuity. It was translated to Swedish [ 2 ] and packaged with the model. Like the DC Comics series, it also had no connection to the TV series. [ 3 ]
https://en.wikipedia.org/wiki/Robotech_Defenders
Robotic magnetic navigation ( RMN ) (also called remote magnetic navigation) uses robotic technology to direct magnetic fields which control the movement of magnetic-tipped endovascular catheters into and through the chambers of the heart during cardiac catheterization procedures. [ 1 ] Because the human heart beats during ablation procedures, catheter stability can be affected by navigation technique. Magnetic fields created by RMN technology guide the tip of a catheter using a “pull” mechanism of action (as opposed to “push” with manual catheter navigation). Magnetic catheter navigation has been associated with greater catheter stability. [ 2 ] As of 2015 there were two robotic catheterization systems on the market for atrial fibrillation ; one of them used magnetic guidance. [ 3 ] After long-term follow-up, RMN navigation has been associated with better procedural and clinical outcomes for AF ablation when compared with manual catheter navigation for cardiac ablation. [ 4 ] RMN has been shown to be safe and effective for cardiac catheter ablation in various patient populations with ventricular tachycardia . [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Robotic_magnetic_navigation
Robotic sperm (also called spermbots ) are biohybrid microrobots consisting of sperm cells and artificial microstructures. [ 1 ] [ 2 ] [ 3 ] Currently there are two types of spermbots. The first type, the tubular spermbot, consists of a single sperm cell that is captured inside a microtube. Single bull sperm cells enter these microtubes and become trapped inside. The tail of the sperm is the driving force for the microtube. [ 1 ] The second type, the helical spermbot, is a small helix structure which captures and transports single immotile sperm cells. In this case, a rotating magnetic field drives the helix in a screw-like motion. Both kinds of spermbots can be guided by weak magnetic fields. [ 2 ] These two spermbot designs are hybrid microdevices: they consist of a living cell combined with synthetic attachments. Other approaches exist to create purely synthetic microdevices inspired by the swimming of natural sperm cells, i.e. with a biomimetic design; an example is the so-called MagnetoSperm, which are made of a flexible polymeric structure coated with a magnetic layer and can be actuated by a magnetic field. [ 4 ] Initially, the microtubes for the tubular spermbots were made using roll-up nanotechnology on photoresist . [ 5 ] In this process, thin films of titanium and iron were deposited onto a sacrificial layer. When the sacrificial layer was removed, the thin films rolled into 50 μm long microtubes with a diameter of 5–8 μm. Later on, the microtubes were made from a temperature-responsive polymer to enable the controlled release of the sperm cells upon a small temperature change of a few degrees. [ 6 ] Tubular spermbots are assembled by adding a large amount of the microtubes to a diluted sperm sample under the microscope. The sperm cells randomly enter the microtubes and become trapped in their slightly conical cavity.
In order to increase the coupling efficiency between sperm cells and microtubes, the microtubes have been functionalized with proteins or sperm chemoattractant . This has been done using thiol chemistry once the tubes are rolled-up or by transferring the molecules with an elastomer stamp onto the material before rolling the tubes. [ 7 ] Helical spermbots are assembled by driving a magnetic microhelix over an individual sperm cell, thereby confining its tail inside the helix lumen and pushing the head of the sperm forward. The sperm cell is loosely coupled to the helix and can be released by reversing the rotation of the helix, letting it withdraw from the head and free the confined tail in the process. Such microhelices were fabricated by direct laser lithography and coated with nickel or iron for magnetization. [ 2 ] Robotic sperm can be navigated by weak external magnetic fields of a few mT . These fields can be generated by permanent magnets or by a setup of electromagnets . The applied magnetic field can be a homogeneous, rotating, or gradient field. [ 8 ] Tubular and helical spermbots can also be navigated in a closed-loop control scheme with an electromagnetic coil setup. [ 9 ] Spermbots hold promise for potential application in single cell manipulation and assisted reproduction , but also for targeted drug delivery . A recent study shows that modified tubular spermbots can be used for delivery of cancer drugs. [ 10 ] In this case, the sperm cell is loaded with doxorubicin . The artificial microstructure fabricated by two-photon nanolithography captures the drug-loaded sperm cell. The sperm cell is the actuation source for the magnetic microstructure and can propel it to cancer spheroids . At this location, the drug-loaded sperm is released by a spring mechanism and the sperm cell delivers the drug to the cancer cells. 
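The closed-loop navigation idea can be illustrated with a toy kinematic model: at each step the controller points the external field along the bearing from the swimmer to a target, and the field-aligned swimmer advances a small distance in that direction. The step size, tolerance, and units below are arbitrary example values, not parameters of the cited electromagnetic-coil systems.

```python
import math

# Hedged sketch of closed-loop magnetic steering: the field direction is
# continually re-commanded toward the target, so the swimmer homes in even
# from an arbitrary start. This is a simplified 2D kinematic toy, ignoring
# drag, Brownian motion, and actual field physics.

def steer_to_target(start, target, step=0.1, tol=0.05, max_iters=1000):
    """Advance a field-aligned swimmer toward `target` until within `tol`."""
    x, y = start
    tx, ty = target
    for _ in range(max_iters):
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist < tol:
            break                       # target reached within tolerance
        heading = math.atan2(dy, dx)    # commanded field direction
        x += step * math.cos(heading)   # swimmer aligns with field, advances
        y += step * math.sin(heading)
    return (x, y)

end = steer_to_target((0.0, 0.0), (1.0, 1.0))
```

Because the bearing is recomputed every step, the same loop also corrects for an initially wrong heading or a moved target, which is the essence of closed-loop (as opposed to pre-planned, open-loop) control.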
Robotic sperm are of interest as microswimmers for diverse biomedical applications, specifically for new assisted fertilization techniques and for the targeted delivery of therapeutic cargo. These microswimmers are meant to operate in in vivo environments, a feature that may revolutionize assisted reproduction technologies and nanomedicine in the future. [ 11 ] New designs are emerging, and many applications may be derived from this concept. [ 3 ] [ 11 ]
https://en.wikipedia.org/wiki/Robotic_sperm
A robotic vacuum cleaner , sometimes called a robovac or a roomba as a generic trademark , is an autonomous vacuum cleaner which has a limited vacuum floor cleaning system combined with sensors and robotic drives with programmable controllers and cleaning routines. Early designs included manual operation via remote control and a "self-drive" mode which allowed the machine to clean autonomously. [ 1 ] Marketing materials for robotic vacuums frequently cite low noise, ease of use, and autonomous cleaning as main advantages. The perception that these devices are set-and-forget solutions is widespread but not always correct. Robotic vacuums are usually smaller than traditional upright vacuums, and weigh significantly less than even the lightest canister models. However, a downside to a robotic vacuum cleaner is that it takes an extended amount of time to vacuum an area due to its size. They are also relatively expensive, [ 2 ] and replacement parts and batteries can contribute significantly to their operating cost. [ 3 ] Concerns over privacy and security have also been raised around robotic vacuums. [ 4 ] [ 5 ] [ 6 ] In 1956, the American science fiction author Robert A. Heinlein described the concept of a robotic vacuum cleaner with a recharging dock in his novel The Door into Summer : "Basically it was just a better vacuum cleaner .... It went quietly looking for dirt all day long, in search curves that could miss nothing .... Around dinner time it would go to its stall and soak up a quick charge." [ 7 ] The following year, engineer Donald Moore filed a patent for robotic appliances, including a sweeper, that could follow a track laid below the floor. Whirlpool demonstrated the concept at the 1959 American National Exhibition but did not bring it to market. [ 8 ] In 1969, an episode of The Avengers was broadcast in which the character Inge Tilson, played by Dora Reisser, says "...I saw a demonstration once. A robot vacuum cleaner. 
It swept around the house, went back into its cupboard, automatically plugged in and recharged itself...". The teleplay for this episode, entitled "Thingumajig", was written by Terry Nation . It was episode 27 of Season 7. [ 9 ] In 1985, Tomy released the Dustbot as a part of their Omnibot line of toys. Dustbot was the first robot to feature a built-in vacuum, and was able to turn when it sensed an edge or ran into something. Dustbot carried a mini broom and dustpan for decoration. [ 10 ] [ 11 ] [ 12 ] In 1990, three roboticists, Colin Angle, Helen Greiner, and Rodney Brooks, founded iRobot . [ 13 ] It was originally dedicated to making robots for military and domestic use. It launched the Roomba in 2002, which was able to change direction when it encountered an obstacle, detect dirty spots on the floor, and identify steep drops to keep it from falling down stairs. [ 3 ] The Roomba proved to be the first commercially successful robot vacuum. [ 14 ] In 2005, iRobot introduced the Scooba , which scrubbed hard floors. In 1996, Electrolux introduced the first robotic vacuum cleaner, the Electrolux Trilobite . [ 3 ] It worked well but had frequent problems with colliding with objects and stopping short of walls and other objects, as well as leaving small areas not cleaned. [ 3 ] As a result, it failed in the market and was discontinued. [ 3 ] In 1997, one of Electrolux's first versions of the Trilobite vacuum was featured on the BBC 's science program, Tomorrow's World . [ 15 ] In 2001, Dyson built and demonstrated a robot vacuum known as the DC06. However, due to its high price, it was never released to the market. [ 16 ] The Trilobite launched at a price of $1,800.00, in two models: the ZA1 and the ZA2.
In 2010, Neato Robotics introduced the XV-11, one of the first robot vacuums to utilize laser-based mapping, which allowed for navigation in systematic straight lines rather than random navigation. [ 17 ] [ 18 ] In 2015, Dyson and iRobot both introduced camera-based mapping. [ 19 ] [ 20 ] In 2016, iRobot claimed that 20% of vacuum cleaner sales worldwide were robots. [ 21 ] As of 2018, obstacles such as dog feces, cables, and shoes remained very difficult for robots to navigate around. [ 22 ] [ 23 ] In 2022, ECOVACS launched the DEEBOT X1 family featuring the YIKO [ 24 ] Voice Assistant , the industry's first natural-language AI voice interaction and control technology for home robots. [ 25 ] [ 26 ] [ 27 ] In 2023, SwitchBot introduced the K10 Plus, [ 28 ] claiming it as the world's smallest robot vacuum. [ 29 ] [ 30 ] Robotic vacuums have different types of cleaning modes, enabling the robot to target specific areas or work more generally, and to function either under direct human control or automatically. [ 31 ] Some models can also mop for wet cleaning, autonomously vacuuming and wet-mopping a floor in one pass (sweep and mop combo). [ 32 ] The mop is either manually wetted before attachment to the bottom of the robot, or the robot may be able to automatically spray water onto the floor before running over it. Some advanced robot vacuum cleaners have a sensor that detects and avoids mopping in carpeted areas. If there is no such sensor, most robot vacuum cleaner manufacturers add a no-mop zone feature in the app to keep the robot out of certain areas. Some of these robot vacuums are capable of mopping about 150 m 2 (1,600 sq ft) in one go.
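The systematic straight-line coverage enabled by mapping can be illustrated with a boustrophedon ("lawn-mower") path over a grid of floor cells. This is a generic sketch of the pattern, not any vendor's actual navigation algorithm, and it assumes an idealized obstacle-free rectangular room.

```python
# Hedged illustration: a mapping-based robot can sweep the floor in
# alternating straight rows, visiting every cell exactly once, whereas a
# random-walk cleaner revisits cells and may miss some entirely.

def coverage_path(rows, cols):
    """Return a list of (row, col) cells in boustrophedon sweep order."""
    path = []
    for r in range(rows):
        # Sweep left-to-right on even rows, right-to-left on odd rows,
        # so consecutive cells are always adjacent (no wasted travel).
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cells:
            path.append((r, c))
    return path

path = coverage_path(3, 4)
```

For a 3x4 room this yields 12 steps with each cell visited once; a random strategy needs many more moves on average to achieve the same coverage.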
https://en.wikipedia.org/wiki/Robotic_vacuum_cleaner
Robotics engineering is a branch of engineering that focuses on the conception, design, manufacturing, and operation of robots . It involves a multidisciplinary approach, drawing primarily from mechanical , electrical , software , and artificial intelligence (AI) engineering . [ 1 ] [ 2 ] Robotics engineers are tasked with designing these robots to function reliably and safely in real-world scenarios, which often require addressing complex mechanical movements, real-time control, and adaptive decision-making through software and AI. [ 1 ] Robotics engineering combines several technical disciplines, all of which contribute to the performance, autonomy, and robustness of a robot. Mechanical engineering is responsible for the physical construction and movement of robots. This involves designing the robot's structure, joints, and actuators , as well as analyzing its kinematics and dynamics. [ 3 ] Kinematic models are essential for controlling the movements of robots. Robotics engineers use forward kinematics to calculate the positions and orientations of a robot's end-effector , given specific joint angles, and inverse kinematics to determine the joint movements necessary for a desired end-effector position. These calculations allow for precise control over tasks such as object manipulation or locomotion. [ 4 ] Robotics engineers select actuators—such as electric motors , hydraulic systems , or pneumatic systems —based on the robot's intended function, power needs, and desired performance characteristics. [ 5 ] Materials used in the construction of robots are also carefully chosen for strength, flexibility, and weight, with lightweight alloys and composite materials being popular choices for mobile robots . [ 6 ] Robots depend on electrical systems for power, communication, and control. Powering a robot's motors, sensors , and processing units requires sophisticated electrical circuit design. 
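The forward and inverse kinematics described above can be made concrete with a minimal sketch for a hypothetical two-link planar arm. The link lengths and the choice of the elbow-down solution are illustrative assumptions, not taken from the cited sources:

```python
import math

def forward_kinematics(l1, l2, theta1, theta2):
    """End-effector position of a 2-link planar arm.

    theta1 is measured from the x-axis; theta2 is relative to link 1.
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(l1, l2, x, y):
    """One (elbow-down) solution for the joint angles reaching (x, y)."""
    d = (x**2 + y**2 - l1**2 - l2**2) / (2 * l1 * l2)
    theta2 = math.acos(d)  # elbow angle; requires the target to be reachable
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

A round trip through `inverse_kinematics` and then `forward_kinematics` recovers the original target position, which is the basic consistency check robotics engineers apply to such models.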
Robotics engineers ensure that power is distributed efficiently and safely across the system, often using batteries or external power sources in a way that minimizes energy waste. [ 7 ] [ 8 ] A robot's ability to interact with its environment depends on interpreting data from various sensors. Electrical engineers in robotics design systems to process signals from cameras, LiDAR , ultrasonic sensors , and force sensors, filtering out noise and converting raw data into usable information for the robot's control systems . [ 9 ] [ 10 ] Software engineering is a fundamental aspect of robotics, focusing on the development of the code and systems that control a robot's hardware, manage real-time decision-making, and ensure reliable operation in complex environments. Software in robotics encompasses both low-level control software and high-level applications that enable advanced functionalities. [ 11 ] Robotics engineers develop embedded systems that interface directly with a robot's hardware, managing actuators, sensors, and communication systems. These systems must operate in real-time to process sensor inputs and trigger appropriate actions, often with strict constraints on memory and processing power. [ 12 ] [ 13 ] Modern robots rely on modular and scalable software architectures . A popular framework in the field is the Robot Operating System ( ROS ), which facilitates communication between different subsystems and simplifies the development of robotic applications. Engineers use such frameworks to build flexible systems capable of handling tasks such as motion planning , perception, and autonomous decision-making. [ 14 ] Robots frequently operate in environments where real-time processing is critical. Robotics engineers design software that can respond to sensor data and control actuators within tight time constraints. This includes optimizing algorithms for low-latency and developing robust error-handling procedures to prevent system failure during operation. 
[ 15 ] AI engineering plays an increasingly critical role in enabling robots to perform complex, adaptive tasks. It focuses on integrating artificial intelligence techniques such as machine learning , computer vision , and natural language processing to enhance a robot's autonomy and intelligence. [ 16 ] Robots equipped with AI-powered perception systems can process and interpret visual and sensory data from their surroundings. Robotics engineers develop algorithms for object recognition , scene understanding, and real-time tracking , allowing robots to perceive their environment in ways similar to humans. These systems are often used for tasks such as autonomous navigation or grasping objects in unstructured environments. [ 17 ] [ 18 ] Machine learning techniques, particularly reinforcement learning and deep learning , allow robots to improve their performance over time. Robotics engineers design AI models that enable robots to learn from their experiences, optimizing control strategies and decision-making processes. This is particularly useful in environments where pre-programmed behavior is insufficient, such as in search and rescue missions or unpredictable industrial tasks. [ 19 ] [ 20 ] Control systems engineering ensures that robots move accurately and perform tasks in response to environmental stimuli. Robotics engineers design control algorithms that manage the interaction between sensors, actuators, and software. [ 21 ] [ 22 ] Most robots rely on closed-loop control systems , where sensors provide continuous feedback to adjust movements and behaviors. This is essential in applications like robotic surgery , where extreme precision is required, or in manufacturing , where consistent performance over repetitive tasks is critical. [ 22 ] [ 23 ] For more advanced applications, robotics engineers develop adaptive control systems that can modify their behavior in response to changing environments. 
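The closed-loop feedback idea above can be sketched with a minimal proportional controller acting on a hypothetical first-order plant; the plant model, gain, and time step are assumptions for illustration only:

```python
def simulate_p_control(setpoint, kp, steps, dt=0.01):
    """Proportional feedback driving a simple first-order plant.

    The plant integrates its commanded velocity: x' = u, where
    u = kp * (setpoint - x) is computed from sensor feedback each step.
    """
    x = 0.0
    for _ in range(steps):
        error = setpoint - x   # feedback from the position sensor
        u = kp * error         # control law
        x += u * dt            # plant response over one time step
    return x
```

With each iteration the error shrinks geometrically, so the state converges to the setpoint; adding integral and derivative terms (a full PID controller) addresses steady-state error and overshoot in less idealized plants.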
Nonlinear control techniques are employed when dealing with complex dynamics that are difficult to model using traditional methods, such as controlling the flight of drones or autonomous underwater vehicles . [ 24 ] [ 25 ] [ 26 ] Robotics engineers leverage a wide array of software tools and technologies to design, test, and refine robotic systems. Before physical prototypes are created, robotics engineers use advanced simulation software to model and predict the behavior of robotic systems in virtual environments. MATLAB and Simulink are standard tools for simulating both the kinematics (motion) and dynamics (forces) of robots. These platforms allow engineers to develop control algorithms, run system-level tests, and assess performance under various conditions without needing physical hardware. ROS (Robot Operating System) is another key framework, facilitating the simulation of robot behaviors in different environments. [ 27 ] For mechanical design, robotics engineers use Computer-Aided Design (CAD) software, such as SolidWorks , AutoCAD , and PTC Creo , to create detailed 3D models of robotic components. These models are essential for visualizing the physical structure of the robot and for ensuring that all mechanical parts fit together precisely. CAD models are often integrated with simulation tools to test mechanical functionality and detect design flaws early in the process. [ 28 ] Once the designs are verified through simulation, rapid prototyping technologies, including 3D printing and CNC machining , allow for the fast and cost-effective creation of physical prototypes. These methods enable engineers to iterate quickly, refining the design based on real-world testing and feedback, reducing the time to market. [ 29 ] [ 30 ] To ensure the robustness and durability of robotic components, engineers perform structural testing using finite element analysis (FEA) software like ANSYS and Abaqus . 
FEA helps predict how materials will respond to stress, heat, and other environmental factors, optimizing designs for strength, efficiency, and material usage. [ 31 ] To bridge the gap between simulation and physical testing, robotics engineers often use hardware-in-the-loop (HIL) systems. HIL testing integrates real hardware components into simulation models, allowing engineers to validate control algorithms and system responses in real-time without needing the full robotic system built, thus reducing risks and costs. [ 32 ] The complexity of robotics engineering presents ongoing challenges. Designing robots that can reliably operate in unpredictable environments is a key engineering challenge. Engineers must create systems that can detect and recover from hardware malfunctions, sensor failures, or software errors. This is important in mission-critical applications such as space exploration or medical robotics . [ 33 ] [ 34 ] Ensuring safety in human-robot interaction is a significant challenge in the field of robotics engineering. In addition to technical aspects, such as the development of sensitive control systems and force-limited actuators, engineers must address the ethical and legal implications of these interactions. AI algorithms are employed to enable robots to anticipate and respond to human behavior in collaborative environments; however, these systems are not without flaws. When errors occur—such as a robot misinterpreting human movement or failing to halt its actions in time—the issue of responsibility arises. [ 35 ] This question of accountability poses a substantial ethical dilemma. Should the responsibility for such errors fall upon the engineers who designed the robot, the manufacturers who produced it, or the organizations that deploy it? Furthermore, in cases where AI algorithms play a key role in the robot's decision-making process, there is the added complexity of determining whether the system itself could be partly accountable. 
This issue is particularly pertinent in industries such as healthcare and autonomous vehicles , where mistakes may result in severe consequences, including injury or death. [ 36 ] Current legal frameworks in many countries have not yet fully addressed the complexities of human-robot interaction. Laws concerning liability, negligence, and safety standards often struggle to keep pace with technological advancements. The creation of regulations that clearly define accountability, establish safety protocols, and safeguard human rights will be crucial as robots become increasingly integrated into daily life. [ 36 ] [ 37 ] [ 38 ] Robotics engineers must balance the need for high performance with energy efficiency. Motion-planning algorithms and energy-saving strategies are critical for mobile robots, especially in applications like autonomous drones or long-duration robotic missions where battery life is limited. [ 39 ] [ 40 ]
https://en.wikipedia.org/wiki/Robotics_engineering
A robotic android , also known simply as a robot android , robotic droid , robot droid , robotoid , robodroid or roboid , is an artificial lifeform that is created through processes that are different from cloning or synthetics. In short, it is the cybernetic equivalent of an android . Perhaps the first mention of "robotoid" was in the Lost in Space episode War of the Robots which originally aired on February 9, 1966 and credits Robby the Robot as a robotoid and William Bramley and Ollie O'Toole as uncredited "robotoid voice" actors. [ 1 ] In the episode, the Lost in Space robot says: "It is more than a machine...it is a robotoid." The robot goes on to explain that as a robot, it is constrained by its programming, whereas the robotoid has the capability of making a choice . [ 2 ] [ 3 ] [ better source needed ] The episode is described as: "The family's robot is seemingly replaced when Will repairs a robotoid from an advanced civilization - until the new machine wreaks havoc by trying to take over the ship." [ 4 ] Piers Anthony 's short story Getting Through University , which may have been published as early as 1967/1968 in the science fiction magazine Worlds of If , mentions a robotoid. [ 5 ] In April 1968, Marvel Comics released Avengers #51 which introduced the Robotoid. [ 6 ] On December 20, 1978, the Battle of the Planets TV series episode Rage of the Robotoids was released. [ 7 ]
https://en.wikipedia.org/wiki/Robotoid
ROBOTY ( Arabic : روبوتي ) is a differential wheeled robot with self-balancing, motion, speech and object recognition capabilities. ROBOTY is also the first autonomous robot in Yemen ; it is primarily controlled by voice commands . The final goal of this research project is to build a robot capable of playing chess . [ 1 ] ROBOTY was first introduced on October 21, 2010, by its inventor, Hamdi M. Sahloul, as his final year project. The seminar showed the components and capabilities of the robot. These capabilities included moving, speaking, hearing, facial recognition, and GPS navigation. [ 2 ] Various media and newspapers covered this event, including Yemen TV Channel, [ 3 ] Al-Motamar, [ 4 ] 26 Sep., [ 5 ] Almasdar Online, [ 6 ] Al-Sahwa, [ 7 ] 22 May, [ 8 ] Al-Moheet, [ 9 ] Al-Hadath , [ 10 ] Al-Tagheer, [ 11 ] Al-Bida Press, [ 12 ] Shabab Al-Yemen, [ 13 ] Yemen Sound [ 14 ] and Nashwan News. [ 15 ]
https://en.wikipedia.org/wiki/Roboty
A rodent-borne virus , abbreviated as robovirus , is a zoonotic virus that is transmitted by a rodent vector. [ 1 ] [ 2 ] Roboviruses mainly belong to the virus families Arenaviridae and Hantaviridae . [ 3 ] [ 4 ] Like arbovirus (arthropod-borne) and tibovirus (tick-borne), the name refers to the method of transmission, known as its vector . This is distinguished from a clade , which groups around a common ancestor. Some scientists now refer to arbovirus and robovirus together with the term ArboRobo-virus. [ 5 ] Rodent-borne disease can be transmitted through different forms of contact such as rodent bites, scratches, urine, saliva, etc. [ 6 ] Potential sites of contact with rodents include habitats such as barns, outbuildings, sheds, and dense urban areas. Disease can be spread from rodents to humans through direct handling and contact, or indirectly through ticks, mites, and fleas that have fed on infected rodents. [ citation needed ] One example of a robovirus is hantavirus , which causes hantavirus pulmonary syndrome . Humans can be infected with hantavirus pulmonary syndrome through direct contact with rodent droppings, saliva, or urine infected with strains of the virus. These components mix into the air and get transmitted when inhaled through airborne transmission. [ 7 ] Lassa virus from the Arenaviridae family causes Lassa hemorrhagic fever and is also a robovirus, transmitted by the rodent species Mastomys natalensis . [ 8 ] [ 9 ] The multimammate rat is able to excrete the virus in its urine and droppings. These rats are often found in the savannas and forests of Africa. When these rats scavenge and enter households, this provides an outlet for direct contact transmission with humans. It has also been found that airborne transmission can occur by engaging in cleaning activities such as sweeping. In some areas of Africa, the Mastomys rodent is caught and used as a source of food. 
This process can also lead to transmission and infection. [ 10 ] Colorado tick fever virus causes high fevers, chills, headache, fatigue and sometimes vomiting, skin rash, and abdominal pain. The virus is transmitted by the Rocky Mountain wood tick ( Dermacentor andersoni ). It is an arbovirus, but rodents serve as the reservoir. The tick is carried by five species of rodents: the least chipmunk ( Eutamias minimus ), Richardson's ground squirrel ( Urocitellus richardsonii ), deer mice ( Peromyscus maniculatus ), the golden-mantled ground squirrel ( Callospermophilus lateralis ), and the Uinta chipmunk ( Neotamias umbrinus ). [ 11 ] The infected tick will be carried by its rodent host and infect another host (animal or human) as it feeds. [ 12 ] Rodent populations are affected by a number of diverse factors, including climatic conditions. Warmer winters and increased rainfall make it more likely for rodent populations to survive, thereby increasing the number of rodent reservoirs for disease. Increased rainfall accompanied by flooding can also increase human-to-rodent contact. [ 13 ] Global climate change will affect the distribution and prevalence of roboviruses. Inadequate hygiene and sanitation, as seen in some European countries, also contribute to increased rodent populations and higher risks of rodent-borne disease transmission. [ 14 ]
https://en.wikipedia.org/wiki/Robovirus
Robust Header Compression ( ROHC ) is a standardized method to compress the IP , UDP , UDP-Lite , RTP , and TCP headers of Internet packets. In streaming applications, the overhead of IP, UDP, and RTP is 40 bytes for IPv4 , or 60 bytes for IPv6 . For VoIP , this corresponds to around 60% of the total amount of data sent. Such large overheads may be tolerable in local wired links where capacity is often not an issue, but are excessive for wide area networks and wireless systems where bandwidth is scarce. [ 1 ] ROHC compresses these 40 bytes or 60 bytes of overhead typically into only one or three bytes, by placing a compressor before the link that has limited capacity, and a decompressor after that link. The compressor converts the large overhead to only a few bytes, while the decompressor does the opposite. The ROHC compression scheme differs from other compression schemes, such as IETF RFC 1144 and RFC 2508 , in that it performs well over links with high packet loss rates, such as wireless links. The ROHC protocol takes advantage of redundancy between the header fields of the packets in a stream. Redundant information is transmitted in the first packets only. Subsequent packets contain only the variable information, e.g. identifiers or sequence numbers, and these fields are transmitted in a compressed form to save more bits. For better performance, the packets are classified into streams before being compressed. This classification takes advantage of inter-packet redundancy. The classification algorithm is not defined by the ROHC protocol itself but left to the equipment vendor's implementation. Once a stream of packets is classified, it is compressed according to the compression profile that fits best. A compression profile defines the way to compress the different fields in the network headers. 
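The quoted overhead figures can be checked with simple arithmetic. Assuming a typical voice payload of about 30 bytes per packet (an illustrative assumption; actual codec payload sizes vary), the 40-byte IPv4/UDP/RTP header is indeed close to 60% of each packet, while a typical 3-byte ROHC header is under 10%:

```python
def header_overhead(header_bytes, payload_bytes):
    """Fraction of each packet consumed by headers."""
    return header_bytes / (header_bytes + payload_bytes)

# Hypothetical 30-byte voice payload (e.g. one 20 ms codec frame).
payload = 30
uncompressed = header_overhead(40, payload)   # IPv4 + UDP + RTP headers
compressed = header_overhead(3, payload)      # typical ROHC header
```

The same arithmetic with the 60-byte IPv6 header makes the motivation for ROHC on narrow wireless links even clearer.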
Several compression profiles are available, covering the supported protocol combinations. According to RFC 3095, the ROHC scheme has three modes of operation: Unidirectional (U-mode), Bidirectional Optimistic (O-mode), and Bidirectional Reliable (R-mode). Both the compressor and the decompressor start in U-mode. They may then transition to O-mode if a usable return link is available, and the decompressor sends a positive acknowledgement, with O-mode specified, to the compressor. The transition to R-mode is achieved in the same way. In the Unidirectional mode of operation, packets are only sent in one direction: from compressor to decompressor. This mode therefore makes ROHC usable over links where a return path from decompressor to compressor is unavailable or undesirable. In order to handle potential decompression errors, the compressor sends periodic refreshes of the stream context to the decompressor. The Bidirectional Optimistic mode is similar to the Unidirectional mode, except that a feedback channel is used to send error recovery requests and (optionally) acknowledgments of significant context updates from the decompressor to the compressor. The O-mode aims to maximize compression efficiency while making sparse use of the feedback channel. The Bidirectional Reliable mode differs in many ways from the previous two modes. The most important differences are a more intensive usage of the feedback channel, and a stricter logic at both the compressor and the decompressor that prevents loss of context synchronization between compressor and decompressor, except under very high residual bit error rates. The notion of compressor/decompressor states is orthogonal to the operational modes. Whatever the mode is, both the compressor and the decompressor work in one of their three states. They are basically finite state machines. Every incoming packet may cause the compressor/decompressor to change its internal state. Every state refers to a defined behaviour and compression level. 
The ROHC algorithm is similar to video compression, in that a base frame and then several difference frames are sent to represent an IP packet flow. This has the advantage of allowing ROHC to survive many packet losses in its highest compression state, as long as the base frames are not lost. The compressor's state machine defines the following three states. In Initialization and Refresh (IR) state, the compressor has just been created or reset, and full packet headers are sent. In First-Order (FO) state, the compressor has detected and stored the static fields (such as IP addresses and port numbers) on both sides of the connection. The compressor is also sending dynamic packet field differences in FO state. Thus, FO state is essentially static and pseudo-dynamic compression. In Second-Order (SO) state, the compressor is suppressing all dynamic fields such as RTP sequence numbers, and sending only a logical sequence number and partial checksum to cause the other side to predictively generate and verify the headers of the next expected packet. In general, FO state compresses all static fields and most dynamic fields. SO state compresses all dynamic fields predictively using a sequence number and checksum. Transitions between the above states occur in response to events observed by the compressor, such as changes in the packet stream or feedback from the decompressor. A typical ROHC implementation will aim to get the terminal into Second-Order state, where a 1-byte ROHC header can be substituted for the 40-byte IPv4/UDP/RTP or the 60-byte IPv6/UDP/RTP (i.e. VoIP) header. In this state, the 8-bit ROHC header contains three fields. The decompressor's state machine likewise defines three states, with transitions driven by the success or failure of decompression. The size of the sequence number (SN) field governs the number of packets that ROHC can lose before the compressor must be reset to continue. The W-LSB algorithm is used to compress the SN in a robust way. 
The size of the sequence number in 1- and 2-byte ROHC packets is either 4 bits (−1/+14 frame offset) or 6 bits (−1/+62 frame offset), respectively, so ROHC can tolerate at most 62 lost frames with a 1-2 byte header. RFC 3095 defines a generic compression mechanism. It may be extended by defining new compression profiles dedicated to specific protocol headers, and new RFCs have been published to compress new protocols. Two further RFCs, RFC 4995 and RFC 5225, were published to address the confusion some have encountered when attempting to interpret and implement ROHC. The first document defines a ROHC framework, while the second defines newer versions of the established ROHC profiles.
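The W-LSB idea can be sketched as follows: only the k least-significant bits of the sequence number are transmitted, and the decompressor searches an interpretation interval around its last reference value for the unique value whose low bits match. This is a simplified sketch, assuming the fixed offset p = 1 that matches the −1/+14 and −1/+62 windows quoted above; the full RFC 3095 scheme also handles wraparound and other offset parameters:

```python
def wlsb_encode(value, k):
    """Transmit only the k least-significant bits of the sequence number."""
    return value & ((1 << k) - 1)

def wlsb_decode(lsbs, k, ref, p=1):
    """Recover the value from its k LSBs and the last reference value.

    Searches the interpretation interval [ref - p, ref + (2**k - 1) - p]
    for the unique value whose k LSBs match what was received.
    """
    mask = (1 << k) - 1
    for candidate in range(ref - p, ref + (1 << k) - p):
        if candidate & mask == lsbs:
            return candidate
    raise ValueError("context damaged: no candidate in window")
```

With k = 4 the window spans ref − 1 to ref + 14, matching the −1/+14 frame offset: decompression succeeds as long as no more than 14 consecutive packets are lost.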
https://en.wikipedia.org/wiki/Robust_Header_Compression
In mathematics, specifically in computational geometry , geometric nonrobustness is a problem wherein branching decisions in computational geometry algorithms are based on approximate numerical computations, leading to various forms of unreliability including ill-formed output and software failure through crashing or infinite loops. For instance, algorithms for problems like the construction of a convex hull rely on testing whether certain "numerical predicates" have values that are positive, negative, or zero. If an inexact floating-point computation causes a value that is near zero to have a different sign than its exact value, the resulting inconsistencies can propagate through the algorithm causing it to produce output that is far from the correct output, or even to crash. One method for avoiding this problem involves using integers rather than floating point numbers for all coordinates and other quantities represented by the algorithm, and determining the precision required for all calculations to avoid integer overflow conditions. For instance, two-dimensional convex hulls can be computed using predicates that test the sign of quadratic polynomials , and therefore may require twice as many bits of precision within these calculations as the input numbers. When integer arithmetic cannot be used (for instance, when the result of a calculation is an algebraic number rather than an integer or rational number), a second method is to use symbolic algebra to perform all computations with exactly represented algebraic numbers rather than numerical approximations to them. A third method, sometimes called a "floating point filter", is to compute numerical predicates first using an inexact method based on floating-point arithmetic , but to maintain bounds on how accurate the result is, and repeat the calculation using slower symbolic algebra methods or numerically with additional precision when these bounds do not separate the calculated value from zero. 
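The orientation predicate discussed above can be illustrated in Python, whose built-in integers are arbitrary-precision, so on integer coordinates the quadratic polynomial is evaluated exactly and the sign is always reliable; the same expression in fixed-width floating point can return the wrong sign for nearly-collinear inputs:

```python
def orient2d(a, b, c):
    """Sign of twice the signed area of triangle (a, b, c).

    > 0: counter-clockwise, < 0: clockwise, 0: collinear.
    With integer coordinates, Python's arbitrary-precision integers
    make this quadratic predicate exact.
    """
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
```

Because the predicate is a quadratic polynomial in the coordinates, evaluating it exactly needs roughly twice as many bits as the inputs, as noted above; languages with fixed-width integers must therefore widen the intermediate type or use a software big-integer library.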
https://en.wikipedia.org/wiki/Robust_geometric_computation
Robust random early detection ( RRED ) is a queueing discipline for a network scheduler . The existing random early detection (RED) algorithm and its variants have been found vulnerable to emerging attacks, especially low-rate denial-of-service (LDoS) attacks. Experiments have confirmed that the existing RED-like algorithms are notably vulnerable under LDoS attacks due to the oscillating TCP queue size caused by the attacks. [ 1 ] The Robust RED (RRED) algorithm was proposed to improve TCP throughput against LDoS attacks. The basic idea behind RRED is to detect and filter out attack packets from incoming flows before the normal RED algorithm is applied to them; the algorithm can significantly improve the performance of TCP under low-rate denial-of-service attacks. [ 1 ] A detection and filter block is added in front of a regular RED block on a router. How to distinguish an attacking packet from normal TCP packets is critical in the RRED design. Within a benign TCP flow, the sender will delay sending new packets if loss is detected (e.g., a packet is dropped). Consequently, a packet is suspected to be an attacking packet if it is sent within a short range after a packet is dropped. This is the basic idea of the detection algorithm of RRED. [ 1 ] The simulation code of the RRED algorithm is published as an active queue management and denial-of-service attack (AQM&DoS) simulation platform. The AQM&DoS Simulation Platform is able to simulate a variety of DoS attacks (Distributed DoS, Spoofing DoS, Low-rate DoS, etc.) and active queue management (AQM) algorithms ( RED , RRED, SFB, etc.). It automatically calculates and records the average throughput of normal TCP flows before and after DoS attacks to facilitate the analysis of the impact of DoS attacks on normal TCP flows and AQM algorithms.
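The detection idea above can be sketched as follows. This is a deliberately simplified sketch, not the published algorithm's exact bookkeeping: the flow identifier, the per-flow table, and the `t_short` threshold value are illustrative assumptions:

```python
class RREDFilter:
    """Suspect packets that arrive too soon after a drop in the same flow.

    A benign TCP sender backs off after detecting a loss, so a packet
    arriving within `t_short` seconds of its flow's last recorded drop
    is treated as a likely LDoS attack packet and filtered out before
    it reaches the RED block.  The threshold value is an illustrative
    assumption, not the published parameter.
    """
    def __init__(self, t_short=0.1):
        self.t_short = t_short
        self.last_drop = {}        # flow id -> time of last drop

    def record_drop(self, flow, now):
        self.last_drop[flow] = now

    def is_suspect(self, flow, now):
        last = self.last_drop.get(flow)
        return last is not None and (now - last) <= self.t_short
```

Packets flagged by `is_suspect` would be discarded by the filter block, while the remainder are passed on to the ordinary RED queue.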
https://en.wikipedia.org/wiki/Robust_random_early_detection
Robustification is a form of optimisation whereby a system is made less sensitive to the effects of random variability , or noise , that is present in that system's input variables and parameters . The process is typically associated with engineering systems , but it can also be applied to a political policy , a business strategy or any other system that is subject to the effects of random variability. Robustification as it is defined here is sometimes referred to as parameter design or robust parameter design (RPD) and is often associated with Taguchi methods . Within that context, robustification can include the process of finding the inputs that contribute most to the random variability in the output and controlling them, or tolerance design. At times the terms design for quality or Design for Six Sigma (DFSS) might also be used as synonyms. Robustification works by taking advantage of two different principles. Consider the relationship between an input variable x and the output Y of a system of interest, where it is desired that Y take the value 7. Suppose that there are two possible values that x can take, 5 and 30. If the tolerance for x is independent of the nominal value, and the gradient of Y with respect to x at x = 30 is less than the gradient at x = 5, then setting x equal to 30 gives a smaller expected variation of Y : the random variability in x is suppressed as it flows to Y . This basic principle underlies all robustification, but in practice there are typically a number of inputs, and it is the suitable point with the lowest gradient on a multi-dimensional surface that must be found. Consider a case where an output Z is a function of two inputs x and y that are multiplied by each other, Z = x · y . For any target value of Z there is an infinite number of combinations for the nominal values of x and y that will be suitable. 
However, if the standard deviation of x were proportional to the nominal value and the standard deviation of y were constant, then x would be reduced (to limit the random variability that will flow from the right hand side of the equation to the left hand side) and y would be increased (with no expected increase in random variability, because the standard deviation is constant) to bring the value of Z to the target value. By doing this, Z would have the desired nominal value and its standard deviation would be expected to be at a minimum: robustified. By taking advantage of the two principles covered above, one is able to optimise a system so that the nominal value of a system's output is kept at its desired level while also minimising the likelihood of any deviation from that nominal value. This is despite the presence of random variability within the input variables. There are three distinct methods of robustification, but a combination that provides the best balance of results, resources, and time can be used. The experimental approach is probably the most widely known. It involves the identification of those variables that can be adjusted and those variables that are treated as noises . An experiment is then designed to investigate how changes to the nominal value of the adjustable variables can limit the transfer of noise from the noise variables to the output. This approach is attributed to Taguchi and is often associated with Taguchi methods . While many have found the approach to provide impressive results, the techniques have also been criticised for being statistically erroneous and inefficient. Also, the time and effort required can be significant. Another experimental method that has been used for robustification is the Operating Window. It was developed in the United States before the wave of quality methods from Japan came to the West , but still remains unknown to many. 
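The Z = x·y argument can be made concrete with first-order propagation of error, σ_Z² ≈ (y·σ_x)² + (x·σ_y)². The target value Z = 100 and the noise model (σ_x proportional to x, σ_y constant) are illustrative assumptions in this sketch:

```python
def sigma_z(x, y, sigma_x_rel=0.1, sigma_y=1.0):
    """First-order error propagation for Z = x * y.

    sigma_Z^2 ~= (y * sigma_x)^2 + (x * sigma_y)^2, under an assumed
    noise model: sigma_x proportional to x, sigma_y constant.
    """
    sigma_x = sigma_x_rel * x
    return ((y * sigma_x) ** 2 + (x * sigma_y) ** 2) ** 0.5

# Two nominal choices that hit the same target Z = 100:
noisy = sigma_z(20.0, 5.0)    # large x, small y
robust = sigma_z(5.0, 20.0)   # small x, large y -> less output noise
```

Because the term contributed by σ_x is proportional to the product x·y (which is fixed by the target), shrinking x only reduces the σ_y term, so the small-x choice is the robustified one.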
[ 1 ] In this approach, the noise of the inputs is continually increased as the system is modified to reduce sensitivity to that noise. This increases robustness, but also provides a clearer measure of the variability that is flowing through the system. After optimisation, the random variability of the inputs is controlled and reduced, and the system exhibits improved quality. The analytical approach relies initially on the development of an analytical model of the system of interest. The expected variability of the output is then found by using a method like the propagation of error or functions of random variables. [ 2 ] These typically produce an algebraic expression that can be analysed for optimisation and robustification. This approach is only as accurate as the model developed and it can be very difficult if not impossible for complex systems. The analytical approach might also be used in conjunction with some kind of surrogate model that is based on the results of experiments or numerical simulations of the system. [ citation needed ] In the numerical approach a model is run a number of times as part of a Monte Carlo simulation or a numerical propagation of errors to predict the variability of the outputs. Numerical optimisation methods such as hill climbing or evolutionary algorithms are then used to find the optimum nominal values for the inputs. This approach typically requires less human time and effort than the other two, but it can be very demanding on computational resources during simulation and optimization.
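The numerical approach can be sketched with a Monte Carlo estimate of the output spread for the same Z = x·y example; the Gaussian noise model and its parameters are the same illustrative assumptions as above, not prescriptions:

```python
import random
import statistics

def mc_std(x_nom, y_nom, n=20000, seed=1):
    """Monte Carlo estimate of the spread of Z = x * y under input noise.

    Assumed (illustrative) noise model: sigma_x = 0.1 * x_nom,
    sigma_y = 1.0, both Gaussian about the nominal values.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        x = rng.gauss(x_nom, 0.1 * x_nom)
        y = rng.gauss(y_nom, 1.0)
        samples.append(x * y)
    return statistics.stdev(samples)
```

An optimiser (hill climbing, an evolutionary algorithm, etc.) would call such a routine repeatedly, adjusting the nominal inputs to minimise the estimated output spread while holding the nominal output on target.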
https://en.wikipedia.org/wiki/Robustification
In evolutionary biology , robustness of a biological system (also called biological or genetic robustness [ 1 ] ) is the persistence of a certain characteristic or trait in a system under perturbations or conditions of uncertainty. [ 2 ] [ 3 ] Robustness in development is known as canalization . [ 4 ] [ 5 ] According to the kind of perturbation involved, robustness can be classified as mutational , environmental , recombinational , or behavioral robustness, etc. [ 6 ] [ 7 ] [ 8 ] Robustness is achieved through the combination of many genetic and molecular mechanisms and can evolve by either direct or indirect selection . Several model systems have been developed to experimentally study robustness and its evolutionary consequences. Mutational robustness (also called mutation tolerance) describes the extent to which an organism's phenotype remains constant in spite of mutation . [ 9 ] Robustness can be empirically measured for several genomes [ 10 ] [ 11 ] and individual genes [ 12 ] by inducing mutations and measuring what proportion of mutants retain the same phenotype , function or fitness . More generally, robustness corresponds to the neutral band in the distribution of fitness effects of mutation (i.e. the frequencies of different fitnesses of mutants). Proteins so far investigated have shown a tolerance to mutations of roughly 66% (i.e. two thirds of mutations are neutral). [ 13 ] Conversely, the measured mutational robustness of organisms varies widely. For example, >95% of point mutations in C. elegans have no detectable effect [ 14 ] and even 90% of single gene knockouts in E. coli are non-lethal. [ 15 ] Viruses, however, only tolerate 20–40% of mutations and hence are much more sensitive to mutation. [ 10 ] Biological processes at the molecular scale are inherently stochastic . [ 16 ] They emerge from a combination of stochastic events that happen given the physico-chemical properties of molecules. For instance, gene expression is intrinsically noisy.
This means that two cells in exactly identical regulatory states will exhibit different mRNA contents. [ 17 ] [ 18 ] The cell population level log-normal distribution of mRNA content [ 19 ] follows directly from the application of the Central Limit Theorem to the multi-step nature of gene expression regulation . [ 20 ] In varying environments , perfect adaptation to one condition may come at the expense of adaptation to another. Consequently, the total selection pressure on an organism is the average selection across all environments, weighted by the percentage of time spent in each environment. Variable environments can therefore select for environmental robustness, where organisms can function across a wide range of conditions with little change in phenotype or fitness . Some organisms show adaptations to tolerate large changes in temperature, water availability, salinity or food availability. Plants, in particular, are unable to move when the environment changes and so show a range of mechanisms for achieving environmental robustness. Similarly, this can be seen in proteins as tolerance to a wide range of solvents , ion concentrations or temperatures . Genomes mutate by environmental damage and imperfect replication, yet they display remarkable tolerance. This tolerance comes from robustness at many different levels. There are many mechanisms that provide genome robustness. For example, genetic redundancy reduces the effect of mutations in any one copy of a multi-copy gene. [ 21 ] Additionally, the flux through a metabolic pathway is typically limited by only a few of the steps, meaning that changes in function of many of the enzymes have little effect on fitness. [ 22 ] [ 23 ] Similarly, metabolic networks have multiple alternate pathways to produce many key metabolites . [ 24 ] Protein mutation tolerance is the product of two main features: the structure of the genetic code and protein structural robustness.
[ 25 ] [ 26 ] Proteins are resistant to mutations because many sequences can fold into highly similar structural folds . [ 27 ] A protein adopts a limited ensemble of native conformations because those conformers have lower energy than unfolded and mis-folded states (ΔΔG of folding). [ 28 ] [ 29 ] This is achieved by a distributed, internal network of cooperative interactions ( hydrophobic , polar and covalent ). [ 30 ] Protein structural robustness results from few single mutations being sufficiently disruptive to compromise function. Proteins have also evolved to avoid aggregation , [ 31 ] as partially folded proteins can combine to form large, repeating, insoluble protein fibrils and masses. [ 32 ] There is evidence that proteins show negative design features to reduce the exposure of aggregation-prone beta-sheet motifs in their structures. [ 33 ] Additionally, there is some evidence that the genetic code itself may be optimised such that most point mutations lead to similar amino acids ( conservative ). [ 34 ] [ 35 ] Together these factors create a distribution of fitness effects of mutations that contains a high proportion of neutral and nearly-neutral mutations. [ 12 ] During embryonic development , gene expression must be tightly controlled in time and space in order to give rise to fully functional organs. Developing organisms must therefore deal with the random perturbations resulting from gene expression stochasticity. [ 36 ] In bilaterians , robustness of gene expression can be achieved via enhancer redundancy. This happens when the expression of a gene is under the control of several enhancers encoding the same regulatory logic (i.e. displaying binding sites for the same set of transcription factors ). In Drosophila melanogaster such redundant enhancers are often called shadow enhancers .
[ 37 ] Furthermore, in developmental contexts where timing of gene expression is important for the phenotypic outcome, diverse mechanisms exist to ensure proper gene expression in a timely manner. [ 36 ] Poised promoters are transcriptionally inactive promoters that display RNA polymerase II binding, ready for rapid induction. [ 38 ] In addition, because not all transcription factors can bind their target site in compacted heterochromatin , pioneer transcription factors (such as Zld or FoxA ) are required to open chromatin and allow the binding of other transcription factors that can rapidly induce gene expression. Open inactive enhancers are called poised enhancers . [ 39 ] Cell competition is a phenomenon first described in Drosophila [ 40 ] where mosaic Minute mutant cells (affecting ribosomal proteins ) in a wild-type background would be eliminated. This phenomenon also happens in the early mouse embryo, where cells expressing high levels of Myc actively kill their neighbors displaying low levels of Myc expression. This results in homogeneously high levels of Myc . [ 41 ] [ 42 ] Patterning mechanisms such as those described by the French flag model can be perturbed at many levels (production and stochasticity of the diffusion of the morphogen, production of the receptor, stochasticity of the signaling cascade , etc.). Patterning is therefore inherently noisy. Robustness against this noise and against genetic perturbation is therefore necessary to ensure that cells accurately measure positional information. Studies of the zebrafish neural tube and antero-posterior patterning have shown that noisy signaling leads to imperfect cell differentiation that is later corrected by transdifferentiation, migration or cell death of the misplaced cells. [ 43 ] [ 44 ] [ 45 ] Additionally, the structure (or topology) of signaling pathways has been demonstrated to play an important role in robustness to genetic perturbations.
[ 46 ] Self-enhanced degradation has long been an example of robustness in systems biology . [ 47 ] Similarly, robustness of dorsoventral patterning in many species emerges from the balanced shuttling-degradation mechanisms involved in BMP signaling . [ 48 ] [ 49 ] [ 50 ] Since organisms are constantly exposed to genetic and non-genetic perturbations, robustness is important to ensure the stability of phenotypes . Also, under mutation-selection balance, mutational robustness can allow cryptic genetic variation to accumulate in a population. While phenotypically neutral in a stable environment, these genetic differences can be revealed as trait differences in an environment-dependent manner (see evolutionary capacitance ), thereby allowing for the expression of a greater number of heritable phenotypes in populations exposed to a variable environment. [ 51 ] Being robust may even be favoured at the expense of total fitness as an evolutionarily stable strategy (also called survival of the flattest). [ 52 ] A high but narrow peak of a fitness landscape confers high fitness but low robustness, as most mutations lead to massive loss of fitness. High mutation rates may favour populations on lower, but broader, fitness peaks. More critical biological systems may also have greater selection for robustness, as reductions in function are more damaging to fitness . [ 53 ] Mutational robustness is thought to be one driver for theoretical viral quasispecies formation. Natural selection can select directly or indirectly for robustness. When mutation rates are high and population sizes are large, populations are predicted to move to more densely connected regions of the neutral network , as less robust variants have fewer surviving mutant descendants.
[ 54 ] The conditions under which selection could act to directly increase mutational robustness in this way are restrictive, and therefore such selection is thought to be limited to only a few viruses [ 55 ] and microbes [ 56 ] having large population sizes and high mutation rates. Such emergent robustness has been observed in experimental evolution of cytochrome P450s [ 57 ] and β-lactamase . [ 58 ] Conversely, mutational robustness may evolve as a byproduct of natural selection for robustness to environmental perturbations. [ 59 ] [ 60 ] [ 61 ] [ 62 ] [ 63 ] Mutational robustness has been thought to have a negative impact on evolvability because it reduces the mutational accessibility of distinct heritable phenotypes for a single genotype and reduces selective differences within a genetically diverse population. [ citation needed ] Counter-intuitively however, it has been hypothesized that phenotypic robustness towards mutations may actually increase the pace of heritable phenotypic adaptation when viewed over longer periods of time. [ 64 ] [ 65 ] [ 66 ] [ 67 ] One hypothesis for how robustness promotes evolvability in asexual populations is that connected networks of fitness-neutral genotypes result in mutational robustness which, while reducing the accessibility of new heritable phenotypes over short timescales, over longer time periods allows neutral mutation and genetic drift to spread the population out over a larger neutral network in genotype space. [ 68 ] This genetic diversity gives the population mutational access to a greater number of distinct heritable phenotypes that can be reached from different points of the neutral network. [ 64 ] [ 65 ] [ 67 ] [ 69 ] [ 70 ] [ 71 ] [ 72 ] However, this mechanism may be limited to phenotypes dependent on a single genetic locus; for polygenic traits, genetic diversity in asexual populations does not significantly increase evolvability.
[ 73 ] In the case of proteins, robustness promotes evolvability in the form of an excess free energy of folding . [ 74 ] Since most mutations reduce stability, an excess folding free energy allows toleration of mutations that are beneficial to activity but would otherwise destabilise the protein. In sexual populations, robustness leads to the accumulation of cryptic genetic variation with high evolutionary potential. [ 75 ] [ 76 ] Evolvability may be high when robustness is reversible, with evolutionary capacitance allowing a switch between high robustness in most circumstances and low robustness at times of stress. [ 77 ] There are many systems that have been used to study robustness. In silico models have been used to model promoters , [ 78 ] [ 79 ] RNA secondary structure , protein lattice models , and gene networks . Experimental systems for individual genes include the enzyme activities of cytochrome P450 , [ 57 ] β-lactamase , [ 58 ] RNA polymerase , [ 13 ] and LacI . [ 13 ] Whole organism robustness has been investigated in RNA virus fitness, [ 10 ] bacterial chemotaxis , Drosophila fitness, [ 15 ] the segment polarity network, the neurogenic network, the bone morphogenetic protein gradient, C. elegans fitness [ 14 ] and vulval development, and the mammalian circadian clock . [ 9 ]
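The empirical definition given earlier, mutational robustness as the proportion of single mutants that retain the wild-type phenotype, is easy to state computationally. The following is a minimal, hypothetical sketch using an invented toy genotype–phenotype map (a majority-rule "phenotype" over a binary sequence), not any of the published model systems:

```python
def phenotype(genotype):
    """Toy genotype-phenotype map: phenotype is 1 if a majority of
    sites are 1, else 0 (a stand-in for e.g. 'folds' vs 'misfolds')."""
    return 1 if sum(genotype) * 2 > len(genotype) else 0

def mutational_robustness(genotype):
    """Fraction of single-point mutants with the same phenotype as
    the original genotype (the 'neutral band' of mutations)."""
    wild_type = phenotype(genotype)
    neutral = 0
    for i in range(len(genotype)):
        mutant = list(genotype)
        mutant[i] = 1 - mutant[i]  # flip one site
        if phenotype(mutant) == wild_type:
            neutral += 1
    return neutral / len(genotype)

# A genotype far from the phenotype boundary is robust: no single
# flip changes the majority, so all 7 mutants are neutral.
print(mutational_robustness([1, 1, 1, 1, 1, 1, 1]))  # 1.0
# A genotype near the boundary is fragile: flipping any of the
# four 1s changes the phenotype, so only 3 of 7 mutants are neutral.
print(mutational_robustness([1, 1, 1, 1, 0, 0, 0]))  # 3/7 ≈ 0.43
```

The same counting logic underlies the empirical measurements cited above, with the toy map replaced by an assay of phenotype, function or fitness.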
https://en.wikipedia.org/wiki/Robustness_(evolution)
In celestial mechanics , the Roche limit , also called Roche radius , is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity , will disintegrate because the first body's tidal forces exceed the second body's self-gravitation . [ 1 ] Inside the Roche limit, orbiting material disperses and forms rings , whereas outside the limit, material tends to coalesce . The Roche radius depends on the radius of the second body and on the ratio of the bodies' densities. The term is named after Édouard Roche ( French: [ʁɔʃ] , English: / r ɒ ʃ / ROSH ), the French astronomer who first calculated this theoretical limit in 1848. [ 2 ] The Roche limit typically applies to a satellite 's disintegrating due to tidal forces induced by its primary , the body around which it orbits . Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial , can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet , could be broken up when it passes within its Roche limit. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring . 
These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.) The gravitational effect occurring below the Roche limit is not the only factor that causes comets to break apart. Splitting by thermal stress , internal gas pressure , and rotational splitting are other ways for a comet to split under stress. The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. For example, a rubble-pile asteroid will behave more like a fluid than a solid rocky one; an icy body will behave quite rigidly at first but become more fluid as tidal heating accumulates and its ices begin to melt. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory. The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium . 
These assumptions, although unrealistic, greatly simplify calculations. The Roche limit for a rigid spherical satellite is the distance, d , from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object: [ 3 ] [ 4 ]

d = R_M (2 ρ_M / ρ_m)^(1/3)

where R_M is the radius of the primary, ρ_M is the density of the primary, and ρ_m is the density of the satellite. This can be equivalently written as

d = R_m (2 M_M / M_m)^(1/3)

where R_m is the radius of the secondary, M_M is the mass of the primary, and M_m is the mass of the secondary. A third equivalent form, which uses only one property for each of the two bodies (the mass of the primary and the density of the secondary), is

d = (3 M_M / (2π ρ_m))^(1/3)

These all represent the orbital distance inside of which loose material (e.g. regolith ) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also move away from, rather than toward, the satellite. A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid . The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit:

d ≈ 2.44 R (ρ_M / ρ_m)^(1/3)

However, a better approximation that takes into account the primary's oblateness and the satellite's mass is:

d ≈ 2.423 R (ρ_M / ρ_m)^(1/3) [ (1 + m/(3M) + (c/(3R))(1 + m/M)) / (1 − c/R) ]^(1/3)

where c / R is the oblateness of the primary, R is the primary's radius, and m and M are the masses of the satellite and the primary, respectively. The fluid solution is appropriate for bodies that are only loosely held together, such as a comet.
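As a worked example, the rigid-body formula and Roche's approximate fluid solution can be evaluated for an Earth–Moon pair; the density and radius values below are approximate round figures, and the satellite is treated as held together by gravity alone:

```python
# Approximate physical values (SI units); figures are illustrative.
R_EARTH = 6.371e6    # mean radius of the primary (Earth), m
RHO_EARTH = 5513.0   # mean density of the primary, kg/m^3
RHO_MOON = 3346.0    # mean density of the satellite (Moon), kg/m^3

def roche_rigid(r_primary, rho_primary, rho_satellite):
    """Rigid-body Roche limit: d = R_M * (2 * rho_M / rho_m)**(1/3)."""
    return r_primary * (2.0 * rho_primary / rho_satellite) ** (1.0 / 3.0)

def roche_fluid(r_primary, rho_primary, rho_satellite):
    """Roche's approximate fluid solution: d ~ 2.44 * R * (rho_M / rho_m)**(1/3)."""
    return 2.44 * r_primary * (rho_primary / rho_satellite) ** (1.0 / 3.0)

d_rigid = roche_rigid(R_EARTH, RHO_EARTH, RHO_MOON)
d_fluid = roche_fluid(R_EARTH, RHO_EARTH, RHO_MOON)
print(f"rigid-body Roche limit: {d_rigid / 1e3:.0f} km")  # roughly 9,500 km
print(f"fluid Roche limit:      {d_fluid / 1e3:.0f} km")  # roughly 18,400 km
```

Since the Moon orbits at roughly 384,000 km, it lies comfortably outside either limit; the fluid limit is larger because a deformable satellite elongates, which further amplifies the tidal stress.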
For instance, comet Shoemaker–Levy 9 's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior. [ 5 ]
https://en.wikipedia.org/wiki/Roche_limit
This is a compilation of the properties of different analog materials used to simulate deformational processes in structural geology. Such experiments are often called analog or analogue models . The organization of this page follows the review of rock analog materials in structural geology and tectonics of Reber et al. 2020. [ 1 ] These materials need to exhibit brittle deformation upon failure as well as elastic and viscous deformation before failure. Various fluids are used to simulate deformation of the lower crust and mantle, such as linear, non-linear, and yield stress fluids. In combination with brittle model materials, silicone oils/polymers can be used to investigate many processes in salt tectonics, including the deformation of sediments adjacent to and above a salt body. *Honey can also be used as a non-linear viscous fluid under certain conditions. At this time, pure petrolatum has not been used as an analog material. Composite materials combine phases with different physical properties. A common composite mixture contains dry granular materials and fluids. These analog materials have been used: The most commonly used granular materials in composite mixtures are: Common fluids used in composite mixtures are: Visco-elasto-plastic deformation exhibits a combination of elastic, viscous, and plastic deformation at the same time. Various asphalts and bituminous materials demonstrate visco-elasto-plastic deformation, but they are rarely used as modeling materials (McBirney and Best, 1961 [ 103 ] ). Common modeling materials demonstrating complex rheology are:
https://en.wikipedia.org/wiki/Rock_analogs_for_structural_geology
A rock hyrax midden is a stratified accumulation of fecal pellets and a brown, amber-like urinary product known as hyraceum excreted by the rock hyrax and closely related species. [ 1 ] Hyrax middens form very slowly (ranging from ~5 years to >1000 years for 1 mm of hyraceum accumulation [ 2 ] [ 3 ] ), over long periods of time, with many spanning tens of thousands of years [ 4 ] and some dating as far back as ~70,000 years. [ 5 ] Hyrax middens contain a diverse range of paleoenvironmental proxies, including fossil pollen and stable carbon, nitrogen and hydrogen isotopes. [ 5 ] [ 4 ] [ 6 ] [ 7 ] [ 8 ] Combined with the antiquity of hyrax middens, and the often-continuous nature of their deposition, this has made hyrax middens a valuable means of reconstructing past environmental and climate change. [ 1 ] Rock hyraxes are known to use communal latrines. [ 9 ] [ 10 ] These sites are often found in sheltered locations, where the threat of predation is limited, and middens form when they are protected from the elements. At well-protected sites, material may accumulate in deposits in excess of a meter thick and several meters across. [ 2 ] [ 5 ] The thickness of hyrax middens depends on the nature of the shelter and the regional climate history and geology. Hyraceum is hygroscopic, and periods of increased precipitation or elevated ambient humidity will destroy existing middens, while more arid periods allow their development/preservation. [ 1 ] Thicker formations tend to occur in shallow shelters that, during more arid periods, presumably provided sufficient shelter from rainfall for substantial midden accumulations, but under wetter conditions no longer provide adequate protection, resulting in the removal of the more soluble components of the midden. At poorly protected sites in arid regions, hyrax urine leaves a white calcium carbonate [ 11 ] precipitate on the rocks. Varying degrees of protection result in varying degrees of midden preservation.
Small overhangs, vertical fractures in cap rocks, and groundwater flow along weaknesses in the shelter's architecture may lead to midden degradation if rainfall exceeds a certain amount and/or intensity. The thickest middens have been found at sites composed of massive, horizontally bedded rock such as granite and quartzites with between ~30 and 480 mm of annual rainfall. [ 1 ] In more humid environments (>800 mm mean annual rainfall), there is little to no evidence of hyraceum accumulation, and middens typically resemble piles of compost, as the masticated plant material in the pellets rapidly decomposes. Hyraceum-rich middens do not typically form in coastal situations, despite the presence of hyraxes, and it is considered that the ambient humidity of the air and the occurrence of coastal fogs preclude midden development. [ 1 ] Studies of other herbivore midden remains have been very effective in palaeoenvironmental studies in dryland regions on several continents. In the southwestern United States, pack rat middens have provided an unprecedented record of environmental changes over the last 40,000 years. [ 12 ] [ 13 ] [ 14 ] As a result of this work, the vegetation dynamics of this area are some of the best understood for any of the world's drylands at this timescale, and the critical data provided have dramatically helped define the range of regional climate variability. This work has also led to important perspectives on ecological theory, [ 15 ] which have influenced management strategies [ 16 ] [ 17 ] by allowing a distinction to be made between anthropogenic environmental impacts and natural processes. Midden studies have also been undertaken in Australia [ 18 ] [ 19 ] [ 20 ] [ 21 ] and South America. [ 22 ] [ 23 ] [ 24 ] This work has highlighted a fundamental difference between middens from these regions and hyrax middens. American and Australian middens are essentially nests composed of sticks and other macrobotanical remains.
These middens are generally reported to have no clear stratigraphy, and researchers have thus adopted the methodology of processing them as single samples that provide a palaeoenvironmental snapshot. [ 25 ] Hyrax middens, on the other hand, are primarily urino-fecal deposits, and are deposited progressively as a series of layers. [ 1 ] This diachroneity is one of the fundamental advantages of hyrax middens over nest middens, which are only secondarily preserved as the animals urinate in their shelters. Examinations of the internal and external structure of hyrax middens suggest flow/deposition dynamics similar to speleothems (cave deposits, e.g. stalactites), with the fresh urine flowing across the surface of the midden, then drying and crystallising, preserving the stratigraphic integrity of the midden. The general morphology of middens is often characterised by (1) lobate forms, (2) undulating weathering features on exposed midden faces, and (3) in some cases the formation of thin (1–3 mm in diameter) stalactites on the underside of some middens. As a result, questions over the potential for post-depositional remobilisation of hyraceum may be raised. The examination of over 150 middens, however, has confirmed the visible stratigraphic integrity of the middens, and while some surficial alteration of exposed surfaces can occur, consistently coherent age-depth models and the nearly vertical exposed external faces of the middens indicate that, once dry, hyraceum is not prone to significant remobilization. [ 1 ] Hyrax midden structures and accumulation rates can vary considerably based on the relative proportion of their two primary components, pellets and hyraceum, which is determined by the architecture of the site itself. Depending on the shape and irregularities of the floor of the site in question, pellets are likely to either accumulate (in concave structures) or roll away (in convex or inclined structures).
Whereas hyrax urine deposits only a very thin film of hyraceum after evaporation, pellets are usually 0.5–1 cm in diameter and thereby accumulate much more quickly, [ 7 ] with deep piles accumulating perhaps within just a few years, or even months. Middens composed primarily of hyraceum, by comparison, have been observed to accumulate much more slowly, generally between ~5 and >1000 years/mm. [ 3 ] [ 2 ] The rate of hyraceum accumulation depends on the morphology of the midden, the architecture of the site, as well as presumably the size of the hyrax colony, and as such net rates can be highly variable. [ 5 ] [ 4 ] Radiocarbon ages from hyraceum are not subject to reservoir effects or the inclusion of new carbon. [ 26 ] [ 27 ] This is primarily a function of middens being isolated systems, and of the fact that through respiration the hyraceum is brought into equilibrium with atmospheric 14 C at the time of deposition. Published data show that hyrax middens can be of considerable antiquity, and middens from the Groenfontein site in the Cederberg Mountains of South Africa are considered to have begun accumulating ~70,000 years ago. [ 5 ] It has been commonly observed that many middens are no longer actively accumulating. Often this is controlled by the shelters in which they are found, with accumulation ceasing when the middens grow to such an extent that the hyraxes can no longer physically enter the shelters. Until recently, field sampling was limited to the collection of middens that were most accessible and easiest to sample. In many cases this meant that the individual sampled middens were relatively thin (<5 cm), with aggregate records subsequently constructed from fragments of as many as 25 separate middens. [ 28 ] [ 29 ]
With recent developments in sampling tools and techniques, larger, more stratigraphically coherent middens are more regularly sampled, which better represent the full period of accumulation at a given site. [ 7 ] [ 6 ] [ 5 ] The very nature of hyrax middens implies that they comprise a mixture of materials, which include animal metabolic products, undigested food, and any allochthonous material blown into the middens or deposited via feet or fur. [ 1 ] In terms of organic matter, the existence of such potentially distinct sources (i.e. extraneous organic matter and animal metabolites) implies that a range of information concerning inter alia animal diet, animal behaviour, metabolic responses to environmental stress, changing behaviour, as well as the wider palaeoecological setting of the site may all be preserved within hyraceum. Hyraceum essentially comprises a mix of organic compounds, soluble salts, calcium carbonate and the mineral sylvite. [ 11 ] [ 30 ] More recent data from Raman Spectroscopy and Fourier Transform Infrared (FTIR) Spectroscopy demonstrate the presence of a number of CaCO3 polymorphs, the abundance of sylvite (KCl) and an organic component. [ 31 ] The organic components within hyraceum have been investigated using pyrolysis-GC/MS (py-GC/MS) and GC/MS analysis of solvent-extractable lipids. [ 32 ] Py-GC/MS is commonly applied to elucidate macromolecular organic matter structure and composition. Py-GC/MS measurements on samples from two distant sites, Spitzkoppe, Namibia and Truitjes Kraal, Western Cape Province, South Africa, produced remarkably similar suites of pyrolysis products, [ 1 ] despite their contrasting environmental settings. The pyrolysis products were dominated by aromatic compounds, notably the nitrogenous compounds benzonitrile and benzamide.
Pyrolysis in the presence of the methylating agent tetramethylammonium hydroxide (TMAH) implied that benzamide is a monomer of a larger polymeric structure, the major organic component of the hyraceum organic matter. [ 32 ] This is further supported by the ubiquity of benzamide within solvent extracts, and it is probable that it is derived from hippuric or benzoic acid, which are common metabolites in ruminants. [ 33 ] Given its abundance, the metabolite (or metabolite product) benzamide is likely the major source of organic nitrogen and carbon measured in bulk stable isotope analyses, [ 7 ] [ 5 ] [ 6 ] [ 32 ] and can therefore provide insights into animal diet and its isotopic signature. Interestingly, common plant-derived pyrolysis products, such as lignin, were not detected using py-GC/MS, although low molecular weight polysaccharide pyrolysis products (e.g. acetyl furan, furaldehyde, dimethyl furan) were found in trace amounts. [ 1 ] That such plant-derived compounds might be identified with this technique following more detailed analytical pyrolysis protocols is implied by new FTIR analyses of the organic fraction, which support the basic pyrolysis-based interpretation of Spitzkoppe and Truitjes Kraal midden chemical compositions. The Spitzkoppe FTIR spectrum following carbonate removal contains a broad absorption band at ~3300 cm −1 as well as sharper absorptions from ~1600 to 1700, 1400, 1130, 770 and 690 cm −1 . Multiplets between 1560 and 1640 cm −1 have been reported as being due to N–H bending in primary amines, [ 34 ] [ 35 ] while a signal at ~1650 cm −1 is representative of C=O stretches of the amide band. The spectra thus bear a strong resemblance to benzamide. [ 35 ] [ 36 ] FTIR spectra from the Truitjes Kraal midden, which is rich in faecal material, show some resemblance to that of cellulose, with strong broad bands at 3400 cm −1 and 1050 cm −1 , and some weaker broad bands at 1730 and 1670 cm −1 .
Overall, the FTIR spectra and previous studies reveal a complex mixture of salts and organic compounds, with the latter incorporating aromatics, polysaccharides, amines, amides and other carbonyl-containing compounds. There are also clear similarities with the spectrum of benzamide, particularly at Spitzkoppe, which is consistent with the pyrolysis data. [ 1 ] Part of the extraordinary potential of hyrax middens as palaeoenvironmental archives is the large range of proxies that are contained within them. Initially, when their diachronic nature was less evident, they were viewed as the poor relation to the better studied pack rat middens. [ 37 ] [ 38 ] While pack rat middens are rich in identifiable macrofossils, which can be directly dated and provide high taxonomic resolution, hyrax middens are poor in macroremains. Those that are found are almost exclusively masticated material that has been incorporated into the deposits as faecal pellets. While some studies have analysed these midden components, [ 37 ] [ 38 ] more recent work suggests that this approach does not maximise the full potential of hyrax middens as palaeoenvironmental archives. [ 1 ] Hyrax middens contain a suite of proxies that have the potential to provide clear insights into past climate and vegetation change. Working within the context of the middens' stratigraphy, and building on robust chronologies indicating predictable and consistent accumulation rates, sampling methodologies are now more akin to those applied to speleothems than to packrat middens. Whereas the early focus was on small (<1 kg), accessible middens and in some cases in-situ sub-sampling, it is now standard practice to collect larger (10–70 kg) segments of the best-developed middens. [ 5 ] [ 6 ] [ 7 ] The segments are then split and polished in a controlled environment, and subsamples are taken for radiocarbon dating and proxy analysis.
That multiple proxies can be analysed from the same subsample allows for direct comparability, and much more reliable insights into the interrelationships between the systems being studied. This is valuable when comparing proxies that reflect vegetation change (e.g. fossil pollen ) and those that are primarily influenced by climate (e.g. δ 15 N ), as the relative roles of climatic forcing versus vegetation dynamics related to competitive processes within an ecosystem can be better resolved, resulting in a fuller and more reliable understanding of palaeoenvironmental dynamics. [ 3 ] [ 7 ] Hyrax middens contain well-preserved microscopic plant material, including pollen, which is sealed in middens by hyraceum, protecting it from microbial activity and decay. The earliest study of fossil pollen from a hyrax midden was undertaken in the late 1950s by Pons and Quézel [ 39 ] in the Hoggar Massif of Algeria, whereas the first palynological analyses of southern African middens were undertaken during the late 1980s and early 1990s, [ 40 ] [ 41 ] [ 42 ] [ 43 ] and demonstrated that hyrax middens are very useful as pollen and microfossil traps. [ 40 ] [ 42 ] Subsequently, hyrax middens have become an important archive for fossil pollen analysis in South Africa [ 44 ] [ 28 ] [ 45 ] [ 46 ] [ 47 ] and Namibia. [ 48 ] [ 49 ] [ 3 ] Studies of fossil pollen in hyrax middens have also been undertaken in Jordan, [ 38 ] Ethiopia, [ 50 ] Yemen [ 51 ] and Oman. [ 52 ] Middens are excellent traps for pollen derived from the local and regional surroundings, either via the alimentary channel of the animals (excreted in pellets) or via deposition on the middens. The airborne pollen rain is incorporated by (1) collecting on the surface of the midden, (2) being brought in on the fur of the hyraxes, or (3) being ingested as dust on dietary items such as plant leaves or drinking water.
[ 40 ] [ 42 ] [ 1 ] The dietary component may also represent the ingestion of flowers, which may result in the occasional over-representation of pollen of certain plant species in the pellet fraction of certain middens. A clear benefit of midden pollen spectra over wetland pollen spectra is that they may more clearly reflect terrestrial vegetation, without the high proportions of hydrophilic elements found in wetland sequences, which is particularly problematic in some dryland pollen records. [ 53 ] Furthermore, as the pollen found in hyraceum is not exclusively wind-transported, usually under-represented entomophilous plants are more clearly represented. Preservation of pollen sealed in hyraceum is usually very good, but the degradation of pollen grains has been occasionally observed in loose pellets or middens semi-exposed to the elements, such as in dolerite shelters in the central grassland region of South Africa, where some Asteraceae pollen have apparently lost their ektexine (L. Scott, unpublished observation). Compared to other available palaeoarchives in the region, such as fluvial sediments or paleosols, and to more widely used pollen records from peat bogs and lakes, middens contain high fossil pollen concentrations; usually between 1 and 2 × 10 5 pollen grains per gram of sample. Pollen concentrations are high even in poorly productive ecosystems such as the Namib Desert margins. [ 54 ] Concentrations increase markedly when analysing pollen contents from pellets, reaching 5–30 × 10 5 pollen grains/gram of sample. There are some potential drawbacks for the palynological analysis of middens, however, as the diverse taphonomic vectors can complicate interpretations if they are not adequately considered and controlled for.
Pollen spectra from pellets – reflecting the animal's needs or preferences on a particular day – may contrast strongly with pollen spectra preserved in hyraceum, which is thought to be primarily brought to the midden via the fur of the hyraxes (where it is collected as the animal moves through the vegetation around its shelter) and the wind. The degree to which dietary biases affect pollen spectra in pellets is not fully understood. While Scott and Cooremans [ 55 ] have shown that at the biome scale, fresh pellets reflect vegetation of the region from which they were collected, including the seasonal variations within vegetation types, most published studies also indicate significant differences both between modern pellets, and between modern pellets and surface sediment samples from the same site. [ 56 ] [ 54 ] A number of factors might explain this, but it is assumed that as any given pellet represents what was eaten in the last day(s), there will be substantial inter-seasonal and inter-annual variation in the pollen preserved. In most fossil pollen archives, wind-pollinated plants may dominate the natural pollen rain. Pollen production, however, is likely to have a less significant influence on the pollen that hyraxes ingest, and considering the wide variety of plants that they may eat, it may be possible to control for the taxon over-representation resulting from a production bias while still attaining a reasonable representation of the local vegetation. [ 57 ] [ 55 ] [ 30 ] A study from the Lower Omo Basin of Ethiopia collected several dozen pellets from different areas around the study site, aggregated them into a single sample, and compared them to the local vegetation. [ 50 ] Structured studies to clarify the relative influence of regional (aeolian) and local (fur) signals in the pollen preserved in hyraceum remain to be completed.
At least in some cases, aeolian inputs appear to be negligible, as some middens that have accumulated in vertical cracks – precluding the incorporation of pellets and direct contact with the animals – have been found to be devoid of pollen. If aeolian pollen does represent a small percentage of the pollen preserved in hyraceum, then it might be inferred that hyraceum pollen assemblages reflect primarily local vegetation cover from within the animals' primary feeding range. [ 1 ] As hyrax middens have been developed as palaeoenvironmental archives, there has been increasing emphasis on the application of stable isotope analyses to midden sequences. Initially this focussed on the use of bulk δ 13 C data, with an emphasis on identifying changes in the relative abundance of C 3 /C 4 /CAM vegetation and associated palaeoecological/palaeoenvironmental inferences. [ 58 ] This is useful in climatic transition zones, such as the Western Cape Province of South Africa, where modern rainfall seasonality has a strong impact on C 3 /C 4 grass distributions. [ 59 ] [ 58 ] δ 13 C records can also be used in some ecoregions, such as the dry savannah at Spitzkoppe in Namibia, as an indicator of the reliability of grass cover. As hyraxes will preferentially graze (grasses are C 4 in the region), more depleted δ 13 C values from hyrax middens have been interpreted as evidence that the animals were forced to obtain a greater proportion of their diet from trees and shrubs, which are less susceptible to extended periods of drought. [ 6 ] However, these data do not necessarily provide a direct and unambiguous indicator of past arid/humid shifts. As such, other studies have focussed on the use of δ 15 N data as a potential proxy for water availability in the environment. [ 6 ] [ 7 ] [ 4 ] [ 5 ] In palaeoclimatology, the variables for which reconstructions are most often sought are humidity and temperature.
Unfortunately, direct, or even reliable, proxies for these are rarely available, and it is necessary to make several inferential steps in order to interpret their past variability. Recent work on hyrax middens has shown that δ 15 N records from middens may provide a clearer, more direct estimation of water availability than previously possible in southern Africa. [ 5 ] [ 4 ] [ 2 ] [ 6 ] As described by Chase et al., [ 1 ] it has long been understood that the 15 N abundance in animal tissues is influenced by diet, climate and/or physiology. [ 68 ] [ 69 ] [ 70 ] In terms of diet, a clear distinction exists between δ 15 N values in carnivores and herbivores, with enrichment in 15 N occurring up trophic levels. [ 71 ] Among herbivores, a link between increased δ 15 N values in animal tissues and aridity was identified very early, [ 68 ] [ 71 ] but it was thought to be predominantly a function of the animals' metabolism. Ambrose and DeNiro, [ 69 ] [ 72 ] based in part on the apparent lack of relationship between 15 N/ 14 N ratios in plants and the amount of rainfall, [ 68 ] developed a model to account for the enrichment of 15 N in animal tissues based on physiological mechanisms of water conservation and nitrogen isotope mass balance. In this model, under arid conditions drought-tolerant herbivores concentrate their urine and excrete more 15 N-depleted urea, leaving the body enriched in 15 N. Conversely, water-dependent species that do not concentrate their urine were observed to have smaller δ 15 N ranges and lower mean values in their tissues. [ 69 ] This predicted differentiation between drought-tolerant and water-dependent species, however, only found partial support in South Africa.
[ 73 ] This study suggested that animals in arid regions are likely to eat lower protein diets (%N decreases with increasing aridity [ 74 ] ), and that the additional protein produced by symbiotic bacteria in the animals' digestive tracts would essentially result in a shift in trophic level and an enrichment of 15 N in the animals' tissues. Similarly, Codron and Codron [ 75 ] found no significant difference in faecal δ 15 N between drought-tolerant and water-dependent herbivores, but did identify a significant correlation between %N and δ 15 N. In contrast to the initial findings of Heaton et al., [ 68 ] subsequent studies of soils and plants across aridity gradients indicate a clear negative correlation between δ 15 N and rainfall. [ 76 ] [ 77 ] [ 74 ] [ 78 ] As this was the original impetus for the construction of the mass balance model and its corollaries, these models, and their implications for interpreting δ 15 N records in plant and animal tissues, should be reconsidered. Although a strong relationship has been established between soil and plant δ 15 N, the link with rainfall is sometimes considered to be less robust (e.g. [ 75 ] ). One of the primary difficulties in determining the relationship between precipitation and δ 15 N values in soils, plants, animal tissues and excrement is the means by which precipitation is determined. It has been noted by Handley et al. [ 76 ] that δ 15 N in soils and plants may change substantially across a landscape as a function of variations in soil moisture. Since soil moisture varies as a result of subtle changes in topography, aspect and soil type, particularly in drylands where sparse vegetation and poorly formed soils exacerbate the heterogeneity of the biogeochemical landscape, [ 79 ] the common practice of using rainfall records from the nearest gauge and/or interpolated from regional stations will inevitably weaken the significance of any correlation.
Soil moisture and δ 15 N also vary significantly over short, sub-seasonal timescales [ 76 ] and, combined, these fine-scale spatio-temporal variations need to be adequately controlled for if reliable δ 15 N-climate correlations are to be identified. If we accept that plant δ 15 N is determined by soil δ 15 N, and that the link with climate, while identified, has been imperfectly explored, it remains to determine to what extent variations in plant δ 15 N account for the variations identified in animal tissue and/or excrement. Murphy and Bowman [ 80 ] [ 81 ] investigated variations in grass and kangaroo bone δ 15 N from across Australia and demonstrated a remarkably consistent relationship between plant and bone δ 15 N signals. Moisture availability, through its influence on the isotopic signature of plants/diet, was inferred as the primary control on animal δ 15 N, with metabolism having no clear effect. It is interesting to note that Ambrose and DeNiro's [ 69 ] findings are not inconsistent with these results, as drought-tolerant species can inhabit more arid regions with less regular rainfall (higher, wider δ 15 N range) while water-dependent animals will be more restricted to well-watered areas (lower, smaller δ 15 N range). To extend the findings of Murphy and Bowman [ 80 ] [ 81 ] to the study of excrement and hyrax middens, one can consider the studies of (1) Codron and Codron, [ 75 ] which concluded that faecal δ 15 N corresponds to changes in plant δ 15 N, and (2) Sponheimer et al., [ 82 ] which found that, while preferential urinary excretion of isotopically light nitrogen may occur under conditions of disequilibrium, an unstressed animal at "steady state" will have equivalent dietary and excreta δ 15 N.
Since faecal and animal δ 15 N track plant δ 15 N, and under normal conditions total excreta δ 15 N is equivalent to dietary (plant) δ 15 N, it follows that urinary δ 15 N, while perhaps more negative relative to dietary δ 15 N, [ 82 ] will reflect trends in plant δ 15 N and water availability. Hyrax middens thus provide an optimal archive for the study of δ 15 N as a proxy for long-term environmental change. The effects of contemporary ecosystem variability are mitigated by the spatial and temporal averaging intrinsic in hyraxes' wide dietary preferences, restricted range, the probable contribution of multiple individuals to a single δ 15 N sample, and the relatively long periods of time incorporated into each sample. In these archives, microtopographic variations in soil moisture (and thus δ 15 N) are accounted for by the feeding habits of the hyrax, and it is expected that the spatio-temporal averaging will allow for the reliable identification of long-term changes in water availability as reflected in variations in midden δ 15 N. Over long timescales (10 2 -10 3 yr), this expectation is borne out, and the potential of hyrax middens as diachronic palaeoclimatic records has been supported by strong similarities between variations in δ 15 N records and a range of palaeoenvironmental proxies reflecting changes in precipitation. [ 5 ] [ 4 ] [ 2 ] [ 6 ]
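The δ 15 N values discussed throughout use standard delta notation, expressing a sample's 15 N/ 14 N ratio relative to the atmospheric N 2 standard in parts per thousand. A minimal sketch of this convention follows; the sample ratios are illustrative values, not measurements from any midden.

```python
# Delta notation: delta-15N (per mil) expresses a sample's 15N/14N ratio
# relative to the atmospheric N2 (AIR) standard.
R_STD = 0.0036765  # 15N/14N of atmospheric N2 (AIR standard)

def delta15N(r_sample):
    """Return delta-15N in per mil for a given 15N/14N ratio."""
    return (r_sample / R_STD - 1.0) * 1000.0

# Illustrative (hypothetical) ratios only: under the aridity relationship
# described above, a drier setting yields a more 15N-enriched signal.
arid_site = delta15N(0.003720)   # hypothetical arid-site ratio
humid_site = delta15N(0.003690)  # hypothetical humid-site ratio
print(round(arid_site, 2), round(humid_site, 2))
```

A higher delta value indicates enrichment in the heavy isotope, which is the sense in which midden δ 15 N records are read as aridity proxies.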
https://en.wikipedia.org/wiki/Rock_hyrax_midden
Rock mechanics is a theoretical and applied science of the mechanical behavior of rocks and rock masses. [ 1 ] Compared to geology, it is the branch of mechanics concerned with the response of rock and rock masses to the force fields of their physical environment. [ 1 ] Rock mechanics is part of the much broader subject of geomechanics , which is concerned with the mechanical responses of all geological materials, including soils. [ 1 ] Rock mechanics is concerned with the application of the principles of engineering mechanics to the design of structures built in or on rock. [ 1 ] The structure could include many objects such as a drilling well, a mine shaft, a tunnel, a reservoir dam, a repository component, or a building. [ 1 ] Rock mechanics is used in many engineering disciplines, but is primarily used in mining, civil, geotechnical, transportation, and petroleum engineering. [ 2 ] [ 3 ] Rock mechanics answers questions such as, "is reinforcement necessary for a rock, or will it be able to handle whatever load it is faced with?" [ 4 ] It also includes the design of reinforcement systems, such as rock bolting patterns. [ 4 ] Before any work begins, the construction site must be investigated properly to determine the geological conditions of the site. [ 5 ] Field observations, deep drilling, and geophysical surveys can all give necessary information to develop a safe construction plan and create a site geological model. [ 5 ] The level of investigation conducted at the site depends on factors such as budget, time frame, and expected geological conditions. [ 5 ] The first step of the investigation is the collection of maps and aerial photos to analyze. [ 5 ] This can provide information about potential sinkholes, landslides, erosion, etc. Maps can provide information on the rock type of the site, geological structure, and boundaries between bedrock units.
[ 5 ] Creating a borehole is a technique that consists of drilling through the ground in various areas at various depths, to get a better understanding of the site's geology. [ 5 ] Boreholes must be spaced properly from one another and drilled deep enough to provide accurate information for the geological model. [ 5 ] Samples from the borehole are investigated, and factors such as rock type, degree of weathering, and types of discontinuities are all recorded. [ 5 ] Testing the properties of a rock is essential to understand how stable or unstable it is. [ 2 ] Rock mechanics involves three categories of testing methods: tests on intact rocks, discontinuities and rock masses. [ 6 ] Two direct methods of testing that can be done are laboratory tests and in-situ tests. [ 6 ] There are also indirect methods of testing, which involve correlations and estimations that are obtained by analyzing field observations. [ 6 ] The data these testing methods provide are crucial for the design, structure and research of rock mechanics and rock engineering. [ 6 ] Intact rocks and discontinuities can be tested in the laboratory through running small-scale experiments to gather empirical data; however, rock masses require larger-scale field measurements rather than laboratory work due to their more complex nature. [ 6 ] Laboratory tests provide both classification and characterization of the rock as well as a determination of which rock properties will be used in the engineering design. [ 6 ] Examples of some of these laboratory tests include: sound velocity tests, hardness tests, creep tests and tensile strength tests. [ 6 ] In-situ tests, in which the rock being studied is subjected to a heavy load and observed for deformation, provide insight into what affects a rock mass's strength and stability.
[ 6 ] Understanding the strength of a rock mass is difficult but necessary for ensuring the safety of anything built on or around it, and depends on factors such as the environmental conditions, the size of the mass, and how discontinuous it might be. [ 7 ]
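The laboratory strength tests mentioned above ultimately reduce to simple stress/strain arithmetic. As a minimal sketch, the following reduces data from a hypothetical uniaxial compression test on a cylindrical core; all numerical inputs are illustrative assumptions, not values from any published test.

```python
import math

def uniaxial_results(peak_load_kN, diameter_mm, stress_MPa, strain):
    """Reduce uniaxial compression test data on a cylindrical core.

    Returns (UCS in MPa, tangent Young's modulus in GPa), where UCS is
    peak load over cross-sectional area, and the modulus is the slope of
    the linear part of the stress-strain curve.
    """
    area_mm2 = math.pi * (diameter_mm / 2.0) ** 2
    ucs = peak_load_kN * 1000.0 / area_mm2   # N / mm^2 = MPa
    modulus = (stress_MPa / strain) / 1000.0  # MPa per unit strain -> GPa
    return ucs, modulus

# Hypothetical test: a 54 mm core failing at 250 kN, with 50 MPa measured
# at 0.1% axial strain on the linear portion of the curve.
ucs, E = uniaxial_results(250.0, 54.0, 50.0, 0.001)
print(round(ucs, 1), round(E, 1))
```

Classification schemes for rock masses then combine such intact-rock values with discontinuity and field observations, which is why the laboratory numbers alone do not determine rock mass strength.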
https://en.wikipedia.org/wiki/Rock_mechanics
A rock shed is a civil engineering structure used in mountainous areas where rock slides and landslides create highway closure problems. A rock shed is built over a roadway that is in the path of the slide. They are equally used to protect railroads. [ 1 ] They are usually designed as a heavy reinforced concrete covering over the road, protecting the surface and vehicles from damage due to falling rocks, with a sloping surface to deflect slip material beyond the road; [ 2 ] however, an alternative is to include an impact-absorbing layer above the ceiling. [ 3 ] A further use of this type of structure may be seen protecting the A4 road ; although constructed primarily to alleviate risk from falling rocks from a limestone seam, [ 4 ] it also serves to protect against objects or persons falling from the Clifton Suspension Bridge , [ 5 ] where the height differential of approximately 70 metres from the bridge to the bottom of the Avon Gorge would give even a relatively small item sufficient kinetic energy to cause injury on impact.
https://en.wikipedia.org/wiki/Rock_shed
A rockbreaker is a machine designed to manipulate large rocks, including reducing large rocks into smaller rocks. They are typically used in the mining industry to remove oversize rocks that are too large or too hard to be reduced in size by a crusher . Rockbreakers consist of two major components: a hydraulic hammer (used to break rocks) and a boom (the arm). There are two major types of rock breakers, mobile and stationary, the latter typically placed on a pedestal or slew frame. In 2008, researchers from the CSIRO implemented remote-operation functionality for a Transmin rockbreaker located at Rio Tinto 's West Angelas iron ore mine from Perth, over 1000 km away. [ 1 ] In 2011, Transmin developed the first commercially available automation system for pedestal rockbreakers. The system was first installed at Newcrest 's Ridgeway Deeps gold mine, providing collision avoidance and remote operation functionality.
https://en.wikipedia.org/wiki/Rockbreaker
Designed by Vannevar Bush after he became director of the Carnegie Institution for Science in Washington DC, the Rockefeller Differential Analyzer (RDA) was an all-electronic version of the Differential Analyzer , which Bush had built at the Massachusetts Institute of Technology between 1928 and 1931. [ 1 ] The RDA was operational in 1942, a year after the Zuse Z3 . It was equipped with 2000 vacuum tubes, weighed 100 tons, and used 200 miles of wire, 150 motors and thousands of relays. According to historian Robin Boast , "the RDA (Rockefeller Differential Analyzer) was revolutionary, and later was considered to be one of the most important calculating machines of the Second World War." [ 2 ]
https://en.wikipedia.org/wiki/Rockefeller_Differential_Analyzer
A Rocker Shovel Loader , sometimes simply referred to as a Rocker Shovel or Mucker , is a type of mechanical loader used in underground mining . [ 1 ] A Rocker Shovel is usually powered by compressed air, or in some cases electricity. It is commonly mounted on steel wheels designed to run on narrow gauge rails, with some later models using metal or rubber-tyred road wheels. The operator, standing on a raised platform to one side of the machine, operates the controls: one lever to drive the machine along the tracks, and another to raise and lower the bucket. Once the bucket has been filled by driving the loader forwards into the pile of material, the rocker mechanism throws the contents over the top of the machine and into a wagon behind. Once full, the loaded wagon can be taken away and replaced with an empty one to allow loading to continue. On 28 May 1937, Edwin Burton Royle applied for a patent as inventor of the "loading machine"; US Patent No. 2,134,582 was issued on October 25, 1938 and assigned to the Eastern Iron Metals Company (later to be known as EIMCO ). [ 2 ] In 2000, the American Society of Mechanical Engineers added the EIMCO 12B Rocker Shovel Loader of 1938 to its List of Historic Mechanical Engineering Landmarks as reference number 212 out of a total number of 259 objects (as of 2015). [ 1 ] In June 2012, an EIMCO 12B Rocker Shovel was featured in an episode of the American reality television series Auction Hunters , filmed in Littleton, Colorado . It was sold to a gold miner for $3,600.
https://en.wikipedia.org/wiki/Rocker_Shovel_Loader
A rocker box (also known as a cradle or a big box) is a gold mining implement for separating alluvial placer gold from sand and gravel which was used in placer mining in the 19th century. It consists of a high-sided box, which is open on one end and on top, [ 1 ] and was placed on rockers . The inside bottom of the box is lined with riffles and usually a carpet (called miner's moss), similar to a sluice box . On top of the box is a classifier sieve (usually with half-inch or quarter-inch openings) which screens out larger pieces of rock and other material, allowing only finer sand and gravel through. Between the sieve and the lower sluice section is a baffle, which acts as another trap for fine gold and also ensures that the aggregate material being processed is evenly distributed before it enters the sluice section. It sits at an angle and points towards the closed back of the box. Traditionally, the baffle consisted of a flexible apron made of canvas or a similar material, which had a sag of about an inch and a half in the center, to act as a collection pocket for fine gold. Later rockers (including most modern ones) dispensed with the flexible apron and used a pair of solid wood or metal baffle boards. These are sometimes covered with carpet to trap fine gold. The entire device sits on rockers at a slight gradient, which allows it to be rocked side to side. Today, the rocker box is not used as extensively as the sluice, but is still an effective method of recovering gold in areas where there is not enough available water to operate a sluice effectively. Like a sluice box, the rocker box has riffles and a carpet in it to trap gold. It was designed to be used in areas with less water than a sluice box. The mineral processing involves pouring water out of a small cup and then rocking the small sluice box like a cradle , thus the name rocker box or cradle. Rocker boxes must be manipulated carefully, to prevent losing the gold.
Although big and difficult to move, the rocker can process twice the amount of gravel, and therefore recover more gold in one day, than an ordinary gold mining pan. The rocker, like the pan, is used extensively in small-scale placer work, in sampling, and for washing sluice concentrates and material cleaned by hand from bedrock in other placer operations. One to three cubic yards, bank measure, can be dug and washed in a rocker per man-shift, depending upon the distance the gravel or water has to be carried, the character of the gravel, and the size of the rocker. Rockers are usually homemade and display a variety of designs. A favorite design consists essentially of a combination washing box and screen, a canvas or carpet apron under the screen, a short sluice with two or more riffles, and rockers under the sluice. The bottom of the washing box consists of sheet metal with holes about half an inch in diameter punched in it, or a half-inch mesh screen can be used. Commonly published dimensions are satisfactory, but variations are possible. The bottom of the rocker should be made of a single wide, smooth board, which will greatly facilitate cleanups. The materials for building a rocker cost only a few dollars, depending mainly on the source of lumber.
https://en.wikipedia.org/wiki/Rocker_box
A rocket sled launch , also known as ground-based launch assist , catapult launch assist , and sky-ramp launch , is a proposed method for launching space vehicles. With this concept the launch vehicle is supported by an eastward-pointing rail or maglev track that goes up the side of a mountain while an externally applied force is used to accelerate the launch vehicle to a given velocity. Using an externally applied force for the initial acceleration reduces the propellant the launch vehicle needs to carry to reach orbit. This allows the launch vehicle to carry a larger payload and reduces the cost of getting to orbit. When the amount of velocity added to the launch vehicle by the ground accelerator becomes great enough, single-stage-to-orbit flight with a reusable launch vehicle becomes possible. For hypersonic research in general, tracks at Holloman Air Force Base have tested, as of 2011, small rocket sleds moving at up to 6453 mph (2,885 m/s; Mach 8.5). [ 1 ] Effectively, a sky ramp would make the most expensive first stage of a rocket fully reusable, since the sled is returned to its starting position to be refueled, and may be reused on the order of hours after use. Present launch vehicles have performance-driven costs of thousands of dollars per kilogram of dry weight ; sled launch would aim to reduce performance requirements and amortize hardware expenses over frequent, repeated launches. Designs for mountain-based inclined-rail sleds often use jet engines or rockets to accelerate the spacecraft mounted on them. Electromagnetic methods (such as Bantam, Maglifter, and StarTram ) are another technique investigated to accelerate a rocket before launch, potentially scalable to greater rocket masses and velocities than air launch . [ 2 ] [ 3 ] Rockets carrying their own propellant with them use the vast majority of that propellant at the beginning of their journey to accelerate most of that very same propellant, as described by the rocket equation .
For example, the Space Shuttle used more than a third of its fuel just to reach 1,000 mph (1,600 km/h). [ 4 ] If that energy were provided without (yet, or at all) using a propellant the rocket carries, its propellant need would be much reduced, and its payload could be a larger fraction of its liftoff mass, increasing its efficiency. Due to factors including the exponential nature of the rocket equation and higher propulsive efficiency than if a rocket takes off stationary, a NASA Maglifter study estimated that a 270 m/s (600 mph) launch of an ELV rocket from a 3000-meter altitude mountain peak could increase payload to low Earth orbit by 80% compared to the same rocket from a conventional launch pad . [ 5 ] Mountains of such height are available within the mainland U.S. for the easiest logistics, or nearer to the Equator for a little more gain from Earth's rotation . Among other possibilities, a larger single-stage-to-orbit (SSTO) vehicle could be reduced in liftoff mass by 35% with such launch assist, dropping to 4 instead of 6 engines in one case considered. [ 5 ] At an anticipated efficiency close to 90%, electrical energy consumed per launch of a 500-ton rocket would be around 30 gigajoules (8,300 kWh) (each kilowatt-hour costing a few cents at the current cost of electricity in the United States), aside from any additional losses in energy storage. It is a system with low marginal costs dominated by initial capital costs. [ 3 ] Although a fixed site, it was estimated to provide a substantial net payload increase for a high portion of the varying launch azimuths needed by different satellites, with rocket maneuvering during the early stage of post-launch ascent (an alternative to adding electric propulsion for later orbital inclination change ). Maglev guideway costs were estimated as $10–20 million per mile in the 1994 study, which had anticipated annual maglev maintenance costs on the order of 1% of capital costs.
[ 5 ] Rocket sled launch helps a vehicle gain altitude, and proposals commonly involve the track curving up a mountain. Advantages of any launch system that starts from high altitude include reduced gravity drag (the cost of lifting fuel in a gravity well). The thinner air reduces air resistance and allows more efficient engine geometries. Rocket nozzles have different shapes (expansion ratios) to maximize thrust at different air pressures. (Though NASA's aerospike engine for the Lockheed Martin X-33 was designed to change geometry to remain efficient at a variety of different pressures, the aerospike engine added weight and complexity; X-33 funding was canceled in 2001; and other benefits from launch assist would remain even if aerospike engines reached flight testing.) [ 6 ] [ 7 ] For example, the air is 39% thinner at 2500 meters. The more efficient rocket plume geometry and the reduced air friction allow the engine to be 5% more efficient per amount of fuel burned. [ 8 ] Another advantage of high-altitude launches is that they eliminate the need to throttle back the engine when the max Q limit is approached. Rockets launched in thick atmosphere can go so fast that air resistance may cause structural damage. [ 9 ] Engines are throttled back when max Q is reached, until the rocket is high enough that they can resume full power. The Atlas V 551 gives an example of this: it reaches its max Q at 30,000 feet, and its engine is throttled back to 60% thrust for 30 seconds. [ 10 ] This reduced acceleration adds to the gravity drag the rocket must overcome. Additionally, spacecraft engines concerned with max Q are more complex, as they must be throttled during launch. A launch from high altitude need not throttle back at max Q, as it starts above the thickest portion of the Earth's atmosphere. Debora A. Grant and James L.
Rand, in "The Balloon Assisted Launch System – A Heavy Lift Balloon", [ 11 ] wrote: "It was established some time ago that a ground launched rocket capable of reaching 20 km would be able to reach an altitude of almost 100km if it was launched from 20km." They suggest that small rockets are lifted above the majority of the atmosphere by balloon in order to avoid the problems discussed above. A sled track that gave a Mach 2 or greater launch assist could reduce the fuel to orbit by 40% or more, while helping counter the weight penalty when aiming to make a fully reusable launch vehicle . Angled at 55° to vertical, a track on a tall mountain could allow a single stage to orbit reusable vehicle with no new technology. [ 12 ]
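The max Q benefit discussed above can be illustrated with dynamic pressure, q = ½ρv². The exponential (isothermal) atmosphere model and the 400 m/s sample speed below are simplifying assumptions for illustration, not figures from the sources cited here.

```python
import math

RHO0 = 1.225   # sea-level air density, kg/m^3
H = 8500.0     # atmospheric scale height, m (isothermal approximation)

def dynamic_pressure(altitude_m, speed_ms):
    """q = 1/2 * rho * v^2, with an exponential-atmosphere density model."""
    rho = RHO0 * math.exp(-altitude_m / H)
    return 0.5 * rho * speed_ms**2

# Same speed, two starting altitudes: dynamic pressure is about 30% lower
# when the trajectory begins on a ~3,000 m mountain.
print(dynamic_pressure(0, 400))      # Pa, at sea level
print(dynamic_pressure(3000, 400))   # Pa, from a 3,000 m peak
```

In this simplified picture, a vehicle released at altitude sees a lower peak q at any given speed, which is the basis of the claim that a high-altitude launch may avoid throttling back at max Q.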
https://en.wikipedia.org/wiki/Rocket_sled_launch
A rockfall or rock-fall [ 1 ] is a quantity of rock that has fallen freely from a cliff face. The term is also used for the collapse of rock from the roof or walls of mine or quarry workings. A rockfall is "a fragment of rock (a block) detached by sliding, toppling, or falling, that falls along a vertical or sub-vertical cliff, proceeds down slope by bouncing and flying along ballistic trajectories or by rolling on talus or debris slopes". [ 2 ] Alternatively, a rockfall is "the natural downward motion of a detached block or series of blocks with a small volume involving free falling, bouncing, rolling, and sliding". The mode of failure differs from that of a rockslide. [ 1 ] Favourable geology and climate are the principal causal mechanisms of rockfall; contributing factors include the intact condition of the rock mass, discontinuities within the rock mass, weathering susceptibility, ground and surface water, freeze-thaw, root-wedging, and external stresses. A tree blown by the wind, for example, exerts pressure at root level that can loosen rocks and trigger a fall. The pieces of rock collect at the bottom, creating a talus or scree. Rocks falling from the cliff may dislodge other rocks and set off another mass wasting process, for example an avalanche. A cliff whose geology favours rockfall may be said to be incompetent; one that is better consolidated and does not favour rockfall may be said to be competent. [ 3 ] In higher-altitude mountains, rockfalls may be caused by thawing of rock masses with permafrost. [ 4 ] In contrast, in lower-altitude mountains with warmer climates, rockfalls may be caused by weathering enhanced by non-freezing conditions. [ 4 ] Assessing the propagation of rockfall is a key issue for defining the best mitigation strategy, as it allows the delineation of run out zones and the quantification of the rock blocks' kinematic parameters along their way down to the elements at risk.
[ 5 ] For this purpose, many approaches may be considered. For example, the energy line method allows the rockfall run out to be estimated expediently. [ 6 ] Numerical models simulating the rock block propagation offer a more detailed characterisation of the rockfall propagation kinematics. [ 7 ] These simulation tools in particular focus on the modeling of the rebound of the rock block on the soil. [ 8 ] The numerical models in particular provide the rock block passing height and kinetic energy that are necessary for designing passive mitigation structures. Typically, rockfall events are mitigated in one of two ways: either by passive mitigation or active mitigation. [ 9 ] Passive mitigation addresses only the effects of the rockfall event and is generally employed in the deposition or run-out zones, for example through the use of drape nets, rockfall catchment fences, galleries, ditches, embankments, etc. The rockfall still takes place, but an attempt is made to control the outcome. In contrast, active mitigation is carried out in the initiation zone and prevents the rockfall event from ever occurring. Some examples of these measures are rock bolting, slope retention systems, shotcrete, etc. Other active measures might involve changing the geographic or climatic characteristics in the initiation zone, e.g. altering slope geometry, dewatering the slope, revegetation, etc. Design guides for passive measures with respect to block trajectory control have been proposed by several authors. [ 10 ] [ 11 ] [ 12 ] The effect of rockfalls on trees can be seen in several ways. The tree's roots may be rotated by the rotational energy of the rockfall; the tree may be displaced by translational energy; and lastly, deformation, either elastic or plastic, may occur. Dendrochronology can reveal a past impact through missing tree rings, as the tree rings grow around and close over a gap; the callus tissue can be seen microscopically.
A macroscopic section can be used for dating of avalanche and rockfall events. [ 13 ]
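The energy line method mentioned above can be sketched in its simplest form: a block is assumed to stop where a line drawn from the release point at the energy-line (reach) angle meets the terrain. The cliff height and angle below are hypothetical, and the flat-ground geometry is a simplifying assumption; practical applications work over digital terrain profiles.

```python
import math

def energy_line_runout(fall_height_m, energy_line_angle_deg):
    """Horizontal runout distance for a block released from the top of a
    vertical cliff onto flat ground, using the energy-line (reach-angle)
    method. In this simple geometry it reduces to L = H / tan(beta)."""
    return fall_height_m / math.tan(math.radians(energy_line_angle_deg))

# Hypothetical 50 m cliff with an assumed 32-degree energy-line angle:
print(f"{energy_line_runout(50, 32):.1f} m")
```

A steeper (larger) energy-line angle means more energy dissipated per metre travelled, hence a shorter runout; the angle itself is usually back-calculated from past events at comparable sites.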
https://en.wikipedia.org/wiki/Rockfall
Rockwell Collins, Inc. was a multinational corporation headquartered in Cedar Rapids , Iowa , providing avionics and information technology systems and services to government agencies and aircraft manufacturers . It was formed when the Collins Radio Company , facing financial difficulties, was purchased by Rockwell International in 1973. In 2001, the avionics division of Rockwell International was spun off to form the current Rockwell Collins, Inc., retaining its name. It was acquired by United Technologies Corporation on November 27, 2018, and since then operates as part of Collins Aerospace , a subsidiary of the RTX Corporation (formerly Raytheon Technologies). [ 3 ] Arthur A. Collins founded Collins Radio Company in 1933 in Cedar Rapids, Iowa . It designed and produced both shortwave radio equipment and equipment for the AM radio broadcast industry. Collins supplied the military, the scientific community, and the larger AM radio stations with equipment. Collins provided the equipment to establish a communications link with the South Pole expedition of Rear Admiral Richard E. Byrd in 1933. In 1936, Collins had begun production of the 12H audio console, 12X portable field announcers box, and the 300E and 300F broadcast transmitters. Throughout World War II , the 212A1 and 212B1 replaced the 12H design. Collins became the principal supplier of radio and navigation equipment used in the military. [ citation needed ] In the postwar years, the Collins Radio Company expanded its work in the communications field, while broadening its technology into flight-control instruments, radio-communication devices, and satellite voice transmissions. Collins Radio Company provided communications for the United States' role in the Space Race , including equipment for astronauts to communicate with earth stations and equipment to track and communicate with spacecraft. Collins communications equipment was used for Projects Mercury , Gemini and Apollo . [ 4 ] In 1973, the U.S. 
Skylab program used Collins equipment to provide communication from the astronauts to Earth. After facing financial difficulties, the Collins Radio Company was purchased by Rockwell International in 1973. In 2001, the avionics division of Rockwell International was spun off to form Rockwell Collins, Inc, retaining its name. Rockwell Collins was highly concentrated in the defense and commercial avionics markets, and no longer marketed receivers to the public. On April 28, 2000, Rockwell International Corp and its Rockwell Collins unit agreed to acquire Sony Corp's Sony Trans Com ( Irvine, California ) for undisclosed terms. [ 5 ] [ 6 ] Sony had purchased the business from Sundstrand Corporation in 1989. [ 7 ] [ 8 ] On December 20, 2000, Rockwell Collins expanded its services to commercial and executive aviation in Mercosur countries. [ 9 ] The company had acquired several companies, including Hughes-Avicom's in-flight entertainment business (1998), Sony Trans Com (2000), Intertrade Ltd., Flight Dynamics, K Systems, Inc. (Kaiser companies), Communication Solutions, Inc., Airshow, Inc. (2002), NLX (Simulation Business) in 2003, [ 10 ] portions of Evans & Sutherland , TELDIX GmbH , IP Unwired, Anzus Inc. in 2006, Information Technology and Applications Corp in 2007, Athena Technologies , Datapath Inc. (divested in 2014), SEOS Displays Ltd., Air Routing International in 2010, [ 11 ] Computing Technologies for Aviation (CTA) in 2011, [ 12 ] ARINC in 2014, [ 13 ] and BE Aerospace in 2017. [ 14 ] The company was among the major suppliers of in-flight entertainment (IFE). Rockwell Collins' key competitors in this industry included Panasonic Avionics Corporation , Thales Group , and JetBlue 's IFE subsidiary LiveTV , which was later purchased by Thales in 2014 for $400 million. [ 15 ] In 2010, the company employed over 20,000 people [ 16 ] and had an annual turnover of US$ 4.665 billion. Its nonexecutive chairman was Anthony Carbone following the retirement of Clayton M. 
Jones. [ 17 ] In September 2012, Kelly Ortberg was appointed as president of the company. [ 18 ] In August 2013, Kelly Ortberg was appointed CEO of Rockwell Collins. [ 19 ] On September 4, 2017, United Technologies of Farmington, Connecticut, agreed to acquire the company for $30 billion. [ 20 ] The transaction closed on November 26, 2018. [ 21 ] Starting in the mid-1930s, the Collins Radio Company constructed and sold transmitters and audio mixing consoles to the broadcast industry. In 1939, the model 12 Speech Input Console, in addition to the 26C limiter amplifier, was licensed to Canadian Marconi Co. for both sales in Canada and His Majesty's Service for the war effort. [ citation needed ] Collins' success in constructing broadcast transmitters continued to grow, with well over a thousand sold up to the start of World War II. During World War II, Collins' expertise grew in high-power transmitters, producing designs that ran well over 15 kilowatts (kW) of RF power on a continuous basis. After the war, Collins produced the 300G AM transmitter, which some consider the finest low-power (300 W) AM transmitter ever produced. Collins remained an important manufacturer of AM and FM broadcast radio transmitters for the commercial market, surviving the drastic cost-cutting market of the 1960s and 1970s. [ citation needed ] The transmitter line was later sold to Continental Electronics, which continued to produce a number of Collins designs under its own nameplate before phasing them out in the 1980s. Collins produced several shortwave transmitters for the commercial market. The "30" Series catered to the growing needs of state highway patrol agencies and Department of Commerce aviation. During World War II, Collins produced high-power transmitters for aircraft, notably the ART-13 equipped with automatic tuning circuits, which represented an important enhancement for airborne radio communications.
[ 22 ] : 60-61 After World War II, Collins supported both broadcast and the growing postwar amateur radio market. The United States Coast Guard Cutter USCGC Courier was employed as seagoing relay station for Voice of America programming using two Collins 207B-1 transmitters . [ 23 ] [ 24 ] Amateur radio transmitters included the 32V-1, -2, and -3, the KWS-1, and the rack-mounted KW-1. [ 25 ] Around 1947, the company introduced their first amateur radio receiver, the 75A-1 (called the 75A). This set achieved excellent stability for the time due to high build quality and the use of a permeability tuned oscillator in its second conversion stage. It was one of the few double-conversion superheterodynes on the market, and covered only the amateur bands. With the experience gained in the design of the 75A-1, Collins released the 51J-1 receiver, a general-coverage HF set covering 500 kHz to 30 MHz . It was produced in somewhat updated versions (51J-2, 51J-3, 51J-4) for about a decade. It was known as the R-388 and was used in multiple receiver diversity radioteletype installations. The 75A amateur line was updated throughout the early 1950s, finishing with the 75A-4 , which was released in 1955. The Collins mechanical filter was introduced to consumers in the 75A-3, and the 75A-4 was one of the first receivers marketed specifically as a single sideband receiver. Around 1950, Collins began designing the R-390 ( 500 kHz — 30 MHz ) for the US military. This was intended to be a receiver of the highest performance available, with the ruggedness and serviceability required for military duty. It featured direct mechanical digital frequency readout. The set is composed of several modules for easy field repair—a bad module could simply be swapped out and repaired later, or junked. Sets built during the original 1951 contract cost the government about US$2,500 (equivalent to $30,285 in 2024) each, and around 16,000 were produced. 
Concurrently, Collins developed the R-389, a long-wave version with fewer than 1,000 made. The R-391, another variant of the R-390, allowed choice of eight different autotuned channels. Three years later, Collins delivered the R-390A [ 26 ] to the military. About 54,000 were produced and the set was a military workhorse until the 1970s. Like the R-390, it can outperform many modern radios, to the point that it was designated top secret until the late 1960s. In 1958, Collins replaced the 75A series with the much smaller 75S series, part of the S/Line. These featured mechanical filters, very accurate frequency readout, and excellent stability. At the request of the US government, Collins designed the 51S-1 general-coverage set, which was essentially (in intended use) a physically smaller replacement for the 51J series. It was not intended as a replacement for the higher-performance R-390A, and unlike the R-390A, it was extensively marketed for commercial use. Collins produced a few high-performance solid-state receivers in the 1970s, such as the 651S-1. Like their tube predecessors, these are coveted by collectors today. With the introduction of the S/Line in 1958, Collins moved from designing individual products that could be used together, to ones that were designed to integrate and operate together, in various combinations, as a system. They were the first equipment maker to take this approach. Collins was also the first to introduce a compact HF transceiver , the KWM-1, the year before. Together, these two innovations put Collins temporarily ahead of its competition, and set the stage for other manufacturers and the next generation of amateur (and military) HF radio equipment. The 75S-1 receiver and 32S-1 transmitter, comprising the heart of the S/Line, operated separately or together to transceive. The units included crystal band-pass filters and a new compact design that provided stable, highly linear tuning across 200 kHz band segments . 
The S/Line tuning-dial mechanism was unique when introduced. It used concentric dials and a gear mechanism that provided precise dial resolution, better than 1 kHz. Within a few years, Collins had introduced additional S/Line components, including the 30S-1 kilowatt power amplifier, the 30L-1 desktop power amplifier, and the 62S-1 transverter , which provided coverage of the 6-m (50 MHz) and 2-m (144 MHz) amateur bands. The KWM-2 transceiver replaced the KWM-1 using many of the S/Line's design features and matching its styling. Other accessories included speakers, microphones, and control consoles. Illustrating the uniqueness of their new, smaller units in the market, Collins advertisements in the 1950s and early 1960s emphasized the S/Line's physical styling and size, as often as they did its performance. [ 27 ] Collins continued to improve the S/Line, first introducing the S-2, then the S-3 units, the 75S-3 (and -3A, -3B and -3C) receiver, and the 32S-3 and -3A transmitters. The -3A and -3C units were identical to the -3 and -3B units, respectively, except they provided an extra set of heterodyne oscillator crystals, enabling them to cover extra bands – useful for military, amateur and MARS operation, where operation just outside the regular amateur bands was necessary. Among amateur radio operators, the S/Line established its reputation as perhaps the most solidly engineered equipment available, and the most costly. As a result, S/Line equipment, and the A-Line and other predecessors, are restored, prized, and operated on the air by collectors today. Collins continued to produce the S/Line well into the late 1970s, and after its acquisition by Rockwell. In 1978, with the move to solid-state design, the S/Line came to an end after a two-decade production run. The KWM-380 transceiver was introduced the next year, a break with the past both in its use of transistors and digital technology, and its styling. 
It was Collins' final entry in the amateur radio market and remained in production until it was discontinued in the mid-1980s. [ 28 ] In the 1960s, the company designed and sold C-System computerized message-switching equipment, built an intranet, and began implementing computer storage of design data for circuit boards and assemblies. They had a goal of automating all functions from parts ordering and inventory to factory scheduling to generation of maintenance provisioning. Although the products were technically successful and in many respects far ahead of their time, Mr. Collins continued to invest in development at a rate that sales could not support when a downturn occurred, and the company began to have financial problems. In 1991, Rockwell sold its Richardson, Texas-based Network Transmission Systems division to Alcatel. [ 29 ] [ 30 ] [ 31 ] In 2008, Rockwell Collins acquired Athena Technologies for US$107 million (equivalent to $156.27 million in 2024). [ 32 ] In August 2013, Rockwell Collins announced the agreement to purchase ARINC. On December 23, 2013, Rockwell Collins announced it had completed its acquisition of ARINC for US$1.4 billion (equivalent to $1.89 billion in 2024). [ 33 ] The purchase of ARINC allowed Rockwell Collins to shift its balance toward commercial aviation. In April 2017, Rockwell Collins entered the aircraft cabin interiors market through the acquisition of B/E Aerospace for US$8.3 billion (equivalent to $10.65 billion in 2024). [ 34 ] Based in Wellington, Florida, B/E products included seating, food and beverage preparation and storage equipment, lighting and oxygen systems, and modular galley and lavatory systems for commercial airliners and business jets. B/E benefited from rival Zodiac Aerospace's delivery troubles, and its $12 billion installed base provided retrofit opportunities. B/E shareholders received 20% of the new Rockwell, which then had $8.1 billion in revenues and $1.9 billion in pretax earnings with nearly 30,000 employees.
[ 35 ] Rockwell Collins filed for regulatory approval for its intended acquisition of B/E Aerospace, before the Philippine Competition Commission , since the latter has a branch in the Philippines operating a manufacturing plant in Tanauan, Batangas. [ 36 ] As a result of the acquisition, a newly created direct or indirect subsidiary of Rockwell, Quarterback Merger Sub Corp., merged with and into B/E Aerospace, with the latter surviving the merger as a direct or indirect subsidiary of Rockwell Collins. [ 36 ] Rockwell Collins has five main divisions: The CS division services the commercial airline industry and business aircraft, providing navigation, communication, synthetic vision , other cockpit products such as autoland autopilots , and cabin products such as in-flight entertainment. The GS division services primarily the US government and military, but also provides some products and services to foreign governments with close ties to the United States. Notable government-related projects that Rockwell Collins has involvement with are Common Avionics Architecture System (CAAS), Joint Tactical Radio System (JTRS), Tactical Targeting Network Technology (TTNT), Defense Advanced GPS Receiver (DAGR), and Future Combat Systems . The I&SS division is an amalgamation of International Business organization, whose responsibility is sales, engineering, and human resources of personnel outside of North America, and Service Solutions, which provides support services such as customer support, simulation and training, and technical publications. I&SS provides a common service to both CS and GS divisions, and its formation was announced on the Rockwell Collins press release web page on February 19, 2010. [ 37 ] The Donald R. Beall Advanced Technology Center is a research and development center within Rockwell Collins that focuses on creating, identifying, and maturing technologies targeted at driving business growth. 
It maintains a portfolio that balances short-term deliverables focused on core and adjacent markets, with technologies for long-term growth. It has three departments: Advanced Radio Systems, Communications and Navigation Systems, and Embedded Information Systems. [ citation needed ] As with several other brands of vintage radio equipment, an active community of Collins radio enthusiasts existed, with clubs, web sites, and on-line discussions dedicated to restoring and operating the equipment. The Collins Collectors Association [ 38 ] and the Collins Radio Association [ 39 ] were two examples of such organizations. Groups of Collins users have also organized meetings, gatherings at hamfests , and regularly scheduled on-air discussions called nets . In December 2019, CNBC listed Rockwell Collins along with 91 additional Fortune 500 companies that "paid an effective federal tax rate of 0% or less" as a result of the Tax Cuts and Jobs Act of 2017 . [ 40 ]
https://en.wikipedia.org/wiki/Rockwell_Collins
The Rocky Mountain Arsenal was a United States chemical weapons manufacturing center located in the Denver Metropolitan Area in Commerce City, Colorado. The site was completed in December 1942, [ 1 ] was operated by the United States Army throughout the later 20th century, and was controversial among local residents until its closure in 1992. [ citation needed ] Much of the site is now protected as the Rocky Mountain Arsenal National Wildlife Refuge. After the attack on Pearl Harbor and the United States' entry into World War II, the U.S. Army began looking for land on which to create a chemical manufacturing center. The U.S. Army purchased 20,000 acres (81 km²) just north of Denver, in Commerce City and close to the Stapleton Airport. The location was ideal not only because of its proximity to the airport, but also because the site's geographic features made it less likely to be attacked. The Rocky Mountain Arsenal manufactured chemical weapons including mustard gas, napalm, white phosphorus, lewisite, chlorine gas, and sarin. In the early 1960s, the U.S. Army began to lease out its facilities to private companies to manufacture pesticides. In the early 1980s the site was selected as a Superfund site and the cleanup process began. In the mid-1980s, wildlife, including endangered species, moved into the space and the land became a protected wildlife refuge. [ 1 ] The environmental movement began in the United States in the 1960s and 1970s. The U.S. Congress responded to the movement in 1980 with the creation of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), most commonly referred to as Superfund. CERCLA imposed a tax on the chemical and petroleum industries. CERCLA also gave the Federal government the authority to respond to the release of life-threatening hazardous materials.
[ 2 ] After 42 years of chemical manufacturing, in 1984, the United States Army began to inspect the level of contamination at Rocky Mountain Arsenal (RMA). The site was placed on the National Priorities List (NPL), a list of the most contaminated areas in the United States. Rocky Mountain Arsenal, among other post-military sites, was a top priority, establishing RMA as a Superfund site. Attention to the site grew when the U.S. Army discovered an endangered species, the bald eagle, living there. [ 3 ] After the bald eagles were captured, tested, and found to be healthy, the National Wildlife Federation worked with policymakers to transition RMA to a wildlife refuge. In 1992, Congress passed the Rocky Mountain Arsenal National Wildlife Refuge Act (RMANWR Act). Under the RMANWR Act, areas within RMA that were still contaminated remained under U.S. Army ownership, while the vast majority of the land deemed clean would be managed by the federal Fish and Wildlife Service (FWS). Tensions arose between the United States Environmental Protection Agency (USEPA), the State of Colorado, the United States Army, and the chemical industries as they partnered to clean up the site and create the RMANWR. This led the State of Colorado to take legal action over who had legal authority over RMA remediation efforts, payment of natural resource damages (NRDs), and reimbursement of costs expended for cleanup activities (response costs). [ 4 ] The Arsenal's location was selected due to its relative distance from the coasts (and thus its presumed unlikelihood of being attacked), a sufficient labor force to work at the site, weather that was conducive to outdoor work, and the appropriate soil needed for the project. It was also helpful that the location was close to Stapleton airfield, a major transportation hub. [ 5 ] In 1942, the US Army acquired 19,915 acres (80.59 km²) of land on which to manufacture weapons in support of World War II military activities at a cost of $62,415,000.
Additionally, some of this land was used for a prisoner of war camp (for German combatants ) and later transferred to the city of Denver as Stapleton Airport expanded. A lateral was built off the High Line Canal to supply water to the Arsenal. Weapons manufactured at RMA included both conventional and chemical munitions, including white phosphorus ( M34 grenade ), napalm , mustard gas , lewisite , and chlorine gas. [ 6 ] [ 7 ] RMA is also one of the few sites that had a stockpile of Sarin gas (aka nerve agent GB), an organophosphorus compound . The manufacturing of these weapons continued until 1969. Rocket fuel to support Air Force operations was also manufactured and stored at RMA. Subsequently, through the 1970s until 1985, RMA was used as a demilitarization site to destroy munitions and chemically related items. Coinciding with these activities, from 1946 to 1982, the Army leased RMA facilities to private industries for the production of pesticides . One of the major lessees, Shell Oil Company , along with Julius Hyman and Company and Colorado Fuel and Iron , had manufacturing and processing capabilities on RMA between 1952 and 1982. The military reserved the right to oust these companies and restart chemical weapon production in the event of a national emergency. RMA contained a deep injection well that was constructed in 1961. [ 8 ] It was drilled to a depth of 12,045 feet (3,671 m). The well was cased and sealed to a depth of 11,975 feet (3,650 m), with the remaining 70 feet (21 m) left as an open hole for the injection of Basin F liquids. For testing purposes, the well was injected with approximately 568,000 US gallons (2150 m³) of city water prior to injecting any waste. The injected fluids had very little potential for reaching the surface or usable groundwater supply since the injection point had 11,900 feet (3,600 m) of rock above it and was sealed at the opening. 
The Army discontinued use of the well in February 1966 because the fluid injection triggered a series of earthquakes in the area. [ 8 ] [ 9 ] The well remained unused until 1985, when the Army permanently sealed it. In 1984, the Army began a systematic investigation of site contamination in accordance with the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), commonly referred to as Superfund. In 1987, the RMA was placed on the National Priorities List (NPL) of Superfund sites. As provided by CERCLA, a Remedial Investigation/Feasibility Study (RI/FS) was conducted to determine the extent of contamination. Since 1985, the mission at RMA has been the remediation of the site. The primary contaminants include organochloride pesticides, organophosphate pesticides, carbamate insecticides, organic solvents and feedstock chemicals used as raw products or intermediates in the manufacturing process (e.g., chlorinated benzenes), heavy metals, chemical warfare material and its related breakdown products, and biological warfare agents such as TX. Additionally, ordnance (including incendiary munitions) was manufactured and tested, and asbestos and polychlorinated biphenyls (PCBs) were used at RMA. Today, it is considered a hazardous waste site according to the Colorado Department of Public Health and Environment. The contamination of the underlying alluvial aquifer occurred due to the discharge of waste into unlined basins. The following data were derived from the United States Nuclear Regulatory Commission. From 1943 to 1956, the US Army and Shell discharged wastes into the unlined basins, resulting in the contamination of the South Platte River outside the Arsenal. Farmers in the vicinity complained about the damage to crops due to the water pumped from the shallow alluvial aquifer. In response, the Army constructed an asphalt-lined impoundment for the disposal of wastes in 1956.
Further, in 1961, the Army constructed a 12,000-foot-deep injection well for the disposal of wastes. This resulted in a series of earthquakes in the Denver area. In 1975, the Colorado Department of Public Health and Environment ordered the Army and Shell to stop the non-permitted discharge of contaminants, to control the contaminated groundwater leaving the site, and to implement a monitoring plan. The Army and Shell took remedial actions to limit the contamination, including the installation of a groundwater barrier system that treated approximately 1 billion gallons of water every year. The deep injection well was closed in 1985 and Basin F was closed in 1988. [ 10 ] According to the Natural Resource Damage Assessment, although the contamination has been reduced by the treatment efforts, the water in and around the arsenal may never be fully clean. A volume of approximately 52,500 acre-feet (65 million cubic metres) of the alluvial aquifer is not usable for human consumption. [ 11 ] The NRDA found several injuries to wildlife. It was estimated by the U.S. Fish and Wildlife Service that at least 20,000 ducks died in a 10-year span during the 1970s. Mallard carcasses were found to have elevated levels of Dieldrin. Many mammals and birds were found dead and may have suffered lower reproduction rates or birth defects. [ 11 ] Because of the Superfund site status and the dramatic cleanups, many residents in neighborhoods surrounding the RMA voiced concern about the ongoing health risks of living in close vicinity to the site. In September 2017, the state of Colorado filed a lawsuit against the United States government for the right to control the contaminated areas of the RMA. [ 12 ] Though the cleanup of the site was considered complete in 2010, soil and groundwater monitoring occurs every five years to ensure the effects of the clean-up remain.
Restrictions on well water use, residential development, consumption of fish and game from the arsenal, and agricultural use of the arsenal will remain in place indefinitely, pending further scientific research at the site. Many of the surrounding neighborhoods have been provided with potable tap water from other areas of Adams County because of the potential effects of contaminated groundwater from wells. Trace amounts of the chemical 1,4-dioxane have been found in some samples of drinking water. The EPA has not established a standard for this chemical, but the state of Colorado has a standard treatment protocol for it. [ 13 ] As part of the clean-up of the RMA, much of the soil, to a depth of up to 10 feet, was removed from the site. This soil is contained in hazardous waste landfills. Contaminated areas of soil remain in the Rocky Mountain Arsenal, but are contained in basins and containment structures. [ 11 ] During the cleanup of the RMA, concern was raised about air pollution from the hazardous materials. The Colorado Department of Public Health and Environment established monitoring systems at various locations across the RMA. Throughout the decades of cleanup, the air monitors revealed no safety hazard to public health, as no arsenal chemicals had been released into the air. Longstanding agricultural and health concerns related to the Rocky Mountain Arsenal have resulted in a complex history of political and legal battles. Heavy volatile contaminants related to Basin F raised public concern about the site and about the clean-up process itself, and a medical monitoring program (MMP) was put in place as part of the Record of Decision (ROD) between the U.S. Army, the U.S. Environmental Protection Agency, and the Colorado Department of Health and Environment in 1996. 
[ 3 ] One of the goals of the MMP was to enhance community assurance that the clean-up was effective, and it included air quality monitoring, cancer surveillance, and birth defects surveillance. [ 11 ] Air quality monitoring of the Arsenal began concurrently with the decontamination process in 1997, and surveillance continued until July 2009. The surveillance for birth defects utilized passive observational data from an existing birth defects registry covering March 1989 – March 2009. [ 14 ] The following data were derived from the Rocky Mountain Arsenal Medical Monitoring Program Surveillance for Birth Defects Compendium, prepared by the Colorado Department of Public Health and Environment and published in February 2010. In this study, baseline birth defect rates were estimated from the period 1989–1997, the point at which the clean-up began, and inclusion criteria included the mother's address at the time of birth being within the geographical study area. Other demographics of the mothers were gathered as well. Birth defects included in the analysis were "total congenital anomalies, major congenital anomalies, heart defects, muscle and skeletal defects, and kidney and bladder defects," and these categories were inconsistent in reporting accuracy. Statistically significant findings (p<0.01) of this study included the following demographic differences in the mothers: a median age of 24, compared to 27 in Colorado as a whole; a higher percentage of mothers who were white/Hispanic and black; a mean education level of 11.8 years, compared to 13.1 years in Colorado as a whole; fewer mothers who were married; and fewer prenatal visits on average. These potential confounders are not clearly addressed in the report and may complicate the analysis, as well as raise concern for disparities in exposure risk that depend upon demographic factors. 
Baseline rates of congenital anomalies in the study area compared to Colorado as a whole did not show significant differences between populations. No significant increase was observed in congenital anomalies during the clean-up period compared to pre-clean-up, although there are no baseline data prior to the initial contamination events, because data were not yet being collected and the population was very different at that time. In summary, there is no current evidence of health effects. The Colorado Department of Public Health and Environment found no increased risk of birth defects in infants. A separate study of cancer incidence by the Colorado Department of Health did not find convincing evidence of increased cancer risk in people living in residential areas surrounding the arsenal, [ 15 ] although the study was made more difficult by the large demographic changes in the area and was also confounded by smoking and obesity rates. Additionally, studies performed at Colorado State University found no increased risk of arsenic or mercury exposure, or of neurotoxicity, in communities within 15 miles of the RMA. [ 16 ] [ 17 ] Many projects have attempted to clean contaminated groundwater at the Arsenal. For example, DIMP (diisopropyl methyl phosphonate) was one of the main contaminants in the area. One monitoring well has demonstrated incremental improvement over time, measuring 640 parts per billion (ppb) in 1987 and 55 ppb in 1989, while a different off-post monitoring well measured 138 ppb in 1985, 105 ppb in 1987, 14 ppb in 1988, and 6.7 ppb in 1989. [ 18 ] While it is difficult to capture the societal cost of cleaning up the site, the actions dealing with groundwater contamination listed by Mears and Heise include: Direct economic totals add up to approximately $111 million, and this estimate does not include operation and maintenance costs. 
In addition, there were actions completed by Future Farmers of America (FFA) between 1991 and 1993 that cost approximately $151.2 million. A more recent article in 2004 by Pimentel [ 19 ] estimated the cost of removing pesticides from the groundwater and soil at the Rocky Mountain Arsenal at approximately $2 billion. It also noted that if all groundwater were to be cleaned to a standard fit for human consumption, the cost would be $500 million annually. Estimating the exact direct and indirect impact of the contamination is very challenging, as the cleaning and monitoring costs are complex. Further, contamination has damaged the surrounding rural areas, resulting in livestock and crop losses. In addition, contamination negatively affects public health and nature (honeybee poisonings, pesticide resistance in pests, destruction of natural predators, wild birds, and microbes). Many studies have tried to estimate the total costs of pesticide contamination in the U.S. as well as in other countries; indirect costs are difficult to estimate, but are likely several times the total direct environmental and social costs. [ 20 ] In the case of the Rocky Mountain Arsenal, the total indirect cost was not estimated at all. In 1986, it was discovered that the absence of human activity had made the area an involuntary park when a winter communal roost of bald eagles , then an endangered species , was discovered on site. The U.S. Fish and Wildlife Service inventoried more than 330 species of wildlife that inhabit the Arsenal, including deer , coyotes , white pelicans , and owls . The Rocky Mountain Arsenal National Wildlife Refuge Act was passed in October 1992 and signed by President George H. W. Bush . It stipulates that the majority of the site will become a National Wildlife Refuge under the jurisdiction of the Fish and Wildlife Service when the environmental restoration is completed. 
The act also provides that, to the extent possible, parts of the arsenal are to be managed as a refuge in the interim. Finally, the act provides for the transfer of some arsenal land for road expansion around the perimeter of the arsenal and for 915 acres (3.70 km 2 ) to be sold for development and annexation by Commerce City. Since 1995, the buildings have housed the National Eagle Repository , an office of the Fish and Wildlife Service that receives the bodies of all dead golden and bald eagles in the nation and provides feathers and other parts to Native Americans for cultural uses. In September 2010, the cleanup was considered complete, and the remaining portions of land were transferred to the U.S. Fish and Wildlife Service, bringing the total to 15,000 acres (61 km 2 ). Two sites were retained by the Army: the South Plants location, due to historical use [ clarification needed ] , and the North Plant location, which is now a landfill containing the remains of various buildings used in the plants. On May 21, 2011, the official visitor center for the refuge was opened, with an exhibit about the site's history ranging from the homesteading era to its current status. Consistent with the outline of the June 1996 USFWS Comprehensive Management Plan, RMA will be available for public use through both community outreach and educational programs (as provided by the Visitor Access Plan and the USFWS). This public availability will be implemented while simultaneously supporting the remediation effort and the USFWS activities. In April 2007, Dick's Sporting Goods Park , a soccer-specific stadium , was opened on part of the former Rocky Mountain Arsenal land that was transferred to Commerce City. The venue hosts the Colorado Rapids of Major League Soccer . A small herd of wild bison was introduced to the refuge in March 2007 as part of the USFWS Bison Project. The animals were transferred from the National Bison Range in Montana .
https://en.wikipedia.org/wiki/Rocky_Mountain_Arsenal
A rocky shore is an intertidal area of seacoasts where solid rock predominates. Rocky shores are biologically rich environments, and are a useful "natural laboratory" for studying intertidal ecology and other biological processes. Due to their high accessibility, they have been well studied for a long time and their species are well known. [ 1 ] [ 2 ] Many factors favour the survival of life on rocky shores. Temperate coastal waters are mixed by waves and convection, maintaining adequate availability of nutrients. Also, the sea brings plankton and broken organic matter in with each tide. The high availability of light (due to low depths) and nutrient levels means that primary productivity of seaweeds and algae can be very high. Human actions can also benefit rocky shores due to nutrient runoff . Despite these favourable factors, there are also a number of challenges to marine organisms associated with the rocky shore ecosystem . Generally, the distribution of benthic species is limited by salinity , wave exposure, temperature, desiccation and general stress. The constant threat of desiccation during exposure at low tide can result in dehydration. Hence, many species have developed adaptations to prevent this drying out, such as the production of mucous layers and shells. Many species use shells and holdfasts to provide stability against strong wave actions. There are also a variety of other challenges such as temperature fluctuations due to tidal flow (resulting in exposure), changes in salinity and various ranges of illumination. Other threats include predation from birds and other marine organisms, as well as the effects of pollution . The Ballantine scale is a biologically defined scale for measuring the degree of exposure level of wave action on a rocky shore. Devised in 1961 by W. J. Ballantine, then at the zoology department of Queen Mary University of London , London , U.K. 
, the scale is based on the observation that, where shoreline species are concerned, "Different species growing on rocky shores require different degrees of protection from certain aspects of the physical environment, of which wave action is often the most important." The species present in the littoral zone therefore indicate the degree of the shore's exposure. [ 3 ] The scale runs from (1), an "extremely exposed" shore, to (8), an "extremely sheltered" shore. Tidal movements of water create zonation patterns along rocky shores from high to low tide. [ 4 ] The area above the high-tide mark is the supralittoral zone, which is virtually a terrestrial environment. The area around the high-tide mark is known as the intertidal fringe. Between the high and low-tide marks is the intertidal or littoral zone. Below the low-tide mark is the sublittoral or subtidal zone. The presence and abundance of different animals and algae vary in different zones along the rocky shore, owing to differing adaptations to the varying levels of exposure to sun and desiccation. Rocky shores are exposed to many forms of pollution, in particular pollution related to oil spills . Prominent spills include the Torrey Canyon spill , [ 5 ] the Amoco Cadiz spill off the Brittany coast of France [ 6 ] and the Exxon Valdez spill in Prince William Sound, Alaska, US. Garbage such as plastics and metals left behind by people is also a problem along many rocky coastlines that attract tourists.
https://en.wikipedia.org/wiki/Rocky_shore
Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States period to the Ming dynasty , before the counting rods were increasingly replaced by the more convenient and faster abacus . Rod calculus played a key role in the development of Chinese mathematics to its height in the Song dynasty and Yuan dynasty , culminating in the invention of polynomial equations of up to four unknowns in the work of Zhu Shijie . The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12–15 cm in length and 2–4 mm in diameter, and sometimes of animal bone, or of ivory and jade (for well-heeled merchants). A counting board could be a tabletop, a wooden board with or without a grid, the floor, or sand. In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods, stored in a silk pouch, from a tomb in Qian Yang county in Shanxi province, dating back to the first half of the Han dynasty (206 BC – 8 AD). [ citation needed ] In 1975 a bundle of bamboo counting rods was unearthed. [ citation needed ] The use of counting rods for rod calculus flourished in the Warring States period, although no archaeological artefacts have been found earlier than the Western Han dynasty (the first half of the Han dynasty ); however, archaeologists did unearth "software" artefacts of rod calculus dating back to the Warring States . Since the rod calculus software must have gone along with rod calculus hardware, there is little doubt that rod calculus was already flourishing during the Warring States period, more than 2,200 years ago. The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table , which was learned by heart by pupils, merchants, government officials and mathematicians alike. 
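The 45 phrases of the nine-nine table can be enumerated directly; the following is a minimal modern sketch (not part of the original text). The ancient table ran from "nine nines are eighty-one" downward, listing each product of a pair of digits only once.

```python
# Each pair (a, b) with a >= b appears once, giving 9 + 8 + ... + 1 = 45 phrases.
table = [(a, b, a * b) for a in range(9, 0, -1) for b in range(a, 0, -1)]

print(len(table))    # 45
print(table[0])      # (9, 9, 81) -- "nine nines are eighty-one"
print(table[-1])     # (1, 1, 1)
```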
Rod numerals are the only numeral system that uses different placement combinations of a single type of symbol to convey any number or fraction in the decimal system. For numbers in the units place, every vertical rod represents 1. Two vertical rods represent 2, and so on, up to 5 vertical rods, which represent 5. For numbers between 6 and 9, a biquinary system is used, in which a horizontal bar on top of the vertical bars represents 5. The first row shows the numbers 1 to 9 in rod numerals, and the second row shows the same numbers in horizontal form. For numbers larger than 9, a decimal system is used. Rods placed one place to the left of the units place represent 10 times that number. For the hundreds place, another set of rods is placed to the left, representing 100 times that number, and so on. As shown in the adjacent image, the number 231 is represented in rod numerals in the top row, with one rod in the units place representing 1, three rods in the tens place representing 30, and two rods in the hundreds place representing 200, with a sum of 231. When doing calculation, usually there was no grid on the surface. If the rod numerals two, three, and one are placed consecutively in the vertical form, there is a possibility of the arrangement being mistaken for 51 or 24, as shown in the second and third rows of the adjacent image. To avoid confusion, numbers in consecutive places are placed in alternating vertical and horizontal form, with the units place in vertical form, [ 1 ] as shown in the bottom row on the right. In rod numerals , zeroes are represented by a space, which serves both as a number and a place-holder value. Unlike in Hindu-Arabic numerals , there is no specific symbol to represent zero. Before the introduction of a written zero, in addition to a space to indicate no units, the character in the subsequent unit column would be rotated by 90° to reduce the ambiguity of a single zero. 
[ 2 ] For example, 107 (𝍠 𝍧) and 17 (𝍩𝍧) would be distinguished by rotation, in addition to the space, though multiple zero units could lead to ambiguity, e.g. 1007 (𝍩 𝍧) and 10007 (𝍠 𝍧). In the adjacent image, the number zero is merely represented with a space. Song mathematicians used red to represent positive numbers and black for negative numbers . Another convention was to add a slash through the last place to show that the number is negative. [ 3 ] The Mathematical Treatise of Sunzi used decimal fraction metrology. The unit of length was 1 chi , with 1 chi = 10 cun , 1 cun = 10 fen , 1 fen = 10 li , 1 li = 10 hao , 1 hao = 10 shi , 1 shi = 10 hu . The quantity 1 chi 2 cun 3 fen 4 li 5 hao 6 shi 7 hu is laid out on the counting board as shown in the adjacent image, where the marked place is the unit of measurement, chi . Southern Song dynasty mathematician Qin Jiushao extended the use of decimal fractions beyond metrology. In his book Mathematical Treatise in Nine Sections , he formally expressed 1.1446154 days, marking the unit place with the word "日" (day) underneath it. [ 4 ] Rod calculus works on the principle of addition. Unlike Arabic numerals , digits represented by counting rods have additive properties. The process of addition involves mechanically moving the rods without the need to memorise an addition table . This is the biggest difference from Arabic numerals, as one cannot mechanically put 1 and 2 together to form 3, or 2 and 3 together to form 5. The adjacent image presents the steps in adding 3748 to 289: The rods in the augend change throughout the addition, while the rods in the addend at the bottom "disappear". In situations in which no borrowing is needed, one only needs to take the number of rods in the subtrahend from the minuend . The result of the calculation is the difference. The adjacent image shows the steps in subtracting 23 from 54. In situations in which borrowing is needed, such as 4231 − 789, one needs to use a more complicated procedure. The steps for this example are shown on the left. 
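As an illustration, the alternating-orientation numeral scheme and the left-to-right rod addition described above can be written out in Python. This is a modern sketch (the function names are invented for the example); it uses the Unicode Counting Rod Numerals block, where U+1D360–U+1D368 are the vertical (units-place) digits 1–9 and U+1D369–U+1D371 are the horizontal (tens-place) digits 1–9.

```python
# Rendering an integer as rod numerals: units place vertical, tens place
# horizontal, hundreds vertical again, and so on; zero is a blank space.
def to_rod_numerals(n: int) -> str:
    out = []
    for pos, d in enumerate(int(c) for c in reversed(str(n))):
        if d == 0:
            out.append(" ")                       # empty position for zero
        elif pos % 2 == 0:
            out.append(chr(0x1D360 + d - 1))      # vertical form
        else:
            out.append(chr(0x1D369 + d - 1))      # horizontal form
    return "".join(reversed(out))

# Rod-style addition: the addend merges into the augend place by place,
# working from the highest place down, with carries normalised at once.
def rod_add(augend: list[int], addend: list[int]) -> list[int]:
    n = max(len(augend), len(addend)) + 1         # room for a final carry
    a = [0] * (n - len(augend)) + augend          # most significant first
    b = [0] * (n - len(addend)) + addend
    for i in range(n):                            # left to right
        a[i] += b[i]
        j = i
        while a[j] >= 10:                         # push carries leftwards
            a[j] -= 10
            a[j - 1] += 1
            j -= 1
    if a[0] == 0:
        a.pop(0)                                  # drop the unused pad digit
    return a

print(rod_add([3, 7, 4, 8], [2, 8, 9]))           # the 3748 + 289 example
```

Working from the highest place down mirrors the rod procedure, in contrast to the right-to-left carrying of written arithmetic.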
Sunzi Suanjing described in detail the algorithm of multiplication. On the left are the steps to calculate 38 × 76. The animation on the left shows the steps for calculating ⁠ 309 / 7 ⁠ = 44 ⁠ 1 / 7 ⁠ . The Sunzi algorithm for division was transmitted in toto to the Islamic world by al-Khwarizmi, from Indian sources, in 825 AD. Al-Khwarizmi's book was translated into Latin in the 13th century, and the Sunzi division algorithm later evolved into galley division in Europe. The division algorithms in Abu'l-Hasan al-Uqlidisi 's 925 AD book Kitab al-Fusul fi al-Hisab al-Hindi and in the 11th-century Kushyar ibn Labban 's Principles of Hindu Reckoning were identical to Sunzi's division algorithm. If there is a remainder in a place-value decimal rod calculus division, both the remainder and the divisor must be left in place, one on top of the other. In Liu Hui 's notes to the Jiuzhang suanshu (2nd century BCE), the number on top is called "shi" (实), while the one at the bottom is called "fa" (法). In Sunzi Suanjing , the number on top is called "zi" (子) or "fenzi" (lit., son of fraction), and the one on the bottom is called "mu" (母) or "fenmu" (lit., mother of fraction). Fenzi and fenmu are also the modern Chinese names for numerator and denominator , respectively. As shown on the right, 1 is the numerator (remainder) and 7 is the denominator (divisor), forming the fraction ⁠ 1 / 7 ⁠ . The quotient of the division ⁠ 309 / 7 ⁠ is 44 + ⁠ 1 / 7 ⁠ . Liu Hui used many calculations with fractions in Haidao Suanjing . This form of fraction, with the numerator on top and the denominator at the bottom without a horizontal bar in between, was transmitted to the Arab world in an 825 AD book by al-Khwarizmi via India, and was in use by the 10th-century Abu'l-Hasan al-Uqlidisi and in the 15th-century Jamshīd al-Kāshī 's work "Arithmetic Key". Worked examples include ⁠ 1 / 3 ⁠ + ⁠ 2 / 5 ⁠ , ⁠ 8 / 9 ⁠ − ⁠ 1 / 5 ⁠ , and 3 ⁠ 1 / 3 ⁠ × 5 ⁠ 2 / 5 ⁠ . The algorithm for finding the highest common factor of two numbers and the reduction of fractions was laid out in the Jiuzhang suanshu . 
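In modern terms, the quotient-over-remainder layout and the listed fraction examples come out as follows; this is a sketch using Python's `fractions` module, not the rod layout itself.

```python
from fractions import Fraction

# Division leaves the quotient on top and remainder over divisor as a fraction:
quotient, remainder = divmod(309, 7)
result = quotient + Fraction(remainder, 7)
print(quotient, Fraction(remainder, 7))                 # 44 1/7

# The worked fraction examples from the text:
print(Fraction(1, 3) + Fraction(2, 5))                  # 11/15
print(Fraction(8, 9) - Fraction(1, 5))                  # 31/45
print((3 + Fraction(1, 3)) * (5 + Fraction(2, 5)))      # 18
```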
The highest common factor is found by successive division with remainders until the last two remainders are identical. The animation on the right illustrates the algorithm for finding the highest common factor of ⁠ 32,450,625 / 59,056,400 ⁠ and the reduction of the fraction. In this case the hcf is 25. Dividing the numerator and denominator by 25 gives the reduced fraction ⁠ 1,298,025 / 2,362,256 ⁠ . Calendarist and mathematician He Chengtian ( 何承天 ) used a fraction interpolation method, called "harmonisation of the divisor of the day" ( 调日法 ), to obtain a better approximate value than the old one by iteratively adding the numerators and denominators of a "weaker" fraction and a "stronger" fraction. [ 5 ] Zu Chongzhi 's legendary π = ⁠ 355 / 113 ⁠ could be obtained with He Chengtian's method. [ 6 ] Chapter Eight, Rectangular Arrays, of the Jiuzhang suanshu provided an algorithm for solving systems of linear equations by the method of elimination: [ 7 ] Problem 8-1: Suppose we have 3 bundles of top quality cereals, 2 bundles of medium quality cereals, and a bundle of low quality cereal, with a total weight of 39 dou. We also have 2, 3 and 1 bundles of the respective cereals amounting to 34 dou; and we have 1, 2 and 3 bundles of the respective cereals, totaling 26 dou. Find the quantity of top, medium, and low quality cereals. In algebra, this problem can be expressed as a system of three equations in three unknowns. The problem was solved in the Jiuzhang suanshu with counting rods laid out on a counting board in a tabular format similar to a 3×4 matrix. By elimination, the amount of one bundle of low quality cereal = 99 36 = 2 3 4 {\displaystyle ={\frac {99}{36}}=2{\frac {3}{4}}} , from which the amounts of one bundle of top and medium quality cereals can be found easily. The algorithm for the extraction of square roots was described in the Jiuzhang suanshu and, with minor differences in terminology, in Sunzi Suanjing . 
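Both procedures can be mirrored in modern arithmetic; the sketch below (with invented variable names) lets Python's `Fraction` reduce 32,450,625/59,056,400 by the highest common factor, and uses exact Gaussian elimination to reproduce the cereal problem's answer of 2 3/4 dou for the low-quality bundle.

```python
from fractions import Fraction
from math import gcd

# Reduction by the highest common factor, as in the animation:
assert gcd(32450625, 59056400) == 25
print(Fraction(32450625, 59056400))      # 1298025/2362256

# Problem 8-1 as an augmented matrix, solved by exact elimination:
#   3x + 2y + z = 39;  2x + 3y + z = 34;  x + 2y + 3z = 26
m = [[Fraction(v) for v in row]
     for row in ([3, 2, 1, 39], [2, 3, 1, 34], [1, 2, 3, 26])]

for col in range(3):                     # forward elimination
    for row in range(col + 1, 3):
        f = m[row][col] / m[col][col]
        m[row] = [a - f * b for a, b in zip(m[row], m[col])]

for col in range(2, -1, -1):             # back substitution
    m[col][3] /= m[col][col]
    for row in range(col):
        m[row][3] -= m[row][col] * m[col][3]

top, medium, low = (m[i][3] for i in range(3))
print(top, medium, low)                  # 37/4 17/4 11/4 dou
```

Elimination on the augmented matrix is essentially the same column-by-column procedure the counting-board layout carries out.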
The animation shows the rod calculus algorithm for extracting an approximation of the square root, 234567 ≈ 484 311 968 {\displaystyle {\sqrt {234567}}\approx 484{\tfrac {311}{968}}} , from chapter 2, problem 19 of Sunzi Suanjing . Northern Song dynasty mathematician Jia Xian developed an additive-multiplicative algorithm for square root extraction , in which he replaced the traditional "doubling" of "fang fa" by adding the shang digit to the fang fa digit, with the same effect. The Jiuzhang suanshu vol. iv, "shaoguang", provided an algorithm for the extraction of cube roots. 〔一九〕今有積一百八十六萬八百六十七尺。問為立方幾何?答曰:一百二十三尺。 Problem 19: We have a volume of 1860867 cubic chi ; what is the length of a side? Answer: 123 chi . Jia Xian also invented a method similar to a simplified form of the Horner scheme for the extraction of cube roots. The animation at right shows Jia Xian's algorithm for solving problem 19 in Jiuzhang suanshu vol. 4, and he further invented a Horner scheme for solving simple 4th-order equations. Southern Song dynasty mathematician Qin Jiushao improved Jia Xian's Horner method to solve polynomial equations of up to the 10th order; such an equation was arranged bottom-up with counting rods on the counting board in tabular form. Yuan dynasty mathematician Li Zhi developed rod calculus into Tian yuan shu . An example from Li Zhi's Ceyuan haijing vol. II, problem 14, is the equation of one unknown: − x 2 − 680 x + 96000 = 0 {\displaystyle -x^{2}-680x+96000=0} . Mathematician Zhu Shijie further developed rod calculus to include polynomial equations of two to four unknowns. 
For example, for polynomials of three unknowns: Equation 1: − y − z − y 2 x − x + x y z = 0 {\displaystyle -y-z-y^{2}x-x+xyz=0} Equation 2: − y − z + x − x 2 + x z = 0 {\displaystyle -y-z+x-x^{2}+xz=0} Equation 3: y 2 − z 2 + x 2 = 0 {\displaystyle y^{2}-z^{2}+x^{2}=0} After successive elimination of two unknowns, the system of three unknowns is reduced to a polynomial equation in one unknown: x 4 − 6 x 3 + 4 x 2 + 6 x − 5 = 0 {\displaystyle x^{4}-6x^{3}+4x^{2}+6x-5=0} which is solved to give x = 5. This sets aside the three other roots, two of which are repeated (x = 1 is a double root, and x = −1 is the third).
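The worked root extractions and the reduced quartic above can be checked with integer arithmetic. The sketch below is a modern paraphrase, not the rod algorithm itself; Horner evaluation uses the same nested form that underlies Jia Xian's and Qin Jiushao's methods.

```python
from fractions import Fraction
from math import isqrt

# Square root with remainder: the classical rule reports isqrt(n) with
# the remainder over twice the root as the fractional part.
n = 234567
r = isqrt(n)
rem = n - r * r
assert (r, rem) == (484, 311)                # 484 311/968, as in the text
approx = r + Fraction(rem, 2 * r)

# Cube root by binary search, checking problem 19: a cube of volume
# 1860867 cubic chi has side 123 chi.
def icbrt(m: int) -> int:
    lo, hi = 0, m
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mid ** 3 <= m:
            lo = mid
        else:
            hi = mid - 1
    return lo

assert icbrt(1860867) == 123

# Horner evaluation of the reduced quartic x^4 - 6x^3 + 4x^2 + 6x - 5.
def horner(coeffs, x):
    acc = 0
    for c in coeffs:                         # coefficients, highest power first
        acc = acc * x + c
    return acc

quartic = [1, -6, 4, 6, -5]
print([horner(quartic, x) for x in (5, 1, -1)])   # [0, 0, 0]: all three are roots
```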
https://en.wikipedia.org/wiki/Rod_calculus
In Greek mythology , the Rod of Asclepius (⚕; / æ s ˈ k l iː p i ə s / , Ancient Greek : Ῥάβδος τοῦ Ἀσκληπιοῦ , Rhábdos toû Asklēpioû , sometimes also spelled Asklepios ), also known as the Staff of Aesculapius and as the asklepian , [ 1 ] is a serpent-entwined rod wielded by the Greek god Asclepius , a deity associated with healing and medicine. In modern times, it is the predominant symbol for medicine and health care, although it is sometimes confused with the similar caduceus , which has two snakes and a pair of wings. [ 1 ] The Rod of Asclepius takes its name from the Greek god Asclepius , a deity associated with healing and medicinal arts in ancient Greek religion and mythology . Asclepius' attributes, the snake and the staff, sometimes depicted separately in antiquity, are combined in this symbol. [ 2 ] [ full citation needed ] The most famous temple of Asclepius was at Epidaurus in north-eastern Peloponnese . [ 3 ] Another famous healing temple (or asclepeion ) was located on the island of Kos , where Hippocrates , the legendary "father of medicine", may have begun his career. Other asclepieia were situated in Trikala , Gortys (Arcadia) , and Pergamum in Asia . In honour of Asclepius, a particular type of non-venomous rat snake was often used in healing rituals, and these snakes – the Aesculapian snakes – crawled around freely on the floor in dormitories where the sick and injured slept. These snakes were introduced at the founding of each new temple of Asclepius throughout the classical world. From about 300 BCE onwards, the cult of Asclepius grew very popular and pilgrims flocked to his healing temples (Asclepieia) to be cured of their ills. Ritual purification would be followed by offerings or sacrifices to the god (according to means), and the supplicant would then spend the night in the holiest part of the sanctuary – the abaton (or adyton). 
Any dreams or visions would be reported to a priest who would prescribe the appropriate therapy by a process of interpretation. [ 4 ] Some healing temples also used sacred dogs to lick the wounds of sick petitioners. [ 5 ] [ 6 ] The original Hippocratic Oath began with the invocation "I swear by Apollo the Healer and by Asclepius and by Hygieia and Panacea and by all the gods ..." [ 5 ] The serpent and the staff appear to have been separate symbols that were combined at some point in the development of the Asclepian cult. [ 7 ] The significance of the serpent has been interpreted in many ways; sometimes the shedding of skin and renewal is emphasized as symbolizing rejuvenation, [ 8 ] [ a ] while other assessments center on the serpent as a symbol that unites and expresses the dual nature of the work of the Apothecary Physician, who deals with life and death, sickness and health. [ 10 ] The ambiguity of the serpent as a symbol, and the contradictions it is thought to represent, reflect the ambiguity of the use of drugs, [ 8 ] which can help or harm, as reflected in the meaning of the term pharmakon , which meant "drug", "medicine", and "poison" in ancient Greek. [ 11 ] However the word may become less ambiguous when "medicine" is understood as something that heals the one taking it because it poisons that which afflicts it, meaning medicine is designed to kill or drive away something and any healing happens as a result of that thing being gone, not as a direct effect of medicine. Products deriving from the bodies of snakes were known to have medicinal properties in ancient times, and in ancient Greece, at least some were aware that snake venom that might be fatal if it entered the bloodstream could often be imbibed. Snake venom appears to have been prescribed in some cases as a form of therapy. [ 12 ] The staff has also been variously interpreted. 
One view is that it, like the serpent, "conveyed notions of resurrection and healing", while another (not necessarily incompatible) is that the staff was a walking stick associated with itinerant physicians. [ 13 ] Cornutus , a Greek philosopher probably active in the first century CE, in the Theologiae Graecae Compendium (Ch. 33) offers a view of the significance of both snake and staff: Asclepius derived his name from healing soothingly and from deferring the withering that comes with death. For this reason, therefore, they give him a serpent as an attribute, indicating that those who avail themselves of medical science undergo a process similar to the serpent in that they, as it were, grow young again after illnesses and slough off old age; also because the serpent is a sign of attention, much of which is required in medical treatments. The staff also seems to be a symbol of some similar thing. For by means of this it is set before our minds that unless we are supported by such inventions as these, in so far as falling continually into sickness is concerned, stumbling along we would fall even sooner than necessary. [ 9 ] : 13 In any case, the two symbols certainly merged in antiquity as representations of the snake coiled about the staff are common. [ 6 ] It is relatively common, especially in the United States, to find the caduceus, with its two snakes and wings, (mis)used as a symbol of medicine instead of the Rod of Asclepius, with only a single snake. This usage was popularized by the adoption of the caduceus as its insignia by the U.S. Army Medical Corps in 1902 at the insistence of a single officer (though there are conflicting claims as to whether this was Capt. Frederick P. Reynolds or Col. John R. van Hoff). [ 14 ] [ 15 ] The Rod of Asclepius is the dominant symbol for professional healthcare associations in the United States. One survey found that 62% of professional healthcare associations used the rod of Asclepius as their symbol. 
[ 16 ] The same survey found that 76% of commercial healthcare organizations use the caduceus. The author of the study suggests that professional associations are more likely to have a historical understanding of the two symbols, whereas commercial organizations are more likely to be concerned with the visual impact a symbol will have on its sales. [ 16 ] The long-standing historical association of the caduceus with commerce has engendered significant criticism of its use in medicine. Medical professionals argue that the Rod of Asclepius better represents the field of medicine. [ 17 ] Writing in the journal Scientific Monthly , Stuart L. Tyson said of the Staff of Hermes (the caduceus): As god of the high-road and the market-place Hermes was perhaps above all else the patron of commerce and the fat purse: as a corollary, he was the special protector of the traveling salesman. As spokesman for the gods, he not only brought peace on earth (occasionally even the peace of death), but his silver-tongued eloquence could always make the worse appear the better cause. [ 18 ] From this latter point of view, would not his symbol be suitable for certain Congressmen, all medical quacks, book agents and purveyors of vacuum cleaners, rather than for the straight-thinking, straight-speaking therapeutist? As conductor of the dead to their subterranean abode, his emblem would seem more appropriate on a hearse than on a physician's car. A number of organizations and services use the rod of Asclepius as their logo, or part of their logo. These include: In Russia, the emblem of Main Directorate for Drugs Control features a variation with a sword and a snake on the shield. A symbol for the rod of Asclepius has a code point ( U+2695 ⚕ STAFF OF AESCULAPIUS ) in the Miscellaneous Symbols table of the Unicode Standard: the spelling is theirs. The rod of Asclepius has been likened to the Old Testament account of Moses's brazen serpent, a sculpture depicting a snake arranged on a rod. 
According to the Bible story ( Numbers 21:4–9 ), the rod had the divine power to protect the Israelites from the bites of venomous snakes if they looked upon it. [ 20 ]
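The Unicode code point mentioned above is easy to check programmatically; the following minimal Python snippet (illustrative only) builds the symbol from its scalar value:

```python
# U+2695 STAFF OF AESCULAPIUS, from the Miscellaneous Symbols block
staff = "\u2695"
print(staff)                  # prints the staff-of-Aesculapius glyph
print(f"U+{ord(staff):04X}")  # U+2695
```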
https://en.wikipedia.org/wiki/Rod_of_Asclepius
NASA 's Rodent Research Hardware System [ 1 ] provides a research platform aboard the International Space Station for long-duration experiments on rodents in space. Such experiments examine how microgravity affects the rodents, providing information relevant to human spaceflight , discoveries in basic biology, and knowledge that can help treat human disease on Earth. The system was based on the recommendations of the National Academies of Sciences, Engineering, and Medicine report Recapturing a Future for Space Exploration: Life and Physical Sciences Research for a New Era (2011). The report recommended that NASA establish a rodent research facility aboard the International Space Station, designated as a national laboratory, “as soon as possible” to enable high-priority, long-duration rodent studies. The goal was to conduct studies of durations up to 6 months. As mice and rats have life spans of at most 5 years, “studies on these rodents in space have the potential to extrapolate important implications for humans living in space well beyond six months." [ 1 ] [ 2 ] The Rodent Research Hardware System was developed by scientists and engineers at NASA's Ames Research Center in Moffett Field, California. [ 3 ] In the past, short-term rodent experiments were transported to space on various vehicles, including the Space Shuttle . This is the first "permanent" laboratory for rodent research. The system was developed based on what was learned from the Animal Enclosure Module [ 4 ] [ 5 ] that flew aboard 27 Space Shuttle missions between 1983 and 2011. The first Rodent Research Hardware System was delivered to the ISS by SpaceX CRS-4 . [ 6 ] [ 7 ] The system has four major components. The Transporter is used to safely house the rodents while they are transported from Earth to the space station; it is also referred to as the Animal Enclosure Module-Transporter (AEM-T).
[ 8 ] As the trip from Earth can take up to 10 days, an Environmental Control and Life Support System (ECLSS) is required; this is provided by the Animal Enclosure Module-ECLSS (AEM-E). [ 9 ] The Animal Access Unit provides containment while rodents are transferred between the Transporter and the Habitat, and the Habitat provides long-term housing for rodents aboard the station. The Habitat component operates in an EXPRESS Rack [ 10 ] facility aboard the station. Crew members use the access module to examine the rodents closely during the study and to transfer them between habitats as needed. Each habitat module provides as many as 10 mice or six rats with all of the basics they need to live comfortably aboard the station, including water, food, lighting and fresh air. Rodents can easily move around the living space by grasping grids that line the floor and walls. The modules include data downlink capability that enables monitoring of environmental conditions such as temperature. A visible light and infrared video system allows the crew in space and scientists and veterinarians on the ground to monitor behavior and overall health of the rodents on a daily basis. [ 1 ] [ 7 ] Delivered on 21 September 2014 to the ISS by SpaceX CRS-4 . The mission validated the operational capabilities of the hardware to support rodent research and provided valuable information applicable to future long-term space missions. Rodent Research-1 was a joint operation between NASA and CASIS . The experiments involved 20 mice; 10 NASA mice and 10 CASIS mice. This was the first time rodents were transported to the ISS aboard an uncrewed commercial vehicle. Lasting 37 days, Rodent Research-1 was the longest-duration spaceflight rodent study conducted in a NASA facility to date. [ 6 ] [ 3 ] [ 11 ] [ 12 ] [ 13 ] [ 7 ] [ 14 ] The Bone Densitometer was also delivered on this mission to be used in later missions. [ 15 ] [ 16 ] Delivered on 14 April 2015 to the ISS by SpaceX CRS-6 .
The research was sponsored by the Center for the Advancement of Science in Space (CASIS) and the Novartis Institute for Biomedical Research. The primary objective of the research was to monitor the effects of the space environment on the musculoskeletal and neurological systems of mice as model organisms of human health and disease. In addition to the primary research focus, other organ systems, including whole blood, brain, heart, lungs, kidney/adrenal glands, liver, spleen, and small intestines, were also studied for molecular and morphological changes as a function of duration of spaceflight exposure. The study included 40 mice, 20 that were flown to the ISS and 20 as controls that remained on Earth. The study lasted 37 days. [ 17 ] [ 18 ] The Bone Densitometer Validation experiment was used in support of RR-2. [ 15 ] [ 16 ] Delivered on 8 April 2016 to the ISS by SpaceX CRS-8 . The research was sponsored by the International Space Station U.S. National Laboratory in partnership with Eli Lilly and Company . The primary objective was to test a countermeasure against muscle atrophy. The study assessed myostatin inhibition to prevent skeletal muscle atrophy and weakness in mice. Twenty mice were flown for this experiment and the study lasted 33 days. [ 19 ] [ 20 ] [ 21 ] As part of the study, astronauts successfully completed a functional assessment of grip strength in mice on the orbiting laboratory. This was the first time a grip strength meter had been used for rodent research on orbit, and the data gathered will be used to assess the efficacy of the anti-myostatin treatments in preventing muscle loss in space. [ 22 ] [ 14 ] [ 23 ] Delivered on 19 February 2017 to the ISS by SpaceX CRS-10 . The research was sponsored by the United States Department of Defense (DoD) Space Test Program and the Center for the Advancement of Science in Space (CASIS), manager of the ISS National Laboratory.
The primary objective of the study was to better understand bone healing and bone tissue regeneration and to study the impacts of microgravity on these processes. The study also intended to gauge certain agents capable of inducing bone healing and regeneration in spaceflight. The study lasted 28 days. [ 24 ] [ 25 ] NASA studies in space involving mice require housing mice at densities higher than recommended in the Guide for the Care and Use of Laboratory Animals. [ 26 ] For this reason, all previous NASA missions in which mice were co-housed involved female mice. For this spaceflight study examining bone healing, however, male mice were required for optimal experimentation. To ensure valid results from this first NASA study involving male mice, an additional study on the housing density was done. [ 27 ] The study included 80 mice, 40 that were flown to the ISS and 40 as controls that remained on Earth. Some of the results of this study have been published in the journal Life Sciences in Space Research , focusing on the impact of launch into space on bone fracture healing. [ 28 ] [ 29 ] [ 30 ] [ 31 ] Delivered on 3 June 2017 to the ISS by SpaceX CRS-11 . The research was sponsored by the Center for the Advancement of Science in Space (CASIS) in partnership with the University of California at Los Angeles. The primary objective of the study was to evaluate a new strategy to mitigate one of the negative effects of living in space (bone degradation). All the mice were periodically injected with either a control treatment or an experimental treatment containing NELL1 , a protein that when expressed can help regulate bone remodeling. The study is based on research on NELL1 done by a group led by Dr. Chia Soo, a UCLA professor of plastic and reconstructive surgery and orthopedic surgery . [ 32 ] The experiments involved 40 mice that were flown to the ISS. On 3 July 2017, twenty of the mice were returned to Earth alive.
This was the first time the Transporter unit was used to carry mice from the ISS back to Earth alive. The entire study lasted 30 days. [ 33 ] [ 34 ] [ 35 ] [ 36 ] Delivered on 14 August 2017 to the ISS by SpaceX CRS-12 . The research was sponsored by the National Aeronautics and Space Administration's Space Life and Physical Sciences program. This was the first Rodent Research mission dedicated to NASA-sponsored science experiments; previous missions on the ISS involved commercial and other government agency experiments selected by the Center for Advancement of Science in Space ( CASIS ). [ 37 ] The mission consisted of three separate experiments led by principal investigators Michael Delp, Xiao Wen Mao, and Jeffrey Willey. Delp's investigation was designed to study the effects of long-duration spaceflight on fluid shifts and increased fluid pressures in the head, Mao's was to examine the impact of spaceflight on the vessels that supply blood to the eyes, and Willey's was designed to study the extent of knee and hip joint degradation caused by prolonged exposure to weightlessness. The flight lasted 33 days. [ 38 ] [ 39 ] Delivered on 15 December 2017 to the ISS by SpaceX CRS-13 . The research was sponsored by the Center for the Advancement of Science in Space (CASIS) in partnership with Novartis and NanoMedical Systems. The primary objective of the study was to evaluate a novel therapeutic drug delivery chip in microgravity . The nanochannel drug delivery chip delivered the drug formoterol , used in the management of asthma and other medical conditions, to achieve a constant and reliable dosage. [ 40 ] The experiments involved 40 mice that were flown to the ISS. On 13 January 2018, twenty of the mice were returned to Earth alive. The remaining 20 mice were studied for an additional 30 days. The study lasted 60 days. [ 41 ] [ 42 ] [ 43 ] Delivered on 29 June 2018 to the ISS by SpaceX CRS-15 .
The research was the second mission sponsored by the National Aeronautics and Space Administration's Space Life and Physical Sciences program. The primary objective was to study the impact of the space environment on the gut microbiota of mice. The importance of this study is that disruption of the normal microbiota communities in the digestive tract has been linked to multiple health problems, including intestinal, immune, mental, and metabolic disorders. The experiments involved 20 mice that were flown to the ISS. On 3 August 2018, ten of the mice were returned to Earth alive. The entire study lasted 77 days. [ 44 ] [ 45 ] [ 46 ] [ 47 ] Delivered on 8 December 2018 to the ISS by SpaceX CRS-16 . Strangely, it did not appear on the list of science payloads for the mission. [ 48 ] The experiment was blamed for delaying the launch after mold was discovered on the food for the mice. [ 49 ] The research is sponsored by the National Laboratory in partnership with the Center for the Advancement of Science in Space (CASIS) and Taconic Biosciences . The primary objective is to study the physiology of aging and the effect of age on disease progression using groups of young and old mice. The study will consist of 2 groups of 20 mice each. Half of each group will be 10–16 weeks old (the young group), and the other half will be 30–52 weeks old (the old group). Half of each group will be returned to Earth alive after about 30 days. The remaining mice will be euthanized and cryogenically preserved for study back on Earth. [ 50 ] This has also been designated as Rodent Research Reference Mission-1 (RRR-1). For this mission the samples gathered will be made available to other researchers through proposals submitted to CASIS. [ 51 ] This mission is scheduled to fly to the ISS on SpaceX CRS-17 . [ 48 ] The research is sponsored by the NASA Research Office - Space Life and Physical Sciences.
The primary objective of the study is to examine the CDKN1a/ p21 pathway and its role in arresting bone regeneration in microgravity . The study consisted of 20 mice, 10 of which are transgenic CDKN1a/ p21 -null mice. The study is expected to last up to 35 days. [ 52 ] This mission is scheduled to fly to the ISS on SpaceX CRS-17 . [ 48 ] The research was sponsored by the NASA Research Office - Space Life and Physical Sciences. The primary objective of the study is to examine how microRNA relates to vascular health in microgravity . The study consisted of 20 mice to be flown to the ISS and 20 mice that remained on the ground as controls. After approximately 30 days, the 20 mice on the ISS will be returned alive. [ 53 ]
https://en.wikipedia.org/wiki/Rodent_Research_Hardware_System
Rodenticides are chemicals made and sold for the purpose of killing rodents . While commonly referred to as " rat poison ", rodenticides are also used to kill mice , woodchucks , chipmunks , porcupines , nutria , beavers , [ 1 ] and voles . [ 2 ] Some rodenticides are lethal after one exposure while others require more than one. Rodents are disinclined to gorge on an unknown food (perhaps reflecting an adaptation to their inability to vomit ), [ 3 ] preferring to sample, wait and observe whether it makes them or other rats sick. [ 4 ] [ 5 ] This phenomenon of poison shyness is the rationale for poisons that kill only after multiple doses. Besides being directly toxic to the mammals that ingest them, including dogs, cats, and humans, many rodenticides present a secondary poisoning risk to animals that hunt or scavenge the carcasses of poisoned rats. [ 6 ] Anticoagulants are defined as chronic (death occurs one to two weeks after ingestion of the lethal dose, rarely sooner), single-dose (second generation) or multiple-dose (first generation) rodenticides, acting by effective blocking of the vitamin-K cycle , resulting in inability to produce essential blood-clotting factors—mainly coagulation factors II ( prothrombin ) and VII ( proconvertin ). [ 1 ] [ 7 ] In addition to this specific metabolic disruption, massive toxic doses of 4-hydroxycoumarin , 4-thiochromenone and 1,3-indandione anticoagulants cause damage to tiny blood vessels ( capillaries ), increasing their permeability and causing internal bleeding. These effects are gradual, developing over several days. In the final phase of the intoxication, the exhausted rodent collapses due to hemorrhagic shock or severe anemia and dies. The question of whether the use of these rodenticides can be considered humane has been raised. [ 8 ] The main benefit of anticoagulants over other poisons is that the time taken for the poison to induce death means that the rats do not associate the damage with their feeding habits.
These are harder to group by generation. The U.S. Environmental Protection Agency considers chlorophacinone and diphacinone first-generation agents. [ 12 ] According to some sources, the indandiones are considered second generation. [ 15 ] Phylloquinone has been suggested, and successfully used, as an antidote for pets or humans accidentally or intentionally exposed to anticoagulant poisons. Some of these poisons act by inhibiting liver functions; in advanced stages of poisoning, several blood-clotting factors are absent and the volume of circulating blood is diminished, so that a blood transfusion (optionally with the clotting factors present) can save a person who has been poisoned, an advantage over some older poisons. A unique enzyme produced by the liver enables the body to recycle vitamin K . To produce the blood clotting factors that prevent excessive bleeding, the body needs vitamin K. Anticoagulants hinder this enzyme's ability to function, so internal bleeding can start once the body's reserve of vitamin K is exhausted by sufficient exposure to the anticoagulant. Single-dose anticoagulants are more hazardous because they bind more closely to this enzyme. They may also obstruct several stages of the recycling of vitamin K. Single-dose or second-generation anticoagulants can be stored in the liver because they are not quickly eliminated from the body. [ 17 ] Metal phosphides have been used as a means of killing rodents and are considered single-dose, fast-acting rodenticides (death occurs commonly within 1–3 days after single bait ingestion). A bait consisting of food and a phosphide (usually zinc phosphide ) is left where the rodents can eat it. The acid in the digestive system of the rodent reacts with the phosphide to generate toxic phosphine gas.
This method of vermin control has possible use in places where rodents are resistant to some of the anticoagulants, particularly for control of house and field mice; zinc phosphide baits are also cheaper than most second-generation anticoagulants. Sometimes, in the case of a large rodent infestation, the population is initially reduced by applying copious amounts of zinc phosphide bait, and the part of the population that survives the initial fast-acting poison is then eradicated by prolonged feeding on anticoagulant bait. Conversely, the individual rodents that survived anticoagulant bait poisoning (the residual population) can be eradicated by pre-baiting them with nontoxic bait for a week or two (this is important to overcome bait shyness and to get the rodents used to feeding in specific areas on a specific food, especially in eradicating rats) and subsequently applying poisoned bait of the same sort as used for pre-baiting until all consumption of the bait ceases (usually within 2–4 days). Alternating rodenticides with different modes of action in this way gives actual or near-100% eradication of the rodent population in the area, if the acceptance/palatability of the baits is good (i.e., the rodents feed on them readily). Zinc phosphide is typically added to rodent baits in a concentration of 0.75% to 2.0%. The baits have a strong, pungent, garlic-like odor due to the phosphine liberated by hydrolysis . The odor attracts (or, at least, does not repel) rodents, but has a repulsive effect on other mammals. Birds, notably wild turkeys , are not sensitive to the smell, and might feed on the bait and thus fall victim to the poison. [ citation needed ] The tablets or pellets (usually aluminium, calcium or magnesium phosphide for fumigation/gassing) may also contain other chemicals which evolve ammonia , which helps reduce the potential for spontaneous combustion or explosion of the phosphine gas.
[ citation needed ] Metal phosphides do not accumulate in the tissues of poisoned animals, so the risk of secondary poisoning is low. Before the advent of anticoagulants, phosphides were the favored kind of rat poison. During World War II, they came into use in the United States because of a shortage of strychnine due to the Japanese occupation of the territories where the strychnine tree is grown. Phosphides are rather fast-acting rat poisons, with the result that rats usually die in open areas instead of in the affected buildings. Phosphides used as rodenticides include: Cholecalciferol (vitamin D 3 ) and ergocalciferol (vitamin D 2 ) are used as rodenticides . They are toxic to rodents for the same reason they are important to humans: they affect calcium and phosphate homeostasis in the body. The vitamins D are essential in minute quantities (a few IU per kilogram of body weight daily, only a fraction of a milligram), and like most fat-soluble vitamins , they are toxic in larger doses, causing hypervitaminosis D . If the poisoning is severe enough (that is, if the dose of the toxin is high enough), it leads to death. In rodents that consume the rodenticidal bait, it causes hypercalcemia , raising the calcium level, mainly by increasing calcium absorption from food and mobilising bone-matrix-fixed calcium into ionised form (mainly the monohydrogencarbonate calcium cation, partially bound to plasma proteins, [CaHCO 3 ] + ), which circulates dissolved in the blood plasma .
After ingestion of a lethal dose, the free calcium levels are raised sufficiently that blood vessels , kidneys , the stomach wall and lungs are mineralised/calcified (formation of crystals of calcium salts/complexes in the tissues, damaging them), leading further to heart problems (myocardial tissue is sensitive to variations in free calcium levels, which affect both myocardial contractility and action potential propagation between the atria and ventricles), bleeding (due to capillary damage) and possibly kidney failure. It is considered to be single-dose, cumulative (depending on the concentration used; the common 0.075% bait concentration is lethal to most rodents after a single intake of larger portions of the bait) or sub-chronic (death occurring usually within days to one week after ingestion of the bait). Applied concentrations are 0.075% cholecalciferol (30,000 IU/g) [ 18 ] [ 19 ] and 0.1% ergocalciferol (40,000 IU/g) when used alone, which can kill a rodent. An important feature of calciferol toxicology is that calciferols are synergistic with anticoagulant toxicants: mixtures of anticoagulants and calciferols in the same bait are more toxic than the sum of the toxicities of the anticoagulant and the calciferol alone, so that a massive hypercalcemic effect can be achieved by a substantially lower calciferol content in the bait and, vice versa, more pronounced anticoagulant/hemorrhagic effects are observed if the calciferol is present. This synergism is mostly used in low-concentration calciferol baits, because effective concentrations of calciferols are more expensive than effective concentrations of most anticoagulants. [ 17 ] The first application of a calciferol in rodenticidal bait was in the Sorex product Sorexa D (with a different formula than today's Sorexa D), back in the early 1970s, which contained 0.025% warfarin and 0.1% ergocalciferol.
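The IU figures quoted above can be checked with the standard conversion 1 µg of vitamin D = 40 IU (the conversion factor is general knowledge, not taken from this article); a quick Python check:

```python
# 0.075% (w/w) cholecalciferol bait: micrograms of vitamin D3 per gram of bait,
# converted to International Units using the standard 1 ug = 40 IU.
IU_PER_MICROGRAM = 40
fraction = 0.075 / 100                 # 0.075% by weight
micrograms_per_gram = fraction * 1e6   # 1 g = 1e6 ug
print(micrograms_per_gram * IU_PER_MICROGRAM)  # 30000.0 IU per gram of bait
```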
Today, Sorexa CD contains a 0.0025% difenacoum and 0.075% cholecalciferol combination. Numerous other brand products containing either 0.075-0.1% calciferols (e.g. Quintox) alone or alongside an anticoagulant are marketed. [ 1 ] The Merck Veterinary Manual states the following: Although this rodenticide [cholecalciferol] was introduced with claims that it was less toxic to nontarget species than to rodents, clinical experience has shown that rodenticides containing cholecalciferol are a significant health threat to dogs and cats. Cholecalciferol produces hypercalcemia, which results in systemic calcification of soft tissue, leading to kidney failure , cardiac abnormalities, hypertension, CNS depression and GI upset. Signs generally develop within 18-36 hours of ingestion and can include depression, anorexia, polyuria and polydipsia. As serum calcium concentrations increase, clinical signs become more severe. ... GI smooth muscle excitability decreases and is manifest by anorexia, vomiting and constipation. ... Loss of renal concentrating ability is a direct result of hypercalcemia. As hypercalcemia persists, mineralization of the kidneys results in progressive renal insufficiency." [ 20 ] Additional anticoagulant renders the bait more toxic to pets as well as humans. Upon single ingestion, solely calciferol-based baits are considered generally safer to birds than second generation anticoagulants or acute toxicants. Treatment in pets is mostly supportive, with intravenous fluids and pamidronate disodium. The hormone calcitonin is no longer commonly used. [ 20 ] Rodenticides have two important drawbacks: 1) they cause a delayed, protracted and painful death for the rodent 2) they bioaccumulate, so that any predator that eats a rodent after it has ingested the poison, will also be poisoned. [ 21 ] The effect is cumulative and can be fatal to the predator. This decimates owl, raptor, fox and other predators as well as domestic cats and dogs. 
To address these issues, birth control for rodents has been introduced. The most commonly available, Contraceptol, has been shown to be effective at controlling rodent populations, particularly when coupled with environment modifications to make the area less attractive (remove food sources and minimize potential nesting sites). This form of birth control includes pheromones to attract both male and female rodents. After ingesting the contraceptive, the rodent experiences no discomfort, but cannot effectively breed for a month. If a large predator eats a rodent after it has ingested the contraceptive, it will not be harmed; at most it may experience a few days of infertility. Therefore, contraception offers an environmentally safe, non-toxic, non-polluting, extremely effective and humane alternative to traditional poisons. There are a few drawbacks to contraception. Since it doesn't kill the rodent immediately, it takes longer to see results. Both poisons and contraception need continued application to control the population on an ongoing basis, otherwise the rodent population will quickly rebound. Other chemical poisons include: In some countries, fixed three-component rodenticides, i.e., anticoagulant + antibiotic + vitamin D, are used. Associations of a second-generation anticoagulant with an antibiotic and/or vitamin D are considered to be effective even against most resistant strains of rodents, though some second generation anticoagulants (namely brodifacoum and difethialone), in bait concentrations of 0.0025% to 0.005% are so toxic that resistance is unknown, and even rodents resistant to other rodenticides are reliably exterminated by application of these most toxic anticoagulants. Powdered corn cob and corn meal gluten have been developed as rodenticides. They were approved in the EU and patented in the US in 2013. These preparations rely on dehydration and electrolyte imbalance to cause death. 
[ 22 ] [ 23 ] Inert gas killing of burrowing pest animals is another method with no impact on scavenging wildlife. One such method has been commercialized and sold under the brand name Rat Ice . One of the potential problems when using rodenticides is that dead or weakened rodents may be eaten by other wildlife, either predators or scavengers. Members of the public deploying rodenticides may not be aware of this or may not follow the product's instructions closely enough. There is evidence of secondary poisoning being caused by exposure to poisoned prey. [ 17 ] The faster a rodenticide acts, the more critical this problem may be. For the fast-acting rodenticide bromethalin, for example, there is no diagnostic test or antidote. [ 24 ] This has led environmental researchers to conclude that low-strength, long-duration rodenticides (generally first-generation anticoagulants) are the best balance between maximum effect and minimum risk. [ 25 ] In 2008, after assessing human health and ecological effects, as well as benefits, [ 13 ] the US Environmental Protection Agency (EPA) announced measures to reduce risks associated with ten rodenticides. [ 26 ] New sale and distribution restrictions, minimum package-size requirements, use-site restrictions, and tamper-resistant product requirements would have taken effect in 2011. The regulations were delayed pending a legal challenge by the manufacturer Reckitt Benckiser. [ 24 ] The entire rat populations of several islands have been eradicated, most notably New Zealand's Campbell Island , [ 27 ] Hawadax Island , Alaska (formerly known as Rat Island), [ 28 ] Macquarie Island [ 29 ] and Canna, Scotland (declared rat-free in 2008). [ 30 ] According to the Friends of South Georgia Island, all of the rats have been eliminated from South Georgia . [ 31 ] Alberta, Canada , through a combination of climate and control, is also believed to be rat-free. [ 32 ]
https://en.wikipedia.org/wiki/Rodenticide
In the theory of three-dimensional rotation , Rodrigues' rotation formula , named after Olinde Rodrigues , is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation . By extension, this can be used to transform all three basis vectors to compute a rotation matrix in SO(3) , the group of all rotation matrices, from an axis–angle representation . In terms of Lie theory, the Rodrigues' formula provides an algorithm to compute the exponential map from the Lie algebra so (3) to its Lie group SO(3) . This formula is variously credited to Leonhard Euler , Olinde Rodrigues , or a combination of the two. A detailed historical analysis in 1989 concluded that the formula should be attributed to Euler, and recommended calling it "Euler's finite rotation formula." [ 1 ] This proposal has received notable support, [ 2 ] but some others have viewed the formula as just one of many variations of the Euler–Rodrigues formula , thereby crediting both. [ 3 ] If v is a vector in ℝ 3 and k is a unit vector describing an axis of rotation about which v rotates by an angle θ according to the right hand rule , the Rodrigues formula for the rotated vector v rot is v r o t = v cos ⁡ θ + ( k × v ) sin ⁡ θ + k ( k ⋅ v ) ( 1 − cos ⁡ θ ) . {\displaystyle \mathbf {v} _{\mathrm {rot} }=\mathbf {v} \cos \theta +(\mathbf {k} \times \mathbf {v} )\sin \theta +\mathbf {k} ~(\mathbf {k} \cdot \mathbf {v} )(1-\cos \theta )\,.} The intuition of the above formula is that the first term scales the vector down, while the second skews it (via vector addition ) toward the new rotational position. The third term re-adds the height (relative to k {\displaystyle {\textbf {k}}} ) that was lost by the first term. An alternative statement is to write the axis vector as a cross product a × b of any two nonzero vectors a and b which define the plane of rotation, and the sense of the angle θ is measured away from a and towards b . 
Letting α denote the angle between these vectors, the two angles θ and α are not necessarily equal, but they are measured in the same sense. Then the unit axis vector can be written k = a × b | a × b | . {\displaystyle \mathbf {k} ={\frac {\mathbf {a} \times \mathbf {b} }{|\mathbf {a} \times \mathbf {b} |}}\,.} This form may be more useful when two vectors defining a plane are involved. An example in physics is the Thomas precession which includes the rotation given by Rodrigues' formula, in terms of two non-collinear boost velocities, and the axis of rotation is perpendicular to their plane. Let k be a unit vector defining a rotation axis, and let v be any vector to rotate about k by angle θ ( right hand rule , anticlockwise in the figure), producing the rotated vector v rot {\displaystyle \mathbf {v} _{\text{rot}}} . Using the dot and cross products , the vector v can be decomposed into components parallel and perpendicular to the axis k , v = v ∥ + v ⊥ , {\displaystyle \mathbf {v} =\mathbf {v} _{\parallel }+\mathbf {v} _{\perp }\,,} where the component parallel to k is called the vector projection of v on k , v ∥ = ( v ⋅ k ) k , {\displaystyle \mathbf {v} _{\parallel }=(\mathbf {v} \cdot \mathbf {k} )\mathbf {k} \,,} and the component perpendicular to k is called the vector rejection of v from k : v ⊥ = v − v ∥ = v − ( k ⋅ v ) k = − k × ( k × v ) , {\displaystyle \mathbf {v} _{\perp }=\mathbf {v} -\mathbf {v} _{\parallel }=\mathbf {v} -(\mathbf {k} \cdot \mathbf {v} )\mathbf {k} =-\mathbf {k} \times (\mathbf {k} \times \mathbf {v} )\,,} where the last equality follows from the vector triple product formula: a × ( b × c ) = ( a ⋅ c ) b − ( a ⋅ b ) c {\textstyle \mathbf {a} \times (\mathbf {b} \times \mathbf {c} )=(\mathbf {a} \cdot \mathbf {c} )\mathbf {b} -(\mathbf {a} \cdot \mathbf {b} )\mathbf {c} } . Finally, the vector k × v ⊥ = k × v {\displaystyle \mathbf {k} \times \mathbf {v} _{\perp }=\mathbf {k} \times \mathbf {v} } is a copy of v ⊥ {\displaystyle \mathbf {v} _{\perp }} rotated 90° around k {\displaystyle \mathbf {k} } . Thus the three vectors k , v ⊥ , k × v {\displaystyle \mathbf {k} \,,\ \mathbf {v} _{\perp }\,,\,\mathbf {k} \times \mathbf {v} } form a right-handed orthogonal basis of R 3 {\displaystyle \mathbb {R} ^{3}} , with the last two vectors of equal length.
Under the rotation, the component v ∥ {\displaystyle \mathbf {v} _{\parallel }} parallel to the axis will not change magnitude nor direction: v ∥ rot = v ∥ , {\displaystyle \mathbf {v} _{\parallel \mathrm {rot} }=\mathbf {v} _{\parallel }\,,} while the perpendicular component will retain its magnitude but rotate its direction in the perpendicular plane spanned by v ⊥ {\displaystyle \mathbf {v} _{\perp }} and k × v {\displaystyle \mathbf {k} \times \mathbf {v} } , according to v ⊥ rot = cos θ v ⊥ + sin θ k × v , {\displaystyle \mathbf {v} _{\perp \mathrm {rot} }=\cos \theta \,\mathbf {v} _{\perp }+\sin \theta \,\mathbf {k} \times \mathbf {v} \,,} in analogy with the planar polar coordinates ( r , θ ) in the Cartesian basis e x , e y . Now the full rotated vector is: v rot = v ∥ rot + v ⊥ rot . {\displaystyle \mathbf {v} _{\mathrm {rot} }=\mathbf {v} _{\parallel \mathrm {rot} }+\mathbf {v} _{\perp \mathrm {rot} }\,.} Substituting v ⊥ = v − v ‖ {\displaystyle \mathbf {v} _{\perp }=\mathbf {v} -\mathbf {v} _{\|}} or v ‖ = v − v ⊥ {\displaystyle \mathbf {v} _{\|}=\mathbf {v} -\mathbf {v} _{\perp }} in the last expression gives respectively: v rot = cos θ v + ( 1 − cos θ ) v ∥ + sin θ k × v {\displaystyle \mathbf {v} _{\mathrm {rot} }=\cos \theta \,\mathbf {v} +(1-\cos \theta )\mathbf {v} _{\parallel }+\sin \theta \,\mathbf {k} \times \mathbf {v} } and v rot = v − ( 1 − cos θ ) v ⊥ + sin θ k × v . {\displaystyle \mathbf {v} _{\mathrm {rot} }=\mathbf {v} -(1-\cos \theta )\mathbf {v} _{\perp }+\sin \theta \,\mathbf {k} \times \mathbf {v} \,.} The linear transformation on v ∈ R 3 {\displaystyle \mathbf {v} \in \mathbb {R} ^{3}} defined by the cross product v ↦ k × v {\displaystyle \mathbf {v} \mapsto \mathbf {k} \times \mathbf {v} } is given in coordinates by representing v and k × v as column matrices : {\displaystyle \mathbf {k} \times \mathbf {v} ={\begin{bmatrix}k_{y}v_{z}-k_{z}v_{y}\\k_{z}v_{x}-k_{x}v_{z}\\k_{x}v_{y}-k_{y}v_{x}\end{bmatrix}}={\begin{bmatrix}0&-k_{z}&k_{y}\\k_{z}&0&-k_{x}\\-k_{y}&k_{x}&0\end{bmatrix}}{\begin{bmatrix}v_{x}\\v_{y}\\v_{z}\end{bmatrix}}\,.} That is, the matrix of this linear transformation (with respect to standard coordinates) is the cross-product matrix : K = {\displaystyle \mathbf {K} ={\begin{bmatrix}0&-k_{z}&k_{y}\\k_{z}&0&-k_{x}\\-k_{y}&k_{x}&0\end{bmatrix}}\,.} That is to say, k × v = K v . {\displaystyle \mathbf {k} \times \mathbf {v} =\mathbf {K} \mathbf {v} \,.} The last formula in the previous section can therefore be written as: v rot = v + sin θ K v + ( 1 − cos θ ) K 2 v . {\displaystyle \mathbf {v} _{\mathrm {rot} }=\mathbf {v} +\sin \theta \,\mathbf {K} \mathbf {v} +(1-\cos \theta )\mathbf {K} ^{2}\mathbf {v} \,.} Collecting terms allows the compact expression v rot = R v , {\displaystyle \mathbf {v} _{\mathrm {rot} }=\mathbf {R} \mathbf {v} \,,} where R = I + ( sin ⁡ θ ) K + ( 1 − cos ⁡ θ ) K 2 {\displaystyle \mathbf {R} =\mathbf {I} +(\sin \theta )\mathbf {K} +(1-\cos \theta )\mathbf {K} ^{2}} is the rotation matrix through an angle θ counterclockwise about the axis k , and I the 3 × 3 identity matrix . [ 4 ] This matrix R is an element of the rotation group SO(3) of ℝ 3 , and K is an element of the Lie algebra s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} generating that Lie group (note that K is skew-symmetric, which characterizes s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} ). In terms of the matrix exponential, R = exp ( θ K ) . {\displaystyle \mathbf {R} =\exp(\theta \mathbf {K} )\,.} To see that the last identity holds, one notes that R ( θ ) R ( ϕ ) = R ( θ + ϕ ) , {\displaystyle \mathbf {R} (\theta )\mathbf {R} (\phi )=\mathbf {R} (\theta +\phi )\,,} characteristic of a one-parameter subgroup , i.e. exponential, and that the formulas match for infinitesimal θ .
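The closed-form matrix R = I + (sin θ) K + (1 − cos θ) K² is simple to implement directly; the following is a minimal pure-Python sketch (function names are illustrative, not from the source):

```python
import math

def cross_matrix(k):
    """Cross-product matrix K of the unit axis k, so that K v = k x v."""
    kx, ky, kz = k
    return [[0.0, -kz,  ky],
            [ kz, 0.0, -kx],
            [-ky,  kx, 0.0]]

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][t] * B[t][j] for t in range(3)) for j in range(3)]
            for i in range(3)]

def rotation_matrix(k, theta):
    """R = I + sin(theta) K + (1 - cos(theta)) K^2 (Rodrigues, matrix form)."""
    K = cross_matrix(k)
    K2 = matmul(K, K)
    s, c = math.sin(theta), math.cos(theta)
    I = [[float(i == j) for j in range(3)] for i in range(3)]
    return [[I[i][j] + s * K[i][j] + (1.0 - c) * K2[i][j] for j in range(3)]
            for i in range(3)]

# A 90 degree rotation about the z-axis maps the x-axis onto the y-axis.
R = rotation_matrix((0.0, 0.0, 1.0), math.pi / 2)
```

For a z-axis rotation this reproduces the familiar planar matrix [[cos θ, −sin θ, 0], [sin θ, cos θ, 0], [0, 0, 1]], which is a quick sanity check on the signs in K.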
For an alternative derivation based on this exponential relationship, see exponential map from s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} to SO(3) . For the inverse mapping, see log map from SO(3) to s o ( 3 ) {\displaystyle {\mathfrak {so}}(3)} . The above result can be written in index notation as follows. The elements of the matrix for an active rotation by an angle θ {\displaystyle \theta } about an axis n are given by Here, i, j, and k label the Cartesian components (x, y, z) or (1, 2, 3), δ i j {\displaystyle \delta _{ij}} and ϵ i j k {\displaystyle \epsilon _{ijk}} are the Kronecker and Levi-Civita symbols, and there is an implicit sum on repeated indices. The Hodge dual of the rotation R {\displaystyle \mathbf {R} } is just R ∗ = − sin ⁡ ( θ ) k {\displaystyle \mathbf {R} ^{*}=-\sin(\theta )\mathbf {k} } which enables the extraction of both the axis of rotation and the sine of the angle of the rotation from the rotation matrix itself, with the usual ambiguity, where σ = ± 1 {\displaystyle \sigma =\pm 1} . The above simple expression results from the fact that the Hodge duals of I {\displaystyle \mathbf {I} } and K 2 {\displaystyle \mathbf {K} ^{2}} are zero, and K ∗ = − k {\displaystyle \mathbf {K} ^{*}=-\mathbf {k} } .
https://en.wikipedia.org/wiki/Rodrigues'_rotation_formula
The Roe approximate Riemann solver , devised by Phil Roe , is an approximate Riemann solver based on the Godunov scheme and involves finding an estimate for the intercell numerical flux or Godunov flux F i + 1 2 {\displaystyle F_{i+{\frac {1}{2}}}} at the interface between two computational cells U i {\displaystyle U_{i}} and U i + 1 {\displaystyle U_{i+1}} , on some discretised space-time computational domain. A non-linear system of hyperbolic partial differential equations representing a set of conservation laws in one spatial dimension can be written in the form Applying the chain rule to the second term we get the quasi-linear hyperbolic system where A {\displaystyle A} is the Jacobian matrix of the flux vector F ( U ) {\displaystyle {\boldsymbol {F}}({\boldsymbol {U}})} . The Roe method consists of finding a matrix A ~ ( U i , U i + 1 ) {\displaystyle {\tilde {A}}({\boldsymbol {U}}_{i},{\boldsymbol {U}}_{i+1})} that is assumed constant between two cells. The Riemann problem can then be solved as a truly linear hyperbolic system at each cell interface. The Roe matrix must obey the following conditions: Phil Roe introduced a method of parameter vectors to find such a matrix for some systems of conservation laws. [ 1 ] Once the Roe matrix corresponding to the interface between two cells is found, the intercell flux is given by solving the quasi-linear system as a truly linear system.
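In the scalar case (a single conservation law, e.g. the inviscid Burgers equation with F(U) = U²/2), the Roe linearization reduces to a single secant wave speed ã = (F(U_{i+1}) − F(U_i))/(U_{i+1} − U_i), and the intercell flux takes an upwind form. A minimal sketch of this scalar special case (illustrative only; it is not Roe's parameter-vector construction for systems):

```python
def roe_flux(uL, uR, f=lambda u: 0.5 * u * u):
    """Roe intercell flux for a scalar conservation law u_t + f(u)_x = 0.

    The Roe speed is the secant slope a~ = (f(uR) - f(uL)) / (uR - uL);
    when uL == uR it degenerates to f'(u), which is u for the default
    Burgers flux used here.
    """
    if uL == uR:
        a = uL  # f'(u) = u, valid for the default Burgers flux
    else:
        a = (f(uR) - f(uL)) / (uR - uL)
    # central average of the fluxes plus upwind dissipation scaled by |a~|
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

# A right-moving shock (uL=1, uR=0) should be fed by the left-state flux.
print(roe_flux(1.0, 0.0))
```

For systems, the same structure applies eigenvector-by-eigenvector of the Roe matrix Ã, which is what makes the linearized Riemann problem solvable in closed form.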
https://en.wikipedia.org/wiki/Roe_solver
The Roemer model of political competition is a game between political parties in which each party announces a multidimensional policy vector . Since Nash equilibria do not normally exist when the policy space is multidimensional, John Roemer introduced the concept of party-unanimity Nash equilibrium (PUNE), which can be considered an application of the concept of Nash equilibrium to political competition. It is also a generalization of the Wittman model of political competition. In Roemer's model, all political parties are assumed to consist of three types of factions— opportunists , militants , and reformers . Opportunists seek solely to maximize the party's vote share in an election; militants seek to announce (and implement) the preferred policy of the average party member; and reformers have an objective function that is a convex combination of the objective functions of the opportunists and militants. It has been shown that the existence of reformers has no effect on what policies the party announces. With two parties, a pair of policy announcements constitute a PUNE if and only if the reformers and militants of any given party do not unanimously agree to deviate from their announced policy, given the policy put forth by the other party. In other words, if a pair of policies constitute a PUNE, then it should not be the case that both factions of a party can be made weakly better off (and one faction strictly better off) by deviating from the policy that they put forward. Such unanimity to deviate can be rare, and thus PUNEs are more likely to exist than regular Nash equilibria. Although there are no known cases where PUNEs do not exist, no simple necessary and sufficient conditions for the existence of non-trivial PUNEs have yet been offered. (A nontrivial PUNE is one in which no party offers the ideal policy of either its militants or opportunists.) 
The question of the existence of non-trivial PUNEs remains an important open question in the theory of political competition.
https://en.wikipedia.org/wiki/Roemer_model_of_political_competition
The roentgen equivalent man ( rem ) [ 1 ] [ 2 ] is a CGS unit of equivalent dose , effective dose , and committed dose , which are dose measures used to estimate potential health effects of low levels of ionizing radiation on the human body. Quantities measured in rem are designed to represent the stochastic biological risk of ionizing radiation, which is primarily radiation-induced cancer . These quantities are derived from absorbed dose , which in the CGS system has the unit rad . There is no universally applicable conversion constant from rad to rem; the conversion depends on relative biological effectiveness (RBE). The rem has been defined since 1976 as equal to 0.01 sievert , which is the more commonly used SI unit outside the United States. Earlier definitions going back to 1945 were derived from the roentgen unit , which was named after Wilhelm Röntgen , a German scientist who discovered X-rays . The unit name is misleading, since 1 roentgen actually deposits about 0.96 rem in soft biological tissue, when all weighting factors equal unity. Older units of rem following other definitions are up to 17% smaller than the modern rem. Doses greater than 100 rem received over a short time period are likely to cause acute radiation syndrome (ARS), possibly leading to death within weeks if left untreated. Note that the quantities that are measured in rem were not designed to be correlated to ARS symptoms. The absorbed dose , measured in rad, is a better indicator of ARS. [ 3 ] : 592–593 A rem is a large dose of radiation, so the millirem ( mrem ), which is one thousandth of a rem, is often used for the dosages commonly encountered, such as the amount of radiation received from medical x-rays and background sources. The rem and millirem are CGS units in widest use among the U.S. public, industry, and government. 
[ 4 ] However, the SI unit, the sievert (Sv), is the normal unit outside the United States, is increasingly encountered within the US in academic, scientific, and engineering environments, and has now virtually replaced the rem. [ 5 ] The conventional unit for dose rate is mrem/h. Regulatory limits and chronic doses are often given in units of mrem/yr or rem/yr, where they are understood to represent the total amount of radiation allowed (or received) over the entire year. In many occupational scenarios, the hourly dose rate might fluctuate to levels thousands of times higher for a brief period of time, without infringing on the annual total exposure limits. The annual conversions to a Julian year are: The International Commission on Radiological Protection (ICRP) once adopted fixed conversions for occupational exposure, although these have not appeared in recent documents: [ 6 ] Therefore, for occupational exposures of that time period, The U.S. National Institute of Standards and Technology (NIST) strongly discourages expressing doses in rem, recommending the SI unit instead. [ 7 ] The NIST recommends defining the rem in relation to the SI in every document where this unit is used. [ 8 ] Ionizing radiation has deterministic and stochastic effects on human health. The deterministic effects that can lead to acute radiation syndrome occur only in the case of high doses (> ~10 rad or > 0.1 Gy) and high dose rates (> ~10 rad/h or > 0.1 Gy/h). A model of deterministic risk would require different weighting factors (not yet established) than are used in the calculation of equivalent and effective dose. To avoid confusion, deterministic effects are normally compared to absorbed dose in units of rad, not rem. [ 9 ] Stochastic effects are those that occur randomly, such as radiation-induced cancer. 
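The unit relationships above (1 rem = 0.01 Sv, doses quoted in mrem, rates in mrem/h accumulated over a Julian year) amount to simple arithmetic, sketched here for illustration (the sample dose rate is hypothetical, not a regulatory value):

```python
REM_PER_SIEVERT = 100.0          # 1 Sv = 100 rem, i.e. 1 rem = 0.01 Sv (1976 definition)
HOURS_PER_JULIAN_YEAR = 8766.0   # 365.25 days * 24 h

def mrem_to_msv(mrem):
    """Convert millirem to millisievert (100 mrem = 1 mSv)."""
    return mrem / REM_PER_SIEVERT

def annual_dose_mrem(rate_mrem_per_h):
    """Total dose accumulated over one Julian year at a constant hourly rate."""
    return rate_mrem_per_h * HOURS_PER_JULIAN_YEAR

# A constant rate of 0.01 mrem/h accumulates about 87.7 mrem (~0.88 mSv) per year.
print(mrem_to_msv(annual_dose_mrem(0.01)))
```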
The consensus of the nuclear industry, nuclear regulators, and governments is that the incidence of cancers caused by ionizing radiation can be modeled as increasing linearly with effective dose at a rate of 0.055% per rem (5.5%/Sv). [ 10 ] Individual studies, alternate models, and earlier versions of the industry consensus have produced other risk estimates scattered around this consensus model. There is general agreement that the risk is much higher for infants and fetuses than for adults, higher for the middle-aged than for seniors, and higher for women than for men, though there is no quantitative consensus about this. [ 11 ] [ 12 ] There is much less data, and much more controversy, regarding the possibility of cardiac and teratogenic effects, and the modelling of internal dose . [ 13 ] The ICRP recommends limiting artificial irradiation of the public to an average of 100 mrem (1 mSv) of effective dose per year, not including medical and occupational exposures. [ 10 ] For comparison, radiation levels inside the United States Capitol are 85 mrem/yr (0.85 mSv/yr), close to the regulatory limit, because of the uranium content of the granite structure. [ 14 ] The NRC sets the annual total effective dose of full body radiation, or total body radiation (TBR), allowed for radiation workers at 5,000 mrem (5 rem). [ 15 ] [ 16 ] The concept of the rem first appeared in literature in 1945 [ 17 ] and was given its first definition in 1947. [ 18 ] The definition was refined in 1950 as "that dose of any ionizing radiation which produces a relevant biological effect equal to that produced by one roentgen of high-voltage x-radiation." [ 19 ] Using data available at the time, the rem was variously evaluated as 83, 93, or 95 erg /gram. [ 20 ] Along with the introduction of the rad in 1953, the ICRP decided to continue the use of the rem. 
The US National Committee on Radiation Protection and Measurements noted in 1954 that this effectively implied an increase in the magnitude of the rem to match the rad (100 erg/gram). [ 21 ] The ICRP introduced and then officially adopted the rem in 1962 as the unit of equivalent dose to measure the way different types of radiation distribute energy in tissue and began recommending values of relative biological effectiveness (RBE) for various types of radiation. [ 22 ] In practice, the unit of rem was used to denote that an RBE factor had been applied to a number which was originally in units of rad or roentgen. The International Committee for Weights and Measures (CIPM) adopted the sievert in 1980 but never accepted the use of the rem. The NIST recognizes that this unit is outside the SI but temporarily accepts its use in the U.S. with the SI. [ 8 ] The rem remains in widespread use as an industry standard in the U.S. [ 23 ] The United States Nuclear Regulatory Commission still permits the use of the units curie , rad, and rem alongside SI units. [ 24 ] The following table shows radiation quantities in SI and non-SI units:
https://en.wikipedia.org/wiki/Roentgen_equivalent_man
Roger Harquail French is a materials scientist, engineer, academic and author. He is the Kyocera Professor in the Case School of Engineering at Case Western Reserve University (CWRU) . [ 1 ] French's research interests at CWRU span optical properties and electronic structure, degradation science of materials in outdoor-exposed technologies such as photovoltaics , and employing data science and deep learning using distributed and high-performance computing . [ 2 ] While at DuPont he worked on semiconductor lithography, phase shift masks , pellicles , and photoresists , registering multiple patents. [ 3 ] His publications comprise research articles and a book entitled Durability and Reliability of Polymers and Other Materials in Photovoltaic Modules . He received the 2020 Faculty Distinguished Research Award [ 4 ] and the 2023 Innovation Week Inventor Award from Case Western Reserve University, [ 5 ] in addition to being honored as a Senior Member of the IEEE and as a Fellow of the American Ceramic Society . [ 6 ] French graduated with a Bachelor of Science with Distinction in Materials Science and Engineering from Cornell University in 1979, and in 1985 obtained a PhD in Materials Science and Engineering from the Massachusetts Institute of Technology , working with doctoral advisor Robert L. Coble . [ 7 ] French began his research career in 1985 in Central Research and Development at DuPont . From 1993 to 2002, while still working at DuPont, he was a Visiting Scientist for a month a year in Manfred Rühle's lab at Max-Planck-Institut für Metallforschung in Stuttgart , Germany . His academic career began as an adjunct professor in the Materials Science Department at the University of Pennsylvania in 1996. In 2010, he joined Case Western Reserve University as the F. Alex Nason Professor and has been the Kyocera Professor of Ceramics at CWRU since 2016. 
[ 1 ] He is also the director of the DOE-NNSA Center of Excellence for Materials Data Science for Stockpile Stewardship, [ 8 ] [ 9 ] and a Co-Principal Investigator of both the NSF Materials Data Science for Reliability and Degradation Center (MDS-Rely) [ 10 ] and the NSF-sponsored IUCRC Center for Advancing Sustainable and Distributed Fertilizer Production (CASFER). [ 11 ] French has studied optical properties and electronic structures of ceramics, polymers, and biomolecules, employing spectroscopy and computational optics, and has explored radiation durability, photochemical degradation, and data-driven approaches to predict lifetime performance and enhance energy efficiency in outdoor-exposed technologies. [ 2 ] His group has utilized VUV and optical spectroscopies, [ 12 ] [ 13 ] along with spectroscopic ellipsometry, [ 14 ] [ 15 ] to investigate the optical properties, electronic structure, and radiation durability of optical materials, polymers, ceramics, and liquids. He has earned patents for phase shift photomasks, [ 16 ] transparent fluoropolymers for pellicles, [ 17 ] photoresists [ 18 ] and immersion fluids, and optical elements for photovoltaics. [ 19 ] His research has contributed to the understanding of how these optical properties influence van der Waals quantum electrodynamical interactions, crucial in governing wetting phenomena and mesoscale assembly in nanotubes and biomolecular materials like DNA and proteins. [ 20 ] [ 21 ] French has developed non-relational, distributed computing environments based on Hadoop, Hbase, Ozone, Impala, and Spark for data science and analytics of complex systems. Through this framework, he has integrated real-world performance data with lab-based experimental datasets to elucidate degradation mechanisms and pathways active over the lifespan of technologies. 
[ 22 ] His methods encompass network modeling, [ 23 ] structural equations, and graphs to quantify and simulate global spatio-temporal systems such as PV power plants. [ 24 ] [ 25 ] In the area of Lifetime and Degradation Science (L&DS), French's focus has extended to examining long-lived environmentally-exposed materials, components, and systems, including PV technologies, roofing, and building exteriors. [ 26 ] For projects under DOE-SETO, he has researched the lifetime performance and reliability of mono- and bi-facial silicon PERC modules and modeling PV fleet performance using spatio-temporal Graph Neural Networks (stGNNs). [ 27 ] [ 28 ] Under his leadership, the SDLE Research Center has applied data science methodologies across a broad spectrum of energy and materials projects, including a DOE ARPA-E funded initiative on building energy efficiency. [ 29 ] [ 30 ] He is the son of James Bruce French .
https://en.wikipedia.org/wiki/Roger_H._French
Charles Roger Slack FRS FRSNZ (22 April 1937 – 24 October 2016) was a British-born plant biologist and biochemist who lived and worked in Australia (1962–1970) and New Zealand (1970–2000). In 1966, jointly with Marshall Hatch , he discovered C4 photosynthesis (also known as the Hatch Slack Pathway). Slack was born on 22 April 1937 in Ashton-under-Lyne , Lancashire , England; the first and only child of Albert and Eva Slack. [ 1 ] He studied biochemistry at the University of Nottingham , where he graduated with a Bachelor of Science (Honours) in 1958, and a PhD in 1962. [ 1 ] He married Pam Shaw in March 1963, and had two children. [ 1 ] From 1962, Slack worked as a biochemist at the David North Plant Research Centre in Brisbane , Queensland , Australia (funded by the Colonial Sugar Refining Co. Ltd). [ 1 ] In 1970, he joined the Department of Scientific and Industrial Research in New Zealand. [ 2 ] From 1989 until his retirement in 2000, Slack was a senior scientist at the newly formed Crown Research Institute for Crop & Food Research in Palmerston North . [ 1 ] Slack died in Palmerston North in 2016. [ 1 ] In 2007 the New Zealand Society of Plant Biologists renamed their annual award after Slack. The award is made to society members to recognise an outstanding contribution to the study of plant biology. It was renamed in recognition of his outstanding contribution as a plant biologist and biochemist in New Zealand, his role in the discovery of C4 photosynthesis (also known as the Hatch Slack Pathway ), and his contribution as an early member of the New Zealand Society of Plant Biologists. [ 2 ] Selected articles: [ 1 ]
https://en.wikipedia.org/wiki/Roger_Slack
The Rogers–Ramanujan continued fraction is a continued fraction discovered by Rogers (1894) and independently by Srinivasa Ramanujan , and closely related to the Rogers–Ramanujan identities . It can be evaluated explicitly for a broad class of values of its argument. Given the functions G ( q ) {\displaystyle G(q)} and H ( q ) {\displaystyle H(q)} appearing in the Rogers–Ramanujan identities, assume q = e 2 π i τ {\displaystyle q=e^{2\pi i\tau }} , with the coefficients of the q -expansion being OEIS : A003114 and OEIS : A003106 , respectively, where ( a ; q ) ∞ {\displaystyle (a;q)_{\infty }} denotes the infinite q-Pochhammer symbol , j is the j-function , and 2 F 1 is the hypergeometric function . The Rogers–Ramanujan continued fraction is then One should be careful with notation since the formulas employing the j-function j {\displaystyle j} will be consistent with the other formulas only if q = e 2 π i τ {\displaystyle q=e^{2\pi i\tau }} (the square of the nome ) is used throughout this section since the q -expansion of the j-function (as well as the well-known Dedekind eta function ) uses q = e 2 π i τ {\displaystyle q=e^{2\pi i\tau }} . However, Ramanujan, in his examples to Hardy and given below, used the nome q = e π i τ {\displaystyle q=e^{\pi i\tau }} instead. [ citation needed ] If q is the nome or its square, then q − 1 60 G ( q ) {\displaystyle q^{-{\frac {1}{60}}}G(q)} and q 11 60 H ( q ) {\displaystyle q^{\frac {11}{60}}H(q)} , as well as their quotient R ( q ) {\displaystyle R(q)} , are related to modular functions of τ {\displaystyle \tau } . Since they have integral coefficients, the theory of complex multiplication implies that their values for τ {\displaystyle \tau } involving an imaginary quadratic field are algebraic numbers that can be evaluated explicitly. 
Given the general form where Ramanujan used the nome q = e π i τ {\displaystyle q=e^{\pi i\tau }} , f when τ = i {\displaystyle \tau =i} , when τ = 2 i {\displaystyle \tau =2i} , when τ = 4 i {\displaystyle \tau =4i} , when τ = 2 5 i {\displaystyle \tau =2{\sqrt {5}}i} , when τ = 5 i {\displaystyle \tau =5i} , when τ = 10 i {\displaystyle \tau =10i} , when τ = 20 i {\displaystyle \tau =20i} , and φ = 1 + 5 2 {\displaystyle \varphi ={\tfrac {1+{\sqrt {5}}}{2}}} is the golden ratio . Note that R ( e − 2 π ) {\displaystyle R{\big (}e^{-2\pi }{\big )}} is a positive root of the quartic equation , while R ( e − π ) {\displaystyle R{\big (}e^{-\pi }{\big )}} and R ( e − 4 π ) {\displaystyle R{\big (}e^{-4\pi }{\big )}} are two positive roots of a single octic , (since φ {\displaystyle \varphi } has a square root) which explains the similarity of the two closed-forms. More generally, for positive integer m , then R ( e − 2 π / m ) {\displaystyle R(e^{-2\pi /m})} and R ( e − 2 π m ) {\displaystyle R(e^{-2\pi \,m})} are two roots of the same equation as well as, The algebraic degree k of R ( e − π n ) {\displaystyle R(e^{-\pi \,n})} for n = 1 , 2 , 3 , 4 , … {\displaystyle n=1,2,3,4,\dots } is k = 8 , 4 , 32 , 8 , … {\displaystyle k=8,4,32,8,\dots } ( OEIS : A082682 ). Incidentally, these continued fractions can be used to solve some quintic equations as shown in a later section. Interestingly, there are explicit formulas for G ( q ) {\displaystyle G(q)} and H ( q ) {\displaystyle H(q)} in terms of the j-function j ( τ ) {\displaystyle j(\tau )} and the Rogers-Ramanujan continued fraction R ( q ) {\displaystyle R(q)} . However, since j ( τ ) {\displaystyle j(\tau )} uses the nome's square q = e 2 π i τ {\displaystyle q=e^{2\pi \,i\tau }} , then one should be careful with notation such that j ( τ ) , G ( q ) , H ( q ) {\displaystyle j(\tau ),\,G(q),\,H(q)} and r = R ( q ) {\displaystyle r=R(q)} use the same q {\displaystyle q} . 
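These special values can be checked numerically by truncating the continued fraction R(q) = q^{1/5}/(1 + q/(1 + q²/(1 + ⋯))), as can the n = 2 modular equation v − u² = (v + u²)uv² quoted in a later paragraph. A short sketch (the truncation depth and the sample point q = 0.05 are arbitrary choices):

```python
import math

def rr_cf(q, depth=60):
    """Truncated Rogers-Ramanujan continued fraction
    R(q) = q^(1/5) / (1 + q/(1 + q^2/(1 + ...))), evaluated bottom-up."""
    t = 1.0
    for n in range(depth, 0, -1):
        t = 1.0 + q**n / t
    return q**0.2 / t

phi = (1 + math.sqrt(5)) / 2                         # golden ratio
closed = math.sqrt(phi * math.sqrt(5)) - phi         # classical value of R(e^{-2*pi})
print(abs(rr_cf(math.exp(-2 * math.pi)) - closed))   # agreement to machine precision

# Numerical check of the n = 2 modular equation: v - u^2 = (v + u^2) u v^2
q = 0.05
u, v = rr_cf(q), rr_cf(q * q)
print(abs((v - u**2) - (v + u**2) * u * v**2))       # essentially zero
```

Because |q| < 1, the tail terms q^n decay geometrically, so a modest truncation depth already gives full double precision.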
Of course, the secondary formulas imply that q − 1 / 60 G ( q ) {\displaystyle q^{-1/60}G(q)} and q 11 / 60 H ( q ) {\displaystyle q^{11/60}H(q)} are algebraic numbers (though normally of high degree) for τ {\displaystyle \tau } involving an imaginary quadratic field . For example, the formulas above simplify to, and, and so on, with φ {\displaystyle \varphi } as the golden ratio. In the following, the essential theorems for the Rogers–Ramanujan continued fractions R and S are expressed using tangential sums and tangential differences: The elliptic nome and the complementary nome are related to each other as follows: The complementary nome of a modulus k is equal to the nome of the Pythagorean complementary modulus: These are the reflection theorems for the continued fractions R and S: The letter Φ {\displaystyle \Phi } denotes the golden ratio: The theorems for the squared nome are constructed as follows: The following relations hold between the continued fractions and the Jacobi theta functions: Certain values are now inserted into the theorems just shown: Therefore the following identity holds: An analogous pattern gives this result: Therefore the following identity holds: Furthermore, the same relation is obtained by using the above-mentioned theorem about the Jacobi theta functions: This result follows from the Poisson summation formula, and the equation can be solved as follows: Using the other theorem about the Jacobi theta functions, a further value can be determined: This chain of equations leads to the following tangential sum: Therefore the following result appears: In the next step, the reflection theorem for the continued fraction R is used again: This yields a further result: The reflection theorem is now applied to the following values: The Jacobi theta theorem leads to a further relation: Tangentially adding the two theorems just mentioned gives this result: Tangential subtraction gives this result: In an alternative approach, we use the 
theorem for the squared nome: Now the reflection theorem is applied again: Inserting the last-mentioned expression into the squared-nome theorem gives this equation: Clearing the denominators gives an equation of the sixth degree: The solution of this equation is the value already given above: R ( q ) {\displaystyle R(q)} can be related to the Dedekind eta function , a modular form of weight 1/2, as, [ 1 ] The Rogers–Ramanujan continued fraction can also be expressed in terms of the Jacobi theta functions . Recall the notation, The notation θ n {\displaystyle \theta _{n}} is slightly easier to remember since θ 2 4 + θ 4 4 = θ 3 4 {\displaystyle \theta _{2}^{4}+\theta _{4}^{4}=\theta _{3}^{4}} , with even subscripts on the LHS. Thus, Note, however, that theta functions normally use the nome q = e iπτ , while the Dedekind eta function uses the square of the nome q = e 2iπτ , thus the variable x has been employed instead to maintain consistency between all functions. For example, let τ = − 1 {\displaystyle \tau ={\sqrt {-1}}} so x = e − π {\displaystyle x=e^{-\pi }} . Plugging this into the theta functions, one gets the same value for all three R ( x ) formulas, which is the correct evaluation of the continued fraction given previously, One can also define the elliptic nome , The small letter k denotes the elliptic modulus and the capital letter K denotes the complete elliptic integral of the first kind. The continued fraction can then also be expressed by the Jacobi elliptic functions as follows: with One formula involving the j-function and the Dedekind eta function is this: where x = [ 5 η ( 5 τ ) η ( τ ) ] 6 . {\displaystyle x=\left[{\frac {{\sqrt {5}}\,\eta (5\tau )}{\eta (\tau )}}\right]^{6}.\,} Since also, Eliminating the eta quotient x {\displaystyle x} between the two equations, one can then express j ( τ ) in terms of r = R ( q ) {\displaystyle r=R(q)} as, where the numerator and denominator are polynomial invariants of the icosahedron . 
Using the modular equation between R ( q ) {\displaystyle R(q)} and R ( q 5 ) {\displaystyle R(q^{5})} , one finds that, Let z = r 5 − 1 r 5 {\displaystyle z=r^{5}-{\frac {1}{r^{5}}}} , then j ( 5 τ ) = − ( z 2 + 12 z + 16 ) 3 z + 11 {\displaystyle j(5\tau )=-{\frac {\left(z^{2}+12z+16\right)^{3}}{z+11}}} where which in fact is the j-invariant of the elliptic curve , parameterized by the non-cusp points of the modular curve X 1 ( 5 ) {\displaystyle X_{1}(5)} . For convenience, one can also use the notation r ( τ ) = R ( q ) {\displaystyle r(\tau )=R(q)} when q = e 2πiτ . While other modular functions like the j-invariant satisfies, and the Dedekind eta function has, the functional equation of the Rogers–Ramanujan continued fraction involves [ 2 ] the golden ratio φ {\displaystyle \varphi } , Incidentally, There are modular equations between R ( q ) {\displaystyle R(q)} and R ( q n ) {\displaystyle R(q^{n})} . Elegant ones for small prime n are as follows. [ 3 ] For n = 2 {\displaystyle n=2} , let u = R ( q ) {\displaystyle u=R(q)} and v = R ( q 2 ) {\displaystyle v=R(q^{2})} , then v − u 2 = ( v + u 2 ) u v 2 . {\displaystyle v-u^{2}=(v+u^{2})uv^{2}.} For n = 3 {\displaystyle n=3} , let u = R ( q ) {\displaystyle u=R(q)} and v = R ( q 3 ) {\displaystyle v=R(q^{3})} , then ( v − u 3 ) ( 1 + u v 3 ) = 3 u 2 v 2 . {\displaystyle (v-u^{3})(1+uv^{3})=3u^{2}v^{2}.} For n = 5 {\displaystyle n=5} , let u = R ( q ) {\displaystyle u=R(q)} and v = R ( q 5 ) {\displaystyle v=R(q^{5})} , then v ( v 4 − 3 v 3 + 4 v 2 − 2 v + 1 ) = ( v 4 + 2 v 3 + 4 v 2 + 3 v + 1 ) u 5 . {\displaystyle v(v^{4}-3v^{3}+4v^{2}-2v+1)=(v^{4}+2v^{3}+4v^{2}+3v+1)u^{5}.} Or equivalently for n = 5 {\displaystyle n=5} , let u = R ( q ) {\displaystyle u=R(q)} and v = R ( q 5 ) {\displaystyle v=R(q^{5})} and φ = 1 + 5 2 {\displaystyle \varphi ={\tfrac {1+{\sqrt {5}}}{2}}} , then u 5 = v ( v 2 − φ 2 v + φ 2 ) ( v 2 − φ − 2 v + φ − 2 ) ( v 2 + v + φ 2 ) ( v 2 + v + φ − 2 ) . 
{\displaystyle u^{5}={\frac {v\,(v^{2}-\varphi ^{2}v+\varphi ^{2})(v^{2}-\varphi ^{-2}v+\varphi ^{-2})}{(v^{2}+v+\varphi ^{2})(v^{2}+v+\varphi ^{-2})}}.} For n = 11 {\displaystyle n=11} , let u = R ( q ) {\displaystyle u=R(q)} and v = R ( q 11 ) {\displaystyle v=R(q^{11})} , then u v ( u 10 + 11 u 5 − 1 ) ( v 10 + 11 v 5 − 1 ) = ( u − v ) 12 . {\displaystyle uv(u^{10}+11u^{5}-1)(v^{10}+11v^{5}-1)=(u-v)^{12}.} Regarding n = 5 {\displaystyle n=5} , note that v 10 + 11 v 5 − 1 = ( v 2 + v − 1 ) ( v 4 − 3 v 3 + 4 v 2 − 2 v + 1 ) ( v 4 + 2 v 3 + 4 v 2 + 3 v + 1 ) . {\displaystyle v^{10}+11v^{5}-1=(v^{2}+v-1)(v^{4}-3v^{3}+4v^{2}-2v+1)(v^{4}+2v^{3}+4v^{2}+3v+1).} Ramanujan found many other interesting results regarding R ( q ) {\displaystyle R(q)} . [ 4 ] Let a , b ∈ R + {\displaystyle a,b\in \mathbb {R} ^{+}} , and φ {\displaystyle \varphi } as the golden ratio . If a b = π 2 {\displaystyle ab=\pi ^{2}} then, If 5 a b = π 2 {\displaystyle 5ab=\pi ^{2}} then, The powers of R ( q ) {\displaystyle R(q)} also can be expressed in unusual ways. For its cube , where For its fifth power, let w = R ( q ) R 2 ( q 2 ) {\displaystyle w=R(q)R^{2}(q^{2})} , then, The general quintic equation in Bring-Jerrard form: for every real value a > 1 {\displaystyle a>1} can be solved in terms of Rogers-Ramanujan continued fraction R ( q ) {\displaystyle R(q)} and the elliptic nome To solve this quintic, the elliptic modulus must first be determined as Then the real solution is where S = R [ q ( k ) ] R 2 [ q ( k ) 2 ] . {\displaystyle S=R[q(k)]\,R^{2}[q(k)^{2}].} . Recall in the previous section the 5th power of R ( q ) {\displaystyle R(q)} can be expressed by S {\displaystyle S} : Transform to, thus, and the solution is: and can not be represented by elementary root expressions. thus, Given the more familiar continued fractions with closed-forms, with golden ratio φ = 1 + 5 2 {\displaystyle \varphi ={\tfrac {1+{\sqrt {5}}}{2}}} and the solution simplifies to
https://en.wikipedia.org/wiki/Rogers–Ramanujan_continued_fraction
In mathematics , the Rogers–Ramanujan identities are two identities related to basic hypergeometric series and integer partitions . The identities were first discovered and proved by Leonard James Rogers ( 1894 ), and were subsequently rediscovered (without a proof) by Srinivasa Ramanujan some time before 1913. Ramanujan had no proof, but rediscovered Rogers's paper in 1917, and they then published a joint new proof ( Rogers & Ramanujan 1919 ). Issai Schur ( 1917 ) independently rediscovered and proved the identities. The Rogers–Ramanujan identities are and Here, ( a ; q ) n {\displaystyle (a;q)_{n}} denotes the q-Pochhammer symbol . Consider the following: The Rogers–Ramanujan identities can now be interpreted in the following way. Let n {\displaystyle n} be a non-negative integer. Alternatively, Since the terms occurring in the identity are generating functions of certain partitions , the identities make statements about partitions (decompositions) of natural numbers. The number sequences resulting from the coefficients of the Maclaurin series of the Rogers–Ramanujan functions G and H are special partition number sequences of level 5: The number sequence P G ( n ) {\displaystyle P_{G}(n)} (sequence A003114 in the OEIS ) [ 1 ] gives the number of ways to partition the natural number n into parts of the form 5a + 1 or 5a + 4 with a ∈ N 0 {\displaystyle \mathbb {N} _{0}} . Thus P G ( n ) {\displaystyle P_{G}(n)} equals the number of partitions of an integer n in which adjacent parts differ by at least 2, which in turn equals the number of partitions in which each part is congruent to 1 or 4 mod 5. And the number sequence P H ( n ) {\displaystyle P_{H}(n)} (sequence A003106 in the OEIS ) [ 2 ] analogously gives the number of ways to partition the natural number n into parts of the form 5a + 2 or 5a + 3 with a ∈ N 0 {\displaystyle \mathbb {N} _{0}} . 
Thus P H ( n ) {\displaystyle P_{H}(n)} equals the number of partitions of an integer n in which adjacent parts differ by at least 2 and the smallest part is at least 2, which in turn equals the number of partitions whose parts are congruent to 2 or 3 mod 5. This is illustrated by the examples in the following two tables: The following continued fraction R ( q ) {\displaystyle R(q)} is called the Rogers–Ramanujan continued fraction , [ 3 ] [ 4 ] and the continued fraction S ( q ) {\displaystyle S(q)} is called the alternating Rogers–Ramanujan continued fraction: R ( q ) = q 1 / 5 1 + q 1 + q 2 1 + q 3 1 + ⋯ {\displaystyle R(q)={\cfrac {q^{1/5}}{1+{\cfrac {q}{1+{\cfrac {q^{2}}{1+{\cfrac {q^{3}}{1+\cdots }}}}}}}}} S ( q ) = q 1 / 5 1 − q 1 + q 2 1 − q 3 1 + ⋯ {\displaystyle S(q)={\cfrac {q^{1/5}}{1-{\cfrac {q}{1+{\cfrac {q^{2}}{1-{\cfrac {q^{3}}{1+\cdots }}}}}}}}} The factor q 1 5 {\displaystyle q^{\frac {1}{5}}} makes these continued fractions quotients of modular functions, and thus modular themselves: The following definition applies [ 5 ] to the continued fraction just mentioned: This is the definition of the Ramanujan theta function : With this function, the continued fraction R can be expressed as follows: The connection between the continued fraction and the Rogers–Ramanujan functions was already found by Rogers in 1894 (and later independently by Ramanujan). 
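The partition-theoretic statements above can be verified by brute force: for each n, the number of partitions with adjacent parts differing by at least 2 (and, for the second identity, smallest part at least 2) must match the number of partitions into parts congruent to 1, 4 (respectively 2, 3) mod 5. A small illustrative script (the function names are my own):

```python
def gap_partitions(n, min_part):
    """Count partitions of n whose parts, in increasing order, differ by >= 2
    and whose smallest part is >= min_part."""
    def count(remaining, smallest):
        if remaining == 0:
            return 1
        return sum(count(remaining - p, p + 2)
                   for p in range(smallest, remaining + 1))
    return count(n, min_part)

def residue_partitions(n, residues):
    """Count partitions of n into parts whose residue mod 5 lies in `residues`."""
    parts = [p for p in range(1, n + 1) if p % 5 in residues]
    def count(remaining, i):
        if remaining == 0:
            return 1
        if i == len(parts):
            return 0
        skip = count(remaining, i + 1)                              # omit parts[i]
        take = count(remaining - parts[i], i) if parts[i] <= remaining else 0
        return skip + take                                          # parts may repeat
    return count(n, 0)

for n in range(1, 16):
    assert gap_partitions(n, 1) == residue_partitions(n, {1, 4})    # first identity
    assert gap_partitions(n, 2) == residue_partitions(n, {2, 3})    # second identity
print("Rogers-Ramanujan partition identities verified for n = 1..15")
```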
The continued fraction can also be expressed by the Dedekind eta function : [ 6 ] The alternating continued fraction S(q) satisfies the following identities relating it to the other Rogers–Ramanujan functions and to the Ramanujan theta function described above: The following definitions are valid for the Jacobi theta nullwert ("theta zero value") functions : And the following product definitions are identical to the sum definitions just mentioned: These three theta zero value functions are linked to each other by the Jacobi identity : The mathematicians Edmund Taylor Whittaker and George Neville Watson [ 7 ] [ 8 ] [ 9 ] established these defining identities. The Rogers–Ramanujan continued fraction functions R(x) and S(x) have the following relationships to the theta nullwert functions: The fifth-root factor can also be removed from the elliptic nome of the theta functions and transferred to an external tangent function; in this way, a formula can be created that requires only one of the three principal theta functions: A function of the elliptic nome is a modular function if it can also be expressed as an algebraic combination of Legendre's elliptic modulus and the associated complete elliptic integrals of the first kind, K and K'. (Legendre's elliptic modulus is the numerical eccentricity of the corresponding ellipse.) If one sets q = e^{2πiτ} (where the imaginary part of τ ∈ C is positive), then q^{−1/60} G(q) and q^{11/60} H(q) are modular functions of τ.
For the Rogers–Ramanujan continued fraction R(q), the following formula is valid based on the described modular modifications of G and H: These functions take the following values at the reciprocal of Gelfond's constant and at the square of this reciprocal; that is, the Rogers–Ramanujan continued fraction takes the following values at these abscissas:

R[\exp(-\pi)] = \tfrac{1}{4}({\sqrt{5}}+1)\bigl({\sqrt{5}}-{\sqrt{{\sqrt{5}}+2}}\bigr)\bigl({\sqrt{{\sqrt{5}}+2}}+{\sqrt[{4}]{5}}\bigr) = \Phi^{3/2}\operatorname{cl}({\tfrac{1}{5}}\varpi)^{-3/2}\operatorname{cl}({\tfrac{2}{5}}\varpi)^{3/2}\operatorname{cl}({\tfrac{1}{10}}\varpi)^{2}\operatorname{cl}({\tfrac{3}{10}}\varpi)\operatorname{slh}({\tfrac{2}{5}}{\sqrt{2}}\,\varpi) = \tan{\bigl[}{\tfrac{1}{4}}\arctan(2)+{\tfrac{1}{2}}\arcsin(\Phi^{-2}){\bigr]}

R[\exp(-2\pi)] = 4\sin({\tfrac{1}{20}}\pi)\sin({\tfrac{3}{20}}\pi) = \tan{\bigl[}{\tfrac{1}{4}}\arctan(2){\bigr]}

Recall the definitions of G_M and H_M given above. The Dedekind eta function identities for the functions G and H result from combining just the following two equation chains: the quotient gives exactly the Rogers–Ramanujan continued fraction, while the product leads to a simplified combination of Pochhammer symbols. The geometric mean of these two equation chains leads directly to expressions in terms of the Dedekind eta function in its Weber form. In this way the
modulated functions G_M and H_M are represented directly using only the continued fraction R and a Dedekind eta function quotient. With the Pochhammer products alone, the following identity then applies to the non-modulated functions G and H: For the Dedekind eta function according to Weber's definition [ 10 ] these formulas apply: The fourth formula describes the pentagonal number theorem [ 11 ] through its exponents. These basic definitions apply to the pentagonal numbers and the card house numbers : The fifth formula contains the regular partition numbers as coefficients. The regular partition number sequence P(n) indicates the number of ways in which a positive integer n can be split into positive integer summands. For the numbers n = 1 to n = 5 , the associated partition numbers P with all associated number partitions are listed in the following table: The following further simplification can be undertaken for the modulated functions G_M and H_M .
This connection applies especially to the Dedekind eta function of the fifth power of the elliptic nome: The following two identities involving the Rogers–Ramanujan continued fraction hold for the modulated functions G_M and H_M : The combination of the last three formulas mentioned results in the following pair of formulas:

G_M(q) = \frac{\eta_W(q^{2})^{2}}{\eta_W(q)^{2}}{\biggl[}\frac{\vartheta_{01}(q^{5})}{\vartheta_{01}(q)}{\biggr]}^{1/2}{\biggl[}\frac{5\,\vartheta_{01}(q^{5})^{2}}{4\,\vartheta_{01}(q)^{2}}-\frac{1}{4}{\biggr]}^{-1/2}R(q)^{-1/2}

H_M(q) = \frac{\eta_W(q^{2})^{2}}{\eta_W(q)^{2}}{\biggl[}\frac{\vartheta_{01}(q^{5})}{\vartheta_{01}(q)}{\biggr]}^{1/2}{\biggl[}\frac{5\,\vartheta_{01}(q^{5})^{2}}{4\,\vartheta_{01}(q)^{2}}-\frac{1}{4}{\biggr]}^{-1/2}R(q)^{1/2}

The Weber modular functions in their reduced form provide an efficient way of computing values of the Rogers–Ramanujan functions. First we introduce the reduced Weber modular functions in the following pattern: This function fulfills the following equation of sixth degree: Therefore this function w_{R5} is indeed an algebraic function. By the Abel–Ruffini theorem, however, it cannot in general be represented as an elementary expression of the eccentricity. Nevertheless, many of its values can in fact be expressed elementarily.
Four examples of this shall be given. First example: Second example:

= \Phi^{-1}\cot{\bigl[}{\tfrac{1}{4}}\pi - \arctan{\bigl(}{\tfrac{1}{3}}{\sqrt{5}}-{\tfrac{1}{3}}{\sqrt[{3}]{6{\sqrt{30}}+4{\sqrt{5}}}}+{\tfrac{1}{3}}{\sqrt[{3}]{6{\sqrt{30}}-4{\sqrt{5}}}}\,{\bigr)}{\bigr]}

Third example: Fourth example: For that function, a further expression is valid: In this way, exact eccentricity-dependent formulas for the functions G and H can be generated. The following Dedekind eta function quotient has this eccentricity dependence: This is the eccentricity-dependent formula for the continued fraction R: The last three formulas mentioned are now inserted into the final formulas of the section above:

G_M{\bigl[}q(\varepsilon){\bigr]} = \frac{\tan{\bigl[}2\arctan(\varepsilon){\bigr]}^{1/6}{\bigl[}2\,w_{R5}(\varepsilon)+1{\bigr]}^{1/4}}{5^{1/4}\,w_{R5}(\varepsilon)^{1/2}}\tan{\biggl\{}\frac{1}{2}\arctan{\biggl[}\frac{w_{R5}(\varepsilon)-2}{2\,w_{R5}(\varepsilon)+1}{\biggr]}{\biggr\}}^{-1/10}\tan{\biggl\{}\frac{1}{2}\operatorname{arccot}{\biggl[}\frac{w_{R5}(\varepsilon)-2}{2\,w_{R5}(\varepsilon)+1}{\biggr]}{\biggr\}}^{-1/5}

H_M{\bigl[}q(\varepsilon){\bigr]} = \frac{\tan{\bigl[}2\arctan(\varepsilon){\bigr]}^{1/6}{\bigl[}2\,w_{R5}(\varepsilon)+1{\bigr]}^{1/4}}{5^{1/4}\,w_{R5}(\varepsilon)^{1/2}}\tan{\biggl\{}\frac{1}{2}\arctan{\biggl[}\frac{w_{R5}(\varepsilon)-2}{2\,w_{R5}(\varepsilon)+1}{\biggr]}{\biggr\}}^{1/10}\tan{\biggl\{}\frac{1}{2}\operatorname{arccot}{\biggl[}\frac{w_{R5}(\varepsilon)-2}{2\,w_{R5}(\varepsilon)+1}{\biggr]}{\biggr\}}^{1/5}

On the left-hand side of these equations, the functions G_M(q) and H_M(q) are written directly in terms of the elliptic nome function q(ε); on the right-hand side stands an algebraic combination of the eccentricity ε. Therefore the functions G_M(q) = q^{-1/60} G(q) and H_M(q) = q^{11/60} H(q) are indeed modular functions. The general case of quintic equations in Bring–Jerrard form has no elementary solution, by the Abel–Ruffini theorem; it will now be solved using the elliptic nome of the corresponding modulus, described in a simplified way by the lemniscate elliptic functions . The real solution for all real values c ∈ R can be determined as follows: Alternatively, the same solution can be presented in this way: The mathematician Charles Hermite determined the value of the elliptic modulus k in relation to the coefficient of the absolute term of the Bring–Jerrard form. In his essay "Sur la résolution de l'équation du cinquième degré" ( Comptes rendus ) he described the calculation method for the elliptic modulus in terms of the absolute term. The Italian version of his essay, "Sulla risoluzione delle equazioni del quinto grado", contains on page 258 the Bring–Jerrard equation formula above, which can be solved directly with the functions based on the corresponding elliptic modulus. This corresponding elliptic modulus can be worked out using the square of the hyperbolic lemniscate cotangent.
For the derivation of this, see the article on the lemniscate elliptic functions . The elliptic nome of this corresponding modulus is denoted here by the letter Q. The abbreviation ctlh stands for the hyperbolic lemniscate cotangent, and the abbreviation aclh for the hyperbolic lemniscate areacosine . Two examples of this solution algorithm follow. First calculation example: Quintic Bring–Jerrard equation: Solution formula: Decimal places of the nome: Decimal places of the solution: Second calculation example: Quintic Bring–Jerrard equation: Solution: Decimal places of the nome: Decimal places of the solution: The Rogers–Ramanujan identities appeared in Baxter's solution of the hard hexagon model in statistical mechanics. The demodularized standard form of Ramanujan's continued fraction, unanchored from the modular form, is as follows: James Lepowsky and Robert Lee Wilson were the first to prove the Rogers–Ramanujan identities using entirely representation-theoretic techniques, working with level 3 modules for the affine Lie algebra \widehat{\mathfrak{sl}_2}. In the course of this proof they invented and used what they called Z-algebras. Lepowsky and Wilson's approach is universal, in that it is able to treat all affine Lie algebras at all levels, and it can be used to find (and prove) new partition identities. The first such example is Capparelli's identities, discovered by Stefano Capparelli using level 3 modules for the affine Lie algebra A_2^{(2)}.
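The quintic calculation examples above are left symbolic here. As a cross-check for any closed-form solution obtained via the elliptic nome, the real root of a Bring–Jerrard quintic can also be found numerically. The sketch below assumes the normalization x^5 + x = c (one common choice; the text does not fix one) and uses plain bisection rather than the elliptic-function method:

```python
# Numerical cross-check for a Bring-Jerrard quintic x^5 + x = c
# (assumed normalization). This is a bisection sketch, not the
# elliptic-function solution method described in the text.

def bring_jerrard_root(c, lo=-10.0, hi=10.0, iters=200):
    """Real root of x^5 + x - c = 0; f is strictly increasing,
    so the root is unique and bisection always converges."""
    f = lambda x: x**5 + x - c
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = bring_jerrard_root(1.0)
assert abs(x**5 + x - 1.0) < 1e-9   # residual check of the root
```

A value obtained from the elliptic closed form should agree with this numerical root to within floating-point precision.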
https://en.wikipedia.org/wiki/Rogers–Ramanujan_identities
A rogue access point is a wireless access point that has been installed on a secure network without explicit authorization from a local network administrator, [ 1 ] whether added by a well-meaning employee or by a malicious attacker. Although it is technically easy for a well-meaning employee to install a " soft access point " or an inexpensive wireless router —perhaps to make access from mobile devices easier—it is likely that they will configure this as "open", or with poor security, and potentially allow access to unauthorized parties. If an attacker installs an access point they are able to run various types of vulnerability scanners , and rather than having to be physically inside the organization, can attack remotely—perhaps from a reception area, adjacent building, car park, or with a high gain antenna , even from several miles away. When a victim connects, the attacker can use network sniffing tools to steal and monitor data packets and possibly find out credentials from the malicious connection. To prevent the installation of rogue access points, organizations can install wireless intrusion prevention systems to monitor the radio spectrum for unauthorized access points. A large number of wireless access points can typically be sensed in the airspace of an enterprise facility; these include managed access points in the secure network as well as access points in the neighborhood. A wireless intrusion prevention system facilitates the job of auditing these access points on a continuous basis to learn whether there are any rogue access points among them. In order to detect rogue access points, two conditions need to be tested: The first of the above two conditions is easy to test—compare the wireless MAC address (also known as the BSSID) of the access point against the managed access point BSSID list.
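The first condition's test can be sketched in a few lines; the MAC addresses and function name below are invented for illustration:

```python
# Illustrative sketch of the first detection condition: flag any
# scanned BSSID that is not in the managed access point list.
# (All MAC addresses here are made up.)

def find_unmanaged(scanned_bssids, managed_bssids):
    """Return scanned BSSIDs that are not managed access points.
    These are only candidates: whether each one is actually wired to
    the secure network (the second, harder condition) must still be
    tested separately."""
    managed = {b.lower() for b in managed_bssids}
    return sorted(b for b in set(scanned_bssids) if b.lower() not in managed)

managed = ["00:11:22:33:44:55", "00:11:22:33:44:66"]
scanned = ["00:11:22:33:44:55", "de:ad:be:ef:00:01", "00:11:22:33:44:66"]
print(find_unmanaged(scanned, managed))
```

This only identifies unmanaged access points; as the text goes on to explain, deciding whether an unmanaged access point is connected to the secure network is considerably harder.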
However, automated testing of the second condition can become challenging in light of the following factors: a) the need to cover different types of access point devices such as bridging, NAT (router), unencrypted wireless links, encrypted wireless links, different types of relations between wired and wireless MAC addresses of access points, and soft access points; b) the necessity to determine access point connectivity with acceptable response time in large networks; and c) the requirement to avoid both false positives and false negatives, which are described below. False positives occur when the wireless intrusion prevention system detects an access point not actually connected to the secure network as a wired rogue. Frequent false positives waste administrative bandwidth spent chasing them. The possibility of false positives also hinders enabling automated blocking of wired rogues, for fear of blocking a friendly neighborhood access point. False negatives occur when the wireless intrusion prevention system fails to detect an access point actually connected to the secure network as a wired rogue. False negatives result in security holes. If an unauthorized access point is found connected to the secure network, it is a rogue access point of the first kind (also called a "wired rogue"). On the other hand, if the unauthorized access point is found not connected to the secure network, it is an external access point. Among the external access points, if any is found to be mischievous or a potential risk (e.g., one whose settings can attract or have already attracted secure network wireless clients), it is tagged as a rogue access point of the second kind, often called an " evil twin ". A "soft access point" (soft AP) can be set up on a Wi-Fi adapter using, for example, Windows' virtual Wi-Fi or Intel's My WiFi.
This makes it possible, without the need of a physical Wi-Fi router, to share the wired network access of one computer with wireless clients connected to that soft AP. If an employee sets up such a soft AP on their machine without coordinating with the IT department and shares the corporate network through it, then this soft AP becomes a rogue AP. [ 2 ]
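The taxonomy described above (managed AP, wired rogue, external AP, evil twin) can be sketched as a small classifier; the boolean inputs are hypothetical simplifications of what a wireless intrusion prevention system would actually have to measure:

```python
# Hypothetical sketch of the access point taxonomy described above.
# Real systems must infer these booleans from traffic analysis; here
# they are simply given as inputs.

def classify_ap(authorized: bool, on_secure_wire: bool, mischievous: bool) -> str:
    """Classify a detected access point:
    - authorized                                   -> "managed AP"
    - unauthorized, wired to the secure network    -> "wired rogue"
    - unauthorized, not wired, luring clients      -> "evil twin"
    - unauthorized, not wired, benign              -> "external AP"
    """
    if authorized:
        return "managed AP"
    if on_secure_wire:
        return "wired rogue"
    return "evil twin" if mischievous else "external AP"

assert classify_ap(False, True, False) == "wired rogue"
assert classify_ap(False, False, True) == "evil twin"
```

An employee's soft AP sharing the corporate network would come out as a "wired rogue" under this scheme: unauthorized and connected to the secure wire.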
https://en.wikipedia.org/wiki/Rogue_access_point
Rogue waves (also known as freak waves or killer waves ) are large and unpredictable surface waves that can be extremely dangerous to ships and isolated structures such as lighthouses . [ 1 ] They are distinct from tsunamis , which are long-wavelength waves, often almost unnoticeable in deep water, caused by the displacement of water due to other phenomena (such as earthquakes ). A rogue wave at the shore is sometimes called a sneaker wave . [ 2 ] In oceanography , rogue waves are more precisely defined as waves whose height is more than twice the significant wave height ( H s or SWH), which is itself defined as the mean of the largest third of waves in a wave record. Rogue waves do not appear to have a single distinct cause but occur where physical factors such as high winds and strong currents cause waves to merge to create a single large wave. [ 1 ] Research published in 2023 suggests sea state crest-trough correlation leading to linear superposition may be a dominant factor in predicting the frequency of rogue waves. [ 3 ] Among other causes, studies of nonlinear waves such as the Peregrine soliton , and waves modeled by the nonlinear Schrödinger equation (NLS), suggest that modulational instability can create an unusual sea state where a "normal" wave begins to draw energy from other nearby waves, and briefly becomes very large. Such phenomena are not limited to water and are also studied in liquid helium, nonlinear optics, and microwave cavities. A 2012 study reported that in addition to the Peregrine soliton reaching up to about three times the height of the surrounding sea, a hierarchy of higher-order wave solutions could also exist having progressively larger sizes, and demonstrated the creation of a "super rogue wave" (a breather around five times higher than surrounding waves) in a water-wave tank .
[ 4 ] A 2012 study supported the existence of oceanic rogue holes, the inverse of rogue waves, where the depth of the hole can reach more than twice the significant wave height. [ 5 ] Although it is often claimed that rogue holes have never been observed in nature despite replication in wave tank experiments, there is a rogue hole recording from an oil platform in the North Sea, revealed in Kharif et al. [ 6 ] The same source also reveals a recording of what is known as the 'Three Sisters'. Rogue waves are waves in open water that are much larger than surrounding waves. More precisely, rogue waves have a height which is more than twice the significant wave height ( H s or SWH). They can be caused when currents or winds cause waves to travel at different speeds, and the waves merge to create a single large wave; or when nonlinear effects cause energy to move between waves to create a single extremely large wave. Once considered mythical and lacking hard evidence, rogue waves are now proven to exist and are known to be natural ocean phenomena. Eyewitness accounts from mariners and damage inflicted on ships have long suggested they occur. Still, the first scientific evidence of their existence came with the recording of a rogue wave by the Gorm platform in the central North Sea in 1984. A stand-out wave was detected with a wave height of 11 m (36 ft) in a relatively low sea state. [ 7 ] However, what caught the attention of the scientific community was the digital measurement of a rogue wave at the Draupner platform in the North Sea on January 1, 1995; called the " Draupner wave ", it had a recorded maximum wave height of 25.6 m (84 ft) and peak elevation of 18.5 m (61 ft). During that event, minor damage was inflicted on the platform far above sea level, confirming the accuracy of the wave-height reading made by a downwards pointing laser sensor. 
[ 8 ] The existence of rogue waves has since been confirmed by video and photographs, satellite imagery , radar of the ocean surface, [ 9 ] stereo wave imaging systems, [ 10 ] pressure transducers on the sea-floor, and oceanographic research vessels. [ 11 ] In February 2000, a British oceanographic research vessel, the RRS Discovery , sailing in the Rockall Trough west of Scotland, encountered the largest waves ever recorded by any scientific instruments in the open ocean, with an SWH of 18.5 metres (61 ft) and individual waves up to 29.1 metres (95 ft). [ 12 ] In 2004, scientists using three weeks of radar images from European Space Agency satellites found ten rogue waves, each 25 metres (82 ft) or higher. [ 13 ] A rogue wave is a natural ocean phenomenon that is not caused by land movement, lasts only briefly, occurs in a limited location, and most often happens far out at sea. [ 1 ] Rogue waves are considered rare but potentially very dangerous, since they can involve the spontaneous formation of massive waves far beyond the usual expectations of ship designers , and can overwhelm the usual capabilities of ocean-going vessels, which are not designed for such encounters. Rogue waves are, therefore, distinct from tsunamis . [ 1 ] Tsunamis are caused by a massive displacement of water, often resulting from sudden movements of the ocean floor , after which they propagate at high speed over a wide area. They are nearly unnoticeable in deep water and only become dangerous as they approach the shoreline and the ocean floor becomes shallower; [ 14 ] therefore, tsunamis do not present a threat to shipping at sea (e.g., the only ships lost in the 2004 Asian tsunami were in port). These are also different from the wave known as a " hundred-year wave ", which is a purely statistical description of a particularly high wave with a 1% chance of occurring in any given year in a particular body of water.
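The oceanographic definition given earlier (a rogue wave exceeds twice the significant wave height, itself the mean of the largest third of the record) can be sketched directly; the wave record below is invented, though its numbers echo the Draupner measurements:

```python
# Sketch of the definitions above: significant wave height (SWH) as the
# mean of the largest third of a wave record, and the rogue criterion
# H > 2 * SWH. The wave record is invented illustrative data (metres).

def significant_wave_height(heights):
    """SWH = mean of the largest third of the wave record."""
    s = sorted(heights, reverse=True)
    top_third = s[:max(1, len(s) // 3)]
    return sum(top_third) / len(top_third)

def rogue_waves(heights):
    """Return the waves in the record exceeding twice the SWH."""
    swh = significant_wave_height(heights)
    return [h for h in heights if h > 2 * swh]

record = [5.2, 6.1, 4.8, 5.9, 6.4, 5.5, 4.9, 6.0, 25.6]
swh = significant_wave_height(record)
print(round(swh, 2), rogue_waves(record))
```

Note that the single extreme wave inflates the SWH itself (the largest third here averages 12.7 m), yet the 25.6 m wave still clears the 2×SWH threshold, as the Draupner wave did in a roughly 12 m sea.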
Rogue waves have now been proven to cause the sudden loss of some ocean-going vessels. Well-documented instances include the freighter MS München , lost in 1978. [ 15 ] Rogue waves have been implicated in the loss of other vessels, including the Ocean Ranger , a semisubmersible mobile offshore drilling unit that sank in Canadian waters on 15 February 1982. [ 16 ] In 2007, the United States' National Oceanic and Atmospheric Administration (NOAA) compiled a catalogue of more than 50 historical incidents probably associated with rogue waves. [ 17 ] In 1826, French scientist and naval officer Jules Dumont d'Urville reported waves as high as 33 m (108 ft) in the Indian Ocean with three colleagues as witnesses, yet he was publicly ridiculed by fellow scientist François Arago . In that era, the thought was widely held that no wave could exceed 9 m (30 ft). [ 18 ] [ 19 ] Author Susan Casey wrote that much of that disbelief came because there were very few people who had seen a rogue wave and survived ; until the advent of steel double-hulled ships of the 20th century, "people who encountered 100-foot [30 m] rogue waves generally weren't coming back to tell people about it." [ 20 ] Unusual waves have been studied scientifically for many years (for example, John Scott Russell 's wave of translation , an 1834 study of a soliton wave). Still, these were not linked conceptually to sailors' stories of encounters with giant rogue ocean waves, as the latter were believed to be scientifically implausible. Since the 19th century, oceanographers, meteorologists, engineers, and ship designers have used a statistical model known as the Gaussian function (or Gaussian Sea or standard linear model) to predict wave height, on the assumption that wave heights in any given sea are tightly grouped around a central value equal to the average of the largest third, known as the significant wave height (SWH). 
[ 21 ] In a storm sea with an SWH of 12 m (39 ft), the model suggests that a wave higher than 15 m (49 ft) would hardly ever occur. It suggests that one of 30 m (98 ft) could indeed happen, but only once in 10,000 years. This basic assumption was well accepted, though acknowledged to be an approximation. Using a Gaussian form to model waves has been the sole basis of virtually every text on that topic for the past 100 years. [ 21 ] [ 22 ] The first known scientific article on "freak waves" was written by Professor Laurence Draper in 1964. In that paper, he documented the efforts of the National Institute of Oceanography in the early 1960s to record wave height, and the highest wave recorded at that time, which was about 20 metres (67 ft). Draper also described freak wave holes . [ 23 ] [ 24 ] [ 25 ] Before the Draupner wave was recorded in 1995, early research had already made significant strides in understanding extreme wave interactions. In 1979, Dik Ludikhuize and Henk Jan Verhagen at TU Delft successfully generated cross-swell waves in a wave basin. Although only monochromatic waves could be produced at the time, their findings, reported in 1981, showed that individual wave heights could be added together even when exceeding breaker criteria. This phenomenon provided early evidence that waves could grow significantly larger than anticipated by conventional theories of wave breaking. [ 26 ] This work highlighted that in cases of crossing waves, wave steepness could increase beyond usual limits. Although the waves studied were not as extreme as rogue waves, the research provided an understanding of how multidirectional wave interactions could lead to extreme wave heights, a key concept in the formation of rogue waves. The crossing wave phenomenon studied in the Delft laboratory therefore had direct relevance to the unpredictable rogue waves encountered at sea.
[ 27 ] Research published in 2024 by TU Delft and other institutions has subsequently demonstrated that waves coming from multiple directions can grow up to four times steeper than previously imagined. [ 28 ] The Draupner wave was the first rogue wave to be detected by a measuring instrument . The wave was recorded in 1995 at Unit E of the Draupner platform , a gas pipeline support complex located in the North Sea about 160 km (100 miles) southwest of the southern tip of Norway. [ 29 ] [ a ] At 15:24 UTC on 1 January 1995, the device recorded a rogue wave with a maximum wave height of 25.6 m (84 ft). Peak elevation above still water level was 18.5 m (61 ft). [ 30 ] The reading was confirmed by the other sensors. [ 31 ] In the area, the SWH at the time was about 12 m (39 ft), so the Draupner wave was more than twice as tall and steep as its neighbors, with characteristics that fell outside any known wave model. The wave caused enormous interest in the scientific community. [ 29 ] [ 31 ] Following the evidence of the Draupner wave, research in the area became widespread. The first scientific study to comprehensively prove that freak waves exist, clearly outside the range of Gaussian waves, was published in 1997. [ 32 ] Some research confirms that observed wave height distribution in general follows the Rayleigh distribution well. Still, in shallow waters during high-energy events, extremely high waves are rarer than this particular model predicts. [ 13 ] From about 1997, most leading authors acknowledged the existence of rogue waves, with the caveat that wave models could not replicate rogue waves. [ 18 ] Statoil researchers presented a paper in 2000, collating evidence that freak waves were not the rare realizations of a typical or slightly non-Gaussian sea surface population (classical extreme waves) but were the typical realizations of a rare and strongly non-Gaussian sea surface population of waves (freak extreme waves).
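Under the Rayleigh wave-height model mentioned above, the probability that an individual wave exceeds height h has the closed form P(H > h) = exp(−2h²/SWH²), a standard textbook formula. A minimal sketch of what it predicts for waves above twice the significant wave height:

```python
from math import exp

def rayleigh_exceedance(h, swh):
    """P(individual wave height > h) under the Rayleigh wave-height
    model: P = exp(-2 * h^2 / SWH^2). Illustrative sketch of why waves
    above twice the significant wave height are rare under this model."""
    return exp(-2.0 * (h / swh) ** 2)

swh = 12.0                                    # storm sea, metres
p_rogue = rayleigh_exceedance(2 * swh, swh)   # h = 2 * SWH
print(p_rogue)  # e^-8, about 3.4e-4
```

For h = 2·SWH the ratio h/SWH is fixed, so the result e⁻⁸ ≈ 3.4×10⁻⁴ is independent of sea state: roughly three waves in ten thousand, in line with the frequency some researchers quote for rogue waves. The point of the surrounding text is precisely that observed extreme waves depart from this Rayleigh/Gaussian baseline.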
[ 33 ] Leading researchers from around the world attended the first Rogue Waves 2000 workshop, held in Brest in November 2000. [ 34 ] In 2000, the British oceanographic vessel RRS Discovery recorded a 29 m (95 ft) wave off the coast of Scotland near Rockall . This was a scientific research vessel fitted with high-quality instruments. Subsequent analysis determined that under severe gale-force conditions with wind speeds averaging 21 metres per second (41 kn), a ship-borne wave recorder measured individual waves up to 29.1 m (95.5 ft) from crest to trough, and a maximum SWH of 18.5 m (60.7 ft). These were some of the largest waves recorded by scientific instruments up to that time. The authors noted that modern wave prediction models are known to significantly under-predict extreme sea states for waves with a significant height (H s ) above 12 m (39.4 ft). The analysis of this event took a number of years and noted that "none of the state-of-the-art weather forecasts and wave models—the information upon which all ships, oil rigs, fisheries, and passenger boats rely—had predicted these behemoths." In simple terms, a scientific model (and also ship design method) to describe the waves encountered did not exist. This finding was widely reported in the press, which reported that "according to all of the theoretical models at the time under this particular set of weather conditions, waves of this size should not have existed". [ 1 ] [ 12 ] [ 29 ] [ 35 ] [ 36 ] In 2004, the ESA MaxWave project identified more than 10 individual giant waves above 25 m (82 ft) in height during a short survey period of three weeks in a limited area of the South Atlantic. [ 37 ] [ 38 ] By 2007, it was further proven via satellite radar studies that waves with crest-to-trough heights of 20 to 30 m (66 to 98 ft) occur far more frequently than previously thought. [ 39 ] Rogue waves are now known to occur in all of the world's oceans many times each day.
Rogue waves are now accepted as a common phenomenon. Professor Akhmediev of the Australian National University has stated that 10 rogue waves exist in the world's oceans at any moment. [ 40 ] Some researchers have speculated that roughly three of every 10,000 waves on the oceans achieve rogue status, yet in certain spots—such as coastal inlets and river mouths—these extreme waves can make up three of every 1,000 waves, because wave energy can be focused. [ 41 ] Rogue waves may also occur in lakes . A phenomenon known as the "Three Sisters" is said to occur in Lake Superior when a series of three large waves forms. The second wave hits the ship's deck before the first wave clears. The third incoming wave adds to the two accumulated backwashes and suddenly overloads the ship deck with large amounts of water. The phenomenon is one of various theorized causes of the sinking of the SS Edmund Fitzgerald on Lake Superior in November 1975. [ 42 ] A 2012 study reported that in addition to the Peregrine soliton reaching up to about 3 times the height of the surrounding sea, a hierarchy of higher-order wave solutions could also exist having progressively larger sizes, and demonstrated the creation of a "super rogue wave"—a breather around 5 times higher than surrounding waves—in a water tank . [ 4 ] Also in 2012, researchers at the Australian National University proved the existence of "rogue wave holes", an inverted profile of a rogue wave. Their research created rogue wave holes on the water surface in a water-wave tank. [ 5 ] In maritime folklore , stories of rogue holes are as common as stories of rogue waves. They had followed from theoretical analysis but had never been proven experimentally. "Rogue wave" has become a near-universal term used by scientists to describe isolated, large-amplitude waves that occur more frequently than expected for normal, Gaussian-distributed, statistical events. Rogue waves appear ubiquitous and are not limited to the oceans.
They appear in other contexts and have recently been reported in liquid helium, nonlinear optics, and microwave cavities. Marine researchers now universally accept that these waves belong to a specific kind of sea wave, not considered by conventional models for sea wind waves. [ 43 ] [ 44 ] [ 45 ] [ 46 ] A 2015 paper studied the wave behavior around a rogue wave, including optical rogue waves and the Draupner wave, and concluded that "rogue events do not necessarily appear without warning but are often preceded by a short phase of relative order". [ 47 ] In 2019, researchers succeeded in producing a wave with similar characteristics to the Draupner wave (steepness and breaking), and proportionately greater height, using multiple wavetrains meeting at an angle of 120°. Previous research had strongly suggested that the wave resulted from an interaction between waves from different directions ("crossing seas"). Their research also highlighted that wave-breaking behavior was not necessarily as expected. If waves met at an angle less than about 60°, then the top of the wave "broke" sideways and downwards (a "plunging breaker"). Still, from about 60° and greater, the wave began to break vertically upwards, creating a peak that did not reduce the wave height as usual but instead increased it (a "vertical jet"). They also showed that the steepness of rogue waves could be reproduced in this manner. Lastly, they observed that optical instruments such as the laser used for the Draupner wave might be somewhat confused by the spray at the top of the wave if it broke, and this could lead to uncertainties of around 1.0 to 1.5 m (3 to 5 ft) in the wave height. They concluded, "... the onset and type of wave breaking play a significant role and differ significantly for crossing and noncrossing waves. Crucially, breaking becomes less crest-amplitude limiting for sufficiently large crossing angles and involves the formation of near-vertical jets".
[ 48 ] [ 49 ] On 17 November 2020, a buoy moored in 45 metres (148 ft) of water on Amphitrite Bank in the Pacific Ocean 7 kilometres (4.3 mi; 3.8 nmi) off Ucluelet , Vancouver Island , British Columbia , Canada, at 48°54′N 125°36′W recorded a lone 17.6-metre (58 ft) tall wave among surrounding waves about 6 metres (20 ft) in height. [ 50 ] The wave exceeded the surrounding significant wave height by a factor of 2.93. When the wave's detection was revealed to the public in February 2022, one scientific paper [ 50 ] and many news outlets christened the event "the most extreme rogue wave event ever recorded" and a "once-in-a-millennium" event, claiming that at about three times the height of the waves around it, the Ucluelet wave set a record as the most extreme rogue wave ever recorded at the time in terms of its height in proportion to surrounding waves, and that a wave three times the height of those around it was estimated to occur on average only once every 1,300 years worldwide. [ 51 ] [ 52 ] [ 53 ] The Ucluelet event generated controversy. Analysis of scientific papers dealing with rogue wave events since 2005 revealed the claims for the record-setting nature and rarity of the wave to be incorrect. The paper Oceanic rogue waves [ 54 ] by Dysthe, Krogstad and Muller reports on an event in the Black Sea in 2004 which was far more extreme than the Ucluelet wave: there, a Datawell Waverider buoy recorded a wave 10.32 metres (33.9 ft) in height, 3.91 times the significant wave height, as detailed in the paper. Thorough inspection of the buoy after the recording revealed no malfunction. The authors of the paper that reported the Black Sea event [ 55 ] assessed the wave as "anomalous" and suggested several theories on how such an extreme wave may have arisen. The Black Sea event also differs in that, unlike the Ucluelet wave, it was recorded with a high-precision instrument. 
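The ratio comparisons used throughout this controversy follow the common operational definition of a rogue wave: an individual wave whose height exceeds roughly twice the significant wave height. A minimal sketch of that check (the 2.0 threshold is the conventional criterion, not a figure taken from the cited papers):

```python
def rogue_ratio(wave_height_m, significant_height_m):
    """Ratio of an individual wave's height to the significant wave height."""
    return wave_height_m / significant_height_m

def is_rogue(wave_height_m, significant_height_m, threshold=2.0):
    # Common operational criterion: height more than ~2x the significant
    # wave height (threshold is an assumption, not from the cited papers).
    return rogue_ratio(wave_height_m, significant_height_m) > threshold

# Ucluelet, 2020: a 17.6 m wave among ~6 m seas -> ratio ~2.93, as reported.
print(round(rogue_ratio(17.6, 6.0), 2), is_rogue(17.6, 6.0))  # 2.93 True
```

By the same arithmetic, the Black Sea event's quoted ratio of 3.91 makes it the more extreme event even though its absolute height was smaller.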
The Oceanic rogue waves paper also reports even more extreme waves from a different source, but these were possibly overestimated, as assessed by the data's own authors. The Black Sea wave occurred in relatively calm weather. Furthermore, a paper [ 56 ] by I. Nikolkina and I. Didenkulova also reveals waves more extreme than the Ucluelet wave. In the paper, they infer that in 2006 a 21-metre (69 ft) wave appeared in the Pacific Ocean off the Port of Coos Bay , Oregon, with a significant wave height of 3.9 metres (13 ft). The ratio is 5.38, almost twice that of the Ucluelet wave. The paper also reveals the MV Pont-Aven incident as marginally more extreme than the Ucluelet event. The paper also assesses a report of an 11-metre (36 ft) wave in a significant wave height of 1.9 metres (6 ft 3 in), but the authors cast doubt on that claim. A paper written by Craig B. Smith in 2007 reported on an incident in the North Atlantic, in which the submarine Grouper was hit by a 30-meter wave in calm seas. [ 57 ] Because the phenomenon of rogue waves is still a matter of active research, clearly stating what the most common causes are or whether they vary from place to place is premature. The areas of highest predictable risk appear to be where a strong current runs counter to the primary direction of travel of the waves; the area near Cape Agulhas off the southern tip of Africa is one such area. The warm Agulhas Current runs to the southwest, while the dominant winds are westerlies , but since this thesis does not explain the existence of all waves that have been detected, several different mechanisms are likely, with localized variation. Suggested mechanisms for freak waves include: The spatiotemporal focusing seen in the NLS equation can also occur when the non-linearity is removed. In this case, focusing is primarily due to different waves coming into phase rather than any energy-transfer processes. Further analysis of rogue waves using a fully nonlinear model by R. H. 
Gibbs (2005) brings this mode of focusing into question, as it is shown that a typical wave group focuses in such a way as to produce a significant wall of water at the cost of a reduced height. A rogue wave, and the deep trough commonly seen before and after it, may last only for some minutes before either breaking or reducing in size again. Rather than occurring singly, a rogue wave may be part of a wave packet consisting of a few rogue waves. Such rogue wave groups have been observed in nature. [ 73 ] A number of research programmes whose focus is or was on rogue waves are currently underway or have concluded. Researchers at UCLA observed rogue-wave phenomena in microstructured optical fibers near the threshold of soliton supercontinuum generation and characterized the initial conditions for generating rogue waves in any medium. [ 95 ] Research in optics has pointed out the role played by a Peregrine soliton that may explain those waves that appear and disappear without leaving a trace. [ 96 ] [ 97 ] Rogue waves in other media appear to be ubiquitous and have also been reported in liquid helium , in quantum mechanics, [ 98 ] in nonlinear optics , in microwave cavities, [ 99 ] in Bose–Einstein condensates , [ 100 ] in heat and diffusion, [ 101 ] and in finance. [ 102 ] [ 103 ] Many of these encounters are reported only in the media, and are not examples of open-ocean rogue waves. Often, in popular culture, an endangering huge wave is loosely denoted as a "rogue wave", while it has not been established that the reported event is a rogue wave in the scientific sense – i.e. one of a very different nature from the surrounding waves in that sea state, and with a very low probability of occurrence. This section lists a limited selection of notable incidents. The loss of the MS München in 1978 provided some of the first physical evidence of the existence of rogue waves. 
München was a state-of-the-art cargo ship with multiple water-tight compartments and an expert crew. She was lost with all crew, and the wreck has never been found. The only evidence found was the starboard lifeboat recovered from floating wreckage sometime later. The lifeboats hung from forward and aft blocks 20 m (66 ft) above the waterline. The pins had been bent back from forward to aft, indicating the lifeboat hanging below it had been struck by a wave that had run from fore to aft of the ship and had torn the lifeboat from the ship. To exert such force, the wave must have been considerably higher than 20 m (66 ft). At the time of the inquiry, the existence of rogue waves was considered so statistically unlikely as to be near impossible. Consequently, the Maritime Court investigation concluded that the severe weather had somehow created an "unusual event" that had led to the sinking of the München . [ 15 ] [ 128 ] In 1980, the MV Derbyshire was lost during Typhoon Orchid south of Japan, along with all of her crew. The Derbyshire was an ore-bulk oil combination carrier built in 1976. At 91,655 gross register tons, she remains the largest British ship ever lost at sea. The wreck was found in June 1994. The survey team deployed a remotely operated vehicle to photograph the wreck. A private report published in 1998 prompted the British government to reopen a formal investigation into the sinking. The investigation included a comprehensive survey by the Woods Hole Oceanographic Institution , which took 135,774 pictures of the wreck during two surveys. The formal forensic investigation concluded that the ship sank because of structural failure and absolved the crew of any responsibility. Most notably, the report determined the detailed sequence of events that led to the structural failure of the vessel. A third comprehensive analysis was subsequently done by Douglas Faulkner, professor of marine architecture and ocean engineering at the University of Glasgow . 
His 2001 report linked the loss of the Derbyshire with the emerging science on freak waves, concluding that the Derbyshire was almost certainly destroyed by a rogue wave. [ 129 ] [ 130 ] [ 131 ] [ 132 ] [ 133 ] Work by sailor and author Craig B. Smith in 2007 confirmed prior forensic work by Faulkner in 1998 and determined that the Derbyshire was exposed to a hydrostatic pressure of a "static head" of water of about 20 m (66 ft) with a resultant static pressure of 201 kilopascals (2.01 bar; 29.2 psi). [ b ] This is in effect 20 m (66 ft) of seawater (possibly a super rogue wave) [ c ] flowing over the vessel. The deck cargo hatches on the Derbyshire were determined to be the key point of failure when the rogue wave washed over the ship. The design of the hatches only allowed for a static pressure less than 2 m (6.6 ft) of water or 17.1 kPa (0.171 bar; 2.48 psi), [ d ] meaning that the typhoon load on the hatches was more than 10 times the design load. The forensic structural analysis of the wreck of the Derbyshire is now widely regarded as irrefutable. [ 39 ] In addition, fast-moving waves are now known to also exert extremely high dynamic pressure. Plunging or breaking waves are known to cause short-lived impulse pressure spikes called Gifle peaks . These can reach pressures of 200 kPa (2.0 bar; 29 psi) (or more) for milliseconds, which is sufficient pressure to lead to brittle fracture of mild steel. Evidence of failure by this mechanism was also found on the Derbyshire . [ 129 ] Smith documented scenarios where hydrodynamic pressure up to 5,650 kPa (56.5 bar; 819 psi) or over 500 metric tonnes/m 2 could occur. [ e ] [ 39 ] In 2004, an extreme wave was recorded impacting the Alderney Breakwater, Alderney , in the Channel Islands . This breakwater is exposed to the Atlantic Ocean. The peak pressure recorded by a shore-mounted transducer was 745 kPa (7.45 bar; 108.1 psi). 
This pressure far exceeds almost any design criteria for modern ships, and this wave would have destroyed almost any merchant vessel. [ 7 ] In November 1997, the International Maritime Organization (IMO) adopted new rules covering survivability and structural requirements for bulk carriers of 150 m (490 ft) and upwards. The bulkhead and double bottom must be strong enough to allow the ship to survive flooding in hold one unless loading is restricted. [ 134 ] Rogue waves present considerable danger for several reasons: they are rare, unpredictable, may appear suddenly or without warning, and can impact with tremendous force. A 12 m (39 ft) wave in the usual "linear" model would have a breaking force of 6 metric tons per square metre [t/m 2 ] (8.5 psi). Although modern ships are typically designed to tolerate a breaking wave of 15 t/m 2 , a rogue wave can dwarf both of these figures with a breaking force far exceeding 100 t/m 2 . [ 117 ] Smith presented calculations using the International Association of Classification Societies (IACS) Common Structural Rules for a typical bulk carrier. [ f ] [ 39 ] Peter Challenor, a scientist from the National Oceanography Centre in the United Kingdom, was quoted in Casey 's book in 2010 as saying: "We don't have that random messy theory for nonlinear waves. At all." He added, "People have been working actively on this for the past 50 years at least. We don't even have the start of a theory." [ 29 ] [ 35 ] In 2006, Smith proposed that the IACS recommendation 34 pertaining to standard wave data be modified so that the minimum design wave height be increased to 19.8 m (65 ft). He presented analysis that sufficient evidence exists to conclude that 20.1 m (66 ft) high waves can be experienced in the 25-year lifetime of oceangoing vessels, and that 29.9 m (98 ft) high waves are less likely, but not out of the question. 
Therefore, a design criterion based on 11.0 m (36 ft) high waves seems inadequate when the risk of losing crew and cargo is considered. Smith also proposed that the dynamic force of wave impacts should be included in the structural analysis. [ 135 ] The Norwegian offshore standards now consider extreme severe wave conditions and require that a 10,000-year wave does not endanger the ships' integrity. [ 136 ] W. Rosenthal noted that as of 2005, rogue waves were not explicitly accounted for in Classification Society's rules for ships' design. [ 136 ] As an example, DNV GL , one of the world's largest international certification bodies and classification society with main expertise in technical assessment, advisory, and risk management publishes their Structure Design Load Principles which remain largely based on the Significant Wave Height, and as of January 2016, still have not included any allowance for rogue waves. [ 137 ] The U.S. Navy historically took the design position that the largest wave likely to be encountered was 21.4 m (70 ft). Smith observed in 2007 that the navy now believes that larger waves can occur and the possibility of extreme waves that are steeper (i.e. do not have longer wavelengths) is now recognized. The navy has not had to make any fundamental changes in ship design due to new knowledge of waves greater than 21.4 m because the ships are built to higher standards than required. [ 39 ] The more than 50 classification societies worldwide each has different rules. However, most new ships are built to the standards of the 12 members of the International Association of Classification Societies , which implemented two sets of common structural rules - one for oil tankers and one for bulk carriers, in 2006. These were later harmonised into a single set of rules. [ 138 ]
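The pressure figures quoted above for the Derbyshire and for breaking-wave loads can be cross-checked with elementary hydrostatics (p = ρgh) and unit conversions. A small sketch, assuming a typical seawater density of 1025 kg/m³ (the reports do not state the density used):

```python
RHO_SEAWATER = 1025.0   # kg/m^3, typical value (assumption)
G = 9.81                # m/s^2
KPA_PER_PSI = 6.89476

def static_head_kpa(head_m):
    """Hydrostatic pressure (kPa) under a column of seawater of the given height."""
    return RHO_SEAWATER * G * head_m / 1000.0

def tonnes_per_m2_to_psi(load_t_per_m2):
    """Convert a breaking-wave load in metric tonnes per square metre to psi."""
    kpa = load_t_per_m2 * 1000.0 * G / 1000.0   # 1 t/m^2 = ~9.81 kPa
    return kpa / KPA_PER_PSI

# Derbyshire: a ~20 m static head gives ~201 kPa, matching the report.
print(round(static_head_kpa(20.0)))          # 201
# A 6 t/m^2 breaking force corresponds to ~8.5 psi, as quoted above.
print(round(tonnes_per_m2_to_psi(6.0), 1))   # 8.5
```

The same conversion shows why the 100 t/m² figure for rogue waves dwarfs the 15 t/m² design tolerance: the loads scale linearly with the tonnage figure.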
https://en.wikipedia.org/wiki/Rogue_wave
Rohri Canal is a major irrigation canal in Sindh , Pakistan . [ 1 ] It is a vital source of water for agriculture in the region. [ 2 ] It originates from the left bank of the Indus River at the Sukkur Barrage , located in Sukkur District , Sindh . [ 3 ] It traverses through several districts, providing irrigation to vast agricultural lands. The canal's primary flow is towards the south, irrigating districts including Sukkur , Khairpur , Naushahro Feroze , Shaheed Benazirabad , Matiari , Hyderabad , Sanghar and Badin . [ 4 ] [ 5 ] It is a perennial canal, meaning it supplies water throughout the year. The Rohri Canal is part of the larger Sukkur Barrage irrigation system . The construction of the barrage itself began in 1923 and was completed in 1932, while the Rohri Canal's own construction was finished before the barrage project as a whole. [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Rohri_Canal
Roid rage (also known as steroid rage [ 5 ] ) is a side effect of the use of anabolic steroids which is described as dramatic mood swings , increased feelings of hostility, impaired judgment, and increased levels of aggression . [ 6 ] The term "roid rage" became popular in the 1980s. [ 7 ] After supraphysiological use of anabolic–androgenic steroids, which normally consists of long-term uncontrolled use (i.e. several injections over a period of time), roid rage can take place, involving aggression [ 8 ] [ 9 ] and emotional dysregulation that can lead to depression and paranoia . [ 10 ] Use of steroids like corticosteroids to treat pain can cause steroid-induced psychotic episodes, which include racing thoughts , anger, agitation, pressured hyperverbal speech and paranoia. [ 11 ] The effects of roid rage, which are seen not only in humans who take large doses of anabolic steroids but also when the same or even lower doses are administered to animals like lab rats , include a heavy increase in aggression and the fight-or-flight response . This was shown in a study in which lab rats administered anabolic steroids showed no difference in tyrosine hydroxylase in the nucleus accumbens and medial prefrontal cortex compared to regular lab rats, but showed a significant decrease in tyrosine hydroxylase, due to the testosterone , in the caudate putamen , a brain area important for behavioral inhibition , motor control and habit learning. [ 12 ] Indecisiveness is linked to changes in the mesocorticolimbic dopamine system , with anabolic steroids affecting dopamine function in mesocorticolimbic circuitry . It is not understood exactly how this affects decision making. This indecisiveness is shown in lab rats, in which it then turns into aggression, i.e. roid rage. 
[ 13 ] Heavy signs of anxiety , [ 14 ] mania , [ 15 ] and paranoia are also present as roid rage becomes more pronounced in the person taking the anabolic steroids. [ 16 ] Psychopathy is heavily increased when roid rage is present in a person with underlying disorders predisposing them to it; research shows a connection between misuse/abuse of anabolic steroids and violence, repeated imprisonment, and disrespect for authority . [ 17 ] The aggression and impulsiveness of roid rage can be greatly increased if the abuser of anabolic steroids has an underlying personality disorder such as borderline personality disorder . [ 18 ] Heavy use of anabolic steroids with roid rage can also cause heavy distress, depending on how long the steroids have been abused. [ 19 ] During puberty (in adolescent males) the intake of anabolic steroids can alter mood and personality, causing a more severe roid rage that can have long-term impact. [ 20 ] Because puberty is a hormonally sensitive period, anabolic steroids can cause a shift in testosterone production and induce unprovoked aggression whether or not the person is still using anabolic steroids. [ 21 ] This may lead to extreme violent tendencies even after a year of abstinence; in adolescents and young adults especially, anabolic steroids are linked directly to heightened violent tendencies. [ 22 ] Tests comparing methyltestosterone and stanozolol (both anabolic steroids) in castrated lab rats showed heightened aggression with both compounds, indicating that exogenous steroids can raise aggression even in males without endogenous testosterone production. [ 23 ] When a person is experiencing roid rage and increased levels of aggression, they may become easily provoked and be prone to committing violent assaults . 
[ 24 ] Often, when a person who has abused anabolic steroids commits violent crimes , they do not exercise the same level of judgement over their actions, which may lead to aggravated murders. [ 25 ] The abuse of other drugs can worsen the aggression and violence that a person may commit. [ 26 ] According to anecdotal reports , wives and girlfriends of athletes who take anabolic steroids face violence as long as the users continue to take them; this includes verbal abuse and physical domestic abuse . [ 27 ] Certain drugs that minimise the amount of oestrogen created as the anabolic steroids break down could help lessen the aggression. [ 28 ] Anabolic steroid users can go into a psychosis when roid rage is induced, and even start experiencing suicidal ideation . Some antipsychotic medications, such as haloperidol , have been used extensively to treat this steroid-induced psychosis; evidence also exists to support second-generation antipsychotics, lithium , selective serotonin reuptake inhibitors, tricyclic antidepressants (TCAs), and select antiepileptic drugs, including carbamazepine as well as valproic acid and its derivatives. [ 29 ] In a season 8 episode of South Park , " Up the Down Steroid ", the character Jimmy goes into a roid rage after use of anabolic steroids . [ 30 ] In a season 7 episode of Family Guy , " Stew-Roids ", the character Stewie Griffin develops roid rage after his father Peter Griffin gives him steroids in order to make him stronger at the gym. [ 31 ]
https://en.wikipedia.org/wiki/Roid_rage
Roivant Sciences Ltd. is an American multinational healthcare company focused on applying technology to drug development and building subsidiary life sciences and health technology companies. It was founded in 2014 by Vivek Ramaswamy and is currently headed by CEO Matt Gline. [ 3 ] Roivant maintains its headquarters in New York City as well as major offices in the biotech hubs of Boston and Basel . Vivek Ramaswamy founded Roivant in 2014. [ 4 ] Ramaswamy's initial strategy was to in-license drug candidates and create subsidiaries focused on distinct therapeutic areas. [ 5 ] This strategy expanded to include developing earlier stage drug candidates and platform technologies. Roivant is a parent company to over a dozen subsidiaries ranging from Immunovant, a majority-owned public company focused on autoimmune diseases , to privately held Dermavant Sciences, a commercial-stage company [ 6 ] focused on medical dermatology . [ 7 ] Roivant also develops healthcare technologies through its business unit Roivant Health. Roivant built and launched Datavant , which allows healthcare institutions to share data, and was merged with Ciox Health [ 8 ] to become a US$7 billion company. Roivant's technology portfolio also includes Lokavant, which integrates clinical trial data to identify and mitigate risks in pharmaceutical development . [ 7 ] [ 9 ] In 2017, Roivant partnered with the private equity arm of Chinese state-owned CITIC Group to form Sinovant. [ 10 ] [ 11 ] [ 12 ] As of 2019, Roivant had over 40 investigational drugs in development in 14 therapeutic areas across its family of companies. [ 13 ] At the end of 2019, Roivant formed a $3 billion partnership with Sumitomo Dainippon Pharma and transferred its ownership stake in five of its subsidiaries: Myovant Sciences, Urovant Sciences, Enzyvant Therapeutics, Altavant Sciences, and Spirovant Sciences, which now sit under Sumitovant Biopharma. 
[ 14 ] The deal included the option for Sumitomo to acquire up to six additional subsidiaries. [ 15 ] In April 2020, Roivant dosed the first patient in a clinical study evaluating gimsilumab in COVID-19 patients for the prevention and treatment of acute respiratory distress syndrome (ARDS). [ 16 ] Additionally, in April, Datavant announced that its technology is being used to create a pro bono COVID-19 research database to help public health and policy researchers combat the pandemic. [ 17 ] In January 2021, Ramaswamy stepped down as CEO. Matt Gline, previously the company's chief financial officer , became CEO. [ 18 ] In February 2021, Roivant acquired Silicon Therapeutics, a small-molecule drug designer and computational physics platform, for $450 million in Roivant equity. [ 19 ] [ 20 ] In October 2021, Roivant merged with special-purpose acquisition company Montes Archimedes Acquisition Corp. to become listed on the Nasdaq . [ 21 ] In June 2022, Roivant and Pfizer unveiled Priovant Therapeutics. [ 22 ] Priovant was established in September 2021 through a transaction between Roivant and Pfizer, [ 23 ] in which Pfizer licensed oral and topical brepocitinib's global development rights and US and Japan commercial rights to Priovant. Pfizer holds a 25% equity ownership interest in Priovant. Brepocitinib is a potential first-in-class dual, selective inhibitor of TYK2 and JAK1 ; in all five placebo-controlled studies completed to date, oral brepocitinib generated statistically significant and clinically meaningful results. Later that year in December 2022, Roivant and Pfizer announced [ 24 ] their third partnership to create a new Vant [ 25 ] focused on developing TL1A drug candidates for inflammatory and fibrotic diseases. As of February 2023, Roivant's reported market cap was over $6 billion. [ 26 ] Axovant, the Roivant subsidiary later renamed Sio Gene Therapies , whose drug candidates ultimately failed in testing, has been accused of being a pump-and-dump scheme. 
[ 27 ] [ 28 ] [ 29 ] [ 30 ] In December 2023, Roche completed the acquisition of Telavant from Roivant for a purchase price of $7.1 billion upfront and a near-term milestone payment of $150 million. [ 31 ] [ 32 ] As of April 2020, the company's subsidiaries include: The following subsidiaries were previously a part of Roivant, but were included as part of a strategic transaction with Sumitomo Dainippon Pharma which closed in December 2019: [ 35 ] Roivant is a major shareholder of Datavant , which was co-founded with Travis May to link disparate healthcare datasets. [ 43 ] In October 2020, Datavant announced it raised funds from Roivant alongside Transformation Capital, Johnson & Johnson , and Cigna . [ 44 ] In June 2018, Roivant laid off 67 employees and reassigned 130 to subsidiaries. [ 45 ] In March 2020, Roivant announced it is developing gimsilumab, an anti-granulocyte-macrophage colony-stimulating factor (anti-GM-CSF) monoclonal antibody to prevent and treat acute respiratory distress syndrome (ARDS) in patients with COVID-19 . [ 46 ] In April 2020, Roivant started the administration of gimsilumab to COVID-19 patients in the United States . [ 47 ] The company received millions of dollars from hedge funds such as QVT in the early days of its existence. [ 48 ] Later, it was able to raise US$200 million, with help from NovaQuest Capital Management. [ 49 ]
https://en.wikipedia.org/wiki/Roivant_Sciences
In 4-dimensional topology, a branch of mathematics, Rokhlin's theorem states that if a smooth , orientable, closed 4-manifold M has a spin structure (or, equivalently, the second Stiefel–Whitney class w₂(M) vanishes), then the signature of its intersection form , a quadratic form on the second cohomology group H²(M), is divisible by 16. The theorem is named for Vladimir Rokhlin , who proved it in 1952. Rokhlin's theorem can be deduced from the fact that the third stable homotopy group of spheres π₃^S is cyclic of order 24; this is Rokhlin's original approach. It can also be deduced from the Atiyah–Singer index theorem . See Â genus and Rochlin's theorem . Robion Kirby ( 1989 ) gives a geometric proof. Since Rokhlin's theorem states that the signature of a smooth spin 4-manifold is divisible by 16, the definition of the Rokhlin invariant is deduced as follows: If N is a spin 3-manifold then it bounds a spin 4-manifold M . The signature of M is divisible by 8, and an easy application of Rokhlin's theorem shows that its value mod 16 depends only on N and not on the choice of M . Homology 3-spheres have a unique spin structure so we can define the Rokhlin invariant of a homology 3-sphere to be the element sign(M)/8 of ℤ/2ℤ, where M is any spin 4-manifold bounding the homology sphere. For example, the Poincaré homology sphere bounds a spin 4-manifold with intersection form E₈, so its Rokhlin invariant is 1. This result has some elementary consequences: the Poincaré homology sphere does not admit a smooth embedding in S⁴, nor does it bound a Mazur manifold . 
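The E₈ example above can be checked numerically: the (positive definite) E₈ form has signature 8, so a homology sphere bounding a spin 4-manifold with that intersection form has Rokhlin invariant 8/8 ≡ 1 (mod 2), and 8 is not divisible by 16, consistent with the fact that the E₈ form is not the intersection form of any closed smooth spin 4-manifold. A small sketch using exact rational arithmetic (the matrix conventions below are my own):

```python
from fractions import Fraction

def signature(sym):
    """Signature (# positive - # negative pivots) of a symmetric matrix
    with nonzero pivots, via symmetric elimination over the rationals
    (Sylvester's law of inertia)."""
    n = len(sym)
    a = [[Fraction(x) for x in row] for row in sym]
    pos = neg = 0
    for k in range(n):
        p = a[k][k]
        if p > 0:
            pos += 1
        else:
            neg += 1
        # Schur-complement update of the trailing block.
        for i in range(k + 1, n):
            f = a[i][k] / p
            for j in range(k + 1, n):
                a[i][j] -= f * a[k][j]
    return pos - neg

# The E8 form as the tree T(2,3,5): trivalent node 0 with legs
# {1}, {2,3}, {4,5,6,7}; entries 2 on the diagonal, -1 per edge.
edges = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6), (6, 7)]
E8 = [[2 if i == j else 0 for j in range(8)] for i in range(8)]
for i, j in edges:
    E8[i][j] = E8[j][i] = -1

sig = signature(E8)
print(sig, (sig // 8) % 2)  # 8 1: the Poincare sphere's Rokhlin invariant
```

Exact `Fraction` arithmetic avoids floating-point sign errors in the pivots; the pivot-sign count is valid here because the E₈ form is definite, so no zero pivot can occur.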
More generally, if N is a spin 3-manifold (for example, any ℤ/2ℤ homology sphere), then the signature of any spin 4-manifold M with boundary N is well defined mod 16, and is called the Rokhlin invariant of N . On a topological 3-manifold N , the generalized Rokhlin invariant refers to the function whose domain is the spin structures on N , and which evaluates to the Rokhlin invariant of the pair ( N , s ), where s is a spin structure on N . The Rokhlin invariant of M is equal to half the Casson invariant mod 2. The Casson invariant is viewed as the ℤ-valued lift of the Rokhlin invariant of integral homology 3-spheres. The Kervaire–Milnor theorem ( Kervaire & Milnor 1960 ) states that if Σ is a characteristic sphere in a smooth compact 4-manifold M , then sign(M) = Σ·Σ mod 16. A characteristic sphere is an embedded 2-sphere whose homology class represents the Stiefel–Whitney class w₂(M). If w₂(M) vanishes, we can take Σ to be any small sphere, which has self-intersection number 0, so Rokhlin's theorem follows. The Freedman–Kirby theorem ( Freedman & Kirby 1978 ) states that if Σ is a characteristic surface in a smooth compact 4-manifold M , then sign(M) = Σ·Σ + 8 Arf(M, Σ) mod 16, where Arf(M, Σ) is the Arf invariant of a certain quadratic form on H₁(Σ, ℤ/2ℤ). This Arf invariant is obviously 0 if Σ is a sphere, so the Kervaire–Milnor theorem is a special case. A generalization of the Freedman–Kirby theorem to topological (rather than smooth) manifolds states that sign(M) = Σ·Σ + 8 Arf(M, Σ) + 8 ks(M) mod 16, where ks(M) is the Kirby–Siebenmann invariant of M . The Kirby–Siebenmann invariant of M is 0 if M is smooth. 
Armand Borel and Friedrich Hirzebruch proved the following theorem: If X is a smooth compact spin manifold of dimension divisible by 4 then the Â genus is an integer, and is even if the dimension of X is 4 mod 8. This can be deduced from the Atiyah–Singer index theorem : Michael Atiyah and Isadore Singer showed that the Â genus is the index of the Atiyah–Singer operator, which is always integral, and is even in dimensions 4 mod 8. For a 4-dimensional manifold, the Hirzebruch signature theorem shows that the signature is −8 times the Â genus, so in dimension 4 this implies Rokhlin's theorem. Ochanine (1980) proved that if X is a compact oriented smooth spin manifold of dimension 4 mod 8, then its signature is divisible by 16.
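The dimension-4 case can be spelled out. Writing p₁(M)[M] for the first Pontryagin number, the Â genus and the Hirzebruch signature theorem give:

```latex
\hat{A}(M) = -\tfrac{1}{24}\,p_1(M)[M], \qquad
\operatorname{sign}(M) = \tfrac{1}{3}\,p_1(M)[M]
\quad\Longrightarrow\quad
\operatorname{sign}(M) = -8\,\hat{A}(M).
```

Since the Â genus of a closed spin 4-manifold is an even integer (the dimension is 4 mod 8), the signature −8Â(M) is divisible by 16, recovering Rokhlin's theorem.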
https://en.wikipedia.org/wiki/Rokhlin's_theorem
Roku ( /ˈroʊkuː/ ROH-koo ) is a brand of consumer electronics that includes streaming players , smart TVs (and their operating systems), as well as a free TV streaming service . The brand is owned by Roku, Inc. , an American company. As of 2024, Roku is the U.S. market leader in streaming video distribution , reaching nearly 145 million people. [ 1 ] [ 2 ] [ 3 ] Roku was founded by Anthony Wood in 2002; he had previously founded ReplayTV , a DVR company that competed with TiVo . [ 4 ] After ReplayTV's failure, Wood worked for a while at Netflix . In 2007, Wood's company began working with Netflix on Project:Griffin, a set-top box to allow Netflix users to stream Netflix content to their TVs. [ 4 ] Only a few weeks before the project's launch, Netflix's founder Reed Hastings decided it would hamper license arrangements with third parties, potentially keeping Netflix off other similar platforms, and killed the project. [ 5 ] Fast Company magazine cited the decision to kill the project as "one of Netflix's riskiest moves". [ 5 ] Netflix then decided instead to spin off the company, and Roku released their first set-top box in 2008. [ 6 ] In 2010, Roku began offering a range of models with varying capabilities, an approach that eventually became their standard business model. [ 6 ] In 2014, Roku partnered with smart TV manufacturers to produce TVs with built-in Roku functionality. [ 4 ] In 2015, Roku won the inaugural Emmy for Television Enhancement Devices . In January 2018, CNET reported that Roku was debuting a new licensing program for smart audio devices such as smart speakers, sound bars and whole-home audio, while noting the "ease of use" and "superb streaming options" offered by Roku TVs. [ 7 ] According to CNBC in 2021, Roku was the U.S. market leader in streaming video distribution. [ 2 ] Later in 2023, Variety called Roku "the top connected TV platform" in the U.S. 
[ 1 ] In December 2023, a Popular Mechanics review cited Roku TVs to be affordable and easy to use, while also noting that the Roku-integrated TVs lacked "the premium brand badging of big players like Sony or Samsung". [ 8 ] According to The Verge in July 2024, a Roku OS update in June 2024 had "ruined" the Roku TV experience. This update added " motion smoothing ", and was reportedly irreversible. This followed another identical issue reported in 2020 for Roku TVs made by TCL. [ 9 ] In August 2024, a Wired review noted that ease of use was one of the main reasons to buy any Roku product. [ 10 ] In February 2025, Roku said it reached more than 90 million streaming households. The Roku Channel reportedly reached households with nearly 145 million people. [ 3 ] The first Roku model, the Roku DVP N1000, was unveiled on May 20, 2008. It was developed in partnership with Netflix to serve as a standalone set-top box for its recently introduced "Watch Instantly" service. The goal was to produce a device with a small footprint that could be sold at low cost compared to larger digital video recorders and video game consoles . It features an NXP PNX8935 video decoder supporting both standard and high definition formats up to 720p ; HDMI output; and automatic software updates, including the addition of new channels for other video services. [ 11 ] [ 12 ] [ 13 ] Roku launched two new models in October 2009: the Roku SD (a simplified version of the DVP, with only analog AV outputs); and the Roku HD-XR, an updated version with 802.11n Wi-Fi and a USB port for future functionality. The Roku DVP was retroactively renamed the Roku HD. By then, Roku had added support for other services. The next month, they introduced the Channel Store, where users could download third-party apps for other content services (including the possibility of private services for specific uses). 
[ 14 ] [ 15 ] Netflix support was initially dependent on a PC, requiring users to add content to their "Instant Queue" from the service's web interface before it could be accessed via Roku. In May 2010, the channel was updated to allow users to search the Netflix library directly from the device. [ 16 ] In August 2010, Roku announced plans to add 1080p video support to the HD-XR. [ 17 ] The next month, they released an updated lineup with thinner form factors: a new HD; the XD, with 1080p support; and the XDS, with optical audio, dual-band Wi-Fi, and a USB port. The XD and XDS also included an updated remote. [ 18 ] Support for the first-generation Roku models ended in September 2015. [ 19 ] In July 2011, Roku unveiled its second generation of players, branded as Roku 2 HD, XD, and XS. All three models include 802.11n, and also add microSD slots and Bluetooth . The XD and XS support 1080p, and only the XS model includes an Ethernet connector and USB port. They also support the "Roku Game Remote"—a Bluetooth remote with motion controller support for games, which was bundled with the XS and sold separately for other models. [ 20 ] The Roku LT was unveiled in October, as an entry-level model with no Bluetooth or microSD support. [ 21 ] In January 2012, Roku unveiled the Streaming Stick - a new model condensed into a dongle form factor using Mobile High-Definition Link (MHL). [ 22 ] [ 23 ] Later in October, Roku introduced a new search feature to the second-generation models, aggregating content from services usable on the device. [ 24 ] Roku unveiled its third-generation models in March 2013, the Roku 3 and Roku 2. The Roku 3 contains an upgraded CPU over the 2 XS, and a Wi-Fi Direct remote with an integrated headphone jack. The Roku 2 features only the faster CPU. [ 25 ] [ 26 ] A software update in October 2014 added support for peer-to-peer Miracast wireless. 
[ 27 ] In October 2015, Roku introduced the Roku 4; the device contains upgraded hardware with support for 4K resolution video, as well as 802.11ac wireless. [ 28 ] In September 2016, Roku revamped its entire streaming player line-up with five new models (the low-end Roku Express and Roku Express+, the high-end Roku Premiere and Roku Premiere+, and the top-of-the-line Roku Ultra), while the Streaming Stick (3600) was held over from the previous generation (having been released the previous April) as a sixth option. [ 29 ] The Roku Premiere+ and Roku Ultra support HDR video using HDR10. [ 30 ] In October 2017, Roku introduced its sixth generation of products. The Premiere and Premiere+ models were discontinued, the Streaming Stick+ (with an enhanced Wi-Fi antenna) was introduced, and new processors arrived for the Roku Streaming Stick, Roku Express, and Roku Express+. [ 31 ] In September 2018, Roku introduced the seventh generation of products. Carrying over from the 2017 sixth generation without any changes were the Express (3900), Express+ (3910), Streaming Stick (3800), and Streaming Stick+ (3810). The Ultra is the same hardware device from 2017, but it comes with JBL premium headphones and is repackaged with the new model number 4661. Roku resurrected the Premiere and Premiere+ names, but these two new models bear little resemblance to the 2016 fifth-generation Premiere (4620) and Premiere+ (4630) models. The new Premiere (3920) and Premiere+ (3921) are essentially based on the Express (3900) model with 4K support added. The generation also includes the Roku Streaming Stick+ Headphone Edition (3811), which adds improved Wi-Fi signal strength and private listening. [ 32 ] In September 2019, Roku introduced the eighth generation of products. [ 33 ] The same year, Netflix announced that it would stop supporting older generations of Roku, including the Roku HD, HD-XR, SD, XD, and XDS, as well as the NetGear-branded XD and XDS beginning on December 1, 2019. 
Roku had warned in 2015 that it would stop updating players made in May 2011 or earlier, and these vintage boxes were among them. [ 34 ] On September 28, 2020, Roku introduced the ninth generation of products. [ 35 ] An updated Roku Ultra was released along with the Roku Streambar, a 2-in-1 Roku player and soundbar. The microSD slot was removed from the new Ultra 4800, making it the first top-tier Roku device since the first generation to lack this feature. On April 14, 2021, Roku announced the Roku Express 4K+, replacing the 8th-generation Roku Express devices, the Voice Remote Pro as an optional upgrade for existing Roku players, and Roku OS 10 for all modern Roku devices. [ 36 ] On September 20, 2021, Roku introduced the tenth generation of products. [ 37 ] The Roku Streaming Stick 4K [ 38 ] was announced along with the Roku Streaming Stick 4K+, which includes an upgraded rechargeable Roku Voice Remote Pro with a lost-remote finder. [ 39 ] Roku announced an updated Roku Ultra LT with a faster processor, stronger Wi-Fi, Dolby Vision , Bluetooth audio streaming, and built-in Ethernet support. [ 40 ] Roku also announced Roku OS 10.5 with several new and improved features. [ 41 ] On November 15, 2021, Roku announced a budget model, the Roku LE (3930S3), to be sold at Walmart while supplies last. [ 42 ] It lacks 4K and HDR10 support, making its features similar to those of the 2019 Roku Express (3930). It has the same form factor as the 2019 Roku Express, except the plastic shell is white rather than black. Roku announced its first branded smart TV, which was released in late 2014. These TVs are manufactured by companies like TCL , LG , Westinghouse , Philips , and Hisense , and use the Roku user interface as the "brain" of the TV. Roku TVs are updated just like the streaming devices. 
[ 77 ] More recent models also integrate a set of features for use with over-the-air TV signals, including a program guide that provides information for shows and movies available on local antenna broadcast TV, as well as where that content is available to stream, and the ability to pause live TV (although the feature requires a USB hard drive with at least 16 GB of storage). On November 14, 2019, Walmart and Roku announced that they would be selling Roku TVs under the Onn brand exclusively at Walmart stores, starting November 29. [ 78 ] In January 2020, Roku created a badge to certify devices as working with a Roku TV model. The first certified brands were TCL North America, Sound United, Polk Audio , Marantz , Definitive Technology , and Classé . [ 79 ] In January 2021, a Roku executive said one out of three smart TVs sold in the United States and Canada came with Roku's operating system built in. [ 80 ] In May 2022, Roku and Element Electronics announced the first outdoor Roku TV, sold in a 55-inch size. The television offers an anti-glare display with minimal reflection, supports 4K streaming, and can be used in bright outdoor environments. [ 81 ] In March 2023, Roku announced a partnership with Best Buy under which the retailer would exclusively sell the Roku Select and Plus Series TVs manufactured by Roku. [ 82 ] Roku provides video services from a number of Internet -based video on demand providers. Content on Roku devices is provided by Roku partners and is identified using the term channel . Users can add or remove different channels using the Roku Channel Store or the search feature. Roku's website does not specify which channels are free to its users. The Roku is an open-platform device with a freely available software development kit that enables anyone to create new channels. 
[ 83 ] The channels are written in a Roku-specific language called BrightScript, a scripting language the company describes as "unique", but "similar to Visual Basic " and "similar to JavaScript ". [ 84 ] Developers who wish to test their channels before a general release, or who wish to limit viewership, can create "private" channels that require a code to be entered by the user on the account page of the Roku website. These private channels, which are not part of the official Roku Channel Store, are not reviewed or certified by Roku. [ 85 ] [ 86 ] There is an NDK (Native Developer Kit) available, though it has added restrictions. [ 84 ] Roku launched its own streaming channel on its devices in October 2017. It is free and ad-supported. Its licensed content includes movies and TV shows from studios such as Lionsgate , MGM, Paramount, Sony Pictures Entertainment , Warner Bros. , Disney , and Universal , as well as content from Roku channel publishers American Classics, FilmRise, Nosey, OVGuide, Popcornflix, Vidmark , and YuYu. It is implementing an ad revenue sharing model with content providers. On August 8, 2018, the Roku Channel became available on the web as well. [ 87 ] Roku also added the "Featured Free" section as the top section of its main menu, from which users can get access to direct streaming of shows and movies from its partners. [ 88 ] In January 2019, premium subscription options from select content providers were added to the Roku Channel. Originally only available in the U.S., [ 89 ] it launched in the UK on April 7, 2020, with a different selection of movies and TV shows, and without premium subscription add-ons. [ 90 ] On January 8, 2021, Roku announced that it had acquired the original content library of the defunct mobile video service Quibi for an undisclosed amount, reported to be around $100 million. [ 91 ] [ 92 ] The content is being rebranded as Roku Originals. 
[ 93 ] The Daily Beast alleged that non-certified channels on Roku eased access to materials promoting conspiracy theories and terrorism content. [ 94 ] In June 2017, a Mexico City court banned the sale of Roku products in Mexico, following claims by Televisa (via its Izzi cable subsidiary) that the devices were being used for subscription-based streaming services that illegally stream television content without permission from copyright holders. The devices used Roku's private channels feature to install the services, which were all against the terms of service Roku applies for official channels available in its store. Roku defended itself against the allegations, stating that these channels were not officially certified and that the company takes active measures to stop illegal streaming services. [ 95 ] The 11th Collegiate Court in Mexico City overturned the decision in October 2018, with Roku returning to the Mexican market soon after; Televisa's streaming service Blim TV (now Vix ) would also launch on the platform. [ 96 ] In August 2017, Roku began to display a prominent disclaimer when non-certified channels are added, warning that channels enabling piracy may be removed "without prior notice". [ 97 ] [ 86 ] [ 98 ] In mid-May 2018, a software glitch caused some users to see copyright takedown notices on legitimate services such as Netflix and YouTube. Roku acknowledged and patched the glitch. [ 99 ] [ 100 ] In March 2022, the private channel system was deprecated due to abuse and replaced with a more limited and strict beta channels platform, which only allows twenty users to test a channel for up to four months. [ 101 ] Pay television-styled carriage disputes emerged on the Roku platform in 2020, as the company requires providers to agree to revenue sharing for subscription services billed through the platform, and to let Roku sell 30% of their advertising inventory. 
[ 102 ] On September 18 of that same year, Roku announced that NBCUniversal TV Everywhere services would be removed from its devices "as early as this weekend", due to Roku's refusal to carry the company's streaming service Peacock (which had been unavailable on Roku since its launch in July 2020) under terms it deemed "unreasonable". [ 102 ] It reached an agreement with NBCUniversal later that day, which allowed Peacock to become available on Roku. [ 103 ] HBO Max , which launched in May 2020, was unavailable on Roku until December 2020 due to similar disputes over revenue sharing, particularly in regard to an upcoming ad-supported tier. [ 104 ] [ 105 ] On December 17, 2020, HBO Max began streaming on Roku, after WarnerMedia and Roku reached a deal the previous day (and also after media speculation that WarnerMedia's moving of Wonder Woman 1984 and Warner Bros.' 2021 theatrical slate to a hybrid theatrical/HBO Max release model was an attempt to get Roku to agree to its terms). [ 106 ] Another dispute, starting in mid-December 2020, caused Spectrum customers to be unable to download the Spectrum TV streaming app to their Roku devices; existing customers could retain the app, but would lose it if it was deleted, even to fix software bugs. This dispute was resolved on August 17, 2021. [ 107 ] [ 108 ] On April 30, 2021, Roku removed the over-the-top television service YouTube TV from its Channel Store, preventing it from being downloaded. The company accused operator Google LLC of making demands regarding its YouTube app that it considered "predatory, anti-competitive and discriminatory", including enhanced access to customer data, giving YouTube greater prominence in Roku's search interface, and requiring that Roku implement specific hardware standards that could increase the cost of its devices. Roku accused Google of "leveraging its YouTube monopoly to force an independent company into an agreement that is both bad for consumers and bad for fair competition." 
[ 109 ] [ 110 ] Google claimed that Roku had "terminated our deal in bad faith amidst our negotiation", stating that it wanted to renew the "existing reasonable terms" under which Roku offered YouTube TV. Google denied Roku's claims regarding customer data and prominence of the YouTube app, and stated that its carriage of a YouTube app was under a separate agreement, and unnecessarily brought into negotiations. [ 111 ] As a partial workaround, YouTube began to deploy an update to its main app on Roku and other platforms, which integrates the YouTube TV service. [ 110 ] [ 112 ] On December 8, 2021 (a day before the agreement for the main YouTube app expired), Roku and Google announced that they had settled their dispute and reached a multi-year agreement to keep the YouTube app on Roku and to restore the YouTube TV app on Roku. [ 113 ]
https://en.wikipedia.org/wiki/Roku
Rokushō ( 緑青 ) is a traditional Japanese chemical compound used in the niiro process for artificially inducing patination in decorative non-ferrous metals, especially several copper alloys, with the results being metals of the irogane class. It is most commonly translated as malachite ; in art, rokushō was the most widely used green pigment. [ 1 ] These "colour metals", virtually unknown outside Japan until the late 19th century, have achieved some popularity in craft circles in other parts of the world since then. Rokushō is used to treat a number of metals, including raw natural copper (which contains impurities), purified copper, and copper alloy mixes of two to five metals, to produce irogane metals, including: shakudō , an alloy of copper and gold, which becomes black to dark blue-violet; shibuichi , an alloy of fine silver and copper (in a higher percentage than sterling), which turns grey to misty aquamarine or other shades of blue to green; and kuromido , which becomes dark coppery black. Rokushō was generally used to patinate all types of mokume-gane ("wood grain metal") as well. Although other patination agents can be used on these metals, some artisans prefer the rich colors achieved with traditional rokushō in the niiro process. These metals are becoming increasingly popular in high-end artistic jewelry, especially in bi-metals (a layer of the alloy fused to another metal such as sterling). Because rokushō has a dramatically different effect on sterling silver than on the alloys typically fused to it in bi-metals, a common technique in art jewelry is to engrave through the alloy layer in a pattern to reveal the silver underneath prior to patination. This provides a rich contrast in color, highlighting the pattern. The formulae for rokushō are not published widely or freely, but passed on in the Japanese craft tradition. However, some scholars have analysed samples of the material. 
Premixed rokushō can be purchased outside Japan through specialty jewelry suppliers. Additionally, several different formulas have been proposed to replicate the traditional product for those who prefer to make their own. Rokushō is not used alone, but mixed with one or more other chemicals. Further, metal to be processed is cleaned in advance of treatment, using a mild acid bath ( oxalic or sulfuric acids are frequently used), scrubbing with daikon radish or pumice , and/or a surface abrasive, and it is often treated again after patination.
https://en.wikipedia.org/wiki/Rokushō
Roland Jazz Chorus is the name given to a series of solid-state instrument amplifiers produced by the Roland Corporation in Japan since 1975. Its name comes from its built-in analog chorus effect . The Jazz Chorus series became increasingly popular in the late 1970s and early 1980s new wave and post-punk scenes because of its clean yet powerful sound, durability, and relatively low cost compared to the more commonly used tube amplifiers of the time, such as Marshall or Fender . It also found favour amongst funk players in America. [ 1 ] The amplifier also became popular for clean tones in heavy metal, with the most famous users being James Hetfield and Kirk Hammett of Metallica , and Wes Borland of Limp Bizkit . Most models have controls based on the JC-120's standard setup. There are two channels, one clean, the other with effects. The built-in effects include stereo chorus , vibrato , reverb , and distortion . The amplifier features high and low inputs, a bright switch, a three-band equalizer, and a volume control for each channel. For the first 40 years of the series' existence, all models were purely analog , transistor-based designs. However, the JC-40 and JC-22 models (introduced in the 2010s) depart from this in favour of an entirely digital signal processing (DSP)-based design, similar to other current Roland and Boss amplifiers. Since its inception in 1975, the Roland Jazz Chorus amplifier has undergone several design iterations. 
1975: JC-120, 120 watts, 2x12" speakers; JC-60, 60 watts, 1x12" speaker
1976: JC-160, 120 watts, 4x10" speakers; JC-80, 60 watts, 1x15" speaker
1978: JC-200, 200 watts (head); JC-200S, 2x12" speakers (cabinet)
1979: JC-50, 50 watts, 1x12" speaker
1984: JC-120H, 120-watt head ("Bright" switch changed to "HI-TREBLE"); JC-77, 80 watts, 2x10" speakers
1986: JC-55, 50 watts, 2x8" speakers
1992: JC-20, 20 watts, 2x5" speakers
1996: JC-85, 80 watts, 2x10" speakers
1997: JC-90, 80 watts, 2x10" speakers (Eminence speakers)
2015: JC-40, 40 watts, 2x10" speakers (introduced stereo input) [ 2 ]
2016: JC-22, 30 watts, 2x6.5" speakers
2025: JC-120 DAW software plugin
The Jazz Chorus is one of the most famous and successful combo amplifiers from its period, and its earliest users included Albert King , Andy Summers ( The Police ), Chuck Hammer ( Lou Reed ), Larry Coryell , Robert Smith (of The Cure , although he used the rarer 160-watt JC-160 with 4x10" speakers), Billy Duffy (The Cult, Theatre of Hate), Roger Hodgson of Supertramp , Joe Strummer , John Sebastian of The Lovin' Spoonful , Art Saiz, Chuck Willis, Prince , John McGeoch ( Magazine , Siouxsie and the Banshees, PIL, the Armoury Show ), Steve Hackett , Robert Fripp , Adrian Belew , Steve Rothery , Mdou Moctar , Neil Halstead (Slowdive) [ 3 ] and Wayne Hussey (the Sisters of Mercy, The Mission), among others. Summers' use of the amp in turn inspired, for instance, Jeff Buckley , whose first amplifier was a Jazz Chorus. [ 4 ] Another notable user of the JC-120 was Johnny Marr of The Smiths , who used it along with his Rickenbacker 330 and Telecaster to create the sounds present on The Smiths' debut album. Other users include Steve Levine , producer of bands such as Culture Club , The Beach Boys and The Clash . He often combined it with effects pedals from Boss Corporation , a Roland subsidiary. [ 5 ]
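The chorus effect that gives the series its name works by mixing the dry signal with a copy read from a short delay line whose delay time is slowly swept by a low-frequency oscillator, detuning the copy slightly against the original. The JC's circuit is analog (and the newer JC-40/JC-22 use DSP), so the following Python sketch is only a generic digital illustration of the principle, not Roland's implementation; all parameter values are illustrative assumptions:

```python
import math

def chorus(samples, rate=44100, base_delay_ms=20.0,
           depth_ms=3.0, lfo_hz=0.5, mix=0.5):
    """Simple mono chorus: mix the dry signal with a copy read from a
    delay line whose delay time is swept by a sine LFO."""
    out = []
    base = base_delay_ms * rate / 1000.0   # centre delay, in samples
    depth = depth_ms * rate / 1000.0       # sweep depth, in samples
    for n, x in enumerate(samples):
        # The LFO sweeps the delay between base-depth and base+depth
        delay = base + depth * math.sin(2 * math.pi * lfo_hz * n / rate)
        pos = n - delay
        i = int(math.floor(pos))
        frac = pos - i
        # Linear interpolation gives a smooth fractional delay tap
        if i >= 0 and i + 1 < len(samples):
            wet = samples[i] * (1 - frac) + samples[i + 1] * frac
        else:
            wet = 0.0  # delay line not yet filled
        out.append((1 - mix) * x + mix * wet)
    return out
```

Running two such taps with opposite LFO phase into left and right outputs gives the stereo spread the JC series is known for.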
https://en.wikipedia.org/wiki/Roland_Jazz_Chorus
Rolanet (Robotron Local Area Network) was a networking standard, developed in the former German Democratic Republic (GDR) and introduced in 1987 by the computer manufacturer Robotron . It enabled computer networking over coax cable and glass fiber with a range of 1,000 metres (3,300 ft). Networking speed was 500 kBd , comparable to other standards of the day. A maximum of 253 computers could be connected using Rolanet. Two variants of Rolanet existed. A scaled-down version of Rolanet, BICNet, was used for educational purposes. It is no longer possible to assemble a functioning Rolanet system today, due to a lack of software and working hardware.
https://en.wikipedia.org/wiki/Rolanet
Roll20 is a website consisting of a set of tools for playing tabletop role-playing games , also referred to as a virtual tabletop , which can be used as an aid to playing in person or remotely online. The site was launched in 2012 after a successful Kickstarter campaign. The platform's goal is to provide an authentic tabletop experience that does not try to turn the game into a video game , but instead aids the game master in providing immersive tools online. The blank-slate nature of the platform makes integrating a multitude of tabletop role-playing games possible. During quarantines resulting from the COVID-19 pandemic , the platform allowed a variety of in-person games to transition online. In July 2022, it was announced that Roll20 would merge with OneBookShelf to become a new company. In June 2024, Roll20 purchased the digital tabletop role-playing toolset company Demiplane . Roll20 was originally conceived as a personal project by three college roommates, Riley Dutton, Nolan Jones, and Richard Zayas, to help them continue to play Dungeons & Dragons after graduating and moving to different cities. After realizing that their personal app could help others as well, they started a Kickstarter campaign in the spring of 2012 with an initial goal of $5,000; the campaign raised almost $40,000. [ 4 ] [ 5 ] After a short beta testing period following the end of the Kickstarter campaign, Roll20 was released to the public in September 2012. [ 6 ] Roll20 reported reaching 1 million users in July 2015 [ 4 ] and 2 million users in January 2017. [ 7 ] Academic Evan Torner, in the book Watch Us Roll: Essays on Actual Play and Performance in Tabletop Role-Playing Games (2021), highlighted the impact of Roll20 on the actual play movement. 
[ 8 ] Torner wrote, "Roll20 allows players to seamlessly control information in a shared 'tabletop' era and broadcast content of interest to both the group itself and the wider audience watching it play. Joined with Twitch and YouTube , it constitutes a powerful tool in the kit of industry up-and-comers" and that the "system would impact the play of millions at mass scale [...]. Roll20 would enable these players to document and broadcast their actual play experiences for others to consume". [ 8 ] In July 2016, Roll20 announced that they had acquired a license from Wizards of the Coast for official Dungeons & Dragons material. [ 9 ] [ 2 ] [ 5 ] Along with the announcement, they released the first official module for Dungeons & Dragons 5th edition , Lost Mine of Phandelver , on the Roll20 Marketplace, which was followed by other releases. [ 10 ] In February 2018, Paizo 's Pathfinder and Starfinder games became officially supported on the platform. [ 11 ] In September 2018, one of the co-founders of Roll20, Nolan T. Jones, acting as head moderator of the Roll20 subreddit, banned Reddit user ApostleO, mistaking the account for another previously banned user whom Jones believed to be circumventing the prior ban. After a failed attempt to get clarification and a correction of the ban, ApostleO deleted his Roll20 account and posted a summary of the hostile customer service exchange to Reddit. [ 12 ] Many users criticized the ban, Jones' response, and the inclusion of Roll20 staff as moderators of the subreddit, leading Roll20 to apologize and turn over moderation of the subreddit to the community. [ 13 ] In February 2019, TechCrunch reported that Roll20's databases had been hacked along with those of 8 other companies, with the information of over 4 million users of the site posted for sale on a dark web marketplace. 
[ 14 ] When the COVID-19 pandemic began to prevent in-person gatherings in 2020, many groups who played in-person role-playing games turned to Roll20 to continue their games virtually. [ 15 ] [ 16 ] [ 17 ] [ 18 ] Liz Schuh, head of publishing and licensing for Dungeons & Dragons , stated that "virtual play rose 86%" in 2020, "aided by online platforms such as Roll20 and Fantasy Grounds ". [ 19 ] Erik Mona , for Paizo , commented that "tools like Roll20 and Discord played a huge role in keeping the Pathfinder and Starfinder communities together. They helped the annual PaizoCon, originally scheduled as an in-person event in Seattle, go fully digital in 2020". [ 20 ] In July 2021, Roll20 increased their subscription costs for the first time, with the annual Plus tier increasing from $49.99 to $59.99 and the annual Pro tier increasing from $99.99 to $109.99; the monthly cost of these tiers also increased. [ 21 ] [ 22 ] In February 2022, Ankit Lal, a Google veteran, became the company's CEO . [ 23 ] [ 24 ] Polygon reported that since March 2020 "the company has since tripled in size, growing from just 20 or 25 employees to nearly 60. Lal says that he now has two different groups of employees, one dedicated to users and another to publishers". [ 23 ] Dicebreaker reported that per Roll20's PR team "the number of users on Roll20 has doubled in almost two years, going from five million users to more than 10 million". [ 25 ] In June 2022, Roll20 announced a new partnership with OneBookShelf that would allow content creators on the Dungeon Masters Guild to sell modules and add-ons which are directly integrated with Roll20's virtual tabletop system. [ 26 ] [ 27 ] In July 2022, Roll20 and OneBookShelf announced a merger between the two companies. This merger will combine the content libraries of both companies [ 28 ] [ 29 ] and make "OneBookShelf's PDF libraries accessible within Roll20". 
[ 30 ] Lal will become the new company's CEO and Steve Wieck , CEO of OneBookShelf, will become president of the new company and join Roll20's board of directors . [ 30 ] [ 28 ] The combined company's name has not yet been announced. [ 29 ] [ 31 ] In 2023, the company had a temporary holding name of Wolves of Freeport, named after Wieck's EverQuest guild. [ 32 ] In June 2024, it was announced that Roll20 had acquired the digital tabletop role-playing toolset company Demiplane . [ 33 ] [ 34 ] Lal stated: "We want to make it as easy as possible for you to build your first character, to get into your first game, to try out playing TTRPGs. And we think the combination of the Roll20 VTT and the Demiplane character sheet ecosystem is going to do that". [ 33 ] Christian Hoffer of ComicBook.com reported that this acquisition "won't have any immediate impact on users of either platform, but Demiplane CEO Peter Romenesko noted that the merged companies will look to close the difference between their two platforms very quickly". [ 33 ] J. R. Zambrano, for Bell of Lost Souls , commented that "it seems that an era of consolidation is on the way as players like WotC and Roll20 move to consolidate their powerbases". [ 34 ] Roll20 is a browser-based suite of tools that allows users to create and play tabletop role-playing games. It is organized into individual game sessions, which users can create or join. These game sessions offer many features of typical tabletop RPGs , including dynamic character sheets , automated dice rolling, shared maps with basic character and enemy tokens, and triggered sound effects , as well as a character creation tool for certain licensed game systems. [ 9 ] [ 35 ] [ 36 ] [ 37 ] The interface also includes integrated text chat , voice chat , and video chat , as well as Google Hangouts integration. 
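The automated dice rolling mentioned above is built on standard tabletop dice notation, in which an expression like 2d6+3 means "roll two six-sided dice and add three". As a generic illustration of how such notation can be parsed and rolled (this is a minimal sketch, not Roll20's actual engine, which supports a far richer syntax), consider:

```python
import random
import re

def roll(expr, rng=random):
    """Roll a dice expression in standard tabletop notation,
    e.g. '2d6+3' or 'd20-1'. Returns the integer total."""
    m = re.fullmatch(r"(\d*)d(\d+)([+-]\d+)?", expr.replace(" ", ""))
    if not m:
        raise ValueError(f"bad dice expression: {expr!r}")
    count = int(m.group(1) or 1)      # 'd20' means one die
    sides = int(m.group(2))
    modifier = int(m.group(3) or 0)   # optional +N / -N
    # Sum `count` independent uniform rolls of a `sides`-sided die
    return sum(rng.randint(1, sides) for _ in range(count)) + modifier
```

For example, `roll("2d6+3")` always returns a value between 5 and 15.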
[ 38 ] Roll20 also contains a separate marketplace, where art assets and complete game modules are sold, and a reference compendium for several game systems. Compendiums and game modules published through the marketplace are only available for use on the Roll20 platform, [ 39 ] while some art assets and art packs can be transferred to other sites or downloaded and used for physical tabletop sessions. [ 40 ] In addition to the free content, Roll20 also has extra features available for paying subscriber accounts , including dynamic lighting and fog of war for maps. [ 35 ] Besides the main browser version of Roll20, there are also iPad and Android versions. These mobile versions are more focused on the player experience, containing fewer features than the full browser site. [ 41 ] Roll20 is available in English , with moderate support for 17 other languages through community-contributed translations using Crowdin . [ 3 ] Roll20 supports many tabletop systems, including the various editions of Dungeons & Dragons , Pathfinder , Shadowrun , Dungeon World , Gamma World , Traveller , Numenera , 13th Age , and others. [ 2 ] [ 35 ] [ 37 ] For many lesser-known tabletop systems, Roll20 has an open source repository where the community can contribute character sheet templates. [ 42 ] Following the purchase of Demiplane, Roll20 began to support cross-platform access so that content unlocked on one platform would be automatically unlocked on the other. As of May 2024, Paizo, Darrington Press, Kobold Press, and Renegade Game Studios have granted permission for cross-platform access to their products. [ 43 ] Roll20 has held an online gaming convention named Roll20CON every year since 2016, consisting of an organized series of online games hosted on Roll20 and streamed on Twitch , along with other events. 
Roll20 has partnered with charitable organizations to run Roll20CON: The Cybersmile Foundation , an organization providing support for victims of cyberbullying , in 2016; and Take This , an organization focused on mental health in the gaming community , in 2019. [ 44 ] [ 45 ] In July 2020, [ 46 ] Roll20 released their own science fantasy role-playing game [ 47 ] [ 48 ] named Burn Bryte, with James Introcaso as lead designer. The game was first announced during Gen Con 2018 , [ 49 ] and was said to be designed from the ground up to be played on Roll20's virtual tabletop platform. [ 46 ] [ 48 ] Starting in August 2018, [ 50 ] a playtest was launched for Roll20's Pro subscribers, [ 51 ] which was later expanded to their Plus subscribers in November of the same year. [ 52 ] With the game's launch, multiple Actual Play campaigns were started on Twitch . Jacob Brogan, in a review of Lost Mine of Phandelver on Roll20 for Slate in 2016, commented that "our experience wasn't always seamless at first" and that "all of this data also taxed my computer's resources, crashing my browser outright on at least one occasion. [...] In time, I overcame most of those hurdles, however, partly because Lost Mines has been so well implemented here. [...] Though working through it still requires care and preparation—much as its predigital version would—there's more than enough in the virtual package to while away hours with your fellow gamers, however far away they may be. More than any other virtual gaming system I've played with, Roll20's Lost Mines captured what it's like to delve into dungeons". [ 35 ] Ryan Hiller, for GeekDad in 2017, stated that " Roll20 is an industry leading web and tablet based virtual-tabletop application" and that " Roll20 is one of my must have digital tools for roleplaying". 
[ 53 ] Hiller highlighted the fog-of-war and dynamic lighting features – "in a virtual game, each player would see only what they could see from where their specific character is standing and with the light they have available. This adds a whole new depth to the game as some players see encounters from entirely different perspectives, and areas of shadow become evident for use in concealment. Suddenly the rogue becomes much more interesting". [ 53 ] Tyler Wilde, for PC Gamer in 2017, compared using Roll20 and Tabletop Simulator to play Dungeons & Dragons . He wrote that Roll20 "is the cheaper, more practical solution for remote D&D: a clean mapping interface, easy access to official reference material, built-in video chat, and quick dice rolls. More serious players will probably prefer it". [ 54 ] Leif Johnson, in a 2020 update on virtual tabletops for PC Gamer , wrote that Roll20 "allows a dizzying range of customization for maps, tokens, and more. Its menus are a bit drab, but they're intuitive almost to the point of genius, and the package is especially celebrated for its fantastic line-of-sight dynamic lighting system". [ 55 ] However, the platform has some drawbacks such as "it's browser-based, which means your gameplay's subject to the vagaries of the server. It may cost nothing up front, but the free version restricts you to 100 MB for uploadable assets; to get 1GB, you'll need to fork over $4.99 a month or $49 per year. You also can't use the dynamic lighting functions unless you pay the sub, although you'll still have a fog of war option if you choose not to pay. But these are hardly deal killers. If you're relatively new to D&D and want a friendly place to hop in, Roll20's probably the best place to do it outside of a dining room table with friends". 
[ 55 ] Ari Szporn, for CBR in 2020, highlighted that Roll20 "provides integrated audio and video chat functions in an attempt to provide as comprehensive an experience as possible" and that the marketplace has third-party content creators who "can upload their own tokens, map tiles, pre-written adventures and more for members to purchase. Roll20 also has a 'Looking For Group' service to help players and DMs find new people to play with". [ 56 ] Szporn also commented on Roll20's subscription service and stated that the free tier is "the best option for new players but is not recommended for DMs due to its limited access to Roll20 's more advanced features". [ 56 ] Luc Tran, in a separate review of various virtual tabletops for CBR, wrote that Roll20 has "a straightforward design tool for maps, dungeons and towns, as well as the ability to create and name multiple simple commands for actions like dice rolling [...]. While Roll20 is great, the fact that it is not licensed by Wizards of the Coast means it lacks a lot of official D&D material. Unless players choose to purchase specific game compendiums, D&D -specific characters, races, monsters and items will either have to be recreated in Roll20 or you'll have to find suitable replacements". [ 57 ] Academics Daniel Lawson and Justin Wigard, in the book Roleplaying Games in the Digital Age: Essays on Transmedia Storytelling, Tabletop RPGs and Fandom (2021), examined Roll20 as a digital space and the potential barriers to entry in play, such as the digital divide and various disabilities. [ 58 ] They reviewed the levels of subscription and wrote that "Roll20 indelibly connects functionality to money. Thus, higher levels of subscription offer increased modes of accessibility in terms of available functionality within Roll20. In brief, money purchases remediative features—and thus rhetorical agency— in these game spaces. [...] 
Roll20 provides easy-to-use tools for integrating external assets, but incentivizes purchas[ing] assets which dramatically reduce accessibility barriers through ease of access". [ 58 ] : 103 Roll20 was named the Gold Winner in the "Best Software" category of the ENnie Awards in 2013, [ 59 ] 2014, [ 60 ] 2015, [ 61 ] and 2016. [ 62 ]
https://en.wikipedia.org/wiki/Roll20
The roll center of a vehicle is the notional point at which the cornering forces in the suspension are reacted to the vehicle body. There are two definitions of roll center. The most commonly used is the geometric (or kinematic) roll center, whereas the Society of Automotive Engineers uses a force-based definition. [ 1 ] The lateral location of the roll center is typically at the center-line of the vehicle when the suspension on the left and right sides of the car are mirror images of each other. The significance of the roll center can only be appreciated when the vehicle's center of mass is also considered. If there is a difference between the position of the center of mass and the roll center, a moment arm is created. When the vehicle corners, the length of this moment arm, combined with the stiffness of the springs and possibly anti-roll bars (also called 'anti-sway bars'), defines how much the vehicle will roll. This has other effects too, such as dynamic load transfer. When the vehicle rolls, the roll centers migrate. The roll center height has been shown to affect behavior at the initiation of turns, such as nimbleness and initial roll control. Current methods of analyzing individual wheel instant centers have yielded more intuitive results for the effects of non-rolling weight transfer. This type of analysis is better known as the lateral-anti method: one takes the individual instant center locations of each corner of the car and then calculates the resultant vertical reaction vector due to lateral force. This value is then taken into account in the calculation of a jacking force and lateral weight transfer. This method works particularly well where there are asymmetries in left-to-right suspension geometry. The practical equivalent of the above is to push laterally at the tire contact patch and measure the ratio of the change in vertical load to the horizontal force.
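The geometric construction can be sketched in code. The following is a minimal, illustrative sketch (not from the article; the coordinates, track width, and instant-center positions are invented): each wheel's instant center is joined to its tire contact patch, and the geometric roll center is where the left and right lines intersect, which falls on the centerline for mirror-image geometry. Coordinates are front-view, in metres (first value lateral, second vertical).

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line through p1-p2 with the line through p3-p4 (2D)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        raise ValueError("lines are parallel")
    t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

def roll_center(left_patch, left_ic, right_patch, right_ic):
    """Geometric roll center: intersection of the two contact-patch-to-instant-center lines."""
    return line_intersection(left_patch, left_ic, right_patch, right_ic)

# Invented example: 1.6 m track, instant centers 0.25 m above ground
# on the opposite side of the car (a common double-wishbone layout).
rc = roll_center((-0.8, 0.0), (0.6, 0.25), (0.8, 0.0), (-0.6, 0.25))
```

For this symmetric example the roll center lands on the centerline, a short distance above the ground; with asymmetric geometry the same intersection moves laterally, which is one reason the per-corner (lateral-anti) analysis described above is preferred in those cases.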
https://en.wikipedia.org/wiki/Roll_center
Roll moment is a moment, a product of a force and a distance, that tends to cause a vehicle to roll, that is, to rotate about its longitudinal axis. [ 1 ] In vehicle dynamics, the roll moment can be calculated as the product of three quantities: the sprung mass, the lateral acceleration, and the perpendicular distance from the sprung-mass center of mass to the roll axis. In two-axle vehicles, such as cars and some trucks, the roll axis may be found by connecting the roll center of each axle by a straight line. [ 1 ] In single-track vehicles, such as bicycles and motorcycles, the roll axis may be found by connecting the contact patches of each tire by a straight line. In aeronautics, the roll moment is the product of an aerodynamic force and the distance between where it is applied and the aircraft's center of mass that tends to cause the aircraft to rotate about its roll axis. The roll axis is usually defined as the longitudinal axis, which runs from the nose to the tail of the aircraft. A roll moment can be the result of wind gusts, control surfaces such as ailerons, or simply flying at an angle of sideslip. See flight dynamics. In watercraft, roll is the rotation around the ship's longitudinal (front-back or bow-stern) axis. Heel refers to an offset from normal on this axis that is intentional or expected, as caused by wind pressure on sails, turning, or other crew actions. List refers to an unintentional or unexpected offset, as caused by flooding, battle damage, shifting cargo, etc.
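In vehicle-dynamics terms, the sprung-mass roll moment is commonly approximated as sprung mass × lateral acceleration × roll-moment arm (the perpendicular distance from the sprung center of mass to the roll axis). A minimal sketch, with illustrative numbers rather than data for any real vehicle:

```python
def roll_moment(sprung_mass_kg, lateral_accel_ms2, moment_arm_m):
    """Roll moment (N*m) about the roll axis: the product of sprung mass,
    lateral acceleration, and the perpendicular distance from the
    sprung-mass centre of mass to the roll axis."""
    return sprung_mass_kg * lateral_accel_ms2 * moment_arm_m

# Invented example: 1400 kg sprung mass cornering at 0.8 g
# with a 0.45 m moment arm.
m = roll_moment(1400, 0.8 * 9.81, 0.45)   # roughly 4.9 kN*m
```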
https://en.wikipedia.org/wiki/Roll_moment
In calculus , Rolle's theorem or Rolle's lemma essentially states that any real-valued differentiable function that attains equal values at two distinct points must have at least one point, somewhere between them, at which the slope of the tangent line is zero. Such a point is known as a stationary point . It is a point at which the first derivative of the function is zero. The theorem is named after Michel Rolle . If a real -valued function f is continuous on a proper closed interval [ a , b ] , differentiable on the open interval ( a , b ) , and f ( a ) = f ( b ) , then there exists at least one c in the open interval ( a , b ) such that f ′ ( c ) = 0. {\displaystyle f'(c)=0.} This version of Rolle's theorem is used to prove the mean value theorem , of which Rolle's theorem is indeed a special case. It is also the basis for the proof of Taylor's theorem . Although the theorem is named after Michel Rolle , Rolle's 1691 proof covered only the case of polynomial functions. His proof did not use the methods of differential calculus , which at that point in his life he considered to be fallacious. The theorem was first proved by Cauchy in 1823 as a corollary of a proof of the mean value theorem . [ 1 ] The name "Rolle's theorem" was first used by Moritz Wilhelm Drobisch of Germany in 1834 and by Giusto Bellavitis of Italy in 1846. [ 2 ] For a radius r > 0 , consider the function f ( x ) = r 2 − x 2 , x ∈ [ − r , r ] . {\displaystyle f(x)={\sqrt {r^{2}-x^{2}}},\quad x\in [-r,r].} Its graph is the upper semicircle centered at the origin. This function is continuous on the closed interval [− r , r ] and differentiable in the open interval (− r , r ) , but not differentiable at the endpoints − r and r . Since f (− r ) = f ( r ) , Rolle's theorem applies, and indeed, there is a point where the derivative of f is zero. 
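The semicircle example can be checked numerically. This short illustrative sketch approximates f′ with a central difference and confirms that the endpoint values agree and that the tangent at the interior point c = 0 is horizontal:

```python
import math

# Rolle's theorem on the upper semicircle f(x) = sqrt(r^2 - x^2), r = 1:
# f(-1) = f(1) = 0, and f'(c) = 0 at the interior point c = 0.

def f(x, r=1.0):
    return math.sqrt(r * r - x * x)

def derivative(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

# The endpoint values agree...
assert f(-1.0) == f(1.0) == 0.0
# ...and the slope at the interior point c = 0 is (numerically) zero.
slope_at_c = derivative(f, 0.0)
```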
The theorem applies even when the function cannot be differentiated at the endpoints because it only requires the function to be differentiable in the open interval. If differentiability fails at an interior point of the interval, the conclusion of Rolle's theorem may not hold. Consider the absolute value function f ( x ) = | x | , x ∈ [ − 1 , 1 ] . {\displaystyle f(x)=|x|,\quad x\in [-1,1].} Then f (−1) = f (1) , but there is no c between −1 and 1 for which f ′( c ) is zero. This is because the function, although continuous, is not differentiable at x = 0 . The derivative of f changes its sign at x = 0 , but without attaining the value 0. The theorem cannot be applied to this function because it does not satisfy the condition that the function must be differentiable for every x in the open interval. However, when the differentiability requirement is dropped from Rolle's theorem, f will still have a critical number in the open interval ( a , b ) , but it may not yield a horizontal tangent (as in the case of the absolute value represented in the graph). Rolle's theorem implies that a differentiable function whose derivative is ⁠ 0 {\displaystyle 0} ⁠ in an interval is constant in this interval. Indeed, if a and b are two points in an interval where a function f is differentiable, then the function g ( x ) = f ( x ) − f ( a ) − f ( b ) − f ( a ) b − a ( x − a ) {\displaystyle g(x)=f(x)-f(a)-{\frac {f(b)-f(a)}{b-a}}(x-a)} satisfies the hypotheses of Rolle's theorem on the interval ⁠ [ a , b ] {\displaystyle [a,b]} ⁠ . If the derivative of ⁠ f {\displaystyle f} ⁠ is zero everywhere, the derivative of ⁠ g {\displaystyle g} ⁠ is g ′ ( x ) = − f ( b ) − f ( a ) b − a , {\displaystyle g'(x)=-{\frac {f(b)-f(a)}{b-a}},} and Rolle's theorem implies that there is ⁠ c ∈ ( a , b ) {\displaystyle c\in (a,b)} ⁠ such that 0 = g ′ ( c ) = − f ( b ) − f ( a ) b − a . 
{\displaystyle 0=g'(c)=-{\frac {f(b)-f(a)}{b-a}}.} Hence, ⁠ f ( a ) = f ( b ) {\displaystyle f(a)=f(b)} ⁠ for every ⁠ a {\displaystyle a} ⁠ and ⁠ b {\displaystyle b} ⁠ , and the function ⁠ f {\displaystyle f} ⁠ is constant. The second example illustrates the following generalization of Rolle's theorem: Consider a real-valued, continuous function f on a closed interval [ a , b ] with f ( a ) = f ( b ) . If for every x in the open interval ( a , b ) the right-hand limit f ′ ( x + ) := lim h → 0 + f ( x + h ) − f ( x ) h {\displaystyle f'(x^{+}):=\lim _{h\to 0^{+}}{\frac {f(x+h)-f(x)}{h}}} and the left-hand limit f ′ ( x − ) := lim h → 0 − f ( x + h ) − f ( x ) h {\displaystyle f'(x^{-}):=\lim _{h\to 0^{-}}{\frac {f(x+h)-f(x)}{h}}} exist in the extended real line [−∞, ∞] , then there is some number c in the open interval ( a , b ) such that one of the two limits f ′ ( c + ) and f ′ ( c − ) {\displaystyle f'(c^{+})\quad {\text{and}}\quad f'(c^{-})} is ≥ 0 and the other one is ≤ 0 (in the extended real line). If the right- and left-hand limits agree for every x , then they agree in particular for c , hence the derivative of f exists at c and is equal to zero. Since the proof for the standard version of Rolle's theorem and the generalization are very similar, we prove the generalization. The idea of the proof is to argue that if f ( a ) = f ( b ) , then f must attain either a maximum or a minimum somewhere between a and b , say at c , and the function must change from increasing to decreasing (or the other way around) at c . In particular, if the derivative exists, it must be zero at c . By assumption, f is continuous on [ a , b ] , and by the extreme value theorem attains both its maximum and its minimum in [ a , b ] . If these are both attained at the endpoints of [ a , b ] , then f is constant on [ a , b ] and so the derivative of f is zero at every point in ( a , b ) . 
Suppose then that the maximum is obtained at an interior point c of ( a , b ) (the argument for the minimum is very similar, just consider − f ). We shall examine the above right- and left-hand limits separately. For a real h such that c + h is in [ a , b ] , the value f ( c + h ) is smaller than or equal to f ( c ) because f attains its maximum at c . Therefore, for every h > 0 , f ( c + h ) − f ( c ) h ≤ 0 , {\displaystyle {\frac {f(c+h)-f(c)}{h}}\leq 0,} hence f ′ ( c + ) := lim h → 0 + f ( c + h ) − f ( c ) h ≤ 0 , {\displaystyle f'(c^{+}):=\lim _{h\to 0^{+}}{\frac {f(c+h)-f(c)}{h}}\leq 0,} where the limit exists by assumption (it may be minus infinity). Similarly, for every h < 0 , the inequality turns around because the denominator is now negative and we get f ( c + h ) − f ( c ) h ≥ 0 , {\displaystyle {\frac {f(c+h)-f(c)}{h}}\geq 0,} hence f ′ ( c − ) := lim h → 0 − f ( c + h ) − f ( c ) h ≥ 0 , {\displaystyle f'(c^{-}):=\lim _{h\to 0^{-}}{\frac {f(c+h)-f(c)}{h}}\geq 0,} where the limit might be plus infinity. Finally, when the above right- and left-hand limits agree (in particular when f is differentiable), then the derivative of f at c must be zero. (Alternatively, we can apply Fermat's stationary point theorem directly.) We can also generalize Rolle's theorem by requiring that f has more points with equal values and greater regularity. Specifically, suppose that the function f is n − 1 times continuously differentiable on the closed interval [ a , b ] and the n th derivative exists on the open interval ( a , b ) , and that there are n intervals given by a 1 < b 1 ≤ a 2 < b 2 ≤ ⋯ ≤ a n < b n in [ a , b ] such that f ( a k ) = f ( b k ) for every k from 1 to n . Then there is a number c in ( a , b ) such that the n th derivative of f at c is zero. The requirements concerning the n th derivative of f can be weakened as in the generalization above, giving the corresponding (possibly weaker) assertions for the right- and left-hand limits defined above with f ( n − 1) in place of f . In particular, this version of the theorem asserts that if a function differentiable enough times has n roots (so they all have the same value, namely 0), then there is an internal point where f ( n − 1) vanishes. The proof uses mathematical induction . 
The case n = 1 is simply the standard version of Rolle's theorem. For n > 1 , take as the induction hypothesis that the generalization is true for n − 1 . We want to prove it for n . Assume the function f satisfies the hypotheses of the theorem. By the standard version of Rolle's theorem, for every integer k from 1 to n , there exists a c k in the open interval ( a k , b k ) such that f ′( c k ) = 0 . Hence, the first derivative satisfies the assumptions on the n − 1 closed intervals [ c 1 , c 2 ], …, [ c n − 1 , c n ] . By the induction hypothesis, there is a c such that the ( n − 1) st derivative of f ′ at c is zero. Rolle's theorem is a property of differentiable functions over the real numbers, which are an ordered field . As such, it does not generalize to other fields , but the following corollary does: if a real polynomial factors (has all of its roots) over the real numbers, then its derivative does as well. One may call this property of a field Rolle's property . [ citation needed ] More general fields do not always have differentiable functions, but they do always have polynomials, which can be symbolically differentiated. Similarly, more general fields may not have an order, but one has a notion of a root of a polynomial lying in a field. Thus Rolle's theorem shows that the real numbers have Rolle's property. Any algebraically closed field such as the complex numbers has Rolle's property. However, the rational numbers do not – for example, x 3 − x = x ( x − 1)( x + 1) factors over the rationals , but its derivative, 3 x 2 − 1 = 3 ( x − 1 3 ) ( x + 1 3 ) , {\displaystyle 3x^{2}-1=3\left(x-{\tfrac {1}{\sqrt {3}}}\right)\left(x+{\tfrac {1}{\sqrt {3}}}\right),} does not. The question of which fields satisfy Rolle's property was raised in Kaplansky 1972 . [ 4 ] For finite fields , the answer is that only F 2 and F 4 have Rolle's property. [ 5 ] [ 6 ] For a complex version, see Voorhoeve index .
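The failure of Rolle's property over the rationals can be verified mechanically. The sketch below checks that 0, 1, and −1 are roots of x³ − x, and that no candidate rational root of 3x² − 1 actually vanishes:

```python
from fractions import Fraction

# Illustration of the Rolle's-property example above: x^3 - x factors
# completely over the rationals (roots 0, 1, -1), but its derivative
# 3x^2 - 1 has no rational root. By the rational root theorem, any
# rational root of 3x^2 - 1 would have to be +-1 or +-1/3; none works.

p_roots = [Fraction(0), Fraction(1), Fraction(-1)]
assert all(r ** 3 - r == 0 for r in p_roots)

candidates = [Fraction(s, d) for s in (1, -1) for d in (1, 3)]
assert all(3 * c ** 2 - 1 != 0 for c in candidates)
```

Exact `Fraction` arithmetic is used so the checks are genuine rationality tests rather than floating-point approximations.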
https://en.wikipedia.org/wiki/Rolle's_theorem
A roller-cone bit is a drill bit used for drilling through rock that features 2 or 3 abrasive, spinning cones that break up rock and sediment as they grind against it. Roller-cone bits are typically used when drilling for oil and gas. [ 1 ] A water jet flowing through the bit washes out the rock in a slurry. [ 2 ] The oil boom in the southern United States in the early 20th century led to the need for higher-efficiency drill bits for well boring. After the Spindletop Gusher, Howard R. Hughes Sr. recognized the growing demand for oil and the ineffectiveness of the standard fishtail bit against harder rock formations. The first roller-cone patent, for the rotary rock bit, was issued to American businessman and inventor Howard Hughes Sr. in 1909. It consisted of two interlocking cones. American businessman Walter Benona Sharp worked very closely with Hughes in developing the rock bit. The success of this bit led to the founding of the Sharp-Hughes Tool Company. In 1933 two Hughes engineers, one of whom was Ralph Neuhaus, invented the tricone bit, which has three roller cones. The Hughes patent for the tricone bit lasted until 1951, after which other companies made similar bits. However, Hughes still held 40% of the world's drill-bit market in 2000. [ 3 ] Roller-cone bits are characterized by the use of rolling cones at the head rather than the typical auger-type design of most drill bits. Rock hardness is one of the determining factors taken into account when selecting an appropriate drill bit. The cutting structure of the bits varies according to the rock formation. Softer formations are drilled with a roller-cone bit with widely spaced, long protruding teeth, whereas harder formations are drilled with closer-spaced and shorter-toothed bits. Roller-cone bits are versatile and can cut through many formation types. Cones with machined heads are used in applications where tooth wear on the bit is negligible. 
These bits are usually machined from steel and have larger, more aggressive teeth. On the other hand, cones with tungsten inserts are used in high-wear applications, where it is preferable to replace only the inserts and not the entire drill bit. They also tend to have smaller teeth. [ 4 ] [ 5 ] Drilling bits are attachments added to the end of a drillstring to perform the cutting necessary to penetrate the many rock layers between Earth's surface and oil and gas reservoirs. Once a hole is drilled, appropriate casings may be inserted to seal the wellbore formation. [ 6 ] The bits are further classified by their internal bearings. Each bit has three rotating cones, and each one rotates on its own axis during drilling. While the bit is fixed to the drillstring, the drill pipe rotates in a clockwise direction and the roller cones rotate in an anti-clockwise direction, each roller cone rotating on its own axis with the help of a bearing. The bearings are classified mainly into three types: open-bearing bits, sealed-bearing bits, and journal-bearing bits. [ 7 ]
https://en.wikipedia.org/wiki/Roller_cone_bit
A roller dam is a type of hydro-control device specially designed to mitigate erosion. They are most often used to divert water for irrigation, but the largest and most notable examples are used to ease river navigation. The world's first roller dam (German: Walzenwehr) was constructed in Schweinfurt, Germany in 1902 to divert irrigation water south of the Main river. [ 1 ] [ 2 ] Roller dams are a type of weir, or a dam that is designed to allow water to flow over the top in continuous action. They are used on rivers or other such moving bodies of water where erosion damage is undesirable, yet likely to occur. A short wall, lip, or parabolic channel is constructed on the downstream side of the dam parallel to the dam face. As the water pouring over the dam hits this baffle, it is reflected toward the dam face, creating a continual "rolling" action at the foot of the dam; hence the name "roller dam". The purpose of the rolling is to dissipate the energy gained by the water as it falls from the top of the dam. Otherwise the energy would be exerted downstream, causing significant bank and river-bed erosion. Roller dams can be either fixed (non-moving) or active. Fixed roller dams are generally made from reinforced concrete or masonry. Active roller dams are made from large metal cylinders, which can be lifted out of the water using a system of powerful hydraulic rams or cables and motors. This type is also known as a roller gate. The largest of the active dams in the world is Locks and Dam 15, which spans the Mississippi River between Rock Island, Illinois, and Davenport, Iowa. Roller dams of any type pose an extreme drowning hazard. Any person going over the top of the dam will be caught in the rolling action at its base and may not be ejected from the cycle for days or possibly weeks. 
Even very buoyant objects, such as inflatable balls, inner tubes, and life vests, can often be seen resurfacing near the downstream face every few seconds for several hours before escaping the so-called "washing machine of death". Because of the hazards, dam opponents have called for the removal of roller dams. Sixteen people have died by drowning at the roller dam on the Fox River near Yorkville, Illinois , since its construction in 1960. Most recently in a single accident on May 28, 2006, three persons died. [ 3 ] Similar dams have already been removed on the Fox River at Aurora, Illinois , and near Batavia, Illinois , but Yorkville residents successfully petitioned to maintain the roller dam near their town because of tradition. [ 4 ] In July 2009, a man died at a roller dam on the Cedar River in Cedar Rapids, Iowa . His death was notable because he was wearing an approved personal flotation device, intended to help bring a person quickly to the surface of the water. [ 5 ]
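The submerged "roller" at the foot of such a dam is a form of hydraulic jump, and the energy it dissipates can be estimated with standard open-channel relations (textbook formulas, not taken from this article): the conjugate-depth equation y₂ = (y₁/2)(√(1 + 8 Fr₁²) − 1) and the head loss ΔE = (y₂ − y₁)³ / (4 y₁ y₂). The numbers below are purely illustrative.

```python
import math

def conjugate_depth(y1, v1, g=9.81):
    """Downstream (conjugate) depth of a hydraulic jump from the
    upstream depth y1 (m) and velocity v1 (m/s)."""
    fr1 = v1 / math.sqrt(g * y1)          # upstream Froude number
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * fr1 ** 2) - 1.0)

def head_loss(y1, y2):
    """Energy head (m) dissipated across the jump."""
    return (y2 - y1) ** 3 / (4.0 * y1 * y2)

# Illustrative flow at the toe of a dam: 0.3 m deep, moving at 6 m/s.
y2 = conjugate_depth(0.3, 6.0)    # jump rises to roughly 1.3 m
dE = head_loss(0.3, y2)           # roughly 0.7 m of head dissipated
```

The dissipated head `dE` is the energy that would otherwise scour the bank and river bed downstream, which is the purpose of the rolling action described above.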
https://en.wikipedia.org/wiki/Roller_dam
The roller electrospinning system, a method for producing nanofibres, was developed by Jirsak et al. [ 1 ] This method is known under the name Nanospider, from the Elmarco Company in Liberec. Roller electrospinning is a unique method that has been used in industry to produce nanofibres continuously. The Nanospider uses a rotating roller to spin fibres directly from the polymer solution. This roller spinning electrode is partially immersed in a tank with the polymer solution. A grounded collector electrode is placed above the spinner. A nonwoven backing material moves along the collector electrode, which makes the production of the nanofibre layer a continuous process. Many Taylor cones are simultaneously formed on the surface of the rotating spinning electrode, which makes the technology highly productive. [ 2 ] [ 3 ] There are independent and dependent parameters for roller electrospinning.
https://en.wikipedia.org/wiki/Roller_electrospinning
Roller mills are mills that use cylindrical rollers, either in opposing pairs or against flat plates, to crush or grind various materials, such as grain, ore, gravel, plastic, and others. Roller grain mills are an alternative to traditional millstone arrangements in gristmills. Roller mills for rock complement other types of mills, such as ball mills and hammermills, in such industries as the mining and processing of ore and construction aggregate; cement milling; and recycling. Two-roller mills are the simplest variety, in which the material is crushed between two rollers before it continues on to its final destination. The spacing between the rollers can be adjusted by the operator. Narrower spacing usually crushes the material into smaller pieces. Four-roller mills have two sets of rollers. In a four-roller mill, the grain first goes through rollers with a rather wide gap, which separates the seed from the husk without much damage to the husk, but leaves large grits. Flour is sieved out of the cracked grain, and then the coarse grist and husks are sent through the second set of rollers, which further crush the grist without damaging the husks. Similarly, there are three-roller mills, in which one of the rollers is used twice. Six-roller mills have three sets of rollers. In this type of mill, the first set of rollers crushes the whole kernel, and its output is divided three ways: flour immediately leaves the mill, grits without a husk proceed to the last set of rollers, and husks, possibly still containing parts of the seed, go to the second set of rollers. From the second set, flour is directly output, as are the husks and any seed still in them, while the husk-free grits are channeled into the last set of rollers. Five-roller mills are six-roller mills in which one of the rollers performs double duty. In the 19th century roller mills were adapted to gristmills before replacing them. The mills used either steel or porcelain rollers. 
[ 1 ] Between the years 1865 and 1872, the Hungarian milling industry upgraded and expanded the use of stone mills combined with roller mills in a process known as Hungarian high milling. Hungarian hard wheat so milled was claimed as integral to the "First in the world" success of the Vienna Bakery at the 1867 Paris Exposition. [ 2 ] A motor or other prime mover drives the hanger of the grinding roller through a V-pulley and center bearing. The roller, which is hung by a bearing and pendulum shaft, rolls along the inner circle of the roll ring as the hanger rotates. A dust-removal blower generates negative pressure at the inlet and outlet of the grinder to contain dust and to carry off the heat generated in the machine. The modern roller mill was re-invented by the Hungarian engineer András Mechwart in 1874, and the design quickly spread to other parts of Europe and the Americas.
https://en.wikipedia.org/wiki/Roller_mill
In mechanical engineering, a rolling-element bearing, also known as a rolling bearing, [ 1 ] is a bearing which carries a load by placing rolling elements (such as balls, cylinders, or cones) between two concentric, grooved rings called races. The relative motion of the races causes the rolling elements to roll with very little rolling resistance and with little sliding. One of the earliest and best-known rolling-element bearings is a set of logs laid on the ground with a large stone block on top. As the stone is pulled, the logs roll along the ground with little sliding friction. As each log comes out the back, it is moved to the front, where the block then rolls onto it. It is possible to imitate such a bearing by placing several pens or pencils on a table and placing an item on top of them. See "bearings" for more on the historical development of bearings. A rolling-element rotary bearing uses a shaft in a much larger hole, and spheres or cylinders called "rollers" tightly fill the space between the shaft and the hole. As the shaft turns, each roller acts as the logs in the above example. However, since the bearing is round, the rollers never fall out from under the load. Rolling-element bearings offer a good trade-off between cost, size, weight, carrying capacity, durability, accuracy, friction, and so on. Other bearing designs are often better on one specific attribute, but worse in most other attributes, although fluid bearings can sometimes simultaneously outperform them on carrying capacity, durability, accuracy, friction, rotation rate, and sometimes cost. Only plain bearings are used as widely as rolling-element bearings. Rolling-element bearings are commonly used in automotive, industrial, marine, and aerospace applications, and are essential to much of modern technology. The rolling-element bearing was developed from a firm foundation that was built over thousands of years. The concept emerged in its primitive form in Roman times. 
[ 2 ] After a long inactive period in the Middle Ages, it was revived during the Renaissance by Leonardo da Vinci, and developed steadily in the seventeenth and eighteenth centuries. In terms of design, bearings, especially rolling-element bearings, share a common layout: an outer and an inner track (race), a central bore, a retainer that keeps the rolling elements from clashing into one another or seizing the bearing's movement, and the rolling elements themselves. [ 1 ] The internal rolling components may differ in design according to the bearing's intended application. The five main types of rolling element are ball, cylindrical, tapered, barrel, and needle. [ 2 ] Ball - the simplest, following the basic principles with minimal design complexity; seizure is more likely because of the freedom allowed by the track design. Cylindrical - for single-axis, straight directional movement; the shape puts more surface area in contact, allowing more weight to be moved with less force. Tapered - primarily designed to take both axial loading [ 7 ] and radial loading, [ 8 ] using a conical structure that enables the elements to roll diagonally. Barrel - accommodates high radial shock loads that cause misalignment, using its shape and size for compensation. [ 9 ] Needle - varying in size, diameter, and material, these bearings are best suited to reducing weight and to small cross-section applications, typically with a higher load capacity than ball bearings, in rigid-shaft applications. [ 10 ] A particularly common kind of rolling-element bearing is the ball bearing. The bearing has inner and outer races between which balls roll. Each race features a groove usually shaped so the ball fits slightly loosely. Thus, in principle, the ball contacts each race across a very narrow area. 
However, a load on an infinitely small point would cause infinitely high contact pressure. In practice, the ball deforms (flattens) slightly where it contacts each race, much as a tire flattens where it contacts the road. The race also yields slightly where each ball presses against it. Thus, the contact between ball and race is of finite size and has finite pressure. The deformed ball and race do not roll entirely smoothly because different parts of the ball are moving at different speeds as it rolls. Thus, there are opposing forces and sliding motions at each ball/race contact. Overall, these cause bearing drag. Roller bearings are the earliest known type of rolling-element bearing, dating back to at least 40 BC. Common roller bearings use cylinders of slightly greater length than diameter. Roller bearings typically have a higher radial load capacity than ball bearings, but a lower capacity and higher friction under axial loads. If the inner and outer races are misaligned, the bearing capacity often drops quickly compared to either a ball bearing or a spherical roller bearing. As in all radial bearings, the outer load is continuously re-distributed among the rollers. Often fewer than half of the total number of rollers carry a significant portion of the load. The animation on the right shows how a static radial load is supported by the bearing rollers as the inner ring rotates. Spherical roller bearings have an outer race with an internal spherical shape. The rollers are thicker in the middle and thinner at the ends. Spherical roller bearings can thus accommodate both static and dynamic misalignment. However, spherical rollers are difficult to produce and thus expensive, and the bearings have higher friction than an ideal cylindrical or tapered roller bearing, since there will be a certain amount of sliding between rolling elements and races. Gear bearings are similar to epicyclic gearing. 
They consist of a number of smaller 'satellite' gears which revolve around the center of the bearing along a track on the outsides of the internal and satellite gears, and on the inside of the external gear. The downside to this bearing is manufacturing complexity. Tapered roller bearings use conical rollers that run on conical races. Most roller bearings only take radial or axial loads, but tapered roller bearings support both radial and axial loads, and generally can carry higher loads than ball bearings due to greater contact area. Tapered roller bearings are used, for example, as the wheel bearings of most wheeled land vehicles. The downsides to this bearing are that, due to manufacturing complexities, tapered roller bearings are usually more expensive than ball bearings, and that under heavy loads the tapered roller acts like a wedge, so bearing loads tend to eject the roller; the force from the collar which keeps the roller in the bearing adds to bearing friction compared to ball bearings. The needle roller bearing is a special type of roller bearing which uses long, thin cylindrical rollers resembling needles. Often the ends of the rollers taper to points, which are used to keep the rollers captive, or they may be hemispherical and not captive but held by the shaft itself or a similar arrangement. Since the rollers are thin, the outside diameter of the bearing is only slightly larger than the hole in the middle. However, the small-diameter rollers must bend sharply where they contact the races, and thus the bearing fatigues relatively quickly. CARB bearings are toroidal roller bearings, similar to spherical roller bearings, but able to accommodate both angular misalignment and axial displacement. [ 11 ] Compared to a spherical roller bearing, their radius of curvature is longer than a spherical radius would be, making them an intermediate form between spherical and cylindrical rollers. 
Their limitation is that, like a cylindrical roller, they do not locate axially. CARB bearings are typically used in pairs with a locating bearing, such as a spherical roller bearing . [ 11 ] This non-locating bearing can be an advantage, as it can be used to allow a shaft and a housing to undergo thermal expansion independently. Toroidal roller bearings were introduced in 1995 by SKF as "CARB bearings". [ 10 ] The inventor behind the bearing was the engineer Magnus Kellström. [ 12 ] The configuration of the races determines the types of motions and loads that a bearing can best support. A given configuration can serve several of the following types of loading. Thrust bearings are used to support axial loads, such as those on vertical shafts. Common designs are thrust ball bearings , spherical roller thrust bearings , tapered roller thrust bearings or cylindrical roller thrust bearings. Non-rolling-element bearings such as hydrostatic or magnetic bearings also see some use where particularly heavy loads or particularly low friction are needed. Rolling-element bearings are often used for axles due to their low rolling friction. For light loads, such as bicycles, ball bearings are often used. For heavy loads, and where the loads can greatly change during cornering, as in cars and trucks, tapered roller bearings are used. Linear motion rolling-element bearings are typically designed for either shafts or flat surfaces. Flat surface bearings often consist of rollers mounted in a cage, which is then placed between the two flat surfaces; a common example is drawer-support hardware. Rolling-element bearings for a shaft use bearing balls in a groove designed to recirculate them from one end to the other as the bearing moves; as such, they are called linear ball bearings [ 13 ] or recirculating bearings .
For example, with a stationary (non-rotating) load, small vibrations can gradually press out the lubricant between the races and rollers or balls ( false brinelling ). Without lubricant the bearing fails, even though it is not rotating and thus is apparently not being used. For these sorts of reasons, much of bearing design is about failure analysis. Vibration-based analysis can be used for fault identification of bearings. [ 14 ] There are three usual limits to the lifetime or load capacity of a bearing: abrasion, fatigue and pressure-induced welding. Although there are many other apparent causes of bearing failure, most can be reduced to these three. For example, a bearing which is run dry of lubricant fails not because it is "without lubricant", but because lack of lubrication leads to fatigue and welding, and the resulting wear debris can cause abrasion. Similar events occur in false brinelling damage. In high speed applications, the oil flow also reduces the bearing metal temperature by convection. The oil becomes the heat sink for the friction losses generated by the bearing. ISO has categorised bearing failure modes in the standard ISO 15243. The life of a rolling bearing is expressed as the number of revolutions or the number of operating hours at a given speed that the bearing is capable of enduring before the first sign of metal fatigue (also known as spalling ) occurs on the race of the inner or outer ring, or on a rolling element. Calculating the endurance life of bearings is possible with the help of so-called life models. More specifically, life models are used to determine the bearing size, since the size must be sufficient to ensure that the bearing is strong enough to deliver the required life under certain defined operating conditions. Under controlled laboratory conditions, however, seemingly identical bearings operating under identical conditions can have different individual endurance lives.
Thus, bearing life cannot be calculated for an individual bearing, but is instead expressed in statistical terms, referring to populations of bearings. All information with regard to load ratings is then based on the life that 90% of a sufficiently large group of apparently identical bearings can be expected to attain or exceed. This gives a clearer definition of the concept of bearing life, which is essential to calculate the correct bearing size. Life models can thus help to predict the performance of a bearing more realistically. The prediction of bearing life is described in ISO 281 [ 15 ] and the ANSI /American Bearing Manufacturers Association Standards 9 and 11. [ 16 ] The traditional life prediction model for rolling-element bearings uses the basic life equation: [ 17 ] L10 = (C/P)^p, where the basic life L10 is the life (in millions of revolutions) that 90% of bearings can be expected to reach or exceed, C is the basic dynamic load rating, P is the equivalent dynamic bearing load, and p is the life exponent (classically 3 for ball bearings and 10/3 for roller bearings). [ 15 ] The median or average life, sometimes called Mean Time Between Failure (MTBF), is about five times the calculated basic rating life. [ 17 ] Several factors, the ' ASME five factor model', [ 18 ] can be used to further adjust the L10 life depending upon the desired reliability, lubrication, contamination, etc. The major implication of this model is that bearing life is finite, and falls as roughly the cube of the ratio between applied load and design load. The model was developed through work by Arvid Palmgren and Gustaf Lundberg in 1924, 1947 and 1952, including their paper Dynamic Capacity of Rolling Bearings . [ 18 ] [ 19 ] The model itself dates from 1924; the values of the constant p come from the post-war works. A higher p value implies both a longer lifetime for a correctly-used bearing below its design load and a faster shortening of lifetime when the bearing is overloaded. This model was recognised to have become inaccurate for modern bearings.
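As a worked illustration of the basic life equation (a sketch of the arithmetic only, not taken from ISO 281; the load rating, applied load, and shaft speed below are hypothetical):

```python
def basic_rating_life(C_kN, P_kN, p=3.0):
    """Basic L10 life in millions of revolutions: L10 = (C/P)^p.

    C: basic dynamic load rating, P: equivalent dynamic bearing load,
    p: life exponent (classically 3 for ball bearings, 10/3 for rollers).
    """
    return (C_kN / P_kN) ** p

def l10_hours(l10_mrev, rpm):
    """Convert a life in millions of revolutions to operating hours at a given speed."""
    return l10_mrev * 1e6 / (rpm * 60)

# Hypothetical deep-groove ball bearing: C = 30 kN, applied load P = 5 kN
l10 = basic_rating_life(30.0, 5.0)        # (30/5)^3 = 216 million revolutions
hours = l10_hours(l10, rpm=1500)          # 216e6 / (1500*60) = 2400 hours

# The median life (MTBF) is roughly five times the basic rating life
mtbf_hours = 5 * hours
```

Doubling the applied load in this sketch divides the computed life by 2^3 = 8, which is the "cube power" behaviour the text describes.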
Particularly owing to improvements in the quality of bearing steels, the mechanisms for how failures develop in the 1924 model are no longer as significant. By the 1990s, real bearings were found to give service lives up to 14 times longer than those predicted. [ 18 ] An explanation was put forward based on fatigue life ; if the bearing was loaded to never exceed the fatigue strength , then the Lundberg-Palmgren mechanism for failure by fatigue would simply never occur. [ 18 ] This relied on homogeneous vacuum-melted steels , such as AISI 52100 , that avoided the internal inclusions that had previously acted as stress risers within the rolling elements, and also on smoother finishes to bearing tracks that avoided impact loads. [ 16 ] The p constant now had values of 4 for ball and 5 for roller bearings. Provided that load limits were observed, the idea of a 'fatigue limit' entered bearing lifetime calculations. If the bearing was not loaded beyond this limit, its theoretical lifetime would be limited only by external factors, such as contamination or a failure of lubrication. A new model of bearing life was put forward by FAG and developed by SKF as the Ioannides-Harris model. [ 19 ] [ 20 ] ISO 281:2000 first incorporated this model and ISO 281:2007 is based on it. The concept of fatigue limit, and thus ISO 281:2007, remains controversial, at least in the US. [ 16 ] [ 18 ] In 2015, the SKF Generalized Bearing Life Model (GBLM) was introduced. [ 21 ] In contrast to previous life models, GBLM explicitly separates surface and subsurface failure modes, making the model flexible enough to accommodate several different failure modes. Modern bearings and applications show fewer failures, but the failures that do occur are more linked to surface stresses. By separating surface from subsurface effects, mitigating mechanisms can more easily be identified.
GBLM makes use of advanced tribology models [ 22 ] to introduce a surface distress failure mode function, obtained from the evaluation of surface fatigue. For the subsurface fatigue, GBLM uses the classical Hertzian rolling contact model. With all this, GBLM includes the effects of lubrication, contamination, and race surface properties, which together influence the stress distribution in the rolling contact. In 2019, the Generalized Bearing Life Model was relaunched. The updated model also offers life calculations for hybrid bearings, i.e. bearings with steel rings and ceramic (silicon nitride) rolling elements. [ 23 ] [ 24 ] Although the 2019 GBLM release was primarily developed to realistically determine the working life of hybrid bearings, the concept can also be used for other products and failure modes. All parts of a bearing are subject to many design constraints. For example, the inner and outer races are often complex shapes, making them difficult to manufacture. Balls and rollers, though simpler in shape, are small; since they bend sharply where they run on the races, the bearings are prone to fatigue. The loads within a bearing assembly are also affected by the speed of operation: rolling-element bearings may spin over 100,000 rpm, and the principal load in such a bearing may be momentum rather than the applied load. Smaller rolling elements are lighter and thus have less momentum, but smaller elements also bend more sharply where they contact the race, causing them to fail more rapidly from fatigue. Maximum rolling-element bearing speeds are often specified in 'nDm', the product of the mean diameter (in mm) and the maximum RPM. For angular contact bearings, nDm values over 2.1 million have been found to be reliable in high performance rocketry applications.
[ 25 ] There are also many material issues: a harder material may be more durable against abrasion but more likely to suffer fatigue fracture, so the material varies with the application, and while steel is most common for rolling-element bearings, plastics, glass, and ceramics are all in common use. A small defect (irregularity) in the material is often responsible for bearing failure; one of the biggest improvements in the life of common bearings during the second half of the 20th century was the use of more homogeneous materials, rather than better materials or lubricants (though both were also significant). Lubricant properties vary with temperature and load, so the best lubricant varies with application. Although bearings tend to wear out with use, designers can make tradeoffs of bearing size and cost versus lifetime. A bearing can last indefinitely—longer than the rest of the machine—if it is kept cool, clean, lubricated, is run within the rated load, and if the bearing materials are sufficiently free of microscopic defects. Cooling, lubrication, and sealing are thus important parts of the bearing design. The needed bearing lifetime also varies with the application. For example, Tedric A. Harris reports in his Rolling Bearing Analysis [ 26 ] on an oxygen pump bearing in the U.S. Space Shuttle which could not be adequately isolated from the liquid oxygen being pumped. All lubricants reacted with the oxygen, leading to fires and other failures. The solution was to lubricate the bearing with the oxygen. Although liquid oxygen is a poor lubricant, it was adequate, since the service life of the pump was just a few hours. The operating environment and service needs are also important design considerations. Some bearing assemblies require routine addition of lubricants, while others are factory sealed , requiring no further maintenance for the life of the mechanical assembly. 
Although seals are appealing, they increase friction, and in a permanently sealed bearing the lubricant may become contaminated by hard particles, such as steel chips from the race or bearing, sand, or grit that gets past the seal. Contamination in the lubricant is abrasive and greatly reduces the operating life of the bearing assembly. Another major cause of bearing failure is the presence of water in the lubrication oil. Online water-in-oil monitors have been introduced in recent years to track particle contamination and the presence of water in oil, as well as their combined effect. Metric rolling-element bearings have alphanumerical designations, defined by ISO 15 , to define all of the physical parameters. The main designation is a seven-digit number with optional alphanumeric digits before or after to define additional parameters. Here the digits will be defined as: 7654321. Any zeros to the left of the last defined digit are not printed; e.g. a designation of 0007208 is printed 7208. [ 27 ] Digits one and two together define the inner diameter (ID), or bore diameter, of the bearing. For diameters between 20 and 495 mm, inclusive, the designation is multiplied by five to give the ID; e.g. designation 08 is a 40 mm ID. For inner diameters less than 20 mm the following designations are used: 00 = 10 mm ID, 01 = 12 mm ID, 02 = 15 mm ID, and 03 = 17 mm ID. The third digit defines the "diameter series", which defines the outer diameter (OD). The diameter series, defined in ascending order, is: 0, 8, 9, 1, 7, 2, 3, 4, 5, 6. The fourth digit defines the type of bearing. [ 27 ] The fifth and sixth digits define structural modifications to the bearing. For example, on radial thrust bearings the digits define the contact angle, or the presence of seals on any bearing type. The seventh digit defines the "width series", or thickness, of the bearing.
The width series, defined from lightest to heaviest, is: 7, 8, 9, 0, 1 (extra light series), 2 (light series), 3 (medium series), 4 (heavy series). The third digit and the seventh digit together define the "dimensional series" of the bearing. [ 27 ] [ 28 ] There are four optional prefix characters, here defined as A321-XXXXXXX (where the X's are the main designation), which are separated from the main designation with a dash. The first character, A, is the bearing class, which is defined, in ascending order: C, B, A. The class defines extra requirements for vibration, deviations in shape, the rolling surface tolerances, and other parameters that are not defined by a designation character. The second character is the frictional moment (friction), which is defined, in ascending order, by a number 1–9. The third character is the radial clearance, which is normally defined by a number between 0 and 9 (inclusive), in ascending order; however, for radial-thrust bearings it is defined by a number between 1 and 3, inclusive. The fourth character is the accuracy rating, which normally is, in ascending order: 0 (normal), 6X, 6, 5, 4, T, and 2. Ratings 0 and 6 are the most common; ratings 5 and 4 are used in high-speed applications; and rating 2 is used in gyroscopes . For tapered bearings, the values are, in ascending order: 0, N, and X, where 0 is 0, N is "normal", and X is 6X. [ 27 ] There are five optional characters that can be defined after the main designation: A, E, P, C, and T; these are tacked directly onto the end of the main designation. Unlike the prefix, not all of the designations must be defined. "A" indicates an increased dynamic load rating. "E" indicates the use of a plastic cage. "P" indicates that heat-resistant steel is used. "C" indicates the type of lubricant used (C1–C28). "T" indicates the degree to which the bearing components have been tempered (T1–T5).
[ 27 ] While manufacturers follow ISO 15 for part number designations on some of their products, it is common for them to implement proprietary part number systems that do not correlate to ISO 15. [ 29 ]
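The bore-diameter convention described above can be sketched in code. This is only an illustrative decoder for the two bore digits, not a full ISO 15 parser; apart from 7208, which appears in the text, the designations used are hypothetical examples.

```python
# Decode the bore (inner) diameter from the last two digits of a metric
# bearing designation, following the convention described above.
SMALL_BORES = {0: 10, 1: 12, 2: 15, 3: 17}  # special codes 00-03

def bore_diameter_mm(designation):
    """Return the bore diameter in mm from a designation such as '7208'."""
    code = int(designation[-2:])       # digits one and two (rightmost pair)
    if code in SMALL_BORES:
        return SMALL_BORES[code]
    if 4 <= code <= 99:
        return code * 5                # codes 04-99 cover 20-495 mm
    raise ValueError(f"unsupported bore code: {code}")

print(bore_diameter_mm("7208"))  # 40 mm, matching the example in the text
print(bore_diameter_mm("6301"))  # 12 mm, via the special small-bore table
```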
https://en.wikipedia.org/wiki/Rolling-element_bearing
Rolling bed dryers are used for efficiently processing large amounts of material whose moisture level needs to be reduced. Rolling bed dryers are most often used for drying wood chips and organic residues and are most often utilized in the biomass , waste/ recycling , wood particle board , pellet , and biofuel industries. [ 1 ] The versatility of the rolling bed dryer is based on its simple idea of product circulation. The biomass processed through the rolling bed dryer can not only be dried efficiently but can also be cleaned at the same time. This saves energy and results in lower production costs. Biomass is being increasingly used as an alternative fuel source. [ 2 ] Providing for this demand requires innovative solutions. [ 3 ] Large bulks of biomass are permanently circulated and mixed by highly effective paddles. This basic idea combines good heat transfer through large bulks of product with continuous movement of the product for even drying results. The drying air is supplied through a perforated plate under the moving bulk of product. Depending on the amount of ventilation, it is possible to separate fine materials such as dust, fibers, and sand from the bulk material, collecting these separately alongside the ongoing drying process. This simultaneous cleaning works by using the material against itself to remove, separate, and collect fine materials such as fibers, sand and dust from the drying bulk material. Having this occur at the same time as the drying process saves not only time and energy, but also better maintains the calorific value of the residual biomass and reduces ash content. After the drying process is completed the dried output is suitable for direct firing and pelletizing /briquetting as well as for more demanding processes such as gasification or torrefaction of biomass. [ 4 ]
https://en.wikipedia.org/wiki/Rolling_bed_dryer
A rolling code (sometimes called a hopping code ) is used in keyless entry systems to prevent a simple form of replay attack , where an eavesdropper records the transmission and replays it at a later time to cause the receiver to 'unlock'. Such systems are typical in garage door openers and keyless car entry systems. HMAC-based one-time passwords , widely employed in multi-factor authentication, use a similar approach, but with a pre-shared secret key and HMAC instead of a PRNG and a pre-shared random seed . A rolling code transmitter is useful in a security system for improving the security of radio frequency (RF) transmission; its transmission comprises an interleaved trinary-bit fixed code and rolling code. A receiver demodulates the encrypted RF transmission and recovers the fixed code and rolling code. If the fixed and rolling codes match the stored codes and pass a set of algorithmic checks, a signal is generated to actuate an electric motor to open or close a movable component. [ citation needed ] Remote controls send a digital code word to the receiver. If the receiver determines the code word is acceptable, then the receiver will actuate the relay, unlock the door, or open the barrier. Simple remote control systems use a fixed code word; the code word that opens the gate today will also open the gate tomorrow. An attacker with an appropriate receiver could discover the code word and use it to gain access sometime later. More sophisticated remote control systems use a rolling code (or hopping code) that changes for every use. An attacker may be able to learn the code word that opened the door just now, but the receiver will not accept that code word for the foreseeable future. A rolling code system uses cryptographic methods that allow the remote control and the receiver to share code words but make it difficult for an attacker to break the cryptography.
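The synchronised-counter idea behind rolling codes can be sketched as follows. This is a simplified illustration, not the KeeLoq algorithm or any vendor's implementation: it stands in an HMAC for the keyed cipher, and the window size and key are arbitrary choices for the example.

```python
import hmac, hashlib

SECRET = b"shared-at-pairing-time"   # pre-shared between fob and receiver
WINDOW = 16                          # receiver accepts counters slightly ahead

def code_for(counter):
    """Derive the transmitted code word from the synchronised counter."""
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:4]

class Receiver:
    def __init__(self):
        self.counter = 0

    def try_unlock(self, code):
        # Search a small window ahead of the stored counter, so the fob can be
        # pressed out of range a few times without desynchronising.
        for c in range(self.counter + 1, self.counter + 1 + WINDOW):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c      # roll forward; this and older codes now die
                return True
        return False

rx = Receiver()
c1 = code_for(1)
assert rx.try_unlock(c1)      # a fresh code word is accepted
assert not rx.try_unlock(c1)  # replaying the same code word is rejected
```

The window is the usual engineering compromise: too small and a fob pressed out of range desynchronises; too large and an attacker gets more codes to guess against.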
The Microchip HCS301 was once the most widely used system on garage and gate remote controls and receivers. The chip uses the KeeLoq algorithm. The HCS301 KeeLoq system transmits 66 data bits. As detailed at KeeLoq , the algorithm has been shown to be vulnerable to a variety of attacks, and has been completely broken . A rolling code transmitted by radio signal can be intercepted and can be vulnerable to falsification. In 2015, it was reported that Samy Kamkar had built an inexpensive electronic device about the size of a wallet that could be concealed on or near a locked vehicle to capture a single keyless entry code to be used at a later time to unlock the vehicle. The device transmits a jamming signal to block the vehicle's reception of rolling code signals from the owner's fob, while recording the signals from both of the owner's two attempts needed to unlock the vehicle. The recorded first code is forwarded (replayed) to the vehicle only when the owner makes the second attempt, while the recorded second code is retained for future use. Kamkar stated that this vulnerability had been widely known for years to be present in many vehicle types, but was previously undemonstrated. [ 3 ] A demonstration was done during DEF CON 23. [ 4 ]
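The jam-capture-replay logic described above can be simulated against a toy receiver model. This sketch only illustrates the sequencing Kamkar demonstrated, not his actual device or the radio layer; the receiver, key, and window here are all invented for the example.

```python
import hmac, hashlib

SECRET = b"shared-key"  # hypothetical pre-shared fob/receiver key

def code_for(counter):
    return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:4]

class Receiver:
    """Toy rolling-code receiver: accepts any unused counter within a window
    ahead of its stored counter, then rolls forward."""
    def __init__(self):
        self.counter = 0

    def try_unlock(self, code):
        for c in range(self.counter + 1, self.counter + 17):
            if hmac.compare_digest(code, code_for(c)):
                self.counter = c
                return True
        return False

car = Receiver()
# The owner presses the fob twice; the attacker jams the car both times while
# recording, so the car never hears either transmission directly.
captured_1, captured_2 = code_for(1), code_for(2)
# The attacker replays the first captured code during the owner's second press:
# the car unlocks and the owner suspects nothing.
assert car.try_unlock(captured_1)
# The second code was never seen by the car, so it is still "fresh" and can be
# replayed later to unlock the vehicle.
assert car.try_unlock(captured_2)
```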
https://en.wikipedia.org/wiki/Rolling_code
Rolling contact fatigue ( RCF ) is a phenomenon that occurs in mechanical components involving rolling/sliding contact, such as railways, gears , and bearings . [ 2 ] It is the result of the process of fatigue due to rolling/sliding contact. [ 2 ] [ 3 ] The RCF process begins with cyclic loading of the material, which results in fatigue damage that can be observed in crack-like flaws, like white etching cracks . [ 2 ] These flaws can grow into larger cracks under further loading, potentially leading to fractures. [ 2 ] [ 4 ] In railways, for example, the train wheel rolling on the rail creates a small contact patch that carries very high contact pressure between the rail and wheel. [ 2 ] Over time, the repeated passing of wheels with high contact pressures can cause the formation of crack-like flaws that become small cracks. [ 2 ] These cracks can grow and sometimes join, leading to either surface spalling or rail break, which can cause serious accidents, including derailments . [ 2 ] [ 4 ] RCF is a major concern for railways worldwide and can take various forms depending on the location of the crack and its appearance. [ 2 ] It is also a significant cause of failure in components subjected to rolling or rolling/sliding contacts, such as rolling-contact bearings, gears, and cam/tappet arrangements. [ 5 ] The alternating stress field in RCF can lead to material removal, varying from micro- and macro-pitting in conventional bearing steels to delamination in hybrid ceramics and overlay coatings. [ 5 ] In the case of bodies capable of rolling, there is a particular type of friction in which the sliding typical of dynamic friction does not occur, yet a force still opposes the motion, which also excludes the case of static friction. This type of friction is called rolling friction. Consider in detail what happens to a wheel that rolls on a horizontal plane.
Initially the wheel is immobile and the forces acting on it are the weight force m g and the normal force N given by the floor's response to the weight. When the wheel is set in motion, the point of application of the normal force is displaced: it is now applied in front of the center of the wheel, at a distance b , which is equal to the value of the rolling friction coefficient. The opposition to the motion is caused by the separation of the normal force and the weight force at the exact moment in which the rolling starts, so the torque given by the rolling friction force is M_r.f. = b × m g , the cross product of the displacement vector b and the weight m g . What happens in detail at the microscopic level between the wheel and the supporting surface is shown in the figure, where it is possible to observe the behavior of the reaction forces of the deformed plane acting on an immobile wheel. Rolling the wheel continuously causes imperceptible deformations of the plane and, once the wheel has passed to a subsequent point, the plane returns to its initial state. In the compression phase the plane opposes the motion of the wheel, while in the decompression phase it provides a positive contribution to the motion. Testing for RCF involves several methods, each designed to simulate the conditions that cause RCF in a controlled environment. Here are some of the methods used: The triple-disc rolling contact fatigue (RCF) rig is a specialised testing apparatus used in the field of tribology and materials science to evaluate the fatigue resistance and durability of materials subjected to rolling contact. [ 8 ] This rig is designed for simulating the conditions encountered in various mechanical systems, such as rolling bearings, gears, and other components exposed to repeated rolling and sliding motions.
The rig typically consists of three discs or rollers arranged in a specific configuration. [ 9 ] These discs can represent the interacting components of interest, such as a rolling bearing. The rig also allows precise control over the loading conditions, including the magnitude of the load, contact pressure, and contact geometry. [ 10 ] [ 11 ] The PCS Instruments Micro-pitting Rig (MPR) is a specialised testing instrument used in the field of tribology and mechanical engineering to study micro-pitting , a type of surface damage that occurs in lubricated rolling and sliding contact systems. The MPR is designed to simulate real-world operating conditions by subjecting test specimens, often gears or rolling bearings, to controlled rolling and sliding contact under lubricated conditions. [ 12 ]
https://en.wikipedia.org/wiki/Rolling_contact_fatigue
Rolling hairpin replication ( RHR ) is a unidirectional, strand displacement form of DNA replication used by parvoviruses, a group of viruses that constitute the family Parvoviridae . Parvoviruses have linear, single-stranded DNA (ssDNA) genomes in which the coding portion of the genome is flanked by telomeres at each end that form hairpin loops . During RHR, these hairpin loops repeatedly unfold and refold to change the direction of DNA replication so that replication progresses in a continuous manner back and forth across the genome. RHR is initiated and terminated by an endonuclease encoded by parvoviruses that is variously called NS1 or Rep, and RHR is similar to rolling circle replication , which is used by ssDNA viruses that have circular genomes. Before RHR begins, a host cell DNA polymerase converts the genome to a duplex form in which the coding portion is double-stranded and connected to the terminal hairpins. From there, messenger RNA (mRNA) that encodes the viral initiator protein is transcribed and translated to synthesize the protein. The initiator protein commences RHR by binding to and nicking the genome in a region adjacent to a hairpin called the origin and establishing a replication fork with its helicase activity. Nicking leads to the hairpin unfolding into a linear, extended form. The telomere is then replicated and both strands of the telomere refold back in on themselves to their original turn-around forms. This repositions the replication fork to switch templates to the other strand and move in the opposite direction. Upon reaching the other end, the same process of unfolding, replication, and refolding occurs. Parvoviruses vary in whether both hairpins are the same or different. Homotelomeric parvoviruses such as adeno-associated viruses (AAV), i.e. those that have identical or similar telomeres, have both ends replicated by terminal resolution, the previously described process. 
Heterotelomeric parvoviruses such as minute virus of mice (MVM), i.e. those that have different telomeres, have one end replicated by terminal resolution and the other by an asymmetric process called junction resolution. During asymmetric junction resolution, the duplex extended form of the telomere reorganizes into a cruciform-shaped junction , and the correct orientation of the telomere is replicated off the lower arm of the cruciform. As a result of RHR, a replicative molecule that contains numerous copies of the genome is synthesized. The initiator protein periodically excises progeny ssDNA genomes from this replicative concatemer. Parvoviruses are a family of DNA viruses that have single-stranded DNA (ssDNA) genomes enclosed in rugged, icosahedral protein capsids 18–26 nanometers (nm) in diameter. [ 1 ] Unlike most other ssDNA viruses, which have circular genomes that form a loop, parvoviruses have linear genomes with short terminal sequences at each end of the genome. These termini are capable of being formed into structures called hairpins or hairpin loops and consist of short, imperfect palindromes. [ 2 ] [ 3 ] Varying from virus to virus, the coding region of the genome is 4–6 kilobases (kb) in length, and the termini are 116–550 nucleotides (nt) in length each. The hairpin sequences provide most of the cis -acting information needed for DNA replication and packaging. [ 1 ] [ 4 ] Parvovirus genomes may be either positive-sense or negative-sense . Some species, such as adeno-associated viruses (AAV) like AAV2, package a roughly equal number of positive-sense and negative-sense strands into virions; others, such as minute virus of mice (MVM), show a preference toward packaging negative-sense strands; and others have varying proportions.
[ 4 ] Because of this disparity, the 5′-end (usually pronounced "five prime end") of the strand that encodes the non-structural proteins is called the "left end", and the 3′-end (usually pronounced "three prime end") is called the "right end". [ 3 ] In reference to the negative-sense strand, the 3′-end is the left side and the 5′-end is the right side. [ 4 ] [ 5 ] Parvoviruses replicate their genomes through a process called rolling hairpin replication (RHR), which is a unidirectional, strand displacement form of DNA replication. Before replication, the coding portion of the ssDNA genome is converted to a double-strand DNA (dsDNA) form, which is then cleaved by a viral protein to initiate replication. Sequential unfolding and refolding of the hairpin termini acts to reverse the direction of synthesis, which allows replication to go back and forth along the genome to synthesize a continuous duplex replicative form (RF) DNA intermediate. Progeny ssDNA genomes are then excised from the RF intermediate. [ 4 ] [ 6 ] While the general aspects of RHR are conserved across genera and species, the exact details likely vary. [ 7 ] Parvovirus genomes have distinct starting points of replication that contain palindromic DNA sequences. These sequences are able to alternate between inter- and intrastrand basepairing throughout replication, and they serve as self-priming telomeres at each end of the genome. [ 2 ] They also contain two key sites necessary for replication used by the initiator protein: a binding site and a cleavage site. [ 8 ] Telomere sequences have significant complexity and diversity, suggesting that they perform additional functions for many species. [ 1 ] [ 9 ] In MVM, for example, the left-end hairpin contains binding sites for transcription factors that modulate gene expression from an adjacent promoter . For AAV, the hairpins can bind to MRE11/Rad50/NBS1 (MRN) complexes and Ku70/80 heterodimers, which are involved in sensing and repairing DNA. 
[ 5 ] In general, however, they have the same basic structure: imperfect palindromes in which a fully or primarily basepaired region terminates in an axially symmetric structure. These palindromes can fold into a variety of structures, such as a Y-shaped structure and a cruciform-shaped structure. During replication, the termini act as hinges in which the imperfectly basepaired or partial cruciform regions surrounding the axis provide a favorable environment for unfolding and refolding of the hairpin. [ 2 ] [ 3 ] [ 4 ] Some parvoviruses, such as AAV2, are homotelomeric, meaning the two palindromic telomeres are similar or identical and form part of larger (inverted) terminal repeat ((I)TR) sequences. Replication at each terminal end is therefore similar. Other parvoviruses, such as MVM, are heterotelomeric, meaning they have two physically different telomeres. As a result, heterotelomeric parvoviruses tend to have a more complex replication process, since the two telomeres are replicated by different processes. [ 2 ] [ 3 ] [ 4 ] In general, homotelomeric parvoviruses replicate both ends via a process called terminal resolution, whereas heterotelomeric parvoviruses replicate one end by terminal resolution and the other end by an asymmetric process called junction resolution. [ 4 ] [ 5 ] [ 6 ] [ 10 ] Whether a genus is hetero- or homotelomeric, along with other genomic characteristics, is shown in the following table. [ 4 ] The entire process of rolling hairpin replication, which has distinct, sequential stages, can be summarized as follows: [ 4 ] [ 5 ] [ 7 ] Upon cell entry, a tether about 24 nucleotides in length, which attaches the viral protein NS1 (essential in replication) to the virion, is cleaved off the virion, to be reattached later. [ 3 ] After cell entry, virions accumulate in the cell nucleus while the genome is still contained within the capsid. These capsids may be reconfigured to an open or transitioned state during entry.
The exact mechanism by which the genome leaves the capsid is unclear. [ 9 ] For AAV, it has been suggested that nuclear factors disassemble the capsid, whereas for MVM, it appears as if the genome is ejected in a 3′-to-5′ direction from an opening in the capsid called a portal. [ 5 ] Parvoviruses lack genes capable of inducing resting cells to enter their DNA synthesis phase (S-phase). Additionally, naked ssDNA is likely to be unstable, perceived as foreign by the host cell, or improperly replicated by host DNA repair . For these reasons, the genome must either be converted rapidly to its less obstructive, more stable duplex form or retained within the capsid until it is uncoated during S-phase. Typically, the latter occurs and the virion remains silent in the nucleus until the host cell enters S-phase by itself. During this waiting period, virions may make use of certain strategies to evade host defense mechanisms to protect their hairpins and DNA to reach S-phase, [ 9 ] though it is unclear how this occurs. [ 4 ] Since the genome is packaged as ssDNA, creation of a complementary strand is necessary before gene expression . [ 5 ] [ 9 ] DNA polymerases are only able to synthesize DNA in a 5′ to 3′ direction, and they require a basepaired primer to begin synthesis. Parvoviruses address these limitations by using their termini as primers for complementary strand synthesis. [ 9 ] A 3′ hydroxyl end of the left-hand (3′) terminus pairs with an internal base to prime initial DNA synthesis, resulting in the conversion of the ssDNA genome to its first duplex form. [ 1 ] [ 7 ] This is a monomeric double-stranded DNA molecule in which the two strands are covalently cross-linked to each other at the left-end by a single copy of the viral telomere. Synthesis of the duplex form precedes NS1 expression so that when the replication fork of initial complementary strand synthesis reaches the right (5′) end, it does not displace and copy the right-end hairpin. 
This allows the 3′-end of the new DNA strand to be covalently ligated to the 5′-end of the right hairpin by a host ligase, thereby creating the duplex molecule. During this step, the tether sequence that was present before viral entry into the cell is resynthesized. [ 6 ] Once an infected cell enters S-phase, parvovirus genomes are converted to their duplex form by host replication machinery, and mRNA that encodes non-structural (NS) proteins is transcribed starting from a viral promoter (P4 for MVM). [ 4 ] [ 5 ] [ 9 ] One of these NS proteins is usually called NS1 but also Rep1 or Rep68/78 for the genus Dependoparvovirus , to which AAV belongs. [ 4 ] NS1 is a site-specific DNA binding protein that acts as the replication initiator protein [ 9 ] via nickase activity. [ 15 ] It also mediates excision of both ends of the genome from duplex RF intermediates via a transesterification reaction that introduces a nick into specific duplex origin sequences. [ 4 ] Key components of NS1 include an HUH endonuclease domain toward the N-terminus of the protein and a superfamily 3 (SF3) helicase toward the C-terminus , [ 16 ] as well as ATPase activity. [ 1 ] It binds to ssDNA, RNA, and site-specifically on duplex DNA at reiterations (1–3 copies) of the tetranucleotide sequence 5′-ACCA-3′. [ 1 ] [ 9 ] These sequences are present in the viral replication origin sites and repeated at multiple sites throughout the genome in more or less degenerate forms. [ 15 ] NS1 nicks the covalently-closed right-end telomere via a transesterification reaction that liberates a basepaired 3′ nucleotide as a free hydroxyl (-OH). [ 4 ] This reaction is assisted by a host DNA-binding protein from the high mobility group 1/2 (HMG1/2) family and occurs at the replication origin, OriR , which is created by sequences in and immediately adjacent to the right hairpin. 
The left-end telomere of MVM, a heterotelomeric parvovirus, contains sequences that can give rise to replication origins in higher-order duplex intermediates, but these sequences are inactive in the hairpin terminus of the monomeric molecule, so NS1 always initiates replication at the right end. [ 6 ] The 3′-OH that is freed by nicking acts as a primer for the DNA polymerase to start complementary strand synthesis [ 8 ] while NS1 remains covalently attached to the 5′-end via a tyrosine residue. [ 1 ] Consequently, a copy of NS1 remains attached to the 5′-end of all RF and progeny DNA throughout replication, packaging, and virion release. [ 4 ] [ 6 ] NS1 is only able to bind to this specific site by assembling into homodimers or higher order multimers, which happens naturally with the addition of adenosine triphosphate (ATP) and is likely mediated by NS1's helicase domain. In vivo studies have shown that NS1 can form into a variety of oligomeric states, but it most likely assembles into hexamers to fulfill the functions of both the endonuclease domain and helicase domain. [ 15 ] Starting from the location of the nick, it is thought that NS1 organizes a replication fork and acts as the replicative 3′-to-5′ helicase. Near its C-terminus, NS1 contains an acidic transcriptional activation domain. This domain acts to upregulate transcription starting from a viral promoter (P38 for MVM) when NS1 is bound to a series of 5′-ACCA-3′ motifs, called the tar sequence, positioned upstream (toward the 5′-end) of the promoter unit, and via interactions between NS1 and various transcription factors. [ 15 ] NS1 also recruits the cellular replication protein A (RPA) complex, which is essential for establishing the new replication fork and for binding and stabilizing displaced single strands. [ 6 ] While NS1 is the only non-structural protein essential for all parvoviruses, some have other individual proteins that are essential for replication. 
For MVM, NS2 appears to reprogram the host cell for efficient DNA amplification, single-strand progeny synthesis, capsid assembly, and virion export, though it seems to lack direct involvement in these processes. NS2 initially accumulates up to three times more quickly than NS1 in the early S-phase but is turned over rapidly by a proteasome-mediated pathway. As the infectious cycle progresses, NS2 becomes less common as P38-driven transcription becomes more prominent. [ 15 ] Another example is the nuclear phosphoprotein NP1 of bocaviruses, which, if not synthesized, results in non-viable progeny genomes. [ 5 ] As viral NS proteins accumulate, they commandeer the host cell's replication apparatus, terminating host cell DNA synthesis and causing viral DNA amplification to begin. Interference with host DNA replication may be due to direct effects on host replication proteins that are not essential for viral replication, to extensive nicking of host DNA, or to the restructuring of the nucleus during viral infection. Early in infection, parvoviruses establish replication foci in the nucleus that are termed autonomous parvovirus-associated replication (APAR) bodies. NS1 co-localizes with replicating viral DNA in these structures with other cellular proteins necessary for viral DNA synthesis, [ 15 ] while other complexes not required for replication are sequestered from APAR bodies. The exact manner by which proteins are included or excluded from APAR bodies is unclear and appears to vary from species to species and between cell types. [ 5 ] As infection progresses, APAR microdomains begin to coalesce with other, formerly distinct, nuclear bodies to form progressively larger nuclear inclusions where viral replication and virion assembly occur. After S-phase begins, the host cell is forced to synthesize viral DNA and cannot leave S-phase. [ 17 ] The right-end hairpin of MVM contains 248 nucleotides [ 10 ] organized into a cruciform shape. 
[ 1 ] This region is almost perfectly basepaired, with just three unpaired bases at the axis and a mismatched region positioned 20 nucleotides from the axis. A three-nucleotide insertion, AGA or TCT, on one strand separates opposing pairs of NS1 binding sites, creating a 36-basepair palindrome that can assume an alternate cruciform configuration. This configuration is expected to destabilize the duplex, which facilitates its ability to function as a hinge. The mismatch of the unpaired bases, rather than the three-nucleotide sequence itself, may help to promote instability of duplex DNA. [ 10 ] Fully-duplex linear forms of the right-end hairpin sequence also function as NS1-dependent origins. For many parvoviral telomeres, however, only an initiator binding site next to the nick site is required for origin function, so the minimal sequences required for nicking are less than 40 basepairs in length. For MVM, the minimal right-end origin is around 125 basepairs in length and includes most of the hairpin sequence because at least three recognition elements are involved: the nick site 5′-CTWWTCA-3′ (element 1), positioned seven nucleotides upstream from a duplex NS1-binding site (element 2) that is oriented to have the attached NS1 complex extending over the nick site, and a second NS1-binding site (element 3), which is adjacent to the hairpin axis. [ 10 ] The second binding site is over 100 basepairs away from the nick site but is required for NS1-mediated cleavage. [ 10 ] In vivo , there is slight variation in the position of the nick, plus or minus one nucleotide, with one position preferred. During nicking, this site is likely exposed as a single strand and is potentially stabilized as a minimal stem-loop by the tetranucleotide inverted repeats to the sides of the site. Optimal forms of the NS1-binding site contain at least three tandem copies of the 5′-ACCA-3′ sequence. 
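The motif layout described above lends itself to a small, hypothetical pattern scan. Only the motif patterns come from the text (the 5′-CTWWTCA-3′ nick site, where the IUPAC code W stands for A or T, and tandem 5′-ACCA-3′ repeats); the example sequence and its spacing are invented and do not reproduce the real MVM origin.

```python
# Sketch: scanning a duplex origin-like sequence for the NS1 motifs
# described above, using IUPAC degeneracy (W = A or T).
# The example sequence is hypothetical, for illustration only.
import re

NICK_SITE = re.compile(r"CT[AT][AT]TCA")  # 5'-CTWWTCA-3'
NS1_SITE = re.compile(r"(?:ACCA){3,}")    # >= 3 tandem ACCA repeats

example_origin = "GGACCAACCAACCATTTCTAATCAGG"

print([m.span() for m in NS1_SITE.finditer(example_origin)])   # [(2, 14)]
print([m.span() for m in NICK_SITE.finditer(example_origin)])  # [(17, 24)]
```

A real origin scan would also need to handle the spacing constraints between elements, since, as noted above, cleavage depends on the correct spacing of the origin elements.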
Modest alterations to these motifs have only a small effect on affinity, which suggests that each tetranucleotide motif is recognized by different molecules in the NS1 complex. The NS1-binding site that positions NS1 over the nick site in the right-end origin is a high affinity site. [ 18 ] With ATP, NS1 binds asymmetrically over the aforementioned sequence, protecting a region 41 basepairs in length from digestion. This footprint extends just five nucleotides beyond the 3′-end of the ACCA repeat but 22 nucleotides beyond the 5′-end so that the footprint ends 15 nucleotides beyond the nick site, placing NS1 in position to nick the origin. Nicking only occurs if the second, distant NS1-binding site is also present in the origin and the entire complex is activated by addition of HMG1. [ 18 ] In the absence of NS1, HMG1 binds the hairpin sequence independently, causing it to bend, without protecting any region from digestion. HMG1 can also directly bind to NS1 and mediates interactions between NS1 molecules bound to their recognition elements in the origin, so it is essential for formation of the cleavage complex. The ability of the axis region to reconfigure into a cruciform does not appear to be important in this process. Cleavage is dependent on the correct spacing of the elements of the origin, so additions and deletions can be lethal, whereas substitutions can be tolerated. Addition of HMG1 appears to only slightly adjust the sequences protected by NS1, but the conformation of the intervening DNA changes, folding into a double helical loop that extends about 30 basepairs through a guanine-rich element in the hairpin stem. Between this element and the nick site there are five thymidine residues included in the loop, and the site has a region to its side containing many alternating adenine and thymine residues, which likely increases flexibility. 
The creation of the loop likely allows the terminus to assume a specific 3-dimensional structure required to activate the nickase since origins that fail to reconfigure into a double-helical loop once HMG1 is added are not nicked. [ 18 ] Following nicking, a replication fork is established at the newly exposed 3′ nucleotide that proceeds to unfold and copy the right-end hairpin through a series of melting and reannealing reactions. [ 9 ] [ 18 ] This process begins once NS1 nicks the inboard end of the original hairpin. The terminal sequence is then copied in the opposite direction, which produces an inverted copy of the original sequence. [ 9 ] The end result is a duplex extended-form terminus that contains two copies of the terminal sequence. [ 18 ] While NS1 is required for this, it is unclear if unfolding is mediated by its helicase activity in front of the fork or by destabilization of the duplex following DNA binding at one of its 5′-(ACCA)n-3′ recognition sites. [ 6 ] This process is usually called terminal resolution but also hairpin transfer or hairpin resolution. [ 6 ] [ 9 ] Terminal resolution occurs with each round of replication, so progeny genomes contain an equal number of each terminal orientation. The two orientations are termed "flip" and "flop", [ 5 ] [ 6 ] and may be represented as R and r, or B and b, for the flip and flop of the right-end telomere and L and l, or A and a, for the flip and flop of the left-end telomere. [ 7 ] [ 19 ] Since parvoviral terminal palindromes are imperfect, it is easy to identify which orientation is which. [ 1 ] The extended-form duplex telomeres generated during terminal resolution are melted, mediated by NS1 with ATP hydrolysis , causing individual strands to fold back on themselves to create hairpin "rabbit ear" structures in the flip and flop orientations of the termini. 
This requires the NS1 helicase activity as well as its site-specific binding activity, the latter of which enables NS1 to bind to symmetrical copies of NS1-binding sites that surround the axis of the extended-form terminus. [ 10 ] [ 20 ] Rabbit ear formation allows the 3′ nucleotide of the newly synthesized DNA strand to pair with an internal base, which repositions the replication fork in a strand-switching maneuver that primes synthesis of additional linear sequences. [ 10 ] Switching from DNA synthesis to rabbit-ear formation at the end of terminal resolution may require different types of NS1 complexes. Alternatively, the NS1 complex may remain intact during this switch, being ready to start strand displacement synthesis following refolding into rabbit ears. [ 20 ] After the replication fork is repositioned, replication continues toward the left end, using the newly synthesized DNA strand as a template. [ 7 ] At the left end of the genome, NS1 is probably required to unfold the hairpin. NS1 appears to be directly involved in melting-out and reconfiguring the resulting extended-form left-end duplexes into rabbit ear structures, though this reaction seems to be less efficient than at the right-end terminus. Dimeric and tetrameric concatemers of the genome are generated successively for MVM. In these concatemers, alternating unit-length genomes are fused through a palindromic junction in left-end to left-end and right-end to right-end orientations. [ 1 ] [ 10 ] In total, RHR results in coding sequences of the genome being copied twice as often as the termini. [ 1 ] [ 7 ] [ 10 ] Both linear and hairpin configurations of the right-end telomere support initiation of RHR, so resolution of duplex right-end to right-end junctions can occur symmetrically on the basepaired duplex sequence or after this complex is melted and reconfigured into two hairpins. It is unclear which of these two reactions is more common since both appear to produce identical results. 
[ 20 ] For AAV, each telomere is 125 bases in length and capable of folding into a T-shaped hairpin. AAV contains a Rep gene that encodes four Rep proteins, two of which, Rep68 and Rep78, act as replication initiator proteins and fulfill the same functions, such as the nickase and helicase activities, as NS1. They recognize and bind to a (GAGC)3 sequence in the stem region of the terminus and nick a site 20 bases away termed trs . AAV undergoes the same process of terminal resolution as MVM, but at both ends. The other two Rep proteins, Rep52 and Rep40, are not involved in DNA replication but are implicated in synthesis of progeny. AAV replication is dependent on a helper virus, either an adenovirus or a herpesvirus, that coinfects the cell. In the absence of coinfection, the AAV genome is integrated into the host cell's DNA until coinfection occurs. [ 1 ] A general rule is that parvoviruses with identical termini, i.e. homotelomeric parvoviruses such as AAV and B19, replicate both ends by terminal resolution, generating equal numbers of flips and flops of each telomere. [ 1 ] [ 4 ] [ 6 ] Parvoviruses that have different termini, i.e. heterotelomeric parvoviruses like MVM, replicate one end by terminal resolution and the other end by asymmetric junction resolution, which conserves a single-sequence orientation and requires different structural arrangements and cofactors to activate NS1's nickase. [ 4 ] [ 10 ] AAV DNA intermediates containing covalently linked sense and antisense strands yield genomic concatemers under denaturing conditions, indicating that AAV replication also synthesizes duplex concatemers that require some form of junction resolution. [ 10 ] In negative-sense MVM genomes, the left-end hairpin is 121 nucleotides in length and exists in a single flip sequence orientation. 
This telomere is Y-shaped and contains small internal palindromes that fold into the "ears" of the Y, a duplex stem region 43 nucleotides in length that is interrupted by an asymmetric thymidine residue, and a mismatched "bubble" sequence in which the 5′-GAA-3′ sequence on the inboard arm is opposite 5′-GA-3′ on the outboard strand. [ 1 ] [ 20 ] Sequences in this hairpin are involved in both replication and regulation of transcription. The elements involved in these two functions are segregated between the two arms of the hairpin. [ 20 ] The left-end telomere of MVM, and likely of all heterotelomeric parvoviruses, cannot function as a replication origin in its hairpin configuration. Instead, a single origin on the lower strand is created when the hairpin is unfolded, extended, and copied to form a duplex basepaired sequence that spans adjacent genomes in the dimer RF. Within this structure, the sequence from the outboard arm that surrounds a GA/TC [ 1 ] dinucleotide serves as an origin, OriL TC . The equivalent GAA/TTC sequence on the inboard arm that contains the bubble trinucleotide, called OriL GAA , does not serve as an origin. The inboard arm and hairpin configuration of the terminus instead appear to function as upstream control elements for the viral transcriptional promoter P4. Additionally, the ability to segregate one arm from nicking appears essential for replication. [ 20 ] The minimal linear left-end origin is about 50 basepairs long and extends from two 5′-ACGT-3′ motifs, spaced five nucleotides apart at one end, to a position seven basepairs beyond the nick site. The bubble's GA sequence itself is relatively unimportant, but the space that it occupies is necessary for the origin to function. [ 1 ] [ 20 ] Within the origin, there are three recognition sequences: an NS1-binding site that orients the NS1 complex over the nick site 5′-CTWWTCA-3′, which is located 17 nucleotides downstream (toward the 3′-end), and the two ACGT motifs. 
These motifs bind a heterodimeric cellular factor called either parvovirus initiation factor (PIF) or glucocorticoid modulating element-binding protein (GMEB). [ 21 ] PIF is a site-specific DNA-binding heterodimeric complex that contains two subunits, p96 and p79, and functions as a transcription modulator in the host cell. It binds DNA via a KDWK fold and recognizes two ACGT half-sites. The spacing between these sites can vary significantly for PIF, from one to nine nucleotides, with an optimal spacing of six. PIF stabilizes the binding of NS1 on the active form of the left-end origin, OriL TC , but not on the inactive form, OriL GAA , because the two complexes are able to establish contact over the bubble dinucleotide. The left-end hairpins of all other species in the Protoparvovirus genus, [ note 6 ] to which MVM belongs, have bubble asymmetries and PIF-binding sites, though with slight variation in spacing. This suggests that they all share a similar origin segregation mechanism. [ 21 ] Due to the location of the active origin OriL TC in the dimer junction, synthesis of new copies of the left-end hairpin in the correct, i.e. flip, orientation is not straightforward since a replication fork moving from this site through the linear bridge structure should synthesize new DNA in the flop orientation. Instead, the left-hand MVM dimer junction is resolved asymmetrically in a process that creates a cruciform intermediate. This maneuver accomplishes two things: it allows synthesis of the new DNA in the correct sequence orientation, and it creates a structure that can be resolved by NS1. This "heterocruciform" model of synthesis suggests that resolution is driven by the NS1 helicase activity and depends on the inherent instability of the duplex palindrome, a property that allows it to switch between its linear and cruciform configurations. 
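The PIF/GMEB half-site arrangement described above (two 5′-ACGT-3′ half-sites separated by a 1–9 nucleotide spacer, optimum six) can be sketched as a simple pattern check. The example sequences are invented for illustration and are not real MVM origin sequences.

```python
# Sketch of the PIF/GMEB recognition geometry described above:
# two 5'-ACGT-3' half-sites with a variable 1-9 nt spacer.
# Example sequences are hypothetical.
import re

PIF_SITE = re.compile(r"ACGT([ACGT]{1,9}?)ACGT")

def pif_spacer_length(seq: str):
    """Return the spacer length of the first PIF-like site, or None."""
    m = PIF_SITE.search(seq)
    return len(m.group(1)) if m else None

print(pif_spacer_length("TTACGTGGAATCACGTTT"))  # 6 (the optimal spacing)
print(pif_spacer_length("TTTTTT"))              # None (no half-sites)
```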
[ 21 ] NS1 initially introduces a single-strand nick in OriL TC in the B ("right") arm of the junction and becomes covalently attached to the DNA on the 5′ side of the nick, exposing a basepaired 3′ nucleotide. Two outcomes can then occur, depending on the speed with which a replication fork is assembled. If assembly is rapid, then while the junction is in its linear configuration, "read-through" synthesis copies the upper strand, which regenerates the duplex junction and displaces a positive-sense strand that feeds back into the replicative pool. This promotes MVM DNA amplification but does not lead to synthesis of new terminal sequences in the correct orientation or to junction resolution. [ 22 ] To create a resolvable structure, the initial nicking must be followed by melting and rearrangement of the dimer junction into a cruciform. This is driven by the 3′-to-5′ helicase activity of the 5′-linked NS1 complex. Once this cruciform extends to include sequences beyond the nick site, the exposed primer at the nick site in OriL TC undergoes template switching by annealing with its complement in the lower arm of the cruciform. If a fork assembles after this point, then the subsequent synthesis unfolds and copies the lower cruciform arm. This creates a heterocruciform intermediate that contains the newly synthesized telomere in the flip sequence orientation that is attached to the lower strand of the B arm. [ 22 ] This modified junction is called MJ2. [ 23 ] The lower arm of MJ2 is an extended-form duplex palindrome that is essentially identical to those generated during terminal resolution. Once MJ2 is synthesized, the lower arm becomes susceptible to rabbit-ear formation. This repositions the 3′ nucleotide of the newly synthesized copy of the lower arm so that it pairs with inboard sequences on the junction's B arm to prime strand displacement synthesis. 
If a replication fork is created at this 3′ nucleotide, then the lower strand of the B arm is copied, creating an intermediate junction called MJ1 and progressively displacing the upper strand. This leads to the release of the newly synthesized B turn-around (B-ta) sequence. The residual cruciform, called δJ, is partially single-stranded at the upper part of the B arm and contains the intact upper strand of the junction paired to the lower strand of the A ("left") arm, with an intact copy of the left-end hairpin, ending in a 5′ NS1 complex. Since δJ carries the NS1 helicase, it is presumed to periodically alter configuration. [ 22 ] [ 23 ] The next step is less certain but can be inferred based on what is known about the process thus far. The NS1 helicase is expected to create a dynamic structure in which the nick site in δJ in the normally inactive A side is temporarily but repeatedly exposed in a single-stranded form during duplex-to-hairpin rearrangements, which allows NS1 to engage the nick site in the origin OriL GAA without the help of a cofactor. The nick would leave NS1 covalently attached to the positive-sense "B" strand of δJ and lead to the release of this strand. Nicking also leaves open a basepaired 3′ nucleotide on the "A" strand of δJ to prime DNA synthesis. If a replication fork is established here, then the A strand is unfolded and copied to create its duplex extended form. [ 23 ] When MVM genomes replicate in vivo , the aforementioned nick may not occur because both ends of the dimer replicative form contain efficient right-end hairpin origins. Therefore, replication forks may progress back toward the dimer junction from the genome's right end, copying the top strand of the B arm before the final resolution nick. This bypasses dimer bridge resolution and recycles the top strand into a replicating duplex dimer pool. 
In a closely related virus, LuIII, the single-strand nick releases a positive-sense strand with its left-end hairpin in the flop orientation. Unlike MVM, LuIII packages strands of both senses with equal frequency. In the negative-sense strands, the left-end hairpins are all in the flip orientation, while in the positive-sense strands, there are an equal number of flip and flop orientations. Compared to MVM, LuIII contains a two-base insertion immediately 3′ of the nick site in the right origin, which impairs its efficiency. Because of this, the reduced efficiency of replication fork assembly in the genome's right end may favor single-strand nicking by giving it more time to occur. [ 23 ] Individual progeny genomes are excised from genomic replicative concatemers, beginning with breaks introduced into replication origins, usually by the replication initiator protein. This results in the establishment of new replication forks that replicate the telomeres in a combination of terminal resolution and junction resolution and displaces individual ssDNA genomes from the replicative molecule. [ 7 ] [ 20 ] At the end of this process, the telomeres are folded back inwards to form hairpins on excised genomes. The extended-form termini created during excision resemble the extended-form molecules prior to terminal resolution, so they can be melted out and refolded into rabbit ears for additional rounds of replication. [ 1 ] Within an infected cell, numerous replicative concatemers are therefore able to arise. [ 7 ] Displacement of progeny ssDNA genomes occurs predominantly or exclusively during active DNA replication, in cells that are assembling viral particles. Displacement of single strands may therefore be associated with packaging viral DNA into capsids. 
Earlier research suggested that the preassembled viral particle may sequester the genome in a 5′-to-3′ direction as it is displaced from the fork, but more recent research suggests that packaging is performed in a 3′-to-5′ direction driven by the NS1 helicase using newly synthesized single strands. [ 24 ] It is not clear if these single strands are released into the nucleoplasm so that packaging complexes are physically separate from replication complexes or if the replication intermediates serve as both replication and packaging substrates. In the latter case, newly displaced progeny genomes would be kept in the replication complex via interactions between their 5′-linked NS1 molecules and NS1 or capsid proteins that are physically associated with replicating DNA. [ 24 ] Genomes are inserted into the capsid via an entrance called a portal situated at one of the icosahedral 5-fold axes of the capsid, [ 4 ] which is possibly opposite of the opening from which genomes are expelled early in the replication cycle. [ 5 ] Strand selection for encapsidation likely does not involve specific packaging signals but may be predictable by the Kinetic Hairpin Transfer (KHT) mathematical model, which explains the distribution of the strands and terminal conformations of packaged genomes in terms of the efficiency with which each terminus type can undergo reactions that allow it to be copied and reformed. In other words, the KHT model postulates that the relative efficiency with which two genomic termini are resolved and replicated determines the distribution of amplified replication intermediates created during infection and ultimately the efficiency with which ssDNAs of characteristic polarity and terminal orientations are excised, which will then be packaged with equal efficiency. [ 4 ] [ 24 ] Preferential excision of particular genomes is only apparent during packaging. Therefore, among parvoviruses that package strands of one sense, replication appears to be biphasic. 
At early times, both sense strands are excised. This is followed by a switch in the replication mode that allows for exclusive synthesis of a single sense for packaging. A modified form of the KHT model, called the preferential strand displacement model, proposes that the aforementioned switch in replication is caused by the onset of packaging because the substrate for packaging is probably a newly displaced DNA molecule. [ 24 ] For heterotelomeric parvoviruses, imbalance of origin firing leads to preferential displacement of negative sense strands from the right-end origin. The relative frequency of sense strands in packaged virions can therefore be used to infer the type of resolution mechanism used during excision. [ 5 ] Shortly after the start of S-phase, translation of viral mRNA leads to the accumulation of capsid proteins in the nucleus. These proteins form into oligomers that are assembled into intact empty capsids. After encapsidation, complete virions may be exported from the nucleus to the exterior of the cell before disintegration of the nucleus. Disruption of the host cell environment may also occur later on in infection. This results in cell lysis via necrosis or apoptosis , which releases virions to the outside of the cell. [ 4 ] [ 17 ] Many small replicons that have circular genomes such as circular ssDNA viruses and circular plasmids replicate via rolling circle replication (RCR), which is a unidirectional, strand displacement form of DNA replication similar to RHR. In RCR, successive rounds of replication, which proceeds in a loop around the genome, are initiated and terminated by site-specific single-strand nicks made by a replicon-encoded endonuclease, variously called the nickase, relaxase, mobilization protein (mob), transesterase, or replication protein (Rep). The replication initiator protein of parvoviruses is genetically related to these other endonucleases. 
[ 17 ] RCR initiator proteins contain three motifs considered to be important for replication. Two of these are retained within parvovirus initiator proteins: an HUHUUU cluster, which is presumed to bind to a Mg 2+ ion required for nicking, and a YxxxK motif that contains the active-site tyrosine residue that attacks the phosphodiester bond of target DNA. In contrast to RCR initiator proteins, which can join together DNA strands, RHR initiator proteins have only vestigial traces of being able to perform ligation. [ 17 ] RCR begins when the initiator protein nicks a DNA strand at a specific sequence in the replication origin region. This is done through a transesterification reaction that forms a 5′-phosphate bond that connects the DNA to the active-site tyrosine and frees the 3′-end hydroxyl (3′-OH) adjacent to the nick site. The 3′-end is then used as a primer for the host DNA polymerase to begin replication while the initiator protein remains attached to the 5′-end of the "original" strand. After one loop of replication around the circular genome, the initiator protein returns to the nick site, i.e. the original initiator complex, while still attached to the parent strand and attacks the regenerated duplex nick site, or a nearby second site in some cases, by means of a topoisomerase -like nicking-joining reaction. [ 17 ] During the aforementioned reaction, the initiator protein cleaves a new nick site and is transferred across the analogous phosphodiester bond. It thereby becomes attached to the new 5′-end while ligating the 5′-end of the first strand to which it was originally attached to the 3′-end of the same strand. This second mechanism varies depending on the replicon. Some replicons such as the virus ΦX174 contain a second active tyrosine residue in the initiator protein. Others use the analogous active-site tyrosine in a second initiator protein that is present as part of a multimeric nickase complex. 
[ 17 ] This second nicking reaction may occur after a single loop, or successive loops may occur, creating a concatemer containing multiple copies of the genome. The result of this nick is that displaced genomes become detached from the replicative molecule. These copies of the genome are ligated and may either be encapsidated into progeny capsids, provided they are monomeric, or converted to a covalently-closed double-stranded form by a host DNA polymerase for further replication. While RHR generally replicates both sense strands in a continuous process, in RCR complementary-strand synthesis and genomic-strand synthesis occur separately. [ 7 ] The strategies used in RHR to engage the nick site are also present in RCR. Most RCR origins are in the form of duplex DNA that has to be melted before nicking. RCR initiators accomplish this by binding to specific DNA-binding sequences in the origin next to the initiation site. [ 17 ] The latter site is then melted in a process that consumes ATP and is assisted by the ability of the separated strands to reconfigure into stem-loop structures, in which the nick site is presented on an exposed loop. Like RHR initiator proteins, many RCR initiator proteins contain helicase activity, which allows them to melt the DNA prior to nicking and to serve as the 3′-to-5′ helicase in the replication fork. [ 19 ]
https://en.wikipedia.org/wiki/Rolling_hairpin_replication
Rolling resistance , sometimes called rolling friction or rolling drag , is the force resisting the motion when a body (such as a ball , tire , or wheel ) rolls on a surface. It is mainly caused by non-elastic effects; that is, not all the energy needed for deformation (or movement) of the wheel, roadbed, etc., is recovered when the pressure is removed. Two forms of this are hysteresis losses (see below ), and permanent (plastic) deformation of the object or the surface (e.g. soil). Note that slippage between the wheel and the surface also results in energy dissipation. Although some researchers have included this term in rolling resistance, others suggest that it should be treated separately from rolling resistance, because it is due to the torque applied to the wheel and the resultant slip between the wheel and ground, which is called slip loss or slip resistance. [ 1 ] In addition, only the so-called slip resistance involves friction ; therefore, the name "rolling friction" is to an extent a misnomer. By analogy with sliding friction , rolling resistance is often expressed as a coefficient times the normal force. This coefficient of rolling resistance is generally much smaller than the coefficient of sliding friction. [ 2 ] Any coasting wheeled vehicle will gradually slow down due to rolling resistance, including that of the bearings, but a train car with steel wheels running on steel rails will roll farther than a bus of the same mass with rubber tires running on tarmac/asphalt . Factors that contribute to rolling resistance are the (amount of) deformation of the wheels, the deformation of the roadbed surface, and movement below the surface. Additional contributing factors include wheel diameter , [ 3 ] load on the wheel, surface adhesion, sliding, and relative micro-sliding between the surfaces of contact. The losses due to hysteresis also depend strongly on the material properties of the wheel or tire and the surface.
For example, a rubber tire will have higher rolling resistance on a paved road than a steel railroad wheel on a steel rail. Also, sand on the ground will give more rolling resistance than concrete . Soil rolling resistance factor is not dependent on speed. [ citation needed ] The primary cause of pneumatic tire rolling resistance is hysteresis : [ 5 ] A characteristic of a deformable material such that the energy of deformation is greater than the energy of recovery. The rubber compound in a tire exhibits hysteresis. As the tire rotates under the weight of the vehicle, it experiences repeated cycles of deformation and recovery, and it dissipates the hysteresis energy loss as heat. Hysteresis is the main cause of energy loss associated with rolling resistance and is attributed to the viscoelastic characteristics of the rubber. This main principle is illustrated in the figure of the rolling cylinders. If two equal cylinders are pressed together then the contact surface is flat. In the absence of surface friction, contact stresses are normal (i.e. perpendicular) to the contact surface. Consider a particle that enters the contact area at the right side, travels through the contact patch and leaves at the left side. Initially its vertical deformation is increasing, which is resisted by the hysteresis effect. Therefore, an additional pressure is generated to avoid interpenetration of the two surfaces. Later its vertical deformation is decreasing. This is again resisted by the hysteresis effect. In this case this decreases the pressure that is needed to keep the two bodies separate. The resulting pressure distribution is asymmetrical and is shifted to the right. The line of action of the (aggregate) vertical force no longer passes through the centers of the cylinders. This means that a moment occurs that tends to retard the rolling motion. 
Materials that have a large hysteresis effect, such as rubber, which bounce back slowly, exhibit more rolling resistance than materials with a small hysteresis effect that bounce back more quickly and more completely, such as steel or silica . Low rolling resistance tires typically incorporate silica in place of carbon black in their tread compounds to reduce low-frequency hysteresis without compromising traction. [ 7 ] Note that railroads also have hysteresis in the roadbed structure. [ 8 ] In the broad sense, specific "rolling resistance" (for vehicles) is the force per unit vehicle weight required to move the vehicle on level ground at a constant slow speed where aerodynamic drag (air resistance) is insignificant and also where there are no traction (motor) forces or brakes applied. In other words, the vehicle would be coasting if it were not for the force to maintain constant speed. [ 9 ] This broad sense includes wheel bearing resistance, the energy dissipated by vibration and oscillation of both the roadbed and the vehicle, and sliding of the wheel on the roadbed surface (pavement or a rail). But there is an even broader sense that would include energy wasted by wheel slippage due to the torque applied from the engine . This includes the increased power required due to the increased velocity of the wheels where the tangential velocity of the driving wheel(s) becomes greater than the vehicle speed due to slippage. Since power is equal to force times velocity and the wheel velocity has increased, the power required has increased accordingly. The pure "rolling resistance" for a train is that which happens due to deformation and possible minor sliding at the wheel-road contact. [ 10 ] For a rubber tire, an analogous energy loss happens over the entire tire, but it is still called "rolling resistance". 
In the broad sense, "rolling resistance" includes wheel bearing resistance, energy lost in shaking both the roadbed (and the earth underneath) and the vehicle itself, and energy lost to sliding at the wheel/road (or rail) contact. Railroad textbooks seem to cover all these resistance forces but do not call their sum "rolling resistance" (broad sense) as is done in this article. They simply sum up all the resistance forces (including aerodynamic drag) and call the sum basic train resistance (or the like). [ 11 ] Since railroad rolling resistance in the broad sense may be a few times larger than the pure rolling resistance alone, [ 12 ] reported values may be in serious conflict, since they may be based on different definitions of "rolling resistance". The train's engines must, of course, provide the energy to overcome this broad-sense rolling resistance. For tires, rolling resistance is defined as the energy consumed by a tire per unit distance covered. It is also called rolling friction or rolling drag. It is one of the forces that act to oppose the motion of the vehicle. The main reason for this is that when the tires are in motion and touch the surface, the surface changes shape and the tire deforms. [ 13 ] For highway motor vehicles, some energy is dissipated in shaking the roadway (and the earth beneath it), in shaking the vehicle itself, and in sliding of the tires. But, other than the additional power required due to torque and wheel bearing friction, non-pure rolling resistance does not seem to have been investigated, possibly because the "pure" rolling resistance of a rubber tire is several times higher than the neglected resistances.
[ 14 ] The "rolling resistance coefficient" Crr is defined by the following equation: [ 6 ] F = Crr N, where F is the force needed to push (or tow) a wheeled vehicle forward (at constant speed on a level surface, or zero grade, with zero air resistance) and N is the normal force due to the vehicle's weight, so that Crr is the tow force per unit force of weight. It is assumed that all wheels are the same and bear identical weight. Thus Crr = 0.01 means that it would take only 0.01 pounds to tow a vehicle weighing one pound; for a 1000-pound vehicle, it would take 1000 times more tow force, i.e. 10 pounds. One could say that Crr is in lb(tow-force)/lb(vehicle weight); since this lb/lb is force divided by force, Crr is dimensionless. Multiplying it by 100 gives the percent (%) of the weight of the vehicle required to maintain slow steady speed. Crr is often multiplied by 1000 to get parts per thousand, which is the same as kilograms (kg force) per metric ton (tonne = 1000 kg ), [ 15 ] which is the same as pounds of resistance per 1000 pounds of load, or newtons per kilonewton, etc. US railroads have traditionally used lb/ton; this is just 2000 Crr. These are all measures of resistance per unit vehicle weight. While they are all "specific resistances", they are sometimes just called "resistance", although they are really a coefficient (ratio) or a multiple thereof. If using pounds or kilograms as force units, mass is equal to weight (in Earth's gravity a kilogram of mass weighs a kilogram and exerts a kilogram of force), so one could claim that Crr is also the force per unit mass in such units. The SI system would use N/tonne (N/T, N/t), which is 1000 g Crr and is force per unit mass, where g is the acceleration of gravity in SI units (metres per second squared).
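The definition and the unit conversions above can be sketched in a few lines of Python (an illustrative sketch; the variable names are ours, and the numbers follow directly from the dimensionless definition of Crr):

```python
def rolling_resistance_force(crr, normal_force):
    """F = Crr * N: force needed to keep the vehicle moving at a
    constant slow speed on level ground with no air resistance."""
    return crr * normal_force

crr = 0.01
# Crr = 0.01: a 1000 lb vehicle needs about 10 lb of tow force.
tow_force_lb = rolling_resistance_force(crr, 1000.0)

# The same dimensionless Crr expressed in other customary measures:
percent_of_weight = crr * 100        # ~1% of vehicle weight
kg_force_per_tonne = crr * 1000      # ~10 kg-force per metric ton
lb_per_us_ton = crr * 2000           # ~20 lb/ton (US railroad usage)
g = 9.81                             # m/s^2, acceleration of gravity
newtons_per_tonne = 1000 * g * crr   # ~98.1 N/t in SI units
```

All of these are the same physical ratio, just rescaled; that is why the text calls them all "specific resistances".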
[ 16 ] The above shows resistance proportional to Crr but does not explicitly show any variation with speed, load, torque , surface roughness, diameter , tire inflation/wear, etc., because Crr itself varies with those factors. It might seem from the above definition that rolling resistance is directly proportional to vehicle weight, but it is not . There are at least two popular models for calculating rolling resistance. The results of these tests can be hard for the general public to obtain as manufacturers prefer to publicize "comfort" and "performance". The coefficient of rolling resistance for a slow rigid wheel on a perfectly elastic surface, not adjusted for velocity, can be calculated by [ 18 ] [ citation needed ] Crr = sqrt(z/d), where z is the sinkage depth and d is the diameter of the rigid wheel. The empirical formula for Crr for cast iron mine car wheels on steel rails is [ 19 ] Crr = 0.0048 (18/D)^(1/2) (100/W)^(1/4) = 0.0643988 / (W D^2)^(1/4), where D is the wheel diameter (in inches) and W is the load on the wheel (in pounds). As an alternative to Crr one can use b, a different rolling resistance coefficient, or coefficient of rolling friction, with the dimension of length. It is defined by the formula [ 3 ] F = N b / r, where N is the normal force, b is the rolling resistance coefficient with dimension of length, and r is the wheel radius. The above equation, in which resistance is inversely proportional to the radius r, seems to be based on the discredited "Coulomb's law" (neither Coulomb's inverse square law nor Coulomb's law of friction) [ citation needed ] . See dependence on diameter . Equating this equation with the force per the rolling resistance coefficient and solving for b gives b = Crr r.
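The two closed forms of the mine-car formula quoted above are algebraically equivalent, which is easy to check numerically (a sketch under the assumption that D is in inches and W in pounds; the function names are ours):

```python
import math

def crr_rigid_wheel(z, d):
    """Crr = sqrt(z/d): slow rigid wheel on a perfectly elastic surface,
    with sinkage depth z and wheel diameter d in the same length units."""
    return math.sqrt(z / d)

def crr_mine_car(D, W):
    """Empirical Crr for cast iron mine car wheels on steel rails:
    Crr = 0.0048 * (18/D)**0.5 * (100/W)**0.25."""
    return 0.0048 * (18.0 / D) ** 0.5 * (100.0 / W) ** 0.25

def crr_mine_car_alt(D, W):
    """Equivalent closed form: 0.0643988 / (W * D**2)**0.25."""
    return 0.0643988 / (W * D ** 2) ** 0.25

# At the reference point D = 18, W = 100 the formula gives exactly 0.0048,
# and the two closed forms agree elsewhere as well:
a = crr_mine_car(36.0, 200.0)
b = crr_mine_car_alt(36.0, 200.0)
```

Note how both empirical exponents make Crr fall as wheel diameter and wheel load grow, matching the diameter and load dependences discussed below.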
Therefore, if a source gives the rolling resistance coefficient Crr as a dimensionless coefficient, it can be converted to b, having units of length, by multiplying Crr by the wheel radius r. Table of rolling resistance coefficient examples: [3] For example, in Earth gravity, a car of 1000 kg on asphalt will need a force of around 100 newtons for rolling (1000 kg × 9.81 m/s 2 × 0.01 = 98.1 N). According to Dupuit (1837), rolling resistance (of wheeled carriages with wooden wheels with iron tires) is approximately inversely proportional to the square root of wheel diameter. [ 34 ] This rule has been experimentally verified for cast iron wheels (8″–24″ diameter) on steel rail [ 35 ] and for 19th century carriage wheels, [ 33 ] but there are other tests on carriage wheels that do not agree. [ 33 ] Theory of a cylinder rolling on an elastic roadway also gives this same rule. [ 36 ] These contradict earlier (1785) tests by Coulomb of rolling wooden cylinders, in which Coulomb reported that rolling resistance was inversely proportional to the diameter of the wheel (known as "Coulomb's law"). [ 37 ] This disputed (or wrongly applied) "Coulomb's law" is still found in handbooks, however. For pneumatic tires on hard pavement, it is reported that the effect of diameter on rolling resistance is negligible (within a practical range of diameters). [ 38 ] [ 39 ] The driving torque T to overcome rolling resistance Rr and maintain steady speed on level ground (with no air resistance) can be calculated by T = (Vs / Ω) Rr, where Vs is the linear speed of the vehicle and Ω is the angular speed of the wheel. It is noteworthy that Vs/Ω is usually not equal to the radius of the rolling body, as a result of wheel slip. [ 40 ] [ 41 ] [ 42 ] Slip between wheel and ground inevitably occurs whenever a driving or braking torque is applied to the wheel.
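The conversion b = Crr r and the torque relation T = (Vs/Ω) Rr can be sketched together (illustrative numbers; the 0.3 m wheel radius and 20 m/s speed are our assumptions, not values from the text):

```python
def b_from_crr(crr, wheel_radius):
    """b = Crr * r: converts the dimensionless coefficient into the
    length-dimension rolling resistance coefficient used in F = N*b/r."""
    return crr * wheel_radius

def driving_torque(vehicle_speed, wheel_angular_speed, resistance_force):
    """T = (Vs / Omega) * Rr: torque to hold a steady speed on level
    ground with no air resistance. Vs/Omega is the effective rolling
    radius, which differs from the geometric radius when the wheel slips."""
    return (vehicle_speed / wheel_angular_speed) * resistance_force

# 1000 kg car on asphalt (Crr ~ 0.01), wheel radius 0.3 m, no slip:
rr_force = 1000 * 9.81 * 0.01          # ~98.1 N, as in the text
b = b_from_crr(0.01, 0.3)              # ~0.003 m
omega = 20.0 / 0.3                     # rad/s at 20 m/s with no slip
torque = driving_torque(20.0, omega, rr_force)  # ~29.4 N*m per wheel set
```

With zero slip Vs/Ω reduces to the geometric radius, so the torque is simply r × Rr; under slip the effective rolling radius, and hence the torque, changes.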
[ 43 ] [ 44 ] Consequently, the linear speed of the vehicle differs from the wheel's circumferential speed. Notably, slip does not occur in non-driven (free-rolling) wheels, which are not subjected to driving torque, except under braking. Therefore, rolling resistance, namely hysteresis loss, is the main source of energy dissipation in free-rolling wheels or axles, whereas in the drive wheels and axles slip resistance, namely loss due to wheel slip, contributes along with rolling resistance. [ 45 ] The significance of rolling or slip resistance depends largely on the tractive force , coefficient of friction, normal load, etc. [ 46 ] "Applied torque" may be either driving torque applied by a motor (often through a transmission ) or braking torque applied by brakes (including regenerative braking ). Such torques result in energy dissipation above that due to the basic rolling resistance of a freely rolling wheel, i.e. slip resistance. This additional loss is in part due to some slipping of the wheel and, for pneumatic tires, to more flexing of the sidewalls due to the torque. Slip is defined such that a 2% slip means that the circumferential speed of the driving wheel exceeds the speed of the vehicle by 2%. A small percentage slip can result in a slip resistance which is much larger than the basic rolling resistance. For example, for pneumatic tires, a 5% slip can translate into a 200% increase in rolling resistance. [ 47 ] This is partly because the tractive force applied during this slip is many times greater than the rolling resistance force, and thus much more power per unit velocity is being applied (recall power = force × velocity, so that power per unit of velocity is just force). So just a small percentage increase in circumferential velocity due to slip can translate into a loss of traction power which may even exceed the power loss due to basic (ordinary) rolling resistance.
For railroads, this effect may be even more pronounced due to the low rolling resistance of steel wheels. It has been shown that for a passenger car, when the tractive force is about 40% of the maximum traction, the slip resistance is almost equal to the basic rolling resistance (hysteresis loss); but in the case of a tractive force equal to 70% of the maximum traction, slip resistance becomes 10 times larger than the basic rolling resistance. [ 1 ] In order to apply any traction to the wheels, some slippage of the wheel is required. [ 48 ] For trains climbing up a grade, this slip is normally 1.5% to 2.5%. Slip (also known as creep ) is normally roughly directly proportional to tractive effort . An exception is if the tractive effort is so high that the wheel is close to substantial slipping (more than just a few percent as discussed above); then slip rapidly increases with tractive effort and is no longer linear. With a slightly higher applied tractive effort the wheel spins out of control and the adhesion drops, resulting in the wheel spinning even faster. This is the type of slipping that is observable by eye: a slip of, say, 2% for traction is only observed by instruments. Such rapid slip may result in excessive wear or damage. Rolling resistance greatly increases with applied torque. At high torques, which apply a tangential force to the road of about half the weight of the vehicle, the rolling resistance may triple (a 200% increase). [ 47 ] This is in part due to a slip of about 5%. The rolling resistance increase with applied torque is not linear, but increases at a faster rate as the torque becomes higher. The rolling resistance coefficient, Crr, significantly decreases as the weight of the rail car per wheel increases. [ 49 ] For example, an empty freight car had about twice the Crr of a loaded car (Crr = 0.002 vs. Crr = 0.001). This same "economy of scale" shows up in testing of mine rail cars.
[ 50 ] The theoretical Crr for a rigid wheel rolling on an elastic roadbed shows Crr inversely proportional to the square root of the load. [ 36 ] If Crr is itself dependent on wheel load per this inverse square-root rule, then an increase in load of 2% results in only a 1% increase in rolling resistance force. [ 51 ] For pneumatic tires, the direction of change in Crr (rolling resistance coefficient) depends on whether or not tire inflation is increased with increasing load. [ 52 ] It is reported that, if inflation pressure is increased with load according to an (undefined) "schedule", then a 20% increase in load decreases Crr by 3%; but if the inflation pressure is not changed, then a 20% increase in load results in a 4% increase in Crr. Since the rolling resistance force is the product of Crr and load, the 20% increase in load combined with the 4% increase in Crr raises the rolling resistance by a factor of 1.20 × 1.04 = 1.248, i.e. a 24.8% increase. [ 53 ] When a vehicle ( motor vehicle or railroad train ) goes around a curve, rolling resistance usually increases. If the curve is not banked so as to exactly counter the centrifugal force with an equal and opposing centripetal force due to the banking, then there will be a net unbalanced sideways force on the vehicle, resulting in increased rolling resistance. Banking is also known as "superelevation" or "cant" (not to be confused with the rail cant of a rail ). For railroads, this is called curve resistance, but for roads it has (at least once) been called rolling resistance due to cornering . Rolling friction generates sound (vibrational) energy, as mechanical energy is converted to this form of energy due to the friction. One of the most common examples of rolling friction is the movement of motor vehicle tires on a roadway , a process which generates sound as a by-product.
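The 24.8% figure quoted above follows from compounding the two effects, since F = Crr N scales with both factors; this is a one-line check:

```python
# Pneumatic tire with inflation pressure held constant (reported figures):
load_factor = 1.20            # load increased by 20%
crr_factor = 1.04             # Crr up 4% at the unchanged pressure
force_factor = load_factor * crr_factor         # F = Crr * N scales with both
increase_percent = (force_factor - 1.0) * 100   # ~24.8%
```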
[ 54 ] The sound generated by automobile and truck tires as they roll (especially noticeable at highway speeds) is mostly due to the percussion of the tire treads, and to compression (and subsequent decompression) of air temporarily captured within the treads. [ 55 ] Several factors affect the magnitude of the rolling resistance a tire generates. In a broad sense, rolling resistance can be defined as the sum of several components. [ 62 ] Wheel bearing torque losses can be measured as a rolling resistance at the wheel rim, Crr . Railroads normally use roller bearings, which are either cylindrical (Russia) [ 63 ] or tapered (United States). [ 64 ] The specific rolling resistance in bearings varies with both wheel loading and speed. [ 65 ] Wheel bearing rolling resistance is lowest with high axle loads and intermediate speeds of 60–80 km/h, with a Crr of 0.00013 (axle load of 21 tonnes). For empty freight cars with axle loads of 5.5 tonnes, Crr goes up to 0.00020 at 60 km/h, but at a low speed of 20 km/h it increases to 0.00024 and at a high speed (for freight trains) of 120 km/h it is 0.00028. The Crr obtained above is added to the Crr of the other components to obtain the total Crr for the wheels. The rolling resistance of the steel wheels of a train on steel rail is far less than that of the rubber tires of an automobile or truck. The weight of trains varies greatly; in some cases they may be much heavier per passenger or per net ton of freight than an automobile or truck, but in other cases they may be much lighter. As an example of a very heavy passenger train, in 1975 Amtrak passenger trains weighed a little over 7 tonnes per passenger, [ 66 ] much heavier than the average of a little over one ton per passenger for an automobile. This means that for an Amtrak passenger train in 1975, much of the energy savings of the lower rolling resistance was lost to its greater weight.
An example of a very light high-speed passenger train is the N700 Series Shinkansen , which weighs 715 tonnes and carries 1323 passengers, resulting in a per-passenger weight of about half a tonne. This lighter weight per passenger, combined with the lower rolling resistance of steel wheels on steel rail means that an N700 Shinkansen is much more energy efficient than a typical automobile. In the case of freight, CSX ran an advertisement campaign in 2013 claiming that their freight trains move "a ton of freight 436 miles on a gallon of fuel", whereas some sources claim trucks move a ton of freight about 130 miles per gallon of fuel, indicating trains are more efficient overall.
https://en.wikipedia.org/wiki/Rolling_resistance
A rollout is an analysis technique for backgammon positions and moves. A rollout consists of playing the same position many times (with different dice rolls) and recording the results. The balance of wins and losses is used to evaluate the equity of the position. Historically this was done by hand, but it is now undertaken primarily by computer programs. In order to compare two or more ways to move, rollouts can be performed from the positions after each move. Better choices will yield a more favorable position, and thus will win more times (and lose more rarely) in the end. Computer programs usually play rollouts where the number of games is a multiple of 36, and ensure that the first dice roll is uniformly distributed. That is, 1/36 of the played games will start with a roll of 1-1 , another 36th will start with 1-2 , and so on. A common length for a rollout is 36×36 = 1296 games, in which each possible combination is used for the first two rolls. This improves the accuracy of the technique. Rollouts depend on the availability of a good evaluator. If the computer makes mistakes in particular scenarios, the rollout results may be invalid. For example, if a computer AI's backgame strategy were weak, rollouts starting in a backgame position would skew the equity against the player who chose that strategy. When comparing moves, a weak backgame AI may favor a less aggressive style. [ 1 ] It is therefore not uncommon to see slightly different outcomes from rollouts done with different programs. [ 2 ] Nevertheless, rollouts whose results are consistently nonintuitive occur, [ 3 ] and their results are usually accepted by most backgammon players. Modern backgammon opening theory is mostly based on rollouts. [ 4 ]
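The stratified first-roll scheme described above can be sketched as follows; `play_out` stands in for a real backgammon engine's game-playing routine, and the stub used here is purely hypothetical:

```python
import itertools
import random

def rollout_equity(position, play_out, trials_per_combo=1):
    """Stratified rollout: each ordered pair of first and second dice
    rolls (36 x 36 = 1296 combinations) is used equally often, removing
    first-roll sampling noise from the equity estimate."""
    rolls = list(itertools.product(range(1, 7), repeat=2))  # 36 ordered rolls
    total, games = 0.0, 0
    for first in rolls:
        for second in rolls:
            for _ in range(trials_per_combo):
                total += play_out(position, first, second)
                games += 1
    return total / games, games

# Hypothetical stand-in for a real engine: a genuine play_out would play
# the game to completion from the given first two rolls and return the
# point outcome; here we just fake a win/loss result.
_rng = random.Random(0)
def fake_play_out(position, first, second):
    return _rng.choice([+1.0, -1.0])

equity, games = rollout_equity("opening position", fake_play_out)
# games == 1296: every first-two-roll combination is used exactly once
```

Because every first-two-roll combination appears the same number of times, differences between two rollouts reflect later play rather than luck in the opening rolls.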
https://en.wikipedia.org/wiki/Rollout_(backgammon)
Rollover (also known as flameover ) is a stage of a structure fire when fire gases in a room or other enclosed area ignite. [ 1 ] Since heated gases, the product of pyrolysis , rise to the ceiling, this is where a rollover phenomenon is most often witnessed. Visually, this may be seen as flames "rolling" across the ceiling, radiating outward from the seat of the fire to the extent of gas spread. Rollover is not the same as flashover , although it may precede it, and the terms may be confused. [ 2 ] In the case of rollover, only gases present in the room, not the room contents, ignite.
https://en.wikipedia.org/wiki/Rollover_(fire)
Horace Romano " Rom " Harré ( / ˈ h æ r eɪ / ; [ 2 ] 18 December 1927 – 17 October 2019) [ 3 ] [ 4 ] was a New Zealand-British philosopher and psychologist . Harré was born in Āpiti , in northern Manawatu , near Palmerston North , New Zealand, [ 5 ] but held British citizenship . [ 6 ] He studied chemical engineering and later graduated with a BSc in mathematics (1948) and a Master's in Philosophy (1952), both at the University of New Zealand , now the University of Auckland . He taught mathematics at King's College, Auckland (1948–1953) and the University of Punjab in Lahore , Pakistan (1953–54). He then studied at University College, Oxford , where he completed a B.Phil. under the supervision of J. L. Austin in 1956. After a fellowship at the University of Birmingham he was lecturer at the University of Leicester from 1957 to 1959. He returned to Oxford as the successor to Friedrich Waismann as University Lecturer in Philosophy of Science in 1960. [ 3 ] At Oxford, where he was a Fellow of Linacre College , he was active in the founding of the Honours School of Physics and Philosophy. [ 3 ] He also played an important part in the discursive turn in social psychology , a field he came to in the middle of his career. After his retirement from Oxford in 1995, he joined the psychology department of Georgetown University (having previously taught at that university during Spring Semesters). [ 5 ] There he continued as Distinguished Research Professor until he retired in 2016. [ 5 ] Harré gave yearly short courses as an adjunct professor at Binghamton University from 1975 through 1998 [ 7 ] and occasional courses at both American University in Washington, D.C., and at George Mason University at Fairfax, Virginia. From 2009 until 2011 he served as Director of the Centre for Philosophy of Natural and Social Science at the London School of Economics in conjunction with his US post. 
He was visiting professor at many places, teaching courses at Aoyama University , Tokyo; Universidad Santiago de Compostela , Spain; Universidad Peruana Cayetano Heredia , Lima, Peru; the Free University in Brussels ; Aarhus University in Denmark; and elsewhere. Harré was one of the world's most prolific philosophers of social science. He wrote on a wide variety of subjects including: philosophy of mathematics , philosophy of science , ontology , psychology , social psychology , chemistry , sociology and philosophy . He was an important early influence on the philosophical movement critical realism , publishing The Principles of Scientific Thinking in 1970 and Causal Powers with E. H. Madden in 1975. He supervised Roy Bhaskar 's doctoral studies, and continued to maintain close involvement with realism. He also supervised the doctoral studies of Patrick Baert , German Berrios , and Jonathan Smith , respectively in social theory , the history and epistemology of psychiatry, and social psychology . Another of Harré's distinctive contributions was to the understanding of the social self in microsociology , which he called " ethogenics ": this method attempts to understand the systems of belief or means by which individuals can attach significance to their actions and form their identities, in addition to the structure of rules and cultural resources that underlie these actions. [ 8 ] In his later years Harré returned to his first love of chemistry and became the honorary president of the International Society for the Philosophy of Chemistry. In addition to regular lectures and articles on the subject, he organized two international conferences on the philosophy of chemistry , one in Oxford and the second at the London School of Economics while he was the director of its Centre for the Philosophy of Science. Harré was the uncle of New Zealand politician and trade unionist Laila Harré and of professor of psychology and environmentalist Niki Harré . [ 9 ]
https://en.wikipedia.org/wiki/Rom_Harré
A Roma wall or Gypsy wall is a wall built by local authorities in the Czech Republic , Romania and Slovakia to segregate the Roma minority from the rest of the population. Such practices have been criticised by both human rights organizations and the European Union, who see them as a case of racial segregation. [ 1 ] A 2-metre high, 65-metre long wall along Matiční street [ 2 ] was built in the city of Ústí nad Labem in 1999 following complaints from locals that the Roma were "noisy and unhygienic". [ 1 ] Foreign journalists travelled to Ústí nad Labem to investigate, and were told by councillors that the wall was not meant to segregate by race, but to keep respectable citizens safe from noise and rubbish coming from the opposite side of the street. [ 3 ] The local authorities also argued that it would keep the Roma children from running into the street [ 4 ] and that it was part of an "urban renewal programme". [ 2 ] The Roma Civic Initiative and Deputy Prime Minister Vladimír Špidla vocally opposed the construction. [ 4 ] The wall was also criticised by U.S. Congressman Chris Smith , and a delegation from the Council of Europe described it as a "racist" and drastic solution. [ 3 ] The European Union Commissioner Günter Verheugen called it "a violation of human rights ". [ 2 ] The Czech Republic promised that it would be torn down, and it was demolished on 24 November 1999. The government provided the local authorities money for social welfare programmes, but much of the money was used to buy the houses of the non-Roma residents, thus creating a local Roma-only " ghetto ". [ 1 ] In April 2000, the Constitutional Court of the Czech Republic ruled that the MPs had exceeded their legal powers when they ordered the demolition of the wall, as this was a matter of local self-government . [ 5 ] The fence from Matiční Street was then used to separate Ústí nad Labem Zoo from Drážďanská Street, a purpose it still serves today.
[ 6 ] In Baia Mare , Romania, the local administration built a wall between the road Strada Horea and an area of social housing where 1000 Roma people live in one-room apartments, some without water or electricity. [ 7 ] According to the mayor, the wall was designed to "prevent traffic accidents", [ 8 ] while pro-democracy organizations say it amounts to "institutionalized racism". [ 8 ] In 2011, the national anti-discrimination council fined mayor Cătălin Cherecheș for the building of the wall and ordered it to be pulled down. [ 9 ] The wall nevertheless proved popular with the majority population, and the mayor was overwhelmingly re-elected in 2012. [ 7 ] In Ostrovany , a 150-metre long wall was built by the local government separating the Roma from the rest of the population. According to the mayor, the goal was to "stop vandalism and theft". [ 10 ] Slovaks accuse the Roma of stealing their fruit, vegetables and metal fence posts. [ 11 ] Unlike in other cases, in Ostrovany the Roma form the majority of the population (1200 of the 1786 residents), making the wall even more unjust according to critics, who argue that separating people is not a solution to social problems. [ 10 ] A wall was built in the summer of 2013 in the Košice-Západ district of Košice . [ 12 ] Androulla Vassiliou , the European Commissioner for Education, Culture, Multilingualism and Youth, complained about the wall, arguing that it "violates the EU's stand against racism" by segregating the Roma people [ 12 ] and that it is at odds with the concept of European Capital of Culture , a title the town bore that year. [ 13 ] The mayor of Košice, Richard Raši , said the wall had been built illegally without the necessary permits and pledged its demolition. [ 12 ] In 2013, there were 14 Roma walls in Slovakia, of which 8 were in the Košice and Prešov regions, which have the highest Roma populations. Local authorities decide on building such walls, and they usually state a reason other than the Roma people themselves. [ 14 ]
https://en.wikipedia.org/wiki/Roma_wall
Timeline: Etruscans control Italy; Carthaginian occupation of parts of Sardinia and Sicily ; Etruria becomes part of Rome; Punic Wars ; Iberia, Athens, Carthage, Asia Minor, Egypt, Britannia and Dacia successively become Roman provinces. Metals and metal working had been known to the people of modern Italy since the Bronze Age . By 53 BC, Rome had expanded to control an immense expanse of the Mediterranean. This included Italy and its islands, Spain , Macedonia , Africa , Asia Minor , Syria and Greece ; by the end of the Emperor Trajan 's reign, the Roman Empire had grown further to encompass parts of Britain , Egypt , all of modern Germany west of the Rhine, Dacia, Noricum , Judea , Armenia , Illyria , and Thrace (Shepard 1993). [ 1 ] As the empire grew, so did its need for metals. Central Italy itself was not rich in metal ores, so trade networks were necessary to meet the demand for metal. Early Italians had some access to metals in the northern regions of the peninsula, in Tuscany and Cisalpine Gaul , as well as on the islands of Elba and Sardinia . With the conquest of Etruria in 275 BC and the subsequent acquisitions due to the Punic Wars , Rome had the ability to stretch further into Transalpine Gaul and Iberia, both areas rich in minerals. At the height of the Empire, Rome exploited mineral resources from Tingitana in north-western Africa to Egypt, Arabia to North Armenia, Galatia to Germania , and Britannia to Iberia, encompassing all of the Mediterranean coast. Britannia, Iberia, Dacia, and Noricum were of special significance, as they were very rich in deposits and became major sites of resource exploitation (Shepard, 1993). There is evidence that after the middle years of the Empire there was a sudden and steep decline in mineral extraction . This was mirrored in other trades and industries. 
One of the most important Roman sources of information is the Naturalis Historia of Pliny the Elder . Several books (XXXIII–XXXVII) of his encyclopedia cover metals and metal ores, their occurrence, importance and development. Many of the first metal artifacts that archaeologists have identified have been tools or weapons , as well as objects used as ornaments such as jewellery . These early metal objects were made of the softer metals; copper , gold , and lead in particular, either as native metals or by thermal extraction from minerals, and softened by minimal heat (Craddock, 1995). While technology did advance to the point of creating surprisingly pure copper, most ancient metals are in fact alloys , the most important being bronze , an alloy of copper and tin . As metallurgical technology developed ( hammering , melting , smelting , roasting , cupellation , moulding , smithing , etc.), more metals were intentionally included in the metallurgical repertoire. By the height of the Roman Empire, metals in use included: silver , zinc , iron , mercury , arsenic , antimony , lead, gold, copper, tin (Healy 1978). As in the Bronze Age, metals were used based on many physical properties: aesthetics, hardness , colour, taste/smell (for cooking wares), timbre (instruments), resistance to corrosion , weight (i.e., density), and other factors. Many alloys were also possible, and were intentionally made in order to change the properties of the metal; e.g. an alloy of predominantly tin with some lead would harden the soft tin, creating pewter , which proved its utility as cooking and tableware . Among the recorded occurrences of the metals: Gold: Iberia , Gaul , Cisalpine Gaul, Britannia, Noricum, Dalmatia , Moesia Superior , Arabia , India, Africa; Copper: Cisthene , Cyprus, Carmania, Arabia , Aleppo , Sinai , Meroe , Masaesyli , India, Britannia. 
Arsenic: Phalagonia , Carmania; Antimony: Mytilene , Chios , around Smyrna , Transcaucasia , Persia, Tehran , Punjab , Britannia. Iberia (modern Spain and Portugal ) was possibly the Roman province richest in mineral ore , containing deposits of gold, silver, copper, tin, lead, iron, and mercury. [ 2 ] From its acquisition after the Second Punic War to the Fall of Rome, Iberia continued to produce a significant amount of Roman metals. [ 3 ] Britannia was also very rich in metals. Gold was mined at Dolaucothi in Wales , copper and tin in Cornwall , and lead in the Pennines , Mendip Hills and Wales. Significant studies have been made on the iron production of Roman Britain ; iron use in Europe was intensified by the Romans, and was part of the exchange of ideas between the cultures through Roman occupation . [ 4 ] It was the importance placed on iron by the Romans throughout the Empire which completed the shift of the few cultures still using primarily bronze into the Iron Age . Noricum (modern Austria ) was exceedingly rich in gold and iron; Pliny, Strabo , and Ovid all lauded its bountiful deposits. Iron was its main commodity, but alluvial gold was also prospected. By 15 BC, Noricum was officially made a province of the Empire, and the metal trade saw prosperity well into the fifth century AD. [ 5 ] Some scholars believe that the art of iron forging was not necessarily created, but well developed, in this area, and that it was the population of Noricum which reminded Romans of the usefulness of iron. [ 6 ] For example, of the three forms of iron ( wrought iron , steel , and soft iron), the forms exported were wrought iron (containing a small percentage of uniformly distributed slag material) and steel (carbonised iron), as pure soft iron is too soft to function like wrought iron or steel. 
[ 7 ] Dacia, located in the area of Transylvania , was conquered in 107 AD in order to capture the resources of the region for Rome. The amount of gold that came into Roman possession actually brought down the value of gold. Iron was also of importance to the region. The difference between the mines of Noricum and Dacia was the presence in Dacia of a slave population as a workforce. [ 8 ] The earliest metal manipulation was probably hammering (Craddock 1995, 1999), where copper ore was pounded into thin sheets. The ore (if there were large enough pieces of metal separate from mineral) could be beneficiated ('made better') before or after melting, where the prills of metal could be hand-picked from the cooled slag. Melting beneficiated metal also allowed early metallurgists to use moulds and casts to form shapes of molten metal (Craddock 1995). Many of the metallurgical skills developed in the Bronze Age were still in use during Roman times. Melting (using heat to separate slag and metal); smelting (using a heated, oxygen-reduced environment to reduce metal oxides to metal, releasing carbon dioxide); roasting (using an oxygen-rich environment to convert sulphide ores into metal oxides, which can then be smelted); casting (pouring liquid metal into a mould to make an object); hammering (using blunt force to make a thin sheet which can be annealed or shaped); and cupellation (separating metal alloys to isolate a specific metal) were all techniques which were well understood (Zwicker 1985, Tylecote 1962, Craddock 1995). However, the Romans provided few new technological advances other than the use of iron and of cupellation and granulation in the separation of gold alloys (Tylecote 1962). While native gold is common, the ore will sometimes contain small amounts of silver and copper. The Romans utilised a sophisticated system to separate these precious metals. 
The use of cupellation, a process developed before the rise of Rome, would extract copper from gold and silver, or from an alloy called electrum . In order to separate the gold and silver, however, the Romans would granulate the alloy by pouring the liquid, molten metal into cold water and then smelt the granules with salt , separating the gold from the chemically altered silver chloride (Tylecote 1962). They used a similar method to extract silver from lead. While Roman production became standardised in many ways, the evidence for distinct unity of furnace types is not strong, suggesting that the peripheries tended to continue with their own past furnace technologies. In order to complete some of the more complex metallurgical techniques, there is a bare minimum of necessary components for Roman metallurgy: metallic ore; a furnace of unspecified type with a form of oxygen source (assumed by Tylecote to be bellows) and a method of restricting said oxygen (a lid or cover); a source of fuel ( charcoal from wood or occasionally peat ); moulds and/or hammers and anvils for shaping; crucibles for isolating metals (Zwicker 1985); and likewise cupellation hearths (Tylecote 1962). There is direct evidence that the Romans mechanised at least part of the extraction processes. They used water power from water wheels for grinding grains and sawing timber or stone, for example. A set of sixteen such overshot wheels is still visible at Barbegal near Arles and dates from the 1st century AD or possibly earlier, the water being supplied by the main aqueduct to Arles. It is likely that the mills supplied flour for Arles and other towns locally. Multiple grain mills also existed on the Janiculum hill in Rome. Ausonius attests the use of a water mill for sawing stone in his poem Mosella from the 4th century AD. 
They could easily have adapted the technology to crush ore using tilt hammers ; just such a method is mentioned by Pliny the Elder in his Naturalis Historia , dating to about 75 AD, and there is evidence for it from Dolaucothi in South Wales . The Roman gold mines there developed from c. 75 AD. The methods survived into the medieval period, as described and illustrated by Georgius Agricola in his De re metallica . The Romans also used reverse overshot water-wheels for draining mines, the parts being prefabricated and numbered for ease of assembly. Multiple sets of such wheels have been found in Spain at the Rio Tinto copper mines, and a fragment of a wheel at Dolaucothi. An incomplete wheel from Spain is now on public show in the British Museum . The invention and widespread application of hydraulic mining , namely hushing and ground-sluicing, aided by the ability of the Romans to plan and execute mining operations on a large scale, allowed various base and precious metals to be extracted on a proto-industrial scale only rarely matched until the Industrial Revolution . [ 9 ] The most common fuel by far for smelting and forging operations, as well as for heating purposes, was wood, and particularly charcoal, which is nearly twice as efficient. [ 10 ] In addition, coal was mined in some regions to a fairly large extent: almost all major coalfields in Roman Britain were exploited by the late 2nd century AD, and a lively trade along the English North Sea coast developed, which extended to the continental Rhineland , where bituminous coal was already used for the smelting of iron ore. [ 11 ] The annual iron production at Populonia alone accounted for an estimated 2,000 [ 12 ] to 10,000 tons. [ 13 ] Romans used many methods to create metal objects. As with Samian ware , moulds were created by making a model of the desired shape (whether of wood, wax , or metal), which would then be pressed into a clay mould. 
In the case of a metal or wax model, once dry, the ceramic could be heated and the wax or metal melted until it could be poured from the mould (this process utilising wax is called the "lost wax" technique). By pouring metal into the aperture, exact copies of an object could be cast. This process made the creation of a line of objects quite uniform. This is not to suggest that the creativity of individual artisans did not continue; rather, unique handcrafted pieces were normally the work of small, rural metalworkers on the peripheries of Rome using local techniques (Tylecote 1962). There is archaeological evidence throughout the Empire demonstrating the large-scale excavations , smelting, and trade routes concerning metals. With the Romans came the concept of mass production ; this is arguably the most important aspect of Roman influence in the study of metallurgy. Three particular objects produced en masse and seen in the archaeological record throughout the Roman Empire are brooches called fibulae , worn by both men and women (Bayley 2004), coins , and ingots (Hughes 1980). These cast objects allow archaeologists to trace years of communication , trade, and even historic/stylistic changes throughout the centuries of Roman power. When the cost of producing slaves became too high to justify slave labourers for the many mines throughout the empire, around the second century a system of indentured servitude was introduced for convicts . In 369 AD, a law was reinstated due to the closure of many deep mines; the emperor Hadrian had previously given the control of mines to private employers, so that workers were hired rather than working out of force. Through the institution of this system, profits increased (Shepard 1993). In the case of Noricum, there is archaeological evidence of freemen labour in the metal trade and extraction, through graffiti on mine walls. 
In this province, many men were given Roman citizenship for their efforts contributing to the procurement of metal for the empire. Both privately owned and government run mines were in operation simultaneously (Shepard 1993). From the formation of the Roman Empire, Rome was an almost completely closed economy , not reliant on imports although exotic goods from India and China (such as gems , silk and spices ) were highly prized (Shepard 1993). Through the recovery of Roman coins and ingots throughout the ancient world (Hughes 1980), metallurgy has supplied the archaeologist with material culture through which to see the expanse of the Roman world .
https://en.wikipedia.org/wiki/Roman_metallurgy
Roman military engineering was of a scale and frequency far beyond that of its contemporaries. Indeed, military engineering was in many ways endemic in Roman military culture, as demonstrated by each Roman legionary having a shovel as part of his equipment, alongside his gladius (sword) and pila ( javelins ). Workers, craftsmen, and artisans, known collectively as fabri , served in the Roman military. Descriptions of early Roman army structure (initially by phalanx, later by legion) attributed to king Servius Tullius state that two centuriae of fabri served under an officer, the praefectus fabrum . Roman military engineering took both routine and extraordinary forms, the former a part of standard military procedure, and the latter of an extraordinary or reactive nature. Each Roman legion had a legionary fort as its permanent base. However, when on the march, particularly in enemy territory, the legion would construct a rudimentary fortified camp or castra , using only earth, turf and timber. Camp construction was the responsibility of engineering units to which specialists of many types belonged, officered by architecti (engineers), from a class of troops known as immunes who were excused from regular duties. These engineers would requisition manual labour from the soldiers at large as required. A legion could throw up a camp under enemy attack in a few hours. The names of the different types of camps apparently represent the amount of investment: tertia castra , quarta castra : "a camp of three days", "four days", etc. The engineers built bridges from timber and stone. Some Roman stone bridges survive. Stone bridges were made possible by the innovative use of keystone arches . One notable example was Julius Caesar's Bridge over the Rhine River . This bridge was completed in only ten days and is conservatively estimated to have been more than 100 m (328 feet) long. 
[ 1 ] [ 2 ] The construction was deliberately over-engineered for Caesar's stated purpose of impressing the Germanic tribes. Caesar writes in his War in Gaul that he rejected the idea of simply crossing in boats because it "would not be fitting for my own prestige and that of Rome" (at the time, he did not know that the Germanic tribes, with little knowledge of engineering, had already withdrawn from the area upon his arrival), and because a bridge would emphasize that Rome could travel wherever she wished. Caesar was able to cross over the completed bridge and explore the area uncontested, before crossing back over the subsequently dismantled bridge. Caesar related in War in Gaul that when he "sent messengers to the Sugambri to demand the surrender of those who had made war on me and on Gaul, they replied that the Rhine was the limit of Roman power". The bridge was intended to show otherwise. Although most Roman siege engines were adaptations of earlier Greek designs, the Romans were adept at engineering them swiftly and efficiently, as well as innovating variations such as the repeating ballista . The 1st century BC army engineer Vitruvius describes in detail many of the Roman siege machines in his manuscript De architectura . When invading enemy territories, the Roman army would often construct roads as it went, to allow swift reinforcement and resupply, or for easy retreat if necessary. Roman road-making skills were such that some survive today. Michael Grant credits the Roman building of the Via Appia with winning them the Second Samnite War . [ 3 ] When soldiers were not engaged in military campaigns, the legions had little to do, while costing the Roman state large sums of money. Thus, soldiers were involved in building civilian works to keep them well accustomed to hard physical labour and out of mischief, since it was believed that idle armies were a potential source of mutiny. 
Soldiers were put to use in the construction of roads, town walls, the digging of canals, drainage projects, aqueducts, harbours, and even in the cultivation of vineyards. Soldiers were used in mining operations such as building aqueducts needed for prospecting for metal veins, activities such as hydraulic mining , and building reservoirs to hold water at the minehead. The knowledge and experience learned through routine engineering lent itself readily to extraordinary engineering projects. In such projects, Roman military engineering greatly exceeded that of its contemporaries in imagination and scope. One notable project was the circumvallation of the entire city of Alesia and its Celtic leader Vercingetorix , within a massive double-wall – one inward-facing to prevent escape or offensive sallies, and one outward-facing to prevent attack by Celtic reinforcements. This wall is estimated to have been over 20 km (12 mi) long. A second example is the massive ramp built using thousands of tons of stones and beaten earth up to the invested city of Masada during the Jewish Revolt . The siege works and the ramp remain in a remarkable state of preservation.
https://en.wikipedia.org/wiki/Roman_military_engineering
In mathematics, specifically additive number theory , Romanov's theorem is a mathematical theorem proved by Nikolai Pavlovich Romanov. It states that given a fixed base b , the set of numbers that are the sum of a prime and a positive integer power of b has a positive lower asymptotic density . Romanov initially stated that he had proven the statements "In jedem Intervall (0, x) liegen mehr als ax Zahlen, welche als Summe von einer Primzahl und einer k-ten Potenz einer ganzen Zahl darstellbar sind, wo a eine gewisse positive, nur von k abhängige Konstante bedeutet" and "In jedem Intervall (0, x) liegen mehr als bx Zahlen, welche als Summe von einer Primzahl und einer Potenz von a darstellbar sind. Hier ist a eine gegebene ganze Zahl und b eine positive Konstante, welche nur von a abhängt". [ 1 ] These statements translate to "In every interval $(0,x)$ there are more than $\alpha x$ numbers which can be represented as the sum of a prime number and a k -th power of an integer, where $\alpha$ is a certain positive constant that is only dependent on k " and "In every interval $(0,x)$ there are more than $\beta x$ numbers which can be represented as the sum of a prime number and a power of a . Here a is a given integer and $\beta$ is a positive constant that only depends on a ", respectively. The second statement is generally accepted as Romanov's theorem, for example in Nathanson's book. [ 2 ] Precisely, let $$d(x) = \frac{\left|\{\, n \le x : n = p + 2^{k},\ p \text{ prime},\ k \in \mathbb{N} \,\}\right|}{x}$$ and let $\underline{d} = \liminf_{x\to\infty} d(x)$, $\overline{d} = \limsup_{x\to\infty} d(x)$. Then Romanov's theorem asserts that $\underline{d} > 0$. 
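The density $d(x)$ defined above can be estimated directly by brute force. The following sketch is an illustration of the definition only (not code from any source); it assumes exponents start at $k = 1$, matching the phrase "positive integer power of b", for the base $a = 2$:

```python
# Brute-force estimate of the density d(x) defined above for a = 2:
# d(x) = #{n <= x : n = p + 2^k, p prime, k >= 1} / x.
# Illustration only; exponents are assumed to start at k = 1.

def primes_up_to(limit):
    """Boolean sieve of Eratosthenes: sieve[i] is True iff i is prime."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return sieve

def romanov_density(x):
    """Fraction of integers n <= x representable as prime + 2^k."""
    is_prime = primes_up_to(x)
    representable = set()
    k = 1
    while 2 ** k < x:
        pow2 = 2 ** k
        for p in range(2, x - pow2 + 1):
            if is_prime[p]:
                representable.add(p + pow2)
        k += 1
    return len(representable) / x

# The empirical fraction stays bounded away from zero, consistent with d > 0.
print(romanov_density(10000))
```

Since only $n = 2 + 2^k$ is even among such sums, nearly all the density comes from odd $n$, which is why the empirical value sits a little below one half.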
[ 3 ] Alphonse de Polignac wrote in 1849 that every odd number larger than 3 can be written as the sum of an odd prime and a power of 2. (He soon noticed a counterexample, namely 959.) [ 4 ] This corresponds to the case of $a = 2$ in the original statement. The counterexample of 959 was, in fact, also mentioned in Euler 's letter to Christian Goldbach , [ 5 ] but they were working in the opposite direction, trying to find odd numbers that cannot be expressed in the form. In 1934, Romanov proved the theorem. The positive constant $\beta$ mentioned in the case $a = 2$ was later known as Romanov's constant . [ 6 ] Various estimates on the constant, as well as on $\overline{d}$, have been made. The history of such refinements is listed below. [ 3 ] In particular, since $\overline{d}$ is shown to be less than 0.5, this implies that the odd numbers that cannot be expressed in this way have positive lower asymptotic density. Analogues of Romanov's theorem have been proven in number fields by Riegel in 1961. [ 11 ] In 2015, the theorem was also proven for polynomials in finite fields. [ 12 ] Also in 2015, an arithmetic progression of Gaussian integers that are not expressible as the sum of a Gaussian prime and a power of 1+i was given. [ 13 ]
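De Polignac's counterexample can be checked directly. The sketch below (an illustration only) tries every power of two less than $n$ and tests whether the remainder is an odd prime:

```python
# Direct check of de Polignac's counterexample mentioned above: 959 is odd
# but is not (odd prime) + 2^k for any k. Illustration only.

def is_prime(n):
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def polignac_representable(n):
    """Can odd n > 3 be written as an odd prime plus a power of two?"""
    k = 0
    while 2 ** k < n:
        p = n - 2 ** k
        if p > 2 and is_prime(p):
            return True
        k += 1
    return False

print(polignac_representable(959))  # False: de Polignac's counterexample
print(polignac_representable(961))  # True: 961 = 953 + 8
```

Running the same check over all odd numbers exhibits further exceptions (127 is the smallest), consistent with the positive density of non-representable odd numbers noted above.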
https://en.wikipedia.org/wiki/Romanov's_theorem
In mathematics , the Romanovski polynomials are one of three finite subsets of real orthogonal polynomials discovered by Vsevolod Romanovsky [ 1 ] (Romanovski in French transcription) within the context of probability distribution functions in statistics. They form an orthogonal subset of a more general family of little-known Routh polynomials introduced by Edward John Routh [ 2 ] in 1884. The term Romanovski polynomials was put forward by Raposo, [ 3 ] with reference to the so-called 'pseudo-Jacobi polynomials' in Lesky's classification scheme. [ 4 ] It seems more consistent to refer to them as Romanovski–Routh polynomials , by analogy with the terms Romanovski–Bessel and Romanovski–Jacobi used by Lesky for two other sets of orthogonal polynomials. In some contrast to the standard classical orthogonal polynomials, the polynomials under consideration differ in so far as, for arbitrary parameters, only a finite number of them are orthogonal , as discussed in more detail below. The Romanovski polynomials solve a version of the hypergeometric differential equation , referred to below as equation ( 1 ). Curiously, they have been omitted from the standard textbooks on special functions in mathematical physics [ 5 ] [ 6 ] and in mathematics [ 7 ] [ 8 ] and have only a relatively scarce presence elsewhere in the mathematical literature. [ 9 ] [ 10 ] [ 11 ] The weight functions, ( 2 ), solve Pearson's differential equation , ( 3 ), which assures the self-adjointness of the differential operator of the hypergeometric ordinary differential equation . For α = 0 and β < 0 , the weight function of the Romanovski polynomials takes the shape of the Cauchy distribution , whence the associated polynomials are also denoted as Cauchy polynomials [ 12 ] in their applications in random matrix theory. [ 13 ] The Rodrigues formula , ( 4 ), specifies the polynomial R_n^{(α,β)}(x) in terms of the weight function, where N_n is a normalization constant. 
This constant is related to the coefficient c_n of the term of degree n in the polynomial R_n^{(α,β)}(x) by an expression, ( 5 ), which holds for n ≥ 1 . As shown by Askey, this finite sequence of real orthogonal polynomials can be expressed in terms of Jacobi polynomials of imaginary argument, and is thereby frequently referred to as complexified Jacobi polynomials. [ 14 ] Namely, the Romanovski equation ( 1 ) can be formally obtained from the Jacobi equation [ 15 ] via suitable replacements for real x , in which case one finds (with suitably chosen normalization constants for the Jacobi polynomials) a relation, ( 8 ), between the two families. The complex Jacobi polynomials on the right of ( 8 ) are defined via (1.1) in Kuijlaars et al. (2003), [ 16 ] which assures that ( 8 ) are real polynomials in x . Since the cited authors discuss the non-hermitian (complex) orthogonality conditions only for real Jacobi indices, the overlap between their analysis and definition ( 8 ) of Romanovski polynomials exists only if α = 0. However, examination of this peculiar case requires more scrutiny beyond the limits of this article. Notice the invertibility of ( 8 ), according to which P_n^{(α,β)}(x) is now a real Jacobi polynomial and the corresponding Romanovski polynomial would be complex. For real α , β and n = 0, 1, 2, ... , a function R_n^{(α,β)}(x) can be defined by the Rodrigues formula in equation ( 4 ) as ( 10 ), where w^{(α,β)} is the same weight function as in ( 2 ), and s ( x ) = 1 + x² is the coefficient of the second derivative of the hypergeometric differential equation as in ( 1 ). Note that we have chosen the normalization constants N_n = 1 , which is equivalent to making a choice of the coefficient of highest degree in the polynomial, as given by equation ( 5 ). Also note that the coefficient c_n does not depend on the parameter α , but only on β , and for particular values of β , c_n vanishes (i.e., for all the values where k = 0, ..., n − 1 ). This observation poses a problem addressed below. 
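The Rodrigues construction described above (the weight function times $s(x)^n$, differentiated $n$ times) follows the standard classical-orthogonal-polynomial pattern. Under that assumption the formula ( 10 ) has the following shape; this is a sketch of the generic pattern, since the precise sign and normalization conventions for $w^{(\alpha,\beta)}$ vary between references:

```latex
% Generic Rodrigues-type formula for a Pearson weight w^{(\alpha,\beta)}(x)
% with s(x) = 1 + x^2; conventions (signs, normalization N_n) vary by source.
R_n^{(\alpha,\beta)}(x)
  = \frac{1}{N_n\, w^{(\alpha,\beta)}(x)}
    \frac{\mathrm{d}^n}{\mathrm{d}x^n}
    \left[\, w^{(\alpha,\beta)}(x)\,\bigl(1+x^2\bigr)^{n} \right].
```

With the choice $N_n = 1$ made in the text, the leading coefficient $c_n$ is fixed by carrying out the $n$-fold differentiation.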
For later reference, we write explicitly the polynomials of degree 0, 1, and 2, which derive from the Rodrigues formula ( 10 ) in conjunction with Pearson's ODE ( 3 ). The two polynomials R_m^{(α,β)}(x) and R_n^{(α,β)}(x) with m ≠ n are orthogonal [ 3 ] if and only if a condition relating m , n and the parameters holds. In other words, for arbitrary parameters, only a finite number of Romanovski polynomials are orthogonal. This property is referred to as finite orthogonality . However, for some special cases, in which the parameters depend in a particular way on the polynomial degree, infinite orthogonality can be achieved. This is the case of a version of equation ( 1 ) that has been independently encountered anew within the context of the exact solubility of the quantum mechanical problem of the trigonometric Rosen–Morse potential , reported in Compean & Kirchbach (2006). [ 17 ] There, the polynomial parameters α and β are no longer arbitrary but are expressed in terms of the potential parameters, a and b , and the degree n of the polynomial, according to the relations ( 14 ). Correspondingly, λ_n emerges as λ_n = − n (2 a + n − 1) , while the weight function takes a corresponding shape. Finally, the one-dimensional variable x in Compean & Kirchbach (2006) [ 17 ] has been taken as a function of the radial distance r , where d is an appropriate length parameter. In Compean & Kirchbach [ 17 ] it has been shown that the family of Romanovski polynomials corresponding to an infinite sequence of parameter pairs ( 14 ) is orthogonal. In Weber (2007), [ 18 ] polynomials Q_ν^{(α_n, β_n + n)}(x) , with β_n + n = − a , complementary to R_n^{(α_n, β_n)}(x) , have been studied, generated in the following way: taking into account the relation ( 17 ), equation ( 16 ) becomes equivalent to ( 18 ), and thus links the complementary to the principal Romanovski polynomials. The main attraction of the complementary polynomials is that their generating function can be calculated in closed form. 
[ 19 ] Such a generating function , written for the Romanovski polynomials based on equation ( 18 ) with the parameters in ( 14 ), and therefore referring to infinite orthogonality, has been introduced by Weber, [ 18 ] whose notation differs somewhat from that used here. Recurrence relations between the infinite orthogonal series of Romanovski polynomials with the parameters in the above equations ( 14 ) follow from this generating function , [ 18 ] and appear as Equations (10) and (23) of Weber (2007), [ 18 ] respectively.
https://en.wikipedia.org/wiki/Romanovski_polynomials
Romen Efimovich Sova ( Russian : Ромен Ефимович Сова ) (5 November 1938 - 22 December 2001) was a Soviet and Ukrainian toxicologist , Corresponding Member of the Ukrainian Ecological Academy of Sciences and Doctor of Medical Sciences. From 1965 he was an associate research fellow of the Kiev Institute of Hygiene and Occupational Diseases, where he received a PhD degree in medical sciences. From 1971, Romen Efimovich's research activities were connected with VNIIGINTOKS (now the L. I. Medved Institute of Ecological Hygiene and Toxicology). Over 26 years of work at the institute, he rose from research assistant to deputy director for scientific work. The main focus of his activity was the methodology of integrated assessment of chemical risks to human health and the environment. He took an active part in the development of new directions in toxicology and health: complex hygienic regulation of pesticides, hygienic regulation of pesticides in soil, and the application of mathematical methods to assess and predict the real risk of accumulation of pesticides in the environment and the human body. He is a co-author of works on environmental hygiene and of a hazard classification of pesticides. As a toxicologist, Romen E. Sova made a significant contribution to the problems of biological standards for laboratory animals and to the methodology and methods of studying the combined, integrated and concurrent effects of chemicals and other factors. Romen Efimovich established the All-Union Center "Dioxin", commenced research on this problem, and developed the first hygienic standards for the most dangerous environmental pollutants. He was an expert of the WHO on the issue of dioxin, an expert from Ukraine on the issue of persistent organic pollutants in the United Nations Environment Programme, and a member of the committee on hygienic regulation of the Ministry of Health of Ukraine. Romen Efimovich supervised 5 Candidates of Medical Sciences and published 6 monographs and more than 230 scientific papers. 
https://en.wikipedia.org/wiki/Romen_Sova
A ROMER Arm is a portable coordinate measuring machine . ROMER, a company acquired by the Hexagon AB group and now part of its Manufacturing Intelligence division, designed the ROMER arm in the 1980s to solve the problem of how to measure large objects such as airplanes and car bodies without moving them to a dedicated measuring laboratory. A coordinate measuring machine precisely measures an object in a 3D coordinate system, often in comparison to a computer-aided design (CAD) model. A portable coordinate measuring machine is usually a manual measuring device, which means that it requires a person to operate it. The arm has 6 or 7 joints and operates in three dimensions: it has 6 degrees of freedom, 3 for rotation and 3 for translation. The physical arrangement of the arm is much like a human arm, with a wrist, forearm, elbow, and so on. ROMER arms are used for industrial measuring tasks where the part to be measured is too large or inconvenient to be moved. The measuring arm is brought to the part, which is possible because of the light weight of the system (less than 20 pounds). The original design for the ROMER arm is based on US patent 3,944,798, filed in 1974 by Homer Eaton, one of ROMER's founders, while working at Eaton Leonard. At that time, the measuring arm was intended solely for the measurement of bent tube geometry. Later, Eaton teamed up with colleague Romain Granger to create ROMER SARL (France) to build portable measuring arms for general-purpose industrial measuring applications. The word ROMER comes from a combination of the two founders' names: Romain Granger and Homer Eaton. The ROMER companies today are part of Hexagon Manufacturing Intelligence. In August 2018, Hexagon Manufacturing Intelligence renamed the product to the Absolute Arm, consigning the name ROMER to the history books.
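An articulated measuring arm of this kind recovers the probe-tip coordinates by composing the transform contributed by each joint and link in sequence. The following sketch illustrates that chained-transform principle only; it is not ROMER firmware, and it simplifies every joint to a rotation about a single axis:

```python
import math

# Hypothetical forward-kinematics sketch for an articulated measuring arm.
# Real arms read 6-7 encoders on differently oriented axes; here every joint
# is simplified to a rotation about the z-axis followed by a rigid link along
# the local x-axis, which is enough to show the chained-transform idea.

def rot_z(theta):
    """4x4 homogeneous rotation about the z-axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def trans_x(d):
    """4x4 homogeneous translation along the local x-axis (link of length d)."""
    return [[1, 0, 0, d], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def tip_position(link_lengths, joint_angles):
    """Compose joint rotation + link translation for each segment in turn."""
    t = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]  # identity
    for length, angle in zip(link_lengths, joint_angles):
        t = matmul(t, rot_z(angle))
        t = matmul(t, trans_x(length))
    return (t[0][3], t[1][3], t[2][3])

# Straight arm: the tip sits at the summed link lengths along x.
print(tip_position([0.5, 0.4, 0.3], [0.0, 0.0, 0.0]))
```

A real arm applies the same composition with each joint's measured encoder angle about its own calibrated axis, which is how the device reports a 3D point wherever the operator touches the probe.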
https://en.wikipedia.org/wiki/Romer_arm
Romulus , Remus , and Khaleesi are genetically modified gray wolves ( Canis lupus ) created by Colossal Biosciences Inc. with the goal of replicating the phenotype of the extinct dire wolf ( Aenocyon dirus ). Romulus and Remus were born on October 1, 2024; Colossal claims that the two males represent the first living examples of the species since its extinction approximately 10,000 years ago. Khaleesi, a female, was born later, on January 30, 2025. Due to significant genetic differences from Aenocyon , the dire wolf genus, Romulus, Remus, and Khaleesi are in fact genetically engineered gray wolves. Scientists rewrote 14 key genes in gray wolf EPC cells to express 20 dire wolf traits, meaning that "no ancient dire wolf DNA was actually spliced into the grey wolf's genome." [ 1 ] The public announcement of their birth in April 2025 attracted significant public attention. [ 2 ] The creation of these modified gray wolves involved analyzing ancient DNA samples from two sources: a 13,000-year-old tooth discovered at Sheriden Cave (an Ice Age archaeological site in Wyandot County , northwestern Ohio ) and a 72,000-year-old ear bone found in American Falls , Idaho . Colossal Biosciences' scientists stated that rather than directly inserting ancient DNA into modern animals, they had identified approximately twenty genetic modifications across fourteen genes that differentiate dire wolves from modern gray wolves. [ 1 ] [ 3 ] [ 4 ] [ 5 ] The scientific team isolated endothelial progenitor cells (EPCs) from gray wolf blood samples, then used CRISPR gene editing to produce several traits which the company claims are found in dire wolves, including a larger body size, wider head, and pale coat color. Only 15 of the 20 DNA changes are directly based on the dire wolf genome; the remaining 5 changes are known to produce a light coat color. [ 6 ] The light coat color was created by inducing loss-of-function in the genes MC1R and MFSD12 . 
[ 7 ] These modified nuclei were transferred into denucleated ova , which developed into embryos in laboratory conditions. From 45 engineered ova, four viable embryos were successfully implanted in surrogate mother dogs. [ 1 ] [ 5 ] The surrogate mothers were selected domestic dog mixes chosen for their general health and size sufficient to accommodate the larger dire wolf pups. One particular surrogate mother, a domestic dog named Skyla, played a crucial role in the project. The pregnancies were continuously monitored with weekly ultrasounds , and all three births occurred via planned caesarean section to minimize complications. After the caesareans, the mother dogs were cared for by the surgical team, recovered, and were reunited with the puppies. [ 1 ] [ 4 ] Born on October 1, 2024, Romulus and Remus are housed in a secured 2,000-acre (810 ha) ecological preserve within the United States surrounded by 10-foot (3.0 m) fencing, at an undisclosed location to protect them from disturbance. The facility includes a smaller 6-acre (2.4 ha) area containing a veterinary clinic , extreme weather shelter, and natural dens. They receive around-the-clock veterinary supervision. [ 1 ] [ 3 ] Romulus and Remus were named after the legendary twin brothers who founded Rome and who, in Roman mythology, were suckled by a she-wolf as infants. As pups, they were photographed resting on the original Iron Throne prop from Game of Thrones , [ 2 ] an allusion to their alleged species' presence in the franchise. Their diet consists of deer , beef , and horse meat , supplemented with organ meats and specialized nutritional supplements. Initially fed pureed meat after weaning, they now receive whole portions that allow them to engage in natural tearing behaviors. The wolves have not been provided with live prey, though staff note they have not observed any interactions with small wildlife that may enter their enclosure. 
[ 1 ] Their genetic modifications resulted in several key physical differences from gray wolves, including a pale coat coloration and distinctive vocalizations, particularly unique howling patterns. Morphologically , the wolves have a larger overall body size with a more "powerful" shoulder structure, wider head shape, larger teeth and jaws, and more muscular legs. Behaviorally, the wolves exhibited natural wolf tendencies from birth, including early howling (beginning at approximately two weeks of age), stalking and hunting behaviors, and a natural wariness around humans. Unlike domesticated canines, they maintain distance from people, including their handlers, and display characteristic wolf retreat behaviors when approached. [ 1 ] [ 3 ] Based on unreviewed genetic research, Colossal claims that dire wolves would have had a pale coat coloration. The pups' white coat results from a coat coloration gene expressed in dogs, chosen as a replacement for the purported original gene, which carries "a risk of blindness and deafness." [ 3 ] By six months of age, the male wolves measured nearly four feet (~122 cm) in length and weighed approximately 36.3 kg (80 lb) each. They are projected to reach six feet (~183 cm) in length and 68 kg (150 lb) at full maturity. [ 1 ] Their sister Khaleesi was born on January 30, 2025. She was named in homage to the Game of Thrones character Daenerys Targaryen . [ 2 ] The three wolves are maintained as a small pack . [ 1 ] Independent experts disputed Colossal Biosciences' claim that these animals are revived dire wolves. The zoologist Philip Seddon and the paleontologist Nic Rawlence of Otago University explained that the animals are genetically modified hybrid gray wolves. Rawlence noted that ancient dire wolf DNA is an extremely fragmentary source for constructing a biological clone and that dire wolves diverged from gray wolves anywhere between 2.5 and 6 million years ago. 
His criticism was likewise directed at the small number of genetic changes (20 in only 14 genes) Colossal administered to the gray wolf genome, suggesting a closer genetic relation to the gray wolf than the company's marketing often acknowledged, and he was concerned that the project sends the wrong message about biodiversity conservation. [ 8 ] The geneticist Adam Boyko of Cornell University stated that the wolves are only functional versions of "dire wolves", not a resurrection of the legitimate species. [ 5 ] Jeremy Austin, Director of the Australian Centre for Ancient DNA, stated that the result was "not a dire wolf under any definition of a species ever", disputing the phenotypic species definition used by Beth Shapiro of Colossal Biosciences and arguing that hundreds of thousands of genetic differences exist between dire and gray wolves. He also questioned whether the purported dire wolves have any ecological place left in the modern world or will merely become zoo animals. [ 9 ] Similar criticisms were expressed by wolf experts including L. David Mech [ 10 ] and Luigi Boitani . [ 11 ] According to Time Magazine , the Mandan, Hidatsa, and Arikara Nation has expressed interest in Colossal Biosciences potentially releasing the modified wolves into a controlled area of the Fort Berthold Reservation in northwestern North Dakota , which currently spans 1,000,000 acres (400,000 ha). The wolves currently live in a 2,000-acre (810 ha) ecological preserve at an undisclosed location in the United States. [ 1 ] Author George R.R. Martin , the creator of Game of Thrones , was invited to see the wolves, and commented, "Maybe I was remembering a past life, when I ran with a pack in the Ice Age. … Whatever the reason, I have to say the rebirth of the direwolf has stirred me as no scientific news has since Neil Armstrong walked on the moon." 
[ 12 ] The IUCN Species Survival Commission Canid Specialist Group officially declared that the three animals are neither dire wolves nor proxies of dire wolves under the IUCN SSC guiding principles on creating proxies of extinct species for conservation benefit. It commented that creating phenotypic proxies does not change the conservation status of an extinct species and may instead threaten extant species such as gray wolves. Since the claimed proxies do not conform with the IUCN SSC guidelines, have no ecological niche left today, and "will not restore ecosystem function", the group concluded that Colossal Biosciences' project "does not contribute to conservation." [ 13 ]
https://en.wikipedia.org/wiki/Romulus,_Remus,_and_Khaleesi
Ronald Rivera (August 22, 1948 – September 3, 2008) was an American activist of Puerto Rican descent, best known for promoting an inexpensive ceramic water filter, developed in Guatemala by the chemist Fernando Mazariegos and used to treat gray water in impoverished communities, and for establishing community-based factories around the world to produce the filters. Rivera was born in the Bronx borough of New York City , [ 1 ] of Puerto Rican parents. He was raised in both New York City and Puerto Rico. Rivera graduated from The World University in San Juan, Puerto Rico . He also studied at the School for International Training. [ 2 ] [ 3 ] Rivera worked with the Peace Corps in Panama and Ecuador , and with Catholic Relief Services in Bolivia . He founded the local consultancy office for the Inter American Foundation in Ecuador, where he worked until 1988, when he moved to Nicaragua . [ 2 ] Rivera first became passionate about ceramics in the early 1970s when he studied in Cuernavaca, Mexico with Paulo Freire and Ivan Illich , who taught that human beings had lost their connection with the earth. Rivera then went to live with an experienced potter and learned the art of ceramics. [ 2 ] [ 3 ] After moving to Nicaragua in the late 1980s during the Contra War , [ 4 ] and reuniting with and eventually marrying his high-school sweetheart, Kathy McBride, Rivera worked for over two decades with potters from rural communities in Nicaragua, helping them to enhance their production methods, including the implementation of a more fuel-efficient kiln developed by Manny Hernandez, a professor at Northern Illinois University . He also worked with potters around the country to develop new designs and to connect to new markets. [ 2 ] He first learned of ceramic pot filters from their inventor, the Guatemalan chemist Fernando Mazariegos . Rivera produced this inexpensive filter developed in Guatemala by Mr. 
Mazariegos from a mix of local terra-cotta clay and sawdust or other combustible materials, such as rice husks. The combustible ingredient, which has been milled and screened, burns out in the firing, leaving a network of fine pores. After firing, the filter is coated with colloidal silver . This combination of fine pore size and the bactericidal properties of colloidal silver produce an effective filter, killing over 98 percent of the contaminants that cause diarrhea , thus dramatically reducing public health problems in the communities that use them to purify potable water . [ 2 ] [ 3 ] He designed a mold for the filter and a special clay press that was operated with a tire jack. [ 5 ] The Family of the Americas Association, a Guatemalan organization, conducted a one-year follow-up study on the initial Mazariegos-developed filter project, concluding that this filter helped to reduce the incidence of diarrhea in participating households by as much as 50 percent. Laboratory testing and field studies have been performed on the filter by various institutions, including MIT , Tulane University , University of Colorado and University of North Carolina . [ 2 ] [ 3 ] Rivera began manufacturing the pots through Potters for Peace in Nicaragua, eventually helping to establish an independent enterprise to produce the filters. [ 6 ] Beginning in 1998, Rivera traveled throughout Latin America, Africa and Asia to establish 30 filter microenterprises in Guatemala , Honduras , Mexico , Cambodia , Bangladesh , Ghana , Nigeria , El Salvador , the Darfur region of Sudan , Myanmar and other countries. These factories have produced over 300,000 filters, and the filters are used by about 1.5 million people to date. An additional 13 filter workshops are scheduled to begin operating by the end of next year. 
[ 2 ] The filter has been cited by the United Nations ' Appropriate Technology Handbook, and tens of thousands of filters have been distributed worldwide by organizations such as the International Federation of the Red Cross and Red Crescent , Doctors Without Borders , UNICEF , Plan International , Project Concern International , International Development Enterprises , Oxfam and USAID . [ 2 ] Rivera wanted to share this Guatemalan invention with the world and posted his experience of manufacturing ceramic pot filters, in painstaking detail, on the Internet. [ 5 ] Ron Rivera, Lynette Yetter, Jeff Rogers and Reid Harvey co-authored the paper "A Sustainable Ceramic Water Filter for Household Purification," which Lynette Yetter presented at an NSF conference in 2000. [ 7 ] Rivera's filters were included in an exhibition at the Cooper-Hewitt National Design Museum called " Design for the Other 90 Percent ". [ 2 ] [ 3 ] Rivera died in Managua, Nicaragua on September 3, 2008, after contracting falciparum malaria while working in Nigeria; during his stay in Nigeria he had worked tirelessly to set up a ceramic water filter factory. [ 8 ] A memorial service held in Managua on September 6 at the Universidad Centroamericana was attended by hundreds, including scores of local potters. [ 2 ]
https://en.wikipedia.org/wiki/Ron_Rivera_(public_health)
Ronald James Gillespie , CM FRSC FRS [ 1 ] (August 21, 1924 – February 26, 2021) [ 2 ] was a British chemist specializing in the field of molecular geometry , who moved to Canada after accepting an offer that included his own laboratory with new equipment, which post-World War II Britain could not provide. He was responsible for establishing inorganic chemistry education in Canada. He was educated at the University of London , obtaining a B.Sc. in 1945, a Ph.D. in 1949 and a D.Sc. in 1957. He was assistant lecturer and then lecturer in the Department of Chemistry at University College London in England from 1950 to 1958. He moved to McMaster University , Hamilton, Ontario , Canada, in 1958. He was elected a Fellow of the Royal Society of Canada in 1965 and a Fellow of the Royal Society of London in 1977, and was made a member of the Order of Canada in 2007. [ 3 ] Gillespie died on February 26, 2021, at the age of ninety-six in the town of Dundas, Ontario . Gillespie did extensive work expanding the Valence Shell Electron Pair Repulsion (VSEPR) model of molecular geometry, which he developed with Ronald Nyholm (and which is thus also known as the Gillespie–Nyholm theory), and setting out its rules. He wrote several books on VSEPR. With other workers he developed LCP theory (ligand close packing theory), which for some molecules allows geometry to be predicted on the basis of ligand–ligand repulsions. Gillespie also did extensive work on interpreting the covalent radius of fluorine . The covalent radius of most atoms is found by taking half the length of a single bond between two like atoms in a neutral molecule. Calculating the covalent radius of fluorine is more difficult because of its high electronegativity and small atomic radius. 
Gillespie's work on the bond length of fluorine focuses on theoretically determining the covalent radius of fluorine by examining its covalent radius when it is attached to several different atoms. [ 4 ]
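The VSEPR model that Gillespie championed reduces, in its simplest form, to a counting rule: tally the electron domains around a central atom (bonding groups plus lone pairs), and the arrangement that minimizes electron-pair repulsion fixes the molecular geometry. A minimal lookup-table sketch of that rule, covering only the common cases:

```python
# Illustrative sketch of the basic VSEPR counting rule: electron
# domains (bonding groups plus lone pairs) around a central atom
# repel each other, and the arrangement minimizing repulsion fixes
# the molecular geometry. The table covers common cases only and
# ignores refinements such as multiple-bond and electronegativity
# effects treated in Gillespie's later work.

VSEPR_GEOMETRY = {
    # (bonding domains, lone pairs): molecular geometry
    (2, 0): "linear",
    (3, 0): "trigonal planar",
    (2, 1): "bent",
    (4, 0): "tetrahedral",
    (3, 1): "trigonal pyramidal",
    (2, 2): "bent",
    (5, 0): "trigonal bipyramidal",
    (4, 1): "seesaw",
    (3, 2): "T-shaped",
    (6, 0): "octahedral",
    (5, 1): "square pyramidal",
    (4, 2): "square planar",
}

def predict_geometry(bonding_domains, lone_pairs):
    """Return the VSEPR molecular geometry for a central atom."""
    return VSEPR_GEOMETRY[(bonding_domains, lone_pairs)]

print(predict_geometry(4, 0))  # methane, CH4 -> tetrahedral
print(predict_geometry(2, 2))  # water, H2O -> bent
print(predict_geometry(3, 1))  # ammonia, NH3 -> trigonal pyramidal
```

The table form makes the model's appeal clear: with nothing but domain counts, it predicts the gross shape of most main-group molecules.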
https://en.wikipedia.org/wiki/Ronald_Gillespie
Leading Aircraftman Ronald George Maddison (23 January 1933 – 6 May 1953) was a twenty-year-old Royal Air Force mechanic who was unlawfully killed as the result of exposure to nerve agents while acting as a voluntary test subject at Porton Down , in Wiltshire , England. After substantial controversy, his death was the subject of an inquest 51 years after the event. Porton Down had been testing sarin on humans since October 1951, but the first adverse reaction was not recorded until February 1953. An even more severe reaction occurred on 27 April when one of six volunteers, a man named John Patrick Kelly, was exposed to 300 milligrams of sarin and fell into a coma but subsequently recovered. This prompted a reduction in the dose used in this series of experiments to 200 mg. [ 1 ] Along with other servicemen, Maddison was offered 15 shillings and a three-day leave pass for taking part in the experiments. He had planned to use the money to purchase an engagement ring for his girlfriend, Mary Pyle. On the day he died, Ronald Maddison entered a gas chamber at 10:00 a.m. along with five other test subjects. They were each to have an identical experiment performed on them which was part of a series of experiments to determine the lethal dose of sarin when delivered to bare or battle dress -covered skin. [ 2 ] The method used was to measure the change in active acetylcholinesterase in red blood cells at small dose levels and extrapolate this to work out what the effect of larger doses would be. [ 2 ] Sarin is extremely poisonous because it attacks the nervous system by blocking the activity of cholinesterase enzymes present in it, including acetylcholinesterase. The method was practical because red blood cell membranes contain forms of acetylcholinesterase. [ citation needed ] The participants were wearing respirators , with woollen hats and oversize overalls but no proper protective clothing. [ 1 ] Two technicians were also present to carry out the experiment. 
[ 3 ] The respirators were tested by exposing the men to tear gas in the chamber before the experiment started. [ 3 ] Maddison was the fourth to have the drops applied: at 10:17, twenty 10 mg drops of sarin were applied to the two layers of cloth used in uniforms, serge and flannel , which had been taped [ 3 ] to the inside of his left forearm. [ 1 ] After twenty minutes, Maddison began to sweat and complain that he did not feel well. [ 1 ] One eyewitness reported at the second inquest that he slumped over the table. [ 3 ] The contaminated cloth was removed and he left the chamber, walking (perhaps with help) [ 3 ] about 30 metres to a bench. [ 1 ] An ambulance was called; shortly afterwards Maddison complained of deafness, collapsed, and began gasping for breath, and after witnessing an asthma -like attack and convulsions the scientists injected him with atropine . An ambulance took him to the site's local medical facility, where he arrived at 10:47. Attempts were made to resuscitate him using oxygen, further injections of atropine and anacardone , and finally an injection of adrenaline into his heart just after 11 am. [ 1 ] Although he had died at 11 am, less than 45 minutes after being exposed to the poison, [ 4 ] he was not formally pronounced dead until 1:30 pm. [ 1 ] The post mortem was carried out at Salisbury Infirmary. [ 5 ] On 8 and 16 May 1953, an inquest was held in secret before the Wiltshire Coroner , Harold Dale, who returned a verdict of misadventure . [ 6 ] Maddison's father was permitted to attend the inquest but warned that he would be prosecuted under the Official Secrets Act if he informed anyone, including his family, of the circumstances surrounding his son's death. [ 7 ] An internal court of inquiry at Porton Down found that Maddison had died because of "personal idiosyncrasy", either because he was unusually sensitive to the poison or because his skin absorbed it faster than that of other test subjects. 
[ 7 ] The Ministry of Defence delivered Ronald Maddison's body in a steel coffin with the lid bolted down to maintain secrecy. [ 8 ] A large number of samples of body parts, including brain and spinal cord tissue, skin, muscle, stomach, lung, and gut, were retained without his family's knowledge or permission and used over several years in other toxicology experiments. [ 5 ] Maddison's father, John Maddison, was paid £40 to cover the funeral expenses, made up of £20 for black clothes, £16 for undertaker 's fees and £4 for catering. [ 7 ] Operation Antler was a police investigation from 1999 to 2004 into Maddison's death, and into allegations that other British chemical-weapons test participants between 1939 and 1989 had not been properly informed and may have been misled about the experiments and their risks. [ 9 ] As a result of the investigation, and campaigning by Ronald Maddison's family, the Lord Chief Justice, Lord Woolf , sitting with Mrs Justice Hallett in the High Court , quashed the original inquest verdict in November 2002. [ 8 ] A new inquest opened on 5 May 2004 [ 10 ] and was the longest held in England and Wales up to that time, hearing around 100 witnesses over 50 days. [ 11 ] On 15 November 2004, the inquest jury returned the verdict that Ronald Maddison had been unlawfully killed . [ 4 ] [ 9 ] The Ministry of Defence applied for a judicial review to quash the unlawful-killing verdict, although the government announced that, whatever the outcome, it would look "favourably" upon paying compensation to Maddison's family. In February 2006 an agreement was struck within the framework of the judicial review whereby the MoD accepted the inquest verdict on the grounds that Maddison had died through "gross negligence in the planning and conduct of the experiment". [ 11 ] The MoD did not accept that there was sufficient evidence to conclude that Maddison had not given his informed consent to take part. 
[ 12 ] Ronald Maddison's relatives received a total of £100,000 in compensation from the Ministry of Defence. [ 13 ] The Crown Prosecution Service announced in 2003 that there was insufficient evidence to charge anyone responsible for the tests, but that they would review this decision following Maddison's second inquest. In June 2006, they confirmed that there would be no prosecutions. [ 14 ]
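The dose-extrapolation method described earlier, measuring the drop in red-cell acetylcholinesterase activity at small doses and extrapolating to estimate the effect of larger ones, can be illustrated with a simple linear fit. Every number below is invented purely for illustration; none are Porton Down data, and a real dose-response relationship need not be linear.

```python
# Hypothetical sketch of the extrapolation approach described above:
# fit a line through enzyme-inhibition measurements taken at small
# doses, then extrapolate to a larger dose. All figures are invented
# for illustration; they are not Porton Down data, and linearity is
# an assumption made here for simplicity.

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    b = num / den
    a = mean_y - b * mean_x
    return a, b

# Invented small-dose measurements: dose in mg, % enzyme inhibition.
doses = [10, 20, 30, 40]
inhibition = [5.0, 9.8, 15.1, 20.2]

a, b = fit_line(doses, inhibition)
# Extrapolate to a larger dose of 100 mg:
print(round(a + b * 100, 1))  # -> 50.7
```

The danger of the real experiment lies exactly where this sketch is weakest: extrapolation assumes the fitted trend continues, and individual responses (as the inquiry's "personal idiosyncrasy" finding noted) can depart sharply from it.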
https://en.wikipedia.org/wiki/Ronald_Maddison
Sir Ronald Sydney Nyholm (29 January 1917 – 4 December 1971) was an Australian chemist who was a leading figure in inorganic chemistry in the 1950s and 1960s. He was born on 29 January 1917, the fourth in a family of six children. Nyholm's father, Eric Edward Nyholm (1878–1932), was a railway guard. [ citation needed ] Nyholm's paternal grandfather, Erik Nyholm (1850–1887), was a coppersmith born in Nykarleby in the Swedish-speaking part of Finland , who migrated to Adelaide in 1873. [ citation needed ] Ronald Nyholm valued his Finnish roots and was particularly proud of his election in 1959 as a Corresponding Member of the Finnish Chemical Society. [ citation needed ] Hailing from the mining town of Broken Hill , New South Wales , he was exposed early to the role of inorganic chemistry. [ 1 ] [ 3 ] He attended Burke Ward Public School and Broken Hill High School. Nyholm married Maureen Richardson of Epping , a suburb of Sydney , NSW , at the parish church in Kensington, London on 6 August 1948. [ 4 ] After graduating from Broken Hill High School, he attended the University of Sydney (BSc, 1938; MSc, 1942) and then University College London (PhD, 1950, supervised by Sir Christopher Ingold ; D.Sc., 1953). [ 2 ] On graduation Nyholm became a high-school teacher, a contractual requirement of his scholarship to university. He then joined the Eveready Battery Co as a chemist, where he was frustrated that his work on longer-lasting batteries was not well received by the marketing department. He then returned to teaching, now in tertiary education. During World War II he was a Gas Officer, as the civil defence forces were very concerned that a likely Japanese invasion would include gas attacks. He was lecturer, then senior lecturer, in Chemistry at Sydney Technical College from 1940 to 1951, although on leave in London from 1947. From 1952 to 1954 he was associate professor of Inorganic Chemistry at the New South Wales University of Technology. 
In 1954 he was elected President of the Royal Society of New South Wales . In 1955, Nyholm returned to England as Professor of Chemistry at University College London, where he worked until his death on 4 December 1971 as a result of a motorcar accident on the outskirts of Cambridge, England. [ 1 ] [ 5 ] Nyholm's research in inorganic chemistry was primarily concerned with the preparation of transition metal compounds, particularly those involving organo-arsenic ligands. His interest in organoarsenic chemistry was fostered at the University of Sydney by George Joseph Burrows (1888–1950). Using the strong chelating ligand diars , Nyholm demonstrated a range of oxidation states and coordination numbers for several of the transition metals. [ 6 ] Nyholm noted that the term 'unusual valence state' had an 'historical, but not chemical significance': 'The definition of usual oxidation state refers to oxidation states that are stable in environments made up of those chemical species that were common in classical inorganic compounds, e.g. oxides, water and other simple oxygen donors, the halogens, excluding fluorine, and sulphur. Nowadays, however, such species constitute only a minority of the vast number of donor atoms and ligands that can be attached to a metal.' After joining Sydney Technical College in 1940, Nyholm formed a close personal friendship with Francis (Franky) Dwyer and they collaborated in their research. Despite heavy teaching loads, between 1942 and 1947 they reported complexes of rhodium , iridium , and osmium in seventeen papers in the Journal and Proceedings of the Royal Society of New South Wales . [ 7 ] One of Nyholm's early successes was the preparation of an octahedral complex of trivalent nickel, [Ni( diars ) 2 Cl 2 ]Cl, by aerial oxidation of the red salt of bivalent nickel, [Ni( diars ) 2 ]Cl 2 . 
[ 8 ] He also described stable complexes of quadrivalent nickel, such as the deep blue [Ni( diars ) 2 Cl 2 ] [ClO 4 ] 2 , prepared by nitric acid oxidation of the trivalent complex. [ 9 ] This stabilisation of higher oxidation states became significant in the Nyholm–Rail reaction, in which the ditertiary arsine diars undergoes a condensation reaction to the tritertiary arsine triars . Nyholm prepared examples of divalent octahedral complexes of the type M( diars ) 2 X 2 , where X is Cl, Br or I, and M is Cr, Mn, Fe, Co, Ni, Mo, Tc, Ru, Pd, W, Re, Os, or Pt. Many of these divalent complexes are sensitive to aerial oxidation; the chromium complex is oxidized by water. Indeed, previous attempts to prepare Cr( diars ) 2 X 2 had failed. The chromium compounds were eventually synthesized by his co-worker Anthony Nicholl Rail only a month before Nyholm's death, using rigorous air-free techniques . [ 10 ] Together with Professor Ronald Gillespie , Nyholm developed the VSEPR (valence shell electron pair repulsion) theory for the simple prediction of molecular geometry . This theory emphasized classical pictures of bonding, adapted to include features of quantum theory , focusing on electron clouds of varying density within a probability envelope. In his inaugural lecture as professor of chemistry at University College London, Nyholm spoke of his concern for the teaching of chemistry. [ 11 ] [ 12 ] In 1957 Nyholm organized the first of an annual series of Summer Schools at University College on new aspects of chemical knowledge and theory, and demonstrations of new equipment. In the early sixties the Nuffield Foundation , at least partly as a result of Nyholm's influence, established the Science Teaching project, of which Nyholm was the first Chairman of the Chemistry Consultative Committee. This program led to the development of experiential GCE courses that emphasized the process of chemistry, rather than the recall of chemical facts, and explored the role of chemistry in society. 
In 1971 Nyholm published an article entitled 'Education for Change' in which he differentiated between education and training as they apply to chemistry. [ 13 ] He defined education as 'a process in which a person receives a training for a full life in a rapidly changing modern society, carried out in such a manner as will ensure the maximum development of the individual personality'. He was not a person who placed too much emphasis on fact-burdened and fact-tested learning such as in the National Curriculum developments in England in the nineteen-nineties. Nyholm also set out what training for a full life should include. [ citation needed ] Nyholm was associated with industry all of his life. One of his earliest positions was as a chemist at Eveready Batteries in Sydney. The application of science to useful products was of great importance to him, and he is purported to have admired the DuPont slogan "Better things for better living through chemistry". He was an active consultant to a number of companies, including ICI and Johnson Matthey in the UK and DuPont in the US. The Nyholm Prize for Inorganic Chemistry [ 14 ] and the Nyholm Prize for Education , [ 15 ] founded by the Chemical Society in 1973, are now awarded biennially by the Royal Society of Chemistry . The mineral nyholmite is named after Nyholm. [ 16 ] It was discovered in Broken Hill in 2009 and its structure was elucidated by Elliot et al. [ 17 ]
https://en.wikipedia.org/wiki/Ronald_Sydney_Nyholm
Ronald Wayne " Ron " Davis (born July 17, 1941) is professor of biochemistry and genetics , and director of the Stanford Genome Technology Center at Stanford University . [ 4 ] Davis is a researcher in biotechnology and molecular genetics, particularly active in human and yeast genomics and the development of new technologies in genomics, with over 64 biotechnology patents . [ 5 ] In 2013, it was said of Davis that "A substantial number of the major genetic advances of the past 20 years can be traced back to Davis in some way." [ 6 ] Since his son fell severely ill with myalgic encephalomyelitis/chronic fatigue syndrome Davis has focused his research efforts into the illness. [ 7 ] After completing his PhD at Caltech and a postdoctoral fellowship at Harvard University working with Jim Watson , Davis joined the faculty of Stanford 's department of biochemistry in 1972. [ 8 ] He became an associate professor in 1980, full professor in 1980, and joined the department of genetics as a professor in 1990. He became director of the Stanford Genome Technology Center in 1994. He was elected a member of the National Academy of Sciences in 1983. [ 9 ] [ 10 ] Davis developed the R-loop technique of electron microscopy for mapping coding RNAs which led to the discovery of RNA splicing. [ 11 ] With Janet E. Mertz , Davis was the first to demonstrate the use of restriction endonucleases for joining DNA fragments. [ 12 ] Davis collaborated in the development of the first DNA microarray for gene expression profiling with Patrick O. Brown , [ 13 ] and the gene expression profile of the first complete eukaryotic genome ( Saccharomyces cerevisiae ). [ 14 ] Davis, with David Botstein , Mark Skolnick , and Ray White developed the method [ 15 ] for constructing a genetic linkage map using restriction fragment length polymorphisms that enabled and led to the Human Genome Project . 
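The idea behind the restriction-enzyme work and the RFLP linkage maps can be sketched simply: a restriction endonuclease cuts DNA wherever its fixed recognition sequence occurs, so a sequence variant that creates or destroys a site changes the lengths of the resulting fragments, and those length differences serve as heritable genetic markers. A minimal illustration (EcoRI's GAATTC recognition sequence is real; the DNA strings, and the simplification of cutting at the start of the site, are invented for this sketch):

```python
# Sketch of the principle behind restriction fragment length
# polymorphisms (RFLPs): a restriction enzyme cuts DNA at every
# occurrence of its recognition sequence, so a single-base variant
# that destroys a site changes the fragment-length pattern. EcoRI
# really recognizes GAATTC; the DNA sequences below are invented,
# and cutting at the start of the site is a simplification.

def fragment_lengths(dna, site="GAATTC"):
    """Cut at each occurrence of the recognition site and return
    the resulting fragment lengths."""
    cuts = []
    i = dna.find(site)
    while i != -1:
        cuts.append(i)
        i = dna.find(site, i + 1)
    bounds = [0] + cuts + [len(dna)]
    return [bounds[k + 1] - bounds[k] for k in range(len(bounds) - 1)]

allele_a = "ATCGGAATTCGGTTACCAGAATTCTTAA"   # two EcoRI sites
allele_b = "ATCGGAATTCGGTTACCAGAACTCTTAA"   # variant destroys one site

print(fragment_lengths(allele_a))  # [4, 14, 10]
print(fragment_lengths(allele_b))  # [4, 24]
```

Because the two alleles give distinguishable band patterns on a gel, such length differences can be followed through families, which is what made RFLPs usable as markers for the genetic linkage maps described above.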
He and his colleagues submitted a proposal to NIH to map the human genome in 1979; that proposal was turned down as being too ambitious. [ 8 ] The Stanford Genome Technology Center was included in the Human Genome Project that began in 1990 and was completed in 2003. In 2013, Davis founded the Stanford Chronic Fatigue Syndrome Research Center (now called ME/CFS Collaborative Research Center). [ 7 ] In 2013 Davis was named, alongside Elon Musk and Jeff Bezos, as one of today's nine greatest innovators by The Atlantic : "A substantial number of the major genetic advances of the past 20 years can be traced back to Davis in some way." [ 6 ] He has won recognition for his contributions to genetic research from many groups, as early as 1976 and as recently as 2015, from one of his alumni colleges and from the National Academy of Sciences . In 2015, he received the Precision Medicine World Conference Luminary Award for his development of “R-loop Technique of Electron Microscopy”. [ 1 ] In 2013, he received the Warren Alpert Foundation Prize . He received the Gruber Prize in Genetics in 2011, which noted among other achievements, two landmark papers, one in 1977 concerning genome editing and another in 1980 which "helped launch the field of genomics." [ 2 ] In 2007, California Institute of Technology gave him its Distinguished Alumni Award. In 2005, Davis received the Dickson Prize in Medicine . In 2004, he received the Lifetime Achievement Award from the Genetics Society of America . The National Academy of Sciences (NAS) gave him the 1982 NAS Award in Molecular Biology . In 1976, he received the Eli Lilly Award in Microbiology and Immunology . Dr. Davis is the director of the Scientific Advisory Board at the Open Medicine Foundation , a non-profit organization, whose goal is to fund and initiate research into chronic complex diseases. 
[ 16 ] Presently the foundation is invested in The End ME/CFS Project, which aims to fast-track research for a cure for myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS). [ 17 ] In April 2019, a notable result was reported: a test of blood without red cells (white cells in plasma) distinguished ME/CFS patients from healthy people with 100% accuracy in a small sample of 20 patients and 20 healthy people. [ 18 ] [ 19 ] [ 20 ] The test used a biotechnological device designed by Davis and his team, which is called the "nanoneedle". As described in Stat News , “The small device that Davis and his colleagues created was originally developed to detect changes in electrical signals when cancer cells were exposed to different treatments.” [ 19 ] It was used to test cells of ME/CFS patients and, consistent with the researchers' initial hypothesis, proved useful in distinguishing patients from healthy people. People with this disease are described as not using energy well and taking a long time to recover from energy expenditure; “the researchers decided to mimic this by stressing cells from 20 healthy controls and 20 ME/CFS patients by exposing them to increased levels of salt.” [ 19 ] Rahim Esfandyarpour, lead author of the paper, said “When they [cells from ME/CFS patients] face this new environment, their reaction is different than the reaction of healthy cells.” [ 19 ] Davis's research became more urgent and important after Dr. Anthony Fauci warned that some COVID-19 survivors showed symptoms in line with those of ME/CFS. According to Fauci, "a considerable number" of COVID-19 survivors struggle with extreme exhaustion, memory lapses, and cognitive difficulties many months after they have been officially cleared as recovered. 
Davis is part of a high-level interagency work and research group with the Centers for Disease Control and Prevention (CDC), National Institutes of Health , the Veterans Administration , and the Department of Defense looking at the long-term consequences of COVID-19 and Long COVID . [ 7 ] Davis married Janet Dafoe in July 1969. [ 3 ] [ 8 ] Their son, Whitney Dafoe was born in 1983, followed by their daughter Ashley Davis. [ 3 ] Whitney Dafoe became ill with severe ME/CFS around 2009, [ 8 ] declining from active and healthy in his career as a photographer to housebound, and by 2015 bed bound from this disease, unable to tolerate sounds and light, unable to do much at all, and eventually unable to eat, drink or speak. [ 21 ] [ 22 ] As his endurance decreased, Dafoe moved back home in May 2011. [ 8 ] His mother cut her work as a clinical psychologist to five hours a week to spend full time on his daily care as he continued declining in function, [ 8 ] while Davis continues his research career and helps with his son's daily care. Dafoe's need for treatment is the motivation for Davis to direct his medical and scientific research efforts toward this disease; he dropped all other projects in hand before his son became so ill. [ 17 ] [ 8 ]
https://en.wikipedia.org/wiki/Ronald_W._Davis
Ronchigram (after Italian physicist Vasco Ronchi [ˈroŋki] [ 1 ] [ 2 ] ) is the convergent beam diffraction pattern [ 3 ] of a known object with features comparable to the diffracting wavelength. In the case of electron Ronchigrams amorphous materials are used. The structure of the Ronchigram encodes information about the aberration phase field across the objective aperture. [ 4 ] As such, Ronchigrams have become increasingly important with the invention of aberration corrected scanning transmission electron microscopy . [ 5 ]
https://en.wikipedia.org/wiki/Ronchigram
Rongalite is a chemical compound with the molecular formula Na + [HOCH 2 SO 2 ] − . This salt has many additional names, including Rongalit , sodium hydroxymethylsulfinate , sodium formaldehyde sulfoxylate , and Bruggolite . It is listed in the European Cosmetics Directive as sodium oxymethylene sulfoxylate ( INCI ). It is water-soluble and generally sold as the dihydrate. The compound and its derivatives are widely used in the dye industry. [ 1 ] The structure of this salt has been confirmed by X-ray crystallography . [ 2 ] Although available commercially, the salt can be prepared from sodium dithionite and formaldehyde : Na 2 S 2 O 4 + 2 CH 2 O + H 2 O → NaHOCH 2 SO 2 + NaHOCH 2 SO 3 . This reaction proceeds quantitatively, such that dithionite can be determined by its conversion to Rongalite, which is far less O 2 -sensitive and thus easier to handle. The hydroxymethanesulfinate ion is unstable in solution towards decomposition to formaldehyde and sulfite. Addition of at least one equivalent of formaldehyde pushes the equilibrium towards the side of the adduct and reacts further to give the bis -(hydroxymethyl)sulfone. Such solutions are shelf-stable indefinitely. Sodium hydroxymethanesulfinate was originally developed in the early 20th century for the textile industry as a shelf-stable source of sulfoxylate ion, where the latter can be generated at will. In use, when sodium hydroxymethanesulfinate is made acidic, the reducing sulfoxylate ion and formaldehyde are released in equimolar amounts. For safety reasons the generation of formaldehyde must be taken into consideration when used industrially. NaHOCH 2 SO 2 can essentially be considered to be a source of SO 2 2− . As such it is used both as a reducing agent and as a reagent to introduce SO 2 groups into organic molecules. Treatment of elemental Se and Te with NaHOCH 2 SO 2 gives solutions containing the corresponding Na 2 Se x and Na 2 Te x , where x is approximately 2. As a nucleophile, NaHOCH 2 SO 2 reacts with alkylating agents to give sulfones . 
Occasionally, alkylation also occurs at oxygen; thus xylylene dibromide gives both the sulfone and the isomeric sulfinate ester. The original use of the compound was as an industrial bleaching agent and as a reducing agent for vat dyeing . [ 1 ] Another large-scale use is as a reducing agent in redox-initiator systems for emulsion polymerization. One of the typical redox pair examples is t-butyl peroxide. A niche use is as a water conditioner for aquaria, as it rapidly reduces chlorine and chloramine and reacts with ammonia to form the innocuous aminomethylsulfinate ion. [ 3 ] It is also used as an antioxidant in pharmaceutical formulation. The compound has been used increasingly in commercial cosmetic hair dye colour removers despite the generation of formaldehyde , a known human carcinogen . It has a variety of specialized applications in organic synthesis . [ 4 ] [ 5 ] The zinc complex Zn(HOCH 2 SO 2 ) 2 is marketed under the trademarks Decroline, Decolin, and Safolin. This compound is an additive in polymers and textiles. [ 6 ] Sodium hydroxymethanesulfinate is called Rongalite C. Calcium hydroxymethanesulfinate is called Rongalite H.
https://en.wikipedia.org/wiki/Rongalite
Roof pitch is the steepness of a roof expressed as a ratio of inch(es) rise per horizontal foot (or their metric equivalent), or as the angle in degrees its surface deviates from the horizontal. A flat roof has a pitch of zero in either instance; all other roofs are pitched . The pitch of a roof is expressed as a fraction, with the vertical rise from the top of the wall plates to the ridge as the numerator, and the horizontal span between the wall plates as the denominator. Regardless of the units used, the fraction is simplified to its lowest terms and understood as a ratio. While the terms pitch and slope are sometimes used interchangeably, they refer to distinct concepts in roof geometry. Pitch is defined as the ratio of the total vertical rise to the total horizontal span of a roof, whereas slope is defined as the ratio of the rise to the run (half the span), typically standardized to a fixed unit such as 12 inches in imperial units or 1 meter in the metric system . For example, a roof that rises 6 inches for every 12 inches of horizontal run has a slope of 6:12. A common misconception is that the pitch of a roof is what is displayed on a framing square . In fact, the tables and markings on a framing square represent slope, not pitch. These values are based on a standard run of 12 inches (or 1 meter in metric systems) and provide rise-per-unit-run information, which is essential for calculating rafter lengths, plumb cuts, and other framing details. The framing square does not indicate pitch, even though it is sometimes mistakenly described that way. To convert pitch to slope in imperial units, the pitch is multiplied by 24, yielding the equivalent slope in rise per 12 inches of run. For example, a pitch of 1⁄6 corresponds to a slope of 4:12 (1⁄6 × 24 = 4). In the metric system, where slope is expressed as rise per meter of run, the pitch is multiplied by 2 to obtain the rise over a 1-meter run. 
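The pitch-to-slope conversions described above can be sketched in a few lines of Python (a minimal sketch; the function names are illustrative, and pitch is assumed to be given as rise over span):

```python
from fractions import Fraction

def pitch_to_slope_imperial(pitch):
    """Slope as rise per 12 inches of run. The run is half the span, so
    the slope ratio is 2 * pitch per unit of run, i.e. pitch * 24 when
    expressed over a standard 12-inch run."""
    return pitch * 24

def pitch_to_slope_metric_mm(pitch):
    """Rise in millimetres per 1 metre (1000 mm) of run."""
    return float(pitch * 2 * 1000)

# A 1/6 pitch is a 4:12 slope, or about 333 mm of rise per metre of run.
print(pitch_to_slope_imperial(Fraction(1, 6)))          # 4
print(round(pitch_to_slope_metric_mm(Fraction(1, 6))))  # 333
```

Using `Fraction` keeps the pitch exact, mirroring the convention that the pitch fraction is simplified to its lowest terms before conversion.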
For instance, a pitch of 1⁄6 would result in a slope of approximately 333 mm per meter (1⁄6 × 2 = 1⁄3 m ≈ 333 mm). Considerations involved in selecting a roof pitch include availability and cost of materials, aesthetics, ease or difficulty of construction, climatic factors such as wind and potential snow load, [ 1 ] and local building codes. Different applications require different pitches. In Canada, the NBC [ 2 ] lays out requirements to allow for ranges of roof slopes. The NBC defines a low slope as less than 1 in 3 (4/12), while normal slopes are 1 in 3 (4/12) or greater. For each slope category, there are specific codes that must be followed. These codes also contain equations with variables that need to be replaced with values from a 774-row, 16-column table. Each entry in the table corresponds to a geographical location within Canada and provides location-specific weather averages and 1-in-50-year extremes (such as rain, snow, and wind), as well as values to prevent and control fire spread, moisture index, and degree days below 18°C (which are important for concrete applications). These values must then be substituted into the relevant variables of the equations in the building code, ensuring the structure is built to withstand local environmental conditions and last over time. Carpenters framing roofs for buildings or homes typically round their calculations to three decimal places. The smallest fraction of an inch used in framing is a 16th (0.0625"), which is rounded to 0.063". The mathematical operations involved in framing equations are minimal, so rounding to three decimal places results in a solution that is accurate to within 1/16th of an inch. Historically, roof pitch was designated in two other ways: a ratio of the ridge height to the width of the building (span) [ 3 ] and as a ratio of the rafter length to the width of the building. [ 4 ] Commonly used roof pitches were given names such as:
https://en.wikipedia.org/wiki/Roof_pitch
A roof seamer is a portable roll forming machine that is used to install mechanically seamed structural standing-seam metal roof panels, as part of an overall metal construction building envelope system. The machine is small and portable enough to be handled by an operator on top of a roof. The machine is applied to the overlapping area where two parallel roof panels meet. The action of the machine bends the two panels together to form a joint that has weather-tight qualities superior to other types of roof systems and cladding. Roof seamers have commonly been developed as an afterthought: because a seamer depends on the metal roof system being used, its development has been secondary to that of the roof panel. The roof seamer replaced a manual process and the hand tools of the past. A hammer and a small anvil were used for hemming and seaming roof panels together at the edge where one panel meets the next in sequence. In 1976, a German immigrant and inventor, Ewald Stellrecht, [ 1 ] helped develop an early version of a metal roof panel portable roll forming machine in Exton, PA. From this, a version of the roof seamer was also created. Since that time, great strides and innovations have been made in the development of roof seaming machines. Also, in the 1970s, Butler Manufacturing developed and released a proprietary roof system that featured the use of an electric roof seamer, dubbed the Roof Runner®, along with hand tools and an operating platform. [ 2 ] Many developments have been made since that time to make roof seamers lighter, faster, and more user-friendly. In 1989, Developmental Industries refocused the niche market by developing a line of roof seamers that were universal to many different panel manufacturers' products and were available to rent by the end-user. [ 3 ] Traditionally, purchasing a roof seamer meant that it would work with one specific roof panel, manufactured by a specific roof panel manufacturer. 
The option of renting allowed builders and installers to buy from different sources at greatly reduced cost, making metal roofing a more accessible option for many who would not have considered it before. Today roof seamers are used around the world. As sustainable building products have risen in popularity in recent years, the need for roof seaming tools has also increased. Most roof seaming machines can have a life expectancy of 20 or more years, if proper maintenance and care are exercised. Many variables exist when using a roof seamer that may affect the final product outcome. All of the following variables should be considered and decided on during the design process of the building: Traditionally, roof seamers are powered by electric motors. Depending on the operator's location, either 120-volt or 240-volt power may be required. On most construction sites, either temporary electrical power is supplied or power is offered by an electric generator . This gives the operator the flexibility to take the power source onto a roof with them instead of using extension cords, which can degrade the power supply and possibly damage the motor of the roof seamer. While simple in concept, effective use of a roof seamer requires a trained operator. Training should be practical and should cover on-site troubleshooting. While classroom and practical training are options to learn how to operate a roof seamer, on-the-job training is recommended as the most effective method. Manuals, videos, and field guides also support training. In all cases, training should be completed before operating alone with a roof seamer to teach proper preventive maintenance steps, simple adjustments and troubleshooting in the event of a machine problem. In 2015, the Metal Construction Association published a "best practices" guide for proper use and operation of roof seaming tools. 
[ 4 ] As with any tool, proper maintenance will increase the usefulness and life expectancy. Proper maintenance extends beyond the roof seamer, to the working surface on the roof. Before operating the roof seamer, ensure that the roof panel and seam are clean and clear of debris that could mark or gouge the forming dies. During operation, check lubrication points and perform other recommended maintenance steps. In addition, most manufacturers will recommend scheduled service on an annual basis to ensure internal components are not worn or damaged. In conjunction with the roof seaming machine, an array of hand tools is used. The tool most commonly required when operating a roof seamer is a "hand crimper". The hand crimper is used to "flat form" the panel seams into the appropriate configuration to prepare the seam for the roof seamer to be applied. Other common tools are snips , nibblers , and shears. There are numerous professional and trade organizations that support metal roofing, metal construction and the core market where roof seamers are used. The Metal Roofing Alliance (MRA), Metal Construction Association (MCA) , Metal Buildings Manufacturers Association (MBMA), the Metal Buildings Contractors and Erectors Association (MBCEA), and the National Roofing Contractors Association (NRCA) are just a few. In addition, many distributors and suppliers offer resources and support documentation for their particular product offerings.
https://en.wikipedia.org/wiki/Roof_seamer
A roofing filter is a type of filter used in a HF radio receiver that limits the passband in the early stages of the receiver electronics. It blocks strong signals outside the receive channel which can overload following amplifier and mixer stages. The roofing filter is usually found after the first receiver mixer (which normally contains an amplifier) to limit the first intermediate frequency (IF) stage's passband . It prevents overloading later amplifier stages, which would cause nonlinearity ("distortion") or clipping ("buzz") even if the overload occurred on frequencies whose signal is not heard directly. Roofing filters are usually crystal or ceramic filter types, with a passband for general purpose shortwave radio reception of about 6–20 kHz (for AM – NFM ). The receiver's bandwidth is not determined by the roofing filter passband, but instead by a follow-on crystal filter , mechanical filter , or DSP filter, all of which allow a much tighter filtering curve than a typical roofing filter. For more demanding uses like listening to weak CW or SSB signals, a roofing filter is required that gives a smaller passband appropriate to the mode of the received signal. It is often used at a high first IF stage above 40 MHz , with passband widths of 250 Hz, 500 Hz (for CW), or 1.8 kHz (for SSB). These narrow filters require that the receiver uses a first IF well below VHF range, perhaps 9 or 11 MHz. [ 1 ]
https://en.wikipedia.org/wiki/Roofing_filter
Rooibos ( / ˈ r ɔɪ b ɒ s / ROY -boss ; Afrikaans: [ˈroːibɔs] ⓘ , lit. ' red bush ' ), or Aspalathus linearis , is a broom -like member of the plant family Fabaceae that grows in South Africa 's Fynbos biome. The leaves are used to make a caffeine -free herbal infusion that has been popular in Southern Africa for generations. Since the 2000s, rooibos has gained popularity internationally, with an earthy flavour and aroma that is similar to yerba mate or tobacco . [ 3 ] [ 4 ] [ 5 ] Outside of Southern Africa, it is called bush tea , red tea , or redbush tea (predominantly in Great Britain). The name rooibos is Afrikaans deriving from rooi bos , meaning ' red bush ' . The name is protected in South Africa and has protected designation of origin status in the EU. Rooibos was formerly classified in the genus Psoralea but is now thought to be part of Aspalathus , following Dahlgren (1980). The specific name of linearis , for the plant's linear growing structure and needle-like leaves, was given by Burman (1759). Rooibos is usually grown in the Cederberg , a small mountainous area in the West Coast District of the Western Cape province of South Africa . [ 6 ] Generally, the leaves undergo oxidation. [ 7 ] This process produces the distinctive reddish-brown colour of rooibos and enhances the flavour. Unoxidised green rooibos is also produced, but the more demanding production process for green rooibos (similar to the method by which green tea is produced) makes it more expensive than traditional rooibos. It carries a malty and slightly grassy flavour somewhat different from its red counterpart. [ 8 ] Rooibos is commonly prepared as a tisane by steeping in hot water, in the same manner as black tea . The infusion is consumed on its own or flavoured by addition of milk, lemon, sugar or honey. It is also served as lattes , cappuccinos or iced tea . [ 9 ] As a fresh leaf, rooibos contains a high content of ascorbic acid (vitamin C). 
[ 10 ] Rooibos tea does not contain caffeine [ 11 ] [ 12 ] and has low tannin levels compared to black tea or green tea . [ 10 ] Rooibos contains polyphenols , including flavanols , flavones , flavanones , dihydrochalcones , [ 13 ] [ 14 ] aspalathin [ 15 ] and nothofagin . [ 16 ] The processed leaves and stems contain benzoic and cinnamic acids . [ 17 ] Rooibos grades are largely related to the percentage needle or leaf to stem content in the mix. A higher leaf content results in a darker liquor, richer flavour and less "dusty" aftertaste. The high-grade rooibos is exported and does not reach local markets, with major consumers being the EU, particularly Germany, where it is used in creating flavoured blends for loose-leaf tea markets. [ 18 ] Three species of the Borboniae group of Aspalathus , namely A. angustifolia , A. cordata and A. crenata , were once used as tea. These plants have simple, rigid, spine-tipped leaves, hence the common name 'stekeltee'. The earliest record of the use of Aspalathus as a source of tea was that of Carl Peter Thunberg , who wrote about the use of A. cordata as tea: "Of the leaves of Borbonia cordata the country people make tea." (Thunberg, July 1772, at Paarl). This anecdote is sometimes erroneously associated with rooibos tea ( Aspalathus linearis ). [ 19 ] Archaeological records suggest that Aspalathus linearis could have been used thousands of years ago, but that does not imply rooibos tea was made in precolonial times. [ 20 ] The traditional method of harvesting and processing rooibos (for making rooibos infusion or decoction tea) could have, at least partly, originated in precolonial times. However, it does not necessarily follow that San and Khoikhoi used that method to prepare a beverage that they consumed for pleasure as tea. The earliest available ethnobotanical records of rooibos tea originate in the late 19th century. No Khoi or San vernacular names of the species have been recorded. 
Several authors have assumed that the tea originated from the local inhabitants of the Cederberg . Apparently, rooibos tea is a traditional drink of Khoi-descended people of the Cederberg (and "poor whites"). However, that tradition has not been traced further back than the last quarter of the 19th century. [ 19 ] Traditionally, the local people would climb the mountains and cut the fine needle-like leaves from wild rooibos plants. They then rolled the bunches of leaves into hessian bags and brought them down the steep slopes using donkeys. Rooibos tea was traditionally processed by beating the material on a flat rock with a heavy wooden pole or club or a large wooden hammer. [ 19 ] The historical record of the use of rooibos in precolonial and early colonial times is mostly a record of absence. Colonial-era settlers could have learnt about some properties of the Aspalathus linearis from pastoralists and hunter-gatherers of the Cederberg region. The nature of that knowledge was not documented. Given the available data, the origin of rooibos tea can be viewed in the context of the global expansion of tea trade and the colonial habit of drinking Chinese and later Ceylon tea. In that case, the rooibos infusion or decoction served as a local replacement for the expensive Asian product. [ 20 ] It appears that both the indigenous (San and Khoikhoi) and the colonial inhabitants of rooibos-growing areas contributed to the traditional knowledge of rooibos in some way. For instance, medicinal uses might have been introduced before the 18th century by Khoisan pastoralists or San hunter-gatherers. Also, the use of the Aspalathus linearis to make tea, including the production processes, such as bruising and oxidising the leaves, is more likely to have been introduced in colonial times by settlers who were accustomed to drinking Asian tea or its substitutes. 
[ 20 ] In 1904, the South African businessman Benjamin Ginsberg, also referred to as 'the father of the rooibos industry', [ 21 ] ran a variety of experiments at Rondegat Farm and finally cured rooibos. He simulated the traditional Chinese method of making Keemun by fermenting the tea in barrels, drawing inspiration from his Jewish family's tradition of brewing tea and herbal infusions , which were customarily prepared with a samovar . [ 22 ] The major hurdle in growing rooibos commercially was that farmers could not germinate the rooibos seeds. The seeds were hard to find and impossible to germinate commercially. A medical doctor by profession and business partner to Ginsberg, Pieter le Fras Nortier, [ 23 ] ascertained that seeds require a process of scarification before they are planted in acidic, sandy soil. [ 24 ] [ 25 ] By the late 1920s, growing demand for the tea had led to problems with supply of the wild rooibos plants. As a remedy, Pieter le Fras Nortier, a district surgeon in Clanwilliam and an avid naturalist, proposed to develop a cultivated variety of rooibos to be raised on appropriately-situated land. Nortier worked on cultivation of the rooibos species in partnership with the farmers Oloff Bergh and William Riordan and with the encouragement of Benjamin Ginsberg. [ 20 ] Bergh harvested a large amount of rooibos in 1925 on his farm Kleinvlei, in the Pakhuis Mountains. Nortier collected seeds in the Pakhuis Mountains (Rocklands) and in a large valley, called Grootkloof, and those first selected seeds are known as the Nortier-type and Redtea-type. [ 26 ] In 1930, Nortier began conducting experiments with the commercial cultivation of the rooibos plant. He cultivated the first plants at Clanwilliam on his farm of Eastside and on the farm of Klein Kliphuis. The tiny seeds were very difficult to come by; Nortier paid the local villagers £5 per matchbox of seeds collected. 
An aged Khoi woman found an unusual seed source: having chanced upon ants dragging seed, she followed them back to their nest and, on breaking it open, found a granary . [ 27 ] Nortier's research was ultimately successful, and he subsequently showed all the local farmers how to germinate their own seeds. The secret lay in scarifying the seed pods. Nortier placed a layer of seeds between two mill stones and ground away some of the seed pod wall. Thereafter the seeds were easily propagated. Over the next decade the price of seeds rose to £80 per pound, the most expensive vegetable seed in the world, as farmers rushed to plant rooibos. Today, the seed is gathered by special sifting processes. Nortier is today accepted as the father of the rooibos tea industry. The variety developed by Nortier has become the mainstay of the rooibos industry enabling it to expand and create income and jobs for inhabitants of rooibos-growing regions. [ 20 ] Thanks to Nortier's research, rooibos tea became an iconic national beverage and then a globalised commodity. Production is today the economic mainstay of the Clanwilliam district. In 1948, the University of Stellenbosch awarded Nortier an Honorary Doctorate D.Sc. (Agria) in recognition for his valuable contribution to South African agriculture. Aspalathus linearis has a small endemic range in the wild, but horticultural techniques to maximise production have been effective at maintaining cultivation as a semi-wild crop to supply the new demands of the broadening rooibos tea industry. A. linearis is often grouped with the honeybush ( Cyclopia ) , another plant from the Fynbos region of Southern Africa , which is also used to make tea. Like other members of the genus, A. linearis is considered a part of the Fynbos ecoregion in the Cape Floristic Region , whose plants often depend on fire for reproduction. A. linearis is a legume and thus an angiosperm and produces an indehiscent fruit. Its flowers make up a raceme inflorescence . 
Seed germination can be slow, but sprouting can be induced by acid treatment. [ 28 ] The seeds are hard-shelled and often need scarification. [ 29 ] For A. linearis , fire can stimulate resprouting in the species, but the sprouting is less than that of other plants in the Fynbos ecoregion. A. linearis plants can be facultative or obligate sprouters, with lignotuber development for regrowth after fires. Typically, there are two classifications of A. linearis in response to fire: reseeders and resprouters . Reseeders are killed by fire, but it stimulates their seeds’ germination. Resprouters are not completely killed during a fire and grow back from established lignotubers . [ 30 ] Seeds of wild populations are dispersed by species of ants, whose use as dispersers reduces parent-offspring and sibling-sibling competition. [ 31 ] Ants are also helpful in dispersion as they reduce the susceptibility of seeds to other herbivores. As in most other legumes , there is a symbiotic relationship between rhizobia and the underground lignotuber structure that promotes nitrogen fixation and growth. The nitrogen content in the soil is an important environmental factor for growth, development, and reproduction. Hawkins, Malgas, & Biénabe (2011) suggested that there are multiple ecotypes of A. linearis that have different selected methods of growth and morphology and are dependent on the environment. [ 32 ] It is unclear how many ecotypes there might be, given their limited geographic range and the limited literature about genetic diversity. Van der Bank, Van der Bank, & Van Wyk (1999) [ 33 ] suggest that resprouting populations and reseeding populations have been selected for based on the environment as a way to reduce genetic bottlenecks; however, whether that promotes certain reproductive strategies over others was unclear. 
[ 33 ] Wild populations can contain both sprouting and non-sprouting individuals, but cultivated rooibos are typically reseeders, not resprouters, and have higher growth rates. Cultivated A. linearis can be selected for certain traits that are desirable for human use. Cultivated plants are diploid with a base chromosome number of 9 ( 2 n = 18 chromosomes), but the understanding of how this might differ in ecotypes is limited. [ 30 ] The selection process can include human-mediated pollination, fire suppression, and supplementing soil contents. Like many other Fynbos plants, A. linearis is not significantly pollinated by cape honey bees , which suggests an alternative way of primary pollination. [ 34 ] Some wasps likely play an important role in pollinating the flowers and some wasp species are thought to be specially adapted to accessing the A. linearis flower. [ 35 ] In 1994, Burke International registered the name "Rooibos" with the US Patent and Trademark Office and so established a monopoly on the name in the United States when the plant was virtually unknown there. When it later entered more widespread use, Burke demanded that companies pay fees to use the name or cease its use. In 2005, the American Herbal Products Association and a number of import companies succeeded in defeating the trademark through petitions and lawsuits. After losing one of the cases, Burke surrendered the name to the public domain . [ 36 ] The South African Department of Trade and Industry issued final rules on 6 September 2013 that protect and restrict the use of the names "rooibos", "red bush", "rooibostee", "rooibos tea", "rooitee", and "rooibosch" in the country so that the names cannot be used for products unless they are derived from the Aspalathus linearis plant. It also provides guidance and restrictions on how, and in what proportions, products containing rooibos may use the name in their branding. 
[ 37 ] [ 38 ] In May 2021, the European Union conferred protected designation of origin (PDO) status to "rooibos". Any foodstuff sold as "rooibos" in the EU and several countries outside the bloc must be made by using only Aspalathus linearis leaves that are cultivated in the Cederberg region of South Africa. [ 39 ] [ 40 ] The rooibos plant is endemic to a small part of the Western Cape Province , South Africa. It grows in a symbiotic relationship with local micro-organisms. [ 41 ] A 2012 South African news item cited concerns regarding the prospects of rooibos farming in the face of climate change . [ 42 ] The use of rooibos and the expansion of its cultivation are threatening other local species of plants endemic to the area such as Protea convexa , [ 43 ] Roridula dentata [ 44 ] and P. scolymocephala . [ 45 ]
https://en.wikipedia.org/wiki/Rooibos
In combinatorial mathematics , a rook polynomial is a generating polynomial of the number of ways to place non-attacking rooks on a board that looks like a checkerboard ; that is, no two rooks may be in the same row or column. The board is any subset of the squares of a rectangular board with m rows and n columns; we think of it as the squares in which one is allowed to put a rook. The board is the ordinary chessboard if all squares are allowed and m = n = 8, and a chessboard of any size if all squares are allowed and m = n . The coefficient of x k in the rook polynomial R B ( x ) is the number of ways k rooks, none of which attacks another, can be arranged in the squares of B . The rooks are arranged in such a way that there is no pair of rooks in the same row or column. In this sense, an arrangement is the positioning of rooks on a static, immovable board; the arrangement will not be different if the board is rotated or reflected while keeping the squares stationary. The polynomial also remains the same if rows are interchanged or columns are interchanged. The term "rook polynomial" was coined by John Riordan . [ 1 ] Despite the name's derivation from chess , the impetus for studying rook polynomials is their connection with counting permutations (or partial permutations ) with restricted positions. A board B that is a subset of the n × n chessboard corresponds to permutations of n objects, which we may take to be the numbers 1, 2, ..., n , such that the number a j in the j -th position in the permutation must be the column number of an allowed square in row j of B . Famous examples include the number of ways to place n non-attacking rooks on boards with various patterns of forbidden squares. Interest in rook placements arises in pure and applied combinatorics, group theory , number theory , and statistical physics . 
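The correspondence between rook placements and permutations with restricted positions is easy to explore by brute force. The sketch below (function and variable names are illustrative, not from the article) counts placements of n non-attacking rooks on an n × n board with a given set of allowed squares; forbidding the main diagonal recovers the derangement numbers:

```python
from itertools import permutations

def count_placements(n, allowed):
    """Count ways to place n non-attacking rooks on the board
    {(i, j) : allowed(i, j)} inside an n x n grid, i.e. permutations
    p with square (j, p[j]) allowed for every row j."""
    return sum(all(allowed(j, p[j]) for j in range(n))
               for p in permutations(range(n)))

# Forbidding the main diagonal gives the derangement numbers.
derangements = [count_placements(n, lambda i, j: i != j) for n in range(1, 7)]
print(derangements)  # [0, 1, 2, 9, 44, 265]
```

With `allowed` always true, the count is simply n!, the classical rooks problem discussed below.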
The particular value of rook polynomials comes from the utility of the generating function approach, and also from the fact that the zeroes of the rook polynomial of a board provide valuable information about its coefficients, i.e., the number of non-attacking placements of k rooks. The rook polynomial R B ( x ) of a board B is the generating function for the numbers of arrangements of non-attacking rooks: R B ( x ) = ∑ k = 0 min ( m , n ) r k ( B ) x k {\displaystyle R_{B}(x)=\sum _{k=0}^{\min(m,n)}r_{k}(B)x^{k}} where r k ( B ) {\displaystyle r_{k}(B)} is the number of ways to place k non-attacking rooks on the board B . There is a maximum number of non-attacking rooks the board can hold; indeed, there cannot be more rooks than the number of rows or number of columns in the board (hence the limit min ( m , n ) {\displaystyle \min(m,n)} ). [ 2 ] For rectangular m × n boards B m , n , we write R m , n ( x ) := R B m , n ( x ), and if m = n , R n ( x ) := R n , n ( x ). The first few rook polynomials on square n × n boards are: R 1 ( x ) = x + 1 , R 2 ( x ) = 2 x 2 + 4 x + 1 , R 3 ( x ) = 6 x 3 + 18 x 2 + 9 x + 1 {\displaystyle R_{1}(x)=x+1,\quad R_{2}(x)=2x^{2}+4x+1,\quad R_{3}(x)=6x^{3}+18x^{2}+9x+1} In words, this means that on a 1 × 1 board, 1 rook can be arranged in 1 way, and zero rooks can also be arranged in 1 way (empty board); on a complete 2 × 2 board, 2 rooks can be arranged in 2 ways (on the diagonals), 1 rook can be arranged in 4 ways, and zero rooks can be arranged in 1 way; and so forth for larger boards. The rook polynomial of a rectangular chessboard is closely related to the generalized Laguerre polynomial L n α ( x ) by the identity R m , n ( x ) = n ! x n L n ( m − n ) ( − x − 1 ) {\displaystyle R_{m,n}(x)=n!\,x^{n}L_{n}^{(m-n)}(-x^{-1})} A rook polynomial is a special case of one kind of matching polynomial , which is the generating function of the number of k -edge matchings in a graph. The rook polynomial R m , n ( x ) corresponds to the complete bipartite graph K m , n . The rook polynomial of a general board B ⊆ B m , n corresponds to the bipartite graph with left vertices v 1 , v 2 , ..., v m and right vertices w 1 , w 2 , ..., w n and an edge v i w j whenever the square ( i , j ) is allowed, i.e., belongs to B . Thus, the theory of rook polynomials is, in a sense, contained in that of matching polynomials. 
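The coefficients r_k(B) of a small board can be computed directly from the definition; this brute-force sketch (names are illustrative) reproduces the square-board values described in words above:

```python
from itertools import combinations
from math import comb, factorial

def rook_numbers(board, max_k):
    """r_k(B): number of ways to choose k squares of B with all rows
    and all columns distinct, i.e. k non-attacking rooks."""
    result = []
    for k in range(max_k + 1):
        count = sum(
            1 for squares in combinations(board, k)
            if len({r for r, _ in squares}) == k
            and len({c for _, c in squares}) == k)
        result.append(count)
    return result

square = lambda n: [(r, c) for r in range(n) for c in range(n)]
print(rook_numbers(square(2), 2))  # [1, 4, 2]   i.e. R_2(x) = 1 + 4x + 2x^2
print(rook_numbers(square(3), 3))  # [1, 9, 18, 6]
# For full n x n boards this agrees with comb(n, k)**2 * k!, the closed
# form derived later in the article for general m x n boards:
assert rook_numbers(square(3), 3) == [comb(3, k) ** 2 * factorial(k)
                                      for k in range(4)]
```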
We deduce an important fact about the coefficients r k , which we recall give the number of non-attacking placements of k rooks in B : these numbers are unimodal , i.e., they increase to a maximum and then decrease. This follows (by a standard argument) from the theorem of Heilmann and Lieb [ 3 ] about the zeroes of a matching polynomial (a different one from that which corresponds to a rook polynomial, but equivalent to it under a change of variables), which implies that all the zeroes of a rook polynomial are negative real numbers. For incomplete square n × n boards (i.e., boards where rooks are not allowed on some arbitrary subset of the squares), computing the number of ways to place n rooks on the board is equivalent to computing the permanent of a 0–1 matrix. A precursor to the rook polynomial is the classic "Eight rooks problem" by H. E. Dudeney [ 4 ] in which he shows that the maximum number of non-attacking rooks on a chessboard is eight by placing them on one of the main diagonals (Fig. 1). The question asked is: "In how many ways can eight rooks be placed on an 8 × 8 chessboard so that neither of them attacks the other?" The answer is: "Obviously there must be a rook in every row and every column. Starting with the bottom row, it is clear that the first rook can be put on any one of eight different squares (Fig. 1). Wherever it is placed, there is the option of seven squares for the second rook in the second row. Then there are six squares from which to select the third row, five in the fourth, and so on. Therefore the number of different ways must be 8 × 7 × 6 × 5 × 4 × 3 × 2 × 1 = 40,320" (that is, 8!, where "!" is the factorial ). [ 5 ] The same result can be obtained in a slightly different way. Let us endow each rook with a positional number, corresponding to the number of its rank, and assign it a name that corresponds to the name of its file. Thus, rook a1 has position 1 and name "a", rook b2 has position 2 and name "b", etc. 
Then let us order the rooks into an ordered list ( sequence ) by their positions. The diagram in Fig. 1 will then transform into the sequence (a,b,c,d,e,f,g,h). Placing any rook on another file would involve moving the rook that hitherto occupied that file to the file vacated by the first rook. For instance, if rook a1 is moved to the "b" file, rook b2 must be moved to the "a" file, and now they will become rook b1 and rook a2. The new sequence will become (b,a,c,d,e,f,g,h). In combinatorics, this operation is termed permutation , and the sequences obtained as a result of the permutation are permutations of the given sequence. The total number of permutations of a sequence of 8 elements is 8! ( factorial of 8). To assess the effect of the imposed limitation "rooks must not attack each other", consider the problem without such limitation. In how many ways can eight rooks be placed on an 8 × 8 chessboard? This will be the total number of combinations of 8 rooks on 64 squares: ( 64 8 ) = 4 , 426 , 165 , 368 {\displaystyle {\binom {64}{8}}=4{,}426{,}165{,}368} Thus, the limitation "rooks must not attack each other" reduces the total number of allowable positions from 4,426,165,368 combinations to 40,320 permutations, a reduction by a factor of about 109,776. A number of problems from different spheres of human activity can be reduced to the rook problem by giving them a "rook formulation". As an example: A company must employ n workers on n different jobs and each job must be carried out only by one worker. In how many ways can this appointment be done? Let us put the workers on the ranks of the n × n chessboard, and the jobs on the files. If worker i is appointed to job j , a rook is placed on the square where rank i crosses file j . Since each job is carried out only by one worker and each worker is appointed to only one job, all files and ranks will contain only one rook as a result of the arrangement of n rooks on the board, that is, the rooks do not attack each other. 
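The comparison between unrestricted and non-attacking placements can be reproduced with the standard library (a minimal sketch):

```python
from math import comb, factorial

unrestricted = comb(64, 8)      # any 8 of the 64 squares
non_attacking = factorial(8)    # one rook in every rank and file
print(unrestricted, non_attacking)          # 4426165368 40320
print(round(unrestricted / non_attacking))  # 109776
```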
The classical rooks problem immediately gives the value of r 8 , the coefficient in front of the highest order term of the rook polynomial. Indeed, its result is that 8 non-attacking rooks can be arranged on an 8 × 8 chessboard in r 8 = 8! = 40320 ways. Let us generalize this problem by considering an m × n board, that is, a board with m ranks (rows) and n files (columns). The problem becomes: In how many ways can one arrange k rooks on an m × n board in such a way that they do not attack each other? It is clear that for the problem to be solvable, k must be less than or equal to the smaller of the numbers m and n ; otherwise one cannot avoid placing a pair of rooks on a rank or on a file. Let this condition be fulfilled. Then the arrangement of rooks can be carried out in two steps. First, choose the set of k ranks on which to place the rooks. Since the number of ranks is m , of which k must be chosen, this choice can be done in ( m k ) {\displaystyle {\binom {m}{k}}} ways. Similarly, the set of k files on which to place the rooks can be chosen in ( n k ) {\displaystyle {\binom {n}{k}}} ways. Because the choice of files does not depend on the choice of ranks, according to the product rule there are ( m k ) ( n k ) {\displaystyle {\binom {m}{k}}{\binom {n}{k}}} ways to choose the ranks and files on which to place the rooks. However, the task is not yet finished because k ranks and k files intersect in k² squares. By deleting unused ranks and files and compacting the remaining ranks and files together, one obtains a new board of k ranks and k files. It was already shown that on such a board k rooks can be arranged in k ! ways (so that they do not attack each other). Therefore, the total number of possible non-attacking rook arrangements is: r k = ( m k ) ( n k ) k ! {\displaystyle r_{k}={\binom {m}{k}}{\binom {n}{k}}k!} [ 6 ] For instance, 3 rooks can be placed on a conventional chessboard (8 × 8) in 8 ! 8 ! 3 ! 5 ! 5 ! = 18 , 816 {\displaystyle \textstyle {\frac {8!8!}{3!5!5!}}=18,816} ways. For k = m = n , the above formula gives r k = n ! 
that corresponds to the result obtained for the classical rooks problem. The rook polynomial with explicit coefficients is now: R m , n ( x ) = ∑ k = 0 min ( m , n ) ( m k ) ( n k ) k ! x k {\displaystyle R_{m,n}(x)=\sum _{k=0}^{\min(m,n)}{\binom {m}{k}}{\binom {n}{k}}k!\,x^{k}} If the limitation "rooks must not attack each other" is removed, one must choose any k squares from m × n squares. This can be done in ( m n k ) {\displaystyle {\binom {mn}{k}}} ways. If the k rooks differ in some way from each other, e.g., they are labelled or numbered, all the results obtained so far must be multiplied by k !, the number of permutations of k rooks. As a further complication to the rooks problem, let us require that rooks not only be non-attacking but also symmetrically arranged on the board. Depending on the type of symmetry, this is equivalent to rotating or reflecting the board. Symmetric arrangements lead to many problems, depending on the symmetry condition. [ 7 ] [ 8 ] [ 9 ] [ 10 ] The simplest of those arrangements is when rooks are symmetric about the centre of the board. Let us designate with G n the number of arrangements in which n rooks are placed on a board with n ranks and n files. Now let the board contain 2 n ranks and 2 n files. A rook on the first file can be placed on any of the 2 n squares of that file. According to the symmetry condition, placement of this rook defines the placement of the rook that stands on the last file − it must be arranged symmetrically to the first rook about the board centre. Let us remove the first and the last files and the ranks that are occupied by rooks (since the number of ranks is even, the removed rooks cannot stand on the same rank). This will give a board of 2 n − 2 files and 2 n − 2 ranks. It is clear that to each symmetric arrangement of rooks on the new board corresponds a symmetric arrangement of rooks on the original board. Therefore, G 2 n = 2 n · G 2 n − 2 (the factor 2 n in this expression comes from the possibility for the first rook to occupy any of the 2 n squares on the first file). 
By iterating the above formula one reaches the case of a 2 × 2 board, on which there are 2 symmetric arrangements (on the diagonals). As a result of this iteration, the final expression is G 2 n = 2 n n ! {\displaystyle G_{2n}=2^{n}n!} For the usual chessboard (8 × 8), G 8 = 2 4 × 4 ! = 16 × 24 = 384 {\displaystyle G_{8}=2^{4}\times 4!=16\times 24=384} centrally symmetric arrangements of 8 rooks. One such arrangement is shown in Fig. 2. For odd-sized boards (containing 2 n + 1 ranks and 2 n + 1 files) there is always a square that does not have its symmetric double − this is the central square of the board. There must always be a rook placed on this square. Removing the central file and rank, one obtains a symmetric arrangement of 2 n rooks on a 2 n × 2 n board. Therefore, for such a board, once again G 2 n + 1 = G 2 n = 2 n n ! {\displaystyle G_{2n+1}=G_{2n}=2^{n}n!} A little more complicated problem is to find the number of non-attacking arrangements that do not change upon 90° rotation of the board. Let the board have 4 n files and 4 n ranks, and let the number of rooks also be 4 n . In this case, the rook that is on the first file can occupy any square on this file, except the corner squares (a rook cannot be on a corner square because after a 90° rotation there would be 2 rooks that attack each other). There are another 3 rooks that correspond to that rook and they stand, respectively, on the last rank, the last file, and the first rank (they are obtained from the first rook by 90°, 180°, and 270° rotations). Removing the files and ranks of those rooks, one obtains the rook arrangements for a (4 n − 4) × (4 n − 4) board with the required symmetry. Thus, the following recurrence relation is obtained: R 4 n = (4 n − 2) R 4 n − 4 , where R n is the number of arrangements for a n × n board. Iterating, it follows that R 4 n = 2 n ( 2 n − 1 ) ( 2 n − 3 ) ⋯ 1 {\displaystyle R_{4n}=2^{n}(2n-1)(2n-3)\cdots 1} 
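Both closed forms can be verified by a brute-force scan over all 8! permutations; the sketch below (names are mine, not from the article) tests whether the set of occupied squares is invariant under a given symmetry of the board:

```python
from itertools import permutations

def count_symmetric(n, sym):
    """Count n-rook non-attacking arrangements on an n x n board that are
    invariant under the square-to-square map `sym` (e.g. a rotation)."""
    count = 0
    for p in permutations(range(n)):
        squares = {(i, p[i]) for i in range(n)}
        if all(sym(i, j) in squares for (i, j) in squares):
            count += 1
    return count

# 180-degree (central) symmetry on 8 x 8: G_8 = 2^4 * 4! = 384
print(count_symmetric(8, lambda i, j: (7 - i, 7 - j)))  # 384
# 90-degree rotational symmetry on 8 x 8: R_8 = 4 * 3 * 1 = 12
print(count_symmetric(8, lambda i, j: (j, 7 - i)))      # 12
```

Invariance under one generator (a single 90° turn) implies invariance under the whole rotation group, so checking one map per symmetry type suffices.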
The number of arrangements for a (4 n + 1) × (4 n + 1) board is the same as that for a 4 n × 4 n board; this is because on a (4 n + 1) × (4 n + 1) board, one rook must necessarily stand in the centre and thus the central rank and file can be removed. Therefore R 4 n + 1 = R 4 n . For the traditional chessboard ( n = 2), R 8 = 4 × 3 × 1 = 12 possible arrangements with rotational symmetry. For (4 n + 2) × (4 n + 2) and (4 n + 3) × (4 n + 3) boards, the number of solutions is zero. Two cases are possible for each rook: either it stands in the centre or it does not. In the second case, this rook is included in the rook quartet that exchanges squares on turning the board by 90°. Therefore, the total number of rooks must be either 4 n (when there is no central square on the board) or 4 n + 1. This proves that R 4 n + 2 = R 4 n + 3 = 0. The number of arrangements of n non-attacking rooks symmetric to one of the diagonals (for determinacy, the diagonal corresponding to a1–h8 on the chessboard) on an n × n board is given by the telephone numbers defined by the recurrence Q n = Q n − 1 + ( n − 1) Q n − 2 . This recurrence is derived in the following way. Note that the rook on the first file either stands on the bottom corner square or it stands on another square. In the first case, removal of the first file and the first rank leads to a symmetric arrangement of n − 1 rooks on a ( n − 1) × ( n − 1) board. The number of such arrangements is Q n − 1 . In the second case, for the original rook there is another rook, symmetric to the first one about the chosen diagonal. Removing the files and ranks of those rooks leads to a symmetric arrangement of n − 2 rooks on a ( n − 2) × ( n − 2) board. Since the number of such arrangements is Q n − 2 and the rook can be put on any of the n − 1 remaining squares of the first file, there are ( n − 1) Q n − 2 ways for doing this, which immediately gives the above recurrence. 
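The telephone-number recurrence, and its interpretation via involutions (a diagonal-symmetric placement puts a rook on (j, i) whenever one stands on (i, j), so the corresponding permutation is its own inverse), can be checked directly (a sketch; names are illustrative):

```python
from itertools import permutations

def telephone(n):
    """Q_n via the recurrence Q_n = Q_{n-1} + (n-1) Q_{n-2}, Q_0 = Q_1 = 1."""
    q_prev, q = 1, 1
    for i in range(2, n + 1):
        q_prev, q = q, q + (i - 1) * q_prev
    return q

def involutions(n):
    # Permutations that are their own inverse <-> diagonal-symmetric boards.
    return sum(all(p[p[i]] == i for i in range(n))
               for p in permutations(range(n)))

print([telephone(n) for n in range(1, 7)])  # [1, 2, 4, 10, 26, 76]
assert all(telephone(n) == involutions(n) for n in range(1, 7))
```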
The number of diagonal-symmetric arrangements is then given by the expression: Q n = ∑ s = 0 ⌊ n / 2 ⌋ n ! 2 s s ! ( n − 2 s ) ! {\displaystyle Q_{n}=\sum _{s=0}^{\lfloor n/2\rfloor }{\frac {n!}{2^{s}\,s!\,(n-2s)!}}} This expression is derived by partitioning all rook arrangements into classes; in class s are those arrangements in which s pairs of rooks do not stand on the diagonal. In exactly the same way, it can be shown that the number of n -rook arrangements on an n × n board, such that they do not attack each other and are symmetric to both diagonals, is given by the recurrence equations B 2 n = 2 B 2 n − 2 + (2 n − 2) B 2 n − 4 and B 2 n + 1 = B 2 n . A different type of generalization is that in which rook arrangements that are obtained from each other by symmetries of the board are counted as one. For instance, if rotating the board by 90 degrees is allowed as a symmetry, then any arrangement obtained by a rotation of 90, 180, or 270 degrees is considered to be "the same" as the original pattern, even though these arrangements are counted separately in the original problem where the board is fixed. For such problems, Dudeney [ 11 ] observes: "How many ways there are if mere reversals and reflections are not counted as different has not yet been determined; it is a difficult problem." The problem reduces to that of counting symmetric arrangements via Burnside's lemma .
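The class-counting argument can be checked numerically: class s contributes n!/(2^s · s! · (n − 2s)!) arrangements (s off-diagonal pairs plus n − 2s diagonal rooks), and the resulting sum, the standard closed form for the telephone numbers, matches the recurrence. A sketch:

```python
from math import factorial

def telephone_sum(n):
    """Sum over classes: s pairs of off-diagonal rooks plus n - 2s rooks
    on the diagonal; each class has n!/(2^s * s! * (n-2s)!) arrangements."""
    return sum(factorial(n) // (2 ** s * factorial(s) * factorial(n - 2 * s))
               for s in range(n // 2 + 1))

def telephone_rec(n):
    """Q_n via the recurrence Q_n = Q_{n-1} + (n-1) Q_{n-2}."""
    q_prev, q = 1, 1
    for i in range(2, n + 1):
        q_prev, q = q, q + (i - 1) * q_prev
    return q

assert all(telephone_sum(n) == telephone_rec(n) for n in range(1, 12))
print([telephone_sum(n) for n in range(1, 7)])  # [1, 2, 4, 10, 26, 76]
```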
https://en.wikipedia.org/wiki/Rook_polynomial
The room-temperature densification method was developed for Li 2 MoO 4 ceramics and is based on the water-solubility of Li 2 MoO 4 . It can be used for the fabrication of Li 2 MoO 4 ceramics instead of conventional thermal sintering . The method utilizes a small amount of aqueous phase formed by moistening the Li 2 MoO 4 powder. The densification occurs during sample pressing as the solution fills the pores between the powder particles and recrystallizes . The contact points of the particles provide a high-pressure zone, where solubility is increased, whereas the pores act as a suitable place for the precipitation of the solution. Any residual water is removed by post-processing, typically at 120 °C. The method is also suitable for Li 2 MoO 4 composite ceramics with up to 30 volume-% of filler material, enabling the optimization of the dielectric properties. [ 1 ] [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Room-temperature_densification_method
A room-temperature superconductor is a hypothetical material capable of displaying superconductivity above 0 °C (273 K; 32 °F), operating temperatures which are commonly encountered in everyday settings. As of 2023, the material with the highest accepted superconducting temperature was highly pressurized lanthanum decahydride , whose transition temperature is approximately 250 K (−23 °C) at 200 GPa. [ 1 ] [ 2 ] At standard atmospheric pressure , cuprates currently hold the temperature record, manifesting superconductivity at temperatures as high as 138 K (−135 °C). [ 3 ] Over time, researchers have consistently encountered superconductivity at temperatures previously considered unexpected or impossible, challenging the notion that achieving superconductivity at room temperature was infeasible. [ 4 ] [ 5 ] The concept of "near-room temperature" transient effects has been a subject of discussion since the early 1950s. Since the discovery of high-temperature superconductors ("high" being temperatures above 77 K (−196.2 °C; −321.1 °F), the boiling point of liquid nitrogen ), several materials have been claimed, although not confirmed, to be room-temperature superconductors. [ 6 ] In 2014, an article published in Nature suggested that some materials, notably YBCO ( yttrium barium copper oxide ), could be made to briefly superconduct at room temperature using infrared laser pulses. [ 7 ] In 2015, an article published in Nature by researchers of the Otto Hahn Institute suggested that under certain conditions such as extreme pressure H 2 S transitioned to a superconductive form H 3 S at 150 GPa (around 1.5 million times atmospheric pressure) in a diamond anvil cell . [ 8 ] The critical temperature is 203 K (−70 °C) which would be the highest T c ever recorded and their research suggests that other hydrogen compounds could superconduct at up to 260 K (−13 °C). 
[ 9 ] [ 10 ] In 2018, researchers noted a possible superconducting phase at 260 K (−13 °C) in lanthanum decahydride ( La H 10 ) at elevated (200 GPa ) pressure. [ 11 ] In 2019, the material with the highest accepted superconducting temperature was highly pressurized lanthanum decahydride, whose transition temperature is approximately 250 K (−23 °C). [ 1 ] [ 2 ] Though not room temperature, a rare earth 'infinite layer' nickelate was recently discovered that superconducts at the unheard-of (for nickelates) temperature of 44 K at ambient pressure. This material is stable in air, unlike cuprates, and other nickelates may have even higher critical temperatures. The current theory is that these materials leverage very unusual physics, including pair density waves (PDW), that may not be as sensitive to the normal pitfalls of high-temperature superconductors such as low critical current. [ 12 ] In 1993 and 1997, Michel Laguës and his team published evidence of room-temperature superconductivity observed on ultrathin nanostructures of bismuth strontium calcium copper oxide (BSCCO, pronounced bisko, Bi 2 Sr 2 Ca n −1 Cu n O 2 n +4+ x ) deposited by molecular beam epitaxy (MBE). [ 13 ] [ 14 ] These compounds exhibit extremely low resistivities, orders of magnitude below that of copper, strongly non-linear I(V) characteristics, and hysteretic I(V) behavior. In 2000, while extracting electrons from diamond during ion implantation work, South African physicist Johan Prins claimed to have observed a phenomenon that he explained as room-temperature superconductivity within a phase formed on the surface of oxygen-doped type IIa diamonds in a 10 −6 mbar vacuum . [ 15 ] In 2003, a group of researchers published results on high-temperature superconductivity in palladium hydride (PdH x : x > 1 ) [ 16 ] and an explanation in 2004. 
[ 17 ] In 2007, the same group published results suggesting a superconducting transition temperature of 260 K, [ 18 ] with transition temperature increasing as the density of hydrogen inside the palladium lattice increases. This has not been corroborated by other groups. In March 2021, an announcement reported superconductivity in a layered yttrium-palladium-hydron material at 262 K and a pressure of 187 GPa. Palladium may act as a hydrogen migration catalyst in the material. [ 19 ] On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal Advanced Quantum Technologies , claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects. [ 20 ] In 2012, an Advanced Materials article claimed superconducting behavior of graphite powder after treatment with pure water at temperatures as high as 300 K and above. [ 21 ] [ unreliable source? ] So far, the authors have not been able to demonstrate the occurrence of a clear Meissner phase and the vanishing of the material's resistance. In 2018, Dev Kumar Thapa and Anshu Pandey from the Solid State and Structural Chemistry Unit of the Indian Institute of Science , Bangalore claimed the observation of superconductivity at ambient pressure and room temperature in films and pellets of a nanostructured material that is composed of silver particles embedded in a gold matrix. [ 22 ] Due to similar noise patterns of supposedly independent plots and the publication's lack of peer review , the results have been called into question. [ 23 ] Although the researchers repeated their findings in a later paper in 2019, [ 24 ] this claim is yet to be verified and confirmed. [ citation needed ] Since 2016, a team led by Ranga P. Dias has produced a number of retracted or challenged papers in this field. In 2016 they claimed observation of solid metallic hydrogen. 
[ 25 ] In October 2020, they reported room-temperature superconductivity at 288 K (at 15 °C) in a carbonaceous sulfur hydride at 267 GPa, triggered into crystallisation via green laser. [ 26 ] [ 27 ] This was retracted in 2022 after flaws in their statistical methods were identified [ 28 ] and led to questioning of other data. [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] [ 34 ] In 2023 he reported superconductivity at 294 K and 1 GPa in nitrogen-doped lutetium hydride , in a paper widely met with skepticism about its methods and data. Later in 2023 he was found to have plagiarized parts of his dissertation from someone else's thesis, and to have fabricated data in a paper on manganese disulfide , which was retracted. [ 35 ] The lutetium hydride paper was also retracted. [ citation needed ] The first attempts to replicate those results failed. [ 36 ] [ 37 ] [ 38 ] On July 23, 2023, a Korean team claimed that Cu-doped lead apatite, which they named LK-99 , was superconducting up to 370 K, though they had not observed this fully. [ 39 ] They posted two preprints to arXiv , [ 40 ] published a paper in a journal, [ 41 ] and submitted a patent application. [ 42 ] The reported observations were received with skepticism by experts due to the lack of clear signatures of superconductivity. [ 43 ] The story was widely discussed on social media, leading to a large number of attempted replications, none of which had more than qualified success. By mid-August, a series of papers from major labs provided significant evidence that LK-99 was not a superconductor, finding resistivity much higher than copper, and explaining observed effects such as magnetic response and resistance drops in terms of impurities and ferromagnetism in the material. 
[ 44 ] [ 45 ] Theoretical work by British physicist Neil Ashcroft predicted that solid metallic hydrogen at extremely high pressure (~500 GPa ) should become superconducting at approximately room temperature, due to its extremely high speed of sound and expected strong coupling between the conduction electrons and the lattice-vibration phonons . [ 46 ] A team at Harvard University has claimed to have made metallic hydrogen, reporting a pressure of 495 GPa. [ 47 ] Though the exact critical temperature has not yet been determined, weak signs of a possible Meissner effect and changes in magnetic susceptibility at 250 K may have appeared in early magnetometer tests on an original, now-lost sample. A French team is working with doughnut-shaped rather than planar samples at the diamond culet tips. [ 48 ] In 1964, William A. Little proposed the possibility of high-temperature superconductivity in organic polymers . [ 49 ] In 2004, Ashcroft returned to his idea and suggested that hydrogen-rich compounds can become metallic and superconducting at lower pressures than hydrogen. More specifically, he proposed a novel way to pre-compress hydrogen chemically by examining IVa hydrides . [ 50 ] In 2014–2015, conventional superconductivity was observed in a sulfur hydride system ( H 2 S or H 3 S ) at 190 K to 203 K at pressures of up to 200 GPa. In 2016, research suggested a link between palladium hydride containing small impurities of sulfur nanoparticles as a plausible explanation for the anomalous transient resistance drops seen during some experiments, and hydrogen absorption by cuprates was suggested in light of the 2015 results in H 2 S as a plausible explanation for transient resistance drops or "USO" noticed in the 1990s by Chu et al. during research after the discovery of YBCO . 
[ 51 ] It has been predicted that Sc H 12 ( scandium dodecahydride ) would exhibit superconductivity at room temperature – T c between 333 K (60 °C) and 398 K (125 °C) – under a pressure expected not to exceed 100 GPa. [ 52 ] Some research efforts are currently moving towards ternary superhydrides , where it has been predicted that Li 2 MgH 16 ( dilithium magnesium hexadecahydride ) would have a T c of 473 K (200 °C) at 250 GPa. [ 53 ] [ 54 ] It is also possible that if the bipolaron explanation is correct, a normally semiconducting material can transition under some conditions into a superconductor if a critical level of alternating spin coupling in a single plane within the lattice is exceeded; this may have been documented in very early experiments from 1986. The best analogy here would be anisotropic magnetoresistance , but in this case the outcome is a drop to zero rather than a decrease within a very narrow temperature range for the compounds tested similar to " re-entrant superconductivity ". [ 55 ] In 2018, support was found for electrons having anomalous 3/2 spin states in YPtBi. [ 56 ] Though YPtBi is a relatively low temperature superconductor, this does suggest another approach to creating superconductors. [ 57 ] "Quantum bipolarons" could describe how a material might superconduct at up to nearly room temperature. [ 58 ]
https://en.wikipedia.org/wiki/Room-temperature_superconductor
Room acoustics is a subfield of acoustics dealing with the behaviour of sound in enclosed or partially-enclosed spaces. The architectural details of a room influence the behaviour of sound waves within it, with the effects varying by frequency . Acoustic reflection , diffraction , and diffusion can combine to create audible phenomena such as room modes and standing waves at specific frequencies and locations, echoes , and unique reverberation patterns. The way that sound behaves in a room can be divided into four frequency zones. For frequencies under the Schroeder frequency, certain wavelengths of sound will build up as resonances within the boundaries of the room, and the resonating frequencies can be determined using the room's dimensions. Similar to the calculation of standing waves inside a pipe with two closed ends, the modal frequencies ( f m , n , l ) {\textstyle (f_{m,n,l})} and the sound pressure of those modes at a particular position ( p m , n , l ( x , y , z ) ) {\textstyle (p_{m,n,l}(x,y,z))} of a rectilinear room can be defined as f m , n , l = c 2 ( m L x ) 2 + ( n L y ) 2 + ( l L z ) 2 {\displaystyle f_{m,n,l}={\frac {c}{2}}{\sqrt {{\Big (}{\frac {m}{L_{x}}}{\Big )}^{2}+{\Big (}{\frac {n}{L_{y}}}{\Big )}^{2}+{\Big (}{\frac {l}{L_{z}}}{\Big )}^{2}}}} p m , n , l ( x , y , z ) = A cos ⁡ ( m π L x x ) cos ⁡ ( n π L y y ) cos ⁡ ( l π L z z ) {\displaystyle p_{m,n,l}(x,y,z)=A\cos {\Big (}{\frac {m\pi }{L_{x}}}x{\Big )}\cos {\Big (}{\frac {n\pi }{L_{y}}}y{\Big )}\cos {\Big (}{\frac {l\pi }{L_{z}}}z{\Big )}} where m , n , l = 0 , 1 , 2 , 3... {\textstyle m,n,l=0,1,2,3...} are mode numbers corresponding to the x-, y-, and z-axes of the room, c {\textstyle c} is the speed of sound in m s {\textstyle {\frac {m}{s}}} , L x , L y , L z {\textstyle L_{x},L_{y},L_{z}} are the dimensions of the room in meters, A {\textstyle A} is the amplitude of the sound wave, and x , y , z {\textstyle x,y,z} are coordinates of a point contained inside the room. 
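The modal-frequency formula can be evaluated directly; the sketch below uses c = 343 m/s (air at about 20 °C) and a hypothetical 5 m × 4 m × 3 m room (names are illustrative):

```python
from math import sqrt

def mode_frequency(m, n, l, Lx, Ly, Lz, c=343.0):
    """Modal frequency f_{m,n,l} of a rigid-walled rectangular room, in Hz."""
    return (c / 2.0) * sqrt((m / Lx) ** 2 + (n / Ly) ** 2 + (l / Lz) ** 2)

# Lowest axial mode along the 5 m dimension, and a tangential mode:
print(round(mode_frequency(1, 0, 0, 5.0, 4.0, 3.0), 1))  # 34.3
print(round(mode_frequency(1, 1, 0, 5.0, 4.0, 3.0), 1))  # 54.9
```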
[ 4 ] Modes can occur in all three dimensions of a room. Axial modes are one-dimensional, and build up between one set of parallel walls. Tangential modes are two-dimensional, and involve four walls bounding the space perpendicular to each other. Finally, oblique modes concern all walls within the simplified rectilinear room. [ 5 ] A modal density analysis method using concepts from psychoacoustics , the "Bonello criterion", analyzes the first 48 room modes and plots the number of modes in each one-third of an octave. [ 6 ] To satisfy the criterion, the curve must increase monotonically (each one-third of an octave must have more modes than the preceding one). [ 7 ] Other systems to determine correct room ratios have more recently been developed. [ 8 ] After determining the best dimensions of the room, using the modal density criteria, the next step is to find the correct reverberation time . The most appropriate reverberation time depends on the use of the room. RT60 is a measure of reverberation time. [ 9 ] Times of about 1.5 to 2 seconds are needed for opera theaters and concert halls. For broadcasting and recording studios and conference rooms, values under one second are frequently used. The recommended reverberation time is always a function of the volume of the room. Several authors give their recommendations. [ 10 ] A good approximation for broadcasting studios and conference rooms is a function of V, the volume of the room in m³. [ 11 ] Ideally, the RT60 should have about the same value at all frequencies from 30 to 12,000 Hz. To get the desired RT60, several acoustic materials can be used, as described in several books. [ 12 ] [ 13 ] A valuable simplification of the task was proposed by Oscar Bonello in 1979. [ 14 ] It consists of using standard acoustic panels of 1 m² hung from the walls of the room (only if the panels are parallel). These panels use a combination of three Helmholtz resonators and a wooden resonant panel. 
This system gives a large acoustic absorption at low frequencies (under 500 Hz) and reduced absorption at high frequencies, to compensate for the typical absorption by people, lateral surfaces, ceilings, etc. Acoustic space is an acoustic environment in which sound can be heard by an observer. The term acoustic space was first mentioned by Marshall McLuhan , a professor and a philosopher. [ 15 ] In reality, there are some properties of acoustics that affect the acoustic space. These properties can either improve or degrade the quality of the sound. The concept of acoustic space is very useful in architecture. Some kinds of buildings need proficient acoustic design to bring out the best in performances, for example, concert halls, auditoriums, theaters, or even cathedrals. [ 18 ] The acoustic impression of a room is determined by several parameters. The task of room acoustics is to influence these parameters by designing the room [ 24 ] in such a way that the acoustic properties of the room are maximized for its intended use. However, not all venues are designed with acoustics in mind. In this case, speaker placement will play a decisive role in the movement of sound waves, affecting clarity, loudness and overall sound quality. [ 25 ] The goals of acoustical room design depend on the intended use. [ 26 ] Since the acoustic properties of rooms for different applications are almost incompatible, it is hardly possible to create a universal room that combines good speech intelligibility and good spatial music perception.
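The reverberation-time discussion above can be made concrete with the classical Sabine formula, RT60 = 0.161·V/A, where V is the room volume in m³ and A is the total absorption (sum of surface areas times their absorption coefficients). The formula is standard in room acoustics, though not quoted in the text, and the example room and coefficients below are hypothetical:

```python
def sabine_rt60(volume_m3, surface_areas_m2, absorption_coeffs):
    """Classical Sabine estimate: RT60 = 0.161 * V / A, where A is the
    total absorption, summed as area times absorption coefficient."""
    A = sum(s, 0.0) if False else sum(
        area * alpha for area, alpha in zip(surface_areas_m2, absorption_coeffs))
    return 0.161 * volume_m3 / A

# Hypothetical 6 x 5 x 3 m room (V = 90 m^3): floor and ceiling of 30 m^2
# each, walls totalling 66 m^2, with rough per-surface coefficients.
rt = sabine_rt60(90.0, [30.0, 30.0, 66.0], [0.1, 0.6, 0.15])
print(round(rt, 2))  # 0.47 — under one second, as suggested for studios
```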
https://en.wikipedia.org/wiki/Room_acoustics