| text | source |
|---|---|
the number of occurrences of a particular word in an email) or real-valued (e.g., a measurement of blood pressure). Often, categorical and ordinal data are grouped together, and this is also the case for integer-valued and real-valued data. Many algorithms work only in terms of categorical data and require that real-valued or integer-valued data be discretized into groups (e.g., less than 5, between 5 and 10, or greater than 10). === Probabilistic classifiers === Many common pattern recognition algorithms are probabilistic in nature, in that they use statistical inference to find the best label for a given instance. Unlike other algorithms, which simply output a "best" label, often probabilistic algorithms also output a probability of the instance being described by the given label. In addition, many probabilistic algorithms output a list of the N-best labels with associated probabilities, for some value of N, instead of simply a single best label. When the number of possible labels is fairly small (e.g., in the case of classification), N may be set so that the probability of all possible labels is output. Probabilistic algorithms have many advantages over non-probabilistic algorithms: They output a confidence value associated with their choice. (Note that some other algorithms may also output confidence values, but in general, only for probabilistic algorithms is this value mathematically grounded in probability theory. Non-probabilistic confidence values can in general not be given any specific meaning, and only used to compare against other confidence values output by the same algorithm.) Correspondingly, they can abstain when the confidence of choosing any particular output is too low. Because of the probabilities output, probabilistic pattern-recognition algorithms can be more effectively incorporated into larger machine-learning tasks, in a way that partially or completely avoids the problem of error propagation. 
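The N-best behaviour described above can be sketched with a softmax over raw classifier scores; the function name, the example labels, and the abstention threshold below are illustrative choices of this sketch, not part of any particular algorithm:

```python
import numpy as np

def n_best(scores, labels, n=3, threshold=0.5):
    """Normalise raw scores into probabilities with a softmax, return the
    n most probable labels with their probabilities, and abstain (return
    None as the best label) when the top probability is below threshold."""
    z = np.exp(scores - np.max(scores))   # numerically stable softmax
    p = z / z.sum()
    order = np.argsort(p)[::-1][:n]       # indices of the n largest probabilities
    ranked = [(labels[i], float(p[i])) for i in order]
    if ranked[0][1] < threshold:
        return None, ranked               # abstain, but still report the list
    return ranked[0][0], ranked
```

Because each label carries a mathematically grounded probability, a caller can compare confidences across instances or defer low-confidence decisions to a downstream component.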
=== Number of important feature variables === Feature
|
{"page_id": 126706, "title": "Pattern recognition"}
|
$$\operatorname{rect}(x'/W)*\sum_{n=0}^{N}\delta(x'-nS)$$

The amplitude is then given by the Fourier transform of this expression as:

$$U(x,z)={\hat f}[\operatorname{rect}(x'/W)]\,{\hat f}\!\left[\sum_{n=0}^{N}\delta(x'-nS)\right]=a\operatorname{sinc}\left(\frac{W\sin\theta}{\lambda}\right)\frac{1-e^{-i2\pi NS\sin\theta/\lambda}}{1-e^{-i2\pi S\sin\theta/\lambda}}$$

==== Intensity ====

The intensity is given by:

$$I(\theta)\propto\operatorname{sinc}^{2}\left(\frac{\pi W\sin\theta}{\lambda}\right)\frac{\sin^{2}(\pi NS\sin\theta/\lambda)}{\sin^{2}(\pi S\sin\theta/\lambda)}$$

The diagram shows the diffraction pattern for a grating with 20 slits, where the width of the slits is 1/5th of the slit separation. The size of the main diffracted peaks is modulated by the diffraction pattern of the individual slits.

=== Other gratings ===

The Fourier transform method above can be used to find the form of the diffraction for any periodic structure where the Fourier transform of the structure is known. Goodman uses this method to derive expressions for the diffraction pattern obtained with sinusoidal amplitude and phase modulation gratings.
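The intensity expression can be evaluated numerically. The wavelength and slit separation below are arbitrary illustrative values; N = 20 slits and W = S/5 follow the diagram described in the text (note `np.sinc(x)` is the normalised sinc, sin(πx)/(πx)):

```python
import numpy as np

wavelength = 500e-9      # illustrative wavelength (m)
S = 5e-6                 # illustrative slit separation (m)
W = S / 5                # slit width = 1/5 of the separation
N = 20                   # number of slits

theta = np.linspace(-0.2, 0.2, 4001)
u = np.sin(theta) / wavelength

# Single-slit envelope: sinc^2 of (W sin(theta)/lambda)
envelope = np.sinc(W * u) ** 2

# Grating interference factor sin^2(pi N S u) / sin^2(pi S u);
# the 0/0 at the principal maxima tends to N^2, patched in afterwards.
with np.errstate(divide='ignore', invalid='ignore'):
    grating = np.sin(np.pi * N * S * u) ** 2 / np.sin(np.pi * S * u) ** 2
grating = np.nan_to_num(grating, nan=float(N**2), posinf=float(N**2))

intensity = envelope * grating
```

The principal maxima sit where sin θ = mλ/S, with their heights modulated by the single-slit envelope, reproducing the behaviour described for the 20-slit diagram.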
|
{"page_id": 32267080, "title": "Fraunhofer diffraction equation"}
|
Arthur J. Bond (1939 – December 30, 2012) was the dean of the School of Engineering and Technology at Alabama A&M University in Alabama, United States, and an activist in the cause of increasing black enrollment and retention in engineering and technology. He was a founding member of the National Society of Black Engineers and part of the team that fought for state funding of engineering at Alabama A&M University. == Education == Bond came to Purdue University in 1957 to study electrical engineering on a National Merit Scholarship and Purdue's Special Merit Scholarship. He described always having been interested in electrical engineering and ending up at Purdue by luck, recounting that "My high school principal's son was interested in engineering at Purdue... One day they were going for a visit to campus, and they asked me if I wanted to come." After two years, however, he had to drop out due to a softball injury. After he recovered, he joined the army, "because Vietnam was looming on the horizon," he would later recount. Bond returned to Purdue in 1966 and graduated with a bachelor's degree in electrical engineering (BSEE) in 1968, a master's degree in electrical engineering (MSEE) in 1969, and a Ph.D. in 1974. At the time, Bond was the 42nd African-American to earn a PhD in engineering, and only the 12th to earn one in electrical engineering. == Student organizing == Bond was a student leader at Purdue during the time when the civil rights movement was in full swing. He would become a founding member of Purdue's Black Cultural Center and a founder of the National Society of Black Engineers. At Purdue, Bond led students to demand that Purdue open up its engineering schools to more blacks and women. Frederick L. Hovde, Purdue's president at the time, was
|
{"page_id": 7392359, "title": "Arthur J. Bond"}
|
In physics, the gyration tensor is a tensor that describes the second moments of position of a collection of particles:

$$S_{mn}\ \overset{\mathrm{def}}{=}\ \frac{1}{N}\sum_{i=1}^{N}r_{m}^{(i)}r_{n}^{(i)}$$

where $r_{m}^{(i)}$ is the $m^{\mathrm{th}}$ Cartesian coordinate of the position vector $\mathbf{r}^{(i)}$ of the $i^{\mathrm{th}}$ particle. The origin of the coordinate system has been chosen such that

$$\sum_{i=1}^{N}\mathbf{r}^{(i)}=0,$$

i.e. in the frame of the center of mass $\mathbf{r}_{CM}$, where

$$\mathbf{r}_{CM}=\frac{1}{N}\sum_{i=1}^{N}\mathbf{r}^{(i)}.$$

Another definition, which is mathematically identical but gives an alternative calculation method, is:

$$S_{mn}\ \overset{\mathrm{def}}{=}\ \frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}(r_{m}^{(i)}-r_{m}^{(j)})(r_{n}^{(i)}-r_{n}^{(j)})$$

Therefore, the x-y component of the gyration tensor for particles in Cartesian coordinates would be:

$$S_{xy}=\frac{1}{2N^{2}}\sum_{i=1}^{N}\sum_{j=1}^{N}(x_{i}-x_{j})(y_{i}-y_{j})$$

In the continuum limit,

$$S_{mn}\ \overset{\mathrm{def}}{=}\ \int d\mathbf{r}\,\rho(\mathbf{r})\,r_{m}$$
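The equivalence of the two definitions (once coordinates are taken in the centre-of-mass frame) is easy to check numerically; the random coordinates below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(size=(50, 3))   # 50 particles in 3-D
r -= r.mean(axis=0)            # shift origin to the centre of mass
N = len(r)

# First definition: S_mn = (1/N) sum_i r_m^(i) r_n^(i)
S1 = r.T @ r / N

# Second definition: S_mn = (1/2N^2) sum_{i,j} (r_m^i - r_m^j)(r_n^i - r_n^j)
diff = r[:, None, :] - r[None, :, :]
S2 = np.einsum('ijm,ijn->mn', diff, diff) / (2 * N**2)
```

`S1` and `S2` agree to machine precision, and the trace of either equals the mean squared distance of the particles from the centre of mass.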
|
{"page_id": 5066430, "title": "Gyration tensor"}
|
Lazer lineup, the Lazer ZX-7, which featured a new 7075 aluminium chassis with improved rigidity, a new floating resin servo mount, and a new sliding motor mount. Since 2021 the Lazer ZX-7 has been out of stock and listed as 'undecided' on Kyosho's website, and it is uncertain whether Kyosho will release a new competition-oriented Lazer-series buggy. Nevertheless, in April 2024 Kyosho announced a bash-racing buggy in the Lazer series, named the Lazer SB. The Lazer SB is based on the Lazer ZX-5 platform and is regarded by some as a re-release of the Lazer ZX-5. The launch date was set for June 2024. All the Lazer series models, in chronological order:
Lazer ZX, released in 1989
Lazer ZX-Sport, released in 1992
Lazer ZX-R, released in 1992
Lazer ZX ALPHA, released in 1992
Lazer ZX-RR (ZX-R Mk2), released in 1994
Lazer ZXS, released in 1997 (limited production)
Lazer 2000, released in 1999
Lazer ZX-S Evo, released in 2000 (limited production)
Unreleased prototype (presumably a prototype of the Lazer ZX-5, which appeared at the IFMAR Worlds 2003; the buggy is belt-driven and has the Lazer ZX-RR's arms and a new type of big-bore shocks)
Lazer ZX-5, released in 2005
Lazer ZX-5 SP, released in 2007
Lazer ZX-5 FS (Firestrike), released in 2008
Lazer ZX-5 FS2, released in 2010
Lazer ZX-5 FS2 SP, released in 2011
Lazer ZX-6, released in 2014
Lazer ZX-6.6, released in 2016
Lazer ZX-7, released in 2019
Lazer SB, to be released in June 2024 (based on the Lazer ZX-5 platform)
== Die-Cast Cars == Since 1992, Kyosho has specialized in creating high
|
{"page_id": 5631336, "title": "Kyosho"}
|
manufacturing system or subsystem's existence. Some of the major phases which may be included in a system life cycle approach are: requirements identification; system design specification; vendor selection; system development and upgrades; installation, testing, and training; and benchmarking of production operations. Management, coordination, and administration functions need to be performed during each phase of the life cycle. Phases may be repeated over time as a system is upgraded or re-engineered to meet changing needs or incorporate new technologies. A software tool integration framework should specify how the tools could be independently designed and developed. The framework would define how CAPE tools would deal with common services, interact with each other, and coordinate problem-solving activities. Although some existing software products and standards currently address the common services issue, the problem of tool interaction remains largely unsolved. The problem of tool interaction is not limited to the domain of computer-aided manufacturing systems engineering—it is pervasive across the software industry.[5] == CAPE's current state == An initial CAPE environment has been established from commercial off-the-shelf (COTS) software packages. This new environment is being used to demonstrate commercially available tools to perform CAPE functions, to develop a better understanding of and define functional requirements for individual engineering tools and the overall environment, and to identify the integration issues which must be addressed to implement compatible environments in the future. Several engineering demonstrations using COTS tools are under development. These demonstrations are designed to illustrate the various types of functions that must be performed in engineering a manufacturing system.
Functions supported by the current COTS environment include: system specification/diagramming, process flowcharting, information modeling, computer-aided design of products, plant layout, material flow analysis, ergonomic workplace design, mathematical modeling, statistical analysis, line balancing, manufacturing simulation, investment analysis, project management, knowledge-based system development, spreadsheets, document preparation, user interface development,
|
{"page_id": 2001956, "title": "Computer-aided production engineering"}
|
immune system can recognize RNA using the intracellular pathogen-associated molecular pattern (PAMP) receptors and extracellular toll-like receptors (TLR). Activation of the receptors leads to a cytokine (IFNγ, interferon gamma) mediated immune response. Common approaches to overcoming the immune response include second-generation chemical modifications. This process involves introducing small chemical modifications, one at a time, to avoid the immune response. However, there are some reports of adverse immune responses in clinical trials employing such modified reagents. There is no fixed answer to issues with immunogenicity in ncRNA therapy. Modified adenovirus vectors have been used extensively in many clinical trials as an ncRNA delivery mechanism. In particular, the adenovirus vector is considered an efficient delivery system due to its stability within live cells and its non-pathogenicity. Even though viral transfections have achieved significant results in basic research, one remaining issue is non-specificity leading to off-target transfections. Further research needs to be done to improve the accuracy of viral transfections for future tests and clinical trials. == ASO Guidelines == In December 2021, the FDA issued draft guidance for the use of ASO drug products. This draft guidance was directed towards sponsor-investigators who are developing individualized investigational antisense oligonucleotide (ASO) drug products for severely debilitating or life-threatening diseases. Severely debilitating corresponds to a disease or condition that causes major irreversible morbidity, while life-threatening means the disease or condition has a likelihood of death unless the course of treatment leads to an endpoint of survival.
Usually, individuals who have a severely debilitating or life-threatening disease have no alternative treatment options, and their diseases will be rapidly progressing, leading to early death and/or devastating or irreversible morbidity within a short time frame without treatment. Drug development is usually targeted for a large number of
|
{"page_id": 70197321, "title": "NcRNA therapy"}
|
1999 International Mathematical Olympiads. === 21st Century === ==== 2000s ==== 2002: Susan Howson became the first woman to be given the Adams Prize, awarded annually by the University of Cambridge to a British mathematician under the age of 40. 2002: Melanie Wood became the first American woman and second woman overall to be named a Putnam Fellow. Putnam Fellows are the top five (or six, in case of a tie) scorers on the William Lowell Putnam Mathematical Competition. 2004: American Melanie Wood became the first woman to win the Frank and Brennie Morgan Prize for Outstanding Research in Mathematics by an Undergraduate Student. It is an annual award given to an undergraduate student in the US, Canada, or Mexico who demonstrates superior mathematics research. 2004: American Alison Miller became the first female gold medal winner on the U.S. International Mathematical Olympiad Team. 2006: Polish-Canadian mathematician Nicole Tomczak-Jaegermann became the first woman to win the CRM-Fields-PIMS prize. 2006: Stefanie Petermichl, a German mathematical analyst then at the University of Texas at Austin, became the first woman to win the Salem Prize, an annual award given to young mathematicians who have worked in Raphael Salem's field of interest, chiefly topics in analysis related to Fourier series. She shared the prize with Artur Avila. 2006: When Olga Gil Medrano became president of the Royal Spanish Mathematical Society in 2006, she was the first woman elected to that position. ==== 2010s ==== 2011: Belgian mathematician Ingrid Daubechies became the first female president of the International Mathematical Union. 2012: Latvian mathematician Daina Taimina became the first woman to win the Euler Book Prize, for her 2009 book Crocheting Adventures with Hyperbolic Planes. 2012: The Working Committee for Women in Mathematics, Chinese Mathematical Society (WCWM-CMS) was founded; it is a national non-profit academic organization
|
{"page_id": 41400343, "title": "Timeline of women in mathematics"}
|
an environment with the declarations \raggedright replacing the flushleft environment, and \raggedleft replacing the flushright environment.

# 4.2.3 Two-sided indentation

A section of text may be displayed by indenting it by an equal amount on both sides, with the environments \begin{quote} text \end{quote} \begin{quotation} text \end{quotation} Additional vertical spacing is inserted above and below the displayed text to separate it visually from the normal text. The text to be displayed may be of any length; it can be part of a sentence, a whole paragraph, or several paragraphs. Paragraphs are separated as usual with an empty line, although no empty lines are needed at the beginning and end of the displayed text since additional vertical spacing is inserted here anyway. The difference between the above two forms is thus: In the quotation environment, paragraphs are marked by extra indentation of the first line, whereas in the quote environment, they are indicated with more vertical spacing between them. The present text is produced within the quotation environment, while the sample above was done with the quote environment. The quotation environment is only really meaningful when the regular text makes use of first-line indentation to show off new paragraphs.

# 4.2.4 Verse indentations

For indenting rhymes, poetry, verses, etc. on both sides, the environment \begin{verse} poem \end{verse} is more appropriate. Stanzas are separated by blank lines while the individual lines of the stanza are divided by the \\ command. If a line is too long for the reduced text width, it will be left and right justified and continued on the next line, which is indented even further. The above indenting schemes may be nested inside one another. Within a quote environment there may be another quote , quotation , or verse environment. Each time, additional indentations
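The environments above and their nesting can be sketched in a minimal document (the document class and filler text are illustrative):

```latex
\documentclass{article}
\begin{document}
Normal text before the display.
\begin{quote}
A short displayed passage, indented equally on both sides.

A second paragraph, indicated by extra vertical spacing.
\begin{verse}
A nested verse line,\\
broken explicitly with the \verb|\\| command.
\end{verse}
\end{quote}
Normal text resumes here.
\end{document}
```

Swapping `quote` for `quotation` in the sketch would mark the second paragraph by first-line indentation instead of extra vertical space.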
|
{"source": 1186, "title": "from dpo"}
|
Overall, it was a really fulfilling year. I found out what I really want to do for my PhD, for real for real this time: Implement and verify decision procedures. It fits exactly the kind of person I am: I enjoy thinking deeply about algorithms and proofs, and implementing kawaii cores with lots of surrounding heuristics to make stuff go fast! I'm super excited for 2025, because I feel like I've finally found product-market fit for my problem statement. It was a sad year too, 'cause I broke up with my long term partner. I'm optimistic that we'll remain close friends, as we were before we began dating, and I am super excited to see what they do next! Cheers, and let me know what I should read, try, and do for next year. After all, New Year's resolutions are still a day or two away! # Mechanical Theorem-Proving by Model Elimination [WIP] - A simplified format for the model elimination procedure # Shostak's Algorithm For Combining Decision Procedures [WIP] - [ # Ragtime Theory - [John Valerio: Stride and Swing Piano]([ - Left hand: four beats per measure, alternate bass note and chord. - Right hand: plays melodies at twice the speed of the left. - Right hand is grouped into *additive rhythms*: `3+2+3`, or `3+3+3`. #### Harmonization With Respect to Example - [Mary Had a Little Lamb: Forward Rag Roll]([ - [Learn to Play ragtime]([ - Good ragtime ornament: To go to `C Eb G`, play the Ddim first, with `D Ab`, which resolves to `Eb G`. See that `D = Ebb`, which is the flat third, and `G = Ab`, which is the 6th of the C minor scale. This gives me more reason to like `Ddim`! - harmonize with respect to 6ths. (e.g. to play $A$,
|
{"source": 3346, "title": "from dpo"}
|
$2^{85.5}$ chosen messages and $2^{128}$ queries. **Distinguisher and Related-Key Attack on the Full AES-256** _Alex Biryukov, Dmitry Khovratovich, and Ivica Nikoli\'{c}_ In this paper we construct a chosen-key distinguisher and a related-key attack on the full 256-bit key AES. We define a notion of {\em differential $q$-multicollision} and show that for AES-256 $q$-multicollisions can be constructed in time $q\cdot 2^{67}$ and with negligible memory, while we prove that the same task for an ideal cipher of the same block size would require at least $O(q\cdot 2^{\frac{q-1}{q+1}128})$ time. Using a similar approach and with the same complexity we can also construct $q$-pseudo collisions for AES-256 in Davies-Meyer mode, a scheme which is provably secure in the ideal-cipher model. We have also computed partial $q$-multicollisions in time $q\cdot 2^{37}$ on a PC to verify our results. These results show that AES-256 cannot model an ideal cipher in theoretical constructions. Finally we extend our results to find the first publicly known attack on the full 14-round AES-256: a related-key distinguisher which works for one out of every $2^{35}$ keys with $2^{120}$ data and time complexity and negligible memory. This distinguisher is translated into a key-recovery attack with total complexity of $2^{131}$ time and $2^{65}$ memory. **Cryptanalysis of C2** _Julia Borghoff, Lars Knudsen, Gregor Leander, and Krystian Matusiewicz_ We present several attacks on the block cipher C2, which is used for encrypting DVD Audio discs and Secure Digital cards. C2 has a 56-bit key and a secret $8$ to $8$ bit S-box. We show that if the attacker is allowed to choose the key, the S-box can be recovered in $2^{24}$ C2 encryptions. Attacking the $56$ bit key for a known S-box can be done in complexity $2^{48}$. Finally, a C2 implementation with a $8$ to $8$ bit secret S-box (equivalent to $2048$
|
{"source": 5647, "title": "from dpo"}
|
In contrast, one can look at the case where the two
|
{"page_id": 69290671, "title": "Linnett double-quartet theory"}
|
The Vertically Generalized Production Model (VGPM) is a model commonly used to estimate primary production within the ocean. The VGPM was designed by Behrenfeld and Falkowski and was originally published in a 1997 article in Limnology and Oceanography. It is one of the most frequently used models for primary production estimation due to its ability to be applied to chlorophyll a data from satellites and its relatively simple design. Chlorophyll a is a common measure of primary production, as it is a main component of photosynthesis. Primary production is often estimated from three variables: the biomass (or amount in weight) of the phytoplankton, the availability of light, and the rate of carbon fixation. The VGPM is now one of the most popular models to use with satellite chlorophyll data because it is surface-light dependent and uses an estimate of the maximum rate of primary production per unit of chlorophyll within the water column, known as PBopt. It also considers environmental factors that often influence primary production, and it allows variables collected by remote satellites to be used to derive primary production without having to physically sample the water. PBopt was found to be dependent on surface chlorophyll, and data for this can be collected using satellites. Satellites can only collect the parameters used to estimate primary production; they cannot calculate it themselves, which is why a model to do so is needed. Because it is a generalized model, it is intended to reflect most accurately the open ocean. Other localized areas, especially coastal regions, may need to incorporate additional factors to get the most accurate representation of primary production. The values produced using the VGPM are estimates, and there will be some level of uncertainty in using this model. == References
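A minimal sketch of how the model's inputs combine, assuming the commonly cited form of the VGPM from the literature; the constant 0.66125, the light-dependence term E0/(E0 + 4.1), and all sample values below are assumptions of this sketch, not quoted from this article:

```python
def vgpm_npp(chl, pb_opt, e0, day_length, z_eu):
    """Depth-integrated net primary production (mg C m^-2 d^-1), following
    the commonly cited form of Behrenfeld & Falkowski's VGPM.

    chl:        surface chlorophyll a (mg m^-3)
    pb_opt:     maximum carbon fixation rate, PBopt (mg C (mg Chl)^-1 h^-1)
    e0:         surface irradiance (mol quanta m^-2 d^-1)
    day_length: hours of daylight
    z_eu:       euphotic depth (m)
    """
    return 0.66125 * pb_opt * (e0 / (e0 + 4.1)) * chl * z_eu * day_length
```

All five inputs are either derivable from satellite data (surface chlorophyll, irradiance, day length) or estimated from it (PBopt, euphotic depth), which is what makes the model attractive for remote sensing.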
|
{"page_id": 71995920, "title": "Vertically Generalized Production Model"}
|
- EIA is most useful once section 4 is complete, as it is then obvious where impacts are greatest
- Ways of using this information to avoid negative impacts should be developed
- This section works best with the developer, as they know the project best
- Using the windfarm example again, construction might take place outside of bird nesting seasons, or removal of hardstanding on a potentially contaminated land site might take place outside of the rainy season.
Non-technical summary (EIS)
- The EIA is in the public domain and is used in the decision-making process
- It is important that the information is available to the public
- This section is a summary that does not include jargon or complicated diagrams
- It should be understood by the informed lay-person
Lack of know-how/technical difficulties
- This section is to advise of any areas of weakness in knowledge
- It can be used to focus areas of future research
- Some developers see the EIA as a starting block for poor environmental management
In 2021, ESG reporting requirements changed in the EU and UK. The EU started enforcing the Sustainable Finance Disclosures Regulation (SFDR), which was created with the purpose of unifying climate risk disclosures across the private sector by 2023. It also requires businesses to report on "principal adverse impacts" for society and the environment. ==== Annexed projects ==== All projects are either classified as Annex 1 or Annex 2 projects. Those lying in Annex 1 are large scale developments such as motorways, chemical works, bridges, power stations, etc. These always require an EIA under the Environmental Impact Assessment Directive (85,337,EEC as amended). Annex 2 projects are smaller in scale than those referred to in Annex 1. Member States must determine whether these projects shall be made subject to an assessment according to a set of criteria set out in Annex
|
{"page_id": 2364800, "title": "Environmental impact assessment"}
|
is a Sun-like star of spectral type G2/G3V located around 138 light-years away that is orbited by two super-Earths with periods of 13 and 46 days and masses 8.3 and 10.1 times that of Earth, respectively. === Brown dwarfs === The discovery of a binary brown dwarf system named Luhman 16 only 6.6 light-years away, the third-closest system to the Solar System, was announced on 11 March 2013. === Deep-sky objects === Among the deep-sky objects of interest in Vela is a planetary nebula known as NGC 3132, nicknamed the 'Eight-Burst Nebula' or 'Southern Ring Nebula' (see accompanying photo). It lies on the border of the constellation with Antlia. NGC 2899 is an unusual red-hued example. This constellation has 32 more planetary nebulae. The Gum Nebula is a faint emission nebula, believed to be the remains of a million-year-old supernova. Within it lies the smaller and younger Vela Supernova Remnant. This is the nebula of a supernova explosion that is believed to have been visible from Earth around 10,000 years ago. The remnant contains the Vela Pulsar, the first pulsar to be identified optically. Nearby is NGC 2736, also known as the Pencil Nebula. HH-47 is a Herbig–Haro object associated with a young star around 1,400 light-years from the Sun that is ejecting material at tremendous speed (up to a million kilometres per hour) into its surroundings. This material glows as it hits the surrounding gas. NGC 2670 is an open cluster located in Vela. It has an overall magnitude of 7.8 and is 3,200 light-years from Earth. The stars of NGC 2670, a Trumpler class II 2 p and Shapley class d cluster, are in a conformation suggesting a bow and arrow. Its class indicates that it is a poor, loose cluster, though detached from the star field. It is somewhat concentrated at
|
{"page_id": 32568, "title": "Vela (constellation)"}
|
a single type. For example:
openarray – Represents arrays of different sizes, sequences, and strings
SomeSignedInt – Represents all the signed integer types
SomeInteger – Represents all the integer types, signed or not
SomeOrdinal – Represents all the basic countable and ordered types, except non-integer numbers
This code sample demonstrates the use of typeclasses in Nim: echo twiceIfIsNumber(67) # Passes an int to the function echo twiceIfIsNumber(67u8) # Passes an uint8 echo twiceIfIsNumber(true) # Passes a bool (which is also an Ordinal) === Influence === According to the language creator, Nim was conceived to combine the best parts of Ada's type system, Python's flexibility, and Lisp's powerful macro system. Nim was influenced by specific characteristics of existing languages, including the following:
Modula-3: traced vs untraced pointers
Object Pascal: type-safe bit sets (set of char), case statement syntax, various type names and filenames in the standard library
Ada: subrange types, distinct types, safe variants – case objects
C++: operator overloading, generic programming
Python: off-side rule
Lisp: macro system, AST manipulation, homoiconicity
Oberon: export marker
C#: async/await, lambda macros
ParaSail: pointer-free programming
=== Uniform function call syntax === Nim supports uniform function call syntax (UFCS) and identifier equality, which provides a large degree of flexibility in use. For example, each of these lines prints "hello world", just with different syntax: === Identifier equality === Nim is almost fully style-insensitive; two identifiers are considered equal if they differ only by capitalization and underscores, as long as the first characters are identical. This is to enable a mixture of styles across libraries: one user can write a library using snake_case as a convention, and it can be used by a different user in camelCase style without issue. === Stropping === The stropping feature allows the use of any name for
|
{"page_id": 45413679, "title": "Nim (programming language)"}
|
driving the display enables a digit by placing a positive voltage on that digit's grid and then placing a positive voltage on the appropriate plates. Electrons flow through that digit's grid and strike those plates that are at a positive potential. If the display had been built with every segment being individually connected, the display would have required 49 wires just for the digits, with more wires being needed for all of the other indicators that can be illuminated. By multiplexing the display, only seven "digit selector" lines and seven "segment selector" lines are needed. The extra indicators (in our example, "VCR", "Hi-Fi", "STEREO", "SAP", etc.) are arranged as if they were segments of an additional digit or two or extra segments of existing digits and are scanned using the same multiplexed strategy as the real digits. Most character-oriented displays drive all the appropriate segments of an entire digit simultaneously. A few character-oriented displays drive only one segment at a time. The display on the Hewlett-Packard HP-35 was an example of this. The calculator took advantage of an effect of pulsed LED operation whereby very brief pulses of light are perceived as brighter than a longer pulse of light with the same time-integral of intensity. A keyboard matrix circuit has a very similar arrangement to a multiplexed display, and has many of the same advantages. To reduce the number of wires even further, some designs "share" wires between a multiplexed display and a keyboard matrix. == Pixel-oriented displays == By comparison, in dot-matrix displays, individual pixels are located at the intersections of the matrix's "row" and "column" lines and each pixel can be individually controlled. Here, the savings in wiring become far more dramatic. For a typical 1024×768 (XGA) computer screen,
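The scanning strategy can be sketched in a few lines; the segment bit assignments (bit 0 = segment a through bit 6 = segment g) and the four-digit layout below are illustrative assumptions, not taken from any particular display driver:

```python
# Hypothetical 7-segment patterns for digits 0-9,
# encoded with bit 0 = segment a ... bit 6 = segment g.
SEG = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]

def scan_frame(value, n_digits=4):
    """One full multiplex cycle: for each digit position (0 = least
    significant), return the (digit_select, segment_pattern) pair the
    driver would assert during that position's time slice."""
    frame = []
    for pos in range(n_digits):
        digit = (value // 10 ** pos) % 10
        frame.append((pos, SEG[digit]))
    return frame
```

Cycling through the frame fast enough that each digit is refreshed many times per second makes all digits appear lit simultaneously, using one select line per digit plus seven shared segment lines instead of one wire per segment.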
|
{"page_id": 10322631, "title": "Multiplexed display"}
|
discriminates between the items and concluding that the item with the larger value also has a larger value on the criterion. The matrix of all objects of the reference class, from which A and B have been taken, and of the cue values which describe these objects constitutes a so-called environment. Gigerenzer and Goldstein, who introduced take-the-best (see Gerd Gigerenzer and Daniel Goldstein (1996)), considered, as a walk-through example, precisely pairs of German cities, yet only those with more than 100,000 inhabitants. The comparison task for a given pair (A,B) of German cities in the reference class consisted in establishing which one has a larger population, based on nine cues. Cues were binary-valued, such as whether the city is a state capital or whether it has a soccer team in the national league. The cue values could be modeled by 1s (for "yes") and 0s (for "no") so that each city could be identified with its "cue profile", i.e., a vector of 1s and 0s, ordered according to the ranking of cues. The question was: How can one infer which of two objects, for example, city A with cue profile (100101010) and city B with cue profile (100010101), scores higher on the established criterion, i.e., population size? The take-the-best heuristic simply compares the profiles lexicographically, just as numbers written in base two are compared: the first cue value is 1 for both, which means that the first cue does not discriminate between A and B. The second cue value is 0 for both, again with no discrimination. The same happens for the third cue value, while the fourth cue value is 1 for A and 0 for B, implying that A is judged as having a higher value on the criterion. In other words, XA > XB if
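The lexicographic comparison just described can be sketched directly (the function name is our own; the two profiles are the article's example):

```python
def take_the_best(profile_a, profile_b):
    """Compare two binary cue profiles (cues ordered by validity)
    lexicographically, as numbers written in base two are compared:
    the first discriminating cue decides."""
    for cue_a, cue_b in zip(profile_a, profile_b):
        if cue_a != cue_b:
            return 'A' if cue_a > cue_b else 'B'
    return 'tie'  # no cue discriminates between the objects

# The article's example: city A (100101010) vs. city B (100010101)
a = (1, 0, 0, 1, 0, 1, 0, 1, 0)
b = (1, 0, 0, 0, 1, 0, 1, 0, 1)
```

Here `take_the_best(a, b)` returns `'A'`: the first three cues do not discriminate, and the fourth is 1 for A but 0 for B, so A is judged to have the larger population.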
|
{"page_id": 2238325, "title": "Take-the-best heuristic"}
|
old speeds were maintained. Meanwhile, the State of the Netherlands v. Urgenda Foundation court case was decided in favour of its plaintiff Urgenda (initially in June 2015, upheld on appeal in October 2018, and finally confirmed by the Supreme Court of the Netherlands on 20 December 2019), who successfully forced the government to implement the necessary measures to reduce the Netherlands' CO2 emissions by 25% from 1990 levels by 2020. Although the government was free to choose which measures it would take to achieve this reduction, the plaintiff and other environmentalists had suggested throughout the legal process that lowering the speed limit was one of several effective options to do so. Similar environmental arguments for speed limits have been proposed in Germany. As one of several methods to mitigate the environmental impact of aviation, a shift to other modes of transport or a switch from short-haul air traffic to high-speed trains has been proposed. In several countries in Europe, increasingly in the 2010s and early 2020s, some governments have even imposed a short-haul flight ban on all airlines, while many governmental agencies, commercial companies, universities, and NGOs have restricted or prohibited their employees from taking short-haul flights for journeys that can reasonably be made by train. In the field of urban planning, there are concepts for walkability, the compact city (or 'city of short distances'), New Urbanism (or its variant New Pedestrianism), and car-free living. In research policy, there are demands to give more consideration to the consequences of motorised private transport in the form of practice-oriented and solution-oriented research. === Further development of local public transport === According to a 2015 study by the Verkehrsclub Deutschland, local public transport in Germany was not customer-friendly enough. Cryptic route networks, opaque fare systems, ticket machines that cannot be
|
{"page_id": 70171813, "title": "Mobility transition"}
|
The oxygen-burning process is a set of nuclear fusion reactions that take place in massive stars that have used up the lighter elements in their cores. Oxygen-burning is preceded by the neon-burning process and succeeded by the silicon-burning process. As the neon-burning process ends, the core of the star contracts and heats until it reaches the ignition temperature for oxygen burning. Oxygen-burning reactions are similar to those of carbon burning; however, they must occur at higher temperatures and densities due to the larger Coulomb barrier of oxygen. == Reactions == Oxygen ignites in the temperature range of (1.5–2.6)×10⁹ K and in the density range of (2.6–6.7)×10¹² kg·m⁻³. The principal reactions are given below, where the branching ratios assume that the deuteron channel is open (at high temperatures): Near 2×10⁹ K, the oxygen-burning reaction rate is approximately 2.8×10⁻¹²·(T₉/2)³³, where T₉ is the temperature in billions of kelvins. Overall, the major products of the oxygen-burning process are ²⁸Si, ³²,³³,³⁴S, ³⁵,³⁷Cl, ³⁶,³⁸Ar, ³⁹,⁴¹K, and ⁴⁰,⁴²Ca. Of these, ²⁸Si and ³²S constitute 90% of the final composition. The oxygen fuel within the core of the star is exhausted after 0.01–5 years, depending on the star's mass and other parameters. The silicon-burning process, which follows, creates iron, but this iron cannot react further to create energy to support the star. During the oxygen-burning process, proceeding outward, there is an oxygen-burning shell, followed by a neon shell, a carbon shell, a helium shell, and a hydrogen shell. The oxygen-burning process is the last nuclear reaction in the star's core which does not proceed via the alpha process. == Pre-oxygen burning == Although ¹⁶O is lighter than neon, neon burning occurs before oxygen burning, because ¹⁶O is a doubly-magic nucleus and hence extremely stable. Compared to oxygen, neon is much less stable. As a result, neon burning
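The T³³ scaling quoted above makes the burning rate extraordinarily sensitive to temperature, which is one reason the oxygen-burning phase is so brief. A small Python illustration (the prefactor and exponent are those of the approximation in the text; the prefactor's units are left unspecified, as in the source):

```python
def oxygen_burning_rate(T9):
    """Approximate oxygen-burning rate near T = 2e9 K, with T9 the
    temperature in units of 1e9 K: rate ~ 2.8e-12 * (T9/2)**33."""
    return 2.8e-12 * (T9 / 2.0) ** 33

# A 10% rise in temperature multiplies the rate by 1.1**33, roughly a
# factor of 23 -- small temperature changes produce huge rate changes.
boost = oxygen_burning_rate(2.2) / oxygen_burning_rate(2.0)
```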
|
{"page_id": 217720, "title": "Oxygen-burning process"}
|
= {\displaystyle CX\simeq X\star \{v\}=} the join of X {\displaystyle X} with a single point v ∉ X {\displaystyle v\not \in X} .: 76 == Examples == Here we often use a geometric cone ( C X {\displaystyle CX} where X {\displaystyle X} is a non-empty compact subspace of Euclidean space). The considered spaces are compact, so we get the same result up to homeomorphism. The cone over a point p of the real line is a line-segment in R 2 {\displaystyle \mathbb {R} ^{2}} , { p } × [ 0 , 1 ] {\displaystyle \{p\}\times [0,1]} . The cone over two points {0, 1} is a "V" shape with endpoints at {0} and {1}. The cone over a closed interval I of the real line is a filled-in triangle (with one of the edges being I), otherwise known as a 2-simplex (see the final example). The cone over a polygon P is a pyramid with base P. The cone over a disk is the solid cone of classical geometry (hence the concept's name). The cone over a circle given by { ( x , y , z ) ∈ R 3 ∣ x 2 + y 2 = 1 and z = 0 } {\displaystyle \{(x,y,z)\in \mathbb {R} ^{3}\mid x^{2}+y^{2}=1{\mbox{ and }}z=0\}} is the curved surface of the solid cone: { ( x , y , z ) ∈ R 3 ∣ x 2 + y 2 = ( z − 1 ) 2 and 0 ≤ z ≤ 1 } . {\displaystyle \{(x,y,z)\in \mathbb {R} ^{3}\mid x^{2}+y^{2}=(z-1)^{2}{\mbox{ and }}0\leq z\leq 1\}.} This in turn is homeomorphic to the closed disc. More general examples:: 77, Exercise.1 The cone over an n-sphere is homeomorphic to the closed (n + 1)-ball. The cone over an n-ball is also homeomorphic to
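The first of the general examples can be made explicit; a sketch in the convention used above, where the cone collapses X × {1} to the apex (the formula is the standard construction, not taken verbatim from the text):

```latex
C S^n \;=\; \bigl(S^n \times [0,1]\bigr)\big/\bigl(S^n \times \{1\}\bigr)
\;\xrightarrow{\;\cong\;}\; D^{n+1},
\qquad [(x,t)] \;\longmapsto\; (1-t)\,x ,
```

which is well defined because every point of S^n × {1} is sent to the origin, and is a continuous bijection from a compact space to a Hausdorff space, hence a homeomorphism.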
|
{"page_id": 782162, "title": "Cone (topology)"}
|
management. Employees are resources, but they’re also humans (thus, “Human Resources”). We’re developing people, and people have whole lives. Navel gazing over my own already established norms is a great way to avoid being flexible enough to learn new things and expand my abilities to nurture the people who report to me. Vince Medlock* July 6, 2016 at 9:15 am I must admit that I long for the days when employees were considered people. Just people. Not a resource to be managed like steel or lumber. I miss our Personnel department. sstabeler* August 19, 2016 at 4:43 pm The ironic thing is that the term “Human Resources” was supposed to emphasise that employees are more than just the cost of their salary. LK* July 6, 2016 at 2:23 pm Absolutely!! Seemed a clear case of taking advantage of her goodwill as well!! I had a manager like that — (actually I’ve had several) I had more training and experience but never the title so I always worked my way up. A manager was hired in over me, that had previously been a manager for the company, 10 years prior. I had to train him!! Then he would meander in and out of the office as he pleased because his title allowed for it, meanwhile I was doing all of the work!! Eventually, it caught up to him when the regional manager would show up unannounced. I never offered up information but I dang sure didn’t cover it up either!! Often, he wouldn’t return until the end of the day and it was slightly entertaining to see him stammer to come up with details to explain his lengthy absences. One day I ended up having dental pain so bad that on my lunch
|
{"source": 1738, "title": "from dpo"}
|
λ)M′′ for M′, M′′ ∈ M, then M′, M′′ ∈ Def(M̃) by the previous paragraph. Observe that “deformation of” is a transitive relation, hence M′, M′′ ∈ Def(M), as desired. □

The polyhedral characterization of Def(M) immediately translates into an algebraic characterization of finite-menu extreme points: by Lemma B.5, M ∈ ext M if and only if there is a non-zero direction (t, s) ∈ R^{d×|ext M|} × R^k such that the two candidate solutions ((a ± t_a)_{a∈ext M}, (c ± s)) solve the linear system (14) to (16). Using condition (4) in Lemma B.3, we can state an equivalent algebraic characterization of finite-menu extreme points that needs only minimal information about the underlying mechanism. For a mechanism x ∈ X, let

E = { (a, b) ∈ menu(x) × menu(x) : ∃θ ∈ Θ, {a, b} = argmax_{ã ∈ menu(x)} ã · θ }    (17)

denote the set of pairs (a, b) of menu items for which there exists a type whose favorite allocations are {a, b}. These are exactly the edges of the extended menu associated with x. For an allocation a ∈ menu(x), also define

F(a) = { H ∈ F | a ∈ H }.    (18)

Theorem B.6. Let x ∈ X have finite menu size. Then x ∈ ext X if and only if all solutions ((φ_a)_{a∈menu(x)}, (λ_{ab})_{(a,b)∈E}) ∈ R^{d×|menu(x)|} × R^{|E|}_{+} to

λ_{ab} (a − b) = φ_a − φ_b    ∀(a, b) ∈ E    (19)
φ_a · n_H = c_H    ∀a ∈ menu(x), H ∈ F(a)    (20)
φ_a · n_H ≤ c_H    ∀a
|
{"source": 3886, "title": "from dpo"}
|
g in our notation, and think of elements of C[G] as linear combinations of group elements, with the obvious multiplication.

9.7 Proposition. There is a natural bijection between modules over C[G] and complex representations of G.

Proof. On each representation (V, ρ) of G, we let the group algebra act in the obvious way, ρ(∑_g φ_g · g)(v) = ∑_g φ_g ρ(g)(v). Conversely, given a C[G]-module M, the action of Ce₁ makes it into a complex vector space and a group action is defined by embedding G inside C[G] as explained above.

We can reformulate the discussion above (perhaps more obscurely) by saying that a group homomorphism ρ : G → GL(V) extends naturally to an algebra homomorphism from C[G] to End(V); V is a module over End(V) and the extended homomorphism makes it into a C[G]-module. Conversely, any such homomorphism defines by restriction a representation of G, from which the original map can then be recovered by linearity. Choosing a complete list (up to isomorphism) of irreducible representations V of G gives a homomorphism of algebras,

⊕_V ρ_V : C[G] → ⊕_V End(V),    φ ↦ (ρ_V(φ)).    (9.8)

Now each space End(V) carries two commuting actions of G: these are left composition with ρ_V(g), and right composition with ρ_V(g)⁻¹. For an alternative realisation, End(V) is isomorphic to V ⊗ V* and G acts separately on the two factors. There are also two commuting actions of G on C[G], by multiplication on the left and on the right, respectively: (λ(h)φ)(g) = φ(h⁻¹g), (ρ(h)φ)(g) = φ(gh). It is clear from our construction that the two actions of G
|
{"source": 5879, "title": "from dpo"}
|
strategy. Joy argues that this is why meat is rarely served with the animal's head or other intact body parts. === Justification === Joy introduced the idea of the "Three Ns of Justification", writing that meat-eaters regard meat consumption as "normal, natural, and necessary". She argues that the "Three Ns" have been invoked to justify other ideologies, including slavery and denying women the right to vote, and are widely recognized as problematic only after the ideology they support has been dismantled. The argument holds that people are conditioned to believe that humans evolved to eat meat, that it is expected of them, and that they need it to survive or be strong. These beliefs are said to be reinforced by various institutions, including religion, family and the media. Although scientists have shown that humans can get enough protein in their diets without eating meat, the belief that meat is required persists. Moreover, a 2022 study published in PNAS calls into question the impact of meat consumption on shaping the evolution of the human species. Building on Joy's work, psychologists conducted a series of studies in the United States and Australia, published in 2015, that found the great majority of meat-eaters' stated justifications for consuming meat were based on the "Four Ns" – "natural, normal, necessary, and nice". The arguments were that humans are omnivores (natural), that most people eat meat (normal), that vegetarian diets are lacking in nutrients (necessary), and that meat tastes good (nice). Meat-eaters who endorsed these arguments more strongly reported less guilt about their dietary habits. They tended to objectify animals, have less moral concern for them and attribute less consciousness to them. They were also more supportive of social inequality and hierarchical ideologies, and less proud of their consumer choices. Helena Pedersen, in her review of
|
{"page_id": 39193418, "title": "Carnism"}
|
that indicates the temperature. == Variable stars == Variable stars have periodic or random changes in luminosity because of intrinsic or extrinsic properties. Of the intrinsically variable stars, the primary types can be subdivided into three principal groups. During their stellar evolution, some stars pass through phases where they can become pulsating variables. Pulsating variable stars vary in radius and luminosity over time, expanding and contracting with periods ranging from minutes to years, depending on the size of the star. This category includes Cepheid and Cepheid-like stars, and long-period variables such as Mira. Eruptive variables are stars that experience sudden increases in luminosity because of flares or mass ejection events. This group includes protostars, Wolf-Rayet stars, and flare stars, as well as giant and supergiant stars. Cataclysmic or explosive variable stars are those that undergo a dramatic change in their properties. This group includes novae and supernovae. A binary star system that includes a nearby white dwarf can produce certain types of these spectacular stellar explosions, including the nova and a Type Ia supernova. The explosion is created when the white dwarf accretes hydrogen from the companion star, building up mass until the hydrogen undergoes fusion. Some novae are recurrent, having periodic outbursts of moderate amplitude. Stars can vary in luminosity because of extrinsic factors, such as eclipsing binaries, as well as rotating stars that produce extreme starspots. A notable example of an eclipsing binary is Algol, which regularly varies in magnitude from 2.1 to 3.4 over a period of 2.87 days. == Structure == The interior of a stable star is in a state of hydrostatic equilibrium: the forces on any small volume almost exactly counterbalance each other. The balanced forces are inward gravitational force and an outward force due to the pressure gradient within the star. The pressure
|
{"page_id": 26808, "title": "Star"}
|
C} is Chézy's coefficient [length1/2/time]. Values of this coefficient must be determined experimentally. Typically, these range from 30 m1/2/s (small rough channel) to 90 m1/2/s (large smooth channel). For many years following Antoine de Chézy's development of this formula, researchers assumed that C {\displaystyle C} was a constant, independent of flow conditions. However, additional research proved the coefficient's dependence on the Reynolds number as well as a channel's roughness. Accordingly, although the Chézy formula does not appear to incorporate either of these terms, the Chézy coefficient empirically and indirectly represents them. == Exploring Chézy's similarity parameter == The relationship between linear momentum and deformable fluid bodies is well explored, as are the Navier–Stokes equations for incompressible flow. However, exploring the relationships foundational to the Chézy formula can be helpful towards understanding the formula in full. To understand the Chézy similarity parameter, a simple linear momentum equation can help summarize the conservation of momentum of a control volume uniformly flowing through an open channel: ∑ F c v = ∂ ∂ t ∫ C V V ρ d V + ∫ C S V ρ V ⋅ n ^ d A {\displaystyle \sum F_{cv}={\partial \over \partial t}\int \limits _{CV}V\rho \,{dV}+\int \limits _{CS}V\rho V\cdot {\hat {n}}\,{dA}} Where the sum of forces on the contents of a control volume in the open channel is equal to the sum of the time rate of change of the linear momentum of the contents of the control volume, plus the net rate of flow of linear momentum through the control surface. The momentum principle may always be used for hydrodynamic force calculations. As long as uniform flow can be assumed, applying the linear momentum equation to a river channel flowing in one dimension means that momentum remains conserved and the forces are balanced in the direction
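The Chézy formula itself, V = C·√(R·S) with R the hydraulic radius and S the channel slope, can be evaluated directly; a minimal Python sketch using the coefficient range quoted above (the specific channel geometry is hypothetical):

```python
import math

def chezy_velocity(C, hydraulic_radius, slope):
    """Mean flow velocity from the Chezy formula V = C * sqrt(R * S).
    C in m^(1/2)/s, hydraulic radius R in m, slope S dimensionless."""
    return C * math.sqrt(hydraulic_radius * slope)

# Large smooth channel (C ~ 90 m^(1/2)/s), R = 2 m, S = 0.001:
v_smooth = chezy_velocity(90.0, 2.0, 0.001)
# Small rough channel (C ~ 30 m^(1/2)/s), same geometry:
v_rough = chezy_velocity(30.0, 2.0, 0.001)
```

The threefold spread in C produces a threefold spread in predicted velocity, which is why the empirical determination of the coefficient matters so much in practice.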
|
{"page_id": 17419853, "title": "Chézy formula"}
|
In astronomy, the curve of growth describes the equivalent width of a spectral line as a function of the column density of the material from which the spectral line is observed. == Shape == The curve of growth describes the dependence of the equivalent width W {\displaystyle W} , which is an effective measure of the strength of a feature in an emission or absorption spectrum, on the column density N {\displaystyle N} . Because the spectrum of a single spectral line has a characteristic shape, broadened by various processes from a pure line, the strength of the feature develops non-trivially as the optical depth τ {\displaystyle \tau } of the absorbing or emitting medium increases. In the case of the combined natural line width, collisional broadening and thermal Doppler broadening, the spectrum can be described by a Voigt profile and the curve of growth exhibits the approximate dependencies depicted on the right. For low optical depth τ ≪ 1 {\displaystyle \tau \ll 1} corresponding to low N {\displaystyle N} , increasing the thickness of the medium leads to a linear increase of absorption and the equivalent width grows linearly, W ∝ N {\displaystyle W\propto N} . Once the central Gaussian part of the profile saturates, τ ≈ 1 {\displaystyle \tau \approx 1} , the Gaussian tails lead to a less effective growth, W ∝ √(ln N) {\displaystyle W\propto {\sqrt {\ln N}}} . Eventually, the growth is dominated by the Lorentzian tails of the profile, which decay as ∼ 1 / x 2 {\displaystyle \sim 1/x^{2}} , producing a dependence of W ∝ √N {\displaystyle W\propto {\sqrt {N}}} . == References ==
|
{"page_id": 58472849, "title": "Curve of growth"}
|
The Cluster Innovation Centre (DU CIC) is a Government of India-funded center established under the aegis of the University of Delhi. It was founded in 2011 by Prof. Dinesh Singh, the then Vice Chancellor of the University of Delhi, and introduced Innovation as a credit-based course for the first time in India. == Establishment == The Cluster Innovation Centre was conceptualized to foster innovation and connect academic research with practical applications. It was established during the tenure of Prof. Dinesh Singh, then Vice Chancellor of the University of Delhi. The National Innovation Council proposed the development of 20 University Innovation Centres across the country, with CIC serving as the prototype for this initiative. == Objectives == CIC aims to develop a culture of innovation within the academic system and to connect research with societal needs. Its primary objectives include promoting innovative degree programs, educating students and faculty through innovation-focused schemes, supporting application-oriented research, and facilitating collaborations with industries, academia, and other stakeholders. It also focuses on commercializing innovations to make them accessible to end users, addressing real-world problems through student projects, and developing affordable and sustainable innovations that benefit a broad audience. == Academic Programs == CIC offers interdisciplinary academic programs spanning undergraduate, postgraduate, and doctoral levels. === Undergraduate Programs === The Bachelor of Technology (B.Tech.) in Information Technology and Mathematical Innovations is a four-year program that integrates mathematics and information technology to cultivate an innovation-driven mindset. Students in this program can earn a minor degree in fields such as electronics, management, or computational biology. 
The Bachelor of Arts (Honours) in Humanities and Social Sciences, offered under the "Meta College" concept, is another four-year interdisciplinary program. It enables students to design their degree by majoring in fields like environmental science, tourism, geography, literature, media and communication studies, natural sciences, or
|
{"page_id": 38840063, "title": "Cluster Innovation Centre"}
|
almost exactly to the 50% that would have been expected by chance guessing alone. When the level of the signal was elevated by 14 dB or more, the test subjects were able to detect the higher noise floor of the CD-quality loop easily. The authors commented: Now, it is very difficult to use negative results to prove the inaudibility of any given phenomenon or process. There is always the remote possibility that a different system or more finely attuned pair of ears would reveal a difference. But we have gathered enough data, using sufficiently varied and capable systems and listeners, to state that the burden of proof has now shifted. Further claims that careful 16/44.1 encoding audibly degrades high resolution signals must be supported by properly controlled double-blind tests. Following criticism that the original published results of the study were not sufficiently detailed, the AES published a list of the audio equipment and recordings used during the tests. Since the Meyer–Moran study in 2007, approximately 80 studies have been published on high-resolution audio, about half of which included blind tests. Joshua Reiss performed a meta-analysis on 20 of the published tests that included sufficient experimental detail and data. In a paper published in the July 2016 issue of the AES Journal, Reiss says that, although the individual tests had mixed results, and that the effect was "small and difficult to detect," the overall result was that trained listeners could distinguish between high-resolution recordings and their CD equivalents under blind conditions: "Overall, there was a small but statistically significant ability to discriminate between standard-quality audio (44.1 or 48 kHz, 16 bit) and high-resolution audio (beyond standard quality). When subjects were trained, the ability to discriminate was far more significant." Hiroshi Nittono pointed out that the results in Reiss's paper showed that
|
{"page_id": 74784, "title": "Super Audio CD"}
|
In computer architecture, bit-serial architectures send data one bit at a time, along a single wire, in contrast to bit-parallel word architectures, in which data values are sent all bits of a word at once along a group of wires. All digital computers built before 1951, and most of the early massive parallel processing machines, used a bit-serial architecture—they were serial computers. Bit-serial architectures were developed for digital signal processing in the 1960s through 1980s, including efficient structures for bit-serial multiplication and accumulation. The HP Nut processor used in many Hewlett-Packard calculators operated bit-serially. For an arbitrary integer N, N serial processors will often take less FPGA area and have a higher total performance than a single N-bit parallel processor. == See also == Serial computer 1-bit computing Bit banging Bit slicing BKM algorithm CORDIC == References == == External links == Application of FPGA technology to accelerate the finite-difference time-domain (FDTD) method BIT-Serial FIR filters with CSD Coefficients for FPGAs
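The bit-serial principle is easy to see in software; a minimal Python sketch of a bit-serial adder — a single full adder plus one carry flip-flop, consuming operands least significant bit first (names and bit-list representation are illustrative):

```python
def bit_serial_add(a_bits, b_bits):
    """Add two equal-length bit streams, LSB first, one bit per 'clock',
    using a single full adder and a carry flip-flop."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)            # sum bit produced this clock
        carry = (a & b) | (carry & (a ^ b))  # carry into the next clock
    out.append(carry)                        # final carry-out
    return out

def to_lsb_first(n, width):
    """Integer -> list of bits, least significant bit first."""
    return [(n >> i) & 1 for i in range(width)]

def from_lsb_first(bits):
    """List of bits (LSB first) -> integer."""
    return sum(b << i for i, b in enumerate(bits))
```

An N-bit addition takes N clocks but only one adder's worth of hardware, which is the area/speed trade-off the FPGA remark above refers to.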
|
{"page_id": 17152542, "title": "Bit-serial architecture"}
|
Scenic design, also known as stage design or set design, is the creation of scenery for theatrical productions including plays and musicals. The term can also be applied to film and television productions, where it may be referred to as production design. Scenic designers create sets and scenery to support the overall artistic goals of the production. Scenic design is an aspect of scenography, which includes theatrical set design as well as light and sound. Modern scenic designers are increasingly taking on the role of co-creators in the artistic process, shaping not only the physical space of a production but also influencing its blocking, pacing, and tone. As Richard Foreman famously stated, scenic design is a way to "create the world through which you perceive things happening." These designers work closely with the director, playwright, and other creative members of the team to develop a visual concept that complements the narrative and emotional tone of the production. Notable scenic designers who have embraced this collaborative role include Robin Wagner, Eugene Lee, and Jim Clayburgh. == History == The origins of scenic design may be found in the outdoor amphitheaters of ancient Greece, where acts were staged using basic props and scenery. Improvements in stage equipment and the development of perspective drawing during the Renaissance allowed more complex and realistic sets to be created. Scenic design evolved in conjunction with technological and theatrical improvements over the 19th and 20th centuries. === The New Stagecraft Movement === In the early 20th century, American scenic design underwent a dramatic transformation with the introduction of the New Stagecraft. Drawing inspiration from European pioneers like Adolphe Appia and Edward Gordon Craig, American designers began moving away from the overly detailed naturalism of the 19th century. Instead, they embraced simplified realism, abstraction, mood-driven environments, and
|
{"page_id": 416779, "title": "Scenic design"}
|
progress made by Serre and Ribet, this approach to Fermat was widely considered unusable as well, since almost all mathematicians saw the Taniyama–Shimura–Weil conjecture itself as completely inaccessible to proof with current knowledge.: 203–205, 223, 226 For example, Wiles's ex-supervisor John Coates stated that it seemed "impossible to actually prove",: 226 and Ken Ribet considered himself "one of the vast majority of people who believed [it] was completely inaccessible".: 223 == Andrew Wiles == Hearing of Ribet's 1986 proof of the epsilon conjecture, English mathematician Andrew Wiles, who had studied elliptic curves and had a childhood fascination with Fermat, decided to begin working in secret towards a proof of the Taniyama–Shimura–Weil conjecture, since it was now professionally justifiable, as well as because of the enticing goal of proving such a long-standing problem. Ribet later commented that "Andrew Wiles was probably one of the few people on earth who had the audacity to dream that you can actually go and prove [it].": 223 == Announcement and subsequent developments == Wiles initially presented his proof in 1993. It was finally accepted as correct, and published, in 1995, following the correction of a subtle error in one part of his original paper. His work was extended to a full proof of the modularity theorem over the following six years by others, who built on Wiles's work. === Announcement and final proof (1993–1995) === During 21–23 June 1993, Wiles announced and presented his proof of the Taniyama–Shimura conjecture for semistable elliptic curves, and hence of Fermat's Last Theorem, over the course of three lectures delivered at the Isaac Newton Institute for Mathematical Sciences in Cambridge, England. There was a relatively large amount of press coverage afterwards. After the announcement, Nick Katz was appointed as one of the referees to review Wiles's manuscript. In the
|
{"page_id": 21950759, "title": "Wiles's proof of Fermat's Last Theorem"}
|
Although this affects the magnitude of the impedance only at relatively high frequencies, its effect on the phase at line frequency causes a noticeable error at low power factor. The major disadvantage of using a shunt is that it is fundamentally a resistive element: its power loss is proportional to the square of the current passing through it, so it is rarely used for high-current measurements. Fast response for measuring high-impulse or heavy-surge currents is a common requirement for shunt resistors. In 1981, Malewski designed a circuit to eliminate the skin effect, and later, in 1999, the flat-strap sandwich shunt (FSSS) was introduced, based on a flat-strap sandwich resistor. The properties of the FSSS in terms of response time, power loss and frequency characteristics are the same as those of the shunt resistor, but the cost is lower and the construction technique is less sophisticated, compared to the Malewski and coaxial shunts. The intrinsic resistance of a conducting element, such as a copper trace on a printed circuit board, can be used as a sensing resistor. This saves space and component cost. The voltage drop across a copper trace is very low due to its very low resistance, making a high-gain amplifier mandatory in order to get a useful signal. Accuracy is limited by the initial manufacturing tolerance of the trace and the significant temperature coefficient of copper. A digital controller may apply corrections to improve the measurement. A significant drawback of a resistor sensor is the unavoidable electrical connection between the current to be measured and the measurement circuit. An isolation amplifier can provide electrical isolation between the measured current and the rest of the measurement circuit. However, these amplifiers are expensive and can also limit the bandwidth, accuracy and thermal drift of
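The digital temperature correction mentioned above can be sketched directly; a minimal Python example (the trace values are hypothetical; copper's temperature coefficient of roughly 0.393 %/K is the standard textbook figure):

```python
ALPHA_CU = 0.00393  # copper temperature coefficient of resistance, per K

def trace_resistance(r_ref, temp_c, temp_ref=25.0):
    """Resistance of a copper trace at temp_c, given its value r_ref at
    the reference temperature: R(T) = R_ref * (1 + alpha * (T - T_ref))."""
    return r_ref * (1.0 + ALPHA_CU * (temp_c - temp_ref))

def corrected_current(v_sense, r_ref, temp_c):
    """Current estimate with the controller's temperature correction:
    divide the sensed voltage by the temperature-compensated resistance."""
    return v_sense / trace_resistance(r_ref, temp_c)

# A nominal 1 mOhm trace at 75 C is almost 20% higher in resistance;
# ignoring this would overestimate the current by the same factor.
i_corrected = corrected_current(1e-3, 1e-3, 75.0)
```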
|
{"page_id": 50825027, "title": "Current sensing"}
|
computation within which it is involved: p_i × p_{i+1} × p_{i+2}, where p_i = P(w_{i,j_i} | t_{i,j_i}) P(t_{i,j_i} | t_{i−1,j_{i−1}}, t_{i−2,j_{i−2}}). These computations proceed from left to right, so that the optimal tag found for position i will be used in the computation of the optimal word/tag pairs at positions i + 1 and i + 2. The experimental results reported in El-Bèze et al. (1994) indicate success levels slightly superior to ours. This may be explained in part by the use of a better language model (their HMM is three-tag, ours is two-tag). It must be said, however, that their test corpus was relatively small (in all, a little over 8000 words), and that the performances varied wildly from text to text, with average distances between errors varying between 100 and 600 words. A method which exploits different sources of information in the candidate selection task is described in Yarowsky (1994b): this system relies on local context (e.g., words within a 2- or 4-word window around the current word), global context (e.g., a 40-word window), part-of-speech of surrounding words, etc. These are combined within a unifying framework known as decision lists. Within this framework, the system bases its decision for each individual candidate selection on the single most reliable piece of evidence. Although the work described in Yarowsky (1994b) does address the problem of French automatic accentuation, it mostly focuses on the Spanish language. Furthermore, the evaluation focuses on specific ambiguities, from which it is impossible to get a global performance measure. As a result, it is unfortunately not currently possible to compare these findings with ours in a quantitative way. In Yarowsky (1994a), the author compares his method with one based on the stochastic part-of-speech tagger of Church (1988), a method which ob-
|
{"source": 972, "title": "from dpo"}
|
with the cryptographic module in the secure world. The software architecture diagram is shown in Figure 8. The TZ library exposes three types of functions to the user application: set key, encrypt, and decrypt. Upon receiving a request from the user process, the TZ library prepares the memory buffers and structures and then invokes the system call using the swi instruction. Next, the TZ driver in the kernel space copies the user buffer into the kernel and switches into the security monitor via the smc instruction. Once the AES crypto module receives the request from the TZ service manager, it processes the request. When the request is to encrypt, the crypto module operates on the normal world memory, reading in plaintext and writing out ciphertext. The code and data of the cryptographic module are stored within the secure world memory and thus protected from the normal world. Cache is enabled in all components, including the stack, code, and data of all secure components. In our attack demonstration, the encryption key is chosen at random by the TZ service manager in the secure world. Following the design philosophy of TrustZone, we assume that most of the I/O and content consumers reside in the normal world. Therefore, the AES crypto module in the secure world offers encryption and decryption service to processes in the normal world. Each user-level process can trigger an encryption via the TZ support library. Therefore, the user-level process knows the start and the end of encryption as well as the resulting ciphertext. 5.3. TruSpy Attack from Normal World OS We implement the OS-level attack as a kernel module. The prime and probe steps are implemented in assembly to avoid cache pollution during the probing process. We assume that the physical address of the AES T-table is
|
{"source": 2337, "title": "from dpo"}
|
bins, π (m×1) is the vector of initial guesses for state probabilities, m is the number of hidden states, Γ (m×m) is the initial guess for the transition probability matrix, Λ (N×m) is the initial guess for the firing rates matrix, A (m×m) = {aij} is the matrix of parameters of the Dirichlet prior (Eq. 19), dt is the bin width (∆t in Eq. 7), maxiter is the maximum number of iterations, and tol = {tol1, tol2} is a vector of tolerance levels for convergence.
> 1: LP ← 1 ▷ total log-posterior ln P(O|Θ) + ln P(Θ)
> 2: for k = 1 to maxiter do
> 3: from K and Λ, compute the emission prob. matrix E (N×T) with entries eit = P(k(t)|Λi) ▷ Eq. 7
> 4: E step: compute forward and backward probabilities α, β from {π, Γ, E} ▷ Eqs. B.19–B.22
> 5: compute LLnext, qi, ξij from α, β and {π, Γ, E} ▷ Eqs. B.23–B.25
> 6: M step: compute Θnext = {πnext, Γnext, Λnext} ▷ Eqs. 14, 22, 16
> 7: LPnext ← LLnext + ln P(Θ) ▷ Eq. B.29 with current value of Γ
> 8: if ||Γnext − Γ|| + ||Λnext − Λ|| < tol1 and |LPnext − LP| < tol2 then
> 9: return π, Γ, Λ, LP ▷ training complete
> 10: else
> 11: π ← πnext, Γ ← Γnext, Λ ← Λnext, LP ← LPnext ▷ iteration complete; go to line 3
> 12: end if
> 13: end for ▷ end loop over iterations
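The E and M steps in the listing above can be sketched in NumPy. This is our own illustration, not the authors' code: it implements plain maximum-likelihood EM (Baum-Welch) for a Poisson-emission HMM, omitting the Dirichlet prior on the transition matrix, and all variable names (obs, Gamma, lam) are our choices.

```python
import math
import numpy as np

def em_poisson_hmm(obs, m, max_iter=100, tol=1e-6, seed=0):
    """Fit an m-state HMM with Poisson emissions to integer counts `obs`
    by EM; returns (pi, Gamma, lam, log-likelihood trace)."""
    rng = np.random.default_rng(seed)
    T = len(obs)
    lgf = np.array([math.lgamma(k + 1) for k in obs])   # log(k!)
    pi = np.full(m, 1.0 / m)
    Gamma = rng.dirichlet(np.ones(m) * 5, size=m)       # rows sum to 1
    lam = rng.uniform(0.5, 1.5, size=m) * (obs.mean() + 1e-9)
    lls = []
    for _ in range(max_iter):
        # Emission log-probabilities, shape (T, m): Poisson log-pmf
        logB = obs[:, None] * np.log(lam)[None, :] - lam[None, :] - lgf[:, None]
        mx = logB.max(axis=1, keepdims=True)
        B = np.exp(logB - mx)                           # scaled to avoid underflow
        # E step: scaled forward-backward recursions
        alpha = np.zeros((T, m)); c = np.zeros(T)
        alpha[0] = pi * B[0]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
        for t in range(1, T):
            alpha[t] = (alpha[t - 1] @ Gamma) * B[t]
            c[t] = alpha[t].sum(); alpha[t] /= c[t]
        beta = np.ones((T, m))
        for t in range(T - 2, -1, -1):
            beta[t] = (Gamma @ (B[t + 1] * beta[t + 1])) / c[t + 1]
        ll = np.log(c).sum() + mx.sum()                 # total log-likelihood
        lls.append(ll)
        q = alpha * beta                                # P(state_t = i | O)
        xi = np.zeros((m, m))                           # expected transition counts
        for t in range(1, T):
            xi += (alpha[t - 1][:, None] * Gamma) * (B[t] * beta[t])[None, :] / c[t]
        # M step (maximum likelihood; the paper's MAP update would add A here)
        pi = q[0]
        Gamma = xi / xi.sum(axis=1, keepdims=True)
        lam = np.maximum((q * obs[:, None]).sum(axis=0) / q.sum(axis=0), 1e-8)
        if len(lls) > 1 and abs(lls[-1] - lls[-2]) < tol:
            break
    return pi, Gamma, lam, lls
```

EM guarantees that the log-likelihood trace is non-decreasing, which is a useful sanity check on any implementation of this loop.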
|
{"source": 4957, "title": "from dpo"}
|
−2 × 1105 + 7 × (1430 − 1 × 1105) = 7 × 1430 − 9 × 1105 (A4.11) = 7 × 1430 − 9 × (6825 − 4 × 1430) = −9 × 6825 + 43 × 1430 . (A4.12) That is, 65 = 6825 × (−9) + 1430 × 43, which is the desired representation. What resources are consumed by Euclid’s algorithm? Suppose a and b may be represented as bit strings of at most L bits each. It is clear that none of the divisors ki or remainders ri can be more than L bits long, so we may assume that all computations are done in L bit arithmetic. The key observation to make in a resource analysis is that ri+2 ≤ ri/2. To prove this we consider two cases: • ri+1 ≤ ri/2. It is clear that ri+2 ≤ ri+1 so we are done. • ri+1 > ri/2. In this case ri = 1 × ri+1 + ri+2, so ri+2 = ri − ri+1 ≤ ri/2. Since ri+2 ≤ ri/2, it follows that the divide-and-remainder operation at the heart of Euclid’s algorithm need be performed at most 2 log a = O(L) times. Each divide-and-remainder operation requires O(L2) operations, so the total cost of Euclid’s algorithm is O(L3). Finding x and y such that ax + by = gcd(a, b) incurs a minor additional cost: O(L) substitutions are performed, at a cost of O(L2) per substitution to do the arithmetic involved, for a total resource cost of O(L3). Euclid’s algorithm may also be used to efficiently find multiplicative inverses in modular arithmetic. This is implicit in the proof of Corollary A4.4; we now make it explicit. Suppose a is co-prime to n, and we wish to find
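The back-substitution just described is the extended Euclidean algorithm, which can be sketched directly; the function names below are ours, and the modular-inverse helper makes explicit the construction the text alludes to.

```python
def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # g = b*x + (a mod b)*y = b*x + (a - (a//b)*b)*y = a*y + b*(x - (a//b)*y)
    return g, y, x - (a // b) * y

def mod_inverse(a, n):
    """Multiplicative inverse of a modulo n, assuming gcd(a, n) = 1."""
    g, x, _ = extended_gcd(a, n)
    if g != 1:
        raise ValueError("a is not co-prime to n")
    return x % n

# The worked example from the text:
print(extended_gcd(6825, 1430))   # (65, -9, 43): 65 = 6825*(-9) + 1430*43
```

Each recursive call performs one divide-and-remainder, matching the O(L) bound on the number of such operations derived above.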
|
{"source": 6248, "title": "from dpo"}
|
The Fresnel equations (or Fresnel coefficients) describe the reflection and transmission of light (or electromagnetic radiation in general) when incident on an interface between different optical media. They were deduced by French engineer and physicist Augustin-Jean Fresnel, who was the first to understand that light is a transverse wave, at a time when no one realized that the waves were electric and magnetic fields. For the first time, polarization could be understood quantitatively, as Fresnel's equations correctly predicted the differing behaviour of waves of the s and p polarizations incident upon a material interface. == Overview == When light strikes the interface between a medium with refractive index n1 and a second medium with refractive index n2, both reflection and refraction of the light may occur. The Fresnel equations give the ratio of the reflected wave's electric field to the incident wave's electric field, and the ratio of the transmitted wave's electric field to the incident wave's electric field, for each of two components of polarization. (The magnetic fields can also be related using similar coefficients.) These ratios are generally complex, describing not only the relative amplitudes but also the phase shifts at the interface. The equations assume the interface between the media is flat and that the media are homogeneous and isotropic. The incident light is assumed to be a plane wave, which is sufficient to solve any problem since any incident light field can be decomposed into plane waves and polarizations. === S and P polarizations === There are two sets of Fresnel coefficients for two different linear polarization components of the incident wave. Since any polarization state can be resolved into a combination of two orthogonal linear polarizations, this is sufficient for any problem. Likewise, unpolarized (or "randomly polarized") light has an equal amount of power in each of
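As a numerical sketch (our own illustration, using the standard textbook amplitude formulas for non-magnetic media and real angles below total internal reflection):

```python
import math

def fresnel(n1, n2, theta_i):
    """Amplitude reflection coefficients (r_s, r_p) for a planar interface.
    Sign conventions for r_p vary between textbooks; this sketch uses the
    convention in which r_s and r_p have opposite signs at normal incidence."""
    sin_t = n1 * math.sin(theta_i) / n2          # Snell's law
    if abs(sin_t) > 1:
        raise ValueError("total internal reflection: use complex angles")
    theta_t = math.asin(sin_t)
    ci, ct = math.cos(theta_i), math.cos(theta_t)
    r_s = (n1 * ci - n2 * ct) / (n1 * ci + n2 * ct)
    r_p = (n2 * ci - n1 * ct) / (n2 * ci + n1 * ct)
    return r_s, r_p

# Air to glass at normal incidence: |r| = (n2 - n1)/(n1 + n2) = 0.2,
# so 4% of the power is reflected (R = r^2 = 0.04).
r_s, r_p = fresnel(1.0, 1.5, 0.0)
print(r_s, r_p)
```

At Brewster's angle, theta_i = atan(n2/n1), the r_p coefficient vanishes, which is one of the "differing behaviours" of the two polarizations the equations predict.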
|
{"page_id": 11149, "title": "Fresnel equations"}
|
reaction mixture without polymerase is coated with wax and the polymerase is added on top of the cooled wax. When heated, the wax layer melts and the polymerase mixes with the reaction mixture. == Other DNA polymerases == Some DNA polymerases used in isothermal DNA amplification, e.g. in loop-mediated isothermal amplification, multiple displacement amplification, recombinase polymerase amplification or isothermal assembly, for the amplification of entire genomes (e.g. the φ29 DNA polymerase from the bacteriophage phi29, B35DNAP from the phage Bam35) are not thermostable, while others like the Bst Klenow fragment are thermostable. The T4, T6 and T7 DNA polymerases are also not thermostable. == RNA-dependent DNA polymerases == The standard reverse transcriptases (RNA-dependent DNA polymerases) of retroviral origin used for RT-PCR, like the AMV- and the MoMuLV-Reverse-Transcriptase, are not thermostable at 95 °C. At the lower temperatures of a reverse transcription, unspecific hybridisation of primers to wrong sequences can occur, as can unwanted secondary structures in the DNA template, leading to unwanted PCR products and a lower yield of the desired products. The AMV reverse transcriptase may be used up to 70 °C. Also, some thermostable DNA-dependent DNA polymerases can be used as RNA-dependent DNA polymerases by exchanging Mg2+ as cofactors with Mn2+, so that they may be used for an RT-PCR. But since the synthesis rate of Taq with Mn2+ is relatively low, Tth was increasingly used for this approach. The use of Mn2+ also increases the error rate and the necessary amount of template, so that this method is rarely used. These problems can be avoided with the thermostable 3173-Polymerase from a thermophilic bacteriophage, which can withstand the high temperatures of a PCR and prefers RNA as a template. == Applications == In addition to the choice of thermostable DNA polymerase, other parameters of a PCR are specifically changed
|
{"page_id": 77039850, "title": "Thermostable DNA polymerase"}
|
the singly-substituted isotopologues, and exponentially smaller amounts of structures having two or more 13C in them. The rare case where two adjacent carbon atoms in a single structure are both 13C causes a detectable coupling effect between them as well as signals for each one itself. The INADEQUATE correlation experiment uses this effect to provide evidence for which carbon atoms in a structure are attached to each other, which can be useful for determining the actual structure of an unknown chemical. === Reaction kinetics === In reaction kinetics, a rate effect is sometimes observed between different isotopomers of the same chemical. This kinetic isotope effect can be used to study reaction mechanisms by analyzing how the differently massed atom is involved in the process. === Biochemistry === In biochemistry, differences between the isotopomers of biochemicals such as starches is of practical importance in archaeology. They offer clues to the diet of prehistoric humans that lived as long ago as Paleolithic times. This is because naturally occurring carbon dioxide contains both 12C and 13C. Monocots, such as rice and oats, differ from dicots, such as potatoes and tree fruits, in the relative amounts of 12CO2 and 13CO2 that they incorporate into their tissues as products of photosynthesis. When tissues of such subjects are recovered, usually tooth or bone, the relative isotopic content can give useful indications of the main source of the staple foods of the subjects of the investigations. == Cumomer == A cumomer is a set of isotopomers sharing similar properties and is a concept that relates to metabolic flux analysis. The concept was developed in 1999. In a metabolic cascade, many molecules will contain the same pattern of isotope labelling. In order to simplify the analysis of such cascades, molecules with identically labelled atoms are aggregated into a
|
{"page_id": 2421084, "title": "Isotopomer"}
|
in the early 1990s. Peter Foltz and Thomas Landauer developed a system using a scoring engine called the Intelligent Essay Assessor (IEA). IEA was first used to score essays in 1997 for their undergraduate courses. It is now a product from Pearson Educational Technologies and used for scoring within a number of commercial products and state and national exams. IntelliMetric is Vantage Learning's AES engine. Its development began in 1996. It was first used commercially to score essays in 1998. Educational Testing Service offers "e-rater", an automated essay scoring program. It was first used commercially in February 1999. Jill Burstein was the team leader in its development. ETS's Criterion Online Writing Evaluation Service uses the e-rater engine to provide both scores and targeted feedback. Lawrence Rudner has done some work with Bayesian scoring, and developed a system called BETSY (Bayesian Essay Test Scoring sYstem). Some of his results have been published in print or online, but no commercial system incorporates BETSY as yet. Under the leadership of Howard Mitzel and Sue Lottridge, Pacific Metrics developed a constructed response automated scoring engine, CRASE. Currently utilized by several state departments of education and in a U.S. Department of Education-funded Enhanced Assessment Grant, Pacific Metrics’ technology has been used in large-scale formative and summative assessment environments since 2007. Measurement Inc. acquired the rights to PEG in 2002 and has continued to develop it. In 2012, the Hewlett Foundation sponsored a competition on Kaggle called the Automated Student Assessment Prize (ASAP). 201 challenge participants attempted to predict, using AES, the scores that human raters would give to thousands of essays written to eight different prompts. The intent was to demonstrate that AES can be as reliable as human raters, or more so. The competition also hosted a separate demonstration among nine AES vendors on
|
{"page_id": 35151113, "title": "Automated essay scoring"}
|
cost and boost performance of MRAM to hopefully release a product to market.
November — Toshiba applied and proved the spin-transfer torque switching with perpendicular magnetic anisotropy MTJ device.
November — NEC develops world's fastest SRAM-compatible MRAM with operation speed of 250 MHz.
2008
Japanese satellite, SpriteSat, to use Freescale MRAM to replace SRAM and FLASH components
June — Samsung and Hynix become partners on STT-MRAM
June — Freescale spins off MRAM operations as new company Everspin
August — Scientists in Germany have developed next-generation MRAM that is said to operate as fast as fundamental performance limits allow, with write cycles under 1 nanosecond.
November — Everspin announces BGA packages, product family from 256 Kb to 4 Mb
2009
June — Hitachi and Tohoku University demonstrated a 32-Mbit spin-transfer torque RAM (SPRAM).
June — Crocus Technology and Tower Semiconductor announce deal to port Crocus' MRAM process technology to Tower's manufacturing environment
November — Everspin releases SPI MRAM product family and ships first embedded MRAM samples
2010
April — Everspin releases 16 Mb density
June — Hitachi and Tohoku University announce multi-level SPRAM
2011
March — PTB, Germany, announces below 500 ps (2 Gbit/s) write cycle
2012
November — Chandler, Arizona, USA, Everspin debuts 64 Mb ST-MRAM on a 90 nm process.
December — A team from University of California, Los Angeles presents voltage-controlled MRAM at the IEEE International Electron Devices Meeting.
2013
November — Buffalo Technology and Everspin announce a new industrial SATA III SSD that incorporates Everspin's Spin-Torque MRAM (ST-MRAM) as cache memory.
2014
January — Researchers announce the ability to control the magnetic properties of core/shell antiferromagnetic nanoparticles using only temperature and magnetic field changes.
October — Everspin partners with GlobalFoundries to produce ST-MRAM on 300 mm wafers.
2016
April — Samsung's semiconductor chief Kim Ki-nam says Samsung is
|
{"page_id": 315008, "title": "Magnetoresistive RAM"}
|
a flip-out panel to allow for a larger screen. This design was later refined with a slightly more angular appearance that was seen in most Next Generation–era movies as well as later seasons of Star Trek: Deep Space Nine and Voyager. In the post-Next Generation-era (Star Trek: Nemesis and Star Trek: Elite Force II ), a newer tricorder was introduced. It is flatter, with a small flap that opens on top and a large touchscreen interface. == Production == The tricorder prop for the original Star Trek series was designed and built by Wah Ming Chang, who created several futuristic props under contract. Some of his designs are considered to have been influential on later, real-world consumer electronics devices. For instance, his communicator inspired cell phone inventor Martin Cooper's desire to create his own form of mobile communication device. Many other companies followed this example and life-sized replicas remain popular collectibles today. The tricorder in The Next Generation was initially inspired by the HP-41C scientific calculator. == "Real" tricorders == Software exists to make hand-held devices simulate a tricorder. Examples include Jeff Jetton's Tricorder for the PalmPilot; the Web application for the Pocket PC, iPhone, and iPod Touch; and an Android version. Vital Technologies Corporation sold a portable device dubbed the "Official Star-Trek Tricorder Mark 1" (formally, the TR-107 Tricorder Mark 1) in 1996. Its features were an "Electromagnetic Field (EMF) Meter", "Two-Mode Weather Station" (thermometer and barometer), "Colorimeter" (no wavelength given), "Light meter", and "Stardate Clock and Timer" (a clock and timer). Spokespersons claimed the device was a "serious scientific instrument". Vital Technologies marketed the TR-107 as a limited run of 10,000 units before going out of business, although far fewer than 10,000 were likely ever built. The company was permitted to call this device a "tricorder" because Gene
|
{"page_id": 348535, "title": "Tricorder"}
|
N.F. Smith & Associates, also known as Smith, or Smith & Associates, is an independent distributor of electronic components and semiconductors headquartered in Houston, Texas. == History == In 1984, brothers Robert and Leland Ackerley and their wives founded Smith. Working around a dining table with two phone lines, they connected with top industry figures in the nascent computing industry. Smith is now one of the largest independent distributors in the semiconductor and electronic components industry, and currently ranks 11th among all global distributors. The 1990s saw marked growth for the company. In 1992, Smith's annual sales were $30 million; by 1998, they had topped $470 million due to the company's expansion of its business into new regions, industries, and service offerings. In 1997, the company moved into its new 60,000 sq. ft. headquarters in Houston, followed by the opening of its first major international office and hub in Hong Kong that same year. Smith established a European presence with the opening of its Amsterdam office in 1999. In 2000, the company completed construction of a 15,000 sq. ft. warehouse in Houston to expand its ability to handle OEM and CEM consignment and excess inventories, adding to its supply chain services. Smith opened offices in Seoul, San Jose, and Guadalajara in 2000, followed by office openings in New York City in 2003, Shanghai in 2004, and Shenzhen in 2008. In 2010, the company completed construction of an enhanced, in-house, anti-counterfeit laboratory at its headquarters. Smith relocated the laboratory to a 57,199 sq. ft. operational facility in November 2014. The company opened its 10th physical trading office in Taipei in 2011, adding additional offices: Austin in 2013, Penang in 2014, Bangalore in 2015, and Cluj-Napoca, Munich, and Beijing in 2017. In the current decade, Smith continues its growth, adding sales offices
|
{"page_id": 40759550, "title": "N.F. Smith & Associates"}
|
Palisa's Comet, also known formally as C/1879 Q1 by its modern nomenclature, is a parabolic comet that was barely visible to the naked eye in late 1879. It was the only comet discovered by the Austrian astronomer Johann Palisa. == Discovery and observations == Johann Palisa discovered this comet on 21 August 1879, initially mistaking it for a nebula not recorded in the catalogs of Messier and d'Arrest before confirming the object's motion a few hours later. At the time it was located within the constellation Ursa Major, where he described the comet as "round, small, but bright". One of the first ephemerides of the comet was calculated on September 5. The comet was moving inbound through the inner Solar System between September and October 1879, enabling further observations and refining orbital calculations. Pietro Tacchini measured the coma diameter as 1.7' on October 7. Ralph Copeland described the comet as "bright and round" on October 19 while measuring the comet's spectra. == References == === Notes === === Citations === == External links == C/1879 Q1 at the JPL Small-Body Database
|
{"page_id": 59936434, "title": "C/1879 Q1 (Palisa)"}
|
ecological preservation) == References == == Sources == Foltz, Richard (2006). "Seyyed Hossein Nasr". In Taylor, Bron (ed.). The Encyclopedia of Religion and Nature. Continuum. ISBN 9780199754670. Foltz, Richard (2013). "Ecology in Islam". In Runehov, Anne L. C.; Oviedo, Lluis (eds.). Encyclopedia of Sciences and Religions. Springer. ISBN 978-1402082641. Hancock, R. (2017). Islamic Environmentalism: Activism in the United States and Great Britain. Routledge Advances in Sociology. Taylor & Francis. ISBN 978-1-134-86550-5. Retrieved 2021-10-03. Hancock, Rosemary (2019), "Ecology in Islam", Oxford Research Encyclopedia of Religion, Oxford University Press, doi:10.1093/acrefore/9780199340378.013.510, ISBN 978-0-19-934037-8 Johnston, David L. (2012). "Intra-Muslim Debates on Ecology: Is Shari'a Still Relevant?". Worldviews. 16 (3): 218–238. doi:10.1163/15685357-01603003. JSTOR 43809777. Koehrsen, Jens (2021). "Muslims and climate change: How Islam, Muslim organizations, and religious leaders influence climate change perceptions and mitigation activities". WIREs Climate Change. 12 (3). Wiley. doi:10.1002/wcc.702. hdl:10852/90034. ISSN 1757-7780. S2CID 233963934. Ouis, Soumaya Pernilla (1998). "Islamic Ecotheology based on the Qur'an". Islamic Studies. 37 (2). Islamic Research Institute, International Islamic University, Islamabad: 151–181. ISSN 0578-8072. JSTOR 20836989. Retrieved 2021-10-02. Quadir, Tarik M. (2013). Traditional Islamic Environmentalism: The Vision of Seyyed Hossein Nasr. Lanham, MD: University Press of America. ISBN 978-0-7618-6143-0. == Further reading == Richard C. Foltz; Frederick M. Denny; Azizan Baharuddin, eds. (2003). Islam and Ecology: A Bestowed Trust. Center for the Study of World Religions, Harvard University. ISBN 9780945454397. Ibrahim Abdul-Matin (2010). Green Deen: What Islam Teaches about Protecting the Planet. Berrett-Koehler Publishers. ISBN 9781605094649. Odeh Rashed Al-Jayyousi (2012). 
Islam and Sustainable Development: New Worldviews. Gower Publishing. ISBN 9781409456490. Anna M. Gade (2019). Muslim Environmentalisms: Religious and Social Foundations. Columbia University Press. ISBN 9780231549219. Fazlun Khalid (2019). Signs on the Earth: Islam, Modernity and the Climate Crisis. Kube Publishing Limited. ISBN 9781847740779. H. Aburounia; M. Sexton (2006). Islam and Sustainable Development (PDF). Research Institute for the Built
|
{"page_id": 68887276, "title": "Islamic environmentalism"}
|
adhesives. Owing to the microfine structure, transmission electron microscope or TEM was used to examine the structure. The butadiene matrix was stained with osmium tetroxide to provide contrast in the image. The material was made by living polymerization so that the blocks are almost monodisperse to create a regular microstructure. The molecular weight of the polystyrene blocks in the main picture is 102,000; the inset picture has a molecular weight of 91,000, producing slightly smaller domains. Microphase separation is a situation similar to that of oil and water. Oil and water are immiscible (i.e., they can phase separate). Due to the incompatibility between the blocks, block copolymers undergo a similar phase separation. Since the blocks are covalently bonded to each other, they cannot demix macroscopically like water and oil. In "microphase separation," the blocks form nanometer-sized structures. Depending on the relative lengths of each block, several morphologies can be obtained. In diblock copolymers, sufficiently different block lengths lead to nanometer-sized spheres of one block in a matrix of the second (e.g., PMMA in polystyrene). Using less different block lengths, a "hexagonally packed cylinder" geometry can be obtained. Blocks of similar length form layers (often called lamellae in the technical literature). Between the cylindrical and lamellar phase is the gyroid phase. The nanoscale structures created from block copolymers can potentially be used to create devices for computer memory, nanoscale-templating, and nanoscale separations. Block copolymers are sometimes used as a replacement for phospholipids in model lipid bilayers and liposomes for their superior stability and tunability. Polymer scientists use thermodynamics to describe how the different blocks interact. 
The product of the degree of polymerization, n, and the Flory-Huggins interaction parameter, χ, gives an indication of how incompatible the two blocks are and whether they will microphase separate. For example,
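As a rough numerical sketch of this criterion (our own illustration, not from the source): in Leibler's mean-field theory a symmetric diblock copolymer microphase-separates when the product χN exceeds roughly 10.5; the threshold shifts with composition, so the helper below is only indicative, and the example values are hypothetical.

```python
def segregation_strength(chi: float, n: int) -> float:
    """Product chi*N, the dimensionless segregation strength of a diblock."""
    return chi * n

def microphase_separates(chi: float, n: int, odt: float = 10.5) -> bool:
    # Mean-field order-disorder threshold for a *symmetric* diblock (~10.5);
    # real phase boundaries also depend on block composition.
    return segregation_strength(chi, n) > odt

# Illustrative (made-up) parameter values for a diblock of N = 200 segments:
print(microphase_separates(chi=0.08, n=200))   # chi*N = 16  -> True (ordered)
print(microphase_separates(chi=0.01, n=200))   # chi*N = 2   -> False (disordered)
```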
|
{"page_id": 768839, "title": "Copolymer"}
|
a Delimited Text File box will pop up.
ii) Browse to the .csv you saved. Once you add the file, double check that information in the other boxes (i.e. X and Y fields, and data from the Excel file) automatically got filled in. If not, activate the circle next to CSV (comma separated values).
(1) Double check that under the heading Geometry definition:
(a) Point coordinates is activated.
(b) X Field = Longitude.
(c) Y Field = Latitude. Note: If you are using Degrees Minutes Seconds instead of Decimal degrees be sure to activate the box next to DMS coordinates.
(d) Once you have checked your settings and data, click OK.
(e) Next, you will be asked to choose your coordinate system for the file. If the coordinate system you are using is under Recently used coordinate reference systems you can highlight it there and click OK. If not, choose the correct coordinate system under Coordinate reference systems of the world. For this exercise, we will use WGS 84.
(f) Click OK.
(g) Your data points should show up in the Map View window and the file should now be listed in the Map Legend window. Note: This file cannot be edited (the toggle edit button is grayed out), and therefore must be saved as a shapefile.
5) Convert an imported .csv layer into a shapefile:
a) Right click on the .csv layer you just imported into QGIS (Rodent_Sampling_Sites) and select Save as.
b) The Save vector layer as… box will appear.
i) Format = ESRI shapefile.
ii) Save as – Click the Browse button and browse to the location you would like to save the file. Be sure to name the file and click Save.
iii) Set the CRS (coordinate reference system) to Layer
|
{"source": 1196, "title": "from dpo"}
|
IC(x) = I(x ∈ C) = { 0 if x ∈ C; ∞ if x ∉ C }. For x ∈ C, ∂IC(x) = NC(x), the normal cone of C at x. Recall that the normal cone of C at x is defined as NC(x) = {g ∈ Rn : gT x ≥ gT y for any y ∈ C}.
Lecture 7: February 2
This directly follows from the definition of subgradient: IC(y) ≥ IC(x) + gT (y − x) ∀y. For y ∉ C, IC(y) = ∞. For y ∈ C, IC(y) = 0. This means 0 ≥ gT (y − x), which is the definition of the normal cone. The subgradients of indicator functions are important as any constrained optimization problem min_{x∈C} f(x) can be rewritten as min_x f(x) + IC(x).
## 7.4.1 Subgradient Calculus
Subgradients for complex convex functions can be computed by knowing the subgradients for a basic set of convex functions and then applying rules of subgradient calculus. Here are the set of rules:
Scaling: ∂(αf) = α∂f, for α > 0
Addition: ∂(f1 + f2) = ∂f1 + ∂f2
Affine composition: If g(x) = f(Ax + b) then ∂g(x) = AT ∂f(Ax + b).
Finite pointwise maximum: If f(x) = max_{i=1,...,m} fi(x) then ∂f(x) = conv( ∪_{i: fi(x)=f(x)} ∂fi(x) )
General pointwise maximum: If f(x) = max_{s∈S} fs(x) then ∂f(x) ⊇ cl( conv( ∪_{s: fs(x)=f(x)} ∂fs(x) ) )
Lp norm: f(x) = ||x||p. Let q be such that 1/p + 1/q = 1; then ||x||p = max_{||z||q ≤ 1} zT x.
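The finite pointwise maximum rule gives a practical recipe: a subgradient of max_i f_i at x is the gradient of any active (maximizing) piece. A small sketch of our own, using this rule inside the subgradient method on f(x) = max(x, −x) = |x|:

```python
def f_and_subgrad(x):
    """f(x) = max(x, -x) = |x|; return f(x) and one subgradient.
    By the finite pointwise maximum rule, the derivative of any
    active (maximizing) affine piece is a valid subgradient."""
    pieces = [(x, 1.0), (-x, -1.0)]                 # (f_i(x), f_i'(x))
    val = max(v for v, _ in pieces)
    g = next(gr for v, gr in pieces if v == val)    # pick one active piece
    return val, g

def subgradient_method(x0, steps=1000):
    """Subgradient method with diminishing step sizes; track best value."""
    x, best = x0, float("inf")
    for k in range(1, steps + 1):
        val, g = f_and_subgrad(x)
        best = min(best, val)
        x = x - (1.0 / k) * g                       # step size 1/k
    return best

print(subgradient_method(5.0))   # best value approaches the minimum, 0
```

Unlike gradient descent, the iterates need not decrease f monotonically, which is why the best value seen so far is the quantity that converges.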
|
{"source": 3372, "title": "from dpo"}
|
provision to any disclosure requirement (a ‘tipping off’ provision).4
> 1 In R v Spencer (Jeffrey) EWCA Crim 2240, 12 WLUK 246, the appellant had been convicted under RIPA s 53 for not providing the PIN to unlock two mobile telephones in his possession. A disclosure notice under s 49 had been presented to the appellant, who declined to provide the codes.
> 2 RIPA 2000, s 49.
> 3 RIPA 2000, s 49 when read in conjunction with s 50(3).
> 4 RIPA 2000, s 54.
8.11 These powers are considered below. While the first two will lead to the disclosure of information in an intelligible form, the first differs in that it does not technically require the surrendering of the key. It suffices that the person produces the data in an intelligible form. Thus, for example, if there were other documents that were encrypted that were not relevant to the crime, the police would not see them. However, in many instances it is unlikely that the police would be content with an assurance that other documents are not relevant and, instead, they will require the key to be disclosed, which will either provide access or allow the encrypted material to be rendered intelligible. In essence, the difference is who does the decryption. In the first scenario it is the suspect, whereas in the second scenario it will be the relevant investigator, or a nominated person.
# Notice requiring disclosure
8.12 Where a suspect does not voluntarily provide her key, or where the police are unable to identify the key using the techniques discussed above, they may seek to serve a notice requiring disclosure of either the information sought or the key. The police can only do so with the permission of the National Technical
|
{"source": 5648, "title": "from dpo"}
|
might cause slight delays. You might instead consider the following extremely simple and lightweight policy, which is surprisingly effective: assign each job to a random server. Let’s abstract things slightly. Suppose we have k servers and n jobs. Assume all n jobs arrive very quickly, we assign each to a random server (independently), and the jobs take a while to process. What we are interested in is the load of the servers. ASSUMPTION: n is much bigger than k. E.g., our YouTube example had n = 10^6, k = 10^3. Question: The average “load” — jobs per server — will of course be n/k. But how close to perfectly balanced will things be? In particular, is it true that the maximum load is not much bigger than n/k, with high probability? Answer: Yes! Let’s do the analysis. Let Xi denote the number of jobs assigned to server i, for 1 ≤ i ≤ k. Question: What is the distribution of the random variable Xi? Answer: If you think a little bit carefully, you see that Xi is a binomial random variable: Xi ∼ Binomial(n, 1/k). To see it, just imagine staring at the ith server. For each of n trials/jobs, there is a 1/k chance that that job gets thrown onto this ith server. Don’t be confused by notation, by the way — we used to use subscripted X’s like Xi to denote Bernoulli random variables, and their sums were Binomial random variables denoted X. Here we have that each Xi itself is a Binomial random variable. Question: Are X1, . . . , Xk independent random variables? Answer: No! Here is one non-rigorous but intuitive reason: we know that it will always be the case that ∑_{i=1}^{k} Xi = n. So
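The claim is easy to check by simulation (a quick sketch of our own, with a fixed seed so the run is reproducible; the YouTube-sized parameters are taken from the text):

```python
import numpy as np

def server_loads(n, k, seed=0):
    """Throw n jobs onto k servers uniformly at random; return per-server loads."""
    rng = np.random.default_rng(seed)
    servers = rng.integers(0, k, size=n)        # each job picks a server
    loads = np.bincount(servers, minlength=k)   # X_i = number of jobs on server i
    return loads

loads = server_loads(n=10**6, k=10**3)
print(loads.mean())   # exactly n/k = 1000, since the loads must sum to n
print(loads.max())    # close to n/k: fluctuations are only O(sqrt(n/k * log k))
```

Note that the simulated maximum load lands within a few standard deviations (sqrt(n/k) ≈ 32) of the mean, consistent with the "not much bigger than n/k" claim.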
|
{"source": 6644, "title": "from dpo"}
|
Deconvolution ==== Fluorescence microscopy is a powerful technique to show specifically labeled structures within a complex environment and to provide three-dimensional information of biological structures. However, this information is blurred by the fact that, upon illumination, all fluorescently labeled structures emit light, irrespective of whether they are in focus or not. So an image of a certain structure is always blurred by the contribution of light from structures that are out of focus. This phenomenon results in a loss of contrast especially when using objectives with a high resolving power, typically oil immersion objectives with a high numerical aperture. However, blurring is not caused by random processes, such as light scattering, but can be well defined by the optical properties of the image formation in the microscope imaging system. If one considers a small fluorescent light source (essentially a bright spot), light coming from this spot spreads out further from our perspective as the spot becomes more out of focus. Under ideal conditions, this produces an "hourglass" shape of this point source in the third (axial) dimension. This shape is called the point spread function (PSF) of the microscope imaging system. Since any fluorescence image is made up of a large number of such small fluorescent light sources, the image is said to be "convolved by the point spread function". The mathematically modeled PSF of a terahertz laser pulsed imaging system is shown on the right. The output of an imaging system can be described using the equation: s(x, y) = PSF(x, y) ∗ o(x, y) + n, where n is the additive noise. Knowing this point spread function means that it is possible to reverse this process to a certain extent by computer-based
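The imaging equation s = PSF ∗ o + n can be demonstrated with a one-dimensional toy model (our own sketch; real microscope PSFs are three-dimensional, and the kernel below is an arbitrary smooth stand-in):

```python
import numpy as np

def blur(obj, psf, noise_sigma=0.0, seed=0):
    """Forward model of image formation: s = PSF * o + n (1-D toy version)."""
    rng = np.random.default_rng(seed)
    psf = psf / psf.sum()                        # normalize the PSF
    s = np.convolve(obj, psf, mode="same")       # convolution with the PSF
    return s + noise_sigma * rng.standard_normal(len(obj))

# A single "point source" far from the edges: blurring redistributes its
# light over neighbouring pixels but does not remove it.
obj = np.zeros(100)
obj[50] = 1.0
psf = np.array([1.0, 4.0, 6.0, 4.0, 1.0])        # arbitrary smooth, symmetric kernel
img = blur(obj, psf)
print(img.sum())   # ~1.0: total intensity preserved
print(img.max())   # < 1.0: the peak is lowered by the spreading
```

With the noiseless forward model in hand, deconvolution is the attempt to invert the convolution step, which is exactly what the text goes on to describe.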
|
{"page_id": 19567, "title": "Microscopy"}
|
is collected and deposited into the CODIS database, which is maintained by the FBI. CODIS enables law enforcement officials to test DNA samples from crimes for matches within the database, providing a means of finding specific biological profiles associated with collected DNA evidence. When a match is made from a national DNA databank to link a crime scene to an offender having provided a DNA sample to a database, that link is often referred to as a cold hit. A cold hit is of value in referring the police agency to a specific suspect but is of less evidential value than a DNA match made from outside the DNA Databank. FBI agents cannot legally store DNA of a person not convicted of a crime. DNA collected from a suspect not later convicted must be disposed of and not entered into the database. In 1998, a man residing in the UK was arrested on accusation of burglary. His DNA was taken and tested, and he was later released. Nine months later, this man's DNA was accidentally and illegally entered in the DNA database. New DNA is automatically compared to the DNA found in cold cases and, in this case, this man was found to be a match to DNA found in a rape and assault case from one year earlier. The government then prosecuted him for these crimes. During the trial the DNA match was requested to be removed from the evidence because it had been illegally entered into the database. The request was carried out. The DNA of the perpetrator, collected from victims of rape, can be stored for years until a match is found. In 2014, to address this problem, Congress extended a bill that helps states deal with "a backlog" of evidence. DNA profiling databases in Plants: PIDS: PIDS(Plant
|
{"page_id": 44290, "title": "DNA profiling"}
|
a CRT to function due to not being built with the functionality of modern displays in mind. CRTs tend to be more durable than their flat panel counterparts, though specialised LCDs that have similar durability also exist. == Types == CRTs were produced in two major categories, picture tubes and display tubes. Picture tubes were used in TVs while display tubes were used in computer monitors. Display tubes were of higher resolution and when used in computer monitors sometimes had adjustable overscan, or sometimes underscan. Picture tube CRTs have overscan, meaning the actual edges of the image are not shown; this is deliberate to allow for adjustment variations between CRT TVs, preventing the ragged edges (due to blooming) of the image from being shown on screen. The shadow mask may have grooves that reflect away the electrons that do not hit the screen due to overscan. Color picture tubes used in TVs were also known as CPTs. CRTs are also sometimes called Braun tubes. === Monochrome CRTs === If the CRT is in black and white (B&W or monochrome), there is a single electron gun in the neck and the funnel is coated on the inside with aluminum that has been applied by evaporation; the aluminum is evaporated in a vacuum and allowed to condense on the inside of the CRT. Aluminum eliminates the need for ion traps, necessary to prevent ion burn on the phosphor, while also reflecting light generated by the phosphor towards the screen, managing heat and absorbing electrons providing a return path for them; previously funnels were coated on the inside with aquadag, used because it can be applied like paint; the phosphors were left uncoated. Aluminum started being applied to CRTs in the 1950s, coating the inside of the CRT including the phosphors, which also
|
{"page_id": 6014, "title": "Cathode-ray tube"}
|
requests on port 80. If the web server can fulfil the request it sends an HTTP response back to the browser indicating success, followed by the content of the requested page. Hypertext Markup Language (HTML) for a basic web page might look like this: The web browser parses the HTML and interprets the markup (<title>, <p> for paragraph, and such) that surrounds the words to format the text on the screen. Many web pages use HTML to reference the URLs of other resources such as images, other embedded media, scripts that affect page behaviour, and Cascading Style Sheets that affect page layout. The browser makes additional HTTP requests to the web server for these other Internet media types. As it receives their content from the web server, the browser progressively renders the page onto the screen as specified by its HTML and these additional resources. === HTML === Hypertext Markup Language (HTML) is the standard markup language for creating web pages and web applications. With Cascading Style Sheets (CSS) and JavaScript, it forms a triad of cornerstone technologies for the World Wide Web. Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for the appearance of the document. HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img /> and <input /> directly introduce content into the page. Other tags
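A minimal HTML document of the kind described above might look like this (an illustrative sketch; the file name and text are made up for the example):

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Example page</title>
  </head>
  <body>
    <p>This is a paragraph of text.</p>
    <!-- A referenced resource: the browser issues a second HTTP request for it. -->
    <img src="logo.png" alt="A referenced image" />
  </body>
</html>
```

The `<title>` text is shown in the browser chrome rather than the page body, while the `<p>` and `<img />` elements are rendered in the page itself.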
|
{"page_id": 33139, "title": "World Wide Web"}
|
years. It found 15 are in serious decline and five are in a precarious condition.: 6–19 === Economic costs === Experts in environmental economics have calculated the cost of using public natural resources. One project calculated the damage to ecosystems and biodiversity loss. This was the Economics of Ecosystems and Biodiversity project from 2007 to 2011. An entity that creates environmental and social costs often does not pay for them. The market price also does not reflect those costs. In the end, government policy is usually required to resolve this problem. Decision-making can take future costs and benefits into account. The tool for this is the social discount rate. The bigger the concern for future generations, the lower the social discount rate should be. Another approach is to put an economic value on ecosystem services. This allows us to assess environmental damage against perceived short-term welfare benefits. One calculation is that, "for every dollar spent on ecosystem restoration, between three and 75 dollars of economic benefits from ecosystem goods and services can be expected". In recent years, economist Kate Raworth has developed the concept of doughnut economics. This aims to integrate social and environmental sustainability into economic thinking. The social dimension acts as a minimum standard to which a society should aspire. The carrying capacity of the planet acts as an outer limit. == Barriers == There are many reasons why sustainability is so difficult to achieve. These reasons are known as sustainability barriers. Before addressing these barriers it is important to analyze and understand them.: 34 Some barriers arise from nature and its complexity ("everything is related"). Others arise from the human condition. One example is the value-action gap. This reflects the fact that people often do not act according to their convictions. Experts describe these barriers as intrinsic to
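The effect of the social discount rate mentioned above can be made concrete with a small numeric sketch (the figures are hypothetical, not from the original text): a lower rate gives benefits to future generations more weight in present-value terms.

```python
# Illustrative sketch with hypothetical numbers: present value of a benefit
# received t years from now, discounted at a rate r per year.

def present_value(future_benefit, r, t):
    return future_benefit / (1.0 + r) ** t

benefit = 100.0   # hypothetical benefit accruing 50 years from now
years = 50

high_rate = present_value(benefit, 0.05, years)  # little weight on the future
low_rate = present_value(benefit, 0.01, years)   # strong concern for the future

print(round(high_rate, 2), round(low_rate, 2))
```

The same future benefit is worth several times more today under the lower discount rate, which is why the choice of rate dominates long-horizon environmental decisions.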
|
{"page_id": 18413531, "title": "Sustainability"}
|
Cretaceous, with present-day volcanism at the Society and Macdonald volcanoes originating from secondary plumes that rise from the superplume to the crust. The association may explain the Hotspot highway of the South Pacific Ocean first described in 2010. An ultra-low velocity zone under Pitcairn extends to the Easter hotspot and the Macdonald hotspot. === Local geology === The Austral Islands and the Cook Islands may have been formed by the Macdonald hotspot, as the Pacific plate was carried above the hotspot at a rate of 10–11 centimetres per year (3.9–4.3 in/year). A 500–300 metres (1,640–980 ft) high swell underpins the Austral Islands as far as Macdonald seamount, which is the presently active volcano of the Macdonald hotspot. They fit the pattern of linear volcanism, as they are progressively less degraded southeastward (with the exception of Marotiri, which, unprotected by coral reefs unlike the other more equatorial islands, has been heavily eroded) and the active Macdonald volcano lies at their southeastern end. However, there appear to be somewhat older guyots in the area as well, some of which show evidence that secondary volcanoes formed on them. It is possible that the guyots are much older and that lithospheric anomalies were periodically reactivated and triggered renewed volcanism on the older guyots. In addition, dating of the various volcanoes in the Cook-Austral chain indicates that there is no simple age progression away from Macdonald seamount and that the chain appears to consist of two separate alignments. While the younger ages of Atiu and Aitutaki may be explained by the long-range effect of Rarotonga's growth, Rarotonga itself is about 18–19 million years younger than would be expected if it was formed by Macdonald. Additional younger ages in some volcanoes such as Rurutu have been explained by the presence of an additional system, the
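The expected age progression described above follows directly from the plate rate: under the fixed-hotspot assumption, a volcano's age should be its distance from the active center divided by the plate speed. A small illustrative check (the distance is hypothetical, chosen only for the example):

```python
# Illustrative sketch: expected age of a hotspot-track volcano, assuming the
# Pacific plate moves at a constant 10-11 cm/yr over a stationary hotspot.

def expected_age_myr(distance_km, plate_speed_cm_per_yr):
    cm = distance_km * 1e5            # km -> cm
    years = cm / plate_speed_cm_per_yr
    return years / 1e6                # years -> million years (Myr)

# Hypothetical distance from Macdonald seamount, for illustration only.
distance = 1500.0  # km
print(expected_age_myr(distance, 10.0), expected_age_myr(distance, 11.0))
```

Deviations from this linear age-distance prediction, such as Rarotonga being 18–19 million years too young, are what motivate the secondary-plume and reactivation explanations in the text.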
|
{"page_id": 22639422, "title": "Macdonald hotspot"}
|
Nina H. Fefferman (born December 20, 1978) is an American mathematical modeler and theoretical biologist. Her research uses mathematical modeling to explore the behavior, evolution, and control of complex systems, with applications in areas from basic science (evolutionary sociobiology and epidemiology) to direct real-world applications (bio-security, cyber-security, bio-inspired design, and wildlife conservation). She studies how individual behaviors can affect an entire population, frequently focusing on a networks approach. She has written over 150 peer-reviewed journal articles and book chapters and has been funded by a variety of US governmental agencies and private foundations throughout her career. Fefferman is the founding director and PI of the US NSF Center for Analysis and Prediction of Pandemic Expansion (APPEX) and also serves as the director of the National Institute for Modeling Biological Systems (NIMBioS) (previously the National Institute for Mathematical and Biological Synthesis). Both of these organizations are based at the University of Tennessee, Knoxville, where Fefferman is also a professor in the Department of Ecology & Evolutionary Biology and the Department of Mathematics. == Early life and education == Nina Fefferman is the daughter of Julie and Charles Fefferman; her father is a mathematician at Princeton University. She is the sister of composer Lainie Fefferman. She received her bachelor's degree in mathematics from Princeton in 1999, her Master of Science degree in mathematics from Rutgers University in 2001, and her Ph.D. in biology from Tufts University in 2005. Her thesis was on using mathematical models in evolutionary biology and epidemiology. == Publications == Her most cited papers are: Lofgren E, Fefferman NH, Naumov YN, Gorski J, Naumova EN. Influenza seasonality: underlying causes and modeling theories. Journal of Virology. 2007 Jun 1;81(11):5429-36. Wilson-Rich N, Spivak M, Fefferman NH, Starks PT.
Genetic, individual, and group facilitation of disease
|
{"page_id": 63056588, "title": "Nina Fefferman"}
|
erosion and sedimentation yield spatial distribution and many more mapping activities for the regions. == Reception == Immediately after the April 2015 Nepal earthquake, scientists at ICIMOD began supporting rescue and relief efforts by closely monitoring landslides, glacier lakes and dammed rivers through the analysis of satellite images, and providing the latest information to the Nepalese government and relief agencies. ICIMOD scientists also worked with traffic controllers at Tribhuvan International Airport, Kathmandu, providing assistance in assessing weather and terrain conditions. Teams of volunteers from ICIMOD went to aid relief efforts in villages near ICIMOD and Kathmandu. A 2021 case study from the World Bank commented on ICIMOD's role as an apolitical intergovernmental platform: In the Himalayas – where national interests are often seen as contradictory to regional interests – regional institutions are forced to devote considerable effort to making their case. ICIMOD's story demonstrates useful methods of achieving this objective: proactive engagement with political constituencies; efforts at reputation building through research to earn a place in likeminded global and regional networks; and hiring recognized subject experts to carry the institutional flag. These efforts are still a work in progress at ICIMOD, but they seem to be producing results. == References == == External links == ICIMOD Homepage ICIMOD Publications ICIMOD Data sets
|
{"page_id": 6471605, "title": "International Centre for Integrated Mountain Development"}
|
The IBM Card-Programmed Electronic Calculator or CPC was announced by IBM in May 1949. Later that year an improved machine, the CPC-II, was also announced. IBM's electronic (vacuum tube) calculators could perform multiple calculations, including division. The card-programmed calculators used fields on punched cards not to specify the actual operations to be performed on data, but to select which "microprogram" hard-wired onto the plugboard of the IBM 604 or 605 calculator machine to invoke; a set of cards produced different results when used with different plugboards. The units could be configured to retain up to 10 instructions in memory and perform them in a loop. The original CPC Calculator has the following units interconnected by cables: an Electronic Calculating Punch (IBM 604) with a reader/punch unit (IBM 521), and an Accounting Machine (IBM 402 or IBM 417). The CPC-II Calculator has the following units interconnected by cables: an Electronic Calculating Punch (IBM 605) with a punch unit (IBM 527); an Accounting Machine (IBM 407, IBM 412, or IBM 418); and optional Auxiliary Storage Units (up to 3, IBM 941), each of which could store 16 decimal numbers with ten digits plus sign. From the IBM Archives: The IBM Card-Programmed Electronic Calculator was announced in May 1949 as a versatile general purpose computer designed to perform any predetermined sequence of arithmetical operations coded on standard 80-column punched cards. It was also capable of selecting and following one of several sequences of instructions as a result of operations already performed, and it could store instructions for self-programmed operation. The Calculator consisted of a Type 605 Electronic Calculating Punch and a Type 412 or 418 Accounting Machine. A Type 941 Auxiliary Storage Unit was available as an optional feature. All units composing the Calculator were interconnected by flexible cables. If desired, the Type 412 or 418, with or without the Type 941, could be operated independently
|
{"page_id": 742723, "title": "IBM CPC"}
|
press against each other, against the wellbore, and around tubing running through the wellbore. Outlets at the sides of the BOP housing (body) are used for connection to choke and kill lines or valves. Rams, or ram blocks, are of four common types: pipe, blind, shear, and blind shear. Pipe rams close around a drill pipe, restricting flow in the annulus (the ring-shaped space between concentric objects) between the outside of the drill pipe and the wellbore, but do not obstruct flow within the drill pipe. Variable-bore pipe rams can accommodate tubing in a wider range of outside diameters than standard pipe rams, but typically with some loss of pressure capacity and longevity. A pipe ram should not be closed if there is no pipe in the hole. Blind rams (also known as sealing rams), which have no openings for tubing, can close off and seal the well when the well does not contain a drill string or other tubing. Shear rams are designed to shear the pipe in the well and seal the wellbore simultaneously; they have steel blades to cut the pipe and seals to close off the annulus once the pipe has been sheared. Blind shear rams (also known as shear seal rams, or sealing shear rams) are intended to seal a wellbore, even when the bore is occupied by a drill string, by cutting through the drill string as the rams close off the well. The upper portion of the severed drill string is freed from the ram, while the lower portion may be crimped and the "fish tail" captured to hang the drill string off the BOP. In addition to the standard ram functions, variable-bore pipe rams are frequently used as test rams in a modified blowout preventer device known as a stack test valve. Stack test valves
|
{"page_id": 5239446, "title": "Blowout preventer"}
|
$\cos\theta \:=\:\frac{12-x}{15}$ Equate the two expressions for $\cos\theta$: $\frac{\sqrt{64-x^2}}{8} \;=\;\frac{12-x}{15}\quad\Rightarrow\quad 15\sqrt{64-x^2} \;=\;8(12-x)$ Square both sides: $225(64-x^2) \;=\;64(144-24x + x^2)$ which simplifies to: $289x^2 - 1536x - 5184 \;=\;0$ and has the positive root: $x \;=\;\frac{1536 + \sqrt{8{,}352{,}000}}{578} \;=\;7.657439446$ Then: $\sin\theta \;=\;\frac{x}{8} \;=\;\frac{7.657439446}{8} \;=\;0.957179931$ so $\theta \;=\;\sin^{-1}(0.957179931) \;=\;73.17235533^o \;\approx\;\boxed{73^o10'}$ 5. I've got some difficulties: how can we know, by reading the text, what angle we are looking for? And how can this physically be possible if D is not on RS? 6. When I saw this question, I knew I had seen it before. It was in my textbook last year. Here is the picture the book provided, and the answer at the back of the book is 16 degrees 50 minutes, not 16 degrees 15 minutes. I think Soroban had a correct answer, but the book is asking for the angle on the other side. So: $180^o$ (straight angle) $-\;90^o$ (angle between base and length of cylinder) $-\;73^o10'$ (the angle 'on the other side' Soroban found) $=\theta$, so $\theta = 16^o 50'$ 7. ## Thanks for the responses Thanks everyone, especially Soroban, you are a genius. The two-dimensional diagram is very helpful for understanding. Thanks Gusbob for posting the diagram from the textbook; I did not have a scanner to scan the diagram. Golden Ratio and Fibonacci Numbers Golden Ratio is considered to be one of the greatest beauties in mathematics.
Two numbers $a$ and $b$ are said to be in the Golden Ratio if $a > b > 0 \quad\text{and}\quad \frac{a}{b} = \frac{a+b}{a}.$ If we consider this ratio to be equal to some $\varphi$ then
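The defining relation above forces $\varphi$ to satisfy $\varphi^2 = \varphi + 1$, whose positive root is $(1+\sqrt{5})/2$. A quick numeric check (an illustrative sketch), which also shows ratios of consecutive Fibonacci numbers approaching the same value:

```python
# Illustrative check: the golden ratio as the positive root of x^2 = x + 1,
# and ratios of consecutive Fibonacci numbers converging to it.
import math

phi = (1 + math.sqrt(5)) / 2

# phi satisfies the defining relation a/b = (a+b)/a when a/b = phi:
assert abs(phi**2 - (phi + 1)) < 1e-12

# Consecutive Fibonacci ratios F(n+1)/F(n) approach phi.
a, b = 1, 1
for _ in range(30):
    a, b = a + b, a
ratio = a / b
print(phi, ratio)
```

The convergence of the Fibonacci ratios is geometric, which is why only a few dozen terms already agree with $\varphi$ to many decimal places.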
|
{"source": 4160, "title": "from dpo"}
|
Universal Composability (UC) framework has emerged as the de facto gold standard for modelling and analyzing (s)aPAKE protocols. The UC framework is particularly well-suited for password-based protocols because it does not make any assumptions about password distributions, models real-world behaviour such as password re-use, and guarantees secure composition with arbitrary other protocols – or itself. In fact, the self-composition aspect of UC has been used to study a simplified setting of (s)aPAKE: the single-user variant, where only a single user can register with the server and establish keys. This allows the model, protocol design, and analysis to focus on this simpler setting, relying on the composition theorem when the protocol is used for multiple users. So far, UC-secure (s)aPAKE protocols have predominantly been designed and analyzed for such a single-user setting. Exceptions are a few recent works. Multi-User Reality. In practice, (s)aPAKE is typically deployed in a setting where a single server serves up to millions of users, all running the same protocol for password-based authentication and key-exchange. Technically, the self-composition requires the server to run several fully independent instances of the single-user protocol. In particular, the server must not re-use any shared state, such as secret keys, across the instances if it wants to benefit from the composability guarantees and provide the same security as the provably secure single-user version. This requires application developers to understand the limitations of the single-user (s)aPAKE protocols and be able to extend the protocol in a way that is compliant with the UC framework. Thus, the focus on the single-user setting leaves a dangerous gap between the provably-secure protocol variant and the version that would actually be needed for real-world deployment. That this ambiguity can lead
|
{"source": 5935, "title": "from dpo"}
|
$\operatorname{In}_X : C_c^\infty(U) \to X$ is a continuous injection whose image is dense in the codomain, this map's transpose ${}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U) = \left(C_c^\infty(U)\right)'_b$ is a continuous injection. This injective transpose map thus allows the continuous dual space $X'$ of $X$ to be identified with a certain vector subspace of the space $\mathcal{D}'(U)$ of all distributions (specifically, it is identified with the image of this transpose map). This transpose map is continuous but it is not necessarily a topological embedding. A linear subspace of $\mathcal{D}'(U)$ carrying a locally convex topology that is finer than the subspace topology induced on it by $\mathcal{D}'(U) = \left(C_c^\infty(U)\right)'_b$ is called a space of distributions. Almost all of the spaces of distributions mentioned in this article arise in this way (for example, tempered distributions, restrictions, distributions of order $\leq$ some integer, distributions induced by a positive Radon measure, distributions induced by an $L^p$-function, etc.) and any representation theorem about the continuous dual space of $X$ may, through the transpose ${}^{t}\operatorname{In}_X : X'_b \to \mathcal{D}'(U)$, be transferred directly to elements of the space $\operatorname{Im}\left({}^{t}\operatorname{In}_X\right)$. === Radon measures === The inclusion map $\operatorname{In} : C_c^\infty(U) \to C_c$
|
{"page_id": 51955, "title": "Distribution (mathematics)"}
|
$\hat{\Theta} = -\frac{\eta\gamma}{\beta}\left(\hat{V}^{\times} - i\frac{\beta\hbar\gamma}{2}\hat{V}^{\circ}\right)$ with the inverse temperature $\beta = 1/k_B T$ and the following "super-operator" notation: $\hat{A}^{\times}\hat{\rho} = \hat{A}\hat{\rho} - \hat{\rho}\hat{A}$ and $\hat{A}^{\circ}\hat{\rho} = \hat{A}\hat{\rho} + \hat{\rho}\hat{A}$. The counter $n$ provides for $n = 0$ the system density matrix. As with Kubo's stochastic Liouville equation in hierarchical form, it goes up to infinity in the hierarchy, which is a problem numerically. Tanimura and Kubo, however, provide a method by which the hierarchy can be truncated to a finite set of $N$ differential equations. This "terminator" $N$ defines the depth of the hierarchy and is determined by some constraint sensitive to the characteristics of the system, i.e. frequency, amplitude of fluctuations, bath coupling etc. A simple relation to eliminate the $\hat{\rho}_{n+1}$ term is $\hat{\rho}_{N+1} = -\hat{\Theta}\hat{\rho}_N / \hbar\gamma$. The closing line of the hierarchy is thus: $\frac{\partial}{\partial t}\hat{\rho}_N = -\left(\frac{i}{\hbar}\hat{H}_A^{\times} + N\gamma\right)\hat{\rho}_N - \frac{i\gamma}{\hbar^2}\hat{V}^{\times}\hat{\Theta}\hat{\rho}_N + \frac{iN}{\hbar}\hat{\Theta}\hat{\rho}_{N-1}$
|
{"page_id": 51407573, "title": "Hierarchical equations of motion"}
|
a current between them through the conductor. The potential difference between two points is measured in units of volts in recognition of Volta's work. The first mention of voltaic electricity, although not recognized as such at the time, was probably made by Johann Georg Sulzer in 1767, who, upon placing a small disc of zinc under his tongue and a small disc of copper over it, observed a peculiar taste when the respective metals touched at their edges. Sulzer assumed that when the metals came together they were set into vibration, acting upon the nerves of the tongue to produce the effects noticed. In 1790, Prof. Luigi Aloisio Galvani of Bologna, while conducting experiments on "animal electricity", noticed the twitching of a frog's legs in the presence of an electric machine. He observed that a frog's muscle, suspended on an iron balustrade by a copper hook passing through its dorsal column, underwent lively convulsions without any extraneous cause, the electric machine being at this time absent. To account for this phenomenon, Galvani assumed that electricity of opposite kinds existed in the nerves and muscles of the frog, the muscles and nerves constituting the charged coatings of a Leyden jar. Galvani published the results of his discoveries, together with his hypothesis, which engrossed the attention of the physicists of that time. The most prominent of these was Volta, professor of physics at Pavia, who contended that the results observed by Galvani were the result of the two metals, copper and iron, acting as electromotors, and that the muscles of the frog played the part of a conductor, completing the circuit. This precipitated a long discussion between the adherents of the conflicting views. One group agreed with Volta that the electric current was the result of an electromotive force of contact at
|
{"page_id": 5951576, "title": "History of electromagnetic theory"}
|
Astroinformatics is an interdisciplinary field of study involving the combination of astronomy, data science, machine learning, informatics, and information/communications technologies. The field is closely related to astrostatistics. Data-driven astronomy (DDA) refers to the use of data science in astronomy. Several outputs of telescopic observations and sky surveys are taken into consideration, and approaches related to data mining and big data management are used to analyze, filter, and normalize the data sets that are then used for classification, prediction, and anomaly detection by advanced statistical approaches, digital image processing, and machine learning. The output of these processes is used by astronomers and space scientists to study and identify patterns, anomalies, and movements in outer space and to formulate theories and make discoveries about the cosmos. == Background == Astroinformatics is primarily focused on developing the tools, methods, and applications of computational science, data science, machine learning, and statistics for research and education in data-oriented astronomy. Early efforts in this direction included data discovery, metadata standards development, data modeling, astronomical data dictionary development, data access, information retrieval, data integration, and data mining in the astronomical Virtual Observatory initiatives. Further development of the field, along with astronomy community endorsement, was presented to the National Research Council (United States) in 2009 in the astroinformatics "state of the profession" position paper for the 2010 Astronomy and Astrophysics Decadal Survey. That position paper provided the basis for the subsequent more detailed exposition of the field in the Informatics Journal paper Astroinformatics: Data-Oriented Astronomy Research and Education.
Astroinformatics as a distinct field of research was inspired by work in the fields of Geoinformatics, Cheminformatics, Bioinformatics, and through the eScience work of Jim Gray (computer scientist) at Microsoft Research, whose legacy was remembered and continued through the Jim Gray eScience Awards. Although the primary focus of astroinformatics is on
|
{"page_id": 28326718, "title": "Astroinformatics"}
|
=== Probability distribution === === Random variables === A random variable X is a measurable function X: Ω → S from the sample space Ω to another measurable space S called the state space. If A ⊂ S, the notation Pr(X ∈ A) is a commonly used shorthand for $P(\{\omega \in \Omega : X(\omega) \in A\})$. === Defining the events in terms of the sample space === If Ω is countable, we almost always define $\mathcal{F}$ as the power set of Ω, i.e. $\mathcal{F} = 2^{\Omega}$, which is trivially a σ-algebra and the biggest one we can create using Ω. We can therefore omit $\mathcal{F}$ and just write (Ω, P) to define the probability space. On the other hand, if Ω is uncountable and we use $\mathcal{F} = 2^{\Omega}$, we get into trouble defining our probability measure P because $\mathcal{F}$ is too "large", i.e. there will often be sets to which it will be impossible to assign a unique measure. In this case, we have to use a smaller σ-algebra $\mathcal{F}$, for example the Borel algebra of Ω, which is the smallest σ-algebra that makes all open sets measurable. === Conditional probability === Kolmogorov's definition of probability spaces gives rise to the natural concept of conditional probability. Every set A with non-zero probability (that is, P(A) > 0) defines another probability measure $P(B \mid A) = \frac{P(B \cap A)}{P(A)}$ on the space. This is usually pronounced as the "probability of B given A". For any
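The conditional-probability formula above can be illustrated on a small finite probability space with $\mathcal{F} = 2^{\Omega}$ (an illustrative sketch; the dice example is not from the original article):

```python
# Illustrative sketch: a finite probability space (Omega, P) with F = 2^Omega,
# and the conditional probability P(B | A) = P(B ∩ A) / P(A).
from fractions import Fraction

# Sample space: two fair dice; each outcome has probability 1/36.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P = {w: Fraction(1, 36) for w in omega}

def prob(event):
    """P(event) as the sum of the point masses it contains."""
    return sum(P[w] for w in event)

A = {w for w in omega if w[0] + w[1] >= 10}  # the sum is at least 10
B = {w for w in omega if w[0] == 6}          # the first die shows 6

cond = prob(B & A) / prob(A)  # P(B | A)
print(cond)                   # → 1/2
```

Because Ω is countable here, taking the full power set as the σ-algebra is unproblematic; the uncountable case is exactly where the Borel restriction discussed above becomes necessary.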
|
{"page_id": 43325, "title": "Probability space"}
|
as dams and bridge piles are therefore particularly sensitive. These reactions are also characterized by slow reaction kinetics, depending on environmental conditions such as temperature and relative humidity. They develop at a slow rate and may take several years before damages become apparent. Often a decade is needed to observe their harmful consequences. Protecting concrete structures from water contact may help to slow down the progression of the damages. == Chemical damages == === Carbonation === Carbon dioxide (CO2) from air (~ 412 ppm vol.) and bicarbonate (HCO3−) or carbonate (CO32−) anions dissolved in water react with the calcium hydroxide (Ca(OH)2, portlandite) produced by Portland cement hydration in concrete to form calcium carbonate (CaCO3) while releasing a water molecule in the following reaction: CO2 + Ca(OH)2 → CaCO3 + H2O Apart from the water molecule, the carbonation reaction is essentially the reverse of the process of calcination of limestone taking place in a cement kiln: CaCO3 → CaO + CO2 Carbonation of concrete is a slow and continuous process of atmospheric CO2 diffusing from the outer surface of concrete exposed to air into its mass and chemically reacting with the mineral phases of the hydrated cement paste. Carbonation slows down with increasing diffusion depth. Carbonation has two antagonistic effects on (1) the concrete strength, and (2) its durability: The precipitation of calcite filling the microscopic voids in the concrete pore space decreases the concrete matrix porosity: so, it increases the mechanical strength of concrete; At the same time carbonation consumes portlandite and therefore decreases the concrete alkalinity reserve buffer. Hyper-alkaline conditions (i.e., basic chemical conditions) characterized by a high pH (typically 12.5 – 13.5) are needed to passivate the steel surface of the reinforcement bars (rebar) and to protect them from corrosion. Below a pH of 10, the solubility
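Because carbonation is diffusion-controlled and slows with depth, its progress is commonly approximated by a square-root-of-time law, d(t) = K·√t; the following sketch is illustrative, and the coefficient K is a hypothetical value not taken from the text:

```python
# Illustrative sketch: diffusion-controlled carbonation depth modelled as
# d(t) = K * sqrt(t). K (mm per sqrt(year)) depends on concrete quality and
# exposure; the value below is hypothetical.
import math

def carbonation_depth_mm(K, t_years):
    return K * math.sqrt(t_years)

K = 4.0  # hypothetical coefficient for a fairly permeable concrete
for t in (1, 4, 25):
    print(t, carbonation_depth_mm(K, t))  # 1 → 4.0, 4 → 8.0, 25 → 20.0 mm
```

The square-root form captures the slowdown described above: quadrupling the exposure time only doubles the carbonation depth, which is why the danger to rebar (once the front reaches the steel's depth of cover) grows over decades rather than years.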
|
{"page_id": 24979028, "title": "Concrete degradation"}
|
usefulness. They generally involve either slower bit banging than a parallel port, or a microcontroller translating some command protocol to JTAG operations. Such serial adapters are also not fast, but their command protocols could generally be reused on top of higher-speed links. With all JTAG adapters, software support is a basic concern. Some vendors do not publish the protocols used by their JTAG adapter hardware, limiting their customers to the tool chains supported by those vendors. This is a particular issue for "smart" adapters, some of which embed significant amounts of knowledge about how to interact with specific CPUs. === Software development === Most development environments for embedded software include JTAG support. There are, broadly speaking, three sources of such software: Chip vendors may provide the tools, usually requiring a JTAG adapter they supply. Examples include FPGA vendors such as Xilinx and Altera, Atmel for its AVR8 and AVR32 product lines, and Texas Instruments for most of its DSP and micro products. Such tools tend to be highly featured and may be the only real option for highly specialized chips like FPGAs and DSPs. Lower-end software tools may be provided free of charge. The JTAG adapters themselves are not free, although sometimes they are bundled with development boards. Tool vendors may supply them, usually in conjunction with multiple chip vendors to provide cross-platform development support. ARM-based products have a particularly rich third-party market, and a number of those vendors have expanded to non-ARM platforms like MIPS and PowerPC. Tool vendors sometimes build products around free software like GCC and GDB, with GUI support frequently using Eclipse. JTAG adapters are sometimes sold along with support bundles. Open source tools exist. As noted above, GCC and GDB form the core of a good toolchain, and there are GUI environments to support them.
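The bit banging mentioned above reduces to toggling a clock while presenting TDI and sampling TDO once per cycle. The sketch below simulates that loop in pure Python against a stand-in 32-bit data register; the `SimulatedTap` class, the `shift_dr` helper, and the IDCODE value are illustrative stand-ins, not any vendor's API or a full TAP state machine.

```python
class SimulatedTap:
    """Minimal stand-in for a device's 32-bit data register: preloaded with
    an IDCODE, shifted out LSB-first on TDO while TDI enters at the MSB end."""
    def __init__(self, idcode):
        self.dr = idcode

    def clock(self, tdi):
        tdo = self.dr & 1                        # current LSB appears on TDO
        self.dr = (self.dr >> 1) | (tdi << 31)   # TDI shifts in at bit 31
        return tdo

def shift_dr(tap, bits, value):
    """Bit-bang `bits` bits of `value` into TDI, collecting TDO LSB-first,
    the way a parallel-port or GPIO adapter would drive the pins."""
    captured = 0
    for i in range(bits):
        tdo = tap.clock((value >> i) & 1)
        captured |= tdo << i
    return captured

tap = SimulatedTap(0x4BA00477)            # example IDCODE value (illustrative)
assert shift_dr(tap, 32, 0) == 0x4BA00477
```

A real adapter performs exactly this loop against hardware pins, which is why plain bit banging is slow: every bit costs at least one I/O round trip.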
|
{"page_id": 638112, "title": "JTAG"}
|
the pulsar decreased by about 1 part in a million. Statistically, nearly 1% of the long-term spin-down of the pulsar is reversed in spin-up glitches, a fraction that is also observed in other monitored pulsars. Careful estimation of the glitch activity and its uncertainty requires statistical tools beyond simple linear regression. == Research campaigns == The association of the Vela pulsar with the Vela Supernova Remnant, made by astronomers at the University of Sydney in 1968, was direct observational proof that supernovae form neutron stars. Studies conducted by Kellogg et al. with the Uhuru spacecraft in 1970–71 showed the Vela pulsar and Vela X to be separate but spatially related objects. The term Vela X was used to describe the entirety of the supernova remnant. Weiler and Panagia established in 1980 that Vela X was actually a pulsar wind nebula, contained within the fainter supernova remnant and driven by energy released by the pulsar. == Nomenclature == The pulsar is occasionally referred to as Vela X, although that name properly denotes the separate pulsar wind nebula. A radio survey of the Vela-Puppis region was made with the Mills Cross Telescope in 1956–57 and identified three strong radio sources: Vela X, Vela Y, and Vela Z. These sources are observationally close to the Puppis A supernova remnant, which is also a strong X-ray and radio source. Neither the pulsar nor either of the associated nebulae should be confused with Vela X-1, an observationally close but unrelated high-mass X-ray binary system. == In music == The emissions of Vela and the pulsar PSR B0329+54 were converted into audible sound by French composer Gérard Grisey and used in the piece Le noir de l'étoile (1989–90). == Gallery == == References == == External links == Vela Pulsar
|
{"page_id": 4087645, "title": "Vela Pulsar"}
|
that under suitable conditions those "life form[s] could be along the lines of [...] plant[s] or bacteria." Cuntz and Quarles collaborated again as co-authors on a study led by Oshina Jagtap published in 2021, which "explores the possibility of exomoons in a planetary system named HD 23079, located in Reticulum, a small constellation in the southern sky." This system is of interest because it contains a planet similar to Jupiter. Cuntz argues that since "Jupiter [is] a host to four planet-size moons (among many other moons), with two of them (Europa and Ganymede) having a significant chance of being habitable," gas giants in other star systems could similarly host an Earth-sized moon with the conditions for liquid water. Other work also focused on the possibility in principle of submoons and on exocomets. To assist astrophysicists in identifying habitable zones, Cuntz developed "BinHab, a new online tool that can be used to calculate the regions of binary systems favorable for life" in 2014. According to Cuntz, the program considers both "the amounts of stellar radiation, which provides a favorable planetary climate for life, and the gravitational influence of both stars on an existing planet." The interim dean of the UTA College of Science, James Grover, said this tool "holds enormous potential for those who study space in the search for life." Cuntz has worked with other researchers who "examined both the damaging and the favourable effects of ultraviolet (UV) radiation from stars on DNA molecules" and studied how it could affect "potential carbon-based extraterrestrial life forms in the habitable zones around other stars." A study conducted by Cuntz, Satoko Sato, and researchers from the University of Guanajuato in Mexico found that F-type star systems "may [...] be a good place to look for habitable planets" because they have a larger "area where conditions
|
{"page_id": 74380434, "title": "Manfred Cuntz"}
|
geology (Argentina, Bolivia, Chile and Peru) == See also == Asthenosphere – Highly viscous, ductile, and mechanically weak region of Earth's mantle Continent – Large geographical region identified by convention Craton – Old and stable part of the continental lithosphere Platform – Continental area covered by relatively flat or gently tilted, mainly sedimentary strata Shield – Large stable area of exposed Precambrian crystalline rock Earth's crust – Earth's outer shell of rock Continental crust – Layer of rock that forms the continents and continental shelves Oceanic crust – Uppermost layer of the oceanic portion of a tectonic plate Earth's mantle – Layer of silicate rock Lower mantle – The region from 660 to 2900 km below Earth's surface Upper mantle – Very thick layer of rock inside Earth Geochemistry – Science that applies chemistry to analyze geological systems Sial – Rocks rich in aluminium silicate minerals Sima – Rocks rich in magnesium silicate minerals Hydrosphere – Total amount of water on a planet Lithosphere – Outermost shell of a terrestrial-type planet or natural satellite Ocean – Body of salt water covering most of Earth Plate tectonics – Movement of Earth's lithosphere List of tectonic plate interactions – Types of plate boundaries Supercontinent – Landmass comprising more than one continental core, or craton Terrane – Fragment of crust formed on one tectonic plate and accreted to another == Notes and references == === Notes === === References === === Bibliography === North Andes plate Restrepo, Jorge Julián; Ordóñez Carmona, Oswaldo; Martens, Uwe; Correa, Ana María (2009). "Terrenos, complejos y provincias en la Cordillera Central de Colombia (Terrains, complexes and provinces in the central cordillera of Colombia)". Ingeniería Investigación y Desarrollo. 9: 49–56. Retrieved 2019-10-31. Fuck, Reinhardt A.; Brito Neves,
|
{"page_id": 494100, "title": "List of tectonic plates"}
|
stages and each stage is executed on a different device. While a stage is processing one batch, the preceding stage can work on the next batch. See also staged training. pjit A JAX function that splits code to run across multiple accelerator chips. The user passes a function to pjit, which returns a function that has the equivalent semantics but is compiled into an XLA computation that runs across multiple devices (such as GPUs or TPU cores). pjit enables users to shard computations without rewriting them by using the SPMD partitioner. As of March 2023, pjit has been merged with jit. Refer to Distributed arrays and automatic parallelization for more details. PLM #language #generativeAI Abbreviation for pre-trained language model. pmap A JAX function that executes copies of an input function on multiple underlying hardware devices (CPUs, GPUs, or TPUs), with different input values. pmap relies on SPMD. policy #rl In reinforcement learning, an agent's probabilistic mapping from states to actions. pooling #image Reducing a matrix (or matrixes) created by an earlier convolutional layer to a smaller matrix. Pooling usually involves taking either the maximum or average value across the pooled area. For example, suppose we have the following 3x3 matrix: A pooling operation, just like a convolutional operation, divides that matrix into slices and then slides those slices across the matrix by strides. For example, suppose the pooling operation divides the convolutional matrix into 2x2 slices with a 1x1 stride. As the following diagram illustrates, four pooling operations take place. Imagine that each pooling operation picks the maximum value of the four in that slice: Pooling helps enforce translational invariance in the input matrix. Pooling for vision applications is known more formally as spatial pooling. Time-series applications usually refer to pooling as temporal pooling. Less formally, pooling is often called subsampling or
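The 3x3 matrix referenced in the pooling entry is not shown here, so the sketch below substitutes assumed values to reproduce the operation described: a 3x3 input, 2x2 slices, a 1x1 stride, and four max-pooling operations producing a 2x2 result.

```python
def max_pool(matrix, size=2, stride=1):
    """Slide a size x size window over `matrix` by `stride` and keep the
    maximum value of each slice (max pooling)."""
    rows, cols = len(matrix), len(matrix[0])
    out = []
    for i in range(0, rows - size + 1, stride):
        row = []
        for j in range(0, cols - size + 1, stride):
            row.append(max(matrix[r][c]
                           for r in range(i, i + size)
                           for c in range(j, j + size)))
        out.append(row)
    return out

m = [[5, 3, 1],
     [8, 2, 5],
     [9, 4, 3]]          # assumed example values, not the glossary's matrix
print(max_pool(m))       # four pooling operations -> [[8, 5], [9, 5]]
```

Swapping `max` for an average would give average pooling; either way the output is smaller than the input, which is the subsampling effect the entry describes.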
|
{"source": 979, "title": "from dpo"}
|
red ball displayed. e “With arrow” refers to a signalized intersection at which the turning traffic has a red arrow displayed when an LRV is approaching. When a turn arrow traffic signal indication is used, TCRP Report 17 recommends that an exclusive turn lane be provided. Source: Adapted from Korve, Hans W., Jose I. Farran, Douglas M. Mansel, et al. Integration of Light Rail Transit into City Streets, Washington, DC, TCRP Report 17, TRB, 1996. For semi‑exclusive alignments, all traffic conflicting with LRV movements at intersections and crossings should be positively controlled through use of turn pockets, traffic signals, and active warning signs. [Figure: signal aspect chart for three-lens and two-lens LRT signals, showing STOP, PREPARE TO STOP (flashing), and GO indications for single-route and two- and three-route diversion configurations. All aspects (signal indications) are white; some aspects could be combined in a single housing; on two-lens signals the "Go" lens may be used in flashing mode to indicate "prepare to stop".] LRT Bar Signals The MUTCD Section 8C.11 provides the following guidance for use of LRT signals (refer to Figure 47): • LRT movements in semi-exclusive alignments at non-gated crossings that are equipped with traffic control signals should be controlled with LRT Bar Signals. • LRT signals that are used to control LRT movements only should display the signal indications illustrated in MUTCD Figure 8C-3. • Standard traffic control signal indications may be used instead of LRT signals to control
|
{"source": 2649, "title": "from dpo"}
|
```python
from transformers import pipeline

classifier = pipeline('zero-shot-classification', model='roberta-large-mnli')
```

You can then use this pipeline to classify sequences into any of the class names you specify. For example:

```python
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels)
```

## Uses

#### Direct Use

This fine-tuned model can be used for zero-shot classification tasks, including zero-shot sentence-pair classification (see the GitHub repo) and zero-shot sequence classification.

#### Misuse and Out-of-scope Use

The model should not be used to intentionally create hostile or alienating environments for people. In addition, the model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

## Risks, Limitations and Biases

**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021)) and the RoBERTa large model card. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training

#### Training Data

This model was fine-tuned on the Multi-Genre Natural Language Inference (MNLI) corpus. Also see the MNLI data card for more information.
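For intuition, zero-shot NLI classification works by scoring, for each candidate label, whether the sequence entails a hypothesis such as "This example is travel.", then normalizing across labels. The sketch below mimics only that final normalization step with made-up entailment logits; the real pipeline obtains the logits from the fine-tuned model.

```python
import math

def zero_shot_scores(entailment_logits):
    """Softmax per-label entailment logits into label probabilities,
    mirroring how the zero-shot pipeline ranks candidate labels."""
    exps = [math.exp(z) for z in entailment_logits.values()]
    total = sum(exps)
    return {label: e / total
            for label, e in zip(entailment_logits, exps)}

# Made-up entailment logits for the example sequence above
logits = {'travel': 3.1, 'cooking': -0.4, 'dancing': 0.2}
scores = zero_shot_scores(logits)
assert max(scores, key=scores.get) == 'travel'
```

The pipeline's actual return value also carries the original sequence and the labels sorted by score; the normalization shown here is what makes those scores comparable across labels.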
|
{"source": 4959, "title": "from dpo"}
|
r_Y2 = .588, and r_12 = .657. We determine by Eq. (3.2.4) the values of the standardized partial regression coefficients β_Y1.2 and β_Y2.1, and hence the full regression equation for the standardized variables. Once β_Y1.2 and β_Y2.1 have been determined, conversion to the original units is readily accomplished by rescaling each coefficient by the ratio of the standard deviation of Y to that of the corresponding IV. Substituting the values for our running example (Table 3.2.1), we find the raw-score coefficients. Because we are again using the original units, we need a constant B0 that serves to adjust for differences in means. This is calculated in the same way as with a single IV. The full (raw score) regression equation for estimating academic salary follows, and the resulting values are provided in the third column of Table 3.3.1 later in this chapter. The partial regression coefficients, B_Y1.2 = $983 and B_Y2.1 = $122, are the empirical estimates, respectively, of h and g, the causal effects of our independent variables accompanying the arrows in the causal diagram (Fig. 3.1.1). 3.3 MEASURES OF ASSOCIATION WITH TWO INDEPENDENT VARIABLES Just as there are partial regression coefficients for multiple regression equations (equations for predicting Y from more than one IV), so are there partial and multiple correlation coefficients that answer the same questions answered by the simple product moment correlation coefficient in the single IV case. These questions include the following: 1. How well does this group of IVs together estimate Y? 2. How much does any single variable add to the estimation of Y already accomplished by other variables? 3. When all other variables are held constant statistically, how much of Y does a given variable account for? 3.3.1 Multiple R and R2 Just as r is the measure of association between two variables, so the multiple R is the measure of association between a dependent
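For readers who want to follow the arithmetic, the standardized coefficients and the squared multiple correlation can be computed directly from the three correlations among Y and the two IVs. The numerical values below are illustrative, not the book's running example.

```python
def standardized_betas(r_y1, r_y2, r_12):
    """Partial regression coefficients for standardized variables (two IVs)."""
    b1 = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)
    b2 = (r_y2 - r_y1 * r_12) / (1 - r_12 ** 2)
    return b1, b2

def multiple_r2(r_y1, r_y2, r_12):
    """Squared multiple correlation of Y with both IVs: R^2 = b1*rY1 + b2*rY2."""
    b1, b2 = standardized_betas(r_y1, r_y2, r_12)
    return b1 * r_y1 + b2 * r_y2

# Illustrative correlations (assumed values)
r_y1, r_y2, r_12 = 0.50, 0.40, 0.30
r2 = multiple_r2(r_y1, r_y2, r_12)

# Equivalent closed form as a cross-check
alt = (r_y1**2 + r_y2**2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12**2)
assert abs(r2 - alt) < 1e-12
```

The cross-check works because substituting the beta formulas into b1·rY1 + b2·rY2 and simplifying yields the closed form algebraically.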
|
{"source": 6256, "title": "from dpo"}
|
is in the past. On the other hand, if life is found to be commonplace while technosignatures are absent, then this would increase the likelihood that the Great Filter lies in the future. Recently, paleobiologist Olev Vinn has suggested that the great filter may exist between steps 8 and 9 due to inherited behavior patterns (IBP) that initially occur in all intelligent biological organisms. These IBPs are incompatible with conditions prevailing in technological civilizations and could inevitably lead to the self-destruction of civilization in multiple ways. In a specific formulation named the "Berserker hypothesis", a filter exists between steps 8 and 9 in which each civilization is destroyed by a lethal Von Neumann probe created by a more advanced civilization. == Responses == There are many alternative scenarios that might allow for the evolution of intelligent life to occur multiple times without either catastrophic self-destruction or glaringly visible evidence. These are possible resolutions to the Fermi paradox: "They do exist, but we see no evidence". Other ideas include: it is too expensive to spread physically throughout the galaxy; Earth is purposely isolated; it is dangerous to communicate and hence civilizations actively hide, among others. Astrobiologists Dirk Schulze-Makuch and William Bains, reviewing the history of life on Earth, including convergent evolution, concluded that transitions such as oxygenic photosynthesis, the eukaryotic cell, multicellularity, and tool-using intelligence are likely to occur on any Earth-like planet given enough time. They argue that the Great Filter may be abiogenesis, the rise of technological human-level intelligence, or an inability to settle other worlds because of self-destruction or a lack of resources. Astronomer Seth Shostak of the SETI Institute argues that one can postulate a galaxy filled with intelligent extraterrestrial civilizations that have failed to colonize Earth.
Perhaps the aliens lacked the intent and purpose to colonize or
|
{"page_id": 1347945, "title": "Great Filter"}
|
following the 15-0 victory of the New National Party in the March general election, she independently exercised the royal prerogative by appointing members of the defeated National Democratic Congress to the Senate in order to provide a parliamentary opposition to the government. As patron of the Willie Redhead Foundation and the Grenada National Trust, she has been an outspoken supporter of the restoration of Grenada's architectural heritage, especially the viceregal residence of Government House and York House, the former seat of Parliament. Upon the death of Queen Elizabeth II in September 2022, La Grenade became the first Grenadian governor-general to have served under two monarchs. She said that the Queen served with "incomparable devotion", and that "her legacy of leadership and exemplary service shall live on indelibly". She also represented Grenada at the Queen's state funeral in the United Kingdom. Alongside Prime Minister Dickon Mitchell she represented Grenada at the Coronation of King Charles III in 2023. == Honours == Italy Two Sicilian Royal Family: Knight Grand Cross of Justice of the Two Sicilian Royal Sacred Military Constantinian Order of Saint George United Kingdom Dame Grand Cross with Collar of the Order of St Michael and St George Officer of the Order of the British Empire Dame of the Order of St John == References == == Sources == United Nations CEDAW/C/GRD/Q/1-5/Add.1 :Convention on the Elimination of All Forms of Discrimination against Women; Distr.: General, 4 November 2011; ADVANCE UNEDITED VERSION (p. 14) Shepherd, Verene A. (editor), Women in Caribbean History (Kingston: Ian Randle, 1999, ISBN 978-1558761896). == External links == Grenada Names First Female Governor General, Cecile La Grenade Convention on the Elimination of All Forms of Discrimination against Women
|
{"page_id": 39357240, "title": "Cécile La Grenade"}
|
interval of group level data. == See also == Ergodic process Ergodic theory, a branch of mathematics concerned with a more general formulation of ergodicity Ergodicity Loschmidt's paradox Poincaré recurrence theorem Lindy effect == References ==
|
{"page_id": 258980, "title": "Ergodic hypothesis"}
|
Client Puzzle Protocol (CPP) is a computer algorithm for use in Internet communication, whose goal is to make abuse of server resources infeasible. It is an implementation of a proof-of-work system (PoW). The idea of the CPP is to require all clients connecting to a server to correctly solve a mathematical puzzle before establishing a connection, if the server is under attack. After solving the puzzle, the client would return the solution to the server, which the server would quickly verify, or reject and drop the connection. The puzzle is made simple and easily solvable but requires at least a minimal amount of computation on the client side. Legitimate users would experience just a negligible computational cost, but abuse would be deterred: those clients that try to simultaneously establish a large number of connections would be unable to do so because of the computational cost (time delay). This method holds promise in fighting some types of spam as well as other attacks like denial-of-service. == See also == Computer security Intrusion-prevention system Proof-of-work system Hashcash Guided tour puzzle protocol == References == Juels, Ari; Brainard, John (1999). "Client Puzzles: A Cryptographic Countermeasure Against Connection Depletion Attacks" (PDF). In Kent, S. (ed.). Proceedings of NDSS '99 (Networks and Distributed Security Systems). pp. 151–165. == External links == RSA press release about client puzzles Client Puzzles: A Cryptographic Countermeasure Against Connection Depletion Attacks New Client Puzzle Outsourcing Techniques for DoS Resistance
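Juels and Brainard's construction is built from partial hash inversions; the sketch below substitutes a hashcash-style partial-preimage puzzle to show the shape of the protocol. The function names, the SHA-256 choice, and the difficulty value are illustrative, not the paper's exact scheme.

```python
import hashlib
import itertools

def make_puzzle(server_secret, client_id):
    """Server side: derive a per-connection challenge for this client."""
    return hashlib.sha256((server_secret + client_id).encode()).hexdigest()

def solve(challenge, difficulty_bits):
    """Client side: find a nonce so SHA-256(challenge||nonce) starts with
    `difficulty_bits` zero bits -- cheap for one connection, costly in bulk."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

def verify(challenge, nonce, difficulty_bits):
    """Server side: a single hash to check, so verification stays cheap."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

challenge = make_puzzle("server-secret", "client-42")
nonce = solve(challenge, difficulty_bits=12)   # ~2**12 hashes on average
assert verify(challenge, nonce, 12)
```

The asymmetry is the whole point: the client pays thousands of hash evaluations per connection while the server pays one, so a flood of connection attempts becomes expensive only for the attacker.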
|
{"page_id": 8412968, "title": "Client Puzzle Protocol"}
|
job security, status, working conditions, fringe benefits, job policies, and relations with co-workers) could only reduce employee dissatisfaction (not create satisfaction). Motivation factors (level of challenge, the work itself, responsibility, recognition, advancement, intrinsic interest, autonomy, and opportunities for creativity), however, could stimulate satisfaction within the employee, provided that minimum levels of the hygiene factors were reached. For an organization to take full advantage of Herzberg's theory, it must design jobs in such a way that motivators are built in, and thus are intrinsically rewarding. While the Motivation–Hygiene Theory was the first to focus on job content, it has not been strongly supported through empirical studies. Frederick Herzberg also came up with the concept of job enrichment, which expands jobs to give employees a greater role in planning, performing, and evaluating their work, thus providing the chance to satisfy their motivator needs. Some suggested ways would be to remove some management control and provide regular, continuous feedback. Proper job enrichment, therefore, involves more than simply giving the workers extra tasks to perform. It means expanding the level of knowledge and skills needed to perform the job. ==== Job characteristics theory ==== Shortly after Herzberg's Two-factor theory, Hackman and Oldham contributed their own, more refined, job-based theory: Job characteristics theory (JCT). JCT attempts to define the association between core job dimensions, the critical psychological states that occur as a result of these dimensions, the personal and work outcomes, and growth-need strength. Core job dimensions are the characteristics of a person's job. The core job dimensions are linked directly to the critical psychological states. The Job Characteristics Model (JCM), as designed by Hackman and Oldham, attempts to use job design to improve employee intrinsic motivation.
They show that any job can be described in terms of five key job characteristics: According to the
|
{"page_id": 35231573, "title": "Work motivation"}
|
WBE. For fire ant venom immunotherapy, the most common maintenance dose is 0.5 mL of a 1:200 (wt/vol) dilution. During the build-up phase, it is recommended that dosing is given weekly or biweekly, although some scientists suggest that rush protocols can be successful. It is recommended that patients going through immunotherapy receive treatment for three to five years, or even lifelong therapy, although there is no consensus as to how long an individual should be treated. == Stings to animals == The stings of the red imported fire ant in animals are painful, and may prove life-threatening. In dogs, stings from the red imported fire ant can cause pustular dermatosis, a condition where pustules appear in crops as a result of the ant sting. After getting stung, the immediate response consists of erythema and swelling. The pustules remain for approximately 24 hours, whereas in humans they can last for several days. In livestock, red imported ants mostly sting animals in regions with no hair, particularly around the ears, eyes, muzzle, the perineum and ventral portion of the abdomen. Newborn or young livestock can be blinded or killed when attacked by the ants. Healthy individuals are less likely to be attacked than weak or sick animals. A red papule and mild swelling occur, followed by a vesicopustule with a red halo developing within 24 to 48 hours. The eyes and eyelids are commonly damaged from the stings; in sheep and goats, an ophthalmic ointment containing antibiotics and corticosteroids can be used to treat the eyes, but this treatment is not recommended for horses. In non-domestic animals, cases of red imported fire ant stings in animals such as ferrets, moles, squirrels, white-tailed deer, cottontail rabbits, and newborn blackbucks have been reported, as well as lizards and screech owl nestlings. The aftermath of
|
{"page_id": 55137260, "title": "Toxicology of red imported fire ant venom"}
|
Z 229-15 is a ring galaxy in the constellation Lyra. It is around 390 million light-years from Earth. It has been referred to by NASA and other space agencies as hosting an active galactic nucleus, a quasar, and a Seyfert galaxy, classifications that overlap in some way. Z 229-15 was first discovered by astronomer D. Proust of the Meudon Observatory in 1990, who described the object as a possible obscured spiral galaxy featuring strong signs of absorption. Additionally, Z 229-15 was also observed with the 1.93-m telescope at the Observatoire de Haute-Provence. Z 229-15's classification has been debated for many years. Z 229-15 has been widely called a quasar, and if so it would be an unusually nearby one. Many space agencies, notably NASA, have called it a Seyfert galaxy that contains a quasar, and that, by definition, hosts an active galactic nucleus. This would make Z 229-15 a very uncommon galaxy in scientific terms. Z 229-15 has a supermassive black hole at its core. The mass of the black hole is log10(MBH) = 6.94 ± 0.14 in solar masses, i.e., roughly 8.7 million solar masses. The interstellar matter in Z 229-15 gets so hot that it releases a large amount of energy across the electromagnetic spectrum on a regular basis. == References ==
|
{"page_id": 76412112, "title": "Z 229-15"}
|
ELLIS - the European Laboratory for Learning and Intelligent Systems - is a pan-European AI network of excellence which focuses on fundamental science, technical innovation and societal impact. Founded in 2018, ELLIS builds upon machine learning as the driver for modern AI and aims to secure Europe’s sovereignty in this competitive field by creating a multi-centric AI research laboratory. ELLIS wants to ensure that the highest level of AI research is performed in the open societies of Europe and consists of 43 sites, 16 research programs and a pan-European PhD & Postdoc Program. == History == The organization was inspired by the Learning in Machines and Brains program of the Canadian Institute for Advanced Research. ELLIS was first proposed in an open letter to European governments in April 2018, which stated that Europe was not keeping up with the US and China. It urged that European governments act to provide opportunities and funding for world-class AI research in Europe. It was founded on 6 December 2018 at the Conference on Neural Information Processing Systems (NeurIPS). == Board == The members of the board are: Serge Belongie (University of Copenhagen & Cornell University) Nicolò Cesa-Bianchi (Università degli Studi di Milano) Florence d'Alché-Buc (Télécom Paris) Nada Lavrač (Jožef Stefan Institute) Neil D. Lawrence (University of Cambridge) Nuria Oliver (ELLIS Alicante Unit Foundation | Institute of Humanity-centric AI) Bernhard Schölkopf (Max Planck Institute for Intelligent Systems) Chairman Josef Sivic (Czech Technical University, École Normale Supérieure & INRIA) Sepp Hochreiter (Johannes Kepler University Linz) Board guest == ELLIS sites == ELLIS is creating a network of research sites distributed across Europe and Israel. Currently, there are 43 sites in 17 countries. The long-term goal is to establish a set of world-class ELLIS institutes, each acting as the core of a local AI ecosystem. ==
|
{"page_id": 62558598, "title": "European Laboratory for Learning and Intelligent Systems"}
|
Slow strain rate testing (SSRT), also called constant extension rate tensile testing (CERT), is a popular test used by research scientists to study stress corrosion cracking. It involves a slow (compared to conventional tensile tests) dynamic strain applied at a constant extension rate in the environment of interest. These test results are compared to those for similar tests in an environment known to be inert. A 50-year history of the SSRT has recently been published by its creator. The test has also been standardized, and two ASTM symposia have been devoted to it. == Effect of strain rate == The important characteristic of these tests is that the strain rate is low, for example extension rates selected in the range from 10−8 to 10−3 s−1. The selection of the strain rate is very important because the susceptibility to cracking may not be evident from the results of tests at too low or too high a strain rate. For numerous material-environment systems, strain rates in the range 10−5–10−6 s−1 are used; however, the observed absence of cracking at a given strain rate should not be taken as proof of immunity to cracking. There are known cases wherein the susceptibility to stress-corrosion cracking only became evident at strain rates as low as 10−8 or 10−9 s−1. Nevertheless, the method is very suitable for mechanistic studies, as well as for relative ranking of susceptibility to cracking of different alloys, or the aggressiveness of environments and the effect of temperature, pH, metallurgical condition, etc. The fastest strain rate that will still promote SCC for a given environment-material system is sometimes called the "critical strain rate"; some values are given in the table: == The importance of other test parameters == Electrode potential and other environmental factors such as temperature, pH and degree of aeration can greatly
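In practice, the chosen nominal strain rate is imposed by setting the machine's constant extension rate to the product of strain rate and specimen gauge length. A small sketch with assumed specimen dimensions:

```python
def crosshead_speed(strain_rate_per_s, gauge_length_mm):
    """Constant extension rate (mm/s) needed to impose a nominal strain
    rate (1/s) on a specimen of the given gauge length."""
    return strain_rate_per_s * gauge_length_mm

# e.g. a 25.4 mm gauge length tested at 1e-6 1/s, within the common
# 10^-5 to 10^-6 1/s regime mentioned above (dimensions assumed)
speed = crosshead_speed(1e-6, 25.4)   # 2.54e-05 mm/s
```

At such speeds a single test can run for days, which is one practical reason the even slower rates (10−8 to 10−9 s−1) are reserved for cases where cracking would otherwise be missed.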
|
{"page_id": 35333512, "title": "Slow strain rate testing"}
|
Heliatek is a German company headquartered in Dresden. The company develops and produces lightweight, flexible solar power films that are stable over a wide temperature range. == History == The company was spun off in July 2006 from the Technical University of Dresden (IAPP) and the University of Ulm. The company's founding brought together the fields of organic optoelectronics and organic oligomer synthesis. In 2011 the company was recognized beyond specialist circles when it won the German Future Prize. The World Economic Forum named the firm a Technology Pioneer in 2015. == References == == External links == Can Heliatek Get Organic PV to Market? Heliatek achieves new world record for organic solar cells SPIE TV: Karl Leo: Efficiency improvements are key to future of organic photovoltaics
|
{"page_id": 34061920, "title": "Heliatek"}
|
in banking was conducted in 2017, and a report is available for purchase (see Tiwan, 2017). The key findings of this report are as follows: • AI technologies in banking include all those listed in Section 2.7 and several other analytical tools (Chapters 3 to 11 of this book). • These technologies help banks improve both their front-office and back-office operations. • Major activities are the use of chatbots to improve customer service and communication with customers (see Chapter 12), and robo advising is used by some financial institutions (see Chapter 12). • Facial recognition is used for safer online banking. • Advanced analytics helps customers with investment decisions. For examples of this help, see Nordrum (2017), E. V. Staff (2017), and Agrawal (2018). • AI algorithms help banks identify and block fraudulent activities including money laundering. • AI algorithms can help in assessing the creditworthiness of loan applicants. (For a case study of an application of AI in credit screening, see ai-toolkit.blogspot.com/2017/01/case-study-artificial-intelligence-in.html.) Illustrative AI Applications in Banking The following are banking institutions that use AI: • Banks are using AI machines, such as IBM Watson, to step up employee surveillance. This is important in preventing illegal activities such as those that occurred at Wells Fargo, the financial services and banking company. For details, see information-management.com/articles/banks-using-algorithms-to-step-up-employee-surveillance. • Banks use applications for tax preparation. H&R Block is using IBM Watson to review tax returns. The program makes sure that individuals pay only what they owe. Using interactive conversations, the machine attempts to lower people’s tax bills. • Answering many queries in real time. For example, Rainbird Co. (rainbird.ai/) is an AI vendor that trains machines to answer customers’ queries. Millions of customers’ questions keep bank employees busy.
Bots assist staff members to quickly find the appropriate answers
|
{"source": 1196, "title": "from dpo"}
|
their audience and build a community around their content. In these cases, optimizing for LSI may be less important, as the focus may be more on creating content. If LSI is no longer being used by search engines, it would have significant implications for the way that content is created and optimized for search engines. One way that content creators and marketers could adapt to the lack of LSI is by focusing more on keyword optimization. Without LSI, search engines may place more emphasis on the presence of specific keywords in content in order to understand the topic and relevance of the content. This could mean that content creators would need to be more strategic in their use of keywords, ensuring that they are placed in prominent locations within the content and are used in a natural and relevant way. Another way that content creators and marketers could adapt to the lack of LSI is by using more long-tail keywords. Long-tail keywords
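The whole-word keyword counting implied by the passage above can be sketched in a few lines; this is an illustrative helper, not part of any real SEO tool, and the function name `keyword_frequencies` and the sample text are assumptions of this sketch:

```python
import re

def keyword_frequencies(text, keywords):
    """Count case-insensitive, whole-word occurrences of each
    (possibly multi-word) keyword in the given text."""
    lowered = text.lower()
    return {
        kw: len(re.findall(r"\b" + re.escape(kw.lower()) + r"\b", lowered))
        for kw in keywords
    }

doc = "Long-tail keywords target specific queries; generic keywords do not."
print(keyword_frequencies(doc, ["long-tail keywords", "keywords"]))
```

The `\b` word boundaries prevent partial-word matches, so "keyword" would not be counted inside "keywording".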
|
{"source": 3642, "title": "from dpo"}
|
S = X_1 + \cdots + X_n is the number of successes in n Bernoulli trials. By (9.4) and Example 9.1, S has the moment generating function E[e^{tS}] = (pe^t + q)^n = \sum_{k=0}^{n} \binom{n}{k} p^k q^{n-k} e^{tk}. The right-hand form shows this to be the moment generating function of a distribution with mass \binom{n}{k} p^k q^{n-k} at the integer k, 0 \le k \le n. The uniqueness just established therefore yields the standard fact that P[S = k] = \binom{n}{k} p^k q^{n-k}. The cumulant generating function of X (or of its distribution) is (9.5) C(t) = \log M(t) = \log E[e^{tX}]. (Note that M(t) is strictly positive.) Since C' = M'/M and C'' = (MM'' - (M')^2)/M^2, and since M(0) = 1, (9.6) C(0) = 0, C'(0) = E[X], C''(0) = Var[X]. Let m_k = E[X^k]. The leading term in (9.2) is m_0 = 1, and so a formal expansion of the logarithm in (9.5) gives (9.7) C(t) = \sum_{u=1}^{\infty} \frac{(-1)^{u+1}}{u} \Big( \sum_{k=1}^{\infty} \frac{m_k}{k!} t^k \Big)^u. Since M(t) \to 1 as t \to 0, this expression is valid for t in some neighborhood of 0. By the theory of series, the powers on the right can be expanded and terms with a common factor t^i collected together. This gives an expansion (9.8) C(t) = \sum_{i=1}^{\infty} \frac{c_i}{i!} t^i, valid in some neighborhood of 0. The c_i are the cumulants of X. Equating coefficients in the expansions (9.7) and (9.8) leads to c_1 = m_1 and c_2 = m_2 - m_1^2, which checks with (9.6). Each
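The identities C'(0) = E[X] and C''(0) = Var[X] can be checked numerically for the binomial case discussed above. A minimal sketch — the helper names and the finite-difference step h are choices of this example, not from the text:

```python
import math

def mgf(t, n, p):
    """Moment generating function of Binomial(n, p): M(t) = (p e^t + q)^n."""
    q = 1.0 - p
    return (p * math.exp(t) + q) ** n

def cumulant_derivs(n, p, h=1e-5):
    """First two derivatives of C(t) = log M(t) at t = 0, by central differences."""
    C = lambda t: math.log(mgf(t, n, p))
    c1 = (C(h) - C(-h)) / (2 * h)
    c2 = (C(h) - 2 * C(0.0) + C(-h)) / h ** 2
    return c1, c2

c1, c2 = cumulant_derivs(n=10, p=0.3)
print(c1, c2)  # close to E[S] = n*p = 3.0 and Var[S] = n*p*q = 2.1
```

The finite differences approximate C'(0) and C''(0); for Binomial(10, 0.3) they should land near np = 3 and npq = 2.1, matching (9.6).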
|
{"source": 5649, "title": "from dpo"}
|
fact that a belongs to the multiplicative group {\displaystyle (\mathbb {Z} /m\mathbb {Z} )} if and only if a is coprime to m. Therefore, a modular multiplicative inverse can be found directly: {\displaystyle a^{\phi (m)-1}\equiv a^{-1}{\pmod {m}}.} In the special case where m is a prime, {\displaystyle \phi (m)=m-1} and a modular inverse is given by {\displaystyle a^{-1}\equiv a^{m-2}{\pmod {m}}.} Computing the inverse by modular exponentiation in this way can be protected from side-channel attacks. For this reason, the standard implementation of Curve25519 uses this technique to compute an inverse. It is possible to compute the inverse of multiple numbers a i, modulo a common m, with a single invocation of the Euclidean algorithm and three multiplications per additional input. The basic idea is to form the product of all the a i, invert that, then multiply by a j for all _j_ ≠ _i_ to leave only the desired _a_−1 _i_. More specifically, the algorithm is (all arithmetic performed modulo m): 1. Compute the prefix products ... Covert (exogenous) orienting occurs when a salient environmental change causes a shift in attention, and overt (endogenous) orienting occurs when the individual makes a conscious decision to orient attention to a stimulus. During a covert orientation of attention, the individual does not physically move, and during an overt orientation of attention the individual's eyes and head physically move in the direction of the stimulus. Information acquired through covert and overt visual orientations travels through the norepinephrine system, indirectly affecting the ventral visual pathway. The four specific brain regions involved in this process are the frontal eye field, the temporoparietal junction, the pulvinar, and the superior colliculus. The frontal eye field is involved in goal-driven eye movements and can inhibit stimulus-driven eye movements. The temporoparietal junction appears to be involved in location-cueing tasks, and individuals with lesions in this area have difficulty with attentional reorienting.
The pulvinar is located posterior to the thalamus and its role in the orienting system is still being researched; however, it is thought to be involved in covert orienting. Finally, the superior colliculus provides information about the location of the stimuli to which attention is directed. == References ==
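Returning to the modular-inverse passage earlier in this row: the prefix-product trick (invert the product of all a_i once, then peel off one factor at a time) can be sketched as follows. This sketch assumes a prime modulus so the single inversion can be done with Fermat's little theorem via `pow(x, m - 2, m)` instead of the Euclidean algorithm the text mentions; the name `batch_inverse` is a label chosen for this example:

```python
def batch_inverse(nums, m):
    """Invert each a_i modulo prime m using one modular exponentiation
    and three multiplications per additional element."""
    # Prefix products: prefix[i] = a_0 * a_1 * ... * a_{i-1} mod m.
    prefix = [1]
    for a in nums:
        prefix.append(prefix[-1] * a % m)
    # Invert the total product once (valid because m is prime).
    inv = pow(prefix[-1], m - 2, m)
    # Walk backwards: inv currently holds (a_0 ... a_i)^-1; multiplying by
    # prefix[i] = a_0 ... a_{i-1} leaves a_i^-1, then fold a_i back into inv.
    out = [0] * len(nums)
    for i in reversed(range(len(nums))):
        out[i] = inv * prefix[i] % m
        inv = inv * nums[i] % m
    return out

print(batch_inverse([3, 5, 7], 11))  # → [4, 9, 8]
```

Each output satisfies out[i] * nums[i] ≡ 1 (mod 11), e.g. 3 · 4 = 12 ≡ 1.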
|
{"page_id": 8015297, "title": "Orienting system"}
|
accuse Socrates of "stunning" people with his puzzling questions, in a manner similar to the way the torpedo fish stuns with electricity. Scribonius Largus, a Roman physician, recorded the use of torpedo fish for treatment of headaches and gout in his Compositiones Medicae of 46 AD. In the 1770s the electric organs of the torpedo ray were the subject of Royal Society papers by John Walsh, and John Hunter. These appear to have influenced the thinking of Luigi Galvani and Alessandro Volta – the founders of electrophysiology and electrochemistry. Henry Cavendish proposed that electric rays use electricity; he built an artificial ray consisting of fish-shaped Leyden jars to successfully mimic their behaviour in 1773. === In folklore === The torpedo fish, or electric ray, appears continuously in premodern natural histories as a magical creature, and its ability to numb fishermen without seeming to touch them was a significant source of evidence for the belief in occult qualities in nature during the ages before the discovery of electricity as an explanatory mode. == Bioelectricity == The electric rays have specialised electric organs. Many species of rays and skates outside the family have electric organs in the tail; however, the electric ray has two large kidney-shaped electric organs on each side of its head, where current passes from the lower to the upper surface of the body. The nerves that signal the organ to discharge branch repeatedly, then attach to the lower side of each plaque in the batteries. These are composed of hexagonal columns, closely packed in a honeycomb formation. Each column consists of 500 to more than 1000 plaques of modified striated muscle, adapted from the branchial (gill arch) muscles. In marine fish, these batteries are connected as a parallel circuit, whereas freshwater batteries are arranged in series. This
|
{"page_id": 437961, "title": "Electric ray"}
|
Epsilon form the cross beam. The nova P Cygni was then considered to be the body of Christ. == Features == There is an abundance of deep-sky objects, with many open clusters, nebulae of various types and supernova remnants found in Cygnus due to its position on the Milky Way. Its molecular clouds form the Cygnus Rift dark nebula constellation, comprising one end of the Great Rift along the Milky Way's galactic plane. The rift begins around the Northern Coalsack, and partially obscures the larger Cygnus molecular cloud complex behind it, which the North America Nebula is part of. === Stars === Bayer catalogued many stars in the constellation, giving them the Bayer designations from Alpha to Omega and then using lowercase Roman letters to g. John Flamsteed added the Roman letters h, i, k, l and m (these stars were considered informes by Bayer as they lay outside the asterism of Cygnus), but these were dropped by Francis Baily. There are several bright stars in Cygnus. α Cygni, called Deneb, is the brightest star in Cygnus. It is a white supergiant star of spectral type A2Iae that varies between magnitudes 1.21 and 1.29, one of the largest and most luminous A-class stars known. It is located about 2600 light-years away. Its traditional name means "tail" and refers to its position in the constellation. Albireo, designated β Cygni, is a celebrated binary star among amateur astronomers for its contrasting hues. The primary is an orange-hued giant star of magnitude 3.1 and the secondary is a blue-green hued star of magnitude 5.1. The system is 430 light-years away and is visible in large binoculars and all amateur telescopes. γ Cygni, traditionally named Sadr, is a yellow-tinged supergiant star of magnitude 2.2, 1800 light-years away. Its traditional name means "breast" and refers to
|
{"page_id": 6421, "title": "Cygnus (constellation)"}
|
OPPPP (1-(3-Oxo-3-phenylpropyl)-4-phenyl-4-piperidinyl propionate) is one of several compounds derived from MPPP, the reversed ester of the opioid analgesic pethidine, which were sold as designer drugs in the 1980s, but have been rarely encountered by law enforcement since the passage of the Federal Analogue Act in 1986. In animal studies it was found to be around 1000× the potency of pethidine, making it several times the potency of fentanyl and with similar hazards of respiratory depression and overdose. It is closely related to numerous compounds made by Janssen et al. for which the structure-activity relationship is well established. == See also == List of fentanyl analogues PEPAP LY-88329 == References ==
|
{"page_id": 66356142, "title": "OPPPP"}
|
analytical hierarchy. The higher-order counterparts of the major subsystems of second-order arithmetic generally prove the same second-order sentences (or a large subset) as the original second-order systems. For instance, the base theory of higher-order reverse mathematics, called RCAω0, proves the same sentences as RCA0, up to language. As noted in the previous paragraph, second-order comprehension axioms easily generalize to the higher-order framework. However, theorems expressing the compactness of basic spaces behave quite differently in second- and higher-order arithmetic: on one hand, when restricted to countable covers/the language of second-order arithmetic, the compactness of the unit interval is provable in WKL0 (introduced in the next section). On the other hand, given uncountable covers/the language of higher-order arithmetic, the compactness of the unit interval is only provable from (full) second-order arithmetic. Other covering lemmas (e.g. due to Lindelöf, Vitali, Besicovitch, etc.) exhibit the same behavior, and many basic properties of the gauge integral are equivalent to the compactness of the underlying space. == The big five subsystems of second-order arithmetic == Second-order arithmetic is a formal theory of the natural numbers and sets of natural numbers. Many mathematical objects, such as countable rings, groups, and fields, as well as points in effective Polish spaces, can be represented as sets of natural numbers, and modulo this representation can be studied in second-order arithmetic. Reverse mathematics makes use of several subsystems of second-order arithmetic. A typical reverse mathematics theorem shows that a particular mathematical theorem T is equivalent to a particular subsystem S of second-order arithmetic over a weaker subsystem B. This weaker system B is known as the base system for the result; in order for the reverse mathematics result to have meaning, this system must not itself be able to prove the mathematical theorem T.
Steve Simpson describes five particular subsystems of second-order
|
{"page_id": 326365, "title": "Reverse mathematics"}
|
A f {\displaystyle A_{f}} = Final Area For each of these methods of quantifying, one must take measurements of both the initial and final dimensions of the rock sample. For Elongation, the measurement is a uni-dimensional initial and final length, the former measured before any stress is applied and the latter measuring the length of the sample after fracture occurs. For Area, it is strongly preferable to use a rock that has been cut into a cylindrical shape before stress application so that the cross-sectional area of the sample can be taken. Cross-Sectional Area of a Cylinder = Area of a Circle = A = π r 2 {\displaystyle A=\pi r^{2}} Using this, the initial and final areas of the sample can be used to quantify the % change in the area of the rock. == Deformation == Any material is shown to be able to deform ductilely or brittlely, in which the type of deformation is governed by both the external conditions around the rock and the internal conditions of the sample. External conditions include temperature, confining pressure, presence of fluids, etc. while internal conditions include the arrangement of the crystal lattice, the chemical composition of the rock sample, the grain size of the material, etc. Ductilely deformative behavior can be grouped into three categories: elastic, viscous, and crystal-plastic deformation. Elastic deformation Elastic deformation is deformation which exhibits a linear stress-strain relationship (quantified by Young's modulus) and is derived from Hooke's law of spring forces (see Fig. 1.2). In elastic deformation, objects show no permanent deformation after the stress has been removed from the system and return to their original state. σ = E ϵ {\displaystyle \sigma =E\epsilon } Where: σ {\displaystyle \sigma } = Stress (In Pascals) E {\displaystyle E} = Young's Modulus (In Pascals) ϵ {\displaystyle \epsilon } = Strain
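The area-change and Hooke's-law formulas above translate directly into code. A minimal sketch with hypothetical helper names (the function names, sample radii, and the Young's modulus value are illustrative assumptions; units are pascals, with radii in any consistent length unit):

```python
import math

def percent_area_reduction(r_initial, r_final):
    """Percent change in cross-sectional area of a cylindrical sample, A = pi r^2."""
    a_initial = math.pi * r_initial ** 2
    a_final = math.pi * r_final ** 2
    return 100.0 * (a_initial - a_final) / a_initial

def elastic_stress(young_modulus_pa, strain):
    """Hooke's law for the elastic regime: sigma = E * epsilon."""
    return young_modulus_pa * strain

print(percent_area_reduction(10.0, 8.0))  # ≈ 36.0 (% reduction in area)
print(elastic_stress(50e9, 0.001))        # ≈ 5.0e7 Pa
```

Note that because area scales with r², a 20% reduction in radius gives a 36% reduction in area, and the stress formula is valid only while the stress-strain relationship remains linear.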
|
{"page_id": 31557892, "title": "Ductility (Earth science)"}
|