| text (string, lengths 5–10.5k) | source (string, lengths 33–146) |
|---|---|
subject to comprehensive sanctions? No. In contrast to sanctions programs administered and enforced by OFAC with regard to North Korea, Cuba, Iran, Syria, and the Crimea and so-called Donetsk People’s Republic and Luhansk People’s Republic regions of Ukraine, there are no comprehensive sanctions on Afghanistan. Therefore, there are no OFAC-administered sanctions that prohibit the export or reexport of goods or services to Afghanistan, moving or sending money into and out of Afghanistan, or activities in Afghanistan, provided that such transactions or activities do not involve sanctioned individuals, entities, or property in which sanctioned individuals and entities have an interest. Certain Afghanistan-related individuals and entities are included on OFAC’s List of Specially Designated Nationals and Blocked Persons (SDN List), most notably the Taliban and the Haqqani Network. The Taliban are designated as a Specially Designated Global Terrorist (SDGT) under Executive Order (E.O.) 13224. The Haqqani Network is designated as an SDGT under E.O. 13224 and a Foreign Terrorist Organization (FTO) under section 219 of the Immigration and Nationality Act. Transactions or activities by U.S. persons that involve these entities are generally prohibited. In addition, OFAC has issued GLs 14, 15, 16, 17, 18, 19, and 20 under the Global Terrorism Sanctions Regulations, 31 CFR part 594 (GTSR), the Foreign Terrorist Organizations Sanctions Regulations, 31 CFR part 597 (FTOSR), and E.O. 13224, as amended. For a consolidated list of all relevant General Licenses and FAQs, please see OFAC’s humanitarian Fact Sheet, “Provision of Humanitarian Assistance to Afghanistan and Support for the Afghan People,” (this content was updated on April 13, 2022) that provides an overview of the relevant authorizations and guidance related to U.S. sanctions on the Taliban and the Haqqani Network. Date Updated: February 25, 2022 Afghanistan-Related Sanctions 952. 
When operating in Afghanistan, how can I tell who is a member
|
{"source": 54, "title": "from dpo"}
|
fetch bandwidth basic considerations, 202–203 branch-target buffers, 203–206, 204 integrated units, 207–208 return address predictors, 206–207 Intel Core i7, 236–241 limitation studies, 213–221 microarchitectural techniques case study, 247–254 MIPS scoreboarding, C-77 to C-79 multicore performance/energy efficiency, 404 multicore processor performance, 400 multiple-issue processors, L-30 multiple issue/static scheduling, 192–196 multiprocessor importance, 344 multithreading, basic considerations, 223–226 multithreading history, L-34 to L-35 name dependences, 152–153 perfect processor, 215 pipeline scheduling/loop unrolling, 157–162 processor clock rates, 244 realizable processor limitations, 216–218 RISC development, 2 SMT on superscalar processors, 230–232 speculation advantages/disadvantages, 210–211 speculation and energy efficiency, 211–212 speculation support, 208–210 speculation through multiple branches, 211 speculative execution, 222–223 Sun T1 fine-grained multithreading effectiveness, 226–229 switch to DLP/TLP/RLP, 4–5 TI 320C6x DSP, E-8 value prediction, 212–213 Instruction path length, processor performance time, 49 Instruction prefetch integrated instruction fetch units, 208 miss penalty/rate reduction, 91–92 SPEC benchmarks, 92 Instruction register (IR) basic MIPS pipeline, C-35 dynamic scheduling, 170 MIPS implementation, C-31 Instruction set architecture (ISA), see also Intel 80x86 processors; Reduced Instruction Set Computer (RISC) addressing modes, A-9 to A-10 architect-compiler writer relationship, A-29 to A-30 ARM Cortex-A8, 114 case studies, A-47 to A-54 class code sequence example, A-4 classification, A-3 to A-7 code size-compiler considerations, A-43 to A-44 compiler optimization and performance, A-27 compiler register allocation, A-26 to A-27 compiler structure, A-24 to A-26 compiler technology and architecture decisions, A-27 to A-29 compiler types and classes, A-28 complications, C-49 to C-51 computer architecture definition, 
L-17 to L-18 control flow instructions addressing modes, A-17 to A-18 basic considerations, A-16 to A-17, A-20 to A-21 conditional branch options, A-19 procedure invocation options, A-19 to A-20 Cray X1, G-21 to G-22 data access distribution example, A-15 definition and types, 11–15 displacement addressing mode, A-10 encoding considerations, A-21 to A-24, A-22 , A-24 first
|
{"source": 2299, "title": "from dpo"}
|
(16 messages, latest: Jan 04 2022 at 03:37) Contents of lean4 nix shell (2 messages, latest: Jan 03 2022 at 22:52) Treating Float as reals, inconsistent? (46 messages, latest: Jan 03 2022 at 22:04) ✔ help with two proofs (7 messages, latest: Jan 03 2022 at 21:23) help with two proofs (11 messages, latest: Jan 03 2022 at 21:09) ✔ debugging simp (5 messages, latest: Jan 03 2022 at 17:00) Shallow clones (1 message, latest: Jan 02 2022 at 15:09) Conv pattern and typeclasses problem (1 message, latest: Jan 02 2022 at 10:42) Slow sort then dedup (18 messages, latest: Dec 31 2021 at 02:46) formalizing music theory (10 messages, latest: Dec 30 2021 at 23:45) ✔ Spawned >1000 tasks. Have 4 cores. Program only uses 2. (25 messages, latest: Dec 30 2021 at 17:40) Understanding partial (2 messages, latest: Dec 30 2021 at 17:08) Use proof in custom tactic (14 messages, latest: Dec 30 2021 at 13:38) newtype (10 messages, latest: Dec 29 2021 at 00:06) ✔ Lake not present (6 messages, latest: Dec 28 2021 at 23:04) Invalid parser (8 messages, latest: Dec 28 2021 at 17:09) Syntax from object (4 messages, latest: Dec 28 2021 at 13:04) mkSimpAttr (20 messages, latest: Dec 27 2021 at 19:06) [PSA] don't modify the environment in ParametricAttribute (1 message, latest: Dec 27 2021 at 16:13) What comes after macros? (11 messages, latest: Dec 27 2021 at 16:10) Derived Ord instance (1 message, latest: Dec 27 2021 at 03:55) adding library dependencies to compiler (8 messages, latest: Dec 26 2021 at 13:57) ✔ CoqSearch and friends? (1 message, latest: Dec 25 2021 at 19:41) ✔ Disable prelude? (2 messages, latest: Dec 25 2021 at 19:38) CoqSearch and friends? (6 messages, latest: Dec 25 2021 at 16:23) Disable prelude? (2 messages, latest: Dec
|
{"source": 4218, "title": "from dpo"}
|
Springer-Verlag, Berlin. p. 6. doi:10.1007/978-3-642-58169-4. ISBN 3-540-15286-5. MR 1083352. _The Elements of Algebra in Ten Books_ (1832). "Theoria residuorum biquadraticorum". _Comm. Soc. Reg. Sci. Gött. Rec._ **4**. Reprinted in Gauss, C. F. (2011). "Theoria residuorum biquadraticorum commentatio prima". _Werke_. Vol. 2. Cambridge Univ. Press. pp. 65–92. doi:10.1017/CBO9781139058230.004. ISBN 9781139058230. and Gauss, C. F. (2011). "Theoria residuorum biquadraticorum commentatio secunda". _Werke_. Vol. 2. Cambridge Univ. Press. pp. 93–148. doi:10.1017/CBO9781139058230.005. ISBN 9781139058230. 44. **^** in Lejeune Dirichlet 1894 (1829). "Mémoire sur la résolution des équations numériques". _Bull. des sciences de Férussac_ (in French). **11**: 419–422. 49. **^** Forcade, R. W. (1979). "Generalization of the Euclidean algorithm for real numbers to all dimensions higher than two": 912–914. doi:10.1090/S0273-0979-1979-14691-3. MR 0546316. "Jazzing Up Euclid's Algorithm" (16 May 2000). "The Best of the 20th Century: Editors Name Top 10 Algorithms". Society for Industrial and Applied Mathematics. Archived from the original. "A game based on the Euclidean algorithm and a winning strategy for it". _Math. Gaz._ **53**
|
{"source": 6119, "title": "from dpo"}
|
$D = D_{0}\left(\frac{p_{0}}{p}\right)\left(\frac{T}{T_{0}}\right)^{3/2}$, where $p_{0}$ is the standard pressure, $T_{0}$ is the standard temperature, and $D_{0}$ is the standard diffusivity. The equation shows that increasing the temperature or decreasing the pressure increases the diffusivity. Fick's first law predicts the flux of the reactants to the substrate and of the products away from the substrate: $J = -D_{i}\left(\frac{dc_{i}}{dx}\right)$, where $x$ is the thickness $\delta$ and $c_{i}$ is the first reactant's concentration. By the ideal gas law $pV = nRT$, the concentration of the gas can be expressed by its partial pressure: $J = -D_{i}\left(\frac{p_{i}-p_{0}}{\delta RT}\right)$, where $R$ is the gas constant and $\frac{p_{i}-p_{0}}{\delta}$ is the partial pressure gradient. As a result, Fick's first law tells us we can use a partial pressure gradient to control the diffusivity and thus the growth of thin films of semiconductors. In many realistic situations, the simple Fick's law is not an adequate formulation for the semiconductor problem: it applies only under certain boundary conditions, for example constant-source concentration diffusion, limited-source concentration diffusion, or moving-boundary diffusion (where the junction depth keeps moving into the substrate). ==== Invalidity of Fickian diffusion ==== Even though Fickian diffusion was used to model diffusion processes in semiconductor manufacturing (including CVD reactors) in the early days, it often fails to describe diffusion in advanced semiconductor nodes (< 90 nm).
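The pressure–temperature scaling above is easy to sandbox. This minimal Python sketch applies $D = D_{0}(p_{0}/p)(T/T_{0})^{3/2}$; the reference values are hypothetical placeholders, not data from the source:

```python
def diffusivity(D0, p, T, p0=101325.0, T0=300.0):
    """Scale a reference diffusivity D0 (measured at p0, T0) to pressure p
    and temperature T via D = D0 * (p0 / p) * (T / T0) ** (3 / 2)."""
    return D0 * (p0 / p) * (T / T0) ** 1.5

# Halving the pressure doubles D; quadrupling T multiplies D by 4**1.5 = 8.
D_ref = 1.0e-5  # m^2/s, a hypothetical reference value
assert abs(diffusivity(D_ref, 101325.0 / 2, 300.0) - 2 * D_ref) < 1e-12
```

This makes the qualitative claim in the text concrete: lowering the chamber pressure or raising the temperature both increase the diffusivity, and hence the transport-limited growth rate.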
|
{"page_id": 11671, "title": "Fick's laws of diffusion"}
|
to lattice sites. Then the free energy is $f(\beta,h) = -\lim_{L\to\infty}\frac{1}{\beta L}\ln Z(\beta) = -\frac{1}{\beta}\ln\left(e^{\beta J}\cosh\beta h + \sqrt{e^{2\beta J}(\sinh\beta h)^{2} + e^{-2\beta J}}\right)$, and the spin-spin correlation (i.e. the covariance) is $\langle\sigma_{i}\sigma_{j}\rangle - \langle\sigma_{i}\rangle\langle\sigma_{j}\rangle = C(\beta)e^{-c(\beta)|i-j|}$, where $C(\beta)$ and $c(\beta)$ are positive functions for $T > 0$. For $T \to 0$, though, the inverse correlation length $c(\beta)$ vanishes. ===== Proof ===== The proof of this result is a simple computation. If $h = 0$, it is very easy to obtain the free energy in the case of free boundary condition, i.e. when $H(\sigma) = -J\left(\sigma_{1}\sigma_{2} + \cdots + \sigma_{L-1}\sigma_{L}\right)$. Then the model factorizes under the change of variables $\sigma'_{j} = \sigma_{j}\sigma_{j-1},\quad j \geq 2$. This gives $Z(\beta) = \sum_{\sigma_{1},\ldots,\sigma_{L}} e^{\beta J\sigma_{1}\sigma_{2}} e^{\beta J\sigma_{2}\sigma_{3}} \cdots e^{\beta J\sigma_{L-1}\sigma_{L}} = 2\prod_{j=2}$
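For $h = 0$ with free boundary conditions, the factorization argument yields the closed form $Z(\beta) = 2(2\cosh\beta J)^{L-1}$. A brute-force check on a small chain (a sketch, not from the source) confirms it:

```python
import itertools
import math

def Z_bruteforce(L, beta, J):
    # Sum exp(beta * J * sum_j s_j * s_{j+1}) over all 2**L spin
    # configurations (free boundary conditions, h = 0).
    total = 0.0
    for spins in itertools.product((-1, 1), repeat=L):
        bond_energy = sum(spins[j] * spins[j + 1] for j in range(L - 1))
        total += math.exp(beta * J * bond_energy)
    return total

# The change of variables sigma'_j = sigma_j * sigma_{j-1} decouples the
# bonds, giving the closed form Z = 2 * (2 cosh(beta J))**(L - 1).
beta, J, L = 0.7, 1.0, 8
closed = 2 * (2 * math.cosh(beta * J)) ** (L - 1)
assert abs(Z_bruteforce(L, beta, J) - closed) < 1e-9 * closed
```

Each decoupled bond variable contributes a factor $e^{\beta J} + e^{-\beta J} = 2\cosh\beta J$, and the free sum over $\sigma_1$ contributes the leading factor of 2.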
|
{"page_id": 292744, "title": "Ising model"}
|
showed no effect. An international team led by Madeleine Ennis of Queen's University of Belfast claimed in 1999 to have replicated the Benveniste results. Randi then forwarded the $1 million challenge to the BBC Horizon program to prove the "water memory" theory following Ennis's experimental procedure. In response, experiments were conducted with the vice-president of the Royal Society, John Enderby, overseeing the proceedings. The challenge ended with no memory effect observed by the Horizon team. For a piece on homeopathy, the ABC program 20/20 also attempted, unsuccessfully, to reproduce Ennis's results. Ennis has claimed that these tests did not follow her own experiment protocols. == Other scientists == In 2003, Louis Rey, a chemist from Lausanne, reported that frozen samples of lithium and sodium chloride solutions prepared according to homeopathic prescriptions showed – after being exposed to radiation – different thermoluminescence peaks compared with pure water. Rey claimed that this suggested that the networks of hydrogen bonds in homeopathic dilutions were different. These results have never been replicated and are not generally accepted - even Benveniste criticised them, pointing out that they were not blinded. In January 2009, Luc Montagnier, the Nobel Laureate virologist who led the team that discovered the human immunodeficiency virus (HIV), claimed (in a paper published in a journal that he set up, which seems to have avoided conventional peer review as it was accepted three days after submission) that the DNA of pathogenic bacteria and viruses massively diluted in water emit radio waves that he can detect. The device used to detect these signals was developed by Jacques Benveniste, and was independently tested, with the co-operation of the Benveniste team, at the request of the United States Defense Advanced Research Projects Agency. That investigation was unable to replicate any effects of digital signals using the
|
{"page_id": 974761, "title": "Water memory"}
|
on {T, F}, the elements of A4 are ordered by inclusion making it a lattice with Both at the supremum and None at the infimum, and T and F on the wings. Referring to Dana Scott, he assumes the connectives are Scott-continuous or monotonic functions. First he expands negation by deducing that ¬Both = Both and ¬None = None. To expand And and Or the monotonicity goes only so far. Belnap uses equivalence (a&b = a iff avb = b) to fill out the tables for these connectives. He finds None & Both = F while None v Both = T. The result is a second lattice L4 called the "logical lattice", where A4 is the "approximation lattice" determining Scott continuity. == Implementation using two bits == Let one bit be assigned for each truth value: 01=T and 10=F with 00=N and 11=B. Then the subset relation in the power set on {T, F} corresponds to order ab<cd iff a<c and b<d in two-bit representation. Belnap calls the lattice associated with this order the "approximation lattice". The logic associated with two-bit variables can be incorporated into computer hardware. == Matrix transitions == As a discrete system, the four-valued logic illustrates a set of states subject to transitions by logical matrices to form a transition system. An input of two bits transitions to an output of two bits through matrix multiplication. There are sixteen logical matrices that are 2 × 2, and four logical vectors that act as inputs and outputs of the matrix transitions: X = {A, B, C, D} = {(0,1), (1, 0), (0, 0), (1, 1)}. When C is input, the output is always C. Four of the sixteen have zero in one corner only, so the output of vector-matrix multiplication with Boolean arithmetic is always D, except
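The two-bit implementation above can be sketched in a few lines of Python. Here each value is written as a (told-true, told-false) pair; the pairing order relative to the text's 01=T / 10=F codes is an assumption on my part, and the monotone connectives act componentwise:

```python
# Belnap values as (told_true, told_false) bit pairs -- the bit order
# relative to the text's 01=T / 10=F codes is an assumption here.
N, T, F, B = (0, 0), (1, 0), (0, 1), (1, 1)

def neg(a):
    t, f = a
    return (f, t)  # negation swaps the evidence bits; fixes N and B

def conj(a, b):
    # "told true" needs both told true; "told false" needs either told false
    return (a[0] & b[0], a[1] | b[1])

def disj(a, b):
    return (a[0] | b[0], a[1] & b[1])

assert conj(N, B) == F and disj(N, B) == T  # matches Belnap's tables
assert neg(N) == N and neg(B) == B
```

Note how the bitwise definitions reproduce exactly the two cases Belnap had to fill in by the equivalence a&b = a iff a∨b = b: None & Both = F and None ∨ Both = T.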
|
{"page_id": 4213424, "title": "Four-valued logic"}
|
Williams has provided a few examples of the benefits of collective intelligence to business: Talent utilization At the rate technology is changing, no firm can fully keep up in the innovations needed to compete. Instead, smart firms are drawing on the power of mass collaboration to involve participation of the people they could not employ. This also helps generate continual interest in the firm in the form of those drawn to new idea creation as well as investment opportunities. Demand creation Firms can create a new market for complementary goods by engaging with the open-source community. Firms also are able to expand into new fields that they previously would not have been able to without the addition of resources and collaboration from the community. This creates, as mentioned before, a new market for complementary goods for the products in said new fields. Cost reduction Mass collaboration can help to reduce costs dramatically. Firms can release a specific software or product to be evaluated or debugged by online communities. The result will be more personalized, robust, and error-free products created in less time and at lower cost. New ideas can also be generated and explored by collaboration of online communities, creating opportunities for free R&D outside the confines of the company. ==== Open-source software ==== Cultural theorist and online community developer John Banks considered the contribution of online fan communities in the creation of the Trainz product. He argued that its commercial success was fundamentally dependent upon "the formation and growth of an active and vibrant online fan community that would both actively promote the product and create content: extensions and additions to the game software". The increase in user-created content and interactivity gives rise to issues of control over the game itself and ownership of the player-created content. This
|
{"page_id": 20756850, "title": "Collective intelligence"}
|
determines the size of the visible universe and is key to determining its age. Over the course of the Key Project, the team measured the distances to 24 galaxies using Cepheid variable stars, and measured the Hubble constant using five independent methods. The project's researchers, led by Freedman, published their final result in 2001. The work provided a value of the Hubble constant accurate to 10%, resolving a long-standing, factor-of-two debate. She continues to refine her measurements of the Hubble constant using not just Cepheid variables but also the method of the tip of the red-giant branch. == Giant Magellan Telescope == Freedman initiated the Giant Magellan Telescope (GMT) Project and served as chair of the board of directors from its inception in 2003 until 2015. GMT is an international consortium of leading universities and science institutions to build a 25-meter optical telescope at the Carnegie Institution for Science's Las Campanas Observatory in the Chilean Andes. With a primary mirror 80 feet (24 meters) in diameter, the GMT is poised to be the world's second largest ground-based telescope when it is completed. The telescope, which has entered its construction phase and is expected to become fully operational by 2034, will be able to produce images 10 times sharper than those of the Hubble Space Telescope. == Recognition == Freedman has been elected a member of the US National Academy of Sciences, the American Philosophical Society, the American Academy of Arts and Sciences, and a Fellow of the American Physical Society and a Legacy Fellow of the American Astronomical Society. She was elected Fellow of the Royal Society in 2023. She has received several awards for her contributions to observational cosmology, including a Centennial Lectureship of the American Physical Society (1999), the John P. McGovern Award in Science (2000), the Magellanic
|
{"page_id": 35731918, "title": "Wendy Freedman"}
|
Anima and the Parva Naturalia of the Greek philosopher Aristotle. Metacognologists believe that the ability to consciously think about thinking is unique to sapient species and indeed is one of the definitions of sapience. There is evidence that rhesus monkeys and apes can make accurate judgments about the strengths of their memories of fact and monitor their own uncertainty, while attempts to demonstrate metacognition in birds have been inconclusive. A 2007 study provided some evidence for metacognition in rats, but further analysis suggested that they may have been following simple operant conditioning principles, or a behavioral economic model. ==== Mirror neurons ==== Mirror neurons are neurons that fire both when an animal acts and when the animal observes the same action performed by another. Thus, the neuron "mirrors" the behavior of the other, as though the observer were themselves acting. Such neurons have been directly observed in primate and other species including birds. The function of the mirror system is a subject of much speculation. Many researchers in cognitive neuroscience and cognitive psychology consider that this system provides the physiological mechanism for the perception action coupling (see the common coding theory). They argue that mirror neurons may be important for understanding the actions of other people, and for learning new skills by imitation. Some researchers also speculate that mirror systems may simulate observed actions, and thus contribute to theory of mind skills, while others relate mirror neurons to language abilities. Neuroscientists such as Marco Iacoboni (UCLA) have argued that mirror neuron systems in the human brain help us understand the actions and intentions of other people. In a study published in March 2005, Iacoboni and his colleagues reported that mirror neurons could discern if another person who was picking up a cup of tea planned to drink from it or
|
{"page_id": 13001588, "title": "Animal consciousness"}
|
2018. == Plot == In 2010, after years of research, scientist K. Vaseegaran creates an android robot named Chitti with the help of his assistants, Siva and Ravi, to commission it into the Indian Army. Chitti helps Sana, Vaseegaran's medical student girlfriend, cheat on her examination, then saves her from being assaulted by a group of thugs. Vaseegaran's mentor, Bohra, is secretly contracted to create similar robots for a terrorist organisation but so far has been unsuccessful. Threatened with death if he fails to meet the deadline, Bohra seeks the details of Chitti's neural schema, intending to program his robots correctly. Vaseegaran prepares Chitti for an evaluation by the Artificial Intelligence Research and Development (AIRD) Institute, headed by Bohra. During the evaluation, Chitti nearly stabs Vaseegaran at Bohra's command, which convinces the evaluation committee that the robot is a liability and cannot be used for military purposes. Hoping to prove Bohra wrong, Vaseegaran deploys Chitti to save people from a burning building, but the attempt ends badly: Chitti rescues a girl from her bath, and the girl, ashamed at being filmed naked on television, flees and is run over by a truck and dies. Vaseegaran pleads with Bohra for one month to modify Chitti's neural schema to enable it to understand human behaviour and emotions, to which Bohra agrees. Near the deadline, Vaseegaran insults Chitti, who becomes angry with Vaseegaran, demonstrating to him that it can manifest emotions. Chitti uses Sana's textbooks and an ancient delivery method to resolve the childbirth complication of Sana's sister Latha. Bohra congratulates Vaseegaran and belatedly lets Chitti pass the AIRD evaluation. Chitti develops romantic feelings for Sana after she congratulates Chitti by kissing it. Later, at Sana's birthday party, Chitti seduces her while dancing and is confronted by Vaseegaran and Sana, to whom it confesses its love towards her. Vaseegaran reprimands Chitti,
|
{"page_id": 12481557, "title": "Enthiran"}
|
$\phi(x_{1},x_{2},\ldots,x_{n},y) = y - f(x_{1},x_{2},\ldots,x_{n}) = 0$ but the converse is not always possible, i.e. not all implicit functions have an explicit form. For example, using interval notation, let $$\begin{aligned}&\phi : X \to \{0\}\\&\phi(x,y,z) = \left(\frac{x}{a}\right)^{2} + \left(\frac{y}{b}\right)^{2} + \left(\frac{z}{c}\right)^{2} - 1 = 0\\&X = [-a,a]\times[-b,b]\times[-c,c] = \left\{(x,y,z)\in\mathbb{R}^{3} : -a \leq x \leq a,\ -b \leq y \leq b,\ -c \leq z \leq c\right\}.\end{aligned}$$ Choosing a 3-dimensional (3D) Cartesian coordinate system, this function describes the surface of a 3D ellipsoid centered at the origin $(x, y, z) = (0, 0, 0)$ with constant semi-major axes $a$, $b$, $c$, along the positive $x$, $y$ and $z$ axes respectively. In the case $a = b = c = r$, we have a sphere of radius $r$ centered at the origin. Other conic section examples which can be described similarly include the hyperboloid and paraboloid; more generally, so can any 2D surface in 3D Euclidean space. The above example can be solved for $x$, $y$ or $z$; however it is much tidier to write it in an implicit form. For a more sophisticated example: $\phi : \mathbb{R}^{4} \to \{0\}$, $\phi(t,x,y,z) = Ctze^{tx} - yz + A$
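As a quick numeric illustration of the implicit ellipsoid (the semi-axis values below are arbitrary choices, not from the source):

```python
def phi(x, y, z, a=2.0, b=3.0, c=4.0):
    # Implicit-form ellipsoid: phi == 0 exactly on the surface,
    # phi < 0 inside, phi > 0 outside.
    return (x / a) ** 2 + (y / b) ** 2 + (z / c) ** 2 - 1.0

# (a, 0, 0) lies on the surface; the origin is interior, where phi = -1.
assert phi(2.0, 0.0, 0.0) == 0.0
assert phi(0.0, 0.0, 0.0) == -1.0
```

The sign of $\phi$ classifies points as inside, on, or outside the surface, which is exactly the information the explicit form $y = f(x)$ cannot package for a closed surface.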
|
{"page_id": 39783039, "title": "Function of several real variables"}
|
depending on local laws. In the UK a court order was given in April 2015 to ISPs to block URLs that provided either the Popcorn Time application software (PTAS) or "sources of update information" (SUI), i.e. pointers to torrent-indexing sites. The court found that, unlike previous cases concerning indexing sites directly, neither websites providing the PTAS nor the SUI could be construed to be "communicating a work to the public", since neither contained any specific information about any specific work. It considered it entirely probable that both the providers of the PTAS and the SUI could be held to be "authorising acts of infringement" by users, but this was not the case that the claimants had raised at the hearing. Instead, they had claimed that the providers had been authorising acts of infringement by content-hosting websites, but then that claim had not been made out. The judge, however, found that the Popcorn Time suppliers did "plainly know and intend" for the application to be "the key means which procures and induces the user to access the host website and therefore causes the infringing communications to occur"; and on this basis had "a common design with the operators of the host websites" and therefore shared a joint liability for the copyright infringements (joint tortfeasance). It was therefore appropriate to order the ISPs to block the websites as provided for by section 97A of the Copyright Designs and Patents Act 1988. On May 20, 2015, the government of Israel blocked all access to the official downloads of Popcorn Time, following a lawsuit from its biggest cable and satellite providers for copyright infringement. Although the download sites were blocked, internet users still possessing a copy of the installation file and/or the program were not affected, and there were other sharing sites that distributed
|
{"page_id": 42180723, "title": "Popcorn Time"}
|
syntax and semantics we define in the next section, is built around objects and relations. It has been so important to mathematics, philosophy, and artificial intelligence precisely because those fields—and indeed, much of everyday human existence—can be usefully thought of as dealing with objects and the relations among them. First-order logic can also express facts about some or all of the objects in the universe. This enables one to represent general laws or rules, such as the statement “Squares neighboring the wumpus are smelly.” The primary difference between propositional and first-order logic lies in the ontological commitment made by each language—that is, what it assumes about the nature of reality. Mathematically, this commitment is expressed through the nature of the formal models with respect to which the truth of sentences is defined. For example, propositional logic assumes that there are facts that either hold or do not hold in the world. Each fact can be in one of two states: true or false, and each model assigns true or false to each proposition symbol (see Section 7.4.2). First-order logic assumes more; namely, that the world consists of objects with certain relations among them that do or do not hold. The formal models are correspondingly more complicated than those for propositional logic. Special-purpose logics make still further ontological commitments; for example, temporal logic assumes that facts hold at particular times and that those times (which may be points or intervals) are ordered. Thus, special-purpose logics give certain kinds of objects (and the axioms about them) “first class” status within the logic, rather than simply defining them within the knowledge base. Higher-order logic views the relations and functions referred to by first-order logic as objects in themselves. This allows one to make assertions about all
|
{"source": 1019, "title": "from dpo"}
|
several datasets highlights two primary benefits of our proposed method: 1) Efficiency: it enables approximately a 50% reduction in FLOPs at a mere 10% to 20% of the original training expenditure; 2) Consistency: the pruned diffusion models inherently preserve generative behavior congruent with their pre-trained models. Poster #225 Enhancing Motion Deblurring in High-Speed Scenes with Spike Streams Shiyan Chen · Jiyuan Zhang · Yajing Zheng · Tiejun Huang · Zhaofei Yu Traditional cameras produce desirable vision results but struggle with motion blur in high-speed scenes due to long exposure windows. Existing frame-based deblurring algorithms face challenges in extracting useful motion cues from severely blurred images. Recently, an emerging bio-inspired vision sensor known as the spike camera has achieved an extremely high frame rate while preserving rich spatial details, owing to its novel sampling mechanism. However, typical binary spike streams are relatively low-resolution, degraded image signals devoid of color information, making them unfriendly to human vision. In this paper, we propose a novel approach that integrates the two modalities from two branches, leveraging spike streams as auxiliary visual cues for guiding deblurring in high-speed motion scenes. We propose the first spike-based motion deblurring model with bidirectional information complementarity. We introduce a content-aware motion magnitude attention module that utilizes a learnable mask to extract relevant information from blurry images effectively, and we incorporate a transposed cross-attention fusion module to efficiently combine features from both spike data and blurry RGB images. Furthermore, we build two extensive synthesized datasets for training and validation purposes, encompassing high-temporal-resolution spikes, blurry images, and corresponding sharp images. 
The experimental results demonstrate that our method effectively recovers clear RGB images from highly blurry scenes and outperforms state-of-the-art deblurring algorithms in multiple settings. Poster #226 FlowCam: Training Generalizable 3D Radiance Fields without Camera Poses via Pixel-Aligned Scene Flow Cameron Smith · Yilun
|
{"source": 2773, "title": "from dpo"}
|
path—Barrett’s metaplasia versus peptic stricture—for reasons that are unknown. The ontological scripts that support each stage of simulation include the patient’s basic physiological property changes, how the patient will respond to interventions if the user (i.e., a medical trainee) chooses to administer them, and the effects of the patient’s lifestyle choices. Sparing the reader the code in which scripts are written, here is an example, in plain English, of how GERD progresses in a particular instance of a virtual patient who is predisposed to having erosion as the end stage of disease. In this example, the disease is left untreated throughout the entire simulation.

Table 8.1 Sample GERD levels and associated properties

| GERD-LEVEL | TOTAL-TIME-IN-ACID-REFLUX (hours per day) | Stage duration (days) |
|---|---|---|
| 10 | less than 1.2 | (a non-disease state) |
| 8 | 1.92 | 160 |
| 5 | 3.12 | 110 |
| 3 | 4.08 | 60 |

• During PRECLINICAL-GERD, the value of the property PRECLINICAL-IRRITATION-PERCENTAGE (an abstract property whose domain is MUCOSA-OF-ESOPHAGUS) increases from 0 to 100.
• When the value of PRECLINICAL-IRRITATION-PERCENTAGE reaches 100, the script for PRECLINICAL-GERD is unasserted and the script for the INFLAMMATION-STAGE is asserted.
• During the INFLAMMATION-STAGE, the mucosal layer of the esophageal lining (recorded as the property MUCOSAL-DEPTH applied to the object ESOPHAGEAL-MUCOSA) is eroded, going from a depth of 1 mm to 0 mm over the duration of the stage.
• When MUCOSAL-DEPTH reaches 0 mm, the script for the INFLAMMATION-STAGE is unasserted, with the simultaneous assertion of the script for the EROSION-STAGE.
• At the start of the EROSION-STAGE, between one and three EROSION objects are created whose DEPTH
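The staged-script mechanism described above (a stage runs until its trigger property crosses a threshold, then unasserts itself and asserts the next script) can be sketched as a toy loop; the property names come from the text, but the per-tick increments are invented purely for illustration:

```python
# Toy sketch of the staged-script mechanism: each stage runs until its
# trigger property crosses a threshold, then the next stage's script is
# asserted. Property names follow the text; increments are hypothetical.
def run_gerd_stages():
    irritation = 0.0       # PRECLINICAL-IRRITATION-PERCENTAGE
    mucosal_depth = 1.0    # MUCOSAL-DEPTH of ESOPHAGEAL-MUCOSA, in mm
    stage = "PRECLINICAL-GERD"
    history = [stage]
    while stage != "EROSION-STAGE":
        if stage == "PRECLINICAL-GERD":
            irritation += 10.0
            if irritation >= 100.0:
                stage = "INFLAMMATION-STAGE"   # unassert old, assert new
                history.append(stage)
        elif stage == "INFLAMMATION-STAGE":
            mucosal_depth -= 0.25
            if mucosal_depth <= 0.0:
                stage = "EROSION-STAGE"
                history.append(stage)
    return history

assert run_gerd_stages() == [
    "PRECLINICAL-GERD", "INFLAMMATION-STAGE", "EROSION-STAGE"]
```

The real system scripts richer behavior per stage (intervention responses, lifestyle effects), but the assert/unassert threshold pattern is the same.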
|
{"source": 4999, "title": "from dpo"}
|
Provide individual descriptions of panels for multipanel figures. If a graph includes error bars, explain in the image or general note whether they represent standard deviations, standard errors, confidence limits, or ranges; it is also helpful to provide sample sizes. Also include within the general note any acknowledgment that a figure is reprinted or adapted from another source (see Section 7.7). Place explanations of abbreviations and copyright attributions for reproduced figures last in the general note. Position any superscripts for specific notes near the element being identified. It is preferable to report exact p values; however, if statistically significant values are marked with asterisks or daggers in the figure, explain them in a probability note (see Section 7.14). For guidelines on formatting figure notes, see Section 7.14. 7.29 Relation Between Figures Similar figures or figures of equal importance should be of equal size and scale. Combine figures that are alike to facilitate comparisons between their content. For example, two line graphs with identical axes might be combined horizontally into a single figure, or multiple figures might be combined into one figure with multiple panels (see Section 7.26). 7.30 Photographs Photographs are a type of figure with special considerations. Authors seeking publication must check publisher guidelines to ensure the photograph is submitted in the correct file type. Photographs may be printed in grayscale or in color, depending on the contents of the photograph and the venue of publication. Color photographs should include enough contrast to ensure that contents will be understandable if reproduced in grayscale. Photographs in most student papers can be in color and saved in any widely available photo format (see Section 7.26 for more information on the use of color in figures). It is essential that photographic images be submitted at appropriate levels of resolution (as specified by
|
{"source": 6367, "title": "from dpo"}
|
A node is a point along a standing wave where the wave has minimum amplitude. For instance, in a vibrating guitar string, the ends of the string are nodes. By changing the position of the end node through frets, the guitarist changes the effective length of the vibrating string and thereby the note played. The opposite of a node is an antinode, a point where the amplitude of the standing wave is at maximum. These occur midway between the nodes. == Explanation == Standing waves result when two sinusoidal wave trains of the same frequency are moving in opposite directions in the same space and interfere with each other. They occur when waves are reflected at a boundary, such as sound waves reflected from a wall or electromagnetic waves reflected from the end of a transmission line, and particularly when waves are confined in a resonator at resonance, bouncing back and forth between two boundaries, such as in an organ pipe or guitar string. In a standing wave the nodes are a series of locations at equally spaced intervals where the wave amplitude (motion) is zero (see animation above). At these points the two waves add with opposite phase and cancel each other out. They occur at intervals of half a wavelength (λ/2). Midway between each pair of nodes are locations where the amplitude is maximum. These are called the antinodes. At these points the two waves add with the same phase and reinforce each other. In cases where the two opposite wave trains are not the same amplitude, they do not cancel perfectly, so the amplitude of the standing wave at the nodes is not zero but merely a minimum. This occurs when the reflection at the boundary is imperfect. This is indicated by a finite standing wave ratio
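The superposition account above can be checked numerically: adding two equal-amplitude counter-propagating sinusoids gives an envelope 2A|sin(kx)| whose zeros (the nodes) fall every half wavelength, with antinodes of amplitude 2A midway between them. A small sketch (the amplitude, wavelength, and sampling grid are arbitrary choices):

```python
import numpy as np

# y(x, t) = A sin(kx - wt) + A sin(kx + wt) = 2A sin(kx) cos(wt)
# Nodes (zero amplitude) occur where sin(kx) = 0, i.e. at x = m * (lambda/2).

A, wavelength = 1.0, 2.0
k = 2 * np.pi / wavelength

def standing_wave(x, t, omega=2 * np.pi):
    return A * np.sin(k * x - omega * t) + A * np.sin(k * x + omega * t)

x = np.linspace(0, 2 * wavelength, 401)
times = np.linspace(0, 1, 200, endpoint=False)   # one full period

# Max |y| over a period at each x approximates the envelope 2A|sin(kx)|.
envelope = np.max(np.abs([standing_wave(x, t) for t in times]), axis=0)

nodes = x[envelope < 1e-6]   # positions of (near-)zero amplitude
```

On this grid the detected nodes sit at x = 0, 1, 2, 3, 4, i.e. at intervals of half a wavelength (λ/2 = 1 here), matching the text.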
|
{"page_id": 998070, "title": "Node (physics)"}
|
A heat kernel signature (HKS) is a feature descriptor for use in deformable shape analysis and belongs to the group of spectral shape analysis methods. For each point in the shape, HKS defines its feature vector representing the point's local and global geometric properties. Applications include segmentation, classification, structure discovery, shape matching and shape retrieval. HKS was introduced in 2009 by Jian Sun, Maks Ovsjanikov and Leonidas Guibas. It is based on the heat kernel, which is a fundamental solution to the heat equation. HKS is one of the many recently introduced shape descriptors which are based on the Laplace–Beltrami operator associated with the shape. == Overview == Shape analysis is the field of automatic digital analysis of shapes, e.g., 3D objects. For many shape analysis tasks (such as shape matching/retrieval), feature vectors for certain key points are used instead of using the complete 3D model of the shape. An important requirement of such feature descriptors is for them to be invariant under certain transformations. For rigid transformations, commonly used feature descriptors include shape context, spin images, integral volume descriptors and multiscale local features, among others. HKS allows isometric transformations which generalizes rigid transformations. HKS is based on the concept of heat diffusion over a surface. Given an initial heat distribution $u_{0}(x)$ over the surface, the heat kernel $h_{t}(x,y)$ relates the amount of heat transferred from $x$ to $y$ after time $t$. The heat kernel is invariant under isometric transformations and stable under small perturbations to the isometry. In addition, the heat kernel fully characterizes shapes up to an isometry and represents increasingly global properties of the shape with increasing time. Since $h_{t}(x,y)$
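A common discrete sketch of this idea (an assumption here, not the authors' exact pipeline) replaces the Laplace–Beltrami operator with a graph Laplacian and computes the signature from its eigenpairs as HKS(x, t) = Σᵢ e^(−λᵢt) φᵢ(x)²:

```python
import numpy as np

# Discrete heat kernel signature on a small graph (a stand-in for a meshed
# surface): HKS(x, t) = sum_i exp(-lambda_i * t) * phi_i(x)^2, where
# (lambda_i, phi_i) are eigenpairs of the graph Laplacian L = D - A.

def graph_laplacian(adj):
    return np.diag(adj.sum(axis=1)) - adj

def hks(laplacian, ts):
    lam, phi = np.linalg.eigh(laplacian)           # eigenvalues ascending
    # signatures[x, j] = HKS at vertex x for time scale ts[j]
    return (phi ** 2) @ np.exp(-np.outer(lam, ts))

# A 4-cycle: every vertex is equivalent under symmetry, so all rows agree.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
sig = hks(graph_laplacian(adj), ts=np.array([0.1, 1.0, 10.0]))
```

The symmetric 4-cycle illustrates the invariance claim in miniature: vertices related by an isometry of the graph get identical signatures, and as t grows the signature approaches the global constant φ₀(x)² = 1/n.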
|
{"page_id": 34060358, "title": "Heat kernel signature"}
|
Prodigy, America Online, and eWorld. The address bus of the PowerPC 603 can theoretically access memory up to 64 MB. However, the operating system's maximum addressable memory size is 37 MB. Furthermore, because of the ASIC design of the Pippin hardware, the maximum RAM size that can be added is 32 MB. Officially, Bandai produced memory upgrade modules of 2, 4, 8, and 16 MB. The memory chips are soldered onto a printed circuit board which is placed in a plastic housing, simplifying installation for the end user. Japanese hackers produced an aftermarket 16 MB module, but because the module was much larger than the memory module compartment on the Pippin, installation required removing the logic board from the chassis, and then mounting the large memory module in-between the logic board and chassis. Apple encouraged hardware developers to produce PCI compatible peripherals that could be added to the Pippin. The only official method of producing add-ons for the Pippin was to develop PCI-compatible devices that were then placed in a docking station cabinet. A proprietary riser card interface (referred to by Apple as an X-PCI slot) is located on the bottom of a Pippin system and is used by docking stations. A docking station for a Pippin can contain a variety of hardware, such as SCSI or floppy disk drive controllers, video interfaces, codecs, or network interfaces such as Ethernet. The logic board passes PCI signals through the X-PCI docking interface, and then to the docking station. Docking stations within the Pippin line do not provide pass-through support, thereby limiting a Pippin system to using only one docking station at a time. For example, a docking station for a floppy disk drive would need to be removed in order to attach a docking station for the magneto-optical drive. Katz Media produced
|
{"page_id": 17826747, "title": "Apple Pippin"}
|
at a specified place and at a specified time; or one might want to conduct open market operations so that both the inflation rate and the unemployment rate are as close as possible to their desired values. Often such problems are subject to linear equality constraints that prevent all objectives from being simultaneously perfectly met, especially when the number of controllable variables is less than the number of objectives and when the presence of random shocks generates uncertainty. Commonly a multi-objective quadratic objective function is used, with the cost associated with an objective rising quadratically with the distance of the objective from its ideal value. Since these problems typically involve adjusting the controlled variables at various points in time and/or evaluating the objectives at various points in time, intertemporal optimization techniques are employed. === Optimal design === Product and process design can be largely improved using modern modeling, simulation, and optimization techniques. The key question in optimal design is measuring what is good or desirable about a design. Before looking for optimal designs, it is important to identify characteristics that contribute the most to the overall value of the design. A good design typically involves multiple criteria/objectives such as capital cost/investment, operating cost, profit, quality and/or product recovery, efficiency, process safety, operation time, etc. Therefore, in practical applications, the performance of process and product design is often measured with respect to multiple objectives. These objectives are typically conflicting, i.e., achieving the optimal value for one objective requires some compromise on one or more objectives. For example, when designing a paper mill, one can seek to decrease the amount of capital invested in a paper mill and enhance the quality of paper simultaneously. 
If the design of a paper mill is defined by large storage volumes and paper quality is defined
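The quadratic multi-objective setup described earlier, where each objective's cost rises quadratically with its distance from the ideal value and there are fewer controls than objectives, reduces to weighted least squares. A minimal sketch (the linear objective map and all numbers are invented for illustration):

```python
import numpy as np

# Minimize sum_i w_i * (f_i(x) - target_i)^2 with linear objectives f(x) = Bx.
# With fewer control variables than objectives, all targets generally cannot
# be met exactly; the weights decide the compromise.

def solve_quadratic_tradeoff(B, targets, weights):
    W = np.diag(np.sqrt(weights))                 # weighted least squares
    x, *_ = np.linalg.lstsq(W @ B, W @ targets, rcond=None)
    return x

# Two conflicting objectives, one control variable: targets 1.0 and 3.0
# cannot both be hit, so the solution interpolates between them.
B = np.array([[1.0], [1.0]])
targets = np.array([1.0, 3.0])
x_equal = solve_quadratic_tradeoff(B, targets, np.array([1.0, 1.0]))   # -> 2.0
x_biased = solve_quadratic_tradeoff(B, targets, np.array([9.0, 1.0]))  # -> 1.2
```

Raising the weight on the first objective pulls the compromise toward its target, which is exactly the trade-off behavior the text describes.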
|
{"page_id": 10251864, "title": "Multi-objective optimization"}
|
Epigenetics of bipolar disorder is the effect that epigenetics has on triggering and maintaining bipolar disorder. Bipolar disorder is a chronic mood disorder, characterized by manic and depressive episodes. The symptoms of a manic episode include high mood, low sleep, and reduced inhibition, while the symptoms of a depressive episode include low mood, lethargy, and reduced motivation. There are different types of bipolar disorders; the two most common are bipolar I and bipolar II. Patients are diagnosed with bipolar disorder I if their manic episodes last at least seven consecutive days and they experience major depressive symptoms over the course of two weeks. In bipolar disorder II, patients experience shorter hypomanic episodes or manic symptoms that have less disruptive impacts on their daily lives. Sometimes patients can experience extreme cycling where they experience four or more episodes of mania and major depression in one year. In addition to affecting mood, people who have bipolar disorder often deal with impaired cognitive abilities, where memory, speech, attention and decision-making skills are all impacted. Bipolar disorder has one of the highest rates of suicide amongst psychiatric disorders, as well as high comorbidity rates with alcohol and substance use disorders. Bipolar disorder has a genetic component. This means that the sequence of nucleotides in DNA contains information that can lead to bipolar disorder in individuals. Researchers determined that bipolar disorder has a genetic component by comparing individuals who have been diagnosed with the disorder and those who have not. However, findings are inconsistent as to what specific genes are involved. The trouble that researchers have had in conclusively identifying genes that cause bipolar disorder has led them to search for an epigenetic component to bipolar disorder. Epigenetics is the study of heritable phenotypes that occur without changes to the primary DNA sequence. 
Typically, epigenetics
|
{"page_id": 70050408, "title": "Epigenetics of bipolar disorder"}
|
of electrical product are particularly susceptible to fraudulent counterfeit goods because most OEMs will not sell their 'new' product at wholesale prices directly to non-licensed distributors, forcing independent electrical supply houses to seek alternate sources. In September 2007, PEARL held a special board meeting to discuss counterfeit electrical power equipment. Among other actions, PEARL's Standards and Practices Committee issued a policy directive to all members to pro-actively assist OEMs and other organizations with identifying, reporting, and policing counterfeit electrical product and the companies and individuals that sell it. Since 2007, PEARL members have helped OEMs and other industry associations locate several shipments of counterfeit product. == Sponsored Events == PEARL sponsors an annual "Electrical Safety, Reliability and Sustainability Conference & Exhibition". Conference topics include but are not limited to: Electrical safety Electrical failure diagnosis Reconditioning, inspection and testing standards & techniques Counterfeiting issues Government regulations update Hands-on training Panel discussions == PEARL and the Environment == In 2010 PEARL published a white paper "Reconditioning: The Ultimate Form of Recycling" outlining how reuse, reconditioning, and remanufacturing use a fraction of the energy of new production, keep millions of tons of waste from landfills every year, reduce raw material consumption, and create 3 to 5 times more skilled jobs than automated production lines. Because the remanufacturing process only consumes about 15% of the energy used to create a new product, remanufacturing in the U.S. saves 400 trillion BTUs annually, the equivalent of 16 million barrels of crude oil, or enough gasoline to run 6 million cars for a year.
Based on a weighted average of 140 pounds of CO2 gas pollution for every 1 million BTUs of energy consumed, remanufacturing reduces CO2 generation by 28 million tons each year, which is equal to the CO2 output of ten 500-megawatt coal-burning electrical plants. Remanufacturing
|
{"page_id": 23947961, "title": "Professional Electrical Apparatus Recyclers League"}
|
The Journal of NeuroVirology is a medical journal that publishes review articles on the molecular biology, immunology, genetics, epidemiology, and pathogenesis of CNS disorders with the goal of bridging the gap between basic and clinical studies, and enhancing translational research in neurovirology. It is published by Springer Science+Business Media. The Journal of NeuroVirology is the official journal of the International Society for Neurovirology. == Abstracting and indexing == The journal is abstracted and indexed in: Science Citation Index Expanded Journal Citation Reports/Science Edition SCOPUS PsycINFO EMBASE Chemical Abstracts Service CSA Illumina Biological Abstracts BIOSIS CAB Abstracts Current Contents/ Life Sciences Global Health Neuroscience Citation Index PASCAL Summon by Serial Solutions == External links == Journal of NeuroVirology Springer - Journal of NeuroVirology International Society of Neurovirology Editorial board
|
{"page_id": 24514125, "title": "Journal of NeuroVirology"}
|
$\det(\varphi I_{n}-A)$ evaluates to the value p(φ) of the characteristic polynomial of A at φ (this holds independently of the relation between A and φ); the Cayley–Hamilton theorem states that p(φ) is the null endomorphism. In this form, the following proof can be obtained from that of Atiyah & MacDonald (1969, Prop. 2.4) (which in fact is the more general statement related to the Nakayama lemma; one takes for the ideal in that proposition the whole ring R). The fact that A is the matrix of φ in the basis e1, ..., en means that $\varphi(e_{i})=\sum_{j=1}^{n}A_{j,i}e_{j}$ for $i=1,\ldots,n$. One can interpret these as n components of one equation in Vn, whose members can be written using the matrix-vector product M(n, End(V)) × Vn → Vn that is defined as usual, but with individual entries ψ ∈ End(V) and v in V being "multiplied" by forming $\psi(v)$; this gives: $\varphi I_{n}\cdot E=A^{\operatorname{tr}}\cdot E$, where $E\in V^{n}$ is the element whose component i is ei (in other words it is the basis e1, ..., en of V written as a column of vectors). Writing this equation as $(\varphi I_{n}-A^{\operatorname{tr}})\cdot E=0\in V^{n}$ one recognizes the transpose of the matrix $\varphi I_{n}-A$ considered above, and its determinant (as element of M(n, R[φ])) is also p(φ). To derive from this
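Independently of the abstract proof, the theorem's statement — a matrix satisfies its own characteristic polynomial, p(A) = 0 — is easy to check numerically for a concrete matrix (a sanity check, not part of the proof):

```python
import numpy as np

# Numeric check of the Cayley–Hamilton theorem: evaluate the characteristic
# polynomial of A at A itself and confirm the result is the zero matrix.

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))

coeffs = np.poly(A)   # characteristic polynomial coefficients, leading 1
p_of_A = sum(c * np.linalg.matrix_power(A, len(coeffs) - 1 - k)
             for k, c in enumerate(coeffs))

# Zero up to floating-point round-off.
assert np.allclose(p_of_A, np.zeros((4, 4)), atol=1e-8)
```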
|
{"page_id": 173547, "title": "Cayley–Hamilton theorem"}
|
== See also == Exact cover Block design Cluster analysis List of partition topics Lamination (topology) MECE principle Partial equivalence relation Partition algebra Partition refinement Point-finite collection Rhyme schemes by set partition Weak ordering (ordered set partition) == Notes == == References == Brualdi, Richard A. (2004). Introductory Combinatorics (4th ed.). Pearson Prentice Hall. ISBN 0-13-100119-1. Schechter, Eric (1997). Handbook of Analysis and Its Foundations. Academic Press. ISBN 0-12-622760-8.
|
{"page_id": 340240, "title": "Partition of a set"}
|
Dialog was a microcomputer system developed by Gorenje in the 1980s. It was based on the 8-bit 4 MHz Zilog Z-80A microprocessor. The primary operating system was FEDOS (CP/M 2.2 compatible), developed by the Computer Structures and Systems Laboratory (Faculty of Electrical Engineering, University of Ljubljana) and Gorenje. There were three variants of the Dialog microcomputer system, distinguished only by minor changes: home, laboratory and personal (PC) (in Slovene: hišni, laboratorijski, osebni). Three types of external memory could be connected to Dialog: cassette recorder, floppy drive (5.25" and 8") and hard drive. The home variant of Dialog used resident FEBASIC (a variant of BASIC). == References == Mikroračunalnik DIALOG, Tehnično navodilo-uporaba, Gorenje procesna oprema. FEBASIC, priročnik za uporabnike sistema DIALOG, T. Žitko, Ljubljana, 1985.
|
{"page_id": 4707580, "title": "Gorenje Dialog"}
|
can be uniquely identified using a unique identifier assigned to this session, which is called the session id. getId() gives you the session id as a String. isNew() will be handy in quite a lot of situations: it returns true if the client does not know about the session or if the client chooses not to join the session. getCreationTime() returns the time when this session was created. getLastAccessedTime() returns the last time the client sent a request associated with this session. Case Study on Encapsulation Problem Statement: "Encapsulation is to hide the variables or something inside a class, preventing unauthorized parties from using it. Public methods like getters and setters access it, and other classes call these methods for access." Mapping with the real world: Let’s imagine you own a Ferrari and you are the only one in your family who knows how to drive it. One day a terrible breakdown happened to your car; you brought a mechanic home and he checked it, but he was unable to repair it. So you contacted the Ferrari company and a chief mechanic came to your home and repaired it (since your car is under warranty, your pocket is still big :-)). This is a real-world example of the above-mentioned OOP concepts. How? Encapsulation: As a driver you know how to start the car by pressing the start button, and the internal details of the starting operations are hidden from you. So the entire starting process is hidden from you; otherwise we can say the starting operation is encapsulated from you. Or: the driving wheel encapsulates the process of rotating the wheels from you. Case Studies of Transformer Failure Analyses: India’s manufacturing sector, heavy industries, and various global service centres are all dependent on reliable power
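The encapsulation idea in the case study can be sketched in a few lines (a Python sketch with illustrative class and method names, not any real automotive or servlet API): the caller only presses the start button, while the ignition sequence stays hidden behind it.

```python
# Encapsulation: internal state is hidden and reached only through a public
# method. The caller "presses the start button"; the start sequence is hidden.

class Car:
    def __init__(self):
        self._engine_running = False   # internal detail, hidden from callers

    def start(self):
        # Public entry point; the internal sequence is not the caller's concern.
        self._ignite()
        return self._engine_running

    def _ignite(self):
        # Stand-in for the hidden multi-step start sequence.
        self._engine_running = True

car = Car()
started = car.start()
```

The underscore-prefixed members play the role of the hidden variables; only the public `start()` method is part of the class's contract, mirroring the getter/setter discussion above.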
|
{"source": 1463, "title": "from dpo"}
|
components. If, however, we wish to compare different values of K, then we need to take account of this multimodality. A simple approximate solution is to add a term ln K! onto the lower bound when used for model comparison and averaging. Exercise 10.22 Figure 10.7 shows a plot of the lower bound, including the multimodality factor, versus the number K of components for the Old Faithful data set. It is worth emphasizing once again that maximum likelihood would lead to values of the likelihood function that increase monotonically with K (assuming the singular solutions have been avoided, and discounting the effects of local maxima) and so cannot be used to determine an appropriate model complexity. By contrast, Bayesian inference automatically makes the trade-off between model complexity and fitting the data. Section 3.4 This approach to the determination of K requires that a range of models having different K values be trained and compared. An alternative approach to determining a suitable value for K is to treat the mixing coefficients π as parameters and make point estimates of their values by maximizing the lower bound (Corduneanu and Bishop, 2001) with respect to π instead of maintaining a probability distribution over them as in the fully Bayesian approach. This leads to the re-estimation equation Exercise 10.23 $\pi_k = \frac{1}{N}\sum_{n=1}^{N} r_{nk}$ (10.83) and this maximization is interleaved with the variational updates for the q distribution over the remaining parameters. Components that provide insufficient contribution to explaining the data will have their mixing coefficients driven to zero during the optimization, and so they are effectively removed from the model through automatic relevance determination. This allows us to make a single training run in which we start with a relatively
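The re-estimation equation (10.83) is just the mean responsibility per component. A minimal sketch (the responsibility matrix here is synthetic, drawn from a Dirichlet rather than produced by a trained variational model):

```python
import numpy as np

# pi_k = (1/N) * sum_n r_nk: point estimate of mixing coefficients from an
# (N, K) responsibility matrix whose rows sum to 1. Components that attract
# little responsibility get mixing coefficients driven toward zero.

def update_mixing_coefficients(responsibilities):
    return responsibilities.mean(axis=0)

rng = np.random.default_rng(1)
# Synthetic responsibilities: the third component is deliberately starved.
r = rng.dirichlet(alpha=[5.0, 3.0, 0.1], size=1000)
pi = update_mixing_coefficients(r)
```

The starved component's coefficient comes out near zero, illustrating the automatic-relevance-determination behavior described in the text.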
|
{"source": 3852, "title": "from dpo"}
|
potential for generating malicious content have emerged. in this paper, we explore the power of in-context learning (icl) in manipulating the alignment ability of llms. we find that by providing just few in-context demonstrations without fine-tuning, llms can be manipulated to increase or decrease the probability of jailbreaking, i.e. answering malicious prompts. based on these observations, we propose in-context attack (ica) and in-context defense (icd) methods for jailbreaking and guarding aligned language model purposes. ica crafts malicious contexts to guide models in generating harmful outputs, while icd enhances model robustness by demonstrations of rejecting to answer harmful prompts. our experiments show the effectiveness of ica and icd in increasing or reducing the success rate of adversarial jailbreaking attacks. overall, we shed light on the potential of icl to influence llm behavior and provide a new perspective for enhancing the safety and alignment of llms. Large Language Models for Propaganda Detection Kilian Sprenkamp, Daniel Gordon Jones, Liudmila Zavolokina Abstract: the prevalence of propaganda in our digital society poses a challenge to societal harmony and the dissemination of truth. detecting propaganda through nlp in text is challenging due to subtle manipulation techniques and contextual dependencies. to address this issue, we investigate the effectiveness of modern large language models (llms) such as gpt-3 and gpt-4 for propaganda detection. we conduct experiments using the semeval-2020 task 11 dataset, which features news articles labeled with 14 propaganda techniques as a multi-label classification problem. five variations of gpt-3 and gpt-4 are employed, incorporating various prompt engineering and fine-tuning strategies across the different models. 
we evaluate the models' performance by assessing metrics such as f1 score, precision, and recall, comparing the results with the current state-of-the-art approach using roberta. our findings demonstrate that gpt-4 achieves comparable results to the current state-of-the-art. further, this study analyzes
|
{"source": 5788, "title": "from dpo"}
|
groups of citizens, their political representatives, political parties or countries interpose obstacles to each other in order to hinder the actions of certain of their opponents, such as: the prevention of a political minority group from achieving its aspirations in the parliament by the politically dominant voting majority, in parliamentary procedure; ideological repression, persecution and imprisonment for political reasons; the blocking of the international political and economic influence of a country by a multilateral treaty or alliance between countries opposed to such influence. === Technological === The improvement of living conditions of any human community is constantly challenged by the need for technologies still inaccessible or unavailable, which can be internally developed or acquired from other communities that have already developed them, and in both cases must overcome barriers such as: in the technology transfer between different countries, the trade and diplomatic negotiating skills with the countries which are providers of the desired new technologies; in the internal development approach, the educational level of the community or country, the accessible collection of specialized information, their technological and industrial base, their institutional level of scientific and technological research, development and innovation, and the level of practiced international collaboration. 
=== Military === When different communities or countries, which border or not, cannot develop good relations, for economic, cultural or political reasons, they may exceed the limits of diplomatic negotiations, creating military defensive or offensive obstacles to their opponents or enemies, such as: building fortifications, entrenchments, barbed wire beds or mine fields, and other similar tactics intended to prevent or hinder movement of the enemy in a certain direction, and to protect your own forces from attack; blocking or destroying physical resources or logistic interconnections, such as bridges, highways, ports or airports, creating barriers to migration, trade, tourism, etc.; the invasion of
|
{"page_id": 22214599, "title": "Obstacle"}
|
of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II. The most important finding from Ginzburg–Landau theory was made by Alexei Abrikosov in 1957. He used Ginzburg–Landau theory to explain experiments on superconducting alloys and thin films. He found that in a type-II superconductor in a high magnetic field, the field penetrates in a triangular lattice of quantized tubes of flux vortices. For this and related work, he was awarded the Nobel Prize in 2003 with Ginzburg and Leggett. === Fluxoid quantization === For superconductors the bosons involved are the so-called Cooper pairs which are quasiparticles formed by two electrons. Hence m = 2me and q = −2e where me and e are the mass of an electron and the elementary charge. It follows from Eq. (8) that Integrating Eq. (15) over a closed loop gives As in the case of helium we define the vortex strength and use the general relation where Φ is the magnetic flux enclosed by the loop. The so-called fluxoid is defined
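Because the charge carriers are Cooper pairs with q = 2e, the quantization described here comes in units of the superconducting flux quantum Φ₀ = h/2e. A two-line check using the exact SI-2019 values of the constants:

```python
# Flux quantum for superconductors: Phi_0 = h / (2e), since the Cooper pair
# carries charge 2e (not e as for a single electron).

h = 6.62607015e-34      # Planck constant, J*s (exact by SI 2019 definition)
e = 1.602176634e-19     # elementary charge, C (exact by SI 2019 definition)

flux_quantum = h / (2 * e)   # ~2.0678e-15 Wb
```

The factor of 2 in the denominator is the experimental signature that the condensate is made of paired electrons rather than single ones.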
|
{"page_id": 35543722, "title": "Macroscopic quantum phenomena"}
|
California's Monterey Peninsula, forever immortalized by the location, Asilomar. Its historic outcome was an unprecedented call for a halt in research until it could be regulated in such a way that the public need not be anxious, and it led to a 16-month moratorium until National Institutes of Health (NIH) guidelines were established. Joshua Lederberg was the leading exception in emphasizing, as he had for years, the potential benefits. At Asilomar, in an atmosphere favoring control and regulation, he circulated a paper countering the pessimism and fears of misuses with the benefits conferred by successful use. He described "an early chance for a technology of untold importance for diagnostic and therapeutic medicine: the ready production of an unlimited variety of human proteins. Analogous applications may be foreseen in fermentation process for cheaply manufacturing essential nutrients, and in the improvement of microbes for the production of antibiotics and of special industrial chemicals." In June 1976, the 16-month moratorium on research expired with the Director's Advisory Committee (DAC) publication of the NIH guidelines of good practice. They defined the risks of certain kinds of experiments and the appropriate physical conditions for their pursuit, as well as a list of things too dangerous to perform at all. Moreover, modified organisms were not to be tested outside the confines of a laboratory or allowed into the environment. Atypical as Lederberg was at Asilomar, his optimistic vision of genetic engineering would soon lead to the development of the biotechnology industry. Over the next two years, as public concern over the dangers of recombinant DNA research grew, so too did interest in its technical and practical applications. Curing genetic diseases remained in the realms of science fiction, but it appeared that producing human simple proteins could be good business. Insulin, one of the smaller, best characterized
|
{"page_id": 6012335, "title": "History of biotechnology"}
|
introduced what became known as the Apex band, consisting of 75 broadcasting frequencies from 41.02 to 43.98 MHz. As on the standard broadcast band, these were AM stations but with higher quality audio – in one example, a frequency response from 20 Hz to 17,000 Hz +/- 1 dB – because station separations were 40 kHz instead of the 10 kHz spacings used on the original AM band. Armstrong worked to convince the FCC that a band of FM broadcasting stations would be a superior approach. That year he financed the construction of the first FM radio station, W2XMN (later KE2XCC) at Alpine, New Jersey. FCC engineers had believed that transmissions using high frequencies would travel little farther than line-of-sight distances, limited by the horizon. When operating with 40 kilowatts on 42.8 MHz, the station could be clearly heard 100 miles (160 km) away, matching the daytime coverage of a full power 50-kilowatt AM station. FCC studies comparing the Apex station transmissions with Armstrong's FM system concluded that his approach was superior. In early 1940, the FCC held hearings on whether to establish a commercial FM service. Following this review, the FCC announced the establishment of an FM band effective January 1, 1941, consisting of forty 200 kHz-wide channels on a band from 42 to 50 MHz, with the first five channels reserved for educational stations. Existing Apex stations were notified that they would not be allowed to operate after January 1, 1941, unless they converted to FM. Although there was interest in the new FM band by station owners, construction restrictions that went into place during WWII limited the growth of the new service. Following the end of WWII, the FCC moved to standardize its frequency allocations. One area of concern was the effects of tropospheric and Sporadic E
|
{"page_id": 10315, "title": "Edwin Howard Armstrong"}
|
This article provides a global overview of the current trends and distribution of metabolic syndrome. Metabolic syndrome (also known as the cardiometabolic syndrome) refers to a cluster of related risk factors for cardiovascular disease that includes abdominal obesity, diabetes, hypertension, and elevated cholesterol. Data from the World Health Organization suggests 65% of the world's population live in countries where being overweight or obese kills more people than being underweight. The WHO defines "overweight" as a BMI greater than or equal to 25, and "obesity" as a BMI greater than or equal to 30. Both overweight and obesity are major risk factors for cardiovascular diseases, specifically heart disease and stroke, and diabetes. The International Diabetes Federation reports that as of 2011, 366 million people have diabetes; this number is projected to increase to over half a billion (estimated 552 million) by 2030. 80 percent of people with diabetes live in developing countries and in 2011, diabetes caused 4.6 million deaths and approximately 78,000 children were diagnosed with type 1 diabetes. == Background == Different definitions of the cardiometabolic syndrome have been proposed by different public health organizations, but recently the International Diabetes Federation (IDF), the National Heart, Lung, and Blood Institute (NHLBI), the American Heart Association (AHA), and others proposed a definition for diagnosing the cardiometabolic syndrome that includes the presence of three out of the following five risk factors: Fasting plasma glucose greater than or equal to 100 mg/dL, or undergoing drug treatment for elevated glucose HDL cholesterol less than 40 mg/dL in men or less than 50 mg/dL in women, or undergoing drug treatment for reduced HDL cholesterol Triglycerides greater than or equal to 150 mg/dL, or undergoing drug treatment for elevated triglycerides Waist circumference greater than or equal to 102 cm (40 in) in men or 88 cm
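The three-of-five rule above is mechanical enough to sketch in code. Note the excerpt is truncated before the fifth (blood-pressure) criterion; the ≥130/≥85 mmHg cutoff used below is the commonly cited harmonized value and should be treated as an assumption relative to this text, and the function and field names are illustrative:

```python
# Harmonized metabolic syndrome screen: at least 3 of 5 risk factors present.
# Thresholds for glucose, HDL, triglycerides, and waist follow the text; the
# blood-pressure cutoff (>=130/>=85 mmHg) is an assumption, since the excerpt
# is truncated before listing it.

def has_metabolic_syndrome(glucose_mg_dl, hdl_mg_dl, triglycerides_mg_dl,
                           waist_cm, systolic_mmhg, diastolic_mmhg, male):
    criteria = [
        glucose_mg_dl >= 100,                      # elevated fasting glucose
        hdl_mg_dl < (40 if male else 50),          # reduced HDL (sex-specific)
        triglycerides_mg_dl >= 150,                # elevated triglycerides
        waist_cm >= (102 if male else 88),         # abdominal obesity
        systolic_mmhg >= 130 or diastolic_mmhg >= 85,  # assumed BP cutoff
    ]
    return sum(criteria) >= 3

# A man with elevated glucose, low HDL, and high triglycerides meets 3 of 5.
positive = has_metabolic_syndrome(110, 35, 180, 90, 120, 70, male=True)
```

Drug treatment for any factor also counts toward that criterion in the full definition; the sketch checks measured values only.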
|
{"page_id": 36738647, "title": "Epidemiology of metabolic syndrome"}
|
fluorite structure === Beyond the unit cell, the extended crystal structure of fluorite continues packing in a face-centered cubic (fcc) structure (also known as cubic close-packed or ccp). This pattern of spherical packing follows an ABC sequence, where each successive layer of spheres settles into the hollows of the layer beneath it. In contrast, hexagonal close-packed (hcp) structures are layered in an ABAB sequence. These two arrangements are the most densely packed forms of spherical packing. == See also == Rock-salt structure == References ==
|
{"page_id": 62983576, "title": "Fluorite structure"}
|
The Borisoglebsk 2 is a Russian, MT-LBu ground vehicle mounted, multi-functional electronic warfare (EW) weapon system. It was developed by Sozvezdie over a six-year period, from 2004 to 2010, and achieved initial operating capability in 2010, but it was not ordered and delivered to the Russian military until February 2015, when manufacture and delivery by UIMC to the Russian Armed Forces began. It is designed to disrupt communications and GPS systems. Rossiyskaya Gazeta reported that Borisoglebsk 2 was the core system for electronic warfare in the Russian Army, controlling four types of jamming units from a single point. Experimentation and testing were conducted after the first deliveries to the Russian armed forces, and the system was in active use in eastern Ukraine by the summer of 2015. US advisers sent to Ukraine have learned about Russian electronic warfare from the Ukrainian Army, though Ukraine has never had access to this new EW technology; the American advisers were nevertheless impressed even by the earlier Russian EW technology in the hands of the Ukrainian Army. Swedish newspaper Svenska Dagbladet claimed that the United States and NATO are worried that the F-35 fighter aircraft may not stand up against new Russian EW systems; Borisoglebsk 2 was given as an example of a new Russian system, but not directly compared to the F-35. As of August 2015, ten sets of this system had been delivered to the Russian armed forces, with another 14 sets to follow. According to Rostec, Russia plans to deploy them along the Russian borders "from Kaliningrad to Blagoveshchensk". As of October 2015, these systems were also rumored to be active in Syria. On 21 September 2016, more than 10 Borisoglebsk
|
{"page_id": 47554353, "title": "Borisoglebsk-2"}
|
An industrial robot is a robot system used for manufacturing. Industrial robots are automated, programmable and capable of movement on three or more axes. Typical applications of robots include welding, painting, assembly, disassembly, pick and place for printed circuit boards, packaging and labeling, palletizing, product inspection, and testing; all accomplished with high endurance, speed, and precision. They can also assist in material handling. In the year 2023, an estimated 4,281,585 industrial robots were in operation worldwide according to the International Federation of Robotics (IFR). == Types and features == There are six types of industrial robots. === Articulated robots === Articulated robots are the most common industrial robots. They look like a human arm, which is why they are also called a robotic arm or manipulator arm. Their articulations with several degrees of freedom allow the articulated arms a wide range of movements. === Autonomous robot === An autonomous robot is a robot that acts without recourse to human control. The first autonomous robots were Elmer and Elsie, constructed in the late 1940s by W. Grey Walter. They were the first robots in history programmed to "think" the way biological brains do and were meant to have free will. Elmer and Elsie were often labeled as tortoises because of how they were shaped and the manner in which they moved. They were capable of phototaxis, the movement that occurs in response to a light stimulus. === Cartesian coordinate robots === Cartesian robots, also called rectilinear, gantry, or x-y-z robots, have three prismatic joints for the movement of the tool and three rotary joints for its orientation in space. To be able to move and orient the effector organ in all directions, such a robot needs 6 axes (or degrees of freedom). In a 2-dimensional environment,
|
{"page_id": 147918, "title": "Industrial robot"}
|
This article discusses how rocks are formed. There are also articles on physical rock formations, rock layerings (strata), and the formal naming of geologic formations. Terrestrial rocks are formed by three main mechanisms: Sedimentary rocks are formed through the gradual accumulation of sediments: for example, sand on a beach or mud on a river bed. As the sediments are buried they are compacted as more and more material is deposited on top. Eventually the sediments become so dense that they essentially form a rock. This process is known as lithification. Igneous rocks have crystallised from a melt or magma. The melt is made up of various components of pre-existing rocks which have been subjected to melting either at subduction zones or within the Earth's mantle. The melt is hot and so passes upward through cooler country rock. As it moves, it cools and various rock types will form through a process known as fractional crystallisation. Igneous rocks can be seen at mid-ocean ridges, areas of island arc volcanism or in intra-plate hotspots. Metamorphic rocks once existed as igneous or sedimentary rocks, but have been subjected to varying degrees of pressure and heat within the Earth's crust. The processes involved change the composition and fabric of the rock, and their original nature is often hard to distinguish. Metamorphic rocks are typically found in areas of mountain building. Rock can also form in the absence of a substantial pressure gradient as material that condensed from a protoplanetary disk, without ever undergoing any transformations in the interior of a large object such as a planet or moon. Astrophysicists classify this as a fourth type of rock: primitive rock. This type is common in asteroids and meteorites. == Rock formation == === 19th-century efforts to synthesize rocks === The synthetic
|
{"page_id": 1235284, "title": "Formation of rocks"}
|
Ralph Asher Alpher (February 3, 1921 – August 12, 2007) was an American cosmologist, who carried out pioneering work in the early 1950s on the Big Bang model, including Big Bang nucleosynthesis and predictions of the cosmic microwave background radiation. == Childhood and education == Alpher was the son of a Jewish immigrant, Samuel Alpher (né Ilfirovich), from Vitebsk, Russian Empire. His mother, Rose Maleson, died of stomach cancer in 1938, and his father later remarried. Alpher graduated at age 15 from Theodore Roosevelt High School in Washington, D.C., and held the ranks of Major and Commander of his school's Cadet program. He worked in the high school theater as stage manager for two years, supplementing his family's Depression-era income. He also learned Gregg shorthand, and in 1937 began working for the director of the American Geophysical Union as a stenographer. In 1940, he was hired by the Department of Terrestrial Magnetism of the Carnegie Foundation, where he worked with Dr. Scott Forbush under contract for the U.S. Navy to develop ship degaussing techniques during World War II. He contributed to the development of the Mark 32 and Mark 45 detonators, torpedoes, Naval gun control, Magnetic Airborne Detection (of submarines), and other top-secret ordnance work (including the Manhattan Project), and he was recognized at the end of the War with the Naval Ordnance Development Award (December 10, 1945 – with Symbol), and another Naval Ordnance Development award in 1946. Alpher's wartime work has been somewhat obscured by security classification. From 1944 through 1955, he was employed at the Applied Physics Laboratory (APL). During the daytime he was involved in the development of ballistic missiles, guidance systems, supersonics, and related subjects. In 1948, he earned his Ph.D. in physics with a theory of nucleosynthesis called neutron capture, and from 1948 onward
|
{"page_id": 1273628, "title": "Ralph Alpher"}
|
Problem analysis is inaccurate because it's based on the patient's words. 2. The habits and problems of users can be monitored in the Metaverse. 3. Psychiatric group therapy in the Metaverse reduces time and solves space problems. 4. Hierarchical segregation and tagging for complex organ surgery reduce errors.
Challenges: 1. Institutional arrangements are needed for issues (e.g., surrogate treatment for drugs). 2. As a replica of the real world, it's helpful for behavioural research. 3. Medical information has to be handled carefully (e.g., user permission, privacy).
2.1.3.2. Metaverse as a target
The metaverse itself is used as the target. In the beginning, the metaverse was used as a tool that reflected the real world. However, people found social communication and value in the metaverse. Detailed cases are discussed in Table 3.
Table 3. List of use as a target.
Game
- Overview: 1. Games are the most popular application in the Metaverse. 2. In the real world, games are separated from real life.
- Advantages: 1. The Metaverse can overlap the real world and make everyday life a game. 2. By adding gaming experience, game elements are reflected in reality (e.g., passing points on the way to work).
- Challenges: 1. The game rewards that are obtained in daily life have important meanings. 2. It can be used not only for entertainment but also for academic problem-solving. 3. Migrating existing games to the Metaverse also becomes a new market.
Business
- Overview: 1. The Metaverse is a space where companies utilize their potential as a new market. 2. Many business models of companies gain income and advertising in the Metaverse.
- Advantages: 1. Virtual products require less process and resources to make than real products. 2. Users of a relatively young age can
|
{"source": 54, "title": "from dpo"}
|
implement more complex synchronization mechanisms. In high-contention situations, synchronization can become a performance bottleneck because contention introduces additional delays and because latency is potentially greater in such a multiprocessor. We discuss how the basic synchronization mechanisms of this section can be extended for large processor counts in Appendix I. # Basic Hardware Primitives The key ability we require to implement synchronization in a multiprocessor is a set of hardware primitives with the ability to atomically read and modify a memory location. Without such a capability, the cost of building basic synchronization primitives will be too high and will increase as the processor count increases. There are a number of alternative formulations of the basic hardware primitives, all of which provide the ability to atomically read and modify a location, together with some way to tell whether the read and write were performed atomically. These hardware primitives are the basic building blocks that are used to build a wide variety of user-level synchronization operations, including things such as locks and barriers. In general, architects do not expect users to employ the basic hardware primitives, but instead expect that the primitives will be used by system programmers to build a synchronization library, a process that is often complex and tricky. Let's start with one such hardware primitive and show how it can be used to build some basic synchronization operations. One typical operation for building synchronization operations is the atomic exchange, which interchanges a value in a register for a value in memory. To see how to use this to build a basic synchronization operation, assume that we want to build a simple lock where the value 0 is used to indicate that the lock is free and 1 is used to indicate that
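The spin lock that this passage goes on to build can be sketched in plain Python. The hardware atomic exchange is simulated here with a private mutex purely so the protocol is runnable; on real hardware the exchange instruction itself provides the atomicity, and no auxiliary lock exists.

```python
import threading

class SpinLock:
    """Simple lock built on atomic exchange: 0 = free, 1 = held."""

    def __init__(self):
        self._value = 0
        self._hw = threading.Lock()  # stand-in for hardware atomicity

    def _atomic_exchange(self, new):
        # Atomically write `new` and return the old value, mimicking
        # the hardware primitive described in the text.
        with self._hw:
            old, self._value = self._value, new
            return old

    def acquire(self):
        # Spin until the exchange returns 0: the lock was free and this
        # thread's simultaneous write of 1 claimed it.
        while self._atomic_exchange(1) == 1:
            pass

    def release(self):
        self._atomic_exchange(0)

# Usage: increments from two threads stay consistent under the lock.
lock, total = SpinLock(), [0]

def worker():
    for _ in range(10_000):
        lock.acquire()
        total[0] += 1
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
```

The key property is that exactly one spinning thread can observe the 0-to-1 transition, which is what makes the exchange usable as a lock.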
|
{"source": 2300, "title": "from dpo"}
|
These parentheses are a part of the cast operator. Use parentheses here if the cast is applied to an expression with arithmetic operators:

(typeName) expression

Discarding the fractional part is not always appropriate. If you want to round a floating-point number to the nearest whole number, use the Math.round method. This method returns a long integer, because large floating-point numbers cannot be stored in an int.

long rounded = Math.round(balance);

If balance is 13.75, then rounded is set to 14. If you know that the result can be stored in an int and does not require a long, you can use a cast:

int rounded = (int) Math.round(balance);

Table 7. Arithmetic Expressions
Mathematical expression | Java expression | Comments
(x + y) / 2 | (x + y) / 2 | The parentheses are required; x + y / 2 computes x + y/2.
x·y / 2 | x * y / 2 | Parentheses are not required; operators with the same precedence are evaluated left to right.
(1 + r/100)^n | Math.pow(1 + r / 100, n) | Use Math.pow(x, n) to compute x^n.
√(a² + b²) | Math.sqrt(a * a + b * b) | a * a is simpler than Math.pow(a, 2).
(i + j + k) / 3 | (i + j + k) / 3.0 | If i, j, and k are integers, using a denominator of 3.0 forces floating-point division.
π | Math.PI | Math.PI is a constant declared in the Math class.

10. A bank account earns interest once per year. In Java, how do you compute the interest earned in the first year? Assume variables percent and balance of type double have already been declared.
11. In Java, how do you compute the side length of a square whose area is stored in the variable area?
12.
|
{"source": 4220, "title": "from dpo"}
|
A., Mariño, M., Vafa, C., “The topological vertex,” Comm. Math. Phys. 254 (2005), no. 2, 425–478.CrossRefGoogle Scholar Aganagic, M., Klemm, A., Vafa, C., “Disk instantons, mirror symmetry and the duality web,” Z. Naturforsch. A 57 (2002), no. 1-2, 1–28.CrossRefGoogle Scholar Aganagic, M., Vafa, C., “Mirror Symmetry, D-Branes and Counting Holomorphic Discs,” arXiv:hep-th/0012041.Google Scholar Borisov, L., Chen, L., Smith, G., “The orbifold Chow ring of toric Deligne-Mumford stacks,” J. Amer. Math. Soc. 18 (2005), no. 1, 193–215.CrossRefGoogle Scholar Bouchard, V., Klemm, A., Mariño, M., and Pasquetti, S., “Remodeling the B-model,” Comm. Math. Phys. 287 (2009), no. 1, 117–178.CrossRefGoogle Scholar Bouchard, V., Klemm, A., Mariño, M., and Pasquetti, S., “Topological open strings on orbifolds,” Comm. Math. Phys. 296 (2010), 589–623.CrossRefGoogle Scholar Brini, A., “Open topological strings and integrable hierarchies: remodeling the A-model,” Comm. Math. Phys. 312 (2012), no. 3, 735–780.CrossRefGoogle Scholar Brini, A., Cavalieri, R., “Open orbifold Gromov-Witten invariants of \left[{\mathbb{C}}^3/ {\mathbb{Z}}_n\right] : localization and mirror symmetry,” Selecta Math. (N.S.) 17 (2011), no. 4, 879–933.CrossRefGoogle Scholar Brini, A., Cavalieri, R., Ross, D., “Crepant resolution and open strings,” J. Reine Angew. Math. 755 (2019), 191–245.CrossRefGoogle Scholar Bryan, J., Cadman, C., Young, B., “The orbifold topological vertex,” Adv. Math. 229 (2012), no. 1, 531–595.CrossRefGoogle Scholar Cadman, C., “Using stacks to impose tangency conditions on curves,” Amer. J. Math. 129 (2007), no. 2, 405–427.CrossRefGoogle Scholar Cho, C.-H., “Counting real J-holomorphic discs and spheres in dimension four and six,” J. Korean Math. Soc. 45 (2008), no. 5, 1427–1442.Google Scholar Cavalieri, R., Ross, D., “Open Gromov-Witten theory and the crepant resolution conjecture,” Michigan Math. J. 61 (2012), no. 
4, 807–837.CrossRefGoogle Scholar Cecotti, S., Vafa, C., “Massive orbifolds,” Modern Phys. Lett. A 7 (1992), no. 19, 1715–1723.CrossRefGoogle Scholar Chan, K., Cho, C.-H., Lau, S.-C.,
|
{"source": 6133, "title": "from dpo"}
|
Substructure search (SSS) is a method to retrieve from a database only those chemicals matching a pattern of atoms and bonds which a user specifies. It is an application of graph theory, specifically subgraph matching in which the query is a hydrogen-depleted molecular graph. The mathematical foundations for the method were laid in the 1870s, when it was suggested that chemical structure drawings were equivalent to graphs with atoms as vertices and bonds as edges. SSS is now a standard part of cheminformatics and is widely used by pharmaceutical chemists in drug discovery. There are many commercial systems that provide SSS, typically having a graphical user interface and chemical drawing software. Large publicly-available databases like PubChem and ChemSpider can be searched this way, as can Wikipedia's articles describing individual chemicals. == Definitions == Substructure search is used to retrieve from a database of chemicals those which contain the pattern of atoms and bonds specified by a user. It is implemented using a specialist type of query language and in real-world applications the search may be further constrained using logical operators on additional data held in the database. Thus "return all carboxylic acids where a sample of >1 g is available". One definition of "substructure" was provided in 2008: "given two chemical structures A and B, if structure A is fully contained in structure B, then A is a substructure of B, while B is a superstructure of A." In this definition, the word "structure" is not synonymous with "compound". If it were, the structure for ethanol, CH3CH2OH would not be a substructure of propanol, CH3CH2CH2OH, since the terminal CH3 of ethanol is not fully contained at the propanol chain two atoms away from the OH group. Instead the query structure is, formally, a hydrogen-depleted molecular graph. The search is thus
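The hydrogen-depleted-graph matching described above can be sketched as a naive backtracking subgraph search. Production systems use far more sophisticated algorithms plus pre-screening; the graph encoding here (element-labelled nodes, bonds as unordered node pairs) is an illustrative assumption, not any real query language.

```python
def has_substructure(query, target):
    """Return True if `query` is a substructure of `target`.

    Each molecule is (atoms, bonds): atoms maps node id -> element
    symbol, bonds is a set of frozenset node-id pairs. Hydrogens are
    omitted, matching the hydrogen-depleted molecular graphs above.
    """
    q_atoms, q_bonds = query
    t_atoms, t_bonds = target
    order = list(q_atoms)

    def extend(mapping):
        if len(mapping) == len(order):
            return True
        qn = order[len(mapping)]
        for tn in t_atoms:
            if tn in mapping.values() or t_atoms[tn] != q_atoms[qn]:
                continue
            # Every query bond to an already-mapped atom must also
            # exist between the corresponding target atoms.
            if all(frozenset({qn, prev}) not in q_bonds
                   or frozenset({tn, mapping[prev]}) in t_bonds
                   for prev in mapping):
                if extend({**mapping, qn: tn}):
                    return True
        return False

    return extend({})

# The ethanol/propanol example from the text: ethanol (C-C-O) is a
# substructure of 1-propanol (C-C-C-O), but not the other way round.
ethanol = ({1: "C", 2: "C", 3: "O"},
           {frozenset({1, 2}), frozenset({2, 3})})
propanol = ({1: "C", 2: "C", 3: "C", 4: "O"},
            {frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})})
```

Because the match works on the graph rather than on the drawn formula, ethanol maps onto the CH3CH2OH end of propanol even though, as the text notes, it is not "fully contained" as a literal string of atoms counted from the terminal CH3.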
|
{"page_id": 5643301, "title": "Substructure search"}
|
governments may have an incentive to disrupt or obstruct investigations into the matter. Additional remediative policies have been proposed by concerned groups of citizens due to a lack of governmental accountability. The US-Vietnam Dialogue Group on Agent Orange/Dioxin of the Aspen Institute established a 10-year Plan of Action on June 16, 2010, to call for governmental participation in addressing herbicide effect in Vietnam. This plan calls for the United States and the Vietnamese government to work with other governments and NGOs to invest 30 million dollars over ten years to clean and purify harmed ecosystems and expand services to families who have been affected medically and physically by Agent Orange. === Scientific objections === The current scientific consensus on the effects of Agent Orange concludes that scientists at the time made erroneous judgments on how devastating the chemical could be. Scientific reviews ex post facto have indicated that many of these supposedly objective studies that conclude a beneficial use of Agent Orange were based on access to still classified documents and little else. According to Koppes's study, scientists repeatedly minimized the harmful effects of the chemical and ignored empirical evidence. == References ==
|
{"page_id": 47675831, "title": "Impact of Agent Orange in Vietnam"}
|
aerodynamic diameter. The thoracic fraction is the proportion of the particles in ambient aerosol that can reach the thorax or chest region. The respirable fraction is the proportion of particles in the air that can reach the alveolar region. To measure the respirable fraction of particles in air, a pre-collector is used with a sampling filter. The pre-collector excludes particles as the airways remove particles from inhaled air. The sampling filter collects the particles for measurement. It is common to use cyclonic separation for the pre-collector, but other techniques include impactors, horizontal elutriators, and large pore membrane filters. Two alternative size-selective criteria, often used in atmospheric monitoring, are PM10 and PM2.5. PM10 is defined by ISO as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 10 μm aerodynamic diameter and PM2.5 as particles which pass through a size-selective inlet with a 50% efficiency cut-off at 2.5 μm aerodynamic diameter. PM10 corresponds to the "thoracic convention" as defined in ISO 7708:1995, Clause 6; PM2.5 corresponds to the "high-risk respirable convention" as defined in ISO 7708:1995, 7.1. The United States Environmental Protection Agency replaced the older standards for particulate matter based on Total Suspended Particulate with another standard based on PM10 in 1987 and then introduced standards for PM2.5 (also known as fine particulate matter) in 1997. == See also == Aerogel Aeroplankton Aerosol transmission Bioaerosol Deposition (Aerosol physics) Global dimming Nebulizer Monoterpene Stratospheric aerosol injection == References == === Sources === == External links == International Aerosol Research Assembly Archived 2020-01-21 at the Wayback Machine American Association for Aerosol Research NIOSH Manual of Analytical Methods (see chapters on aerosol sampling)
|
{"page_id": 57763, "title": "Aerosol"}
|
that the jets are only present during bright outbursts. The jets were observed again during subsequent outbursts; their velocity is highly variable at the beginning but settles on a constant velocity after roughly 1 month. A single jet can also occur. The jets could be formed by material that cannot accrete on the white dwarf that reaches the Eddington limit. == References == == Further reading == Tomov, N. A.; Tomova, M. T.; Taranova, O. G. (2004). "Broad-band multicolour observations of the symbiotic binary Z And during quiescence and its activity at the end of 2002". Astronomy & Astrophysics. 428 (3): 985–992. Bibcode:2004A&A...428..985T. doi:10.1051/0004-6361:20041065. Taranova, O. G.; Tomov, N. A.; Tomova, M. T.; Shenavrin, V. I. (2004). "The symbiotic system Z Andromedae during the flare of 2000–2002". Astronomy Reports. 48 (9): 742. Bibcode:2004ARep...48..742T. doi:10.1134/1.1800174. S2CID 123346371. Tomov, N. A.; Taranova, O. G.; Tomova, M. T. (2003). "Mass ejection by the symbiotic binary Z And during its 2000–2002 outburst". Astronomy and Astrophysics. 401 (2): 669. Bibcode:2003A&A...401..669T. doi:10.1051/0004-6361:20030140. Sokoloski, J. L.; Brocksopp, C; Kaiser, C; Seymour, N (2002). "A Radio "Jet" in the Prototypical Symbiotic Star Z Andromedae?". American Astron. Soc. Meeting. 201: 17.12. Bibcode:2002AAS...201.1712S. Bisikalo, D. V.; Boyarchuk, A. A.; Kilpio, E. Yu; Kuznetsov, O. A. (2002). "Structure of gas flows in Z and in quiescence and during outbursts". Astronomy Reports. 46 (12): 1022. Bibcode:2002ARep...46.1022B. doi:10.1134/1.1529260. S2CID 121279397. Birriel, Jennifer J; Espey, Brian R; Schulte-Ladbeck, Regina E (1998). "Near-simultaneous Observations of Direct and Raman-scattered Linesin the Symbiotic Star Z Andromedae". The Astrophysical Journal. 507 (1): L75 – L78. Bibcode:1998ApJ...507L..75B. doi:10.1086/311673. Schmid, H. M.; Schild, H (1997). "The polarimetric orbit of Z Andromedae". Astronomy & Astrophysics. 327: 219. 
Bibcode:1997A&A...327..219S. Skopal, A; Vittone, A. A.; Errico, L; Otsuka, M; Tamura, S; Wolf, M; Elkin, V. G. (2006). "Structure of the hot object in
|
{"page_id": 11215402, "title": "Z Andromedae"}
|
carbonation to form deep grykes and rounded blocks called clints. Grykes have a habitat of their own, which encourages the growth of shade-loving ferns such as hart's tongue and dog's mercury. During the last Ice Age, a stream is thought to have poured over Malham Cove - the most spectacular feature in the Yorkshire Dales. At the end of the Ice Age the limestone, which had been frozen solid, once again became permeable, allowing the water to disappear through its joints. Now Malham Cove is a high cliff (83 m) - it is completely dry, and a great attraction to rock climbers. A gorge is a steep-sided valley, often formed in a limestone area as the result of the collapse of a roof above a cave system. Gordale Scar is an excellent example. == Subsurface features == Caves are common subsurface features in limestone landscapes. In the Yorkshire Dales, there are numerous caves, three of which - Ingleborough Cave, White Scar Caves and Stump Cross Caverns - are now show caves for the public. In Ireland there are a large number of show caves open to visitors - Crag Cave, Ailwee Cave and Marble Arch Caves. The stalagmite and stalactite are the two main subsurface features in a Carboniferous Limestone area. These are formed when rainwater - a weak carbonic acid capable of dissolving limestone - percolates through it via the grykes and joints underground. This means the limestone is pervious. As this happens the limestone is dissolved and removed in solution. Caverns are often found below the surface in the limestone and as the lime-rich water finds its way underground it begins to drip from the roof of the cavern. It is cold underground so there is little evaporation but some does take place leaving a trace of
|
{"page_id": 4242325, "title": "Carboniferous Limestone"}
|
veteran and was a design engineer at the British Broadcasting Corporation (BBC) for a year. He was later Technical Director at Wharfedale, then a leading British loudspeaker manufacturer. Following corporate change at Wharfedale, Cooke left to see his own ideas realized. Cooke acquired the site of a foundry in Tovil, Maidstone, owned by makers of agricultural machines, and initially worked in Nissen huts erected on the site. In KEF: 50 Years of Innovation in Sound, the authors assert that KEF reduced the average size of bass-rich home loudspeakers from 9–10 cubic feet (250–280 L) to about 2 cubic feet (57 L), based on the work on the "acoustic-suspension woofer" at Acoustic Research. The company pioneered large-scale production of drivers with cones made of materials other than paper, and the application of fast Fourier transform analysis to the acoustics of loudspeakers. KEF was also an early adopter of modern quality-control principles in driver manufacture. The first loudspeaker manufactured was the K1 Slimline, in which the driver units used diaphragms made of polystyrene and melinex. Soon after, in 1962, came the famous B139 'racetrack'-shaped woofer, which allowed for the design of the Celeste – one of the first truly high-performance bookshelf loudspeakers. As Laurie Fincham, Cooke's successor as chief engineer, later revealed, the only reason the B139 had a vertically mounted ovoid shape was that the British tax code at the time penalised 2-way speakers below a certain arbitrary width: professional products were not taxed, and "professional" was defined as above 8 inches for a woofer or as a 3-way speaker. From the mid-1960s, KEF manufactured BBC-designed monitor loudspeakers, such as the LS5/1A, for the Corporation and for wider distribution. Cooke's previous relationship with the BBC in the 1950s continued as KEF developed through the 1960s and 70s. In the mid-1960s KEF introduced the
|
{"page_id": 22101159, "title": "KEF"}
|
LTT 3780, also known as TOI-732 or LP 729-54, is the brighter component of a wide visual binary star system in the constellation Hydra. This star is host to a pair of orbiting exoplanets. Based on parallax measurements, it is located at a distance of 72 light years from the Sun. LTT 3780 has an apparent visual magnitude of 13.07, requiring a telescope to view. The spectrum of LTT 3780 presents as a small M-type main-sequence star, a red dwarf, with a stellar classification of M3.5 V. It is spinning very slowly, with a rotation period of 104 days. The abundance of iron, an indicator of the star's metallicity, appears higher than in the Sun. The star is inactive, showing a negligible level of magnetic activity in its chromosphere. It has about 40% of the mass and 37% of the radius of the Sun. The star is radiating just 17% of the Sun's luminosity from its photosphere at an effective temperature of 3,331 K. Collectively designated LDS 3977, the two stars in this system share a common proper motion and have an angular separation of 15.8″, which corresponds to a (physical) projected separation of 348 AU. At this separation, the orbital period would be ~9,100 years. The fainter member is a red dwarf with a class of M5.0 V. It has 14% of the mass of the Sun and 17% of the Sun's radius. == Planetary system == In 2020, an analysis carried out by a team of astronomers led by Ryan Cloutier of the TESS project confirmed the existence of two planets on mildly eccentric orbits, the inner being a super-Earth and the outer a small gas planet about half the mass of Uranus. === LTT 3780 c === Astronomers utilizing the Gemini South 8.1-meter telescope performed an atmospheric
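The quoted projected separation follows from the distance and the angular separation via the small-angle rule (1 arcsecond at 1 parsec subtends 1 AU). Feeding it into Kepler's third law with the quoted component masses gives a period of the same order as the ~9,100 years cited; exact agreement is not expected, since the projected separation is only a proxy for the true semi-major axis.

```python
import math

LY_PER_PC = 3.26156          # light years per parsec
d_pc = 72 / LY_PER_PC        # distance quoted in the text, in parsecs
sep_au = 15.8 * d_pc         # 15.8" angular separation -> projected AU

# Kepler's third law in solar units: P[yr]^2 = a[AU]^3 / M[Msun],
# treating the projected separation as the semi-major axis.
m_total = 0.40 + 0.14        # quoted masses of the two red dwarfs
period_yr = math.sqrt(sep_au ** 3 / m_total)
```

With these inputs, sep_au comes out near the 348 AU in the text, and period_yr lands within roughly 10% of the quoted ~9,100 years.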
|
{"page_id": 63297156, "title": "LTT 3780"}
|
Konica Minolta, Inc. (コニカミノルタ, Konika Minoruta) is a Japanese multinational technology company headquartered in Marunouchi, Chiyoda, Tokyo, with offices in 49 countries worldwide. The company manufactures business and industrial imaging products, including copiers, laser printers, multi-functional peripherals (MFPs) and digital print systems for the production printing market. Konica Minolta's Managed Print Service (MPS) is called Optimised Print Services. The company also makes optical devices, including lenses and LCD film; medical and graphic imaging products, such as X-ray image processing systems, colour proofing systems, and X-ray film; photometers, 3-D digitizers, and other sensing products; and textile printers. It once had camera and photo operations inherited from Konica and Minolta but they were sold in 2006 to Sony, with Sony's Alpha series being the successor SLR division brand. == History == === Company history === Konica Minolta was formed by a merger between Japanese imaging firms Konica and Minolta, announced on 7 January 2003 with the corporate structure completing the re-organization in October 2003. Different group companies, such as the operations in the headquarters and national operating companies, began the process around the same time, however the exact dates vary for each group company. Konica Minolta uses a "Globe Mark" logo that is similar to but slightly different from that of the former company. It also uses the same corporate slogan as the former Minolta company: "The Essentials of Imaging". On 19 January 2006 the company announced that it was quitting the camera business due to high financial losses. SLR camera service operations were handed over to Sony starting on 31 March 2006 and Sony has continued development of cameras that are compatible with Minolta autofocus lenses. 
Originally, in the negotiations, Konica Minolta wanted cooperation with Sony in camera equipment production rather than a sell-out deal, but Sony vehemently refused, saying that
|
{"page_id": 423830, "title": "Konica Minolta"}
|
years, specify which definition is being used. To distinguish between calendar years and Besselian years, it became customary to add ".0" to the Besselian years. Since the switch to Julian years in the mid-1980s, it has become customary to prefix "B" to Besselian years. So, "1950" is the calendar year 1950, and "1950.0" = "B1950.0" is the beginning of Besselian year 1950. The IAU constellation boundaries are defined in the equatorial coordinate system relative to the equinox of B1875.0. The Henry Draper Catalog uses the equinox B1900.0. The classical star atlas Tabulae Caelestes used B1925.0 as its equinox. According to Meeus, and also according to the formula given above, B1900.0 = JDE 2415020.3135 = 1900 January 0.8135 TT B1950.0 = JDE 2433282.4235 = 1950 January 0.9235 TT == Julian years and J2000 == A Julian year is an interval with the length of a mean year in the Julian calendar, i.e. 365.25 days. This interval measure does not itself define any epoch: the Gregorian calendar is in general use for dating. But, standard conventional epochs which are not Besselian epochs have been often designated nowadays with a prefix "J", and the calendar date to which they refer is widely known, although not always the same date in the year: thus "J2000" refers to the instant of 12 noon (midday) on January 1, 2000, and J1900 refers to the instant of 12 noon on January 0, 1900, equal to December 31, 1899. It is also usual now to specify on what time scale the time of day is expressed in that epoch-designation, e.g. often Terrestrial Time. In addition, an epoch optionally prefixed by "J" and designated as a year with decimals (2000 + x), where x is either positive or negative and is quoted to 1 or 2 decimal places, has
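The Julian dates quoted above for B1900.0 and B1950.0 can be reproduced from the conventional relation between Besselian epochs and Julian dates (Lieske's relation, which ticks in tropical years of about 365.2422 days from JDE 2415020.31352), while Julian epochs tick in exact 365.25-day years from J2000.0 = JD 2451545.0. The excerpt only quotes the resulting dates, so treating these constants as the intended ones is an assumption, though it is the standard astronomical convention.

```python
def besselian_epoch_to_jd(b):
    # Conventional (Lieske) relation: Besselian epochs advance by one
    # tropical year (~365.242198781 days) per unit, anchored at B1900.0.
    return 2415020.31352 + (b - 1900.0) * 365.242198781

def julian_epoch_to_jd(j):
    # Julian epochs advance by exactly 365.25 days per unit from J2000.0.
    return 2451545.0 + (j - 2000.0) * 365.25

# Reproduces the values quoted in the text:
# B1900.0 -> JDE 2415020.3135, B1950.0 -> JDE 2433282.4235
```

Note that J1900 = JD 2415020.0 falls at 12 noon on January 0, 1900, consistent with the convention described above.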
|
{"page_id": 200006, "title": "Epoch (astronomy)"}
|
Attribute substitution is a psychological process thought to underlie a number of cognitive biases and perceptual illusions. It occurs when an individual has to make a judgment (of a target attribute) that is computationally complex, and instead substitutes a more easily calculated heuristic attribute. This substitution is thought of as taking place in the automatic intuitive judgment system, rather than the more self-aware reflective system. Hence, when someone tries to answer a difficult question, they may actually answer a related but different question, without realizing that a substitution has taken place. This explains why individuals can be unaware of their own biases, and why biases persist even when the subject is made aware of them. It also explains why human judgments often fail to show regression toward the mean. The theory of attribute substitution unifies a number of separate explanations of reasoning errors in terms of cognitive heuristics. In turn, the theory is subsumed by an effort-reduction framework proposed by Anuj K. Shah and Daniel M. Oppenheimer, which states that people use a variety of techniques to reduce the effort of making decisions. == History == In a 1974 paper, psychologists Amos Tversky and Daniel Kahneman argued that a broad family of biases (systematic errors in judgment and decision) were explainable in terms of a few heuristics (information-processing shortcuts), including availability and representativeness. In 1975, psychologist Stanley Smith Stevens proposed that the strength of a stimulus (e.g., the brightness of a light, the severity of a crime) is encoded neurally in a way that is independent of modality. Kahneman and Frederick built on this idea, arguing that the target attribute and heuristic attribute could be unrelated. In a 2002 revision of the theory, Kahneman and Shane Frederick proposed attribute substitution as a process underlying these and other effects. == Conditions
|
{"page_id": 23044987, "title": "Attribute substitution"}
|
the hypotheses is correct; that is, it believes the sentence h1 ∨ h2 ∨ h3 ∨ . . . ∨ hn. (19.2) As the examples arrive, hypotheses that are not consistent with the examples can be ruled out. Let us examine this notion of consistency more carefully. Obviously, if hypothesis hj is consistent with the entire training set, it has to be consistent with each example in the training set. What would it mean for it to be inconsistent with an example? There are two possible ways that this can happen: • An example can be a false negative for the hypothesis, if the hypothesis says it should be negative but in fact it is positive. For instance, the new example X13 described by Patrons(X13, Full) ∧ ¬Hungry(X13) ∧ . . . ∧ WillWait(X13) would be a false negative for the hypothesis hr given earlier. From hr and the example description, we can deduce both WillWait(X13), which is what the example says, and ¬WillWait(X13), which is what the hypothesis predicts. The hypothesis and the example are therefore logically inconsistent. • An example can be a false positive for the hypothesis, if the hypothesis says it should be positive but in fact it is negative. If an example is a false positive or false negative for a hypothesis, then the example and the hypothesis are logically inconsistent with each other. Assuming that the example is a correct observation of fact, then the hypothesis can be ruled out. Logically, this is exactly analogous to the resolution rule of inference (see Chapter 9), where the disjunction of hypotheses corresponds to a clause and the example corresponds to a literal that
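The false-negative/false-positive consistency test above can be sketched in a few lines; the attribute names below are illustrative stand-ins, not the chapter's full restaurant example:

```python
# A hypothesis is a predicate over an example description; an example is a
# pair (description, actual_label). The test reports how the two disagree.

def inconsistency(hypothesis, example):
    """Return 'false negative', 'false positive', or None (consistent)."""
    description, actual = example
    predicted = hypothesis(description)
    if actual and not predicted:
        return "false negative"   # hypothesis says negative, example is positive
    if predicted and not actual:
        return "false positive"   # hypothesis says positive, example is negative
    return None

# An illustrative candidate hypothesis: "will wait iff not full, or hungry"
h_r = lambda d: (not d["full"]) or d["hungry"]

# An X13-style example: Patrons=Full, not Hungry, yet WillWait is true
x13 = ({"full": True, "hungry": False}, True)
print(inconsistency(h_r, x13))  # false negative
```

Any hypothesis for which `inconsistency` returns a non-`None` value can be ruled out of the disjunction (19.2), exactly as the text describes.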
|
{"source": 1019, "title": "from dpo"}
|
least min_samples many data points within a distance of eps to a given data point, that data point is classified as a core sample. Core samples that are closer to each other than the distance eps are put into the same cluster by DBSCAN. The algorithm works by picking an arbitrary point to start with. It then finds all points with distance eps or less from that point. If there are fewer than min_samples points within distance eps of the starting point, this point is labeled as noise, meaning that it doesn’t belong to any cluster. If there are more than min_samples points within a distance of eps, the point is labeled a core sample and assigned a new cluster label. Then, all neighbors (within eps) of the point are visited. If they have not been assigned a cluster yet, they are assigned the new cluster label that was just created. If they are core samples, their neighbors are visited in turn, and so on. The cluster grows until there are no more core samples within distance eps of the cluster. Then another point that hasn’t yet been visited is picked, and the same procedure is repeated. In the end, there are three kinds of points: core points, points that are within distance eps of core points (called boundary points), and noise. When the DBSCAN algorithm is run on a particular dataset multiple times, the clustering of the core points is always the same, and the same points will always be labeled as noise. However, a boundary point might be neighbor to core samples of more than one cluster. Therefore, the cluster membership of boundary points depends on the order in which points are visited. Usually there are only a few boundary points,
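The procedure above can be transcribed almost literally; this is an unoptimized pure-Python sketch (in practice one would call scikit-learn's DBSCAN with the same eps and min_samples parameters):

```python
# Direct transcription of the DBSCAN procedure described in the text.
import math

def dbscan(points, eps, min_samples):
    NOISE = -1
    labels = {}               # point index -> cluster id, or NOISE
    def neighbors(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if i in labels:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_samples:       # not a core sample
            labels[i] = NOISE
            continue
        labels[i] = cluster               # new cluster from this core sample
        stack = list(nbrs)
        while stack:                      # grow the cluster
            j = stack.pop()
            if labels.get(j) == NOISE:    # noise can become a boundary point
                labels[j] = cluster
            if j in labels:
                continue
            labels[j] = cluster
            if len(neighbors(j)) >= min_samples:   # j is itself core
                stack.extend(neighbors(j))
        cluster += 1
    return [labels[i] for i in range(len(points))]

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_samples=2))  # [0, 0, 0, 1, 1, -1]
```

The toy run yields two clusters and one noise point; note that a point first labeled noise can later be absorbed as a boundary point, which is exactly the order dependence discussed above.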
|
{"source": 2778, "title": "from dpo"}
|
from among the set P′ that we’ve seen so far. Specifically, for each point p′ ∈ P′ that we have seen so far, we keep the subsquare containing it in the dictionary, tagged with the index of p′. We note that N² = ⌈2/δ⌉² will, in general, be much larger than n, the number of points. Thus we are in the type of situation considered in Section 13.6 on hashing, where the universe of possible elements (the set of all subsquares) is much larger than the number of elements being indexed (the subsquares containing an input point seen thus far). Now, when we consider the next point p in the random order, we determine the subsquare S_st containing it and perform a Lookup operation for each of the 25 subsquares close to S_st. For any points discovered by these Lookup operations, we compute the distance to p. If none of these distances are less than δ, then the closest distance hasn’t changed; we insert S_st (tagged with p) into the dictionary and proceed to the next point. However, if we find a point p′ such that δ′ = d(p, p′) < δ, then we need to update our closest pair. This updating is a rather dramatic activity: Since the value of the closest pair has dropped from δ to δ′, our entire collection of subsquares, and the dictionary supporting it, has become useless—it was, after all, designed only to be useful if the minimum distance was δ. We therefore invoke MakeDictionary to create a new, empty dictionary that will hold subsquares whose side lengths are δ′/2. For each point seen thus far, we determine the subsquare containing it (in this new collection
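The lookup-and-rebuild step can be sketched as follows, with a plain dict standing in for the MakeDictionary/Lookup hashing structure; points lie in the unit square, subsquares have side δ/2, and a pair closer than δ must fall in one of the 25 nearby subsquares (the sample coordinates are illustrative):

```python
# Sketch of the randomized closest-pair step: a dict keyed by subsquare
# coordinates holds at most one point per subsquare of side delta/2.
import math

def closest_pair(points):
    pts = list(points)
    delta = math.dist(pts[0], pts[1])
    best = (pts[0], pts[1])
    grid = {}                       # subsquare (s, t) -> point stored there

    def square(p, d):
        side = d / 2
        return (int(p[0] // side), int(p[1] // side))

    def rebuild(upto, d):           # MakeDictionary + re-insert seen points
        grid.clear()
        for q in pts[:upto]:
            grid[square(q, d)] = q

    rebuild(2, delta)
    for k in range(2, len(pts)):
        p = pts[k]
        s, t = square(p, delta)
        found = None
        for ds in range(-2, 3):     # the 25 nearby subsquares
            for dt in range(-2, 3):
                q = grid.get((s + ds, t + dt))
                if q is not None and math.dist(p, q) < delta:
                    delta = math.dist(p, q)
                    found = q
        if found is None:
            grid[(s, t)] = p        # closest distance unchanged; just insert
        else:
            best = (p, found)
            rebuild(k + 1, delta)   # new dictionary with side delta'/2
    return delta, best

pts = [(0.1, 0.1), (0.9, 0.9), (0.5, 0.5), (0.52, 0.5), (0.2, 0.8)]
d, pair = closest_pair(pts)
print(round(d, 3))  # 0.02
```

Each insertion costs 25 Lookups; the expensive full rebuild happens only when the minimum distance actually drops, which is the event whose expected cost the chapter's analysis bounds.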
|
{"source": 5202, "title": "from dpo"}
|
F,S, T, Δ)) ≠ 0. It follows from (ii) that we have π(Fitt_R(H̃²_f(G_{F,S}, T_Ad, Δ_Ad))) = Fitt_{Λ_O(W_K)}(H̃²_f(G_{F,S}, T, Δ)). (A.9) In particular, Fitt_{Λ_O(W_K)}(H̃²_f(G_{F,S}, T, Δ)) ≠ 0, as required. Corollary A.5. The R-module Fitt_R(H̃²_f(G_{F,S}, T_Ad, Δ_Ad)) is nontrivial and cyclic. Proof. We verified the nontriviality of Fitt_R(H̃²_f(G_{F,S}, T_Ad, Δ_Ad)) in the proof of Proposition A.4(iii). It follows from Proposition A.4(ii) that the projective dimension of H̃²_f(G_{F,S}, T_Ad, Δ_Ad) as an R-module equals 1. This completes the proof that Fitt_R(H̃²_f(G_{F,S}, T_Ad, Δ_Ad)) is cyclic. Definition A.6. We define D̃^alg_Ad ∈ Frac(R)^×/R^× as a generator of the cyclic R-module Fitt_R(H̃²_f(G_{F,S}, T_Ad, Δ_Ad)). Corollary A.7. π(D̃^alg_Ad) = D̃^alg. Proof. This is a restatement of (A.9), in view of the definitions of D̃^alg_Ad and D̃^alg.
|
{"source": 6389, "title": "from dpo"}
|
bodies, including "GW-bodies" and "decapping-bodies"; however, "P-bodies" was the term chosen and is now widely used and accepted in the scientific literature. Recently, evidence has been presented suggesting that GW-bodies and P-bodies may in fact be different cellular components. The evidence is that GW182 and Ago2, both associated with miRNA gene silencing, are found exclusively in multivesicular bodies or GW-bodies and are not localized to P-bodies. Also of note, P-bodies are not equivalent to stress granules and they contain largely non-overlapping proteins. The two structures support overlapping cellular functions but generally occur under different stimuli. Hoyle et al. suggest that a novel site termed EGP bodies, or stress granules, may be responsible for mRNA storage, as these sites lack the decapping enzyme. == Associations with microRNA == microRNA-mediated repression occurs in two ways, either by translational repression or by stimulating mRNA decay. miRNAs recruit the RISC complex to the mRNA to which they are bound. The link to P-bodies comes from the fact that many, if not most, of the proteins necessary for miRNA gene silencing are localized to P-bodies, as reviewed by Kulkarni et al. (2010). These proteins include, but are not limited to, the scaffold protein GW182, Argonaute (Ago), decapping enzymes and RNA helicases. The current evidence points toward P-bodies as being scaffolding centers of miRNA function, especially due to the evidence that a knockdown of GW182 disrupts P-body formation. However, there remain many unanswered questions about P-bodies and their relationship to miRNA activity. Specifically, it is unknown whether there is a context-dependent (stress state versus normal) specificity to the P-body's mechanism of action. Based on the evidence that P-bodies sometimes are the site of mRNA decay and sometimes the mRNA can exit the P-bodies and re-initiate translation, the question remains of what controls this switch. Another
|
{"page_id": 4144434, "title": "P-bodies"}
|
which in turn facilitates heterolytic bond cleavage (in the case of the Friedel-Crafts reaction) or directly activates the substrate toward nucleophilic attack (in the case of carbonyl addition reactions). The dichotomy can have important consequences in some reactions, as in the case of Lewis acid-promoted acetal substitution reactions, where the SN1 and SN2 mechanisms shown below may give different stereochemical outcomes. Studying the product ratio in a bicyclic system, Denmark and colleagues showed that both mechanisms could be operative depending on the denticity of the Lewis acid and the identity of the R' group. In Diels-Alder and 1,3-dipolar cycloaddition reactions, Lewis acids lower the LUMO energy of the dienophile or dipolarophile, respectively, making it more reactive toward the diene or the dipole. == Lewis acid catalysis with carbonyl-containing substrates == Among the types of reactions that can be catalyzed by Lewis acids, those with carbonyl-containing substrates have received the greatest amount of attention. The first major discovery in this area was in 1960, when Yates and Eaton reported the significant acceleration of the Diels-Alder reaction by AlCl3 when maleic anhydride is the dienophile. Early theoretical studies that depended on frontier orbital analysis established that Lewis acid catalysis operates via lowering of the dienophile's LUMO energy. Recent studies, however, have shown that this rationale behind Lewis acid-catalyzed Diels-Alder reactions is incorrect. It is found that Lewis acids accelerate the Diels-Alder reaction by reducing the destabilizing steric Pauli repulsion between the interacting diene and dienophile, and not by lowering the energy of the dienophile's LUMO and consequently enhancing the normal electron demand orbital interaction. 
The Lewis acid binds to the dienophile via a donor-acceptor interaction and via that mechanism polarizes occupied orbital density away from the reactive C=C double bond of the dienophile towards the Lewis acid. This reduced occupied orbital density on
|
{"page_id": 37264695, "title": "Lewis acid catalysis"}
|
Hod Lipson and Jordan Pollack at Brandeis University at the turn of the 21st century. == See also == Bio-inspired robotics Evolutionary computation == References ==
|
{"page_id": 1050195, "title": "Evolutionary robotics"}
|
cubic Percolation threshold == References == == Further reading == Schaffer; Saxena; Antolovich; Sanders; Warner (1999). The Science and Design of Engineering Materials (2nd ed.). New York, NY: WCB/McGraw-Hill. pp. 81–88. ISBN 978-0256247664. Callister, W. (2002). Materials Science and Engineering (6th ed.). San Francisco, CA: John Wiley and Sons. pp. 105–114. ISBN 978-0471135760.
|
{"page_id": 3436583, "title": "Atomic packing factor"}
|
the English Language. The Oxford English Dictionary dates this use to 1729. Fact may also indicate findings derived through a process of evaluation, including review of testimony, direct observation, or otherwise; as distinguishable from matters of inference or speculation. This use is reflected in the terms "fact-find" and "fact-finder" (e.g., "set up a fact-finding commission"). Facts may be checked by reason, experiment, personal experience, or may be argued from authority. Roger Bacon wrote "If in other sciences we should arrive at certainty without doubt and truth without error, it behooves us to place the foundations of knowledge in mathematics." == In philosophy == In philosophy, the concept fact is considered in the branch of philosophy concerned with knowledge, called epistemology and ontology, which studies concepts such as existence, being, becoming, and reality. Questions of objectivity and truth are closely associated with questions of fact. A fact can be defined as something that is the case, in other words, a state of affairs. Facts may be understood as information, which makes a true sentence true: "A fact is, traditionally, the worldly correlate of a true proposition, a state of affairs whose obtaining makes that proposition true." Facts may also be understood as those things to which a true sentence refers. The statement "Jupiter is the largest planet in the Solar System" is about the fact that Jupiter is the largest planet in the Solar System. === Correspondence and the slingshot argument === Pascal Engel's version of the correspondence theory of truth explains that what makes a sentence true is that it corresponds to a fact. This theory presupposes the existence of an objective world. The Slingshot argument claims to show that all true statements stand for the same thing, the truth value true. If this argument holds, and facts are taken
|
{"page_id": 58617, "title": "Fact"}
|
In computer networking, a Fibre Channel frame is the frame of the Fibre Channel protocol. The basic building blocks of an FC connection are the frames. They contain the information to be transmitted (payload), the address of the source and destination ports and link control information. Frames are broadly categorized as Data frames and Link_control frames. Data frames may be used as Link_Data frames and Device_Data frames, while link control frames are classified as Acknowledge (ACK) and Link_Response (Busy and Reject) frames. The primary function of the Fabric is to receive the frames from the source port and route them to the destination port. It is the FC-2 layer's responsibility to break the data to be transmitted into frame size, and reassemble the frames. Each frame begins and ends with a frame delimiter. The frame header immediately follows the Start of Frame (SOF) delimiter. The frame header is used to control link applications, control device protocol transfers, and detect missing or out of order frames. Optional headers may contain further link control information. A maximum 2048 byte long field (payload) contains the information to be transferred from a source N_Port to a destination N_Port. The 4 byte Cyclic Redundancy Check (CRC) precedes the End of Frame (EOF) delimiter. The CRC is used to detect transmission errors. The maximum total frame length is 2148 bytes. Between successive frames a sequence of (at least) six primitives must be transmitted, sometimes called the interframe gap. == References ==
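The byte accounting behind the 2148-byte maximum can be checked directly. One caveat: the 2112-byte data field below (2048-byte payload plus up to 64 bytes of optional headers) is an assumption of this sketch; the other sizes are as stated in the text:

```python
# Byte accounting for a maximum-size Fibre Channel frame.
SOF = 4            # Start of Frame delimiter
FRAME_HEADER = 24  # FC-2 frame header
DATA_FIELD = 2112  # payload (max 2048) + optional headers (assumed max 64)
CRC = 4            # Cyclic Redundancy Check
EOF = 4            # End of Frame delimiter

MAX_FRAME = SOF + FRAME_HEADER + DATA_FIELD + CRC + EOF
print(MAX_FRAME)   # 2148
```

The sum reproduces the maximum total frame length of 2148 bytes quoted above.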
|
{"page_id": 13516816, "title": "Fibre Channel frame"}
|
and in use by artists who prefer their unique handling, mixing, and structural qualities. Lead white has also shown to have extended longevity compared to zinc and titanium, which will crack much earlier. Lead white is preferred by some artists for its warmer tone, compared with the colder titanium and zinc whites. Flake white has various drawbacks, including a tendency to become transparent over time. It also blackens in the presence of certain atmospheric pollutants, although this can be reversed. === Water-based paints === Lead is not a traditional pigment in water media, as zinc is superior for works on paper, as is calcium hydroxide (slaked lime) for frescos. Lead-based paints, when used on paper, often cause the work to become discolored after long periods; the paint's lead carbonate reacts with hydrogen sulfide in the air and with acids, which often come from fingerprints. == Substitutes == === Titanium === Paint manufacturers have replaced white lead with a less toxic substitute, titanium dioxide, which was first used in paints in the 19th century. Titanium dioxide is considered safe enough to use as a food coloring and in toothpaste, and is a common ingredient in sunscreen. Titanium white has far greater opacity and tinting strength than lead white, and it can easily overpower most other pigments if not mixed carefully. Titanium white has been criticized for leading to "chalkiness" in mixtures. === Zinc === Zinc white is less opaque and weaker in tinting strength than either titanium white or lead white. It is commonly used to lighten mixtures subtly while maintaining transparency. Although zinc white is the standard white in watercolors, its structural soundness in oils has been debated. Zinc white dries slowly and creates a relatively inflexible paint film. Critics of the pigment argue that its use leads to excessive
|
{"page_id": 1970496, "title": "Lead paint"}
|
There are some controversies surrounding the CNS. According to Perrin and Benassi, “after reanalyzing data from [Mayer and Frantz’s] article, collecting and analyzing our own data, and conducting a content analysis of CNS scale items, we conclude that the CNS does not measure an emotional connection to nature.” Perrin and Benassi provided a number of reasons for why they concluded the CNS does not measure emotions towards nature. For example, Mayer and Frantz use the word feel in eight out of the fourteen items on the CNS. Perrin and Benassi argue that the word feel, as it is used in the items of the CNS (“I feel that all inhabitants of the Earth, human and nonhuman, share a common life force”), does not mean an emotional state but a “cognitive assessment.” Based on the definition of the word feel as cognition, Mayer and Frantz were correct in using this word in the measurement of a person's beliefs; however, they cannot accurately call the scale an affect-measuring one, since the use of feel does not suggest an emotion. Nearly half of the items on the CNS involve no emotional content, while the rest contain the word feel, which in this context really refers to cognition rather than an emotional state of being, according to Perrin and Benassi. Using data from their own studies, Perrin and Benassi “suggest that the CNS may be measuring a cognitive identity dimension of one’s relationship with nature.” Perrin and Benassi suggest that the items of the CNS should be revised to clearly ask about a person's beliefs, not to be confused with a person's emotions, and that a new scale should be devised to measure one's emotions and their relation to the environment, since the CNS does not. Zhang, Howell, and Iyer designed a study to
|
{"page_id": 11446426, "title": "Connectedness to nature scale"}
|
location of the station was chosen based on three main criteria: Can be reached by boat or helicopter from the nearest surrounding polar stations; Landing areas are suitable with view to logistics; Diverse surrounding area allows the broadest possible range of research activities. == Construction == === Transport of material === Structural elements and parts of the infrastructure needed for the station were produced in 2001–2002 in the Czech Republic. During the preparations phase, some parts were assembled and tried out to reduce on-site construction time and eliminate potential problems. Transport of construction material started in November 2004. The material was sent first to Hamburg and then to the Punta Arenas port in Chile. The plan was to transport everything from there directly to James Ross Island, but the transport was fraught with problems: the first ship, Antarctic Dream, could not set out at all due to its poor technical condition and the second one, Porvenir I., had an accident close to the port on its way to the loading site. The third attempt, which used a Chilean military icebreaker, Oscar Almirante Viel, finally succeeded. === Construction works === The icebreaker approached the construction site in the morning of 24 February 2005. During the two days that followed, eight containers weighing together 130 tonnes (286,601 pounds) were unloaded from the ship. Construction works started as soon as the ship was unloaded. This first delivery of material was used to build almost the whole main building, which was then used to store materials for further construction. The construction took place towards the end of the Antarctic summer and lasted seven days. As the first delivery did not contain all the necessary construction materials and systems, the main building and surrounding containers were winterised and the construction activities continued the following year.
|
{"page_id": 49495570, "title": "Mendel Polar Station"}
|
vectors using the Euclidean norm. For solving the kinematical optimization problems, least-squares descent methods are convenient, e.g. a modified quasi-Newton method. This procedure supplies corrected kinematical parameters for the measured machine, which then, for example, can be used to update the system variables in the controller to adapt the used robot model to the real kinematics. == Results == The positioning accuracy of industrial robots varies by manufacturer, age, and robot type. Using kinematic calibration, these errors can be reduced to less than a millimeter in most cases. An example of this is shown in the figure to the right. The accuracy of 6-axis industrial robots can be improved by a factor of 10. The accuracy of parallel robots after calibration can be as good as a tenth of a millimeter. == Sample applications == In industry, there is a general trend towards the substitution of machine tools and special machines by industrial robots for certain manufacturing tasks whose accuracy demands can be fulfilled by calibrated robots. Through simulation and off-line programming, it is possible to easily accomplish complex programming tasks, such as robot machining. However, contrary to the teach programming method, good accuracy as well as repeatability is required. In the figure, a current example is shown: in-line measurement in automotive manufacturing, where the common "measurement tunnel" used for 100% inspection with many expensive sensors is partly replaced by industrial robots that carry only one sensor each. This way the total cost of a measurement cell can be reduced significantly. The station can also be re-used after a model change by simple re-programming without mechanical adaptations. Further examples of precision applications are robot-guided hemming in car body manufacturing, assembly of mobile phones, drilling, riveting and milling in the aerospace industry, and increasingly in medical applications. == See also == Hand eye calibration
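As a toy illustration of least-squares kinematic calibration, consider a planar 2-link arm whose nominal link lengths differ from the real ones; a hand-rolled Gauss-Newton step with a numerical Jacobian stands in for the modified quasi-Newton method mentioned above (the arm model and all numbers are illustrative):

```python
# Recover "true" link lengths of a planar 2R arm from measured end-effector
# positions, starting from the controller's nominal parameters.
import math

def fk(params, joints):
    l1, l2 = params
    t1, t2 = joints
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

true = (1.02, 0.77)      # real kinematics (unknown in practice)
nominal = [1.00, 0.80]   # robot model in the controller
poses = [(0.3 * k, 0.2 * k + 0.5) for k in range(8)]
measured = [fk(true, q) for q in poses]

def residuals(params):
    r = []
    for q, m in zip(poses, measured):
        x, y = fk(params, q)
        r += [x - m[0], y - m[1]]
    return r

params = nominal[:]
for _ in range(10):                      # Gauss-Newton iterations
    r = residuals(params)
    J, h = [], 1e-6                      # numerical Jacobian (one column/param)
    for i in range(len(params)):
        p2 = params[:]
        p2[i] += h
        J.append([(a - b) / h for a, b in zip(residuals(p2), r)])
    # solve the 2x2 normal equations J^T J d = -J^T r by Cramer's rule
    JtJ = [[sum(J[i][k] * J[j][k] for k in range(len(r))) for j in range(2)]
           for i in range(2)]
    Jtr = [sum(J[i][k] * r[k] for k in range(len(r))) for i in range(2)]
    det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
    d = [(-Jtr[0] * JtJ[1][1] + Jtr[1] * JtJ[0][1]) / det,
         (-Jtr[1] * JtJ[0][0] + Jtr[0] * JtJ[1][0]) / det]
    params = [p + di for p, di in zip(params, d)]

print([round(p, 4) for p in params])  # [1.02, 0.77]
```

Because the end-effector position is linear in the link lengths, a single Gauss-Newton step already recovers them here; real calibrations fit many more parameters (joint offsets, frame transforms) and genuinely need the iterative descent.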
|
{"page_id": 3922021, "title": "Robot calibration"}
|
to be interesting because it might not be profitable to promote items that customers seldom buy together (with the exception of the situation described in Section 5.8). For these reasons, we are interested in finding rules whose support is greater than some user-defined threshold. As will be shown in Section 5.2.1, support also has a desirable property that can be exploited for the efficient discovery of association rules. Confidence, on the other hand, measures the reliability of the inference made by a rule. For a given rule X −→ Y, the higher the confidence, the more likely it is for Y to be present in transactions that contain X. Confidence also provides an estimate of the conditional probability of Y given X. Association analysis results should be interpreted with caution. The inference made by an association rule does not necessarily imply causality. Instead, it can sometimes suggest a strong co-occurrence relationship between items in the antecedent and consequent of the rule. Causality, on the other hand, requires knowledge about which attributes in the data capture cause and effect, and typically involves relationships occurring over time (e.g., greenhouse gas emissions lead to global warming). See Section 5.7.1 for additional discussion. Formulation of the Association Rule Mining Problem The association rule mining problem can be formally stated as follows: Definition 5.1 (Association Rule Discovery). Given a set of transactions T, find all the rules having support ≥ minsup and confidence ≥ minconf, where minsup and minconf are the corresponding support and confidence thresholds. A brute-force approach for mining association rules is to compute the support and confidence for every possible rule. This approach is prohibitively expensive because there are exponentially many rules that can be extracted from a data set. More specifically, assuming that neither
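The quantities in Definition 5.1 can be computed by brute force on a small illustrative transaction set (the data below is a made-up market-basket example, not taken from the chapter):

```python
# Support and confidence of an association rule, computed directly from
# a tiny transaction database.
transactions = [
    {"bread", "milk"},
    {"bread", "diapers", "beer", "eggs"},
    {"milk", "diapers", "beer", "cola"},
    {"bread", "milk", "diapers", "beer"},
    {"bread", "milk", "diapers", "cola"},
]

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimate of P(consequent | antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

rule = (frozenset({"milk", "diapers"}), frozenset({"beer"}))
print(support(rule[0] | rule[1]))          # 0.4
print(round(confidence(*rule), 3))         # 0.667
```

A rule such as {milk, diapers} −→ {beer} passes thresholds minsup = 0.3, minconf = 0.6 here; the brute-force enumeration the text warns about would have to evaluate every antecedent/consequent split of every itemset this way.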
|
{"source": 1468, "title": "from dpo"}
|
the distance to the target distribution since that's the distribution we want to sample from. The Wasserstein distance (like any distance) satisfies the triangle inequality. We can then use this inequality to bound the distance to the target distribution: \begin{equation} W_2(p_t, q) \leq W_2(p_t, p_{\step}) + W_2(p_{\step}, q)\,. \end{equation} Of the two terms in the right-hand side, the first one is already bounded by the previous lemma. The second one is the distance between the biased limit and the target distribution. By bounding this last term we can achieve our goal, which was to bound the distance between p_t and the target distribution. As in the previous lemma, we denote by \ell and L a lower and upper bound on H's eigenvalues respectively. Let p_t denote the distribution of the iterates of ULA with step-size \step on a quadratic objective function, where the initial guess p_0 is a Gaussian distribution with covariance \sigma^2 I, \sigma \leq L(1 + \frac{\step}{2}L). Then, for any step-size \step \lt 2/L, the Wasserstein distance between p_t and the target distribution q can be bounded by a sum of two terms, of which the first one vanishes exponentially fast in t, while the second one is \mathcal{O}(\step) close to the target distribution. More precisely, at every iteration t we have \begin{equation} W_2(\probaone_t, \probatwo) \leq \underbrace{\rho^{t}\, W_2(p_0, p_{\step})\vphantom{\frac{1}{2}}}_{\text{exponential convergence}} + \underbrace{\frac{\step}{4}\sqrt{\tr(H)}}_{\text{stationary}} \,, \end{equation} with linear rate factor \rho \defas \max\{|1 - \step L|,|1 - \step \ell|\}. **Show proof** As in the proof of the previous lemma, we denote the eigenvalues of H by h_1, h_2, \ldots. We'll now bound the distance between p_{\step} and the target distribution q. Both distributions are Gaussians with mean \mu_q, so their Wasserstein distance is the Frobenius norm of the difference of their square root covariances \eqref{eq:wasserstein_simple}. 
Then we have:
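As an aside, the triangle-inequality step used above is easy to sanity-check numerically: for one-dimensional Gaussians, W_2(N(m_1, s_1^2), N(m_2, s_2^2)) = \sqrt{(m_1 - m_2)^2 + (s_1 - s_2)^2}, which is a Euclidean metric on the (mean, standard deviation) pairs. A small check (not part of the proof):

```python
# Numerical check of the W2 triangle inequality for 1-D Gaussians,
# parameterized as (mean, standard deviation).
import math, random

def w2(g1, g2):
    (m1, s1), (m2, s2) = g1, g2
    return math.hypot(m1 - m2, s1 - s2)

random.seed(0)
for _ in range(1000):
    p, mid, q = [(random.uniform(-5, 5), random.uniform(0.1, 3))
                 for _ in range(3)]
    assert w2(p, q) <= w2(p, mid) + w2(mid, q) + 1e-12
print("triangle inequality holds on 1000 random triples")
```

Since the closed form is just the Euclidean distance on (m, s), the inequality holds exactly; the 1e-12 slack only absorbs floating-point rounding.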
|
{"source": 1223, "title": "from dpo"}
|
make. for illustration, we discuss two customer use cases. we present our deployment results including qualitative customer feedback and a quantitative evaluation. finally, we summarize lessons learned, and discuss best practices for the successful adoption of fairness and explanation tools in practice. Hi, My Name Is Martha: Using Names to Measure and Mitigate Bias in Generative Dialogue Models Eric Michael Smith, Adina Williams Abstract: all ai models are susceptible to learning biases in data that they are trained on. for generative dialogue models, being trained on real human conversations containing unbalanced gender and race/ethnicity references can lead to models that display learned biases, which we define here broadly as any measurable differences in the distributions of words or semantic content of conversations based on demographic groups. we measure the strength of such biases by producing artificial conversations between two copies of a dialogue model, conditioning one conversational partner to state a name commonly associated with a certain gender and/or race/ethnicity. we find that larger capacity models tend to exhibit more gender bias and greater stereotyping of occupations by gender. we show that several methods of tuning these dialogue models, specifically name scrambling, controlled generation, and unlikelihood training, are effective in reducing bias in conversation, including on a downstream conversational task. name scrambling is also effective in lowering differences in token usage across conversations where partners have names associated with different genders or races/ethnicities. 
2021-09-05 End-to-End Self-Debiasing Framework for Robust Nlu Training Abbas Ghaddar, Philippe Langlais, Mehdi Rezagholizadeh, Ahmad Rashid Abstract: existing natural language understanding (nlu) models have been shown to incorporate dataset biases leading to strong performance on in-distribution (id) test sets but poor performance on out-of-distribution (ood) ones. we introduce a simple yet effective debiasing framework whereby the shallow representations of the main model are
|
{"source": 5788, "title": "from dpo"}
|
is required that provides two functions sign and verifysig. A process may get a signature of a message using the sign function with its process identifier. On the other hand, a process may verify a signature of an incoming message by calling the verifysig function, which returns a Boolean value. Algorithm overview. In a first round of Signed echo broadcast (Algorithm 2), the message is sent from the sender process to all processes. Instead of sending the ECHO messages in the second round to all processes as in the algorithm before, the ECHO messages are only sent back to the sender; however, a signature is added. If the sender has received a message m in more than (N + f)/2 ECHO messages, it sends a FINAL message to all processes. This message also contains the signatures of all ECHO messages. Any process that receives the FINAL message may then use the signatures to verify that the sender process has received enough ECHO messages to deliver the message. Correctness. The FINAL message with the signatures of the ECHO messages contains indirectly the same information as the ECHO messages sent to all processes in the Authenticated echo broadcast. If a correct process delivers a message m, it has received a FINAL message with more than (N + f)/2 valid signatures. The digital signature scheme guarantees that the sender process has received an ECHO message containing m from more than (N + f)/2 processes. With this observation, the proof of the Authenticated echo broadcast may be used to show the four properties. ## 3.2 Byzantine Reliable Broadcast To guarantee that a correct process delivers a message if and only if every other correct process delivers a message, a fifth property may be added to the properties of Byzantine consistent broadcast
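The FINAL-message check can be sketched as follows: a receiver verifies that a quorum of more than (N + f)/2 distinct, valid ECHO signatures is present. HMAC with per-process keys stands in for the digital signature scheme here, which is an illustrative shortcut (HMAC verification requires the verifier to hold every key, unlike real signatures):

```python
# Toy quorum check for the FINAL message of signed echo broadcast.
import hmac, hashlib

N, f = 4, 1
keys = {i: bytes([i + 1]) * 16 for i in range(N)}   # per-process secret keys

def sign(pid, message):
    return hmac.new(keys[pid], message, hashlib.sha256).digest()

def verifysig(pid, message, sig):
    return hmac.compare_digest(sign(pid, message), sig)

def deliverable(message, echo_sigs):
    """Check a FINAL message carrying {pid: signature} over ECHO of message."""
    valid = {pid for pid, sig in echo_sigs.items()
             if verifysig(pid, b"ECHO" + message, sig)}
    return len(valid) > (N + f) / 2

m = b"hello"
sigs = {pid: sign(pid, b"ECHO" + m) for pid in range(3)}
print(deliverable(m, sigs))                           # True: 3 > 2.5
print(deliverable(m, {p: sigs[p] for p in (0, 1)}))   # False: 2 <= 2.5
```

With N = 4 and f = 1, the quorum is more than 2.5, i.e. at least 3 signatures; counting only distinct valid signers also defends against a faulty sender replaying one process's signature.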
|
{"source": 7365, "title": "from dpo"}
|
as a potential way to address the grandfather paradox. This model introduces a "noise" factor to account for imperfections in time travel, proposing a framework that could help mitigate paradoxes. In contrast, Carlo Rovelli has argued that thermodynamics inhibits time travel to the past. == See also == Novikov self-consistency principle Grandfather paradox Causal loop Chronology protection conjecture Retrocausality == References ==
|
{"page_id": 30366747, "title": "Quantum mechanics of time travel"}
|
The Olin Palladium Award (formerly the Palladium Medal Award) was established by The Electrochemical Society (ECS) in 1950 and is presented every 2 years to recognize outstanding contributions to the fundamental understanding of all types of electrochemical and corrosion phenomena and processes. The award consists of a uniquely designed palladium medal bearing the medalist’s name. The design of the medal depicts Pallas Athene employing a shield, on which the seal of the Society is inscribed, to protect the metals represented by ancient symbols from the elements, earth, air, fire, and water. Recipients are also presented with a wall plaque, cash prize, Electrochemical Society Life membership, and a free meeting registration. == History == The Palladium Medal Award was initially funded by the royalties derived from the sales of the Corrosion Handbook and a gift of palladium metal from the International Nickel Company. The original purpose of the medal was to encourage research and achievement in the study of the corrosion of metals and its control, or in theoretical electrochemistry upon which the understanding of corrosion is based. In 1971 the scope was modified, and in 1977 the name was changed to The Olin Palladium Award after a generous endowment from the Olin Company. == Recipients == As listed by ECS: == See also == List of chemistry awards == References == == External links == Media related to Olin Palladium Award at Wikimedia Commons Olin Palladium Award
|
{"page_id": 47366037, "title": "Olin Palladium Award"}
|
the main game, Deus Ex: Human Revolution – The Missing Link, was released digitally on October 18 for Steam and Xbox Live, and October 19 for PlayStation Network (PSN). It likewise debuted in Japan for the console versions: it was released on PSN on March 7, 2012, and March 20 for Xbox Live. Set during a transitional event in-game, the plot sees Adam captured and stripped of his augmentations, having to escape and navigate a cargo ship and then a base operated by Belltower. Plans for DLC were first announced in August 2010, with it being planned as an extension of the game. The developers initially did not plan for DLC, with it beginning development later in the game's production when the visual theme was finalized. Despite using only core team members, development was slow due to the main focus being on Human Revolution. For The Missing Link, the team attempted to improve the lighting, gameplay mechanics, player freedom, and character animation. The DLC was developed entirely at Eidos-Montréal, and this gave the team the opportunity of developing a boss fight with multiple solutions, something they regretted not being able to do with the main game. === Director's Cut === A director's cut of Human Revolution, Deus Ex: Human Revolution – Director's Cut, was announced in April 2013. It was co-produced by Eidos-Montréal, Australian developer Straight Right — who had previously handled the Wii U port of Mass Effect 3 — and Canada-based Snowed In Studios. Originally announced as a Wii U exclusive, it was later announced that it would also be released on its original platforms. One of the major changes was the boss arenas: while they could not create non-lethal options to take down bosses, the team created alternative strategies for players who took a stealthy and otherwise
|
{"page_id": 11283085, "title": "Deus Ex: Human Revolution"}
|
strategic optimists, and aschematic (neither strategy is used). Norem critiqued the OPPQ due to its reliance on theoretical assumptions, such as the notion that optimism and pessimism are opposites, rather than empirical evidence. The OPPQ includes items such as “I often think about what it will be like if I do poorly in an academic situation,’’ and “I often think about what it will be like if I do very well in an academic situation,” measuring optimism and pessimism separately. Norem cites subsequent research that concludes that those who often reflect on negative outcomes tend to do the same for positive outcomes as well. In line with current literature, Norem suggests that defensive pessimists engage in a "thinking-through" process that considers all outcomes. These advances prompted the creation of the DPQ, focusing on the thinking-through process by measuring reflexivity, as well as pessimism. == Strategy effectiveness == Though defensive pessimists are less satisfied with their performances and rate themselves higher in "need for improvement," they do not actually perform worse than people with a more optimistic strategy. Norem and Cantor (1986) investigated whether encouraging defensive pessimists, and thereby interfering with their typical negative thinking, would result in worse performances. Participants in the study were in either encouragement or non-encouragement scenarios as they prepared to complete anagram and puzzle tasks. In the encouragement condition, the defensive pessimists were told that, based on their GPA, they should expect to do well. Defensive pessimists performed worse when encouraged than the defensive pessimists whose strategy was not manipulated. Similarly multiple studies have found that inducing a positive mood through listening to music or watching film clips resulted in lowered academic performance. 
This effectiveness is not only reduced through interference: naturally occurring positive mood has also been directly correlated with significantly worse performance for those utilising defensive
|
{"page_id": 39486735, "title": "Defensive pessimism"}
|
== References == "What Is Windows Communication Foundation". MSDN. Microsoft. 10 August 2023. "Windows Communication Foundation Architecture". MSDN. Microsoft. 15 September 2021. == Further reading == Craig McMurtry, Marc Mercuri, and Nigel Watling: Microsoft Windows Communication Foundation: Hands-On, SAMS Publishing, May 26, 2004, ISBN 0-672-32877-1 Steve Resnick, Richard Crane, Chris Bowen: Essential Windows Communication Foundation (WCF): For .NET Framework 3.5, Addison-Wesley, February 11, 2008, ISBN 0-321-44006-4 Craig McMurtry, Marc Mercuri, Nigel Watling, Matt Winkler: Windows Communication Foundation Unleashed (WCF), Sams Publishing, March 6, 2007, ISBN 0-672-32948-4 Juval Löwy: Programming WCF Service, O'Reilly Media, Inc., February 20, 2007, ISBN 0-596-52699-7 Pablo Cibraro, Kurt Claeys, Fabio Cozzolino, Johann Grabner: Professional WCF 4: Windows Communication Foundation with .NET 4, Wrox, June 15, 2010, ISBN 0-470-56314-1 Andrew Zhu: Microsoft Windows Workflow Foundation 4.0 Cookbook:Chapter 3, Packt Publishing, September 2010, ISBN 978-1-84968-078-3 == External links == Windows Communication Foundation, MSDN Windows Communication Foundation portal. MSDN Library: Windows Communication Foundation WCF Security Guide Archived 2011-03-14 at the Wayback Machine, Microsoft Patterns & Practices - Improving Web Services Security: Scenarios and Implementation Guidance for WCF. Released Aug 1, 2008. Understanding WCF Services in Silverlight 2 Archived 2011-03-12 at the Wayback Machine - In depth explanation of WCF services for Silverlight clients. David Chappell: "Introduction to WCF" and "Dealing with Diversity", two papers covering WCF. November 2007. Getting Started with WCF RIA Services - part 1 of the series articles on WCF RIA Services
|
{"page_id": 2429012, "title": "Windows Communication Foundation"}
|
Nita Ahuja is a surgeon and the Chair of the Department of Surgery at Yale School of Medicine and Surgeon-in-Chief of Surgery at Yale New Haven Hospital. She is the first woman ever to serve as Chair of Surgery at Yale in its more than 200-year history. Before taking this position she was the first woman ever to be the Chief of Surgical Oncology at Johns Hopkins Hospital, Baltimore, USA. Ahuja researches in the field of epigenetics and is a passionate advocate for clinician-scientists. She also served as the director of the sarcoma and peritoneal surface malignancy program. She is a surgeon-scientist and her research has been cited more than 11,000 times in the scientific literature. In February 2025, Dr. Ahuja was announced as the next Dean of the University of Wisconsin School of Medicine and Public Health, a position she will assume on May 15, 2025. == Early life and education == Born in India, she migrated to the United States with her parents when she was 8 years old. Her journey into science started with a post as a laboratory technician in the Department of Immunology, National Institutes of Health (NIH), Bethesda. She was named one of the "Outstanding College Students of America" and received the "Alpha Omega Alpha original research award" for her research work. She joined the faculty of Johns Hopkins in 2003 after studying medicine at Duke University and surgery at Johns Hopkins. == Career == Ahuja runs a research laboratory focused on understanding epigenetic dysregulation in gastrointestinal cancers such as colorectal and pancreatic cancers and translating that information to develop biomarkers and epigenetic therapeutics. She has led over twenty national and international clinical trials testing new therapies in gastrointestinal and breast cancers based on concepts identified in her laboratory. Her work initially as a postdoctoral research fellow twenty years
|
{"page_id": 53846617, "title": "Nita Ahuja"}
|
Lambda Pegasi (λ Peg, λ Pegasi) is a fourth-magnitude star in the constellation Pegasus. λ Pegasi is a yellow giant with stellar classification G8II-III. With a mass of 1.5 M☉ and radius that is 28.5 R☉, the star boasts a bolometric luminosity that is roughly 390 L☉. Its apparent magnitude was calibrated in 1983 at 3.96, yielding an absolute magnitude of -1.45. Parallax calculations place the star at a distance of roughly 112 parsecs from Earth, or 365 ± 10 light years away, about three times the distance of its line-of-sight double μ Pegasi. In the constellation, Lambda and Mu lie to the southwest of Beta Pegasi, the nearest bright star. == References ==
|
{"page_id": 36848987, "title": "Lambda Pegasi"}
|
of American Dad!) On November 4, 2013, it was announced that Barker had departed American Dad! during its run as well, after 10 seasons of serving as producer and co-showrunner over the series. During the 2007–08 Writers Guild of America strike, official production of the show halted for most of December 2007 and for various periods afterward. Fox continued producing episodes without MacFarlane's final approval, which he termed "a colossal dick move" in an interview with Variety. Though MacFarlane refused to work on the show, his contract under Fox required him to contribute to any episodes it would subsequently produce. Production officially resumed after the end of the strike, with regularly airing episodes recommencing on February 17, 2008. According to MacFarlane, in 2009, it cost about $2 million to make an episode of Family Guy. During his September 2017 AMA on Reddit, MacFarlane revealed that he had not written for the show since 2010, choosing instead to focus on production and voice acting. On May 12, 2023, it was announced that the showrunners of Family Guy, including Seth MacFarlane, would temporarily leave the show as a result of the 2023 Writers Guild of America Strike. They returned to the show on September 27, 2023, once the strike was declared to be over. === Voice cast === Main cast Seth MacFarlane voices three of the show's main characters: Peter Griffin, Brian Griffin, and Stewie Griffin. Since MacFarlane had a strong vision for these characters, he chose to voice them himself, believing it would be easier than for someone else to attempt it. MacFarlane drew inspiration for the voice of Peter from a security guard he overheard talking while attending the Rhode Island School of Design. Stewie's voice was based on the voice of English actor Rex Harrison, especially his performance in
|
{"page_id": 187586, "title": "Family Guy"}
|
because a payoff of 11, from the outcome (9, 11), is greater than "Terminator" with a payoff of 6 at (6, 6). Player 1, at the initial node, would select "Terminator" because it offers a higher payoff of 11 at (11, 9) than "Joker", which has a payoff of 9 at (9, 11). To identify a subgame perfect equilibrium, one needs to identify a route that selects an optimal subgame at each information set. In this example, Player 1 chooses "Terminator" and Player 2 also chooses "Terminator." Then they both choose "go to movie." The subgame perfect equilibrium leads to a payoff of (11, 9). === Limitations === Backward induction can be applied to only limited classes of games. The procedure is well-defined for any game of perfect information with no ties of utility. It is also well-defined and meaningful for games of perfect information with ties. However, in such cases it leads to more than one perfect strategy. The procedure can be applied to some games with nontrivial information sets, but it is not applicable in general. It is best suited to solve games with perfect information. If all players are not aware of the other players' actions and payoffs at each decision node, then backward induction is not so easily applied. === Ultimatum game === A second example demonstrates that even in games that formally allow for backward induction in theory, it may not accurately predict empirical game play in practice. This example of an asymmetric game consists of two players: Player 1 proposes to split a dollar with Player 2, which Player 2 then accepts or rejects. This is called the ultimatum game. Player 1 acts first by splitting the dollar however they see fit. Next, Player 2 either accepts the portion they have been offered by Player 1
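The procedure described above can be sketched generically; this is an illustrative implementation over a nested-tuple game tree (the tree representation and the ultimatum-style payoffs are made up for the example, not taken from the text):

```python
# Backward induction on a finite perfect-information game tree.
# A node is either a payoff tuple (a leaf) or (player, {action: subtree}).

def backward_induction(node):
    """Return (payoffs, plan): the equilibrium payoffs and the list of
    (player, action) choices along the path, assuming no payoff ties."""
    if isinstance(node, tuple) and all(isinstance(x, (int, float)) for x in node):
        return node, []                      # leaf: payoffs, empty plan
    player, actions = node
    best = None
    for action, subtree in actions.items():  # solve each subgame first
        payoffs, plan = backward_induction(subtree)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [(player, action)] + plan)
    return best

# Ultimatum-style toy tree: player 0 offers a split, player 1 accepts/rejects.
tree = (0, {
    "fair":   (1, {"accept": (5, 5), "reject": (0, 0)}),
    "greedy": (1, {"accept": (9, 1), "reject": (0, 0)}),
})
payoffs, plan = backward_induction(tree)
```

With these payoffs the subgame-perfect outcome is the accepted greedy offer, (9, 1): Player 2 accepts any positive share, so Player 1 keeps as much as possible, which is exactly the prediction the empirical ultimatum-game results contradict.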
|
{"page_id": 2060912, "title": "Backward induction"}
|
[Figure: tail of the x87 save-area layout, showing the Data Offset, Instruction Offset and Opcode, x87 Tag Word (FTW), x87 Status Word (FSW), and x87 Control Word (FCW) fields at offsets +00h through +0Ch] [AMD Public Use] SSE, MMX, and x87 Programming 354 24593—Rev. 3.42—March 2024 AMD64 Technology > 11.4.4.3 FXSAVE and FXRSTOR Instructions The FXSAVE and FXRSTOR instructions save and restore the entire 128-bit media, 64-bit media, and x87 state. These instructions usually execute faster than FSAVE/FNSAVE and FRSTOR because they do not normally save and restore the x87 exception pointers (last-instruction pointer, last data-operand pointer, and last opcode). The only case in which they do save the exception pointers is the relatively rare case in which the exception-summary bit in the x87 status word (FSW.ES) is set to 1, indicating that an unmasked exception has occurred. The FXSAVE and FXRSTOR memory format contains fields for storing these values. Unlike FSAVE and FNSAVE, the FXSAVE instruction does not alter the x87 tag word. Therefore, the contents of the shared 64-bit MMX and 80-bit FPR registers can remain valid after an FXSAVE instruction (or any other value the tag bits indicated before the save). Also, FXSAVE (like FNSAVE) does not check for pending unmasked x87 floating-point exceptions. Figure 11-9 on page 360 shows the memory format of the media x87 state in long mode. If a 32-bit operand size is used in 64-bit mode, the memory format is the same, except that RIP and RDS are stored as sel:offset pointers, as shown in Figure 11-10 on page 361. For more information on the FXSAVE and FXRSTOR instructions, see individual instruction listings in “64-Bit Media Instruction Reference” of Volume 5. # 11.5 XSAVE/XRSTOR Instructions The XSAVE, XSAVEOPT, XRSTOR, XGETBV, and XSETBV instructions and associated data structures extend the FXSAVE/FXRSTOR memory image used
|
{"source": 76, "title": "from dpo"}
|
F Interconnection Networks Protection and User Access to the Network A challenge is to ensure safe communication across a network without invoking the operating system in the common case. The Cray Research T3D supercomputer offers an interesting case study. Like the more recent Cray X1E, the T3D supports a global address space, so loads and stores can access memory across the network. Protection is ensured because each access is checked by the TLB. To support transfer of larger objects, a block transfer engine (BLT) was added to the hardware. Protection of access requires invoking the operating system before using the BLT to check the range of accesses to be sure there will be no protection violations. Figure F.40 compares the bandwidth delivered as the size of the object varies for reads and writes. For very large reads (e.g., 512 KB), the BLT achieves the highest performance: 140 MB/sec. But simple loads get higher performance for 8 KB or less. For the write case, both achieve a peak of 90 MB/sec, presumably because of the limitations of the memory bus. But, for writes, the BLT can only match the performance of simple stores for transfers of 2 MB; anything smaller and it's faster to send stores. Clearly, a BLT that can avoid invoking the operating system in the common case would be more useful. # Efficient Interface to the Memory Hierarchy versus the Network Traditional evaluations of processor performance, such as SPECint and SPECfp, encourage integration of the memory hierarchy with the processor as the efficiency of the memory hierarchy translates directly into processor performance. Hence, [Figure F.40: delivered bandwidth (0–160 MB/sec) versus transfer size (128 bytes to 8 MB) for CPU writes, BLT reads, and BLT writes]
|
{"source": 2300, "title": "from dpo"}
|
and d? When is this algorithm preferable to merge sort? > •• R14.23 A stable sort does not change the order of elements with the same value. This is a desirable feature in many applications. Consider a sequence of e-mail messages. If you sort by date and then by sender, you’d like the second sort to preserve the relative order of the first, so that you can see all messages from the same sender in date order. Is selection sort stable? Insertion sort? Why or why not? > •• R14.24 Give an O(n) algorithm to sort an array of n bytes (numbers between –128 and 127). Hint: Use an array of counters. > •• R14.25 You are given a sequence of arrays of words, representing the pages of a book. Your task is to build an index (a sorted array of words), each element of which has an array of sorted numbers representing the pages on which the word appears. Describe an algorithm for building the index and give its big-Oh running time in terms of the total number of words. > •• R14.26 Given two arrays of n integers each, describe an O(n log(n)) algorithm for determining whether they have an element in common. > ••• R14.27 Given an array of n integers and a value v, describe an O(n log(n)) algorithm to find whether there are two values x and y in the array with sum v. > •• R14.28 Given two arrays of n integers each, describe an O(n log(n)) algorithm for finding all elements that they have in common. > •• R14.29 Suppose we modify the quicksort algorithm from Special Topic 14.3, selecting the middle element instead of the first one as pivot. What is the running time on an array that is
|
{"source": 4220, "title": "from dpo"}
|
restate as Second Principle. Let G and H be Lie groups with G simply connected, and let 𝔤 and 𝔥 be their Lie algebras. A linear map α: 𝔤 → 𝔥 is the differential of a map A: G → H of Lie groups if and only if α is a map of Lie algebras. PROOF. To see this, consider the product G × H. Its Lie algebra is just 𝔤 ⊕ 𝔥. Let 𝔧 ⊂ 𝔤 ⊕ 𝔥 be the graph of the map α. Then the hypothesis that α is a map of Lie algebras is equivalent to the statement that 𝔧 is a Lie subalgebra of 𝔤 ⊕ 𝔥; and given this, by the proposition there exists an immersed Lie subgroup J ⊂ G × H with tangent space TₑJ = 𝔧. Look now at the map π: J → G given by projection on the first factor. By hypothesis, the differential of this map dπₑ: 𝔧 → 𝔤 is an isomorphism, so that the map J → G is an isogeny; but since G is simply connected it follows that π is an isomorphism. The projection η: G ≅ J → H on the second factor is then a Lie group map whose differential at the identity is α. □ Exercise 8.43*. If 𝔤 → 𝔤′ is a homomorphism of Lie algebras with kernel 𝔥, show that the kernel H of the corresponding map of simply connected Lie groups G → G′ is a closed subgroup of G with Lie algebra 𝔥. This does not extend to non-normal subgroups, i.e., to the situation when 𝔥 is not the kernel of a homomorphism: give an example of an immersed subgroup of a simply connected Lie group G whose image in G is not closed. Exercise 8.44.
|
{"source": 6137, "title": "from dpo"}
|
An evolutionary landscape is a metaphor or a construct used to think about and visualize the processes of evolution (e.g. natural selection and genetic drift) acting on a biological entity (e.g. a gene, protein, population, or species). This entity can be viewed as searching or moving through a search space. For example, the search space of a gene would be all possible nucleotide sequences. The search space is only part of an evolutionary landscape. The final component is the "y-axis", which is usually fitness. Each value along the search space can result in a high or low fitness for the entity. If small movements through search space cause changes in fitness that are relatively small, then the landscape is considered smooth. Smooth landscapes happen when most fixed mutations have little to no effect on fitness, which is what one would expect with the neutral theory of molecular evolution. In contrast, if small movements result in large changes in fitness, then the landscape is said to be rugged. In either case, movement tends to be toward areas of higher fitness, though usually not the global optima. What exactly constitutes an "evolutionary landscape" is frequently confused in the literature; the term is often used interchangeably with "adaptive landscape" and "fitness landscape", although some authors have different definitions of adaptive and fitness landscapes. Additionally, there is a large disagreement whether the concept of an evolutionary landscape should be used as a visual metaphor disconnected from the underlying math, a tool for evaluating models of evolution, or a model in and of itself used to generate hypotheses and predictions. == History == === Pre-Wright === According to McCoy (1979), the first evolutionary landscape was presented by Armand Janet of Toulon, France, in 1895. In Janet's evolutionary landscape, a species is represented as a point
|
{"page_id": 5848903, "title": "Evolutionary landscape"}
|
make it fitter. === Recombination/segregation === Combinations of alleles that have evolved to work well together may not work when recombined with a different suite of coevolved alleles, leading to outbreeding depression. Segregation load occurs in the presence of overdominance, i.e. when heterozygotes are more fit than either homozygote. In such a case, the heterozygous genotype gets broken down by Mendelian segregation, resulting in the production of homozygous offspring. Therefore, there is segregation load as not all individuals have the theoretical optimum genotype. Recombination load arises through unfavorable combinations across multiple loci that appear when favorable linkage disequilibria are broken down. Recombination load can also arise by combining deleterious alleles subject to synergistic epistasis, i.e. whose damage in combination is greater than that predicted from considering them in isolation. Evidence was reviewed indicating that meiosis reduces recombination load, thus providing a selective advantage of sexual reproduction. === Migration === Migration load is hypothesized to occur when maladapted non-native organisms enter a new environment. On one hand, beneficial genes from migrants can increase the fitness of local populations. On the other hand, migration may reduce the fitness of local populations by introducing maladaptive alleles. This is hypothesized to occur when the migration rate is "much greater" than the selection coefficient. Migration load may occur by reducing the fitness of local organisms, or through natural selection imposed on the newcomers, such as by being eliminated by local predators. Most studies have only found evidence for this theory in the form of selection against immigrant populations, however, one study found evidence for increased mutational burden in recipient populations, as well. == References ==
|
{"page_id": 1669966, "title": "Genetic load"}
|
the penetration of conventional X-rays. It has been found and confirmed in almost all studies, that critical scattering and correlation lengths are strongly affected by this effect. Combination of neutron and HEX-ray investigations on the same sample, such as contrast variations due to the different scattering lengths. Residual stress analysis in the bulk with unique spatial resolution in centimeter thick samples; in-situ under realistic load conditions. In-situ studies of thermo-mechanical deformation processes such as forging, rolling, and extrusion of metals. Real time texture measurements in the bulk during a deformation, phase transition or annealing, such as in metal processing. Structures and textures of geological samples which may contain heavy elements and are thick. High resolution triple crystal diffraction for the investigation of single crystals with all the advantages of high penetration and studies from the bulk. Compton spectroscopy for the investigation of momentum distribution of the valence electron shells. Imaging and tomography with high energies. Dedicated sources can be strong enough to obtain 3D tomograms in a few seconds. Combination of imaging and diffraction is possible due to simple geometries. For example, tomography combined with residual stress measurement or structural analysis. == See also == == Notes == == References == == Further reading == Liss, Klaus-Dieter; Bartels, Arno; Schreyer, Andreas; Clemens, Helmut (2003). "High-Energy X-Rays: A tool for Advanced Bulk Investigations in Materials Science and Physics". Textures and Microstructures. 35 (3–4): 219–252. doi:10.1080/07303300310001634952. Benmore, C. J. (2012). "A Review of High-Energy X-Ray Diffraction from Glasses and Liquids". ISRN Materials Science. 2012: 1–19. doi:10.5402/2012/852905. Eberhard Haug; Werner Nakel (2004). The elementary process of Bremsstrahlung. World Scientific Lecture Notes in Physics. Vol. 73. River Edge, NJ: World Scientific. ISBN 978-981-238-578-9. 
== External links == Liss, Klaus-Dieter; et al. (2006). "Recrystallization and phase transitions in a γ-Ti Al-based alloy as observed by
|
{"page_id": 4055891, "title": "High-energy X-rays"}
|
Varanasi's ghats the river water already contains 120 times as much, 60,000 fecal coliform bacteria per 100 ml. After the cremation of the deceased at Varanasi's ghats, the bones and ashes are immersed into the Ganges. However, in the past thousands of uncremated bodies were thrown into the Ganges during cholera epidemics, spreading the disease. Even today, holy men, pregnant women, people with leprosy or chicken pox, people who have been bitten by snakes, people who have committed suicide, the poor, and children under 5 are not cremated at the ghats but are left to float free, to decompose in the waters. In addition, those who cannot afford the large amount of wood needed to incinerate the entire body, leave behind many half-burned body parts. After passing through Varanasi, and receiving 32 streams of raw sewage from the city, the concentration of fecal coliforms in the river's waters rises from 60,000 to 1.5 million, with observed peak values of 100 million per 100 ml. Drinking and bathing in its waters therefore carries a high risk of infection. Between 1985 and 2000, Rs. 10 billion, around US$226 million, or less than 4 cents per person per year, were spent on the Ganga Action Plan, an environmental initiative that was "the largest single attempt to clean up a polluted river anywhere in the world". The Ganga Action Plan has been described variously as a "failure" and a "major failure". According to one study, The Ganga Action Plan, which was taken on priority and with much enthusiasm, was delayed for two years. The expenditure was almost doubled. But the result was not very appreciable. Much expenditure was done on political propaganda. The concerning governments and the related agencies were not very prompt to make it a success. The public of the areas was
|
{"page_id": 12448, "title": "Ganges"}
|
not very fast THEN brake pressure IS slightly decreased. In this example, the two input variables are "brake temperature" and "speed" that have values defined as fuzzy sets. The output variable, "brake pressure" is also defined by a fuzzy set that can have values like "static" or "slightly increased" or "slightly decreased" etc. === Fuzzy control in detail === Fuzzy controllers are very simple conceptually. They consist of an input stage, a processing stage, and an output stage. The input stage maps sensor or other inputs, such as switches, thumbwheels, and so on, to the appropriate membership functions and truth values. The processing stage invokes each appropriate rule and generates a result for each, then combines the results of the rules. Finally, the output stage converts the combined result back into a specific control output value. The most common shape of membership functions is triangular, although trapezoidal and bell curves are also used, but the shape is generally less important than the number of curves and their placement. From three to seven curves are generally appropriate to cover the required range of an input value, or the "universe of discourse" in fuzzy jargon. As discussed earlier, the processing stage is based on a collection of logic rules in the form of IF-THEN statements, where the IF part is called the "antecedent" and the THEN part is called the "consequent". Typical fuzzy control systems have dozens of rules. Consider a rule for a thermostat: IF (temperature is "cold") THEN turn (heater is "high") This rule uses the truth value of the "temperature" input, which is some truth value of "cold", to generate a result in the fuzzy set for the "heater" output, which is some value of "high". This result is used with the results of other rules to finally generate
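The three stages described above can be sketched as a toy controller; the membership-function breakpoints, rule set, and output values below are made-up illustration values, not from the article:

```python
# Toy fuzzy controller: input stage (fuzzification with triangular
# membership functions), processing stage (two IF-THEN rules), and
# output stage (truth-weighted average of the rule consequents).

def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def heater_output(temp: float) -> float:
    # Input stage: map the temperature reading to truth values.
    cold = tri(temp, -10.0, 0.0, 15.0)
    warm = tri(temp, 10.0, 20.0, 30.0)
    # Processing stage: IF temperature IS cold THEN heater IS high (1.0);
    #                   IF temperature IS warm THEN heater IS low  (0.2).
    rules = [(cold, 1.0), (warm, 0.2)]
    # Output stage: combine the fired rules into one crisp output.
    total = sum(truth for truth, _ in rules)
    return sum(truth * out for truth, out in rules) / total if total else 0.0

# A cold reading drives the heater harder than a warm one:
assert heater_output(2.0) > heater_output(25.0)
```

Real controllers defuzzify over output membership curves (e.g., by centroid) rather than averaging singleton values, but the three-stage flow is the same.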
|
{"page_id": 48660, "title": "Fuzzy control system"}
|
by the 413-series, which is a four-cylinder version of the 416-series (long wheelbase (2,900 mm (114 in)) model). The 421 is the successor of the 411-series and has a 2,250 mm (89 in) wheelbase. It is powered by a 40 PS (29 kW; 39 hp) 2.2-litre passenger car Diesel engine.: 107 The 100,000th Unimog (a 421) was built in 1966 in Gaggenau.: 8 Argentina was the first country to manufacture the Unimog outside Germany. The first Unimog produced in the Mercedes-Benz Argentina S.A. factory in Gonzalez Catán, on the outskirts of Buenos Aires, rolled off the assembly line on 1 September 1968.: 232 The two models made in Argentina are the 426 and 431, versions of the 416 and 431, respectively, produced under licence.: 122 === 1970s === ==== 1972 – MB Trac ==== Despite originally being designed as an agricultural vehicle, the Unimog had more success as a multi-purpose tool carrier. To actually serve the agricultural market, Daimler-Benz designed a completely new agricultural tractor in 1972, the MB Trac. It is a body-on-frame tractor with four big wheels of the same size, all-wheel drive, a slim bonnet, and an angular driver's cab. In contrast to conventional tractors the cab is situated between the axles, similar to comparable four-wheel-drive tractors. There is no articulation between the front and rear sections; instead, the MB Trac has conventional steering. A wide range of MB Trac tractors were offered, ranging from the entry model MB-trac 65 to the top model MB Trac 1800 intercooler. Daimler-Benz later merged the MB-trac with the agricultural machinery activities of Deutz AG. The manufacturing of the MB Trac series ceased in 1991. ==== 1974 – Heavy series ==== In 1974, Mercedes-Benz presented the new Unimog U 120. It was the first model of the "heavy duty"
|
{"page_id": 81636, "title": "Unimog"}
|
In mathematics, given partial orders $\preceq$ and $\sqsubseteq$ on sets $A$ and $B$, respectively, the product order (also called the coordinatewise order or componentwise order) is a partial order $\leq$ on the Cartesian product $A \times B$. Given two pairs $(a_1, b_1)$ and $(a_2, b_2)$ in $A \times B$, declare that $(a_1, b_1) \leq (a_2, b_2)$ if $a_1 \preceq a_2$ and $b_1 \sqsubseteq b_2$. Another possible order on $A \times B$ is the lexicographical order. It is a total order if both $A$ and $B$ are totally ordered. However, the product order of two total orders is not in general total; for example, the pairs $(0, 1)$ and $(1, 0)$ are incomparable in the product order of the order $0 < 1$ with itself. The lexicographic combination of two total orders is a linear extension of their product order, and thus the product order is a subrelation of the lexicographic order. The Cartesian product with the product order is the categorical product in the category of partially ordered sets with monotone functions. The product order generalizes to arbitrary (possibly infinitary) Cartesian products. Suppose $A \neq \varnothing$ is a set and for every $a \in A$, $(I_a, \leq)$ is a preordered set. Then the product preorder on
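The incomparability example and the subrelation claim can be checked mechanically. A small sketch on pairs of $\{0, 1\}$, using Python's built-in lexicographic comparison of tuples:

```python
from itertools import product

def product_le(p, q):
    """Componentwise (product) order on pairs: p <= q iff both coordinates compare."""
    return p[0] <= q[0] and p[1] <= q[1]

def lex_le(p, q):
    """Lexicographic order on pairs (Python compares tuples lexicographically)."""
    return p <= q

pairs = list(product([0, 1], repeat=2))

# (0, 1) and (1, 0) are incomparable in the product order...
incomparable = not product_le((0, 1), (1, 0)) and not product_le((1, 0), (0, 1))

# ...yet every product-order relation also holds lexicographically,
# i.e. the product order is a subrelation of the lexicographic order.
subrelation = all(lex_le(p, q) for p in pairs for q in pairs if product_le(p, q))
```

Both `incomparable` and `subrelation` come out true, matching the text: the lexicographic order linearly extends the product order on $\{0,1\} \times \{0,1\}$.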
|
{"page_id": 1956306, "title": "Product order"}
|
Involution is the shrinking or return of an organ to a former size. At a cellular level, involution is characterized by the process of proteolysis of the basement membrane (basal lamina), leading to epithelial regression and apoptosis, with accompanying stromal fibrosis. The consequent reduction in cell number and reorganization of stromal tissue leads to the reduction in the size of the organ. == Examples == === Thymus === The thymus continues to grow between birth and sexual maturity and then begins to atrophy, a process directed by the high levels of circulating sex hormones. Proportional to thymic size, thymic activity (T cell output) is most active before maturity. Upon atrophy, the size and activity are dramatically reduced, and the organ is primarily replaced with fat. The atrophy is due to the increased circulating level of sex hormones, and chemical or physical castration of an adult results in the thymus increasing in size and activity. === Uterus === Involution is the process by which the uterus is transformed from pregnant to non-pregnant state. This period is characterized by the restoration of ovarian function in order to prepare the body for a new pregnancy. It is a physiological process occurring after parturition; the hypertrophy of the uterus has to be undone since it does not need to house the fetus anymore. This process is primarily due to the hormone oxytocin. The completion of this period is defined as when the diameter of the uterus returns to the size it is normally during a woman's menstrual cycle. === Mammary gland === During pregnancy until after birth, mammary glands grow steadily to a size required for optimal milk production. At the end of nursing, the number of cells in the mammary gland becomes reduced until approximately the same number is reached as before the
|
{"page_id": 19157294, "title": "Involution (medicine)"}
|
Boshirov, as suspected of the Skripals' poisoning, and alleged that they were active officers in Russian military intelligence. Later, the investigative website Bellingcat stated that it had positively identified Ruslan Boshirov as being the highly decorated GRU Colonel Anatoliy Chepiga, that Alexander Petrov was Alexander Mishkin, also of the GRU, and that a third GRU officer present in the UK at the time was identified as Denis Vyacheslavovich Sergeev, believed to hold the rank of major general in the GRU. The pattern of his communications while in the UK indicates that he liaised with superior officers in Moscow. The attempted assassination and subsequent agent exposures were an embarrassment for Putin and for Russia's spying organisation. It was allegedly organised by the secret Unit 29155 of the Russian GRU, under the command of Major General Andrei V. Averyanov. On 27 November 2019, the Organisation for the Prohibition of Chemical Weapons (OPCW) added Novichok, the Soviet-era nerve agent used in the attack, to its list of banned substances.

== Chronology of events ==

At 14:40 GMT on 3 March 2018, Yulia Skripal, the 33-year-old daughter of Sergei Skripal, a 66-year-old resident of Salisbury, flew into Heathrow Airport from Sheremetyevo International Airport in Moscow, Russia. At 09:15 on 4 March Sergei Skripal's burgundy 2009 BMW 320d was seen in the area of London Road, Churchill Way North and Wilton Road at Salisbury. At 13:30 Skripal's car was seen on Devizes Road on the way towards the town centre. At 13:40 the Skripals arrived in the upper level car park at the Maltings, Salisbury and then went to the Bishop's Mill pub in the town centre. At 14:20 they dined at Zizzi on Castle Street, leaving at 15:35. At 16:15 an emergency services call reported that a man and woman, later identified as Sergei and
|
{"page_id": 56823699, "title": "Poisoning of Sergei and Yulia Skripal"}
|
The barrel cortex is a region of the somatosensory cortex that is identifiable in some species of rodents and species of at least two other orders and contains the barrel field. The 'barrels' of the barrel field are regions within cortical layer IV that are visibly darker when stained to reveal the presence of cytochrome c oxidase and are separated from each other by lighter areas called septa. These dark-staining regions are a major target for somatosensory inputs from the thalamus, and each barrel corresponds to a region of the body. Due to this distinctive cellular structure, organisation, and functional significance, the barrel cortex is a useful tool for understanding cortical processing and has played an important role in neuroscience. The majority of what is known about corticothalamic processing comes from studying the barrel cortex, and researchers have intensively studied the barrel cortex as a model of the neocortical column. The most distinctive aspect of the barrel field is the whisker barrels. These structures were first discovered by Woolsey and Van der Loos in 1970. Staining in the whisker barrels is more distinct than that in other areas of the somatosensory cortex. Recognizing that the array was similar to that of the vibrissae (whiskers) on the mystacial pad (the region where whiskers grow from) of certain mammals, they hypothesized that the barrels were the "cortical correlates of the mystacial vibrissae" and that "one barrel represents one vibrissa". Whereas small non-whisker areas of barrel cortex correspond to large and sometimes overlapping areas of the body, each much larger whisker barrel corresponds to a single whisker. As a result, the whisker barrels are the focus of the majority of barrel cortex research, and 'barrel cortex' is often used to refer primarily to the whisker barrels. Consequently, much of this article focuses on rodent whisker
|
{"page_id": 1287722, "title": "Barrel cortex"}
|
total, and most have fewer than five terms. This poses an additional issue for search engines, as finding relevant documents with this limited input is more difficult.

> 4.3.2 Boolean retrieval

Boolean search uses first-order logic expressions to filter the collection [7, Ch. 7.1]. It uses the logical operators “AND”, “OR”, and “NOT” to connect search terms and narrow down the result list. The following example shows a Boolean search query.

> Example 3: A Boolean query. This query returns all documents which have “apple” in their postings list, and either do not contain “cherry” or contain “banana”. “apple” AND (NOT “cherry” OR “banana”)

Additionally, the list of operators can be expanded with locality constraints, such as a “NEAR” operator. Using the “NEAR” operator, terms are only counted if they are separated by fewer than a given number of words. This type of extension is infeasible without recording the term positions in the index. Boolean retrieval is ideal for comprehensive searches and for finding all documents relevant to a query. The result list of a Boolean search contains all documents that match the expression. If no additional information is used, the order of the documents is arbitrary. This type of query logic can be very efficient for users who have experience using the system for comprehensive searches. However, using Boolean queries to find information quickly, without the need for completeness, can be inefficient. It is difficult to formulate Boolean queries that produce a balanced number of documents. The query can easily be too broad, returning many documents, or too specific, applying to only a few documents. See [7, pp. 235–237] and [15, p. 15] for example problems.

> 4.3.3 Ranked retrieval

Ranked retrieval is ideal for retrieving documents with ad-hoc queries. Ad-hoc queries are quick, one-off searches for specific
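Example 3 can be evaluated directly as set operations on postings lists. This is a toy sketch (the index contents and document IDs are invented for illustration); NOT is implemented as complement against the full document set:

```python
# Inverted index: term -> set of document IDs (its postings list).
index = {
    "apple":  {1, 2, 3, 5},
    "cherry": {2, 4, 5},
    "banana": {2, 6},
}
all_docs = {1, 2, 3, 4, 5, 6}

def postings(term):
    """Postings list for a term; empty if the term never occurs."""
    return index.get(term, set())

# "apple" AND (NOT "cherry" OR "banana"):
# AND -> intersection, OR -> union, NOT -> complement against all_docs.
result = postings("apple") & ((all_docs - postings("cherry")) | postings("banana"))
# result == {1, 2, 3}: doc 5 has "apple" but also "cherry" and no "banana".
```

Note how unforgiving the expression is: a document either matches or it does not, which is exactly why the result list has no inherent ranking and why queries so easily come out too broad or too narrow.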
|
{"source": 1021, "title": "from dpo"}
|
groups, due to not considering students and retired individuals. The results for the _Gender_ variable are inconclusive. They confirm the results of Roche-Cerasi et al. (2013), with a statistically significant negative impact on _Bususe_ (-0.31). This suggests that respondents who have completed an intermediate level of education are more car dependent than respondents with long higher education studies, partly confirming the findings of Rachele et al. (2015): flexibility of the schedule, personal need for status associated with the car, etc. As the survey data does not give details on these accounts, a clear conclusion cannot be drawn from these results.

5. Conclusions, policy recommendations, and future research
-----------------------------------------------------------

Our study offers additional insight into the level of influence of eleven individual factors on the travel mode choice of employees living and working in networks of small cities and towns. The research uses the region of Agder in Norway as a case study, profiting from its representativeness as a coastal network of small cities and towns typical of Northern Europe. We consider the contribution of our research to be primarily in the area of travel behaviour analysis, from the perspective of the relation between PT usage and age, gender, education level, accessibility, time, parking provision, car ownership, household composition, and costs, based on precise and quantifiable data, in the context of networks of small cities and towns. Secondly,
|
{"source": 2802, "title": "from dpo"}
|
to the desire, no ultimate satisfaction, life is perpetual suffering. In his first book, The Birth of Tragedy, Nietzsche clearly adopts this Schopenhauerian dualistic view of a distinction between appearance and reality, will and representation, but interestingly he personifies the word "will," treats it as if it were a conscious agent, and refers to it as "the primal unity." 4 Now, the word "aesthetics," which has to do with the study of art and beauty, is derived from the Greek word, "aisthetikos," which refers to the perceptive quality, or the appearance of things. Since the world as representation, the world we experience around us everyday, is an appearance, Nietzsche in this first work talks about this world as if it were a kind of artistic creation of this personified primal unity at the heart of things: "[W]e may assume that we are merely images and artistic projections for the true author, and that we have our highest dignity in our significance as works of art -- for it is only as an aesthetic phenomenon that existence and the world are eternally justified . . ." 5 The "true author" is of course the primal unity, but -- continuing the anthropomorphism -- why does it project us and the rest of the world, why does it do art? Nietzsche says: > . . . the truly existent primal unity, eternally suffering and contradictory . . . needs the rapturous vision, the pleasurable illusion, for its continuous redemption. And we, completely wrapped up in this illusion and composed of it, are compelled to consider this illusion as the truly nonexistent -- i.e., as a perpetual becoming in time, space and causality -- in other words, as empirical reality. 6 The world as we know it, the everyday world, the world as representation,
|
{"source": 5205, "title": "from dpo"}
|
shards that can be corrected. The circuit for this fraud proof for a 1 MiB cluster requires $\sim 3 \cdot 2^{15}$ (16 → 8) poseidon2 hashes.

* **Partial incorrectness of shards**: Some shards are damaged but can be recovered from others. The circuit for this fraud proof for a 1 MiB cluster requires $\sim 2^{15}$ (16 → 8) poseidon2 hashes.

**Reaction**:

* Nodes use data redundancy to recover correct shards in the mining process.
* Miners will form **fraud proofs** consisting of a zkSNARK that recalculates this data correctly. Since the data of one block is just hundreds of thousands of field elements, there is no difficulty for the miner to generate such a zkSNARK. The proof and verification of the zkSNARK will be paid for from the penalty of the validators who proposed such a transaction.

### Guarantees

It's important to note that the optimistic elements of the protocol do not reduce the security of the system, as the assumption of the presence of a sufficient number of honest nodes is as reliable as these elements themselves. If an honest node misses defects or refuses to store data, it still won't be able to mine and receive rewards.

* Data preservation with an honest minority: If there is a sufficient number of honest nodes in the network, the data will be correctly stored and will correspond to the corrected code close to the original code used to generate the commitment. Even if the commitment or shards were generated with errors, the network will be able to correct them and restore the data without errors.
* Protection against incorrect changes: In the process of error correction, a dishonest majority will not be able to substitute shard hashes with incorrect ones or replace the commitment with one that does not correspond to the corrected code. This guarantees the immutability
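The "recover damaged shards from others" step is standard Reed-Solomon-style erasure decoding: any k intact evaluations of a degree-&lt;k polynomial determine it completely. A toy sketch over a small prime field (the modulus, shard count, and data are invented for illustration and are not the protocol's actual field or parameters):

```python
P = 2**31 - 1  # a small Mersenne prime, standing in for the real field modulus

def eval_poly(coeffs, x):
    """Horner evaluation of the data polynomial mod P."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def encode(data, n):
    """n shards = evaluations of the degree-<k data polynomial at x = 1..n."""
    return [(x, eval_poly(data, x)) for x in range(1, n + 1)]

def interpolate_at(points, x):
    """Lagrange interpolation over GF(P): value at x of the unique
    degree-<k polynomial through the k given (xi, yi) points."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % P) % P
                den = den * ((xi - xj) % P) % P
        # Fermat inverse: den^(P-2) mod P, valid because P is prime.
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

data = [7, 11, 13, 17]      # k = 4 data symbols (polynomial coefficients)
shards = encode(data, 8)    # an (8, 4) code: lose up to half the shards
survivors = shards[4:]      # first four shards destroyed entirely
# Any k surviving shards determine the polynomial, so the lost ones come back:
recovered = [interpolate_at(survivors, x) for x in (1, 2, 3, 4)]
assert recovered == [y for _, y in shards[:4]]
```

This is also why the guarantee below only needs an honest minority of storage nodes: as long as enough correct shards survive anywhere in the network, the rest are reconstructible by interpolation.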
|
{"source": 6413, "title": "from dpo"}
|