has been solved. The sample problems are in verse and the commentary is in prose associated with calculations. The problems involve arithmetic, algebra and geometry, including mensuration. The topics covered include fractions, square roots, arithmetic and geometric progressions, solutions of simple equations, simultaneous linear equations, quadratic equations and indeterminate equations of the second degree. === Composition === The manuscript is written in an earlier form of Sharada script, a script which is known for having been in use mainly from the 8th to the 12th century in the northwestern part of South Asia, such as Kashmir and neighbouring regions. The language of the manuscript, though intended to be Sanskrit, was significantly influenced in its phonetics and morphology by a local dialect or dialects, and some of the resultant linguistic peculiarities of the text are shared with Buddhist Hybrid Sanskrit. The overlying dialects, though sharing affinities with Apabhraṃśa and with Old Kashmiri, have not been identified precisely. It is probable that most of the rules and examples had originally been composed in Sanskrit, while one of the sections was written entirely in a dialect. It is possible that the manuscript is a compilation of fragments from different works composed in a number of language varieties. Hayashi admits that some of the irregularities are due to errors by scribes or may be orthographical. A colophon to one of the sections states that it was written by a brahmin identified as "the son of Chajaka", a "king of calculators," for the use of Vasiṣṭha's son Hasika. The brahmin might have been the author of the commentary as well as the scribe of the manuscript. Near the colophon appears a broken word rtikāvati, which has been interpreted as the place Mārtikāvata mentioned by Varāhamihira as being in northwestern India (along with Takṣaśilā,
|
{"page_id": 4305817, "title": "Bakhshali manuscript"}
|
the antennas with phasing techniques that produced the same output pattern with no moving parts. One of the longest-lasting examples was Sonne, which went into operation just before World War II and was used operationally under the name Consol until 1991. The modern VOR system is based on the same principles (see below). === ADF and NDB === A great advance in the RDF technique was introduced in the form of phase comparisons of a signal as measured on two or more small antennas, or a single highly directional solenoid. These receivers were smaller, more accurate, and simpler to operate. Combined with the introduction of the transistor and integrated circuit, RDF systems were so reduced in size and complexity that they once again became quite common during the 1960s, and were known by the new name, automatic direction finder, or ADF. This also led to a revival in the operation of simple radio beacons for use with these RDF systems, now referred to as non-directional beacons (NDB). As the LF/MF signals used by NDBs can follow the curvature of the Earth, NDBs have a much greater range than VOR, whose signals travel only in line of sight. NDBs can be categorized as long-range or short-range depending on their power. The frequency band allotted to non-directional beacons is 190–1750 kHz, but the same system can be used with any common AM-band commercial station. === VOR === VHF omnidirectional range, or VOR, is an implementation of the reverse-RDF system, but one that is more accurate and able to be completely automated. The VOR station transmits two audio signals on a VHF carrier – one is Morse code at 1020 Hz to identify the station, the other is a continuous 9960 Hz audio modulated at 30 Hz, with the 0-degree referenced to
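The bearing decoded by a VOR receiver is the phase lag between the station's 30 Hz reference signal and the 30 Hz variable signal. A minimal numerical sketch of that phase comparison (the sample rate and synthetic signals below are illustrative assumptions, not part of any real receiver):

```python
import numpy as np

def vor_bearing(reference, variable, sample_rate, tone_hz=30.0):
    """Estimate the radial (degrees) as the phase lag of the variable
    30 Hz signal behind the 30 Hz reference, by correlating each signal
    with a complex exponential at the tone frequency."""
    t = np.arange(len(reference)) / sample_rate
    probe = np.exp(-2j * np.pi * tone_hz * t)
    phase_ref = np.angle(np.sum(reference * probe))
    phase_var = np.angle(np.sum(variable * probe))
    return np.degrees(phase_ref - phase_var) % 360.0

# Synthetic check: a variable signal lagging the reference by 90 degrees
# should decode as the 90-degree radial.
fs = 48_000
t = np.arange(fs) / fs
ref = np.cos(2 * np.pi * 30 * t)
var = np.cos(2 * np.pi * 30 * t - np.pi / 2)
print(round(vor_bearing(ref, var, fs)))  # 90
```

In a real receiver the reference phase is recovered from the FM subcarrier mentioned above; here both tones are generated directly to keep the sketch self-contained.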
|
{"page_id": 153095, "title": "Radio navigation"}
|
in between which a thin layer of insulator (normally aluminum oxide) is deposited. == Hamiltonian == If the Josephson junction has a junction capacitance $C_{\rm J}$, and the gate capacitor $C_{\rm g}$, then the charging (Coulomb) energy of one Cooper pair is:
$$E_{\rm C} = (2e)^2 / 2(C_{\rm g} + C_{\rm J}).$$
If $n$ denotes the number of excess Cooper pairs in the island (i.e. its net charge is $-2ne$), then the Hamiltonian is:
$$H = \sum_n \Big[ E_{\rm C}(n - n_{\rm g})^2 \, |n\rangle\langle n| - \tfrac{1}{2} E_{\rm J} \big( |n\rangle\langle n+1| + |n+1\rangle\langle n| \big) \Big],$$
where $n_{\rm g} = C_{\rm g} V_{\rm g} / (2e)$ is a control parameter known as the effective offset charge ($V_{\rm g}$ is the gate voltage), and $E_{\rm J}$ is the Josephson energy of the tunneling junction. At low temperature and low gate voltage, one can limit the analysis to only the lowest $n = 0$ and $n = 1$ states, and therefore obtain a two-level quantum system (a.k.a. qubit). Note that some recent papers adopt a different notation, and define the charging energy as that of one electron:
$$E_{\rm C} = e^2 / 2(C_{\rm g} + C_{\rm J}),$$
and then the corresponding Hamiltonian is: $H = \sum_n$
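In the two-level truncation described above, the Hamiltonian reduces to a 2×2 matrix over the charge states |0⟩ and |1⟩. A minimal numerical sketch (the values of E_C, E_J below are arbitrary illustrative energy scales, not taken from any device):

```python
import numpy as np

def charge_qubit_h(E_C, E_J, n_g):
    """Charge-qubit Hamiltonian truncated to the n = 0, 1 charge states,
    following H = sum_n [ E_C (n - n_g)^2 |n><n|
                          - (E_J/2)(|n><n+1| + |n+1><n|) ]."""
    return np.array([[E_C * (0 - n_g) ** 2, -E_J / 2],
                     [-E_J / 2,             E_C * (1 - n_g) ** 2]])

# At the charge sweet spot n_g = 1/2 the diagonal entries are degenerate,
# so the qubit splitting is set entirely by the Josephson energy E_J.
H = charge_qubit_h(E_C=5.0, E_J=0.3, n_g=0.5)
gap = np.ptp(np.linalg.eigvalsh(H))
print(round(gap, 6))  # 0.3
```

Away from n_g = 1/2 the charging term dominates the splitting, which is why the sweet spot is the usual operating point.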
|
{"page_id": 977983, "title": "Charge qubit"}
|
There were approximately 13 miles of track in and out of Hawthorne Works for rail freight of inbound materials and outbound finished products. Western Electric was responsible for operating this trackage, serving Hawthorne and other nearby industrial companies, for 50 years, up to 1952. Also in 1903, construction of Hawthorne Works' first buildings was authorized by Barton. In 1907, the research and development staffs of Western Electric and AT&T were consolidated at 463 West Street, New York. The location served the new Western Electric Engineering Department, which was responsible for the testing and inspection of its telephones and equipment. AT&T's Engineering Department retained the responsibility for the growth of the Bell System with compatible equipment and service. Gradually the consolidation improved and advanced the telephony response to expanding use. On July 24, 1915, employees of the Hawthorne Works boarded the SS Eastland in downtown Chicago for a company picnic. The ship rolled over at the dock and 844 people died. In 1920, Alice Heacock Seidel was the first female Western Electric employee to be given permission to stay on after she had married. This set a precedent in the company, which previously had not allowed married women in their employ. Miss Heacock had worked for Western Electric for sixteen years before her marriage, and was at the time the highest-paid secretary in the company. In her memoirs, she wrote that the decision to allow her to stay on "required a meeting of the top executives to decide whether I might remain with the Company, for it established a precedent and a new policy for the Company – that of married women in their employ. If the women at the top were permitted to remain after marriage then all women would expect the same privilege.
|
{"page_id": 229970, "title": "Western Electric"}
|
referring to evidence on the topic, text, or issue to probe and reflect on ideas under discussion. (See grade 6 Reading Literature Standard 1 and Reading Informational Text Standard 1 for specific expectations regarding the use of textual evidence.) b. Follow rules for collegial discussions, set specific goals and deadlines, and define individual roles as needed. Massachusetts Curriculum Framework for English Language Arts and Literacy c. Pose and respond to specific questions with elaboration and detail by making comments that contribute to the topic, text, or issue under discussion. d. Review the key ideas expressed and demonstrate understanding of multiple perspectives through reflection and paraphrasing. 2. Interpret information presented in diverse media and formats (e.g., visually, quantitatively, orally) and explain how it contributes to a topic, text, or issue under study. 3. Delineate a speaker’s argument and specific claims, distinguishing claims that are supported by reasons and evidence from claims that are not. Presentation of Knowledge and Ideas 4. Present claims and findings, sequencing ideas logically and using pertinent descriptions, facts, and details to accentuate main ideas or themes; use appropriate vocabulary, eye contact, volume, and pronunciation. (See grade 6 Language Standards 4–6 for specific expectations regarding vocabulary.) 5. Include multimedia components and visual displays in presentations to clarify information. 6. Adapt speech to a variety of contexts and tasks, demonstrating command of formal English when indicated or appropriate. (See grade 6 Language Standards 1 and 3 for specific expectations.) # Grade 6 Language Standards [L] The following standards for grades 6–12 offer a focus for instruction each year to help ensure that students gain adequate mastery of a range of skills and applications.
Students advancing through the grades are expected to meet each year’s grade-specific standards and retain or further develop skills
|
{"source": 966, "title": "from dpo"}
|
P1 contention experiment are on average 5% higher than the control experiment, allowing the browser to efficiently distinguish between the two distributions. We observe similar results on Chrome 95. In the following sections, we describe how to convert this proof-of-concept into practical attacks. In particular, we obtain a higher spatial resolution and evaluate 100 WebAssembly instructions (C1), we ensure the attacker does not have to pin processes (C2), and we use a higher-resolution timer (C3). 4 PC-detector The translation of WebAssembly instructions into μops varies across systems: it can depend on the microarchitecture, instruction extension sets or JavaScript engine. In this context, it can be hard to find WebAssembly instructions that reliably cause port contention. In this section, we propose PC-detector, a Selenium-based framework to dynamically detect and characterize the port usage of WebAssembly instructions. Using the methodology described in Section 3, PC-detector automatically tests multiple WebAssembly instructions and checks if they cause contention on P1 or P5. ## 4.1 Description Framework. Our framework is composed of two components. The first component is a native C script that either runs an empty loop, creates contention on P1, or creates contention on P5. The second component is a Selenium-controlled browser which runs automatically generated WebAssembly code. For each WebAssembly instruction instr, we create a binary file with 1 000 000 calls. This file is then executed in the browser, and its runtime is measured using performance.now(). We run three experiments: (1) Repeatedly executing and timing the WebAssembly file, used as a control. (2) Creating contention on P1 with native code and timing the WebAssembly file. (3) Creating contention on P5 with native code and timing the WebAssembly file. By evaluating the timing distributions of these three experiments, we can
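The decision rule implied above (contended runs are ~5% slower than control runs) can be sketched as a simple comparison of the two timing distributions. The Gaussian timings and the 2.5% threshold below are illustrative assumptions, not parameters from the paper:

```python
import random
import statistics

def detects_contention(control, candidate, threshold=0.025):
    """Flag contention when the candidate's mean runtime exceeds the
    control mean by more than `threshold` (relative), mirroring the
    observation that contended runs are ~5% slower on average."""
    return statistics.mean(candidate) > (1 + threshold) * statistics.mean(control)

# Simulated performance.now() measurements (milliseconds): contention
# shifts the whole distribution up by roughly 5%.
random.seed(0)
control = [random.gauss(100.0, 1.0) for _ in range(1000)]
contended = [random.gauss(105.0, 1.0) for _ in range(1000)]
print(detects_contention(control, contended))  # True
print(detects_contention(control, control))    # False
```

A real classifier would compare full distributions rather than means, but the mean shift is already sufficient when the separation is this large.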
|
{"source": 2331, "title": "from dpo"}
|
Executive Chair may not simultaneously act as Banco Santander’s Chief Executive Officer. • The corporate Risk, Compliance and Conduct, and Internal Audit functions report as independent units to a committee or a member of the board of directors and have direct, unfettered access to the board. 2023 Annual report. Lead Independent Director Our Lead Independent Director is Glenn Hutchins as of 1 October 2023. He replaced Bruce Carnegie-Brown, who had been in the role for almost nine years. The Lead Independent Director, who is key to our governance, coordinates the non-executive directors effectively and makes sure they serve as an appropriate counter-balance to the executive directors. The following chart shows the Lead Independent Director's functions and activities in 2023. Before stepping down, Bruce Carnegie-Brown provided a detailed report to the nomination committee and board of directors on his activities and the discharge of his duties. Duties of the Lead Independent Director and activities during 2023. Duties: Facilitate discussion and open dialogue among independent directors, coordinating private meetings of non-executive directors without the executive directors present and proactively engaging with them to consider their views and opinions. Activities in 2023: Held five meetings with non-executive directors where they were able to voice their views and opinions. These meetings provided a valuable opportunity to reflect on the overall board and committee cycle throughout the year, to discuss board training topics, strategy execution, executive director and top management performance and objectives, succession planning and reflections on areas of continuous improvement.
Given the appointment of a new Chief Executive Officer, the non-executive directors invited him to one session to gain his views after three months in office. In addition,
|
{"source": 4951, "title": "from dpo"}
|
the widest range of settings, we ask for the key derivation to be secure even against seed-dependent, adversarially-manipulated sources.³ However, Proposition 3.1 shows that, at least in general, no extractors exist that work for such a strong adversarial model. We therefore turn to seed-dependent condensers, showing that these yield strong positive results about the security of key derivation. Towards this, we model the "real" seed-dependent setting as follows. Let S ← {0,1}^d be a random seed that is chosen, and let X ← A(S) be sampled by an adversarial sampler A. Finally, the cryptographic primitive P uses R ← Cond(X; S) as the key. While the above model is the one of most direct practical interest, we will actually consider the more general case of average-case condensing, in which an attacker B against P obtains part of the input to the condenser, the side information Z. The resulting real/ideal settings for deriving the key for P are formalized by the procedures Real(A) and Ideal(A):
Real(A):
  S ← {0,1}^d
  (X, Z) ← A(S)
  R ← Cond((X, Z); S)
  Return (R, S, Z)
Ideal(A):
  S ← {0,1}^d
  (X, Z) ← A(S)
  R ← {0,1}^m
  Return (R, S, Z)
The two procedures are parameterized by a sampler A that on input the seed S outputs a pair (X, Z). We assume that the sampler A has size at most t and produces a source X of (conditional) min-entropy H∞(X | (S, Z)) ≥ k, for some parameters t and k. We call such samplers (t, k)-bounded. Sometimes, to emphasize the dependence on the
³ For example, the Linux RNG folds prior outputs back into its entropy pool.
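The data flow of the two games can be sketched as follows. This is only an illustration of the Real/Ideal structure, not a secure instantiation: the SHA-256 call stands in for a condenser Cond with the required seed-dependent guarantees, and the seed-dependent sampler is a toy.

```python
import hashlib
import secrets

def cond(x: bytes, z: bytes, seed: bytes, m: int = 16) -> bytes:
    # Stand-in "condenser": hashing (X, Z, S) only illustrates the data
    # flow; a real Cond must satisfy the condensing definition.
    return hashlib.sha256(x + z + seed).digest()[:m]

def real(sampler, d_bytes=4, m=16):
    s = secrets.token_bytes(d_bytes)   # S <- {0,1}^d
    x, z = sampler(s)                  # (X, Z) <- A(S): seed-dependent
    r = cond(x, z, s, m)               # R <- Cond((X, Z); S)
    return r, s, z

def ideal(sampler, d_bytes=4, m=16):
    s = secrets.token_bytes(d_bytes)
    x, z = sampler(s)
    r = secrets.token_bytes(m)         # R <- {0,1}^m: uniform, ignores X
    return r, s, z

# A toy sampler that deliberately depends on the seed, with the seed
# prefix leaked as side information Z.
sampler = lambda s: (hashlib.sha256(s).digest(), s[:2])
r, s, z = real(sampler)
print(len(r), len(s), len(z))  # 16 4 2
```

Security then asks that no bounded attacker B, given (S, Z), can tell the output of `real` from that of `ideal` when used to key P.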
|
{"source": 6222, "title": "from dpo"}
|
with counting measure on $\{t_1, \dots, t_n\}$ and Lebesgue measure on $\mathbb{R}$, the kernel is $f^{1/2} K_{\operatorname{Ai}}^{\operatorname{ext}}(t_i, x; t_j, y) f^{1/2}$. == Literature == Prähofer, Michael; Spohn, Herbert (2002). "Scale Invariance of the PNG Droplet and the Airy Process". Journal of Statistical Physics. 108. Springer. arXiv:math/0105240. Johansson, Kurt (2003). "Discrete Polynuclear Growth and Determinantal Processes". Commun. Math. Phys. 242. Springer: 290. arXiv:math/0206208. doi:10.1007/s00220-003-0945-y. Tracy, Craig; Widom, Harold (2003). "A System of Differential Equations for the Airy Process". Electron. Commun. Probab. 8: 93–98. arXiv:math/0302033. doi:10.1214/ECP.v8-1074. == References ==
|
{"page_id": 73934690, "title": "Airy process"}
|
In molecular biology mir-828 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. == See also == MicroRNA == References == == Further reading == == External links == Page for mir-828 microRNA precursor family at Rfam
|
{"page_id": 36472139, "title": "Mir-828 microRNA precursor family"}
|
is given by
$$F \star G(z) = \frac{1}{2\pi} \int F(z_1)\, G(z_2 - z_1)\, e^{i(x_1 y_2 - y_1 x_2)}\, dx_1\, dy_1.$$
The smoothing operators correspond to W(F) or ψ(a) with F or a Schwartz functions on R². The corresponding operators T have kernels that are Schwartz functions. They carry each Sobolev space into the Schwartz functions. Moreover, every bounded operator on L²(R) having this property has this form. For the operators ψ(a) the Moyal product translates into the Weyl symbolic calculus. Indeed, if the Fourier transforms of a and b have compact support then
$$\psi(a)\psi(b) = \psi(a \circ b),$$
where
$$a \circ b = \sum_{n \geq 0} \frac{i^n}{n!} \left( \frac{\partial^2}{\partial x_1 \partial y_2} - \frac{\partial^2}{\partial y_1 \partial x_2} \right)^n a \otimes b \,\Big|_{\mathrm{diagonal}}.$$
This follows because in this case b must extend to an entire function on C² by the Paley–Wiener theorem. This calculus can be extended to a broad class of symbols, but the simplest corresponds to convolution by a class of functions or distributions that all have the form T + S, where T is a distribution of compact support with singular support concentrated at 0 and where S is a Schwartz function. This class contains the operators P, Q as well as D^{1/2} and D^{−1/2}
|
{"page_id": 34205013, "title": "Oscillator representation"}
|
psychologists also study aging and processes throughout the life span, including old age. These psychologists draw on the full range of psychological theories to inform their research. === Genes and environment === All researched psychological traits are influenced by both genes and environment, to varying degrees. These two sources of influence are often confounded in observational research of individuals and families. An example of this confounding can be shown in the transmission of depression from a depressed mother to her offspring. A theory based on environmental transmission would hold that an offspring, by virtue of their having a problematic rearing environment managed by a depressed mother, is at risk for developing depression. On the other hand, a hereditarian theory would hold that depression risk in an offspring is influenced to some extent by genes passed to the child from the mother. Genes and environment in these simple transmission models are completely confounded. A depressed mother may both carry genes that contribute to depression in her offspring and also create a rearing environment that increases the risk of depression in her child. Behavioral genetics researchers have employed methodologies that help to disentangle this confound and understand the nature and origins of individual differences in behavior. Traditionally the research has involved twin studies and adoption studies, two designs where genetic and environmental influences can be partially un-confounded. More recently, gene-focused research has contributed to understanding genetic contributions to the development of psychological traits. The availability of microarray molecular genetic or genome sequencing technologies allows researchers to measure participant DNA variation directly, and test whether individual genetic variants within genes are associated with psychological traits and psychopathology through methods including genome-wide association studies. 
One goal of such research is similar to that in positional cloning and its success in Huntington's: once a causal
|
{"page_id": 22921, "title": "Psychology"}
|
J147 is an experimental drug with reported effects against both Alzheimer's disease and ageing in mouse models of accelerated aging. The approach that led to the development of J147 was to screen candidate molecules for anti-aging effects instead of targeting the amyloid plaques; this is contrary to most other approaches to developing drugs against Alzheimer's disease, which target the plaque deposits in the brain. J147 is also reported to address other biological aging factors, such as preventing the leakage of blood from microvessels in mouse brains. The development of J147 followed the chemical-pharmacological route, in contrast to biological routes that exploit, e.g., the use of bacteriophages. Its derivative CAD-31 has enhanced neurogenic activity over J147 in human neural precursor cells. CAD-31 enhances the use of free fatty acids for energy production by shifting the metabolic profile of fatty acids toward the production of ketone bodies, the only alternative source of energy in the brain when glucose levels are low. The target molecule is a protein called ATP synthase, which is found in the mitochondria. == References ==
|
{"page_id": 48548495, "title": "J147"}
|
This is a list of TCP and UDP port numbers used by protocols for operation of network applications. The Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) only need one port for bidirectional traffic. TCP usually uses port numbers that match the services of the corresponding UDP implementations, if they exist, and vice versa. The Internet Assigned Numbers Authority (IANA) is responsible for maintaining the official assignments of port numbers for specific uses. However, many unofficial uses of both well-known and registered port numbers occur in practice. Similarly, many of the official assignments refer to protocols that were never or are no longer in common use. This article lists port numbers and their associated protocols that have experienced significant uptake. == Table legend == == Well-known ports == The port numbers in the range from 0 to 1023 (0 to 2^10 − 1) are the well-known ports or system ports. They are used by system processes that provide widely used types of network services. On Unix-like operating systems, a process must execute with superuser privileges to be able to bind a network socket to an IP address using one of the well-known ports. == Registered ports == The range of port numbers from 1024 to 49151 (2^10 to 2^15 + 2^14 − 1) are the registered ports. They are assigned by IANA for a specific service upon application by a requesting entity. On most systems, registered ports can be used without superuser privileges. == Dynamic, private or ephemeral ports == The range 49152–65535 (2^15 + 2^14 to 2^16 − 1), 16 384 ports, contains dynamic or private ports that cannot be registered with IANA. This range is used for private or customized services, for temporary purposes, and for automatic allocation of ephemeral ports. == Note == == See also
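The three IANA ranges described above translate directly into a small classifier; the function name and labels below are illustrative, not IANA terminology:

```python
def port_class(port: int) -> str:
    """Classify a TCP/UDP port number into the IANA ranges described above."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be in 0..65535")
    if port <= 1023:
        return "well-known"       # 0 .. 2**10 - 1: system ports
    if port <= 49151:
        return "registered"       # 2**10 .. 2**15 + 2**14 - 1
    return "dynamic/private"      # 49152 .. 65535: ephemeral range

print(port_class(443))    # well-known
print(port_class(8080))   # registered
print(port_class(50000))  # dynamic/private
```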
|
{"page_id": 347136, "title": "List of TCP and UDP port numbers"}
|
different currents for specific functions. Polarities are changed to get more possible functions over a single circuit. For example, imagine one possible scheme where the presence of these currents causes the base station to change state: no current means receive on channel 1 (the default); +6 mA might mean transmit on channel 1; −6 mA might mean stay in receive mode but switch to channel 2. So long as the −6 mA current were present, the remote base station would continue to receive on channel 2. −12 mA might command the base station to transmit on channel 2. This circuit is polarity-sensitive. If a telephone company cable splicer accidentally reversed the conductors, selecting channel 2 would lock the transmitter on. Each current level could close a set of contacts, or operate solid-state logic, at the other end of the circuit. That contact closure caused a change of state on the controlled device. Some remote control equipment could have options set to allow compatibility between manufacturers. That is, a base station that was configured to transmit with a +18 mA current could have options changed to (instead) make it transmit when +6 mA was present. In two-way radio use, AC signals were also present on the circuit pair. If the base station were idle, receive audio would be sent over the line from the base station to the dispatch office. In the presence of a transmit command current, the remote control console would send audio to be transmitted. The voice of the user in the dispatch office would be modulated and superimposed over the DC current that caused the transmitter to operate. == See also == Current source – a current loop transmitter Current-to-voltage converter Highway Addressable Remote Transducer Protocol NAMUR – German industry standards body defining fault levels for 4–20
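The example scheme above is just a lookup from current level to command, which also makes the polarity hazard easy to see. The table below encodes the hypothetical currents from the text; real assignments vary by manufacturer and option settings:

```python
# Decoding table for the example scheme described above (illustrative only).
COMMANDS = {
    0: "receive on channel 1",     # the default, no current
    +6: "transmit on channel 1",
    -6: "receive on channel 2",
    -12: "transmit on channel 2",
}

def decode(current_ma: int) -> str:
    return COMMANDS.get(current_ma, "undefined current level")

print(decode(-6))      # receive on channel 2
# Reversed conductors flip the sign of the loop current, so a channel-2
# select (-6 mA) arrives as +6 mA: a transmit command, locking the
# transmitter on as described above.
print(decode(-(-6)))   # transmit on channel 1
```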
|
{"page_id": 1174172, "title": "Current loop"}
|
HD 99922 is a double star system in the constellation of Crater. It shines with an apparent visual magnitude of 5.77 from a distance of about 450 light years (140 parsecs) away from the Earth. The primary star is an A-type main sequence star; the secondary star is located about 8 arcseconds away. Other designations include HR 4428 and HIP 56078. == References ==
|
{"page_id": 37652780, "title": "HD 99922"}
|
as a linker histone and about 180 base pairs of DNA. These core histones are rich in lysine and arginine residues. The carboxyl (C) terminal ends of these histones contribute to histone-histone interactions, as well as histone-DNA interactions. The amino (N) terminal charged tails are the site of post-translational modifications, such as the one seen in H3K36me3. == Epigenetic implications == The post-translational modification of histone tails by either histone-modifying complexes or chromatin-remodelling complexes is interpreted by the cell and leads to complex, combinatorial transcriptional output. It is thought that a histone code dictates the expression of genes by a complex interaction between the histones in a particular region. The current understanding and interpretation of histones comes from two large-scale projects: ENCODE and the Epigenomic Roadmap. The purpose of the epigenomic study was to investigate epigenetic changes across the entire genome. This led to chromatin states, which define genomic regions by grouping the interactions of different proteins and/or histone modifications together. Chromatin states were investigated in Drosophila cells by looking at the binding location of proteins in the genome. Use of ChIP-sequencing revealed regions in the genome characterised by different banding. Different developmental stages were profiled in Drosophila as well, with an emphasis placed on histone modification relevance. A look into the data obtained led to the definition of chromatin states based on histone modifications. The human genome was annotated with chromatin states. These annotated states can be used as new ways to annotate a genome independently of the underlying genome sequence. This independence from the DNA sequence enforces the epigenetic nature of histone modifications. Chromatin states are also useful in identifying regulatory elements that have no defined sequence, such as enhancers.
This additional level of annotation allows for a deeper understanding of cell specific
|
{"page_id": 62594569, "title": "H3K23ac"}
|
effect caused by slight differences in observation position between the two images. When using two images produced by the same sensor with a separation in time, it must be assumed other phase contributions (for example from deformation or atmospheric effects) are minimal. In 1995 the two ERS satellites flew in tandem with a one-day separation for this purpose. A second approach is to use two antennas mounted some distance apart on the same platform, and acquire the images at the same time, which ensures no atmospheric or deformation signals are present. This approach was followed by NASA's SRTM mission aboard the Space Shuttle in 2000. InSAR-derived DEMs can be used for later two-pass deformation studies, or for use in other geophysical applications. === Mapping and classification of active deformation areas === Various procedures have been developed to semi-automatically identify clusters of active persistent scatterers, usually referred to as active deformation areas, and preliminarily associate them with different potential types of deformational processes (e.g., landslides, sinkholes, building settlements, land subsidence) across wide areas. == See also == Coherence (physics) Optical heterodyne detection Remote sensing ROI PAC == References == == Further reading == B. Kampes, Radar Interferometry – Persistent Scatterer Technique, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2006. ISBN 978-1-4020-4576-9 == External links == InSAR, a tool for measuring Earth's surface deformation Matthew E. Pritchard USGS InSAR factsheet Archived 2009-06-25 at the Wayback Machine InSAR Principles, ESA publication, TM19, February 2007.
|
{"page_id": 7321060, "title": "Interferometric synthetic-aperture radar"}
|
stopping points, the engineer can design the signal phasing and timing to best match the goals of the operational plan. Guidance: > 02 Pavement markings should be used at highway traffic signal locations as provided in Part 3. If the road surface will not retain pavement markings, signs should be installed to provide the needed road user information. > MUTCD 11th Edition Page 647 December 2023 Sect. 4A.10 Section 4A.10 Responsibility for Operation and Maintenance Guidance: > 01 Prior to installing any highway traffic signal, the responsibility for the maintenance of the signal and all of the appurtenances, hardware, software, and the timing plan(s) should be clearly established by the responsible agency. > 02 To this end the agency should: A. Keep every controller assembly in effective operation in accordance with its predetermined timing schedule, check the operation of the controller assembly frequently enough to verify that it is operating in accordance with the predetermined timing schedule, and establish a policy to maintain a record of all timing changes and to ensure that only authorized persons are permitted to make timing changes; B. Clean the optical system of the signal sections and replace the light sources as frequently as experience proves necessary; C. Clean and service equipment and other appurtenances as frequently as experience proves necessary; D. Provide for alternate operation of the traffic control signal during a period of failure, using flashing mode or manual control, or manual traffic direction by proper authorities as might be required by traffic volumes or congestion, or by erecting other traffic control devices; E. Have properly-skilled maintenance personnel available without undue delay for all signal malfunctions and signal indication failures; F. Provide spare equipment to minimize the interruption of highway traffic signal operation as a result of equipment failure; G. Provide for the
|
{"source": 1185, "title": "from dpo"}
|
Title: A deterministic gradient-based approach to avoid saddle points | European Journal of Applied Mathematics | Cambridge Core Abstract -------- Loss functions with a large number of saddle points are one of the major obstacles for training modern machine learning (ML) models efficiently. First-order methods such as gradient descent (GD) are usually the methods of choice for training ML models. However, these methods converge to saddle points for certain choices of initial guesses. In this paper, we propose a modification (mLSGD) of the recently proposed Laplacian smoothing gradient descent (LSGD) [Osher et al., arXiv:1806.06317], and demonstrate its potential to avoid saddle points without sacrificing the convergence rate. Our analysis is based on the attraction region, formed by all starting points for which the considered numerical scheme converges to a saddle point. We investigate the attraction region’s dimension both analytically and numerically. For a canonical class of quadratic functions, we show that the dimension of the attraction region for mLSGD is ⌊(n−1)/2⌋, and hence it is significantly smaller than that of GD, whose dimension is n−1. References ---------- Agarwal, N., Allen-Zhu, Z., Bullins, B., Hazan, E. & Ma, T. (2017) Finding approximate local minima faster than gradient descent. In: Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Association for Computing Machinery, New York, NY, USA, pp. 1195–1199. Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127. Gradient descent finds the cubic-regularized nonconvex Newton step. SIAM J. Optim. 29(3), 2146–2178. Exploiting negative curvature in deterministic and stochastic optimization. Math. Program. 176(1), 69–94. A trust region algorithm with a worst-case iteration complexity of \mathcal{O}(\epsilon^{-3/2})
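The attraction region mentioned in the abstract can be seen on a toy quadratic. The sketch below uses plain GD (not the paper's mLSGD) on f(x, y) = (x² − y²)/2, whose only critical point is a saddle at the origin; the step size and iteration count are arbitrary:

```python
def gd(x0, y0, lr=0.1, steps=200):
    """Plain gradient descent on f(x, y) = (x**2 - y**2) / 2,
    whose only critical point (0, 0) is a saddle."""
    x, y = x0, y0
    for _ in range(steps):
        x, y = x - lr * x, y + lr * y   # grad f = (x, -y)
    return x, y

# Starting on the attraction region (the line y = 0), GD converges to
# the saddle; any nonzero y component grows along the negative-curvature
# direction and escapes.
x, y = gd(1.0, 0.0)
print(abs(x) < 1e-6, y == 0.0)        # True True
print(abs(gd(1.0, 1e-6)[1]) > 1.0)    # True
```

For this f the attraction region of GD is the (n−1)-dimensional subspace y = 0, matching the abstract's statement that GD's attraction region has dimension n−1.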
|
{"source": 3359, "title": "from dpo"}
|
taking a full walk of the tree and skipping nodes, but without skipping more than two consecutive intermediate nodes. Show that the costliest edge in a bottleneck spanning tree has a cost that is at most the cost of the costliest edge in a bottleneck hamiltonian cycle.) 35.2-5 Suppose that the vertices for an instance of the traveling-salesman problem are points in the plane and that the cost c(u, v) is the euclidean distance between points u and v. Show that an optimal tour never crosses itself. 35.3 The set-covering problem The set-covering problem is an optimization problem that models many problems that require resources to be allocated. Its corresponding decision problem generalizes the NP-complete vertex-cover problem and is therefore also NP-hard. The approximation algorithm developed to handle the vertex-cover problem doesn't apply here, however, and so we need to try other approaches. We shall examine a simple greedy heuristic with a logarithmic approximation ratio. That is, as the size of the instance gets larger, the size of the approximate solution may grow, relative to the size of an optimal solution. Because the logarithm function grows rather slowly, however, this approximation algorithm may nonetheless give useful results. Chapter 35 Approximation Algorithms Figure 35.3 An instance (X, F) of the set-covering problem, where X consists of the 12 black points and F = {S1, S2, S3, S4, S5, S6}. A minimum-size set cover is C = {S3, S4, S5}, with size 3. The greedy algorithm produces a cover of size 4 by selecting either the sets S1, S4, S5, and S3 or the sets S1, S4, S5, and S6, in order. An instance (X, F) of the set-covering problem consists of a finite set X and a family F of subsets of X
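The greedy heuristic described above can be sketched as follows (a minimal illustration, not the book's pseudocode; the small instance is made up):

```python
def greedy_set_cover(X, F):
    """Repeatedly pick the set covering the most still-uncovered elements."""
    uncovered = set(X)
    cover = []
    while uncovered:
        best = max(F, key=lambda S: len(S & uncovered))  # greedy choice
        cover.append(best)
        uncovered -= best
    return cover

# Small made-up instance: 6 elements, 4 candidate subsets.
X = set(range(6))
F = [{0, 1, 2}, {2, 3}, {3, 4, 5}, {5}]
C = greedy_set_cover(X, F)  # every element of X ends up covered
```

Each iteration takes the set with the largest intersection with the uncovered elements, which is exactly the rule that yields the logarithmic approximation ratio discussed in the text.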
|
{"source": 5230, "title": "from dpo"}
|
use exceptions for error handling. Consequently, in modern Java developers are put in an awkward position of having to invent ad-hoc error-handling alternatives when using its modern features. Exceptions don't even work for Java. This should give you some insight into why Rust chose not to use exceptions: they simply don't work with the ML-language-family feature-set. Now, there is an argument that Rust could probably do with some syntactic sugar for its error-handling mechanism, like Swift (which also eschews exceptions). For example `fn name() -> String throws E` could desugar to `fn name() -> Result<String, E>`. Similarly, defining useful `Error` types is hard, with various flavour-of-the-month approaches like `error-chain`, `failure`, and now `anyhow`. Consequently I have some sympathy for you, since the Rust Book doesn't really cover these libraries, and so the error-handling story looks sparse. The knowledge is all in the "culture", which is (unnecessarily) hard for a newbie to figure out. I do think the stdlib could do with more support for creation of error types, and perhaps looking at `failure` and `anyhow` would be a good start. I like the sugar described above precisely because it would stop the creation of alternative `Result` types and force everyone to use the standard. My fear is that without this Rust could get into the Haskell situation where a project might depend on 5 crates, each using different and slightly incompatible error-handling methods. My hope is the newly announced Rust Error-Handling Working Group. I could do without the cost of stack unwinding and encouragement of non-local handling, but each to
|
{"source": 6614, "title": "from dpo"}
|
$$a_{\mathbf{k}\lambda}(t) = a_{\mathbf{k}\lambda}(0)e^{-i\omega_{k}t} + ie\sqrt{\frac{2\pi}{\hbar\omega_{k}V}}\int_{0}^{t}dt'\, e_{\mathbf{k}\lambda}\cdot\dot{\mathbf{x}}(t')\, e^{i\omega_{k}\left(t'-t\right)}$$ and therefore the equation of motion for the dipole may be written: $$\ddot{\mathbf{x}} + \omega_{0}^{2}\mathbf{x} = \frac{e}{m}\mathbf{E}_{0}(t) + \frac{e}{m}\mathbf{E}_{RR}(t)$$ where $$\mathbf{E}_{0}(t) = i\sum_{\mathbf{k}\lambda}\sqrt{\frac{2\pi\hbar\omega_{k}}{V}}\left[a_{\mathbf{k}\lambda}(0)e^{-i\omega_{k}t} - a_{\mathbf{k}\lambda}^{\dagger}(0)e^{i\omega_{k}t}\right]e_{\mathbf{k}\lambda}$$ and $$\mathbf{E}_{RR}(t) = -\frac{4\pi e}{V}\sum_{\mathbf{k}\lambda}\int_{0}^{t}dt'\left[e_{\mathbf{k}\lambda}\cdot\dot{\mathbf{x}}\left(t'\right)\right]\cos\omega_{k}\left(t'-t\right)$$ It can be shown that in the radiation reaction field, if the mass m is regarded as the "observed" mass then we can take $$\mathbf{E}_{RR}(t) = \frac{2e}{3c^{3}}\dddot{\mathbf{x}}$$ The total field acting on the dipole has two parts, E0(t) and ERR(t). E0(t) is the free or zero-point field acting on the dipole. It is the homogeneous solution of the Maxwell equation for the field acting on the dipole, i.e., the solution,
|
{"page_id": 84400, "title": "Zero-point energy"}
|
The energy eigenvalues of the periodic solid for a particular $\mathbf{k}$, $E_{b}\left(\mathbf{k}\right)$, are the roots of the equation $\det\mathbf{M}\left(E,\mathbf{k}\right)=0$. The eigenfunctions are found by solving for the $c_{l,m}\left(E,\mathbf{k}\right)$ with $E=E_{b}\left(\mathbf{k}\right)$. The dimension of these matrix equations is technically infinite, but by ignoring all contributions that correspond to an angular momentum quantum number $l$ greater than $l_{\max}$, they have dimension $\left(l_{\max}+1\right)^{2}$. The justification for this approximation is that the matrix elements of the t-matrix $t_{lm,l'm'}$ are very small when $l$ and $l'$ are greater than $l_{\max}$, and the elements of the inverse matrix $m_{lm,l'm'}$ are very large. In the original derivations of the KKR method, spherically symmetric muffin-tin potentials were used. Such potentials have the advantage that the inverse of the scattering matrix is diagonal in $l$: $$m_{lm,l'm'}=\left[\alpha\cot\delta_{l}\left(E\right)-i\alpha\right]\delta_{l,l'}\delta_{m,m'},$$ where $\delta_{l}\left(E\right)$ is the scattering phase shift that appears in the partial wave analysis in scattering theory. It is also easier to visualize the waves scattering from one atom to another, and l
|
{"page_id": 50768319, "title": "Multiple scattering theory"}
|
a way that it's harder for him to be objective about, because his daughter, Grace, is there. Then you have the dean, and she lives having seen this culture develop over the course of her years as an educator, and she's reacting to it." In September 2015, it was revealed that scream queen Heather Langenkamp is behind the special effects for the series. === Casting === In December 2014, it was reported that Emma Roberts and Jamie Lee Curtis would be featured as series regulars. In January 2015, Lea Michele, Joe Manganiello, Keke Palmer, and Abigail Breslin joined the series' main cast, as well as actress/singer Ariana Grande in a recurring capacity. Later that month, The Hollywood Reporter confirmed that Nick Jonas would recur throughout the first season. In February 2015, newcomer Billie Lourd and Skyler Samuels joined the series' main cast. Later in the month, Niecy Nash joined the recurring cast as Denise, and British actor Lucien Laviscount, Diego Boneta and Glen Powell were confirmed as regulars. In March 2015, Nasim Pedrad was cast as a series regular. On March 13, previously cast Manganiello was forced to depart the series, due to publicity obligations for his film Magic Mike XXL. Oliver Hudson was hired as his replacement. On June 24, it was announced that Charisma Carpenter and Roger Bart would portray Chanel #2's (Grande) parents. In August 2015, Philip Casnoff was cast as Cathy's (Curtis) husband. In September 2015, Murphy announced, through his Twitter feed, that Patrick Schwarzenegger had joined the cast. He will portray Chad's (Powell) younger brother, Thad. Chad's older brother, Brad, will be played by Chad Michael Murray; while Alan Thicke and Julia Duffy have been cast as Mr. and Mrs. Radwell. John Stamos, Taylor Lautner, and Colton Haynes joined the second season. On July 28,
|
{"page_id": 45093891, "title": "Scream Queens (2015 TV series)"}
|
more detail. == Design == === URLs and URNs === A Uniform Resource Name (URN) is a URI that identifies a resource by name in a particular namespace. A URN may be used to talk about a resource without implying its location or how to access it. For example, in the International Standard Book Number (ISBN) system, ISBN 0-486-27557-4 identifies a specific edition of the William Shakespeare play Romeo and Juliet. The URN for that edition would be urn:isbn:0-486-27557-4. However, it gives no information as to where to find a copy of that book. A Uniform Resource Locator (URL) is a URI that specifies the means of acting upon or obtaining the representation of a resource, i.e. specifying both its primary access mechanism and network location. For example, the URL http://example.org/wiki/Main_Page refers to a resource identified as /wiki/Main_Page, whose representation is obtainable via the Hypertext Transfer Protocol (http:) from a network host whose domain name is example.org. (In this case, HTTP usually implies it to be in the form of HTML and related code. In practice, that is not necessarily the case, as HTTP allows specifying arbitrary formats in its header.) A URN is analogous to a person's name, while a URL is analogous to their street address. In other words, a URN identifies an item and a URL provides a method for finding it. Technical publications, especially standards produced by the IETF and by the W3C, normally reflect a view outlined in a W3C Recommendation of 30 July 2001, which acknowledges the precedence of the term URI rather than endorsing any formal subdivision into URL and URN. URL is a useful but informal concept: a URL is a type of URI that identifies a resource via a representation of its primary access mechanism (e.g., its network "location"), rather than
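The URL/URN distinction above maps directly onto generic URI syntax; for instance (a quick illustration using Python's standard `urllib.parse`, not part of the article):

```python
from urllib.parse import urlparse

# A URL carries both the access mechanism (scheme) and the network location.
url = urlparse("http://example.org/wiki/Main_Page")
assert (url.scheme, url.netloc, url.path) == ("http", "example.org", "/wiki/Main_Page")

# A URN parses as scheme "urn" plus an opaque namespace-specific string; no host at all.
urn = urlparse("urn:isbn:0-486-27557-4")
assert urn.scheme == "urn" and urn.netloc == "" and urn.path == "isbn:0-486-27557-4"
```

The empty `netloc` of the URN mirrors the "name, not street address" analogy: the identifier alone says nothing about where to obtain the resource.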
|
{"page_id": 32146, "title": "Uniform Resource Identifier"}
|
loop antenna peaks at right angles to the plane of the loop. As the frequency progresses to the second and third resonances, the perpendicular radiation fades and strong lobes near the plane of the loop arise.(p 235) At the lower shortwave frequencies, a full loop is physically quite large, and its only practical installation is "lying flat", with the plane of the loop horizontal to the ground and the antenna wire supported at the same relatively low height by masts along its perimeter. This results in horizontally-polarized radiation, which peaks toward the vertical near the lowest harmonic; that pattern is good for regional NVIS communication, but unfortunately is not generally useful for making continental-scale contacts. Above about 10 MHz, the loop is approximately 10 meters in diameter, and it becomes more practical for the loop to be mounted "standing up" – that is, with the plane of the loop vertical – in order to direct its main beam towards the horizon. If the frequency is high enough, then the loop might be small enough to attach to an antenna rotator, in order to rotate that direction as desired. Compared to a dipole or folded dipole, a vertical large loop wastes less power radiating toward the sky or ground, resulting in about 1.5 dB higher gain in the two favored horizontal directions. Additional gain (and a uni-directional radiation pattern) is usually obtained with an array of such elements either as a driven endfire array or in a Yagi configuration – with only one of the loops being driven by the feedline and all the remaining loops being "parasitic" reflectors and directors. The latter is widely used in amateur radio in the "quad" configuration (see photo). Low-frequency one-wavelength loops "lying down" are sometimes used for local NVIS communication. This is sometimes called
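The size figures quoted above follow from the full-wave condition, circumference ≈ one wavelength (a back-of-the-envelope sketch, not from the article):

```python
import math

C = 299_792_458  # speed of light, m/s

def full_wave_loop_diameter(freq_hz):
    """Diameter of a circular one-wavelength loop: circumference = wavelength = c / f."""
    wavelength = C / freq_hz
    return wavelength / math.pi

d = full_wave_loop_diameter(10e6)
print(f"{d:.1f} m")  # roughly 9.5 m at 10 MHz, matching the text's ~10 m figure
```

Below 10 MHz the diameter grows past 10 m, which is why lower-frequency full loops are usually installed "lying flat" on modest masts rather than standing up.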
|
{"page_id": 2159451, "title": "Loop antenna"}
|
are also based at Pilgrim Motorsports. In March 2018, the company announced that it had hired multi-award-winning vehicle design expert Paul Burgess from McLaren, who led the engineering team. == Parent Company == The parent company was founded in 2017 by entrepreneur William Sachiti at Aberystwyth University. The company was seeded with a £10,000 grant from the university as part of its InvEnterPrize scheme. In late 2016, the company partnered with Pilgrim Motorsports, a specialist UK car manufacturer. In early 2017, Academy of Robotics was announced to be part of NVIDIA's accelerator to further develop Kar-go. In mid-2017, the company sought funding via crowdfunding on the UK Financial Services Authority regulated platform Crowdcube and raised £320K at a £2 million post-money valuation for its Kar-go project. In August 2018, the company raised additional funding from private investors. A further offer of cash was made, which the company turned down; CEO William Sachiti stated that he did not want to dilute the value for existing shareholders. In November 2019, the company announced a partnership with Eurovia UK, part of the Vinci group. Eurovia announced its plans to test the Kar-go technology to automate the delivery of small plant equipment, tools, materials and other components to and from a highway work site, as well as the potential use of data collected by Kar-go as it travels to determine the condition of roads. The technology Academy of Robotics has developed is able to detect not only potential hazards in its path, such as the edge of a road in snowy conditions, but also the likely causes of deterioration on road surfaces. In 2020 the company became the first UK company to have a vehicle designed for autonomous delivery licensed to drive on the roads in the UK.
|
{"page_id": 54624237, "title": "Kar-go"}
|
Tetramethylxylylene diisocyanate (TMXDI) is an organic compound with the formula C6H4(CMe2NCO)2 (Me = CH3). Introduced in the 1980s by American Cyanamid, the molecule features two isocyanate groups. TMXDI is generally classified as an aliphatic isocyanate; aliphatic isocyanates are generally more UV-stable than their aromatic counterparts. == Production == Many isocyanates are produced by phosgenation of amines, but TMXDI is not. It is produced by the reaction of diisopropenylbenzene with hydrogen chloride followed by isocyanic acid: C6H4(C(Me)=CH2)2 + 2 HCl → C6H4(CMe2Cl)2 C6H4(CMe2Cl)2 + 2 HNCO → C6H4(CMe2NCO)2 + 2 HCl == Uses == A key use for TMXDI is in manufacturing polyurethane prepolymers. It is also used to manufacture polyurethane dispersions (PUDs). These materials are then further used to manufacture coatings, adhesives, sealants and elastomers. TMXDI has been considered as a replacement for isophorone diisocyanate (IPDI). IPDI has a molecular weight of 222.3 and thus an NCO equivalent weight of 111.15. TMXDI has a molecular weight of 244.3 and thus an equivalent weight of 122.15. Thus, by weight, approximately 10% more TMXDI is required than for the equivalent prepolymer based on IPDI. This difference increases cost. When making polyurethane dispersions (PUDs), TMXDI is advantageous. Being sterically hindered, its NCO groups are slower-reacting, which is an advantage when dispersing a prepolymer in water to make a PUD: it reduces side reactions and allows more time for the dispersion stage before the mix is chain-extended, usually with a diamine. It has even found use in a rocket propellant binder by the US military. == Safety == Extensive safety data have become available. == See also == Hexamethylene diisocyanate Isophorone diisocyanate Methylene diphenyl diisocyanate Toluene diisocyanate == References == == External links == Technical Data Sheet TMXDI Safety Data Sheet TMXDI Incorez website
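The equivalent-weight comparison above is simple arithmetic (a quick check using the molecular weights given in the text; both diisocyanates carry two NCO groups per molecule):

```python
# NCO equivalent weight = molecular weight / number of NCO groups (2 for both)
ipdi_eq = 222.3 / 2    # 111.15 g per NCO equivalent
tmxdi_eq = 244.3 / 2   # 122.15 g per NCO equivalent

extra = tmxdi_eq / ipdi_eq - 1.0
print(f"{extra:.1%} more TMXDI by weight per NCO equivalent")  # about 10%
```

This ~10% mass penalty per NCO equivalent is the cost difference the text refers to.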
|
{"page_id": 58860755, "title": "Tetramethylxylylene diisocyanate"}
|
In analogy with the Merian formula, the expected period of the internal wave can be expressed as: $$T = \frac{2L}{c} \quad\text{with}\quad c^{2} = g\,\frac{\rho_{2}-\rho_{1}}{\rho_{2}}\,\frac{h_{1}h_{2}}{h_{1}+h_{2}},$$ where T is the natural period, L is the length of the water body, $h_{1}, h_{2}$ the average thicknesses of the two layers separated by stratification (e.g. epilimnion and hypolimnion), $\rho_{1}, \rho_{2}$ the densities of these same two layers, and g the acceleration of gravity. As the thermocline moves up and down a sloping lake bed, it creates a 'swash zone', where temperatures can vary rapidly, potentially affecting fish habitat. As the thermocline rises up a sloping lake bed, it can also cause benthic turbulence by convective overturning, whereas the falling thermocline experiences greater stratification and low turbulence at the lake bed. Internal waves can also degenerate into non-linear internal waves on sloping lake beds. When such non-linear waves break on the lake bed, they can be an important source of turbulence and have the potential for sediment resuspension. === Cave seiches === On September 19, 2022, a seiche reaching 4 feet (1.2 metres) occurred at Devils Hole at Death Valley National Park in the U.S. after a 7.6-magnitude earthquake hit western Mexico, about 1,500 miles (2,400 kilometres) away. Seiches were also observed in the cave after powerful earthquakes in 2012, 2018 and 2019. == Engineering for seiche protection == Engineers consider seiche phenomena in the design of flood protection works (e.g., Saint Petersburg Dam), reservoirs and dams (e.g., Grand Coulee Dam), potable water storage basins, harbours, and even spent nuclear fuel storage basins. Structures and
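As a worked example of the Merian-type formula above (illustrative numbers, not from the article: a 10 km lake with a 10 m epilimnion over a 20 m hypolimnion and a 0.2% density difference):

```python
import math

def internal_seiche_period(L, h1, h2, rho1, rho2, g=9.81):
    """Period T = 2L/c of an internal seiche in a two-layer stratified basin."""
    c = math.sqrt(g * (rho2 - rho1) / rho2 * h1 * h2 / (h1 + h2))
    return 2 * L / c

T = internal_seiche_period(L=10_000, h1=10, h2=20, rho1=998.0, rho2=1000.0)
print(f"{T / 3600:.1f} h")  # about 15 hours
```

Because the density contrast across a thermocline is tiny compared with that at a free surface, internal-wave speeds are low and the resulting periods run to many hours, consistent with the slow thermocline oscillations described in the text.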
|
{"page_id": 440906, "title": "Seiche"}
|
to scare the apes off and alternatively force their prisoners to fight to the death against their will. In The Toxic Avenger and the animated Toxic Crusaders cartoon, "Toxie" and his friends are deformed mutants of super-human size and strength whose mutations are the results of exposure to toxic waste, radiation, and other environmental pollutants. In the Bollywood film Krrish 3, many mutants appeared. These mutants were called Maanvars, created by an evil scientist named Kaal. In Sign Gene: The First Deaf Superheroes, the mutants happened to be deaf and have superhuman powers through the use of sign language. The leading character Tom Clerc is the fourth great-grandson of the legendary Laurent Clerc, the father of American Sign Language. In the 1955 science fiction film This Island Earth, the character Exeter is badly injured by a Mutant while fleeing the planet Metaluna aboard a flying saucer, accompanied by other characters Cal and Ruth. The Mutant has also boarded the saucer and attacks Ruth, but dies as a result of pressure differences on the journey back to Earth. === Print media === A December 1953 article in Mechanix Illustrated Magazine called "How Nuclear Radiation Can Change Our Race" warned that in the event of an "Atom War", a mutant species of supermen might arise to assist—or to dominate—humanity. The article was written by "O. O. Binder", and opened with a two-page illustration drawn by comic book artist Kurt Schaffenberger, which shows bald, large-craniumed mutants either helping humanity with their superior intellects (in a small section of the picture) or dominating mankind as slavemasters (in the much bigger splash image). The Mutant Chronicle novels are based on a tabletop role-playing game originally published in 1993. It was made into a film (which has very little to do with the novels or the RPG)
|
{"page_id": 1003628, "title": "Mutants in fiction"}
|
of an automobile, a schematic display shows a heavily armed humanoid-looking robot with wheeled legs that converts into an ambiguous off-road vehicle. KARR has the ability to transform from vehicle mode into a large wheeled robotic exoskeleton, instead of KITT's "Attack Mode". The vehicle mode of KARR is a 2008–2009 Shelby GT500KR with the license plate initials K.R. KARR is once again voiced by Peter Cullen, who also voiced the first appearance of KARR in "Trust Doesn't Rust". KARR was originally designed for military combat. Armed with twin machine guns on each shoulder and missiles, the exoskeleton combines with a human being for easier control. KARR is visually identical to KITT in this iteration, lacking the two-tone black and silver paint job of the 1980s version of KARR. The only difference is the scanner and voice box, which are yellow compared to KITT's red. Once again, similar to the original character, this entirely different "KARR" project (2.0) had an A.I. that was programmed for self-preservation, and it was deactivated and placed in storage after it reprogrammed itself and killed seven people. When KARR finally appears again in the episode "Knight to King's Pawn", he takes a form once again similar to KITT as a 2008 Ford Shelby Mustang GT500KR, and is once again all black like the KITT 3000; the only differences are his yellow scanner bar and yellow voice module, which in the original series were closer to amber-yellow and yellow-green, respectively. KARR's scanner sounds much lower, with much more of an echo. The sound is especially noticeable when KARR is chasing down KITT while he is still in Ford Mustang mode. == Reception and significance == KITT, despite being just an AI without a body, has proven
|
{"page_id": 1829947, "title": "KITT"}
|
months ago | prev | next [–] I'm not too worried. As long as someone needs to be held accountable, you will need humans. As long as you're doing novel work not in the training set, you will probably need humans. chirau 5 months ago | prev | next [–] With every new technology comes new challenges. The role will evolve to tackle those new challenges as long as they are software/programming/engineering specific ilaksh 5 months ago | prev | next [–] It's not going to be about careers anymore. It's going to be about leveraging AI and robotics as very cheap labor to provide goods and services. whateveracct 5 months ago | prev | next [–] LLMs have not affected my day-to-day at all. I'm a senior eng getting paid top percentile using a few niche technologies at a high profile company. kixpanganiban 5 months ago | prev | next [–] We had this same question when IDEs and autocomplete became a thing. We're still around today, just doing work that's a level harder :) cpill 5 months ago | prev | next [–] if things go as you predict then the models are going to start to eat their own tail in terms of training data. because of the nature of LLMs training, they can't come up with anything truly original. if you have tried to do something even slightly novel then you'll know what I mean. web development might need taken out, if front-end Devs didn't perpetually reinvent the FE :P wolvesechoes 5 months ago | prev | next [–] Writing code and making commits is only a part of my work. I also have to know ODEs/DAEs, numerical solvers, symbolic transformations, thermodynamics, fluid dynamics, dynamic systems, controls theory etc. So basically math and physics. LLMs are
|
{"source": 1722, "title": "from dpo"}
|
the dynamics parameters, since high variability is crucial to regularize the agent's behavior but notoriously leads to overly conservative policies when randomizing excessively. In this paper, we propose a novel approach to address sim-to-real transfer, which automatically shapes dynamics distributions during training in simulation without requiring real-world data. We introduce DOmain RAndomization via Entropy MaximizatiON (DORAEMON), a constrained optimization problem that directly maximizes the entropy of the training distribution while retaining generalization capabilities. In achieving this, DORAEMON gradually increases the diversity of sampled dynamics parameters as long as the probability of success of the current policy is sufficiently high. We empirically validate the consistent benefits of DORAEMON in obtaining highly adaptive and generalizable policies, i.e., solving the task at hand across the widest range of dynamics parameters, as opposed to representative baselines from the DR literature. Notably, we also demonstrate the Sim2Real applicability of DORAEMON through its successful zero-shot transfer in a robotic manipulation setup under unknown real-world parameters. _Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu_ systems demand substantial training data, often resorting to external sources. Nevertheless, this practice renders them vulnerable to backdoor poisoning attacks. Prior backdoor defense strategies have primarily focused on the identification of backdoored models or poisoned data characteristics, typically operating under the assumption of access to clean data. In this work, we delve into a relatively underexplored challenge: the automatic identification of backdoor data within a poisoned dataset, all under realistic conditions, *i.e.*, without the need for additional clean data or manually defining a threshold for backdoor detection.
We draw an inspiration from the scaled prediction consistency (SPC) technique, which exposes the prediction invariance of poisoned data to an input scaling factor. Based on this, we resolve the backdoor data identification problem as a hierarchical data
|
{"source": 3883, "title": "from dpo"}
|
collapse-binding property, the theorem follows. We now give the full proof: Proof of Theorem 30. Let (C0, C1) be an adversary in the sense of Definition 29 (against sum-binding). Let p0 := p0(C0, C1) and p1 := p1(C0, C1). We have to show that the advantage ε := p0 + p1 − 1 is upper bounded by a negligible function. Without loss of generality, we can assume that C1 is unitary. More precisely, C1(S, m) applies a unitary circuit Um to S, resulting in two output registers U and E. Then he measures U in the computational basis and returns the outcome u. With that notation, we can express the game from Definition 29 as a circuit, (6), in which m is sampled uniformly from {0, 1} (renaming the register S to S′ to avoid name clashes later; the circuit diagram is not reproduced here). (Here and in the following, M denotes a measurement in the computational basis.) In that circuit, Pr[verify(k, c, m, u) = 1] = δ := (1/2)(1 + ε). Let M denote a one-qubit quantum register, and define UM : |m⟩M ⊗ |Ψ⟩S′ ↦ |m⟩M ⊗ Um|Ψ⟩S′. That is, UM is a unitary with two input registers M, S′, and three output registers M, U, E, which is realized by applying U0 or U1 to S′, depending on whether M is |0⟩ or |1⟩. Let M+ be the binary measurement that checks whether register M is in state |+⟩ = (1/√2)|0⟩ + (1/√2)|1⟩. Formally, M+ is defined by the projector P+ := |+⟩⟨+| on M. Recall that Vc from Lemma 6 is the measurement defined by the projector Pc := Σ_{m,u : verify(k,c,m,u)=1} |m⟩⟨m|
|
{"source": 5862, "title": "from dpo"}
|
Spectrochimica Acta Part B: Atomic Spectroscopy is a monthly peer-reviewed scientific journal covering spectroscopy. The journal was established in 1939 as Spectrochimica Acta. In 1967, Spectrochimica Acta was split into two journals, Spectrochimica Acta Part A: Molecular and Biomolecular Spectroscopy and Spectrochimica Acta Part B: Atomic Spectroscopy. Part B obtained its current title around the time of the split. According to the Journal Citation Reports, the journal has a 2019 impact factor of 3.086. As of April 2024 the editor-in-chief is Alessandro De Giacomo of the University of Bari, Italy. == See also == Elsevier / Spectrochimica Acta Atomic Spectroscopy Award == References == == External links == Official website Spectroch. Acta B at CAS Source Index
|
{"page_id": 40922108, "title": "Spectrochimica Acta Part B"}
|
$[\mathrm{j}_{k}, \mathrm{j}_{l}] = i\varepsilon_{klm}\,\mathrm{j}_{m}$, where $\varepsilon_{klm}$ is the Levi-Civita symbol. Together the three operators define a vector operator, a rank one Cartesian tensor operator, $\mathbf{j} = (\mathrm{j_{x}}, \mathrm{j_{y}}, \mathrm{j_{z}})$. It is also known as a spherical vector, since it is also a spherical tensor operator. It is only for rank one that spherical tensor operators coincide with the Cartesian tensor operators. By developing this concept further, one can define another operator j2 as the inner product of j with itself: $\mathbf{j}^{2} = \mathrm{j_{x}^{2}} + \mathrm{j_{y}^{2}} + \mathrm{j_{z}^{2}}.$ This is an example of a Casimir operator. It is diagonal and its eigenvalue characterizes the particular irreducible representation of the angular momentum algebra $\mathfrak{so}(3,\mathbb{R}) \cong \mathfrak{su}(2)$. This is physically interpreted as the square of the total angular momentum of the states on which the representation acts. One can also define raising (j+) and lowering (j−) operators, the so-called ladder operators, $\mathrm{j_{\pm}} = \mathrm{j_{x}} \pm i\,\mathrm{j_{y}}.$ == Spherical basis for angular momentum eigenstates == It can be shown from the above definitions that j2 commutes with jx, jy, and jz: $[\mathbf{j}^{2}, \mathrm{j}_{k}] = 0$ for $k \in \{\mathrm{x}, \mathrm{y}, \mathrm{z}\}.$ When two Hermitian operators commute, a common set of eigenstates exists. Conventionally, j2 and jz are chosen. From the commutation relations, the possible eigenvalues can
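These commutation relations are easy to verify numerically, e.g. in the spin-1 matrix representation (a quick sketch, with ħ = 1 and the standard |j=1, m⟩ basis; not part of the article):

```python
import numpy as np

# Spin-1 matrices in the basis |m⟩ = |1⟩, |0⟩, |-1⟩, with hbar = 1.
jz = np.diag([1.0, 0.0, -1.0])
jplus = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
jx = (jplus + jplus.T) / 2      # jx = (j+ + j-)/2
jy = (jplus - jplus.T) / 2j     # jy = (j+ - j-)/(2i)

j2 = jx @ jx + jy @ jy + jz @ jz

def comm(a, b):
    return a @ b - b @ a

# [jx, jy] = i jz, and the Casimir operator j^2 commutes with every component.
assert np.allclose(comm(jx, jy), 1j * jz)
for jk in (jx, jy, jz):
    assert np.allclose(comm(j2, jk), 0)

# On this irreducible representation, j^2 acts as j(j+1) = 2 times the identity.
assert np.allclose(j2, 2 * np.eye(3))
```

The last assertion illustrates the statement in the text: the Casimir j² is diagonal, with a single eigenvalue j(j+1) characterizing the representation.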
|
{"page_id": 1074990, "title": "Clebsch–Gordan coefficients"}
|
Meta-cold dark matter, also known as mCDM, is a form of cold dark matter proposed to solve the cuspy halo problem. It consists of particles "that emerge relatively late in cosmic time (z ≲ 1000) and are born non-relativistic from the decays of cold particles". == Notes ==
|
{"page_id": 40750190, "title": "Meta-cold dark matter"}
|
produced microchip, Neil Harbisson and Moon Ribas, the first people in the world to be recognised as cyborgs, Don Tapscott, etc. The focus on astronomy at the event has drawn astronauts Neil Armstrong, Jean-François Clervoy, Ellen Baker, Buzz Aldrin, Marcos Pontes, and Rodolfo Neri Vela to Campus Party. The organization's work with bridging the digital divide has attracted politicians and government figures, including High Commissioner for the United Nations for the Millennium Objective Eveline Herfkens, Neelie Kroes, Brazilian Presidential candidates Marina Silva and Dilma Rousseff, Gilberto Gil, a Grammy Award-winning musician and former Brazilian Minister of Culture, and ex-Mayor of New York City, Rudolph Giuliani. President of the Robotics Society of America, David Calkins, video game industry icon Tommy Tallarico, founding member of Blizzard Entertainment, Frank Pearce, media theorist Don Tapscott, and Linux International Executive Director Jon "maddog" Hall have all spoken at the event. == Editions == === Campus Party Spain === The Spanish edition of Campus Party has been held at the Colegio Miguel Hernández, Ceulaj, and the Municipal Sport Arena of Benalmádena in Málaga, Spain; and at both the Valencia County Fair and the City of Arts and Sciences in Valencia over the past 15 years. ==== 2011 ==== In July 2011 the 15th edition of Campus Party Spain will be held at the City of Arts and Sciences in Valencia. Over $350,000 will be awarded for competition winners during the week-long event. Kevin Mitnick, David Calkins, Amira Al Hussaini, Carlos Schmukler, Gianluca Fratellini, Jon "Maddog" Hall, David O'Reilly, Stuart Clark, Julien Fourgeaud and David Bravo are confirmed speakers at the event. === Expansion === In 2008 the Campus Party crossed the Atlantic Ocean to be celebrated in the Americas, the first Latin American edition was held in São Paulo in February, and the second in Bogotá
|
{"page_id": 18102288, "title": "Campus Party"}
|
sensing of a diverse array of mixtures in a variety of conditions. Chemical sensor arrays are often noted as mimicking the five senses—audition, gustation, olfaction, somatosensation, and vision—because the combinatorial responses of the different array components to a particular analyte create fingerprints for specific analytes or mixtures using both targeted molecular interactions and pattern recognition. === History === The history of chemical sensor arrays is closely linked with the development of other chemical sensor technologies, with research in the area of electronic chemical sensors picking up in the 1960s with the demonstration of metal-oxide semiconductor sensors capable of sensing analytes such as oxygen. Humans are capable of identifying and discerning between an estimated 10,000 scents or more, while only possessing 400 olfactory receptors. Signal processing in the brain of individual array component responses of olfactory receptors results in pattern recognition for discrimination of a particular scent. One of the design aims of many chemical sensor arrays is to mimic the performance of olfaction to design an electronic nose integrated with a variety of materials. Combining chemical sensor arrays with pattern recognition methods mimics biological sensory recognition methods. See Figure 1. Commercially available electronic nose systems exist and are used in the food industry for quality control. Current research efforts demonstrate the introduction of the electronic nose principle into environmental monitoring and medicine both as commercial instruments as well as in consumer-grade wearable electronic devices. At the center of chemical sensor arrays is the principle that different analytes will interact differently with a variety of materials. As such, any sort of material may be used in a sensor array, so long as it responds differently to different analytes or mixtures. 
From this idea, cross-reactive sensor arrays have been the focus of chemical sensor array development for their broad compatibility with the
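The fingerprint-plus-pattern-recognition idea above can be sketched in a few lines. The sensor responses and analyte names below are invented for illustration; a real array would use a trained statistical classifier rather than nearest-fingerprint matching.

```python
# Minimal sketch of cross-reactive sensor-array pattern recognition.
# Each analyte produces a "fingerprint": one response value per array element.

def classify(fingerprint, library):
    """Return the library analyte whose calibration fingerprint is
    closest (Euclidean distance) to the measured one."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(library, key=lambda name: dist(fingerprint, library[name]))

# Hypothetical calibration library: responses of 4 cross-reactive sensors.
library = {
    "ethanol": [0.9, 0.2, 0.4, 0.1],
    "acetone": [0.3, 0.8, 0.1, 0.5],
    "ammonia": [0.1, 0.1, 0.9, 0.7],
}

print(classify([0.85, 0.25, 0.35, 0.15], library))  # ethanol
```

A noisy measurement is still matched to the nearest stored fingerprint, which is the essence of the combinatorial-response idea.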
|
{"page_id": 66888952, "title": "Chemical sensor array"}
|
3,4-Dihydroxystyrene (DHS) is a centrally-acting inhibitor of the enzyme phenylalanine hydroxylase (PH). It is likely that DHS and other PH inhibitors will never have clinical applications on account of their capacity for inducing hyperphenylalaninemia and phenylketonuria. == See also == Phenylalanine hydroxylase == References ==
|
{"page_id": 23142974, "title": "3,4-Dihydroxystyrene"}
|
to be collinear or non-orthogonal. Only in experimental conditions, via SP data, can performance and price be varied independently and have their effects decomposed. An experimental design (below) in a Choice Experiment is a strict scheme for controlling and presenting hypothetical scenarios, or choice sets, to respondents. For the same experiment, different designs could be used, each with different properties. The best design depends on the objectives of the exercise. It is the experimental design that drives the experiment and the ultimate capabilities of the model. Many very efficient designs exist in the public domain that allow near optimal experiments to be performed. For example, the Latin square 16^17 design allows the estimation of all main effects of a product that could have up to 16^17 (approximately 295 followed by eighteen zeros) configurations. Furthermore, this could be achieved within a sample frame of only around 256 respondents. Below is an example of a much smaller design. This is a 3^4 main-effects design. This design would allow the estimation of main effects utilities from 81 (3^4) possible product configurations, assuming all higher order interactions are zero. A sample of around 20 respondents could model the main effects of all 81 possible product configurations with statistically significant results. Some examples of other experimental designs commonly used: balanced incomplete block designs (BIBD), random designs, main effects designs, higher order interaction designs, and full factorial designs. More recently, efficient designs have been produced. These typically minimise functions of the variance of the (unknown but estimated) parameters. A common function is the D-efficiency of the parameters. The aim of these designs is to reduce the sample size required to achieve statistical significance of the estimated utility parameters. Such designs have often incorporated Bayesian priors for the parameters, to further improve statistical precision. 
Highly efficient designs have become extremely
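The 3^4 example can be checked directly: four attributes at three levels each give 81 configurations, while a main-effects-only model needs far fewer parameters. The attribute names and levels below are hypothetical.

```python
# Sketch: a product with four attributes, each at three levels, has
# 3**4 = 81 possible configurations (the full factorial). A main-effects
# design estimates only 1 + 4*(3-1) = 9 utility parameters, which is why
# a small fraction of the 81 runs suffices.
from itertools import product

attributes = {
    "price":  [10, 15, 20],
    "size":   ["S", "M", "L"],
    "colour": ["red", "blue", "green"],
    "brand":  ["A", "B", "C"],
}

configurations = list(product(*attributes.values()))
n_main_effect_params = 1 + sum(len(v) - 1 for v in attributes.values())

print(len(configurations))   # 81
print(n_main_effect_params)  # 9
```

The gap between 81 runs and 9 parameters is what efficient (e.g. D-optimal) designs exploit to shrink the required sample.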
|
{"page_id": 15841082, "title": "Choice modelling"}
|
systems for production and assembly lines. Here, Grob is particularly active in the planning and construction of complete manufacturing systems for engines, vehicle transmissions, injection pumps, and similar components. The universal machines are used in various sectors, including the automotive industry, energy and medical technology, aerospace, as well as machinery, tooling, and mould making. The 4- and 5-axis universal machines, for example, can machine or manufacture delicate components. Another mainstay is the construction of special machines for transfer lines. Among the assembly systems distributed by Grob-Werke are systems for stator production using hairpin technology, rotor production, and battery module assembly, which enable fully automated manufacturing for vehicle drives. === Machining technology === Grob-Werke offers various machining concepts used by the automotive industry. These systems consist of individually modular machining centers as well as custom machines and are customisable. For example, the G520F of the F-Series, introduced in 2023, is designed for the machining of battery housings and lightweight components such as frame structures and chassis parts. === Electromobility and industrial electric motors === Grob entered into the development of electromobility and adapted the company structure accordingly. This division primarily develops and produces products for battery technology and electric drive trains, including battery pack systems, battery module systems and large-scale systems for battery cell production. Other areas include the complete assembly of electric motors as well as stator and rotor assembly with permanent magnet technology. The company also develops individual processes and systems, for example for processing structural and chassis parts or coating engine components. The electric mobility sector accounts for over 60% of Grob's business. 
Customers include European, American, and Asian car manufacturers, with a predominant presence of emerging Chinese manufacturers in the Asian market. === Additive manufacturing === In the field of additive manufacturing, Grob developed the liquid metal
|
{"page_id": 77224234, "title": "Grob-Werke"}
|
stock market, but he recovers from the lobotomy and exploits the situation to his own benefit until he is captured and taken into custody by S.H.I.E.L.D. In GLX-Mas Special #1, MODOK and A.I.M. fought Dum Dum Dugan and his S.H.I.E.L.D. squad, but were defeated by Squirrel Girl and her squirrel sidekick Tippy-Toe. MODOK then seeks a sample of the cybernetic species the Phalanx, and after brief encounters with the mutant superhero team the X-Men, battles Ms. Marvel once again, with the heroine this time aided by fellow Avenger Wonder Man during an elaborate scheme by renegade A.I.M. branches to kill MODOK, with one of the rogue A.I.M. agents being MODOK's long-lost son, who seeks revenge for his abandonment. Employing an elaborate scheme and double-cross involving several supervillains, MODOK restores his personal wealth and power and re-establishes himself as the leader of A.I.M. once again. MODOK was then seen in Puerto Rico attempting to create an army of genetically enhanced monkeys called A.I.Monkeys to eliminate the recession in A.I.M., until he was defeated by Mister Fantastic, the Invisible Woman and the rookie Puerto Rican superhero known as El Vejigante. It is revealed that MODOK was involved in the creation of both the Red Hulk and the Red She-Hulk and is a member of the Intelligencia, a secret organization of genius-level supervillains. During the "Fall of the Hulks" storyline, the Intelligencia captured some of the smartest men in the world and brought about the events that would lead up to the "World War Hulks" storyline. When several heroes are subjected by the Intelligencia to the Cathexis ray, which can transfer radiant energy from one subject to another, Amadeus Cho is affected as well. Cho gains the ability to warp reality within a 10-foot radius and restores MODOK's human form, leaving him amnesiac.
|
{"page_id": 30863029, "title": "MODOK"}
|
Wireless failover is an automated function in telephone networks and computer networks where a standard hardwired connection is switched to a redundant wireless connection upon failure or irregular closure of a default hardwired connection or component in the network such as a router, server, or computer. Wireless failover is a business continuity function. That is, it allows businesses to continue operations even in the event of a network failure. In retail, wireless failover is typically used when a standard connection for a point of sale credit card machine fails. In this instance, the wireless failover allows business transactions to continue to be processed, ensuring business continuity. == Infrastructure == Wireless failover solutions are offered in different forms. A radio may be installed into the network. Examples of this may include a 3G or 4G network connection. Additionally, 3G or 4G network cards may be used. Also, a router may be used with an Ethernet connection. == References ==
|
{"page_id": 37455194, "title": "Wireless failover"}
|
the ground truth. Use online WordNet to understand the definition of each of the senses. 13 Have a partner do the same annotations, and compute the raw rate of agreement, expected chance rate of agreement, and Cohen’s kappa. 6. Download the Pang and Lee movie review data, currently available from http://www.cs.cornell.edu/people/pabo/movie-review-data/. Hold out a randomly-selected 400 reviews as a test set. Download a sentiment lexicon, such as the one currently available from Bing Liu. Tokenize the data, and classify each document as positive iff it has more positive sentiment words than negative sentiment words. Compute the accuracy and F-measure on detecting positive reviews on the test set, using this lexicon-based classifier. Then train a discriminative classifier (averaged perceptron or logistic regression) on the training set, and compute its accuracy and F-measure on the test set. Determine whether the differences are statistically significant, using two-tailed hypothesis tests: binomial for the difference in accuracy, and bootstrap for the difference in macro-F-measure. Jacob Eisenstein. Draft of November 13, 2018. The remaining problems will require you to build a classifier and test its properties. Pick a multi-class text classification dataset that is not already tokenized. One example is a dataset of New York Times headlines and topics (Boydstun, 2013). 14 Divide your data into training (60%), development (20%), and test sets (20%), if no such division already exists. If your dataset is very large, you may want to focus on a few thousand instances at first. 7. Compare various vocabulary sizes of 10^2, 10^3, 10^4, 10^5, using the most frequent words in each case (you may use any reasonable tokenizer). Train logistic regression classifiers for each vocabulary size,
|
{"source": 972, "title": "from dpo"}
|
Number is encrypted in such a way that Apple can’t access it. The Device Account Number is unique and different from most credit or debit card numbers, in that the card issuer or payment network can prevent its use on a magnetic stripe card, over the phone, or on websites. The Device Account Number in the Secure Element is never stored on Apple Pay servers or backed up to iCloud, and it’s isolated from:
• Devices that use biometric authentication
• Apple Watch
• Mac computers with Apple silicon that use the Magic Keyboard with Touch ID
Users can add cards to Apple Watch for Apple Pay using either the Watch app on their iPhone or the card issuer’s app. To add a card to Apple Watch:
• When paired with an iPhone: The watch must be within Bluetooth communications range
• When set up without an iPhone: The watch must have internet access using Wi-Fi
Cards are specifically enrolled for use with Apple Watch and have their own Device Account Numbers, which are stored within the Secure Element on the Apple Watch. When credit, debit, or prepaid cards (including store cards) are added, they appear in a list of cards during Setup Assistant on devices that are signed in to the same iCloud account. These cards remain in this list for as long as they are active on at least one device. Cards are removed from this list after they have been removed from all devices for 7 days. This feature requires two-factor authentication to be enabled on the respective iCloud account. Apple Platform Security
## Adding credit or debit cards to Apple Pay
Credit cards can be manually added to Apple Pay in Apple devices. Adding credit or debit cards manually To add a card manually, users
|
{"source": 2332, "title": "from dpo"}
|
were amortised (note 31) and 246,911,504 shares at an average price of EUR 3.34 per share have been transferred, of which 6,617,008 shares correspond to the donation made by Banco Santander to Fundación Banco Santander with extraordinary character. At 31 December 2023, the Group holds 297,815,673 shares of the Bank's issued share capital (1.84%). The effect on equity, net of tax, arising from the purchase and sale of Bank shares is a profit of EUR 13 million in 2023 (EUR 7 million and EUR 23 million profit in 2022 and 2021, respectively). # 35. Memorandum items Memorandum items relate to balances representing rights, obligations and other legal situations that in the future may have an impact on net assets, as well as any other balances needed to reflect all transactions performed by the consolidated entities although they may not impinge on their net assets. a) Guarantees and contingent commitments granted Contingent liabilities include all transactions under which an entity guarantees the obligations of a third party and which result from financial guarantees granted by the entity or from other types of contracts. The detail is as follows:

| (EUR million) | 2023 | 2022 | 2021 |
| --- | --- | --- | --- |
| Loans commitment granted | 279,589 | 274,075 | 262,737 |
| Of which impaired | 406 | 653 | 615 |
| Financial guarantees granted | 15,435 | 12,856 | 10,758 |
| Of which impaired | 578 | 521 | 188 |
| Financial guarantees | 15,400 | 12,813 | 10,715 |
| Credit derivatives sold | 35 | 43 | 43 |
| Other commitments granted | 113,273 | 92,672 | 75,733 |
| Of which impaired | 542 | 608 | 781 |
| Technical guarantees | 57,363 | 50,508 | 40,158 |
| Other | 55,910 | 42,164 | 35,575 |

The breakdown as at 31 December 2023 of the exposures and the provision fund out of balance sheet by impairment stage is EUR 398,243 million and EUR 302 million (EUR 370,729 million and EUR 331 million in 2022 and EUR 337,113 million and EUR
|
{"source": 4951, "title": "from dpo"}
|
and x is a non-trivial solution to the equation x^2 ≡ 1 (mod N) in the range 1 ≤ x ≤ N, that is, neither x ≡ 1 (mod N) nor x ≡ N − 1 ≡ −1 (mod N). Then at least one of gcd(x − 1, N) and gcd(x + 1, N) is a non-trivial factor of N that can be computed using O(L^3) operations. Theorem 5.3: Suppose N = p1^α1 ⋯ pm^αm is the prime factorization of an odd composite positive integer. Let x be an integer chosen uniformly at random, subject to the requirements that 1 ≤ x ≤ N − 1 and x is co-prime to N. Let r be the order of x modulo N. Then p(r is even and x^(r/2) ≠ −1 (mod N)) ≥ 1 − 1/2^m. (5.60) Theorems 5.2 and 5.3 can be combined to give an algorithm which, with high probability, returns a non-trivial factor of any composite N. All the steps in the algorithm can be performed efficiently on a classical computer except (so far as is known today) an order-finding ‘subroutine’ which is used by the algorithm. By repeating the procedure we may find a complete prime factorization of N. The algorithm is summarized below.
Algorithm: Reduction of factoring to order-finding
Inputs: A composite number N.
Outputs: A non-trivial factor of N.
Runtime: O((log N)^3) operations. Succeeds with probability O(1).
Procedure:
1. If N is even, return the factor 2.
2. Determine whether N = a^b for integers a ≥ 1 and b ≥ 2, and if so return the factor a (uses the classical algorithm of Exercise 5.17).
3. Randomly choose x in
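The classical reduction can be sketched directly; the order-finding step, done here by brute force, is the one part a quantum computer would accelerate.

```python
# Sketch of the reduction of factoring to order-finding. Brute-force
# order-finding is exponential; on a quantum computer it would be the
# only quantum subroutine.
from math import gcd
from random import randrange

def order(x, N):
    """Smallest r >= 1 with x**r = 1 (mod N); assumes gcd(x, N) == 1."""
    r, y = 1, x % N
    while y != 1:
        y = (y * x) % N
        r += 1
    return r

def find_factor(N):
    """Return a non-trivial factor of an odd composite N (not a prime power)."""
    while True:
        x = randrange(2, N)
        d = gcd(x, N)
        if d > 1:
            return d                  # lucky: x already shares a factor with N
        r = order(x, N)
        if r % 2 == 0 and pow(x, r // 2, N) != N - 1:
            y = pow(x, r // 2, N)     # non-trivial square root of 1 mod N
            for c in (gcd(y - 1, N), gcd(y + 1, N)):
                if 1 < c < N:
                    return c

f = find_factor(15)
print(f, 15 // f)  # e.g. 3 5
```

Theorem 5.3 guarantees the even-order condition holds with probability at least 1 − 1/2^m per trial, so the loop terminates quickly in expectation.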
|
{"source": 6248, "title": "from dpo"}
|
is larger, and with a lower airspeed the radius is smaller. This formula also shows that the radius of turn decreases with the angle of bank. With a higher angle of bank the radius of turn is smaller, and with a lower angle of bank the radius is greater. In a banked turn at constant altitude, the load factor is equal to 1/cos θ. We can see that the load factor in straight and level flight is 1, since cos(0) = 1, and to generate sufficient lift to maintain constant altitude, the load factor must approach infinity as the bank angle approaches 90° and cos θ approaches 0. This is physically impossible, because structural limitations of the aircraft or physical endurance of the occupants will be exceeded well before then. == Banked turn in athletics == Most indoor track and field venues have banked turns since the tracks are smaller than outdoor tracks. The tight turns on these small tracks are usually banked to allow athletes to lean inward and neutralize the centrifugal force as they race around the curve; the lean is especially noticeable on sprint events. == See also == == References == == Further reading == Surface vehicles Serway, Raymond. Physics for Scientists and Engineers. Cengage Learning, 2010. Health and Safety Issues, the EU Roadex III project on health and safety issues raised by poorly maintained road networks. Aeronautics Kermode, A.C. (1972) Mechanics of Flight, Chapter 8, 10th Edition, Longman Group Limited, London ISBN 0-582-23740-8 Clancy, L.J. (1975), Aerodynamics, Pitman Publishing Limited, London ISBN 0-273-01120-0 Hurt, H.H. Jr, (1960), Aerodynamics for Naval Aviators, A National Flightshop Reprint,
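A quick numeric sketch of the banked-turn relations, using the standard level-turn formulas r = v²/(g·tan θ) for radius and n = 1/cos θ for load factor:

```python
# Turn radius and load factor in a constant-altitude banked turn.
from math import tan, cos, radians

def turn_radius(v, bank_deg, g=9.81):
    """Radius in metres for speed v (m/s) and bank angle in degrees."""
    return v ** 2 / (g * tan(radians(bank_deg)))

def load_factor(bank_deg):
    """Load factor n = 1 / cos(theta) for a level banked turn."""
    return 1 / cos(radians(bank_deg))

# At 60 degrees of bank the load factor is 2 regardless of airspeed.
print(round(load_factor(60), 3))   # 2.0
# Steeper bank at the same speed gives a tighter turn.
print(turn_radius(100, 60) < turn_radius(100, 30))  # True
```

The blow-up near 90° is visible numerically: `load_factor(89)` is already about 57.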
|
{"page_id": 3221992, "title": "Banked turn"}
|
by Google of Nest Labs, Dropcam, and Revolv === On January 13, 2014, Google announced plans to acquire Nest Labs for $3.2 billion in cash. Google completed the acquisition the next day, on January 14, 2014. The company would operate independently from Google's other businesses. In June 2014, it was announced that Nest would buy camera startup Dropcam for $555 million. With the purchase, Dropcam became integrated with other Nest products; if the Protect alarm is triggered, the Dropcam can automatically start recording, and the Thermostat can use Dropcam to sense for motion. In September 2014, the Nest Thermostat and Nest Protect (a smoke alarm) became available in Belgium, France, Ireland, and the Netherlands. Initially, they were sold in approximately 400 stores across Europe, with another 150 stores to be added by the end of the year. In June 2015, the new Nest Cam, replacing the Dropcam, was announced, together with the second generation of the Nest Protect; there were internal reports that sales of the rebranded camera fell. On October 24, 2014, Nest both acquired the hub service Revolv, and discontinued its product line, gaining the expertise of Revolv's staff. === Nest as a subsidiary of Alphabet Inc. === In August 2015, Google announced that it would restructure its operations under a new parent company, Alphabet Inc., with Nest being separated from Google as a subsidiary of the new holding company. In January 2016, some Nest thermostats stopped working, a fault attributed to a software update from two weeks earlier. There were no lawsuits, individual or class-action, due to an arbitration clause in the contract. All Revolv smart hubs, costing several hundred dollars, were deliberately remotely bricked on May 15, 2016; notice was posted on the company's website in February. The story became news on April 4. The "lifetime
|
{"page_id": 34113322, "title": "Google Nest"}
|
Multilinear principal component analysis (MPCA) is a multilinear extension of principal component analysis (PCA) that is used to analyze M-way arrays, also informally referred to as "data tensors". M-way arrays may be modeled by linear tensor models, such as CANDECOMP/Parafac, or by multilinear tensor models, such as multilinear principal component analysis (MPCA) or multilinear independent component analysis (MICA). The origin of MPCA can be traced back to the tensor rank decomposition introduced by Frank Lauren Hitchcock in 1927; to the Tucker decomposition; and to Peter Kroonenberg's "3-mode PCA" work. In 2000, De Lathauwer et al. restated Tucker and Kroonenberg's work in clear and concise numerical computational terms in their SIAM paper entitled "Multilinear Singular Value Decomposition" (HOSVD) and in their paper "On the Best Rank-1 and Rank-(R1, R2, ..., RN) Approximation of Higher-order Tensors". Circa 2001, Vasilescu and Terzopoulos reframed the data analysis, recognition and synthesis problems as multilinear tensor problems. Tensor factor analysis is the compositional consequence of several causal factors of data formation, and is well suited for multi-modal data tensor analysis. The power of the tensor framework was showcased by analyzing human motion joint angles, facial images or textures in terms of their causal factors of data formation in the following works: Human Motion Signatures (CVPR 2001, ICPR 2002), face recognition – TensorFaces (ECCV 2002, CVPR 2003, etc.) and computer graphics – TensorTextures (Siggraph 2004). Historically, MPCA has been referred to as "M-mode PCA", a terminology which was coined by Peter Kroonenberg in 1980. 
In 2005, Vasilescu and Terzopoulos introduced the Multilinear PCA terminology as a way to better differentiate between linear and multilinear tensor decompositions, as well as to better differentiate between the work that computed 2nd order statistics associated with each data tensor mode (axis), and subsequent work on Multilinear Independent Component Analysis that computed
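The per-mode view underlying MPCA/HOSVD starts from the mode-n "unfolding" of the data tensor: each mode is flattened into a matrix whose rows index that mode, and a PCA/SVD is computed per unfolding. The pure-Python sketch below uses one common fiber ordering (conventions differ between papers); a real implementation would use a numerical library.

```python
# Mode-n unfolding of a 3-way tensor stored as nested lists.
# unfold(T, n) has shape (I_n, product of the remaining dimensions);
# MPCA computes second-order statistics (an SVD/PCA) of each unfolding.

def unfold(tensor, mode, shape):
    I, J, K = shape
    at = lambda i, j, k: tensor[i][j][k]
    if mode == 0:
        return [[at(i, j, k) for k in range(K) for j in range(J)] for i in range(I)]
    if mode == 1:
        return [[at(i, j, k) for k in range(K) for i in range(I)] for j in range(J)]
    return [[at(i, j, k) for j in range(J) for i in range(I)] for k in range(K)]

# A 2 x 3 x 4 tensor of consecutive integers 0..23.
T = [[[i * 12 + j * 4 + k for k in range(4)] for j in range(3)] for i in range(2)]

M0 = unfold(T, 0, (2, 3, 4))
print(len(M0), len(M0[0]))  # 2 24
```

Each of the three unfoldings (2×12, 3×8, 4×6) rearranges the same 24 entries, which is why the per-mode factors jointly describe the original tensor.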
|
{"page_id": 30928751, "title": "Multilinear principal component analysis"}
|
StaDyn is an object-oriented general-purpose programming language for the .NET platform that supports both static and dynamic typing in the same programming language. The StaDyn compiler gathers type information for the dynamically typed code. That type information is used to detect type errors at compilation time and to perform significant optimizations. For that purpose, it provides type reconstruction (inference), flow-sensitive types, union and intersection types, constraint-based typing, alias analysis and method specialization. Its first prototype appeared in 2007, as a modification of C# 3.0. Type inference was supported by including var as a new type, unlike C#, which only offers var to define initialized local variables. Flow-sensitive types of var references are inferred by the compiler, providing type-safe duck typing. When a more lenient approach is required by the programmer, the dynamic type could be used instead of var. Although type inference is still performed, dynamic references behave closer to those in dynamic languages. StaDyn is designed by Francisco Ortin from the University of Oviedo. The language has been implemented by different members of the Computational Reflection research group, including Miguel Garcia, Jose Baltasar García Perez-Schofield and Jose Quiroga, besides Francisco Ortin. The name StaDyn is a portmanteau of static and dynamic, denoting its aim to provide the benefits of both static and dynamic typing. == Code samples == === Variables with different types === Just like dynamic languages, variables may hold different types in the same scope: The age variable is first inferred as string, so it is safe to get its Length property. Then, it holds an integer, so age++ is a valid expression. The compiler detects an error in the last line, since Length is no longer provided by age. The generated code does not use a single Object variable to represent age, but two different variables whose
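The StaDyn code sample the text describes is not reproduced in this excerpt. As a hedged illustration only, here is the same rebinding pattern written in Python, which allows it at run time, whereas StaDyn's flow-sensitive inference accepts or rejects each use at compile time.

```python
# Illustration (not StaDyn code): the flow-sensitive pattern described in
# the text. StaDyn tracks the type of `age` at each program point, so the
# first two uses type-check statically and the commented one is rejected
# at compile time.

age = "thirty-five"   # age : string at this point
print(len(age))       # ok: a string has a length -> 11

age = 35              # same variable, now age : int
age += 1              # ok: arithmetic on an int (StaDyn's age++)

# len(age)            # StaDyn would reject this at compile time:
#                     # an int provides no Length property.
print(age)            # 36
```

The key difference is that Python raises the error only when the bad line runs, while StaDyn reports it during compilation and, as the text notes, compiles `age` into two distinct typed variables.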
|
{"page_id": 70802400, "title": "StaDyn (programming language)"}
|
deletion, substitution) needed to transform one string into another. The gap between intention and execution, for example, is 5 (delete an i, substitute e for n, substitute x for t, insert c, substitute u for n). It’s much easier to see this by looking at the most important visualization for string distances, an alignment between the two strings, shown in Fig. 2.14. Given two sequences, an alignment is a correspondence between substrings of the two sequences. Thus, we say I aligns with the empty string, N with E, and so on. Beneath the aligned strings is another representation; a series of symbols expressing an operation list for converting the top string into the bottom string: d for deletion, s for substitution, i for insertion.

    I N T E * N T I O N
    | | | | | | | | | |
    * E X E C U T I O N
    d s s   i s

Figure 2.14: Representing the minimum edit distance between two strings as an alignment. The final row gives the operation list for converting the top string into the bottom string: d for deletion, s for substitution, i for insertion.

We can also assign a particular cost or weight to each of these operations. The Levenshtein distance between two sequences is the simplest weighting factor in which each of the three operations has a cost of 1 (Levenshtein, 1966)—we assume that the substitution of a letter for itself, for example, t for t, has zero cost. The Levenshtein distance between intention and execution is 5. Levenshtein also proposed an alternative version of his metric in which each insertion or deletion has a cost of 1 and substitutions are not allowed. (This is equivalent to allowing substitution,
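The distance itself is computed with the standard dynamic program over a (len(a)+1) × (len(b)+1) table, here with unit costs for all three operations:

```python
# Minimum edit distance (unit-cost Levenshtein) via dynamic programming.

def levenshtein(a, b):
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                                  # i deletions
    for j in range(n + 1):
        d[0][j] = j                                  # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1   # match costs 0
            d[i][j] = min(d[i - 1][j] + 1,           # deletion
                          d[i][j - 1] + 1,           # insertion
                          d[i - 1][j - 1] + sub)     # substitution/match
    return d[m][n]

print(levenshtein("intention", "execution"))  # 5
```

Backtracking through the same table recovers the alignment and operation list shown in the figure.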
|
{"source": 1018, "title": "from dpo"}
|
of the cosmic microwave background radiation these implied a value of ΩΛ ≈ 0.7, a result which has been supported and refined by more recent measurements (as well as previous works). If one assumes the cosmological principle, as in the case for all models that use the Friedmann–Lemaître–Robertson–Walker metric, while there are other possible causes of an accelerating universe, such as quintessence, the cosmological constant is in most respects the simplest solution. Thus, the Lambda-CDM model, the current standard model of cosmology which uses the FLRW metric, includes the cosmological constant, which is measured to be on the order of 10−52 m−2. It may be expressed as 10−35 s−2 (multiplying by c2 ≈ 1017 m2⋅s−2) or as 10−122 ℓP−2 (where ℓP is the Planck length). The value is based on recent measurements of vacuum energy density, ρvac = 5.96×10−27 kg/m3 ≘ 5.3566×10−10 J/m3 = 3.35 GeV/m3. However, due to the Hubble tension and the CMB dipole, recently it has been proposed that the cosmological principle is no longer true in the late universe and that the FLRW metric breaks down, so it is possible that observations usually attributed to an accelerating universe are simply a result of the cosmological principle not applying in the late universe. As was only recently seen, by works of 't Hooft, Susskind and others, a positive cosmological constant has surprising consequences, such as a finite maximum entropy of the observable universe (see Holographic principle). == Predictions == === Quantum field theory === A major outstanding problem is that most quantum field theories predict a huge value for the quantum vacuum. A common assumption is that the quantum vacuum is equivalent to the cosmological constant. Although no theory exists that supports this assumption, arguments can be made in its favor. Such arguments are usually based on
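The unit conversion quoted above is easy to verify numerically: multiplying Λ of order 10⁻⁵² m⁻² by c² ≈ 10¹⁷ m²·s⁻² gives a value of order 10⁻³⁵ s⁻².

```python
# Order-of-magnitude check of the conversion quoted in the text.
c = 2.998e8       # speed of light in m/s, so c**2 ~ 9e16 m^2 s^-2
Lam = 1.1e-52     # cosmological constant in m^-2 (order of magnitude)

Lam_per_s2 = Lam * c ** 2
print(f"{Lam_per_s2:.1e}")  # 9.9e-36, i.e. of order 1e-35 s^-2
```

The exact prefactor depends on the measured value of Λ; only the order of magnitude is meaningful here.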
|
{"page_id": 38992, "title": "Cosmological constant"}
|
to implement the management (M-branch) and product assurance (Q-branch) standards in the body of CEN standards. On 15 May 1998, a co-operation agreement between the ECSS and the space systems and operations ISO committee ISO/TC20/SC14 was established, with the objective being to avoid duplication, improve harmonization, and to achieve the benefit of reciprocal expertise in the field of space standardization. As of May 2021, the table below shows the total number of standards published per year by the ECSS (316 total releases, with 139 currently active). The surge observed in 2008 is the result of adopting a new ECSS standards template, aiming for the harmonization and improved organization of the ECSS system. Since its creation, the ECSS promotes the application of its standards for all European space activities. Currently, the ECSS standards are essential in ESA projects (e.g. the ESTRACK stations), but are also widely adopted by the European space industry. These standards have been applied to the qualification of software engineering (e.g. AdaCore's Ravenscar SFP run-time), mechanical parts (e.g. AMRC nanosatellite fuel tank burst pressure), radiation hardening (e.g. Reflex Photonics transceivers radiation dose tolerance), material processes (e.g. Surrey Nanosystems super black coating), among others. == Organization == The ECSS is a cooperation by its definition; therefore, it is formed by an arrangement between European space agencies and industries. The administrative and technical organization of the ECSS is conducted by the ESA Requirement and Standard Division, acting as the ECSS central secretariat, based in the European Space Research and Technology Centre (ESTEC) in Noordwijk, the Netherlands. The ESA holds the copyright for all ECSS documents on behalf of the members of ECSS. 
=== Structure === The ECSS organization structure, called the ECSS Developer Structure, is defined by the ECSS-P-00C “ECSS Standardization objectives, policies and organization” document. The developer structure consists
|
{"page_id": 12971550, "title": "European Cooperation for Space Standardization"}
|
Bell") local operating subsidiaries. They regrouped into seven Regional Bell Operating Companies (RBOCs), commonly referred to as "Baby Bells", resulting in seven independent companies. Critics were divided on whether the decision was good for the economy. == Cellphones and smartphones == From Finland the Nokia 1011 was introduced in 1992, as the first mass-market battery-powered portable cell phone. From Canada the BlackBerry Pearl reached an upscale market after 2006 when T-Mobile US bundled it to subscribers. By 2000 most of the 111 million cell phone subscribers talked on them while driving. Many local and state jurisdictions considered bans. The industry claimed cell phones are no more dangerous than listening to car radios. Furthermore, they argued that increased productivity and their necessity in emergencies outweigh the safety factor. By 2015 most states prohibited drivers from texting and talking on handheld cell phones. They cut usage about in half but did not reduce traffic accidents because only the careful drivers stopped using phones. Smartphones became popular in the early 2000s, when BlackBerry and Nokia introduced their innovative models. BlackBerry was particularly successful in the business market, thanks to its emphasis on email-by-phone. Meanwhile, Nokia was popular among consumers due to its user-friendly interface and attractive design. In 2007, Apple revolutionized the industry with the introduction of the iPhone. The iPhone's touch screen interface, sleek design, and extensive app store quickly made it the most popular smartphone on the market. Android, a mobile operating system developed by Google, was introduced in 2008 and quickly became the most popular operating system for non-Apple smartphones. The new rivals demolished BlackBerry and Nokia sales. Since then, smartphones have continued to evolve, with advancements in technology and major producers based in South Korea and China. 
Today, billions of people around the world rely on smartphones for communication,
|
{"page_id": 73610738, "title": "History of the telephone in the United States"}
|
lightfast and as such is used in cladding coatings. It is also the main ingredient in infrared reflecting paints, used by the armed forces to paint vehicles and to give them the same infrared reflectance as green leaves. === Other uses === Chromium(III) ions present in corundum crystals (aluminium oxide) cause them to be colored red; when corundum appears as such, it is known as a ruby. If the corundum is lacking in chromium(III) ions, it is known as a sapphire. A red-colored artificial ruby may also be achieved by doping chromium(III) into artificial corundum crystals, making chromium essential to the production of synthetic rubies. Such a synthetic ruby crystal was the basis for the first laser, produced in 1960, which relied on stimulated emission of light from the chromium atoms in such a crystal. Ruby has a laser transition at 694.3 nanometers, in a deep red color. Chromium(VI) salts are used for the preservation of wood. For example, chromated copper arsenate (CCA) is used in timber treatment to protect wood from decay fungi, wood-attacking insects, including termites, and marine borers. The formulations contain between 35.3% and 65.5% chromium, calculated as the oxide CrO3. In the United States, 65,300 metric tons of CCA solution were used in 1996. Chromium(III) salts, especially chrome alum and chromium(III) sulfate, are used in the tanning of leather. The chromium(III) stabilizes the leather by cross linking the collagen fibers. Chromium tanned leather can contain 4–5% of chromium, which is tightly bound to the proteins. Although the form of chromium used for tanning is not the toxic hexavalent variety, there remains interest in management of chromium in the tanning industry. Recovery and reuse, direct/indirect recycling, and "chrome-less" or "chrome-free" tanning are practiced to better manage chromium usage. The high heat resistivity and high melting point makes
|
{"page_id": 5669, "title": "Chromium"}
|
monitors. Later on, the standards expanded into other product categories such as peripherals and the computer itself. == TCO Certified requirements == TCO publishes new guidelines every 3 to 4 years. The standards expanded from covering only computer monitors in 1992 to a wide array of devices today. == Product categories == TCO Certified is available for the following products: displays, notebooks, tablets, smartphones, desktops, all-in-one PCs, projectors, headsets, and data center products: network equipment, data storage products and servers. == References == == Further reading == Boivie, Per-Erik (2007). Global standard: how computer displays worldwide got the TCO logo. Stockholm: Premiss. ISBN 978-91-85343-43-0. == External links == Official website
|
{"page_id": 63944872, "title": "TCO Certified"}
|
> \item commands, 18 \subitem arguments, 19, 101 \subsubitem multiple, 103, 104 \subsubitem replacement symbol, 20 \subitem as environments, 42 \subitem used as arguments for sectioning commands, 41, 42 \indexspace \item displayed text, 21--32 If the text entry is too long for one line, it is broken and continued on the next line, indented deeper than all other lines, as in the above example ‘used as arguments for sectioning commands, 41, 42’. The command \indexspace leaves a blank line in the index. # 9.4.2 Preparing the index entries The theindex environment only sets up a suitable format for the index. The entries themselves, as well as their page numbers, are generated by the MakeIndex program described in Section 9.4.3. This program requires input information from the LaTeX file in the form of unsorted keywords and page numbers. The author enters this information in the text file with the command \index{index entry} where index entry is the text to be entered into the index. It may contain any combination of letters, numbers, and symbols, including even command characters and blanks. This means that even commands may be included in the index entry, such as \index{\LaTeX\ logo}. Even the one command that may otherwise never be used as an argument, \verb, may be included. However, if index entry does contain a command, \index may not be used as an argument of another command. Normally all the \index commands are ignored by LaTeX and do absolutely nothing. They are activated only when the preamble contains the command \makeindex, in which case a file with the document's root name plus the extension .idx is opened. Now the \index commands write index entry and the current page number to this file in the
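The workflow described above can be sketched as a minimal document. This is an illustration, not an excerpt from the book: the entry names are invented, and the standard makeidx package's \printindex command is used here to read the sorted index back in.

```latex
\documentclass{article}
\usepackage{makeidx}
\makeindex   % opens \jobname.idx and activates the \index commands
\begin{document}

Commands may take arguments.\index{commands}\index{commands!arguments}
The \LaTeX\ logo can itself appear in an entry.\index{\LaTeX\ logo}

\printindex  % reads the sorted .ind file produced by MakeIndex
\end{document}
```

Running latex, then makeindex on the generated .idx file, then latex again typesets the sorted index inside a theindex environment like the one shown above.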
|
{"source": 1186, "title": "from dpo"}
|
\xrightarrow{f} c')$ #### The story in slice categories - If we have $f: A \to B$ in `Set`, then we have $f^*: Set/B \to Set/A$, which sends a morphism $(K \xrightarrow{g} B)$ to its pullback along $f$, $(A \times_B K \to A)$. - This also motivates the presheaves story, as $Set/B \simeq Set^B$. - Recall that any morphism $K \xrightarrow{h} B \in Set/B$ can be equally seen as a functor $b \mapsto h^{-1}(b) \in Set^B$. This is the mapping between slice and exponential. - We can think of $(K \xrightarrow{h} B) \in Set/B$ as a collection $\{ h_b \equiv h^{-1}(b) \subseteq K : b \in B \}$. This is the fibrational viewpoint. - Then the functor $f^*$ acts by reindexing the fibers: $f^*(\{ h_b : b \in B\}) \equiv \{ h_{f(a)} : a \in A\}$. - TODO # Paredit via adjoints - We posit that text editor movements ought to be endofunctions, and complementary keybinds ought to be adjoints to each other. - With this in mind, what is the correct category for `paredit`, and what are the adjunctions? - Suppose we wish to build a theory of `Sexp`s. Then let's consider the category of rooted trees, where the root is the currently selected sexp, where the morphisms are inclusion maps of trees. - What are the operations? They are going to be endofunctions in this category. For example, moving up to the parent, moving to the left and right sibling, etc. - Hopf algebras and rooted trees ([ # Less than versus Less than or equals over Z - If we have a theorem whose hypotheses and goal are of the form `a (2a <= 2n - 2)`. - When we lose
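The "movements as endofunctions" idea above can be made concrete with a tree zipper. This sketch is our own illustration, under assumed names and representation, not an existing paredit implementation: sexps are nested lists, and every movement is total (at a boundary it returns the zipper unchanged), so movements always compose.

```python
# A minimal zipper over nested lists ("sexps"): the focused subtree plus
# the context needed to rebuild the whole tree. Every movement is a total
# endofunction on zippers: at a boundary it is the identity.

class Zipper:
    def __init__(self, focus, context=None):
        self.focus = focus      # currently selected subtree
        self.context = context  # (parent zipper, left siblings, right siblings)

    def down(self):
        # Descend to the first child of a non-empty list.
        if isinstance(self.focus, list) and self.focus:
            return Zipper(self.focus[0], (self, [], self.focus[1:]))
        return self

    def up(self):
        # Reassemble the parent list from the focus and its siblings.
        if self.context is None:
            return self
        parent, left, right = self.context
        return Zipper(left + [self.focus] + right, parent.context)

    def right(self):
        if self.context is None or not self.context[2]:
            return self
        parent, left, right = self.context
        return Zipper(right[0], (parent, left + [self.focus], right[1:]))

    def left(self):
        if self.context is None or not self.context[1]:
            return self
        parent, left, right = self.context
        return Zipper(left[-1], (parent, left[:-1], [self.focus] + right))

z = Zipper([1, [2, 3], 4])
assert z.down().right().down().focus == 2   # navigate into the subtree
assert z.down().up().focus == z.focus       # down then up is the identity
assert z.left().focus == z.focus            # boundary moves are no-ops
```

The complementary keybind pairs here are `down`/`up` and `left`/`right`: each undoes the other wherever both are defined, which is the approximate-adjoint behavior the notes gesture at.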
|
{"source": 3346, "title": "from dpo"}
|
to do so at an early stage of the proceedings, thereby providing the party submitting the document the opportunity of gathering evidence to prove the veracity of the document. 2.40 See Chapter 6 on authentication for a more detailed discussion. # Best evidence 2.41 The best evidence rule can be considered from two points of view. It can be regarded as an inclusionary rule under which whatever is the best evidence is admissible, thus overcoming exclusionary rules such as the hearsay rule; alternatively, it can be regarded as an exclusionary rule, so that anything which is not the best evidence is inadmissible. Since Omychund v Barker,1 the majority of the cases have used the rule in an exclusionary way to deny the use of copies of documents when the absence of the original was not satisfactorily accounted for. 1 1 Atk 22, 26 ER 15. 2.42 Reaction against this rule began in the nineteenth century, 1 and by the latter part of the twentieth century it was recognized that the best evidence rule was no longer as relevant as it once was. In Kajala v Noble ,2 Ackner LJ held that the rule is now confined to written documents in the strictest sense of the term. Echoing the robust comments of Lord Denning MR in Garton v Hunter (Valuation Officer) ,3 His Lordship said: The old rule, that a party must produce the best evidence that the nature of the case will allow, and that any less good evidence is to be excluded, has gone by the board long ago. The only remaining instance of it is that, if an original document is available in one’s hands, one must produce it; that one cannot give secondary evidence by producing a copy.
|
{"source": 5648, "title": "from dpo"}
|
2.3.3. When we have to calculate $a \pmod{100}$ it is often more helpful to find $a \pmod 4$ and $a \pmod{25}$ and then use the Chinese Remainder Theorem to find $a \pmod{100}$. We can use similar methods when dealing with $a \pmod{1000}$. Example 2.3.6 (Senior Hanoi Open MO 2006). Calculate the last three digits of $2005^{11} + 2005^{12} + \cdots + 2005^{2006}$. Solution. By reducing the expression modulo 1000, it remains to find the last three digits of the somewhat-less-daunting expression $2005^{11} + 2005^{12} + \cdots + 2005^{2006} \equiv 5^{11} + 5^{12} + \cdots + 5^{2006} \pmod{1000}$. Notice that $5^{11} + 5^{12} + \cdots + 5^{2006} \equiv 0 \pmod{125}$. Next, we want $5^{11} + 5^{12} + \cdots + 5^{2006} \pmod 8$. Notice that $5^2 \equiv 1 \pmod 8$ and therefore $5^{2k} \equiv 1 \pmod 8$ and $5^{2k+1} \equiv 5 \pmod 8$. Henceforth $5^{11} + 5^{12} + \cdots + 5^{2006} \equiv \frac{1996}{2}(1 + 5) \equiv 4 \pmod 8$. Therefore $5^{11} + 5^{12} + \cdots + 5^{2006} \equiv 500 \pmod{1000}$. Justin Stevens 49 Example 2.3.7 (PuMAC a). Calculate the last 3 digits of $2008^{2007^{2006^{\cdots^{2^1}}}}$. > a Princeton University Mathematics Competition Solution. To begin we notice $2008^{2007^{2006^{\cdots^{2^1}}}} \equiv 0 \pmod 8$. Next, we notice via Euler's Totient: $2008^{2007^{2006^{\cdots^{2^1}}}} \equiv 2008^{\,2007^{2006^{\cdots^{2^1}}} \bmod \varphi(125)} \pmod{125}$. Now notice $2007^{2006^{\cdots^{2^1}}} \equiv 7^{2006^{\cdots^{2^1}}} \pmod{100} \equiv 1 \pmod{100}$ since $7^4 \equiv 1 \pmod{100}$. Therefore
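The first computation can be checked directly with three-argument modular exponentiation; this verification is ours, not part of the original solution.

```python
# Last three digits of 2005^11 + 2005^12 + ... + 2005^2006, computed
# term by term with pow(base, exp, mod) so no huge integers are formed.
total = sum(pow(2005, k, 1000) for k in range(11, 2007)) % 1000
print(total)  # 500, matching the CRT argument (0 mod 125, 4 mod 8)
```

The CRT pieces can be confirmed the same way: `total % 125 == 0` and `total % 8 == 4`.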
|
{"source": 6633, "title": "from dpo"}
|
bark, or other surface tissues from a plant or from harvested material, such as in extracting fiber from harvested Agave leaves. decumbent Having branches growing horizontally along the ground but which are turned up at the ends. decurrent Extending downward beyond the point of insertion, e.g. when the base of a leaf or a fungal gill is prolonged downward along the stem in a raised line or narrow wing. decussant A synonym of decussate; the usage decussant is questionable and occurs rarely, probably as an error. The formally correct usage is decussate. decussate Opposite with successive pairs borne at right angles to the last; generally applied to the arrangement of leaves. definite Of a constant number, e.g. twice as many stamens as petals or sepals (or less), or an inflorescence ending in a flower or an aborted floral bud, typically a cymose inflorescence. Contrast indefinite. deflexed Bent downward. Contrast inflexed. dehiscent Breaking open at maturity to release contents; refers e.g. to the opening of fruits to release seeds, of anthers to release pollen, and of sporangia to release spores. Contrast indehiscent. deltoid Shaped like the uppercase Greek letter Δ, i.e. like a more or less equilateral triangle. dendroid Tree-like; branching like a tree. dentate Toothed, especially in reference to leaf margins. denticulate Finely toothed; a diminutive form of dentate. deserticolous Inhabiting a desert. determinate Limited, usually in growth. Contrast indeterminate. diadelphous Referring to a class of adelphous structure in which the stamens or similar organs are connected in two adelphiae instead of just one. diaspore Any reproductive part of a plant adapted for dispersal and for establishing new plants; may be a disseminule such as a seed, or other parts such as specialized buds, branches, inflorescences, or fruits. dichasium A cymose inflorescence with all branches below the terminal flower in
|
{"page_id": 18238240, "title": "Glossary of botanical terms"}
|
definition of the adiabatic wall should in no way depend upon the notions of heat or temperature. This is achieved by careful wording and reference to transfer of energy only as work. Buchdahl is careful in the same way. Nevertheless, Carathéodory explicitly postulates the existence of walls that are permeable only to heat, that is to say impermeable to work and to matter, but still permeable to energy in some unspecified way. One might be forgiven for inferring from this that heat is energy in transfer across walls permeable only to heat, and that such exist as undefined postulated primitives. In the widely cited presentation of Callen, the notion of an adiabatic wall is introduced as a limit of a wall that is poorly conductive of heat. Although Callen does not here explicitly mention temperature, he considers the case of an experiment with melting ice, done on a summer's day, when, the reader may speculate, the temperature of the surrounds would be higher. Nevertheless, when it comes to a hard core definition, Callen does not use this introductory account. He eventually defines an adiabatic enclosure as does Carathéodory, that it passes energy only as work, and does not pass matter. Accordingly, he defines heat, therefore, as energy that is transferred across the boundary of a closed system other than by work. As suggested for example by Carathéodory and used for example by Callen, the favoured instance of an adiabatic wall is that of a Dewar flask. A Dewar flask has rigid walls. Nevertheless, Carathéodory requires that his adiabatic walls shall be imagined to be flexible, and that the pressures on these flexible walls be adjusted and controlled externally so that the walls are not deformed, unless a process is undertaken in which work is transferred across the walls. The work
|
{"page_id": 35470585, "title": "Adiabatic wall"}
|
that women perform higher on verbal tasks and men perform higher on spatial tasks (Voyer, Voyer, & Saint-Aubin, 2016). These findings are consistent with studies of intelligence with regard to pattern, females performing higher on certain verbal tasks and males performing higher on certain spatial tasks (Voyer, Voyer, & Saint-Aubin, 2016). The same results have also been found cross-culturally. Sex differences in verbal short-term memory have been found regardless of age, even among adults; for example, a review published in the journal Neuropsychologia which evaluated studies from 1990 to 2013 found greater female verbal memory from ages 11–89 years old. === Working memory === There are usually no sex differences in overall working memory except those involving spatial information such as space and object. A 2004 study published in the journal Applied Cognitive Psychology found significantly higher male performance on four visuo-spatial working memory tasks. Another 2010 study published in the journal Brain and Cognition found a male advantage in spatial and object working memory on an n-back test but not for verbal working memory. Similarly another study published in the journal Human Brain Mapping found no sex differences in a verbal n-back working memory task among adults from ages 18–58 years old. There were also no sex differences in verbal working memory in a study of university students published in the Journal of Dental and Medical Sciences. However, they still found greater male spatial working memory in studies published in the journals Brain and Cognition and Intelligence. Also, even though they found no sex differences in verbal working memory, researchers have found lower brain activity or thermodynamics in the prefrontal cortex of females which suggested greater neural efficiency and less effort for the same performance. Researchers indicate females might have greater working memory on tasks that rely only on the
|
{"page_id": 49026556, "title": "Sex differences in cognition"}
|
a given communication channel, such as a radio channel or a submarine cable. Analog vocoders typically analyze an incoming signal by splitting the signal into multiple tuned frequency bands or ranges. To reconstruct the signal, a carrier signal is sent through a series of these tuned band-pass filters. In the example of a typical robot voice the carrier is noise or a sawtooth waveform. There are usually between 8 and 20 bands. The amplitude of the modulator for each of the individual analysis bands generates a voltage that is used to control amplifiers for each of the corresponding carrier bands. The result is that frequency components of the modulating signal are mapped onto the carrier signal as discrete amplitude changes in each of the frequency bands. Often there is an unvoiced band or sibilance channel. This is for frequencies that are outside the analysis bands for typical speech but are still important in speech. Examples are words that start with the letters s, f, ch or any other sibilant sound. Using this band produces recognizable speech, although somewhat mechanical sounding. Vocoders often include a second system for generating unvoiced sounds, using a noise generator instead of the fundamental frequency. This is mixed with the carrier output to increase clarity. In the channel vocoder algorithm, among the two components of an analytic signal, considering only the amplitude component and simply ignoring the phase component tends to result in an unclear voice; on methods for rectifying this, see phase vocoder. == History == The development of a vocoder was started in 1928 by Bell Labs engineer Homer Dudley, who was granted patents for it on November 16, 1937, and March 21, 1939. To demonstrate the speech synthesis ability of its decoder section, the Voder (Voice Operating Demonstrator) was introduced to the public
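The analysis/synthesis loop just described can be sketched in plain Python. The band centres, filter Q, envelope smoothing constant, and test signals below are illustrative choices of ours, not values from any particular vocoder.

```python
import math

def biquad_bandpass(x, f0, fs, q=5.0):
    """Run x through one band-pass biquad centred on f0 (one analysis band)."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = (b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2) / a0
        y.append(yn)
        x2, x1, y2, y1 = x1, xn, y1, yn
    return y

def envelope(x, fs, smooth_hz=50.0):
    """Rectify and low-pass: the per-band control voltage described above."""
    k = 1 - math.exp(-2 * math.pi * smooth_hz / fs)
    env, e = [], 0.0
    for xn in x:
        e += k * (abs(xn) - e)
        env.append(e)
    return env

def channel_vocoder(modulator, carrier, fs, bands):
    """Sum over bands of (band-passed carrier) * (modulator band envelope)."""
    out = [0.0] * len(modulator)
    for f0 in bands:
        env = envelope(biquad_bandpass(modulator, f0, fs), fs)
        car = biquad_bandpass(carrier, f0, fs)
        for i in range(len(out)):
            out[i] += car[i] * env[i]
    return out

fs = 8000
n = fs // 10  # a tenth of a second of audio
modulator = [math.sin(2 * math.pi * 440 * i / fs) for i in range(n)]
# Sawtooth carrier at 110 Hz: the classic "robot voice" source.
carrier = [2 * ((110 * i / fs) % 1.0) - 1 for i in range(n)]
bands = [300, 600, 1200, 2400]  # small analysis bank; real units use 8-20
out = channel_vocoder(modulator, carrier, fs, bands)
```

A real unit would add the sibilance channel described above by mixing in noise gated by a high-band envelope; that is omitted here for brevity.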
|
{"page_id": 32678, "title": "Vocoder"}
|
to low oxygen availability. The anoxic setting can cause the soil to change as a result of redox reactions. Because little oxygen is available, the redox reactions must turn to alternative terminal electron acceptors. Once the reactions have used up all the available oxygen in the soil, they move on to nitrate, manganese, iron, and sulfate, in that order. These processes cause redoximorphic features, which appear as color changes in the surrounding soils. == See also == == References ==
|
{"page_id": 20248000, "title": "Backswamp"}
|
Unidyne". The 55S is sometimes referred to as the "Elvis mic" due to its frequent use by Elvis Presley, and is the microphone depicted with Elvis on the commemorative first-class Elvis stamp issued by the U.S. Postal Service in 1993. In 2008, the Unidyne Model 55 microphone was inducted into the TECnology Hall of Fame, and the following year, Shure released the 55SH Series II. A supercardioid version, the Super 55 Deluxe Vocal Microphone, was introduced in 2009, featuring high gain before feedback and excellent off-axis rejection and further extending Unidyne's 70-plus year legacy. The 55 Series microphones were given the "IEEE Milestone" award in 2014. With the U.S. Army's approval of the Shure T-17 microphone for use during World War II, Shure began producing what would become several specialized microphones for U.S. military use during that war. Shure's adoption of the Military Standard Specification, and product redesigns intended to conserve raw materials essential to the war effort, positioned the company to fulfill the military's needs for specialized microphones. The T-17 Battle Announce Microphone was the most widely used microphone in the U.S. Army and Air Force during World War II, and featured a plastic case that conserved aluminum and was lighter and more reliable in a wide range of temperatures and climates. A waterproof version was used on nearly all U.S. Navy ships. Shure also designed the T-30 Throat Microphone for flight crews. A cloth strap held the T-30 against the throat, capturing the user's voice box vibrations directly and avoiding the background noise of the airplane. Shure also manufactured specialized headsets and the MC-1 oxygen mask microphone. In yet another example of the widespread use of Shure microphones by the U.S. military, U.S. lookout Private Lockhard used a Shure 700A microphone to announce his sighting of Japanese planes approaching
|
{"page_id": 1409949, "title": "Shure"}
|
the flow model goes beyond the access matrix model in its ability to specify secure information flow. A practical system needs both access and flow control to satisfy all security requirements." — D. Denning, 1976 Access control enforces checks on access to information, but is not concerned about what happens after that. An example: A system has two users, Alice and Bob. Alice has a file secret.txt, which is only allowed to be read and edited by her, and she prefers to keep this information to herself. In the system, there also exists a file public.txt, which is free to read and edit for all users in the system. Now suppose that Alice has accidentally downloaded a malicious program. This program can access the system as Alice, bypassing the access control check on secret.txt. The malicious program then copies the content of secret.txt and places it in public.txt, allowing Bob and all other users to read it. This constitutes a violation of the intended confidentiality policy of the system. ==== Noninterference ==== Noninterference is a property of programs that do not leak or reveal information about variables with a higher security classification through variables with a lower security classification. A program which satisfies noninterference produces the same low outputs whenever the same inputs on the lower variables are used. This must hold for every possible input value. This implies that even if higher variables in the program have different values from one execution to another, this should not be visible in the lower variables. An attacker could try to execute a program which does not satisfy noninterference repeatedly and systematically to try to map its behavior. Several iterations could lead to the disclosure of higher variables, and let the attacker learn sensitive information
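A toy illustration of the definition, entirely our own and not from any particular language-based security framework: `leaky` violates noninterference because its low-observable output depends on the high (secret) input, while `safe` satisfies it.

```python
def leaky(high: int, low: int) -> int:
    # Low-observable output depends on the high input: an attacker
    # varying nothing but `high` sees different results.
    return low + (high % 2)

def safe(high: int, low: int) -> int:
    # Low-observable output is a function of the low input alone.
    return low * 2

def noninterferent(f, highs, lows):
    """Empirical check over a small input space: for each fixed low
    input, the output must not vary as the high input varies."""
    return all(len({f(h, l) for h in highs}) == 1 for l in lows)

highs, lows = range(4), range(4)
print(noninterferent(safe, highs, lows))   # True
print(noninterferent(leaky, highs, lows))  # False
```

This is exactly the attack sketched above: by rerunning `leaky` with different secrets and watching the public output, the attacker learns `high % 2`.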
|
{"page_id": 44705838, "title": "Language-based security"}
|
the fallout from radiation, the chemical pesticides mentioned in Rachel Carson's Silent Spring, and the significant amounts of air pollution and waste, the public's concern for their health and the health of their natural environment led to a unifying phenomenon known as environmentalism. Environmental education was born of the realization that solving complex local and global problems cannot be accomplished by politicians and experts alone, but requires "the support and active participation of an informed public in their various roles as consumers, voters, employers, and business and community leaders." In 1960 the National Rural Studies Association (now known as the National Association for Environmental Education) was established in the UK to promote environmental education and support teachers in incorporating sustainability into their curricula. One of the first articles about environmental education as a new movement appeared in the Phi Delta Kappan in 1969, authored by James A. Swan. A definition of "Environmental Education" first appeared in The Journal of Environmental Education in 1969, written by William B. Stapp. Stapp later went on to become the first Director of Environmental Education for UNESCO, and then the Global Rivers International Network. Ultimately, the first Earth Day on April 22, 1970 – a national teach-in about environmental problems – paved the way for the modern environmental education movement. Later that same year, President Nixon signed the National Environmental Education Act, which was intended to incorporate environmental education into K-12 schools. Then, in 1971, the National Association for Environmental Education (now known as the North American Association for Environmental Education) was created to improve environmental literacy by providing resources to teachers and promoting environmental education programs. 
Internationally, environmental education gained recognition when the UN Conference on the Human Environment held in Stockholm, Sweden, in 1972, declared environmental education must be used as a tool
|
{"page_id": 2538735, "title": "Environmental education"}
|
can be studied using hidden Markov models. Song et al. claim that it can recover the password fifty times faster than a brute force attack. Onion routing systems are used to gain anonymity. Traffic analysis can be used to attack anonymous communication systems like the Tor anonymity network. Adam Back, Ulf Möller and Anton Stiglic present traffic analysis attacks against anonymity-providing systems. Steven J. Murdoch and George Danezis from University of Cambridge presented research showing that traffic-analysis allows adversaries to infer which nodes relay the anonymous streams. This reduces the anonymity provided by Tor. They have shown that otherwise unrelated streams can be linked back to the same initiator. Remailer systems can also be attacked via traffic analysis. If a message is observed going to a remailing server, and an identical-length (if now anonymized) message is seen exiting the server soon after, a traffic analyst may be able to (automatically) connect the sender with the ultimate receiver. Variations of remailer operations exist that can make traffic analysis less effective. Traffic analysis also involves intercepting and scrutinizing traffic to gather insights about the anonymous data flowing through an exit node. By using techniques rooted in dark web crawling and specialized software, one can identify the specific characteristics of a client's network traffic within the dark web. == Countermeasures == It is difficult to defeat traffic analysis without both encrypting messages and masking the channel. When no actual messages are being sent, the channel can be masked by sending dummy traffic, similar to the encrypted traffic, thereby keeping bandwidth usage constant. "It is very hard to hide information about the size or timing of messages. The known solutions require Alice to send a continuous stream of messages at the maximum bandwidth she will ever use...This might be acceptable for military applications,
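The constant-bandwidth masking idea in the countermeasures paragraph can be sketched as a toy model of our own devising: fixed-size cells are emitted at every tick, with dummy cells padding the gaps, so an observer sees the same traffic pattern regardless of load.

```python
from collections import deque

CELL_SIZE = 512  # bytes; every emitted cell has exactly this size

def pad_cell(payload: bytes) -> bytes:
    # Truncate or zero-pad so real and dummy cells are indistinguishable
    # in size. (A real system would also encrypt each cell.)
    return payload[:CELL_SIZE].ljust(CELL_SIZE, b"\x00")

def run_link(messages, ticks):
    """Emit one cell per tick: a real cell if a message is queued,
    otherwise a dummy cell of the same size (the 'dummy traffic')."""
    queue = deque(messages)
    sent = []
    for _ in range(ticks):
        payload = queue.popleft() if queue else b""
        sent.append(pad_cell(payload))
    return sent

cells = run_link([b"hello", b"attack at dawn"], ticks=5)
# An eavesdropper sees 5 cells of 512 bytes whether 0 or 2 messages exist.
assert all(len(c) == CELL_SIZE for c in cells)
```

This is the trade-off quoted above: bandwidth is pinned at the maximum the sender will ever use, which hides size and timing but is wasteful outside settings like military links.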
|
{"page_id": 480015, "title": "Traffic analysis"}
|
NGC 4349 is an open cluster in the constellation Crux. It was discovered by James Dunlop in 1826. It is located approximately 7,000 light years away from Earth. == Characteristics == There are 390 probable member stars within the angular radius of the cluster and 129 within the central part of the cluster. The tidal radius of the cluster is 17.8 - 22.8 parsecs (58 - 75 light years) and represents the average outer limit of NGC 4349, beyond which a star is unlikely to remain gravitationally bound to the cluster core. One blue straggler has been detected in the cluster. There are four Cepheid variables in the direction of the cluster, among them R and T Crucis, which, however, are not members of the cluster. R Crucis lies 16 arcminutes from the centre of the open cluster NGC 4349, which is beyond the outer limit of the cluster, and is estimated to be nearly 1 kpc closer to Earth than the cluster. The cluster has subsolar metallicity (−0.12 ± 0.06). The giant star NGC 4349 No. 127 (vmag. 10.82 and with mass 3.0 M☉) displays periodic radial velocity variations caused by intrinsic stellar variability, formerly thought to be caused by an orbiting brown dwarf companion. == References == == External links == NGC 4349 on WikiSky: DSS2, SDSS, GALEX, IRAS, Hydrogen α, X-Ray, Astrophoto, Sky Map, Articles and images
|
{"page_id": 55308304, "title": "NGC 4349"}
|
apology and heartfelt promise to offer a stellar recommendation. Then the OP needs to do some serious soul searching – because seriously, they seem to have lost their soul and their heart somewhere along the way. Then the OP needs to start thinking about how to fix their dysfunctional team, because OMG I cannot believe the heartless band of asshats that all refused to help this employee. This can’t be the only dysfunction at that workplace; it’s probably rife with dehumanizing and demoralizing practices. Leatherwings* July 5, 2016 at 12:58 pm Yeah, just as a rule of thumb I don’t think that managers should EVER reach out to employees that have quit to offer advice on how to conduct themselves. But particularly not in this situation, and I agree that an apology and great recommendation are in order. fposte* July 5, 2016 at 1:32 pm Yup. It’s also futile–it’s not going to teach an employee anything even if the point is valid–and it makes it hard for both sides to let it go when the thing’s over. Temperance* July 5, 2016 at 12:58 pm Ouch. As the first person in my family to finish a four-year degree, I am so impressed with the LW’s employee. It’s hard to do that with minimal family support … I can’t imagine how hard it is to do it without any family support. It hurts to think about someone wanting to take away her achievement for what … Kid Rock tickets? I don’t see how a concert is more acceptable than celebrating what might be the most important achievement in her life to date. I’m guessing that LW didn’t go to college for her to have such a weird attitude about this achievement. LW, here’s some
|
{"source": 1738, "title": "from dpo"}
|
explicit formula, one could choose a large enough _λ_ to mitigate the peak and avoid overfitting for a given noise level; we find that _λ_* = _σ_ 2 (yellow dashed line in Fig. 3) for a given _σ_ 2 at all _α_ (Fig. 3). Further insight into the phase transition can be gained by looking at the bias and the variance of the estimator [38, 42, 43] in the lazy regime. The average estimator learned by kernel regression approaches the target function linearly as _α_ → 1 (Supplementary Note 2): \(\langle \bar{f}({\bf{x}})\right\rangle }_{{\mathcal{D}}}=\min \{\alpha ,1\}\bar{f}({\bf{x}})\) (Fig. 4), and the bias (_B_) and variance (_V_) contributions to generalization error have the forms \(B=\max {\{0,1-\alpha \}}^{2}\), \(V=\alpha
|
{"source": 3893, "title": "from dpo"}
|
Title: Loops in the fundamental group of ${\mbox{Symp}} ({\mathbb C}{\mathbb P}^2\# \mbox{5}\overline { \mathbb C\mathbb P}\,\!^2,\omega )$ which are not represented by circle actions | Canadian Journal of Mathematics | Cambridge Core. Part of: Differential topology; symplectic geometry, contact geometry; homotopy of topological groups and related structures. Published online by Cambridge University Press: 30 June 2022. Authors: Sílvia Anjos, Miguel Barata, Martin Pinsonnault, Ana Alexandra Reis. Pyropia gardneri (G.M.Smith & Hollenberg) S.C.Lindstrom in Sutherland et al. 2011, syn. Porphyrella gardneri G.M.Smith & Hollenberg 1943, Porphyra gardneri (G.M.Smith & Hollenberg) M.W.Hawkes 1977, (Cape of Good Hope to Brandfontein) Pyropia saldanhae (Stegenga, J.J.Bolton & R.J.Anderson) J.E.Sutherland in Sutherland et al. 2011, syn. Porphyra saldanhae Stegenga, Bolton & R.J.Anderson 1997, (Hondeklip Bay and Olifantsbos, endemic) === Order: Porphyridiales === ==== Family Phragmonemataceae ==== Neevea cf. 
repens Batters 1900, (Hout Bay) == Class: Compsopogonophyceae == === Order: Erythropeltidales === ==== Family Erythrotrichiaceae ==== Erythrocladia cf. polystromatica P.J.L.Dangeard 1932, (St James, False Bay and Cape Hangklip) Erythrotrichia carnea (Dillwyn) J.Agardh 1883, syn. Erythrocladia carnea, Conferva carnea Dillwyn 1807, Bangia ciliaris subsp. pulchella (Harvey) De Toni 1897, (Probably fairly common, but South African distribution uncertain) Erythrotrichia welwitschii (Ruprecht) Batters 1902, syn. Cruoria welwitschii Ruprecht 1850, (Cape of Good Hope and False Bay extending eastwards at least as far as Port Elizabeth) Membranella africana Stegenga, Bolton & Anderson 1997, (Cape of Good Hope at least as far as Port Alfred) Porphyrostromium boryanum (Montagne) P.C.Silva in Silva, Basson & Moe 1996, Porphyra boryana Montagne 1846, Erythrotrichia boryana (Montagne) Berthold 1882, Phyllona boryana (Montagne) Kuntze 1891, Erythrotrichopeltis boryana (Montagne) Kornmann 1984, Porphyrostromium boryanum (Montagne) M.J.Wynne 1986, (Yzerfontein to Oatlands Point, False Bay) Sahlingia subintegra (Rosenvinge, 1909) Kornmann 1989, syn. Erythrocladia subintegra Rosenvinge 1909, Erythrocladia irregularis f. subintegra (Rosenvinge) Garbary, Hansen & Scagel 1981. Erythropeltis subintegra (Rosenvinge) Kornmann & Sahling 1985, Erythrotrichopeltis subintegra (Rosenvinge) Kornmann & Sahling 1985, (Worldwide – probably widely distributed in SA ) == Class: Florideophyceae == === Order: Acrochaetiales === ==== Family Acrochaetiaceae ==== Acrochaetium brebneri (Batters) G.Hamel 1928, syn. Rhodochorton brebneri Batters 1897, Chantransia brebneri (Batters) Rosenvinge
|
{"page_id": 35704471, "title": "List of red seaweeds of the Cape Peninsula and False Bay"}
|
The Rayleigh–Ritz method is a direct numerical method of approximating eigenvalues, originated in the context of solving physical boundary value problems and named after Lord Rayleigh and Walther Ritz. In this method, an infinite-dimensional linear operator is approximated by a finite-dimensional compression, on which we can use an eigenvalue algorithm. It is used in all applications that involve approximating eigenvalues and eigenvectors, often under different names. In quantum mechanics, where a system of particles is described using a Hamiltonian, the Ritz method uses trial wave functions to approximate the ground state eigenfunction with the lowest energy. In the finite element method context, mathematically the same algorithm is commonly called the Ritz–Galerkin method. The Rayleigh–Ritz method or Ritz method terminology is typical in mechanical and structural engineering to approximate the eigenmodes and resonant frequencies of a structure. == Naming and attribution == The name of the method and its origin story have been debated by historians. It has been called the Ritz method after Walther Ritz, since the numerical procedure was published by Ritz in 1908–1909. According to A. W. Leissa, Lord Rayleigh wrote a paper congratulating Ritz on his work in 1911, while stating that he himself had used Ritz's method in many places in his book and in another publication. This statement, although later disputed, and the fact that the method in the trivial case of a single vector results in the Rayleigh quotient, make the case for the name Rayleigh–Ritz method. According to S. Ilanko, citing Richard Courant, both Lord Rayleigh and Walther Ritz independently conceived the idea of utilizing the equivalence between boundary value problems of partial differential equations on the one hand and problems of the calculus of variations on the other hand for numerical calculation of the solutions, by substituting for the variational problems
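As a hedged illustration of the compression step described above (not taken from the source article): project a symmetric operator onto a low-dimensional trial subspace and diagonalize the small matrix. The matrix, the trial subspace, and all names below are invented for this sketch, which uses only the Python standard library.

```python
import math

def ritz_values(A, V):
    """Rayleigh-Ritz sketch: eigenvalues of the compression H = V^T A V.

    A is a symmetric matrix (list of rows); V holds two orthonormal basis
    vectors spanning the trial subspace. Returns the two Ritz values by
    solving the 2x2 symmetric eigenproblem in closed form.
    """
    def dot(u, w):
        return sum(a * b for a, b in zip(u, w))
    def matvec(M, u):
        return [dot(row, u) for row in M]
    h11 = dot(V[0], matvec(A, V[0]))
    h12 = dot(V[0], matvec(A, V[1]))
    h22 = dot(V[1], matvec(A, V[1]))
    # Eigenvalues of [[h11, h12], [h12, h22]] via the characteristic polynomial
    tr, det = h11 + h22, h11 * h22 - h12 * h12
    disc = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - disc, tr / 2 + disc

# Toy operator: a 4x4 symmetric matrix with eigenvalues 1, 2, 3, 4
A = [[1.0, 0, 0, 0],
     [0, 2.0, 0, 0],
     [0, 0, 3.0, 0],
     [0, 0, 0, 4.0]]
# Trial subspace: the extreme eigenvectors, each slightly perturbed
s = 1.0 / math.sqrt(1.01)
V = [[s, 0.1 * s, 0, 0], [0, 0, 0.1 * s, s]]
low, high = ritz_values(A, V)
```

The resulting Ritz values (about 1.0099 and 3.9901) approximate the extreme eigenvalues 1 and 4 from inside the spectrum, as the method guarantees for symmetric operators.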
|
{"page_id": 2111994, "title": "Rayleigh–Ritz method"}
|
In quantum field theory, the operator product expansion (OPE) is used as an axiom to define the product of fields as a sum over the same fields. As an axiom, it offers a non-perturbative approach to quantum field theory. One example is the vertex operator algebra, which has been used to construct two-dimensional conformal field theories. Whether this result can be extended to QFT in general, thus resolving many of the difficulties of a perturbative approach, remains an open research question. In practical calculations, such as those needed for scattering amplitudes in various collider experiments, the operator product expansion is used in QCD sum rules to combine results from both perturbative and non-perturbative (condensate) calculations. The formulation of the OPE and its application to the Thirring model are due to Kenneth G. Wilson. == 2D Euclidean quantum field theory == In 2D Euclidean field theory, the operator product expansion is a Laurent series expansion associated with two operators. In such an expansion, there are finitely many negative powers of the variable, in addition to potentially infinitely many positive powers of the variable. This expansion is a locally convergent sum. More precisely, if $y$ is a point, and $A$ and $B$ are operator-valued fields, then there is an open neighborhood $O$ of $y$ such that for all $x \in O \setminus \{y\}$: $$A(x)B(y) = \sum_i c_i(x - y)\, C_i(y).$$ Heuristically, in quantum field theory the interest is in the physical observables represented by operators. To know the result of making two physical observations at two points $z$ and $w$, their operators can be ordered in increasing
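A standard worked example (not from this article, and dependent on normalization conventions): for a free massless boson $\varphi$ in two dimensions with propagator normalized so that $\langle \varphi(z)\varphi(w)\rangle = -\ln(z-w)$, the OPE of the currents $\partial\varphi$ has a single singular coefficient function:

```latex
% Free-boson OPE: one singular term, then regular terms, matching the
% general form A(x)B(y) = \sum_i c_i(x-y) C_i(y) quoted above.
\partial\varphi(z)\,\partial\varphi(w)
  \;=\; -\frac{1}{(z-w)^{2}} \;+\; \text{(terms regular as } z \to w)
```

Here the only negative power of $(z-w)$ multiplies the identity operator, so the Laurent structure of the general expansion is visible in its simplest instance.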
|
{"page_id": 1877895, "title": "Operator product expansion"}
|
Sundew was a large electrically powered dragline excavator used in mining operations in Rutland and Northamptonshire in the United Kingdom from 1957. It was the first of a series of four Type W1400-series dragline excavators. == Specifications and History == Built by Ransomes & Rapier and named after the winning horse of the 1957 Grand National, it began work in a Rutland iron ore quarry belonging to the United Steel Companies (Ore Mining Branch) that year. At the time of its construction Sundew was the largest walking dragline in the world, weighing 1,675 long tons (1,702 t). With a reach of 86 metres (282 ft) and a bucket capacity of 27 long tons (27 t), the machine was able to move a substantial amount of material in a relatively short period. Propulsion was via two large movable feet which could be used to "walk" the dragline forwards and backwards, while directional control was provided by a large circular turntable under the body of the machine. Sundew remained in use until operations at the quarry ceased in 1974, and plans were then devised to relocate the machine to a recently opened British Steel Corporation quarry near Corby. Dismantling, moving and reconstructing the machine, which would have cost £250,000 and taken two years to complete, was decided not to be a viable option, and so over an eight-week period in 1974 Sundew walked 13 miles (21 km) from its home in Exton Park near the village of Exton in Rutland to a site north of Corby. During the walk the dragline crossed three water mains, four water courses, thirteen power lines, ten roads, a railway line, two gas mains, seven telephone lines, 74 hedges, and the River Welland before reaching its new home. As part of a major restructuring of British Steel
|
{"page_id": 8201402, "title": "Sundew (dragline)"}
|
is then used to filter the vibration information to determine the amount of movement, or force, in one rotation of the part. Also, the time difference between the phase and the vibration peak gives the angle at which the unbalance exists. Amount of unbalance and angle of unbalance give an unbalance vector. Calibration is performed by adding a known weight at a known angle. In a soft-bearing machine, trial weights must be added in correction planes for each part. This is because the location of the correction planes along the rotational axis is unknown, and therefore it is unknown how much a given amount of weight will affect the balance. By using trial weights, a known weight is added at a known angle, and the unbalance vector it causes is measured. == Other balancing machine types == Static balancing machines differ from hard- and soft-bearing machines in that the part is not rotated to take a measurement. Rather than resting on its bearings, the part rests vertically on its geometric center. Once at rest, any movement by the part away from its geometric center is detected by two perpendicular sensors beneath the table and returned as unbalance. Static balancers are often used to balance parts with a diameter much larger than their length, such as fans. The advantages of using a static balancer are speed and price. However a static balancer can only correct in one plane, so its accuracy is limited. A blade balancing machine attempts to balance a part in assembly, so minimal correction is required later on. Blade mass balancing is typically done for short blades, while long blades may require moment weighing in one or two axes. Long blades that are also wide may require their axial moment to be measured to optimize hub stress distribution.
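The trial-weight calibration described here is commonly formalized as the single-plane influence-coefficient method; the sketch below assumes that method and uses invented numbers, representing each unbalance vector as a complex number (amplitude and angle).

```python
import cmath

def correction_weight(v0, v1, trial):
    """Single-plane influence-coefficient balancing (illustrative sketch).

    v0:    initial unbalance vector (complex: amplitude * e^{i*angle})
    v1:    vibration vector measured with the trial weight attached
    trial: trial weight as a complex number (mass * e^{i*angle})

    The influence coefficient alpha maps added weight to added vibration;
    the correction weight cancels the original vibration: w = -v0 / alpha.
    """
    alpha = (v1 - v0) / trial   # machine response per unit weight
    return -v0 / alpha          # weight that drives the vibration to zero

# Synthetic check: if the machine responds with alpha = 2*e^{i*30 deg},
# the computed correction exactly cancels the initial unbalance.
alpha_true = 2 * cmath.exp(1j * cmath.pi / 6)
v0 = 5 * cmath.exp(1j * 1.0)                 # initial unbalance vector
trial = 1.5 * cmath.exp(1j * 0.4)            # known weight at a known angle
v1 = v0 + alpha_true * trial                 # reading with trial weight on
w = correction_weight(v0, v1, trial)
residual = v0 + alpha_true * w               # vibration after correction
```

In a soft-bearing machine this calibration is repeated per correction plane, since the influence coefficients are not known in advance.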
|
{"page_id": 6142621, "title": "Balancing machine"}
|
Micro-Star International Co., Ltd. (commonly known as MSI; Chinese: 微星科技股份有限公司) is a Taiwanese multinational information technology corporation headquartered in New Taipei City, Taiwan. It designs, develops and provides computer hardware as well as related products and services, including laptops, desktops, motherboards, graphics cards, all-in-one PCs, servers, industrial computers, PC peripherals, and car infotainment products, among other products. The company has a primary listing on the Taiwan Stock Exchange and was established on August 4, 1986, by five founders – Hsu Hsiang (a.k.a. Joseph Hsu), Huang Chin-Ching (a.k.a. Jeans Huang), Lin Wen-Tung (a.k.a. Frank Lin), Yu Hsien-Neng (a.k.a. Kenny Yu), and Lu Chi-Lung (a.k.a. Henry Lu). == Operations == First starting its business in New Taipei City, Taiwan, MSI later expanded into China, setting up its Bao'an plant in Shenzhen in 2000 and establishing research and development facilities in Kunshan in 2001. It also provides global warranty service in North America, Central/South America, Asia, Australia and Europe. MSI's offices in Zhonghe District, New Taipei City, Taiwan, serve as the company's headquarters, and house a number of different divisions and services. Manufacturing initially took place at plants in Taiwan, but has been moved elsewhere. Many MSI graphics cards are manufactured at its plant in mainland China. The company has branch offices in the Americas, Europe, Asia, Australia and South Africa. As of 2015, the company has a global presence in over 120 countries. MSI and Syrma SGS announced their collaboration to make laptops in Chennai on January 10, 2025. MSI will transfer technology to Syrma SGS for localized production in India. == Products == The company first built its reputation on developing and manufacturing computer motherboards and graphics cards. It established its subsidiary FUNTORO in 2008 for the vehicle infotainment market. 
It provides many computer and tech oriented products including laptops, desktops,
|
{"page_id": 941245, "title": "Micro-Star International"}
|
planetary nebulae once they become hot enough to ionise their ejected outer layers, it is thought that IRAS 08544−4431 is not massive enough to do this. == Dusty disc == The warm material surrounding IRAS 08544−4431 has been resolved using interferometry with the AMBER and MIDI instruments at the Very Large Telescope. It is a circumbinary disc surrounding both stars, is heated mainly by the primary post-AGB star, and has a total mass of 0.015 M☉. The disc starts 9 AU from the stars and is approximately 4 AU thick at its inner edge. The thick disc protects much of the dust from direct heating out to 70 AU from the stars. Beyond 70 AU, the disc is thick enough to receive direct radiation from the stars. The disc is at a temperature of 1,150 K. Although the companion is far less luminous than the primary, it is brighter than expected, especially at infrared wavelengths. It is suspected to be a main sequence star with its own compact accretion disc. The best images of the disc and stars, taken using the PIONIER interferometer, show the primary star to be 0.5 mas across, the secondary to be an unresolved point source 0.91 mas away, and the circumbinary disc to be 14.15 mas in diameter. The disc is oriented at 19° to the plane of the sky aligned at an angle of about 6° away from N-S. == References ==
|
{"page_id": 54456719, "title": "IRAS 08544−4431"}
|
of realms of reality and the more restricted sub-fields of ontological pluralism (that examines what exists in each of these realms) and epistemological pluralism (that deals with the methodology for establishing knowledge about these realms). === Ancient pluralism === In ancient Greece, Empedocles wrote that the basic constituents of reality were fire, air, water and earth, although he used the word "root" rather than "element" (στοιχεῖον; stoicheion), which appeared later in Plato. From the association (φιλία; philia) and separation (νεῖκος; neikos) of these indestructible and unchangeable root elements, all things came to be in a fullness (πλήρωμα; pleroma) of ratio (λόγος; logos) and proportion (ἀνάλογος; analogos). Similar to Empedocles, Anaxagoras was another Classical Greek philosopher with links to pluralism. His metaphysical system is centered on a mechanically necessitated nous, which governs, combines and diffuses the various "roots" of reality (known as homoioneroi). Unlike Empedocles' four "root elements" and similar to Democritus' multitude of atoms (yet not physical in nature), these homoioneroi are used by Anaxagoras to explain the multiplicity in reality and becoming. This pluralist theory of being influenced later ideas such as Gottfried Wilhelm Leibniz's theory of monads and Julius Bahnsen's idea of will henades. The notion of a governing nous would also be used by Socrates and Plato, though they would assign it a more active and rational role in their philosophical systems. Aristotle incorporated these elements, but his substance pluralism was not material in essence. His hylomorphic theory allowed him to maintain a reduced set of basic material elements as per the Milesians, while answering for the ever-changing flux of Heraclitus and the unchanging unity of Parmenides. 
In his Physics, prompted by Zeno's paradoxes of the continuum, as well as by both logical and empirical considerations for natural science, he presented numerous arguments against the atomism of Leucippus and Democritus, who posited a
|
{"page_id": 231376, "title": "Pluralism (philosophy)"}
|
are hungry for oil - for transport, industry, etc. So demand is rising. Put together falling supply and rising demand and you get one thing: much higher prices for the foreseeable future. Energy security and alternative energy Some countries have a lot of energy resources, others don't. And if you don't, you have a major geopolitical problem. It's called dependency. Put this issue together with peak oil, and it points in one direction: alternative energy. But some green activists are unrealistic about this: solar, wind, tidal, etc. can only meet a fraction of the world's energy needs. The one technology that might make a difference is nuclear. And that, of course, is controversial. Shortages of other resources and commodities The bad news continues. As well as a shortage of energy, we're also short of water (in China, Southern Europe and the Middle East). And as living standards rise, we'll find that many agricultural commodities (eg wheat, corn, meat) are in short supply as well. # Management and business For me, branding and design are the key issues. Customers can easily find good quality and value-for-money; all our competitors offer this. To survive, you need more than this, you need branding. Without a strong brand, you have no customer loyalty and no pricing power. And linked to branding is design - customers will pay for design. These are the major battlefields in modern business, not cost or
|
{"source": 972, "title": "from dpo"}
|
the computed similarities and fairness? ### 4.1. Answer to RQ1. How does true preference alignment compare to similarity-based alignment in measuring fairness within RecLLMs? Results for this RQ are summarized in Table 4 and Figure 2. The comparison between true preference alignment ($\beta_{pref}$) and similarity-based alignment ($\beta_{item}$) reveals distinct trade-offs in understanding and evaluating fairness within RecLLMs. Overall, true preference alignment consistently results in lower similarity scores across all sampling strategies and sensitive attribute groups, indicating a divergence between the two recommendation approaches. For instance, under random sampling, the Jaccard similarity for the Sex category is 0.0313 for $\beta_{pref}$ compared to 0.1680 for $\beta_{item}$. Similarly, in the Age category, true preference alignment yields similarity scores of 0.0226 (Teen), 0.0199 (Young), and 0.0181 (Adult), markedly lower than their $\beta_{item}$ counterparts of 0.1669, 0.1847, and 0.1421 respectively. This reduction underscores
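As a hedged sketch of the Jaccard similarity used in these comparisons (the item IDs and list contents are invented for illustration):

```python
def jaccard(a, b):
    """Jaccard similarity of two recommendation lists, treated as sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # convention: two empty lists are identical
    return len(a & b) / len(a | b)

# Hypothetical top-5 lists from two recommenders
recs_pref = ["m1", "m2", "m3", "m4", "m5"]
recs_item = ["m4", "m5", "m6", "m7", "m8"]
score = jaccard(recs_pref, recs_item)   # 2 shared items / 8 distinct = 0.25
```

Low scores such as the 0.0313 reported above thus mean the two recommenders share almost no items in their lists for that group.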
|
{"source": 2636, "title": "from dpo"}
|
probabilities is small. Other taggers will assign so-called ambiguity or portmanteau tags, as in the following example from the BNC: (23) Ford/NP0-NN1 faces/NN2-VVZ strike/VVB-NN1 over/AVP-PRP pay/NN1-VVB deal/NN1-VVB ./PUN (BNC AAC) First, such cases must obviously be kept in mind when constructing queries: the query ⟨VVB⟩ will miss the word strike in this sentence (as will the query ⟨NN1⟩). In order to find words with ambiguity tags, we have to indicate that the tag we are interested in may be preceded or followed by another tag (one such way is provided by regular expressions, see Section 4.1 below). Second, such cases demonstrate vividly why the two operational definitions of parts of speech – by tagging guideline and by tagger – are fundamentally different: no human annotator, even one with a very sketchy tagging guideline, would produce the annotation in (23). On the other hand, it is simply not feasible to annotate a 100-million-word corpus using human annotators (though advances in crowd-sourcing technology may change this), so we are stuck with a choice between using a tagger or having no POS annotation at all. Existing taggers tend to have an accuracy of around 95 to 97 percent. For example, it has been estimated (Leech et al. 1994) that 1.5 percent of all words in the BNC are tagged incorrectly. In a further 3.5 percent, the automatic tagger was not able to make a decision, assigning ambiguity tags (as shown in (23) above). This leaves 95 percent of the words in the corpus tagged correctly and unambiguously. As impressive as this sounds at first, a closer look reveals two problems. First, an accuracy of 95 percent means that roughly one word in 20 is tagged incorrectly. Assuming a mean sentence length of 20 words (actual estimates
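One way to phrase such a query as a regular expression, a hedged sketch rather than any particular corpus tool's syntax: allow the target tag to be optionally preceded or followed by another tag joined with a hyphen, as in the BNC portmanteau tags above.

```python
import re

# Match a word tagged VVB, either plainly (word/VVB) or inside a
# portmanteau tag (word/VVB-NN1 or word/NN1-VVB); tag names assumed
# to be uppercase letters and digits, as in the C5 tagset.
pattern = re.compile(r"(\S+)/(?:[A-Z0-9]+-)?VVB(?:-[A-Z0-9]+)?")

# The corpus line reproduced from example (23)
sentence = ("Ford/NP0-NN1 faces/NN2-VVZ strike/VVB-NN1 over/AVP-PRP "
            "pay/NN1-VVB deal/NN1-VVB ./PUN")
hits = pattern.findall(sentence)
```

A query for the bare tag would find only plain `word/VVB` tokens; the optional groups are what recover strike, pay, and deal here.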
|
{"source": 4958, "title": "from dpo"}
|
unit variance. (Either operation can be de-selected via arguments center and scale.) Other operations can only be applied to vectors, and so must be applied to each column in turn. This is the purpose of the function lapply, so we could scale the columns of crabs by > scrabs using a simple anonymous function (and if ... else; see page 58). The right-hand side gives a list without row names, which we use to replace all the columns in the data frame. We can find out which variables are numeric by > sapply(crabs, is.numeric) sp sex index FL RW CL CW BD F F T T T T T T Function sapply is very similar to lapply, but attempts to simplify the result to a vector or matrix. Operations on rows Operating on each row is much trickier. Whereas each column is a variable of a single class, a row can be rather diverse. However, in the special case that all columns are numeric, or all are character or factor, we can make progress by coercing the data frame to a (numeric or character) matrix. Function apply operates on arrays, 18 but here we need only the
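The same column-wise pattern can be sketched outside R; this hedged Python analogue (with invented stand-in columns, not the real crabs data) mirrors lapply/sapply over the columns of a data frame:

```python
from statistics import mean, stdev

def scale(values):
    """Centre to zero mean and scale to unit variance, like R's scale()."""
    m, s = mean(values), stdev(values)
    return [(v - m) / s for v in values]

# Toy stand-in for the crabs measurements (columns stored as dict entries)
crabs = {
    "sp": ["B", "B", "O", "O"],   # species label: not numeric
    "FL": [8.1, 8.8, 9.2, 9.6],   # frontal lobe size (invented values)
    "RW": [6.7, 7.7, 7.8, 7.9],   # rear width (invented values)
}

# Like sapply(crabs, is.numeric): one result per column
is_num = {k: all(isinstance(v, (int, float)) for v in vals)
          for k, vals in crabs.items()}

# Like lapply with an anonymous function and if ... else over the columns
scaled = {k: scale(v) if is_num[k] else v for k, v in crabs.items()}
```

As in R, working row-wise would be trickier here too, since a row mixes the string column with the numeric ones.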
|
{"source": 6256, "title": "from dpo"}
|
accelerates each particle toward its optimum locations according to simple mathematical rules. In a related approach, Shvalb et al. (2024) introduced a statistical-physics-based framework for controlling large-scale multi-robot systems. By modeling robots as particles within a statistical ensemble, the study leverages macroscopic parameters—such as density and flow fields—to guide collective behavior without the need for individual identification or direct communication between agents. This method enables scalable and robust control of robot swarms, drawing conceptual parallels to particle swarm optimization by utilizing global information to influence local agent dynamics. Particle swarm optimization has been applied in many areas. It has few parameters to adjust, and a version that works well for a specific application can also work well with minor modifications across a range of related applications. A book by Kennedy and Eberhart describes some philosophical aspects of particle swarm optimization applications and swarm intelligence. An extensive survey of applications was made by Poli. ==== Altruism ==== Researchers in Switzerland have developed an algorithm based on Hamilton's rule of kin selection. The algorithm shows how altruism in a swarm of entities can, over time, evolve and result in more effective swarm behaviour. == Biological swarming == The earliest evidence of swarm behaviour in animals dates back about 480 million years. Fossils of the trilobite Ampyx priscus have recently been described as clustered in lines along the ocean floor. The animals were all mature adults, and were all facing the same direction as though they had formed a conga line or a peloton. It has been suggested they line up in this manner to migrate, much as spiny lobsters migrate in single-file queues; it has also been suggested that the formation is the precursor for mating, as with the fly Leptoconops torrens. 
The findings suggest animal collective behaviour has very early evolutionary
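The "simple mathematical rules" that accelerate each particle toward its best-known locations can be sketched concretely. This is a minimal global-best PSO on a toy quadratic objective, not any specific published variant; the inertia and acceleration coefficients are illustrative values.

```python
import random

def pso(f, dim=2, n=20, iters=100, seed=0):
    """Minimal global-best particle swarm optimization (illustrative)."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive and social coefficients
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]              # each particle's best position
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]      # swarm's best-known position
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Pull toward the particle's own best and the swarm's best
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

sphere = lambda x: sum(t * t for t in x)
best, best_val = pso(sphere)
```

With only three tunable coefficients, this matches the point above that PSO has few parameters to adjust.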
|
{"page_id": 207874, "title": "Swarm behaviour"}
|
Dangerous goods are substances that are a risk to health, safety, property or the environment during transport. Certain dangerous goods that pose risks even when not being transported are known as hazardous materials (syllabically abbreviated as HAZMAT or hazmat). An example of dangerous goods is hazardous waste which is waste that threatens public health or the environment. Hazardous materials are often subject to chemical regulations. Hazmat teams are personnel specially trained to handle dangerous goods, which include materials that are radioactive, flammable, explosive, corrosive, oxidizing, asphyxiating, biohazardous, toxic, poisonous, pathogenic, or allergenic. Also included are physical conditions such as compressed gases and liquids or hot materials, including all goods containing such materials or chemicals, or may have other characteristics that render them hazardous in specific circumstances. Dangerous goods are often indicated by diamond-shaped signage on the item (see NFPA 704), its container, or the building where it is stored. The color of each diamond indicates its hazard, e.g., flammable is indicated with red, because fire and heat are generally of red color, and explosive is indicated with orange, because mixing red (flammable) with yellow (oxidizing agent) creates orange. A nonflammable and nontoxic gas is indicated with green, because all compressed air vessels were this color in France after World War II, and France was where the diamond system of hazmat identification originated. == Global regulations == The most widely applied regulatory scheme is that for the transportation of dangerous goods. The United Nations Economic and Social Council issues the UN Recommendations on the Transport of Dangerous Goods, which form the basis for most regional, national, and international regulatory schemes. 
For instance, the International Civil Aviation Organization has developed dangerous goods regulations for air transport of hazardous materials that are based upon the UN model but modified to accommodate unique aspects
|
{"page_id": 1476975, "title": "Dangerous goods"}
|
the beads may be controlled by moving the magnets along the vertical axis. Moving them up decreases the field strength at the position of the bead and vice versa. Torques on the magnetic beads may be exerted by turning the magnets around the vertical axis to change the direction of the field. The size of the magnets is in the order of millimeters as well as their spacing. Electromagnets The use of electromagnets in magnetic tweezers has the advantage that the field strength and direction can be changed just by adjusting the amplitude and the phase of the current for the magnets. For this reason, the magnets do not need to be moved which allows a faster control of the system and reduces mechanical noise. In order to increase the maximum field strength, a core of a soft paramagnetic material with high saturation and low remanence may be added to the solenoid. In any case, however, the typical field strengths are much lower compared to those of permanent magnets of comparable size. Additionally, using electromagnets requires high currents that produce heat that may necessitate a cooling system. === Bead tracking system === The displacement of the magnetic beads corresponds to the response of the system to the imposed magnetic field and hence needs to be precisely measured: In a typical set-up, the experimental volume is illuminated from the top so that the beads produce diffraction rings in the focal plane of an objective which is placed under the tethering surface. The diffraction pattern is then recorded by a CCD-camera. The image can be analyzed in real time by a computer. The detection of the position in the plane of the tethering surface is not complicated since it corresponds to the center of the diffraction rings. The precision can be up
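The in-plane detection step (finding the center of the diffraction rings) can be sketched as an intensity-weighted centroid, one common first approximation; the synthetic ring image and all names below are invented for illustration, not taken from any tracking package.

```python
import math

def intensity_centroid(img):
    """In-plane bead position as the intensity-weighted centroid.

    img is a list of rows of non-negative pixel intensities;
    returns the (x, y) position in pixel coordinates.
    """
    total = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total

# Synthetic diffraction-ring image: a bright ring of radius 5 pixels
# centered on (12.0, 9.0) in a 25x20 frame
cx, cy = 12.0, 9.0
img = [[1.0 if abs(math.hypot(x - cx, y - cy) - 5.0) < 1.0 else 0.0
        for x in range(25)]
       for y in range(20)]
est_x, est_y = intensity_centroid(img)
```

Because the ring is symmetric about its center, the centroid recovers the bead position; real set-ups refine this with sub-pixel fitting or cross-correlation against reference images.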
|
{"page_id": 9368062, "title": "Magnetic tweezers"}
|
The Carl R. Woese Institute for Genomic Biology (IGB) is an interdisciplinary facility for genomics research at the University of Illinois Urbana-Champaign. The Institute was built in 2006 to centralize biotechnology research at the University of Illinois. Current research at the IGB explores the genomic bases of a wide range of phenomena, including the progression of cancer, the ecological impact of global change, tissue and organ growth, and the diversity of animal behavior. == History == === Construction === Plans for what would become the Carl R. Woese Institute for Genomic Biology (IGB) were formed in the late 1990s. In 2000, $67.5 million was appropriated by the state of Illinois for its construction. Due to economic hardships, the state halted plans for construction in 2001. In 2002, funds were re-appropriated. Construction began in April 2004 and was completed in November 2006. The building was dedicated in March 2007. Initially named the Institute for Genomic Biology, it officially changed its name to the Carl R. Woese Institute for Genomic Biology in 2015 to honor the scientific contributions of Carl R. Woese. === Leadership === The IGB was initially led by Harris Lewin, then a professor in the Department of Animal Sciences at the University of Illinois. Lewin served as the founding director until 2011, when he accepted the position of Research Vice Chancellor at the University of California, Davis. Gene E. Robinson, a professor in the Entomology department, took over as Interim Director, and was named the new Director of IGB in January 2012. == Research == The IGB houses approximately 130 faculty and 600 graduate students, postdoctoral fellows, and research personnel. Research is organized into themes, which are reviewed every five years; new themes may be added or existing themes modified to reflect the current state of genomics research. Current themes
|
{"page_id": 39607668, "title": "Carl R. Woese Institute for Genomic Biology"}
|
ConceptBase (a.k.a. ConceptBase.cc) is a deductive and object-oriented database management system developed at University of Skövde. Earlier development was done at University of Passau (1987-1992), University of Aachen (1992-2003), and University of Tilburg (1997-2013). It is mainly used for conceptual modeling and metamodeling in the domain of software engineering and related domains. ConceptBase.cc is free and open-source software. ConceptBase combines the following features: Object-oriented concepts such as classes and inheritance Deductive rules evaluated by a Datalog engine Active rules conforming to the event condition action (ECA) paradigm Recursive function definitions Metamodeling with arbitrarily many abstraction levels (metaclasses, meta metaclasses) ConceptBase implements O-Telos, which is a variant of the knowledge representation Telos. == See also == MetaCASE tool == References == == External links == ConceptBase ConceptBase ECArules
|
{"page_id": 14208798, "title": "ConceptBase"}
|
announced that the nuclear reactor was operational. The Khushab reactor project was initiated in 1986 by Munir Khan, who informed the world that the reactor was totally indigenous, i.e. that it was designed and built by Pakistani scientists and engineers. Various Pakistani industries contributed 82% of the reactor's construction. The Project-Director for this project was Sultan Bashiruddin Mahmood. According to public statements made by US Government officials, this heavy-water reactor can produce 8 to 10 kg of plutonium per year, with production increasing as newer facilities are developed, sufficient for at least one nuclear weapon. The reactor could also produce 3H if it were loaded with 6Li, although this is unnecessary for the purposes of nuclear weapons, because modern nuclear weapon designs use 6Li directly. According to J. Cirincione of the Carnegie Endowment for International Peace, Khushab's plutonium production capacity has allowed Pakistan to develop lighter nuclear warheads that would be easier to deliver to any place in the range of its ballistic missiles. PAEC also created a separate electromagnetic isotope separation program alongside the enrichment program, under Dr. G D Allam, a theoretical physicist. The plutonium separation takes place at the New Laboratories, a reprocessing plant, which was completed by 1981 by PAEC and is next to the Pakistan Institute of Nuclear Science and Technology (PINSTECH) near Islamabad, which is not subject to IAEA inspections and safeguards. In late 2006, the Institute for Science and International Security released intelligence reports and imagery showing the construction of a new plutonium reactor at the Khushab nuclear site. The reactor is deemed to be large enough to produce enough plutonium to facilitate the creation of as many as "40 to 50 nuclear weapons a year." The New York Times carried the story with the insight
|
{"page_id": 872930, "title": "Pakistan and weapons of mass destruction"}
|
calls the Toyota Way "a system designed to provide the tools for people to continually improve their work." According to Liker, the 14 principles of The Toyota Way are organized into four sections: long-term philosophy, the right process will produce the right results, add value to the organization by developing your people, and continuously solving root problems drives organizational learning. === Long-term philosophy === The first principle involves managing with a long-term view rather than for short-term gain. It reflects a belief that people need a purpose to find motivation and establish goals. === Right process will produce right results === The following seven principles are focused on process with an eye towards a quality outcome. Following these principles, work processes are redesigned to eliminate waste (muda) through continuous improvement — kaizen. The seven types of muda are (1) overproduction; (2) waiting, time on hand; (3) unnecessary transport or conveyance; (4) overprocessing or incorrect processing; (5) excess inventory; (6) motion; and (7) defects. The principles in this section empower employees despite the automaker's bureaucratic processes. Any employee in the Toyota Production System has the authority to stop production to signal a quality issue, emphasizing that quality takes precedence (jidoka). The way the Toyota bureaucratic system is implemented allows for continuous improvement (kaizen) from the people affected by that system so that any employee may aid in the growth and improvement of the company. Recognition of the value of employees is also part of the principle of measured production rate (heijunka), as a level workload helps avoid overburdening people and equipment (muri), but this is also intended to minimize waste (muda) and avoid uneven production levels (mura). 
These principles are also designed to ensure that only essential materials are employed (to avoid overproduction), that the work environment is maintained efficiently (the
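The leveling idea behind heijunka can be illustrated with a short sketch (a simplified model for illustration only, not Toyota's actual scheduling system): rather than producing each product type in one large batch, demand is interleaved into a mixed sequence so that the workload stays even over time. The demand figures below are hypothetical.

```python
def heijunka_sequence(demand):
    """Build a level (mixed-model) production sequence: at each step,
    schedule the product type that is furthest behind its ideal pace,
    spreading every type as evenly as possible over the run."""
    total = sum(demand.values())
    produced = {k: 0 for k in demand}
    sequence = []
    for t in range(1, total + 1):
        # Ideal cumulative output of type k after t slots is demand[k]*t/total;
        # pick the type with the largest deficit against that ideal.
        item = max(demand, key=lambda k: demand[k] * t / total - produced[k])
        sequence.append(item)
        produced[item] += 1
    return sequence

# Hypothetical demand per shift: 4 sedans, 2 SUVs, 2 vans.
print(heijunka_sequence({"sedan": 4, "suv": 2, "van": 2}))
```

Note how the batched alternative (sedan, sedan, sedan, sedan, suv, suv, van, van) would overburden one part of the line and starve another, which is exactly the unevenness (mura) the principle is meant to avoid.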
|
{"page_id": 9040377, "title": "The Toyota Way"}
|
The ampere balance (also current balance or Kelvin balance) is an electromechanical apparatus used for the precise measurement of the SI unit of electric current, the ampere. It was invented by William Thomson, 1st Baron Kelvin. The current to be measured is passed in series through two coils of wire, one of which is attached to one arm of a sensitive balance. The magnetic force between the two coils is measured by the amount of weight needed on the other arm of the balance to keep it in equilibrium. This is used to calculate the numerical value of the current. The main weakness of the ampere balance is that the calculation of the current involves the dimensions of the coils. So the accuracy of the current measurement is limited by the accuracy with which the coils can be measured, and by their mechanical rigidity. A more complicated version of an ampere balance, which removes this source of inaccuracy by a calibration step, is the Kibble balance, invented by Bryan Kibble in 1975. This experimental device was developed at government metrology laboratories worldwide with the goal of providing a more accurate definition of the kilogram, the world's standard of mass. In this application, the Kibble balance functions in the reverse sense to the ampere balance: it was used to weigh the International Prototype of the Kilogram, defining the kilogram in terms of an electric current and a voltage. In 2019, the kilogram, ampere, kelvin, and mole were redefined in terms of fundamental constants, removing the dependence on physical objects. == Usage == Approximate readings may be obtained by reading the position of the weight on the scale, or a more accurate reading may be obtained as follows: The upper edge of the shelf on which the weights slide is graduated into equal
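The weighing step can be turned into a current value with a short calculation (a simplified sketch under stated assumptions: the coil force is modeled as F = k·I², where the constant k is fixed by the coil geometry, which is exactly the dimensional limitation the text describes). Balancing that force against a counterweight of mass m gives m·g = k·I², so I = √(m·g/k). The numerical values below are hypothetical.

```python
import math

def balance_current(mass_kg, k, g=9.80665):
    """Infer the current from the counterweight that balances the
    inter-coil force, assuming the model F = k * I**2 with k set
    by the (imperfectly known) coil geometry."""
    force = mass_kg * g          # gravitational force on the counterweight, N
    return math.sqrt(force / k)  # amperes

# Hypothetical figures: a 2 g counterweight balanced with k = 1.96e-4 N/A^2.
print(round(balance_current(0.002, 1.96e-4), 2))
```

Because I depends on k, any error in measuring the coil dimensions propagates directly into the current reading, which is why the Kibble balance's calibration step is such an improvement.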
|
{"page_id": 692024, "title": "Ampere balance"}
|
One of the ways biological systems adapt to environments is through the use of redundancy. Many organs are redundant in humans. The kidney is one such example. Humans generally only need one kidney, but having a second kidney allows room for failure. This same principle may be taken to apply to software, but there are some challenges. When applying the principle of redundancy to computer science, blindly adding code is not suggested. Blindly adding code introduces more errors, makes the system more complex, and renders it harder to understand. Code that does not provide any reinforcement to the already existing code is unwanted. The new code must instead possess equivalent functionality, so that if a function is broken, another providing the same function can replace it, using manual or automated software diversity. To do so, the new code must know how and when to accommodate the failure point. This means more logic needs to be added to the system. But as a system adds more logic, components, and increases in size, it becomes more complex. Thus, when making a more redundant system, the system also becomes more complex and developers must consider balancing redundancy with complexity. Currently, computer science practices do not focus on building robust systems. Rather, they tend to focus on scalability and efficiency. One of the main reasons why there is no focus on robustness today is because it is hard to do in a general way. == Areas == === Robust programming === Robust programming is a style of programming that focuses on handling unexpected termination and unexpected actions. It requires code to handle these terminations and actions gracefully by displaying accurate and unambiguous error messages. These error messages allow the user to more easily debug the program. ==== Principles ==== Paranoia When building software, the
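The redundancy-with-accommodation idea can be sketched as follows (an illustrative pattern, not code from the article; all function names are hypothetical): two independently written implementations provide equivalent functionality, and a wrapper holds the extra "how and when to accommodate the failure" logic, tolerating the failure of either implementation at the cost of added complexity.

```python
def mean_primary(values):
    """Primary implementation: straightforward arithmetic mean."""
    return sum(values) / len(values)   # fails on an empty list

def mean_backup(values):
    """Diverse backup implementation: incremental (running) mean."""
    avg, n = 0.0, 0
    for v in values:
        n += 1
        avg += (v - avg) / n
    if n == 0:
        raise ValueError("mean of empty sequence")
    return avg

def robust_mean(values):
    """Redundant wrapper: the extra accommodation logic that
    redundancy requires.  Tries the primary implementation, falls
    back to the backup, and raises an unambiguous error if both
    fail -- mirroring robust programming's clear error messages."""
    for impl in (mean_primary, mean_backup):
        try:
            return impl(values)
        except (ZeroDivisionError, ValueError):
            continue
    raise ValueError(
        f"robust_mean: all implementations failed on input of length {len(values)}")

print(robust_mean([1.0, 2.0, 3.0]))  # → 2.0
```

The wrapper illustrates the trade-off the text describes: the system now survives a broken implementation, but only by carrying more logic and more code paths than either implementation alone.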
|
{"page_id": 27206541, "title": "Robustness (computer science)"}
|
in was described by Digital Domain as the most difficult item of clothing they could have chosen, due to its transparency and layering. To render 150 frames (equating to 5–6 seconds of actual footage) of Mya moving in the gown in low-resolution required approximately 6 hours of processing time; the final high-resolution shots took longer. Mya's rendering was so complex she crashed the computers at Digital Domain several times. Mya's creators said they had difficulty making Mya appear as if she were "alive", and focused intensely on movements, specular highlights and eye blinks in order to "bring her to life". The specular highlights also had the intended effect of making Mya shine in an inhuman manner; when the light hit Mya at certain angles, a rainbow would appear. Mya's skin was described as "part china doll, part disco ball." Her distinct shine was based on that of a china plate that the commercial's director, Alex Proyas, had bought in Australia. In some shots of Mya, images were deliberately downgraded and had scan lines added to make the character appear more artificial. Mya's visual representation, however, appeared solely during advertising and on her website. Only her voice was to be heard when using the actual program. Demonstrations of Mya's abilities and images of the character could be viewed at the now defunct website, mya.com. Raimondi said he believed the name Mya was a play on the words 'My assistant', as did Sidney Matrix in the book Cyberpop: Digital Lifestyles and Commodity Culture. == Debut and appearances == Mya made her debut on March 26, 2000 in a 60-second advertisement shown during the 72nd Academy Awards. The ad featured Mya dressed in her evening gown and wearing a headset. In the ad Mya steps out of a stretch limousine and walks down
|
{"page_id": 43236335, "title": "Mya (program)"}
|