Grain growth occurs by the movement of grain boundaries and also by coalescence (i.e. like water droplets); it is thus a competition between ordered coalescence and the movement of grain boundaries. Boundary movement may be discontinuous, and the direction of motion may change suddenly during abnormal grain growth. One grain may grow into another grain whilst being consumed from the other side, and the rate of consumption often increases when the grain is nearly consumed. A curved boundary typically migrates towards its centre of curvature. == Classical driving force == The boundary between one grain and its neighbour (grain boundary) is a defect in the crystal structure, and so it is associated with a certain amount of energy. As a result, there is a thermodynamic driving force for the total area of boundary to be reduced. If the grain size increases, accompanied by a reduction in the actual number of grains per volume, then the total area of grain boundary will be reduced. In the classical theory, the local velocity of a grain boundary at any point is proportional to the local curvature of the grain boundary, i.e. v = Mσκ, where v is the velocity of the grain boundary, M is the grain boundary mobility (which generally depends on the orientation of the two grains), σ is the grain boundary energy, and κ is the sum of the two principal surface curvatures. For example, the shrinkage velocity of a spherical grain embedded inside another grain is v = Mσ(2/R), where R is the radius of the sphere. This driving pressure is very similar in nature to the Laplace pressure that occurs in foams. In comparison to phase transformations the energy available to
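The shrinking-sphere case of the classical relation v = Mσκ can be integrated in closed form: dR/dt = −2Mσ/R gives R(t)² = R₀² − 4Mσt. A minimal sketch, with illustrative (not material-specific) values for the mobility M and boundary energy σ:

```python
# Sketch: curvature-driven shrinkage of a spherical grain, v = M*sigma*(2/R).
# Integrating dR/dt = -2*M*sigma/R gives R(t)^2 = R0^2 - 4*M*sigma*t.
# The numerical values of M and sigma below are illustrative assumptions.

def sphere_radius(R0, M, sigma, t):
    """Radius at time t; returns 0.0 once the grain has been consumed."""
    R2 = R0 ** 2 - 4.0 * M * sigma * t
    return R2 ** 0.5 if R2 > 0.0 else 0.0

def shrinkage_velocity(R, M, sigma):
    """Instantaneous boundary velocity v = M*sigma*(2/R)."""
    return M * sigma * 2.0 / R

if __name__ == "__main__":
    R0, M, sigma = 1e-5, 1e-14, 0.5   # m, m^4/(J*s), J/m^2 -- illustrative
    t_consumed = R0 ** 2 / (4.0 * M * sigma)
    print(f"grain consumed at t = {t_consumed:.0f} s")
    # The velocity grows as R shrinks, matching the observation above that
    # the rate of consumption increases when the grain is nearly consumed.
    for frac in (1.0, 0.5, 0.1):
        print(frac, shrinkage_velocity(frac * R0, M, sigma))
```

Note how the 1/R dependence reproduces the qualitative observation that consumption accelerates as the grain is nearly gone.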
|
{"page_id": 4301763, "title": "Grain growth"}
|
difference in arrival time between Fermi and INTEGRAL helped to improve the sky localization. This GRB was relatively faint given the proximity of the host galaxy NGC 4993, possibly due to its jets not being pointed directly toward Earth, but rather at an angle of about 30 degrees off axis. == Electromagnetic follow-up == A series of alerts to other astronomers was issued, beginning with a report of the gamma-ray detection and single-detector LIGO trigger at 13:21 UTC, and a three-detector sky location at 17:54 UTC. These prompted a massive search by many survey and robotic telescopes. In addition to the expected large size of the search area (about 150 times the area of a full moon), this search was challenging because the search area was near the Sun in the sky and thus visible for at most a few hours after dusk for any given telescope. In total six teams (One-Meter, Two Hemispheres (1M2H), DLT40, VISTA, MASTER, DECam, and Las Cumbres Observatory (Chile)) imaged the same new source independently in a 90-minute interval. The first to detect optical light associated with the collision was the 1M2H team running the Swope Supernova Survey, which found it in an image of NGC 4993 taken 10 hours and 52 minutes after the GW event by the 1-meter diameter (3.3 ft) Swope Telescope operating in the near infrared at Las Campanas Observatory, Chile. They were also the first to announce it, naming their detection SSS17a in a circular issued 12h26m post-event. The new source was later given an official International Astronomical Union (IAU) designation AT 2017gfo. The 1M2H team surveyed all galaxies in the region of space predicted by the gravitational wave observations, and identified a single new transient. By identifying the host galaxy of the merger, it is possible to provide
|
{"page_id": 55053716, "title": "GW170817"}
|
and water spray as the laser rangefinder. A continuous wave frequency modulated (CWFM) or pulsed radar waveform is normally used to provide range resolution. Since the beam diverges, the linear size of the footprint is directly proportional to range, while the area of the footprint is proportional to the square of range. One example of a microwave range finder is the Miros SM-094, which is designed to measure waves and water level, including tides. This sensor is used as an air gap (bridge clearance) sensor in NOAA's PORTS system. Another example is the WaveRadar REX, which is a derivative of a Rosemount tank radar. From data on the elevation of the surface of the water at three or more locations, a directional spectrum of wave height can be computed. The algorithm is similar to the one which generates a directional spectrum from data on heave (vertical motion), pitch and roll at a single location, as provided by a disc-shaped wave buoy. An array of three vertical radars, having footprints at the vertices of a horizontal, equilateral triangle, can provide the necessary data on water surface elevation. “Directional WaveGuide” is a commercial radar system based on this technique. It is available from the Dutch companies Enraf and Radac. === Marine navigation radars === Marine navigation radars (X band) provide sea clutter images which contain a pattern resembling a sea wave pattern. By digitizing the radar video signal it can be processed by a digital computer. Sea surface parameters may be calculated on the basis of these digitized images. The marine navigation radar operates in low grazing angle mode and wind generated surface ripple must be present. The marine navigation radar is non-coherent and is a typical example of an indirect wave sensor, because there is no direct relation between wave height
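The stated geometry, with the linear footprint size proportional to range and the footprint area proportional to range squared, follows from a diverging beam of fixed angular width. A minimal sketch (the beamwidth value is an illustrative assumption, not a parameter of any specific instrument):

```python
import math

# Sketch: footprint of a diverging radar beam of fixed angular beamwidth.
# The linear size of the footprint grows in direct proportion to range,
# and its area (circular approximation) grows with the square of range.
# The 5-degree beamwidth is an illustrative assumption.

def footprint_diameter(range_m, beamwidth_deg):
    return 2.0 * range_m * math.tan(math.radians(beamwidth_deg) / 2.0)

def footprint_area(range_m, beamwidth_deg):
    return math.pi * (footprint_diameter(range_m, beamwidth_deg) / 2.0) ** 2

if __name__ == "__main__":
    # Doubling the range doubles the diameter and quadruples the area.
    d1, d2 = footprint_diameter(10.0, 5.0), footprint_diameter(20.0, 5.0)
    a1, a2 = footprint_area(10.0, 5.0), footprint_area(20.0, 5.0)
    print(d2 / d1, a2 / a1)  # -> 2.0 4.0
```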
|
{"page_id": 14448292, "title": "Wave radar"}
|
The International Knockout Mouse Consortium (IKMC) is a scientific endeavour to produce a collection of mouse embryonic stem cell lines that together lack every gene in the genome, and then to distribute the cells to scientific researchers to create knockout mice to study. Many of the targeted alleles are designed so that they can generate both complete and conditional gene knockout mice. The IKMC was initiated on March 15, 2007, at a meeting in Brussels. By 2011, Nature reported that approximately 17,000 different genes had already been disabled by the consortium, "leaving only around 3,000 more to go". The consortium encompasses four major, high-throughput gene-targeted mutagenesis programs: the National Institutes of Health (NIH)-sponsored Knockout Mouse Program (KOMP) and the state-funded Texas Institute for Genomic Medicine (TIGM) in the U.S., the North American Conditional Mouse Mutagenesis (NorCOMM) Program in Canada, and the European Conditional Mouse Mutagenesis (EUCOMM) Programme in Europe. The first of its annual meetings of members and funders, hosted by the country of its rotating chair, was held at the NIH in Bethesda, Maryland, in the United States for 2007–2008, with Toronto, Canada, hosting for 2008–2009. == References == == External links ==
International Knockout Mouse Consortium
Knockout Mouse Project (KOMP) Repository
North American Conditional Mouse Mutagenesis (NorCOMM) Program (Archived 2022-03-07 at the Wayback Machine)
European Union Conditional Mouse Mutagenesis (EUCOMM) Programme
Texas Institute for Genomic Medicine (TIGM)
|
{"page_id": 19983315, "title": "International Knockout Mouse Consortium"}
|
issues, with the argument that the value of the lake as a recreation area was considered by some to outweigh the potential benefits of the polder. Additionally, it was argued that, in case of drought, the lake would be very useful for the production of drinking water, and that in heavy weather the lake serves as a buffer zone. Finally, in 2003, it was decided not to build this polder. However, the discussions never completely closed. In 2012, plans emerged to create the Marker Wadden, a group of islands designed to establish nature reserves in the north of the Markermeer. In contrast to the Markerwaard, no human occupation is planned. The creation process began in early 2016. == References ==
|
{"page_id": 1669107, "title": "Markerwaard"}
|
Neuralink Corp. is an American transhumanist neurotechnology company that has developed, as of 2024, implantable brain–computer interfaces (BCIs), also known as brain implants. It was founded by Elon Musk and a team of eight scientists and engineers. Neuralink was launched in 2016 and first publicly reported in March 2017. The company is based in Fremont, California, with plans to build a three-story building with office and manufacturing space near Austin, Texas, in Del Valle, about 10 miles east of Gigafactory Texas, Tesla's headquarters and manufacturing plant that opened in 2022. Since its founding, the company has hired several high-profile neuroscientists from various universities. By 2019, it had received $158 million in funding ($100 million was from Musk) and had 90 employees. At that time, Neuralink announced that it was working on a "sewing machine-like" device capable of implanting very thin (4 to 6 μm in width) threads into the brain, and demonstrated a system that reads information from a lab rat via 1,500 electrodes. It anticipated starting experiments with humans in 2020, but later moved that to 2023. As of May 2023, it has been approved for human trials in the United States. On January 29, 2024, Musk announced that Neuralink had successfully implanted a Neuralink device in a human and that the patient was recovering. The company has faced criticism for the large number of primates that were euthanized after medical trials. Veterinary records of the monkeys showed complications with surgically implanted electrodes. In September 2024, the company announced that its latest development effort, Blindsight, would enable blind people whose visual cortex is undamaged to regain some level of vision. The development received "breakthrough" status from the U.S. federal government, which will accelerate development. == Company history == Neuralink was founded in 2016 by Elon Musk and a founding
|
{"page_id": 53615490, "title": "Neuralink"}
|
attack, data within the control system can be corrupted. The industrial facility operator would have no way of knowing the data had been compromised until someone such as a security engineer recognized the attack was occurring. As operators are trained to provide a prompt, appropriate response to stabilize the industrial facility, there is a likelihood that the corrupted data would lead the operator to react to the apparent situation and cause a plant upset. In a resilient control system, as per Fig. 2, cyber and physical data are fused to recognize anomalous situations and warn the operator. 2) As our society becomes more automated for a variety of drivers, including energy efficiency, the need to implement ever more effective control algorithms naturally follows. However, advanced control algorithms are dependent upon data from multiple sensors to predict the behaviors of the industrial operation and make corrective responses. Such a system can become very brittle, insofar as any unrecognized degradation in the sensor itself can lead to incorrect responses by the control algorithm and potentially a worsened condition relative to the desired operation of the industrial facility. Therefore, implementation of advanced control algorithms in a resilient control system also requires the implementation of diagnostic and prognostic architectures to recognize sensor degradation, as well as failures of industrial process equipment associated with the control algorithms. == Resilient control system solutions and the need for interdisciplinary education == In our world of advancing automation, our dependence upon these advancing technologies will require educated skill sets from multiple disciplines. The challenges may appear simply rooted in better design of control systems for greater safety and efficiency. However, the evolution of the technologies in the current design of automation has created a complex environment in which a cyber-attack, human error (whether in design or operation),
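The idea of fusing cyber and physical data to warn the operator can be sketched very simply: a sensor reading is flagged when it fails either a physical plausibility check or a cyber integrity check. The thresholds and the integrity flag below are illustrative assumptions, not part of any particular architecture:

```python
# Sketch of cyber-physical data fusion for anomaly recognition: a reading
# is flagged for the operator when it changes implausibly fast (physical
# check) or when its message fails an integrity check (cyber check).
# Threshold values and the integrity flag are illustrative assumptions.

def flag_anomaly(prev_value, value, max_rate, dt, integrity_ok):
    """Return a list of reasons the reading is suspect (empty if none)."""
    reasons = []
    if abs(value - prev_value) / dt > max_rate:
        reasons.append("physically implausible rate of change")
    if not integrity_ok:
        reasons.append("cyber integrity check failed")
    return reasons

if __name__ == "__main__":
    # Plausible reading over an intact channel: no warning.
    print(flag_anomaly(100.0, 100.5, max_rate=5.0, dt=1.0, integrity_ok=True))
    # Corrupted data injected by an attacker: warn the operator instead of
    # letting a trained prompt response act on bad data.
    print(flag_anomaly(100.0, 180.0, max_rate=5.0, dt=1.0, integrity_ok=False))
```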
|
{"page_id": 22982978, "title": "Resilient control systems"}
|
Leptin (from Greek λεπτός leptos, "thin" or "light" or "small"), also known as obese protein, is a protein hormone predominantly made by adipocytes (cells of adipose tissue). Its primary role is likely to regulate long-term energy balance. As one of the major signals of energy status, leptin levels influence appetite, satiety, and motivated behaviors oriented toward the maintenance of energy reserves (e.g., feeding and foraging behaviors). The amount of circulating leptin correlates with the amount of energy reserves, mainly triglycerides stored in adipose tissue. The brain interprets high leptin levels as a signal that energy reserves are high, whereas low leptin levels indicate that energy reserves are low; low levels adapt the organism to starvation through a variety of metabolic, endocrine, neurobiochemical, and behavioral changes. Leptin is coded for by the LEP gene. Leptin receptors are expressed by a variety of brain and peripheral cell types. These include receptors in the arcuate and ventromedial nuclei, as well as in other parts of the hypothalamus and in dopaminergic neurons of the ventral tegmental area, consequently mediating feeding. Although regulation of fat stores is deemed to be the primary function of leptin, it also plays a role in other physiological processes, as evidenced by its many sites of synthesis other than fat cells, and the many cell types beyond hypothalamic cells that have leptin receptors. Many of these additional functions are yet to be fully defined. In obesity, a decreased sensitivity to leptin occurs (similar to insulin resistance in type 2 diabetes), resulting in an inability to detect satiety despite high energy stores and high levels of leptin. == Effects == Predominantly, the "energy expenditure hormone" leptin is made by adipose cells, and is thus labeled fat cell-specific. In the context of its effects, the short describing words central, direct, and primary are not
|
{"page_id": 214938, "title": "Leptin"}
|
The orthicon's performance was similar to that of the image iconoscope, but it was also unstable under sudden flashes of bright light, producing "the appearance of a large drop of water evaporating slowly over part of the scene". === Image orthicon === The image orthicon (sometimes abbreviated IO), was common in American broadcasting from 1946 until 1968. A combination of the image dissector and the orthicon technologies, it replaced the iconoscope in the United States, which required a great deal of light to work adequately. The image orthicon tube was developed at RCA by Albert Rose, Paul K. Weimer, and Harold B. Law. It represented a considerable advance in the television field, and after further development work, RCA created original models between 1939 and 1940. The National Defense Research Committee entered into a contract with RCA where the NDRC paid for its further development. Upon RCA's development of the more sensitive image orthicon tube in 1943, RCA entered into a production contract with the U.S. Navy, the first tubes being delivered in January 1944. RCA began production of image orthicons for civilian use in the second quarter of 1946. While the iconoscope and the intermediate orthicon used capacitance between a multitude of small but discrete light sensitive collectors and an isolated signal plate for reading video information, the image orthicon employed direct charge readings from a continuous electronically charged collector. The resultant signal was immune to most extraneous signal crosstalk from other parts of the target, and could yield extremely detailed images. Image orthicon cameras were still being used by NASA for capturing Apollo/Saturn rockets nearing orbit, although the television networks had phased the cameras out. An image orthicon camera can take television pictures by candlelight because of the more ordered light-sensitive area and the presence of an electron multiplier
|
{"page_id": 516757, "title": "Video camera tube"}
|
a screened transmission cable attached. The screened transmission cable must be RG-58C/U coaxial cable, jacketed with high-density polyethylene, rated for direct burial and resistant to nicks and cuts.
3. Operate over a temperature range from -40 to 160 degrees F.
4. Have a signal-to-noise ratio equal to or greater than 10 to 1.
5. Have an output uniformity range of plus or minus 20 percent.
6. Have an output signal of a minimum 250 mV for a wheel load of 400 lb at 55 mph and 70 degrees F.
7. Have an insulation resistance greater than 500 MΩ.
8. Have a life cycle of a minimum 25 million equivalent single axle loadings.
86-1.02F Conductors and Cables
86-1.02F(1) General
Conductors and cables must be clearly and permanently marked the entire length of their outer surface with:
1. Manufacturer's name or trademark
2. Insulation-type letter designation
3. Conductor size
4. Voltage
5. Temperature rating
6. Number of conductors for a cable
The minimum insulation thickness and color code requirements must comply with the NEC.
86-1.02F(2) Conductors
86-1.02F(2)(a) General
A conductor must be UL listed or NRTL certified and rated for 600 V(ac). Conductors must be identified as shown in the following table:
Conductor Identification
Circuit | Signal phase or function | Insulation color (base) | Stripe | Band symbols | Copper size
Signals (vehicle) | 2, 6 | Red, yellow, brown | Black | 2, 6 | 14
Signals (vehicle) | 4, 8 | Red, yellow, brown | Orange | 4, 8 | 14
Signals (vehicle) | 1, 5 | Red, yellow, brown | None | 1, 5 | 14
Signals (vehicle) | 3, 7 | Red, yellow, brown | Purple | 3, 7 | 14
Ramp meter | 1 | Red, yellow, brown | None | No band required | 14
Ramp meter | 2 | Red, yellow, brown | Black | No band required | 14
Pedestrian signals | 2p, 6p | Red, brown | Black | 2p, 6p | 14
Pedestrian signals | 4p, 8p | Red, brown | Orange | 4p,
|
{"source": 1498, "title": "from dpo"}
|
$P^* A P = \begin{pmatrix} 3 & 0 \\ 0 & 6 \end{pmatrix}.$ (Diagonalization of real inner products) Every real inner product on $\mathbb{R}^n$ may be written as $(u, v) = y^t A x$, where $A$ is a symmetric $n \times n$ matrix and $x, y$ are the coordinates of the two vectors $u, v$ in the chosen basis. If we change basis, the coordinates representing the two vectors $u, v$ will change to $x = Mz$, $y = Mw$ for some invertible matrix $M$, where $z$ and $w$ are the coordinates of the vectors $u, v$ in the new basis. In the new basis the inner product will take the form $(u, v) = y^t A x = w^t M^t A M z = w^t B z$, where the symmetric matrix $B = M^t A M$ represents the same inner product but in a different basis. As we have mentioned before, every symmetric matrix $A$ can be orthogonally diagonalized, i.e. there exists an orthogonal matrix $P$ such that $P A P^t = D$, for $D$ diagonal; then we have $(u, v) = y^t P^t$
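The orthogonal diagonalization step can be checked numerically. A minimal sketch with an illustrative symmetric matrix (note that NumPy's `eigh` returns eigenvectors as columns, so its `P` plays the role of $P^t$ in the convention $P A P^t = D$ used above):

```python
import numpy as np

# Sketch: orthogonal diagonalization of a symmetric matrix and the
# change-of-basis rule B = M^t A M for an inner product matrix.
# The example matrix A is illustrative.

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])            # symmetric, so eigh applies

eigvals, P = np.linalg.eigh(A)        # columns of P are orthonormal eigenvectors
D = P.T @ A @ P                       # diagonal, entries = eigvals

# Choosing the change of basis M = P turns the inner product matrix A
# into the diagonal matrix D: B = M^t A M.
M = P
B = M.T @ A @ M

print(np.round(D, 10))
print(np.allclose(B, np.diag(eigvals)))  # True
```

In the diagonalizing basis the inner product reduces to a weighted sum of coordinate products, with the eigenvalues as weights.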
|
{"source": 3881, "title": "from dpo"}
|
scripts.
• Improve authentication, or use only attributes for access control that can be authenticated. You can protect the cookie by utilizing the browser’s security policy, e.g. by putting the visited web page in a zone that is not given permission to access document.cookie. You could forgo the use of cookies for creating sessions and use other mechanisms for authenticating client requests (see Section 18.5). For example, unpredictable one-time URLs sent by the server to the client during user authentication can authenticate requests as coming directly from the client.
# 18.5 Cross-Site Request Forgery
A cross-site request forgery (XSRF, ‘sea surf’, also cross-site reference forgery, session riding) attack executes actions at a target website with the privileges of a ‘trusted’ user. Here, trust means an authenticated session between client and web server. It does not matter how the session was established, whether by TLS, by password-based HTTP authentication, or by any other means. What matters is that requests within this session are executed with security permissions attributed to the client. In a reflected XSRF attack the user has to visit the attacker’s web page, which contains hidden actions at the target site, e.g. in an HTML form. Simultaneously, the client must have established an active session to the target. When the user visits the attacker’s page, the browser automatically submits the form data to the target. The target interprets the request as coming from the client and the action in the form is accepted by the server as coming from an authenticated user. Thus, XSRF evades the target’s origin-based security policy. In a stored XSRF attack a malicious page is stored at
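A standard defence against XSRF, complementing the mitigations above, is a per-session anti-forgery token embedded in every legitimate form: the forged cross-site request carries the session cookie automatically, but the attacker's page cannot read the token, so the server rejects it. A minimal sketch (function and variable names are illustrative, not from any framework):

```python
import hmac
import secrets

# Sketch of the synchronizer-token defence against XSRF. A per-session
# secret token is issued when a form is served and verified on submission.
# The session dict stands in for real server-side session storage.

def issue_token(session):
    """Generate a fresh unpredictable token and store it in the session."""
    session["xsrf_token"] = secrets.token_urlsafe(32)
    return session["xsrf_token"]

def verify_token(session, submitted):
    """Accept a request only if it echoes the session's token."""
    expected = session.get("xsrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return bool(expected) and hmac.compare_digest(expected, submitted or "")

if __name__ == "__main__":
    session = {}
    token = issue_token(session)
    print(verify_token(session, token))    # legitimate form submission: True
    print(verify_token(session, "guess"))  # forged cross-site request: False
```

Because the token travels in the form body rather than in a cookie, it is not attached automatically by the browser, which is exactly what defeats the session-riding pattern described above.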
|
{"source": 5831, "title": "from dpo"}
|
the progress of science and the prestige of the Corps would be enhanced if Fresnel could come to Paris for a time. He arrived in March 1816, and his leave was subsequently extended through the middle of the year. Meanwhile, in an experiment reported on 26 February 1816, Arago verified Fresnel's prediction that the internal fringes were shifted if the rays on one side of the obstacle passed through a thin glass lamina. Fresnel correctly attributed this phenomenon to the lower wave velocity in the glass. Arago later used a similar argument to explain the colors in the scintillation of stars. Fresnel's updated memoir was eventually published in the March 1816 issue of Annales de Chimie et de Physique, of which Arago had recently become co-editor. That issue did not actually appear until May. In March, Fresnel already had competition: Biot read a memoir on diffraction by himself and his student Claude Pouillet, containing copious data and arguing that the regularity of diffraction fringes, like the regularity of Newton's rings, must be linked to Newton's "fits". But the new link was not rigorous, and Pouillet himself would become a distinguished early adopter of the wave theory. ==== "Efficacious ray", double-mirror experiment (1816) ==== On 24 May 1816, Fresnel wrote to Young (in French), acknowledging how little of his own memoir was new. But in a "supplement" signed on 14 July and read the next day, Fresnel noted that the internal fringes were more accurately predicted by supposing that the two interfering rays came from some distance outside the edges of the obstacle. To explain this, he divided the incident wavefront at the obstacle into what we now call Fresnel zones, such that the secondary waves from each zone were spread over half a cycle when they arrived at the observation
|
{"page_id": 1141, "title": "Augustin-Jean Fresnel"}
|
Oxygen vacancies
Intermixing
Structural distortions
===== Polar gating ===== Polar gating was the first mechanism used to explain the conductivity at LaAlO3/SrTiO3 interfaces. It postulates that the LaAlO3, which is polar in the 001 direction (with alternating sheets of positive and negative charge), acts as an electrostatic gate on the semiconducting SrTiO3. When the LaAlO3 layer grows thicker than three unit cells, its valence band energy rises above the Fermi level, causing holes (or positively charged oxygen vacancies) to form on the outer surface of the LaAlO3. The positive charge on the surface of the LaAlO3 attracts negative charge to nearby available states. In the case of the LaAlO3/SrTiO3 interface, this means electrons accumulate in the surface of the SrTiO3, in the Ti d bands. The strengths of the polar gating hypothesis are that it explains why conductivity requires a critical thickness of four unit cells of LaAlO3 and that it explains why conductivity requires the SrTiO3 to be TiO2-terminated. The polar gating hypothesis also explains why alloying the LaAlO3 increases the critical thickness for conductivity. One weakness of the hypothesis is that it predicts that the LaAlO3 films should exhibit a built-in electric field; so far, x-ray photoemission experiments and other experiments have shown little to no built-in field in the LaAlO3 films. The polar gating hypothesis also cannot explain why Ti3+ is detected when the LaAlO3 films are thinner than the critical thickness for conductivity. The polar gating hypothesis is sometimes called the polar catastrophe hypothesis, alluding to the counterfactual scenario where electrons do not accumulate at the interface and the voltage in the LaAlO3 instead builds up indefinitely. The hypothesis has also been called the electronic reconstruction hypothesis, highlighting the fact that electrons, not ions, move to compensate the building voltage. ===== Oxygen vacancies ===== Another hypothesis is
|
{"page_id": 40133154, "title": "Lanthanum aluminate-strontium titanate interface"}
|
original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach. The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows: In a thermodynamic process involving a closed system (no transfer of matter), the increment in the internal energy is equal to the difference between the heat accumulated by the system and the thermodynamic work done by it. Reflecting the experimental work of Mayer and of Joule, Clausius wrote: In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced. Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels.
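Clausius's differential statement for a closed system, as paraphrased above, can be written compactly (modern sign convention, with work done by the system counted positive):

```latex
% Increment form of the first law for a closed system (no matter transfer):
% dU is the increment of internal energy, \delta Q the heat accumulated by
% the system, \delta W the thermodynamic work done by the system.
\mathrm{d}U = \delta Q - \delta W
```

The inexact differentials $\delta Q$ and $\delta W$ signal that heat and work are path-dependent process quantities, while $U$ is a function of state, defined only up to the additive constant mentioned below.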
|
{"page_id": 166404, "title": "First law of thermodynamics"}
|
Winston Edward Kock (December 5, 1909 – November 25, 1982) was an American electrical engineer and musician, who was the first Director of the NASA Electronics Research Center (NASA ERC) in Cambridge, Massachusetts, from September 1, 1964, to October 1, 1966. The center was created for multidisciplinary scientific research; its location reflected its proximity to certain colleges and to a local U.S. Air Force research facility, and it was perceived as part of the nation's Cold War effort. Kock was also a novelist under the pseudonym Wayne Kirk. Kock also wrote books about topics in engineering and acoustics. These included radar, sonar, holography, and lasers. Kock's seminal research in artificial dielectrics, carried out at AT&T Bell Laboratories in the 1940s, is a historical connection to metamaterials. == Early life and education == Winston Edward Kock was born on December 5, 1909 in Cincinnati, Ohio. At age four Kock started learning piano, and by high school he could play full recitals. In college he began composing music. He then took electrical engineering courses at the University of Cincinnati and continued studying piano and organ at the College of Music of Cincinnati. In the 1930s, as partial fulfillment of his bachelor's degree, he built an electronic organ. He used the more economical neon glow tubes for his electronic organ rather than radio vacuum tubes as sources for tones. In 1932 he received his B.S. degree in electrical engineering. For his master's degree thesis Kock grappled with the problem of pitch stabilization for 70 neon tubes in an electronic organ. In 1933 he received his Master of Science degree. In 1934, he received his Ph.D. in experimental and theoretical physics from the University of Berlin. His examiners were Professors Max von Laue and Arthur Wehnelt. As part of the thesis, Kock, together with another candidate, developed an
|
{"page_id": 31207791, "title": "Winston E. Kock"}
|
point a transplant becomes necessary for survival. Evidence suggests that, although survival rates have improved with modern medical treatment, in patients with moderate to severe poisoning up to half of those who did recover suffered permanent liver damage. However, a follow-up study has shown that most survivors recover completely without any sequelae if treated within 36 hours of mushroom ingestion. == Potential uses == Amanita virosa extract has antibacterial efficacy against Pseudomonas aeruginosa and Staphylococcus aureus in vitro. It also has shown inhibitory activity on thrombin. == See also == List of Amanita species List of deadly fungi == References == == Sources == Benjamin, Denis R. (1995). Mushrooms: poisons and panaceas — a handbook for naturalists, mycologists and physicians. New York: WH Freeman and Company. ISBN 978-0-7167-2600-5. Jordan Peter; Wheeler Steven. (2001). The Ultimate Mushroom Book. London: Hermes House. ISBN 978-1-85967-092-7.
|
{"page_id": 827226, "title": "Amanita virosa"}
|
In the philosophy of artificial intelligence, GOFAI ("Good old fashioned artificial intelligence") is classical symbolic AI, as opposed to other approaches, such as neural networks, situated robotics, narrow symbolic AI or neuro-symbolic AI. The term was coined by philosopher John Haugeland in his 1985 book Artificial Intelligence: The Very Idea. Haugeland coined the term to address two questions: Can GOFAI produce human level artificial intelligence in a machine? Is GOFAI the primary method that brains use to display intelligence? AI founder Herbert A. Simon speculated in 1963 that the answers to both these questions were "yes". His evidence was the performance of programs he had co-written, such as Logic Theorist and the General Problem Solver, and his psychological research on human problem solving. AI research in the 1950s and 60s had an enormous influence on intellectual history: it inspired the cognitive revolution, led to the founding of the academic field of cognitive science, and was the essential example in the philosophical theories of computationalism, functionalism and cognitivism in ethics and the psychological theories of cognitivism and cognitive psychology. The specific aspect of AI research that led to this revolution was what Haugeland called "GOFAI". == Western rationalism == Haugeland places GOFAI within the rationalist tradition in western philosophy, which holds that abstract reason is the "highest" faculty, that it is what separates man from the animals, and that it is the most essential part of our intelligence. This assumption is present in Plato and Aristotle, in Shakespeare, Hobbes, Hume and Locke, it was central to the Enlightenment, to the logical positivists of the 1930s, and to the computationalists and cognitivists of the 1960s. As Shakespeare wrote: What a piece of work is a man, How noble in reason, how infinite in faculty ... In apprehension how like a god, The
|
{"page_id": 44636570, "title": "GOFAI"}
|
29, 2024 in Monterey Superior Court against the California DPR and the Monterey County Agricultural Commissioner by Earthjustice on behalf of the Pajaro Valley Federation of Teachers, Safe Ag Safe Schools, Center for Farmworker Families, Monterey Bay Central Labor Council and Californians for Pesticide Reform alleged that students at three schools in the Pajaro Valley—including one named in the original Angelita C. complaint—are exposed to more than twice the levels of 1,3-dichloropropene (1,3-D) that the DPR has said was the maximum safe dose, yet the DPR continues to routinely approve applications for further use of the chemical. === EPA === Methyl iodide was approved for use in the United States despite its health risks, after EPA director Stephen Johnson appointed as a regulator Elin Miller, previously the CEO of the North American branch of Arysta, the Japanese manufacturer of methyl iodide. The eminent science journal Nature accused Johnson, a George W. Bush appointee, of "reckless disregard for law, science or the agency's own rules — or, it seems, the anguished protests of his own subordinates." After the approval of methyl iodide's registration as a pesticide available in the United States, Arysta was sold for $2.2 billion. In 2006, the Japanese chemical giant Arysta presented it to the EPA as the perfect candidate to replace methyl bromide. The pitch: It works just as well on nematodes, but it doesn’t harm the ozone layer. As for farmworkers, well… Fifty-four scientists signed a letter of protest to the EPA strongly recommending against its approval, citing omissions of peer-reviewed evidence, failure to document the modeling used, missing information and failure to identify vulnerable subpopulations. The agency's assessment of risk to nearby populations also incorrectly treated exposure as "missing", and it should have used the AERMOD model instead. Estimates of risk due to the length of the
|
{"page_id": 76655780, "title": "Angelita C. et al. v. California Department of Pesticide Regulation"}
|
moderate turbulence and an uncomfortable ride for aircraft passengers. Level 3 corresponds to a red radar return, indicating heavy precipitation, leading to the possibility of thunderstorms and severe turbulence and structural damage to the aircraft. Aircraft will try to avoid level 2 returns when possible, and will always avoid level 3 unless they are specially designed research aircraft. ===== Precipitation types ===== Some displays provided by commercial television outlets (both local and national) and weather websites, like The Weather Channel and AccuWeather, show precipitation types during the winter months: rain, snow, mixed precipitation (sleet and freezing rain). This is not an analysis of the radar data itself but a post-treatment done with other data sources, the primary being surface reports (METAR). Over the area covered by radar echoes, a program assigns a precipitation type according to the surface temperature and dew point reported at the underlying weather stations. Precipitation types reported by human-operated stations and certain automatic ones (AWOS) will have higher weight. Then the program performs interpolations to produce an image with defined zones. These will include interpolation errors due to the calculation. Mesoscale variations of the precipitation zones will also be lost. More sophisticated programs use the numerical weather prediction output from models, such as NAM and WRF, for the precipitation types and apply it as a first guess to the radar echoes, then use the surface data for final output. Until dual-polarization (section Polarization below) data are widely available, any precipitation types on radar images are only indirect information and must be taken with care. === Velocity === Precipitation is found in and below clouds. Light precipitation such as drops and flakes is subject to the air currents, and scanning radar can pick up the horizontal component of this motion, thus giving the possibility to estimate the
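The surface-report post-treatment described above can be sketched as a simple rule-based classifier plus interpolation; the thresholds, the station record format, and the nearest-neighbour step here are illustrative assumptions, not any broadcaster's actual algorithm:

```python
def precip_type(temp_c, dewpoint_c):
    """Assign a precipitation type from surface temperature and dew point.

    Illustrative thresholds only; real products weight METAR/AWOS reports
    and interpolate between stations before drawing zones.
    """
    if temp_c <= 0 and dewpoint_c <= 0:
        return "snow"
    if temp_c <= 2:
        return "mixed"  # sleet / freezing-rain band
    return "rain"


def nearest_station_type(point, stations):
    """Crude interpolation: take the type implied by the closest station."""
    nearest = min(
        stations,
        key=lambda s: (s["x"] - point[0]) ** 2 + (s["y"] - point[1]) ** 2,
    )
    return precip_type(nearest["temp_c"], nearest["dewpoint_c"])
```

A real product would replace the nearest-neighbour step with a weighted interpolation over all reporting stations, which is exactly where the interpolation errors mentioned above enter.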
|
{"page_id": 675776, "title": "Weather radar"}
|
Taiwan and made the trip going to Taiwan. Ministry of Foreign Affairs officials advised that any visit to Taiwan by an incumbent prime minister would be diplomatically impossible, hence the trip was planned a month before Lee assumed the premiership and in his capacity as a private citizen, not as a government minister or as the head of government, with the PRC embassy informed on 9 July 2004. On the afternoon of the same day, the PRC government summoned the Singapore ambassador in Beijing and urged the cancellation of Lee's trip, citing the likelihood of Chen Shui-bian's administration in Taiwan exploiting the trip as a diplomatic coup and using it to promote the independence of Taiwan, with the PRC claiming that Singapore was making a "historical error" for the trip. Then-Foreign Minister S. Jayakumar replied to his PRC counterpart Li Zhaoxing that the ROC government had been asked to keep the visit low-profile and that it would proceed as planned. The PRC later retaliated by cancelling several visits by high-ranking PRC officials to Singapore and delaying planned signing ceremonies, hinting that free trade negotiations would also be pushed back. The matter was further complicated and magnified when Taiwanese media headlined the visit and portrayed it as a diplomatic breakthrough, which raised tensions with the PRC. The Singapore government later published the full records of the discussion with the Chinese embassy in Singapore's local media. On 28 August 2004, in his first National Day Rally speech as prime minister, Lee criticised some Taiwanese political leaders for their lack of understanding of the shifting balance of power across the Taiwan Strait and Taiwan's international position in their zeal for Taiwanese independence; and he criticised the Taiwanese media for capitalising on his private visit. He reiterated the reasons for the visit and said
|
{"page_id": 363326, "title": "Lee Hsien Loong"}
|
drawn from those points to where they intersect will reveal the navigator's location. == In navigation == When resecting or fixing a position, the geometric strength (angular disparity) of the mapped points affects the precision and accuracy of the outcome. Accuracy increases as the angle between the two position lines approaches 90 degrees. Magnetic bearings are observed on the ground from the point under location to two or more features shown on a map of the area. Lines of reverse bearings, or lines of position, are then drawn on the map from the known features; two or more lines provide the resection point (the navigator's location). When three or more lines of position are utilized, the method is often popularly (though erroneously) referred to as triangulation (in precise terms, using three or more lines of position is still correctly called resection, as angular law of tangents (cot) calculations are not performed). When using a map and compass to perform resection, it is important to allow for the difference between the magnetic bearings observed and grid north (or true north) bearings (magnetic declination) of the map or chart. Resection continues to be employed in land and inshore navigation today, as it is a simple and quick method requiring only an inexpensive magnetic compass and map/chart. == In surveying == In surveying work, the most common methods of computing the coordinates of a point by angular resection are the Collins "Q" point method (after John Collins) as well as Cassini's method (after Giovanni Domenico Cassini) and the Tienstra formula, though the first known solution was given by Willebrord Snellius (see Snellius–Pothenot problem). For the type of precision work involved in surveying, the unmapped point is located by measuring the angles subtended by lines of sight from it to a minimum of
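The two-bearing fix described above reduces to solving a small linear system for the crossing point of two lines of position. A minimal flat-earth sketch (function name and coordinate conventions are ours; real navigation corrects the bearings for magnetic declination and works on a chart projection):

```python
import math


def resect(a, bearing_to_a, b, bearing_to_b):
    """Fix a position from bearings to two mapped landmarks.

    a, b: (east, north) coordinates of the landmarks.
    bearing_to_a / bearing_to_b: bearings in degrees clockwise from north,
    observed from the unknown position toward each landmark.
    Returns the intersection of the two lines of position.
    """
    ax, ay = a
    bx, by = b
    ra, rb = math.radians(bearing_to_a), math.radians(bearing_to_b)
    dax, day = math.sin(ra), math.cos(ra)  # unit vector toward landmark A
    dbx, dby = math.sin(rb), math.cos(rb)  # unit vector toward landmark B
    det = dbx * day - dax * dby
    if abs(det) < 1e-12:
        # Parallel lines of position give no unique fix -- the "geometric
        # strength" problem mentioned above, worst as the angle nears 0/180.
        raise ValueError("bearings are parallel; no unique fix")
    # Unknown position P satisfies P + t*dA = A, so P = A - t*dA.
    t = (dbx * (ay - by) - dby * (ax - bx)) / det
    return ax - t * dax, ay - t * day
```

For example, a landmark seen due north (bearing 0) and another due east (bearing 90) place the navigator south of the first and west of the second; accuracy is best when the two bearings differ by about 90 degrees, as the text notes.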
|
{"page_id": 17837335, "title": "Position resection and intersection"}
|
shortlist candidates for later stages of the recruitment process, such as interview. When used in recruitment, the tests normally include a series of text passages regarding a random topic. Then there will be a series of statements regarding the passages. The candidate must then determine whether each statement is true, false, or cannot be determined (is ambiguous). The candidate is not expected to know anything about the topics, and the answer is to be based purely on the information in the passage. == Concepts == This section of the article briefly elucidates the general elements relating to verbal reasoning in order of increasing complexity. === Vocabulary and grammar === Vocabulary (the knowledge of words' meanings in a language) and grammar (knowledge of words' proper relation to one another in a language) can function both as prerequisites as well as topics of focus of verbal reasoning. In the former capacity, they are used to form propositions and arguments (see below), while in the latter capacity they are the subject of analysis and evaluation, where verbal reasoning synthesizes linguistic information and analyzes relationships among component parts of sentences, words, and concepts. === Propositions === The basic element of reasoning (verbal, or otherwise) is the proposition. A proposition is simply the meaning behind a declarative sentence that can be either true or false (note: special care is taken here to mention that the proposition is specifically what is meant by such a sentence, and is not the actual sentence itself). In other words, a proposition is something that one can know, believe, think, assume, or so on. Worth explicitly mentioning here is that only some (and not necessarily all) statements count as propositions. This is because the defining feature of a proposition is that it is necessarily making some assertion
|
{"page_id": 3932694, "title": "Verbal reasoning"}
|
thus be isolated directly under an inverted microscope using a tapered Pasteur pipette before being transferred to sterile medium. An alternative is to use an atomizer which sprays water droplets containing isolated algae on the culture medium. Once the colonies are formed, it is then possible to transfer them on sterile medium.
Selective enrichment
Selective enrichment consists of growing the algae in a medium that only allows the growth of certain target organisms and that causes the death or arrests the growth of undesirable organisms. It is then necessary to adapt the culture media according to the qualities of the alga that we want to isolate. This method therefore requires prior knowledge of the strain that we seek to isolate. Isolation is probably done under an inverted microscope using a tapered pipette (Richmond & Hu, 2013).
Streak method
The streak method involves raking the culture solution onto a fairly solid petri dish (about 1% agar) and then harvesting the colonies and seeding them within 96-well plates. The grown colonies can then be successively transferred to 48- and 24-well plates before being put in flasks.
Density gradient centrifugation
The use of a centrifugation gradient to separate algal species of different densities has been performed (Whitelam et al., 1983) using colloidal silica (otherwise called silica sol or Percoll). This isolation allows the formation of discrete bands of algae of different density within the Percoll gradient.
Flow cytometry and cell sorting
Flow cytometry can also separate microalgae according to their size and pigment content. However, this method can be inaccurate and does not guarantee to obtain an axenic culture (Richmond & Hu, 2013).
2.2.3.2 Choice between axenic and non-axenic culture methods
Sometimes the design of a genetic system requires the use of axenic strains. It is then necessary to eliminate all
|
{"source": 2311, "title": "from dpo"}
|
Demand Network using fMRI-MEG Fusion **Unique Code:** TP001130 **Authors:** Hamid Karimi-Rouzbahani - MRC Cognition and Brain Sciences Unit, University of Cambridge; Anina Rich - Department of Cognitive Science, Macquarie University; Alexandra Woolgar - MRC Cognition and Brain Sciences Unit, University of Cambridge **Topic:** Circuit dynamics and oscillations A multiple-demand network (MDN), composed of frontal and parietal cortices, is activated by, and encodes relevant information in, a wide variety of demanding cognitive tasks. Its regions are proposed to select and integrate task-relevant information from across neural systems (Duncan et al. 2020), but the temporal dynamics of information processing and exchange remain unclear. For example, it is unknown whether information coding arises in sub-regions of the MDN one after another or simultaneously. Here, we used fMRI-MEG fusion to obtain spatiotemporal insight into information coding in the MDN during a cognitively demanding task. By comparing the temporal evolution of information (from MEG) to the information contained in different brain regions (fMRI), fusion can reveal the temporal order of information coding across regions. Then, to study the exchange of this information between regions, we developed a novel implementation of Granger-causality-based connectivity analysis using the estimated time-resolved response in each region. On each trial, one of four visual stimuli appeared (white squares on grey background; two in top-right and two in top-left hemi-fields), and participants (MEG: n=24; fMRI: n=30) had to press one of four buttons. They learned two orthogonal stimulus-button mappings (rules; indicated by fixation color) in a training phase. Using Representational Similarity Analysis, we fused four aspects of information: coarse stimulus (left vs. right stimuli), fine stimulus (inner vs. outer stimuli), rule and response information.
Both coarse and fine stimulus information appeared first in posterior followed by anterior regions of the MDN. Rule information appeared in the same order. Parietal MDN was the region which showed the
|
{"source": 4940, "title": "from dpo"}
|
x1x2x3 ⊕ x1x2x4 ⊕ x1x2 ⊕ x1x3 ⊕ x1x4 ⊕ x2x3 ⊕ x2x4 ⊕ x3x4 ⊕ x3 ⊕ x4 ⊕ 1 (10) f4(x) = x1x3x4 ⊕ x1x4 ⊕ x2x4 ⊕ x1 ⊕ x3. (11) The inverse quantum circuit of the S-Box is also required for removing the S-Box input, using the S-Box output as the input to the inverse circuit.
> Figure 6. Quantum circuit for the S-Box defined in table 1.
> Figure 7. Quantum circuit for the inverse component Boolean functions defined in eqs (12)–(15).
The inverse component Boolean functions are described in eqs (12)–(15) with the corresponding quantum circuit in figure 7. Assume that F represents the quantum gate shown in figure 6, which takes the qubits storing x as the control qubits and stores the result in the ancilla qubits. Similarly, assume that F⁻¹ denotes the quantum gate shown in figure 7, which takes the qubits that contain the S-Box output as control qubits and stores the result in the targeted qubits. Using these quantum gates, the S-Box operation can be realised such that the input qubits store the S-Box output, as shown in figure 8. Figure 8 also shows the notation for the complete circuit. The quantum circuit can be evaluated on input qubits containing a superposition of S-Box inputs to produce the output in superposition. f1⁻¹(f) = f2f3f4 ⊕ f1 ⊕ f2 ⊕ f3 ⊕ f4 ⊕ 1 (12) f2⁻¹(f) = f1f2f3 ⊕ f1f3f4 ⊕ f1f3 ⊕ f1f4 ⊕ f2 ⊕ f4 ⊕ 1 (13) f3⁻¹(f) = f1f2f4 ⊕ f1f2 ⊕ f1f4 ⊕ f2 ⊕ f3 ⊕ 1 (14) f4⁻¹(f) = f1f2f4 ⊕ f2f3f4 ⊕ f1f2 ⊕ f1f4 ⊕ f2f3
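Before synthesising gates, a component function can be checked classically. A small sketch evaluating f4 from eq. (11) over all 16 inputs (the names are ours; in the corresponding quantum circuit each product term is typically realised by a multi-controlled gate and each linear term by a CNOT onto the ancilla):

```python
from itertools import product


def f4(x1, x2, x3, x4):
    """Component Boolean function f4(x) = x1x3x4 XOR x1x4 XOR x2x4 XOR x1 XOR x3
    from eq. (11); AND is multiplication over GF(2), XOR is addition."""
    return (x1 & x3 & x4) ^ (x1 & x4) ^ (x2 & x4) ^ x1 ^ x3


# Full truth table over inputs (x1, x2, x3, x4) in lexicographic order.
truth_table = [f4(*bits) for bits in product((0, 1), repeat=4)]
```

Enumerating the table also makes it easy to sanity-check cryptographic properties such as balancedness (eight 0s and eight 1s) before committing to a circuit layout.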
|
{"source": 6175, "title": "from dpo"}
|
A neutrophile is a neutrophilic organism that thrives in a neutral pH environment between 6.5 and 7.5. == Environment == The pH of the environment can support or hinder the growth of neutrophilic organisms. When the pH is within the microbe's range, it grows, and within that range there is an optimal growth pH. Neutrophiles are adapted to live in an environment where the hydrogen ion concentration is at equilibrium. They are sensitive to the concentration, and when the pH becomes too basic or acidic, the cell's proteins can denature. Depending on the microbe and the pH, the microbe's growth can be slowed or stopped altogether. The food industry manipulates the pH of the environment that a microbe is in to control its growth and thereby increase the shelf life of food. == See also == Acidophile Acidophobe Alkaliphile Extremophile Mesophile == References ==
|
{"page_id": 1647600, "title": "Neutrophile"}
|
hyperparameter can be adjusted to pick the optimal bias-variance trade-off in advantage estimation. It uses an exponentially decaying average of n-step returns with λ {\displaystyle \lambda } being the decay strength. == Variants == Asynchronous Advantage Actor-Critic (A3C): Parallel and asynchronous version of A2C. Soft Actor-Critic (SAC): Incorporates entropy maximization for improved exploration. Deep Deterministic Policy Gradient (DDPG): Specialized for continuous action spaces. == See also == Reinforcement learning Policy gradient method Deep reinforcement learning == References == Konda, Vijay R.; Tsitsiklis, John N. (January 2003). "On Actor-Critic Algorithms". SIAM Journal on Control and Optimization. 42 (4): 1143–1166. doi:10.1137/S0363012901385691. ISSN 0363-0129. Sutton, Richard S.; Barto, Andrew G. (2018). Reinforcement learning: an introduction. Adaptive computation and machine learning series (2 ed.). Cambridge, Massachusetts: The MIT Press. ISBN 978-0-262-03924-6. Bertsekas, Dimitri P. (2019). Reinforcement learning and optimal control (2 ed.). Belmont, Massachusetts: Athena Scientific. ISBN 978-1-886529-39-7. Szepesvári, Csaba (2010). Algorithms for Reinforcement Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning (1 ed.). Cham: Springer International Publishing. ISBN 978-3-031-00423-0. Grondman, Ivo; Busoniu, Lucian; Lopes, Gabriel A. D.; Babuska, Robert (November 2012). "A Survey of Actor-Critic Reinforcement Learning: Standard and Natural Policy Gradients". IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews). 42 (6): 1291–1307. doi:10.1109/TSMCC.2012.2218595. ISSN 1094-6977.
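The λ-weighted mixture of n-step returns described above (generalized advantage estimation) is conveniently computed backwards with the TD-residual recursion A_t = δ_t + γλ·A_{t+1}. A minimal array-based sketch (function name and conventions are ours, assuming an episodic rollout with a bootstrap value appended to `values`):

```python
def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Generalized advantage estimation.

    The exponentially decaying (weight lam) average of n-step return
    advantages, computed in a single backward pass via the TD residual
    delta_t = r_t + gamma*V(s_{t+1}) - V(s_t).

    `values` must hold one extra entry: the bootstrap value of the state
    after the last reward (0 for a terminal state).
    """
    advantages = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        running = delta + gamma * lam * running
        advantages[t] = running
    return advantages
```

Setting lam=0 recovers the one-step TD advantage (low variance, high bias), while lam=1 recovers the full Monte-Carlo return minus the baseline (high variance, low bias); intermediate values interpolate, which is the trade-off the text describes.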
|
{"page_id": 78938430, "title": "Actor-critic algorithm"}
|
Mu hemoglobin is a predicted protein encoded in the HBM gene. The mRNA is expressed at moderate levels, but the protein has not been detected by mass spectrometry. The order of genes is: 5' - zeta - pseudozeta - mu - pseudoalpha-1 - alpha-2 - alpha-1 - theta1 - 3'. == References ==
|
{"page_id": 21424477, "title": "Mu hemoglobin"}
|
Mass spectrometry (MS) is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures. A mass spectrum is a type of plot of the ion signal as a function of the mass-to-charge ratio. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds. In a typical MS procedure, a sample, which may be solid, liquid, or gaseous, is ionized, for example by bombarding it with a beam of electrons. This may cause some of the sample's molecules to break up into positively charged fragments or simply become positively charged without fragmenting. These ions (fragments) are then separated according to their mass-to-charge ratio, for example by accelerating them and subjecting them to an electric or magnetic field: ions of the same mass-to-charge ratio will undergo the same amount of deflection. The ions are detected by a mechanism capable of detecting charged particles, such as an electron multiplier. Results are displayed as spectra of the signal intensity of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses (e.g. an entire molecule) to the identified masses or through a characteristic fragmentation pattern. == History of the mass spectrometer == In 1886, Eugen Goldstein observed rays in gas discharges under low pressure that traveled away from the anode and through channels in a perforated cathode, opposite to the direction of negatively charged cathode rays
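The deflection stage can be illustrated with the idealized single-focusing magnetic sector: an ion accelerated from rest through potential V satisfies qV = ½mv², and the magnetic field then bends it on a circle of radius r = mv/(qB) = sqrt(2Vm/q)/B, so ions of equal m/z undergo equal deflection, as stated above. A back-of-the-envelope sketch, not a model of any particular instrument:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg


def sector_radius(mass_amu, charge, accel_voltage, b_field):
    """Radius of circular deflection in an idealized magnetic sector.

    The ion is accelerated from rest through `accel_voltage` (volts) and
    deflected by a uniform field `b_field` (tesla): r = sqrt(2*V*m/q) / B.
    """
    m = mass_amu * AMU
    q = charge * E_CHARGE
    return math.sqrt(2 * accel_voltage * m / q) / b_field
```

Doubling both mass and charge leaves m/z, and hence the radius, unchanged; a heavier singly charged ion follows a wider arc, which is how the analyser separates the fragments before detection.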
|
{"page_id": 283810, "title": "Mass spectrometry"}
|
The Bay Area Biosystematists is a group of biologists, geneticists, paleontologists, and systematists that are also interested in evolution. The group has been active in the San Francisco Bay Area since 1936, and is notable as a connection between many of the leading evolutionary biologists of the 20th century, including Herbert Baker, Theodosius Dobzhansky and G. Ledyard Stebbins who led the modern synthesis. Meetings generally occur the second Tuesday of every month during the academic year at one of the Bay Area campuses (UC Berkeley, UC Davis, the California Academy of Sciences, San Jose State U, etc.). == References == == External links == Website William Z. Lidicker Jr. An Essay on the History of the Biosystematists of the San Francisco Bay Area
|
{"page_id": 3458559, "title": "Bay Area Biosystematists"}
|
Upon the death of his father, instead of becoming Baron Archibald, he renounced the peerage, expressing the opinion that hereditary honours were empty honours. He retired from the University of British Columbia in 1991 and moved back to Britain. == Selected publications == with R. G. Lipsey (1958) "Monetary and Value Theory: A Critique of Lange and Patinkin" The Review of Economic Studies 26(1): pp. 1–22, doi:10.2307/2295854, applied Patinkin's theory to stock flows and stock equilibrium and developed that relationship for the first time in modern monetary economics. (1967) "Refutation or Comparison" The British Journal for the Philosophy of Science 17(4): pp. 279–296, doi:10.1093/bjps/17.4.279, detailed some of what measurement and testing can and cannot accomplish. (1992) Information, Incentives, and the Economics of Control Cambridge University Press, Cambridge, England, ISBN 0-521-33045-9 (new edition republished in 2005), is considered a staple in the field. == References == Lipsey, Richard G., (1996) "Obituary: George Christopher Archibald, 1926-1996" The Canadian Journal of Economics / Revue canadienne d'Economique 29(4): pp. 1004–1006
|
{"page_id": 15287972, "title": "George Christopher Archibald"}
|
In semiconductor manufacturing, the 2 nm process is the next MOSFET (metal–oxide–semiconductor field-effect transistor) die shrink after the 3 nm process node. The term "2 nanometer", or alternatively "20 angstrom" (a term used by Intel), has no relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by the Institute of Electrical and Electronics Engineers (IEEE), a "2.1 nm node range label" is expected to have a contacted gate pitch of 45 nanometers and a tightest metal pitch of 20 nanometers. As such, 2 nm is used primarily as a marketing term by the semiconductor industry to refer to a new, improved generation of chips in terms of increased transistor density (a higher degree of miniaturization), increased speed, and reduced power consumption compared to the previous 3 nm node generation. TSMC began risk production of its 2 nm process in July 2024, with mass production planned for the second half of 2025, and Samsung plans to start production in 2025. Intel initially forecasted production in 2024 but scrapped its 2 nm node in favor of the smaller 18 angstrom (18A) node. == Background == By 2018, a number of transistor architectures had been proposed for the eventual replacement of FinFET, most of which were based on the concept of GAAFET: horizontal and vertical nanowires, horizontal nanosheet transistors (Samsung MBCFET, Intel Nanoribbon), vertical FET (VFET) and other vertical transistors, complementary FET (CFET), stacked FET, several kinds of horizontal gate-all-around transistors such as nano-ring, hexagonal wire, square wire, and round wire gate-all-around transistors and negative-capacitance FET (NC-FET) which uses drastically different materials. In late 2018, TSMC chairman Mark Liu predicted chip scaling would continue to 3
|
{"page_id": 65397747, "title": "2 nm process"}
|
2 colors is equivalent to determining whether or not the graph is bipartite, and thus computable in linear time using breadth-first search or depth-first search. More generally, the chromatic number and a corresponding coloring of perfect graphs can be computed in polynomial time using semidefinite programming. Closed formulas for chromatic polynomials are known for many classes of graphs, such as forests, chordal graphs, cycles, wheels, and ladders, so these can be evaluated in polynomial time. If the graph is planar and has low branch-width (or is nonplanar but with a known branch-decomposition), then it can be solved in polynomial time using dynamic programming. In general, the time required is polynomial in the graph size, but exponential in the branch-width. === Exact algorithms === Brute-force search for a k-coloring considers each of the k n {\displaystyle k^{n}} assignments of k colors to n vertices and checks for each if it is legal. To compute the chromatic number and the chromatic polynomial, this procedure is used for every k = 1 , … , n − 1 {\displaystyle k=1,\ldots ,n-1} , impractical for all but the smallest input graphs. Using dynamic programming and a bound on the number of maximal independent sets, k-colorability can be decided in time and space O ( 2.4423 n ) {\displaystyle O(2.4423^{n})} . Using the principle of inclusion–exclusion and Yates's algorithm for the fast zeta transform, k-colorability can be decided in time O ( 2 n n ) {\displaystyle O(2^{n}n)} for any k. Faster algorithms are known for 3- and 4-colorability, which can be decided in time O ( 1.3289 n ) {\displaystyle O(1.3289^{n})} and O ( 1.7272 n ) {\displaystyle O(1.7272^{n})} , respectively. Exponentially faster algorithms are also known for 5- and 6-colorability, as well as for restricted families of graphs, including sparse graphs. === Contraction
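The 2-coloring case mentioned above can be sketched directly: breadth-first search assigns alternating colors and fails exactly when it meets an odd cycle. A minimal sketch (the adjacency-dict representation and names are our own choice):

```python
from collections import deque


def two_color(adj):
    """Attempt a proper 2-coloring via breadth-first search.

    adj: dict mapping vertex -> iterable of neighbours (undirected graph).
    Returns a {vertex: 0 or 1} coloring if the graph is bipartite, else None.
    """
    color = {}
    for start in adj:            # handle disconnected components
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd cycle found: not 2-colorable
    return color
```

Each vertex and edge is visited a constant number of times, giving the linear running time claimed above; the same traversal can also return the odd cycle as a certificate of non-bipartiteness.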
|
{"page_id": 426743, "title": "Graph coloring"}
|
Project BAMBI (BAllistic Missile Boost Intercept) was a project as part of the United States national missile defense. At the end of the Second World War, the United States and the Soviet Union began confiscating various German intellectual property for use by their own countries. Among these plans were the plans for intercontinental ballistic missiles (ICBMs) that arrived in New York in 1946. The Pentagon spent the next several decades studying and developing both ICBM and anti-ICBM technology. In the early 1950s, both the United States and the Soviet Union were capable of waging nuclear war, but not without inviting retaliatory strikes. At the time, nuclear-equipped aerial bombs carried by strategic bombers were the only means of deploying a nuclear strike on another country. In order to prevent nuclear attacks of this nature, the United States Army developed Project Nike. The missiles designed by Project Nike were intended to intercept the nuclear-armed enemy aircraft before they were able to drop their payload. On May 15, 1957, the Soviet Union launched the world's first ICBM, the R-7. In response, the United States launched its test-model ICBM, Atlas A, in June of the same year. Although both of these ICBMs had less than stellar performances, the technology to wage war around the world using nuclear warheads was now on the horizon. Two years after the start of the space race, the Soviet Union revolutionized the world of atomic defense with the successful launch of the world's first artificial satellite, Sputnik, on October 4, 1957. The United States quickly realized that by employing this satellite technology, the Soviet Union could potentially deploy nuclear-armed ICBMs from orbit, where they would be poised to perform highly accurate nuclear strikes. A United States missile defense program, the Advanced Research Projects Agency
|
{"page_id": 67311718, "title": "Project BAMBI"}
|
The 2002 Eastern Mediterranean Event was a high-energy upper atmosphere explosion over the Mediterranean Sea, around 34°N 21°E (between Libya and Crete) on June 6, 2002. This explosion, similar in power to a small atomic bomb, has been related to a small asteroid undetected while approaching Earth. The object disintegrated as a meteor air burst over the sea, and no meteorite fragments were recovered. The event occurred during the 2001–2002 India–Pakistan standoff, and there were concerns by General Simon Worden of the U.S. Air Force that if the upper atmosphere explosion had occurred closer to Pakistan or India, it could have sparked a nuclear war between the two countries. == See also == Impact event Near-Earth object Potentially hazardous asteroid Vela incident == References ==
|
{"page_id": 8892305, "title": "2002 Eastern Mediterranean event"}
|
and class-based models provided only minor additional improvement. SRILM (Stolcke, 2002) and KenLM (Heafield 2011, Heafield et al. 2013) are publicly available toolkits for building n-gram language models. Large language models are based on neural networks rather than n-grams, enabling them to solve the two major problems with n-grams: (1) the number of parameters increases exponentially as the n-gram order increases, and (2) n-grams have no way to generalize from training examples to test set examples unless they use identical words. Neural language models instead project words into a continuous space in which words with similar contexts have similar representations. We’ll introduce transformer-based large language models in Chapter 9, along the way introducing feedforward language models (Bengio et al. 2006, Schwenk 2007) in Chapter 7 and recurrent language models (Mikolov, 2012) in Chapter 8. # Exercises 3.1 Write out the equation for trigram probability estimation (modifying Eq. 3.11). Now write out all the non-zero trigram probabilities for the I am Sam corpus on page 4. 3.2 Calculate the probability of the sentence i want chinese food . Give two probabilities, one using Fig. 3.2 and the ‘useful probabilities’ just below it on page 6, and another using the add-1 smoothed table in Fig. 3.7. Assume the additional add-1 smoothed probabilities P(i|<s>) = 0.19 and P(</s>|food) = 0.40. 3.3 Which of the two probabilities you computed in the previous exercise is higher, unsmoothed or smoothed? Explain why. 3.4 We are given the following corpus, modified from the one in the chapter:
I am Sam
Sam I am
I am Sam
I do not like green eggs and Sam
Using a bigram language model with add-one smoothing, what is P(Sam | am)? Include <s> and </s> in your counts just like any other token. 3.5 Suppose we didn’t use the
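In the spirit of exercise 3.4, an add-one-smoothed bigram estimator P(w_n | w_{n-1}) = (C(w_{n-1} w_n) + 1) / (C(w_{n-1}) + V) can be sketched as follows; the two-sentence corpus here is a stand-in of our own, not the chapter's figures:

```python
from collections import Counter


def bigram_add1(sentences):
    """Build an add-one-smoothed bigram model.

    sentences: lists of tokens; <s> and </s> are appended and counted like
    any other token, as the exercise instructs.
    Returns p(w2, w1) = smoothed P(w2 | w1).
    """
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for s in sentences:
        toks = ["<s>"] + s + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)

    def p(w2, w1):
        return (bigrams[(w1, w2)] + 1) / (unigrams[w1] + V)

    return p


p = bigram_add1([["I", "am", "Sam"], ["Sam", "I", "am"]])
```

With this toy corpus, V = 5 (<s>, I, am, Sam, </s>), C(am) = 2 and C(am, Sam) = 1, so P(Sam | am) = (1 + 1) / (2 + 5) = 2/7; every unseen bigram gets the floor probability 1/(C(w1) + V) instead of zero.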
|
{"source": 1031, "title": "from dpo"}
|
of modern video- and image-quality metrics commonly employ videos compressed using older standards, such as AVC. In this paper, we present a new benchmark for video-quality metrics that evaluates video compression. It is based on a new dataset consisting of about 2,500 streams encoded using different standards, including AVC, HEVC, AV1, VP9, and VVC. Subjective scores were collected using crowdsourced pairwise comparisons. The list of evaluated metrics includes recent ones based on machine learning and neural networks. The results demonstrate that new no-reference metrics exhibit high correlation with subjective quality and approach the capability of top full-reference metrics. _Hua Wei, Jingxiao Chen, Xiyang Ji, Hongyang Qin, Minwen Deng, Siqin Li, Liang Wang, Weinan Zhang, Yong Yu, Liu Linc, Lanxiao Huang, Deheng Ye, QIANG FU, Yang Wei_ environment based on the Honor of Kings, one of the world’s most popular games at present. Compared to other environments studied in most previous work, ours presents new generalization challenges for competitive reinforcement learning. It is a multi-agent problem with one agent competing against its opponent; and it requires the generalization ability as it has diverse targets to control and diverse opponents to compete with. We describe the observation, action, and reward specifications for the Honor of Kings domain and provide an open-source Python-based interface for communicating with the game engine. We provide twenty target heroes with a variety of tasks in Honor of Kings Arena and present initial baseline results for RL-based methods with feasible computing resources. Finally, we showcase the generalization challenges imposed by Honor of Kings Arena and possible remedies to the challenges. All of the software, including the environment-class, are publicly available. _Jiaqi Leng, Yuxiang Peng, Yi-Ling Qiao, Ming Lin, Xiaodi Wu_ **tl;dr:** a scalable differentiable programming
|
{"source": 3339, "title": "from dpo"}
|
Warn if a function takes a Shared_pointer by value or by reference to const and does not copy or move it to another Shared_pointer on at least one code path. Suggest taking a T* or T& instead. (Simple) ((Foundation)) Warn if a function takes a Shared_pointer by rvalue reference. Suggest taking it by value instead. R.36: Take a const shared_ptr<widget>& parameter to express that it might retain a reference count to the object ??? Reason This makes the function’s ??? explicit. Example, good
void share(shared_ptr<widget>);            // share -- "will" retain refcount
void reseat(shared_ptr<widget>&);          // "might" reseat ptr
void may_share(const shared_ptr<widget>&); // "might" retain refcount
Enforcement (Simple) Warn if a function takes a Shared_pointer parameter by lvalue reference and does not either assign to it or call reset() on it on at least one code path. Suggest taking a T* or T& instead. (Simple) ((Foundation)) Warn if a function takes a Shared_pointer by value or by reference to const and does not copy or move it to another Shared_pointer on at least one code path. Suggest taking a T* or T& instead. (Simple) ((Foundation)) Warn if a function takes a Shared_pointer by rvalue reference. Suggest taking it by value instead. R.37: Do not pass a pointer or reference obtained from an aliased smart pointer Reason Violating this rule is the number one cause of losing reference counts and finding yourself with a dangling pointer. Functions should prefer to pass raw pointers and references down call chains. At the top of the call tree, you obtain the raw pointer or reference from a smart pointer that keeps the object alive. You need to be sure that the smart pointer cannot inadvertently be reset or reassigned from within the call tree below. Note To do this, sometimes you need to take a
|
{"source": 5230, "title": "from dpo"}
|
from a mathematical point we deduce general methodologies for computing the probabilities. Table 1. Comparison of previous and our main differential-linear cryptanalytic results on DES and summary of main cryptanalytic results on CTC2 and Serpent. Cipher Key Size Rounds Attack Technique Data Time Source Note DES 56 8 Differential-linear 768 CP 2^14.6 Enc. 9 2^15.75 CP 2^29.17 Enc. † 12 2^50.6 CP 2^52.34 Enc. This paper 13 2^52.27 CP 2^52.97 Enc. + 2^57.27 MA This paper CTC2 (a 255-bit block size) 255 6 Algebraic 4 CP 2^253 Enc. 7 Differential 2^15 CP 2^15 Enc. 8 Differential-linear 2^37 CP 2^37 Enc. † 10 2^142 CP 2^207 Enc. This paper Serpent 128, 192, 256 7 Differential 2^84 CP 2^78.9 Enc. 10 Linear 2^120.6 KP 2^85 Enc. 2^118.6 KP 2^85 Enc. This paper 10 Differential-linear 2^101.2 CP 2^115.2 Enc. † 2^123.4 CP 2^123.4 Enc. This paper 192, 256 8 Amplified boomerang 2^114 CP 2^179 Enc. 10 Boomerang 2^126.3 ACPC 2^165 Enc. 10 Rectangle 2^126.3 CP 2^165 Enc. 11 Linear 2^122.9 KP 2^189 Enc. 11 Differential-linear 2^121.8 CP 2^135.7 MA † 2^125.5 CP 2^148.1 Enc. This paper 256 8 Differential 2^84 CP 2^206.7 Enc. 9 Amplified boomerang 2^110 CP 2^252 Enc. 12 Differential-linear 2^123.5 CP 2^249.4 Enc. † 2^125.5 CP 2^244.9 Enc. This paper †: The result is based on Biham et al.’s methodology. Using the new methodology we present differential-linear attacks on 13-round DES, 10-round CTC2 with a 255-bit block size and key, and 12-round Serpent with a 256-bit key. Table 1
|
{"source": 6150, "title": "from dpo"}
|
cover and evapotranspiration provides a localised mitigation solution. On a larger scale, the Mau Forest complex in Western Kenya was deforested from 5,200 km2 in 1986 to 3,400 km2 in 2009. Satellite images revealed temperature increases, with deforested areas being 20 °C hotter or more. There were about six trillion trees on the planet, but human activity has destroyed roughly half. Increasing terrestrial biomass will cool the planet. Of the latent heat that escapes at recondensation at cloud level, half departs the atmosphere into space, as the photons escape in a part of the spectrum that does not get reabsorbed by greenhouse gases. Using satellite imagery, the impact of regeneration processes restoring vegetation in arid areas is visible from space and can be tracked over time. Vegetation restoration is clearly visible in images of the Penbamoto project in Tanzania (“Seeing African Restoration from Space: Planet and Justdiggit”). The data associated with these images reveal a temperature reduction in the topsoil of up to 0.75 °C. This temperature reduction was achieved in four years. We can anticipate a larger reduction as the vegetation cover increases. == The movement of water vapour and thermal energy in the atmosphere == The movement of heat embodied in water vapour as it leaves vegetation is not well understood, given the complexity of the dynamics. While the movement of water into the atmosphere through evapotranspiration and consequent cooling is broadly accepted, the movement of water further into the atmosphere is more contentious. There are observable phenomena that provide some clues; mornings following cloudless skies will be cooler than cloudy nights, and deserts get very hot during the day and cool rapidly at night. Heat transfer physics are complex, and involve energy carriers including photons. When energy is freed upon condensation, photons are emitted, transferring energy both upward and downward
|
{"page_id": 72304119, "title": "Transpirational cooling (biological)"}
|
analysis === Historically, multilinear principal component analysis has been referred to as "M-mode PCA", a terminology which was coined by Peter Kroonenberg. In 2005, Vasilescu and Terzopoulos introduced the Multilinear PCA terminology as a way to better differentiate between multilinear tensor decompositions that computed 2nd order statistics associated with each data tensor mode, and subsequent work on Multilinear Independent Component Analysis that computed higher order statistics for each tensor mode. MPCA is an extension of PCA. === Multilinear independent component analysis === Multilinear independent component analysis is an extension of ICA. === Multilinear linear discriminant analysis === Multilinear extension of LDA TTP-based: Discriminant Analysis with Tensor Representation (DATER) TTP-based: General tensor discriminant analysis (GTDA) TVP-based: Uncorrelated Multilinear Discriminant Analysis (UMLDA) === Multilinear canonical correlation analysis === Multilinear extension of CCA TTP-based: Tensor Canonical Correlation Analysis (TCCA) TVP-based: Multilinear Canonical Correlation Analysis (MCCA) TVP-based: Bayesian Multilinear Canonical Correlation Analysis (BMTF) A TTP is a direct projection of a high-dimensional tensor to a low-dimensional tensor of the same order, using N projection matrices for an Nth-order tensor. It can be performed in N steps with each step performing a tensor-matrix multiplication (product). The N steps are exchangeable. This projection is an extension of the higher-order singular value decomposition (HOSVD) to subspace learning. Hence, its origin is traced back to the Tucker decomposition in 1960s. A TVP is a direct projection of a high-dimensional tensor to a low-dimensional vector, which is also referred to as the rank-one projections. As TVP projects a tensor to a vector, it can be viewed as multiple projections from a tensor to a scalar. 
Thus, the TVP of a tensor to a P-dimensional vector consists of P projections from the tensor to a scalar. The projection from a tensor to a scalar is an elementary multilinear projection (EMP).
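The TTP and EMP described above can be sketched concretely with numpy mode-n products; the function and variable names below are mine, not from the literature, and this is a minimal illustration rather than a full MPCA implementation:

```python
import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode."""
    # Move the chosen mode to the front, multiply, then move it back.
    t = np.moveaxis(tensor, mode, 0)
    shp = t.shape
    res = matrix @ t.reshape(shp[0], -1)
    return np.moveaxis(res.reshape((matrix.shape[0],) + shp[1:]), 0, mode)

def ttp(tensor, matrices):
    """Tensor-to-tensor projection: one projection matrix per mode,
    giving a lower-dimensional tensor of the same order."""
    out = tensor
    for mode, u in enumerate(matrices):
        out = mode_n_product(out, u, mode)
    return out

def emp(tensor, vectors):
    """Elementary multilinear projection: tensor -> scalar,
    contracting one vector per mode."""
    out = tensor
    for v in vectors:
        out = np.tensordot(out, v, axes=([0], [0]))
    return float(out)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 5, 6))                   # 3rd-order data tensor
us = [rng.standard_normal((2, d)) for d in x.shape]  # projection matrices
y = ttp(x, us)
print(y.shape)  # (2, 2, 2)

# A TVP to a P-dimensional vector is just P such EMPs stacked.
vs = [rng.standard_normal(d) for d in x.shape]
print(emp(x, vs))
```

For an N-th-order tensor the N mode products are exchangeable, which the loop in `ttp` exploits by applying them in mode order.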
|
{"page_id": 30909817, "title": "Multilinear subspace learning"}
|
{\displaystyle x(t)={\frac {K}{(t_{c}-t)^{2}}}.} == See also == Exponential growth Logistic growth Mathematical singularity == References == == Bibliography == Alexander V. Markov, and Andrey V. Korotayev (2007). "Phanerozoic marine biodiversity follows a hyperbolic trend". Palaeoworld. Volume 16. Issue 4. Pages 311-318. Kremer, Michael. 1993. "Population Growth and Technological Change: One Million B.C. to 1990," The Quarterly Journal of Economics 108(3): 681-716. Korotayev A., Malkov A., Khaltourina D. 2006. Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth. Moscow: URSS. ISBN 5-484-00414-4. Rein Taagepera (1979) People, skills, and resources: An interaction model for world population growth. Technological Forecasting and Social Change 13, 13-30.
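A quick numerical illustration of the hyperbolic solution x(t) = K/(t_c − t)^2 above; the values of K and t_c are arbitrary illustrative choices, not taken from the article:

```python
# Hyperbolic growth x(t) = K / (t_c - t)^2 blows up in finite time t_c,
# unlike exponential growth, which stays finite for every finite t.
K, t_c = 1.0, 10.0

def x(t):
    return K / (t_c - t) ** 2

for t in [0, 5, 9, 9.9, 9.99]:
    print(t, x(t))

# The growth rate dx/dt = 2K/(t_c - t)^3 is proportional to x^(3/2),
# i.e. it grows faster than proportionally to x -- the signature of a
# finite-time singularity at t = t_c.
```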
|
{"page_id": 2363275, "title": "Hyperbolic growth"}
|
(Y,Z)\times \operatorname {Hom} (X,Y)\to \operatorname {Hom} (X,Z),\,(g,f)\mapsto g\circ f} , For each object X, an identity morphism id X ∈ Hom ( X , X ) {\displaystyle \operatorname {id} _{X}\in \operatorname {Hom} (X,X)} subject to the conditions: for any morphisms f : X → Y {\displaystyle f:X\to Y} , g : Y → Z {\displaystyle g:Y\to Z} and h : Z → W {\displaystyle h:Z\to W} , ( h ∘ g ) ∘ f = h ∘ ( g ∘ f ) {\displaystyle (h\circ g)\circ f=h\circ (g\circ f)} and id Y ∘ f = f ∘ id X = f {\displaystyle \operatorname {id} _{Y}\circ f=f\circ \operatorname {id} _{X}=f} . For example, a partially ordered set can be viewed as a category: the objects are the elements of the set and for each pair of objects x, y, there is a unique morphism x → y {\displaystyle x\to y} if and only if x ≤ y {\displaystyle x\leq y} ; the associativity of composition means transitivity. category of 1. The category of (small) categories, denoted by Cat, is a category where the objects are all the categories which are small with respect to some fixed universe and the morphisms are all the functors. 2. Category of modules, Category of topological spaces, Category of groups, Category of metric spaces, etc. classifying space The classifying space of a category C is the geometric realization of the nerve of C. co- Often used synonymous with op-; for example, a colimit refers to an op-limit in the sense that it is a limit in the opposite category. But there might be a distinction; for example, an op-fibration is not the same thing as a cofibration. codensity monad Codensity monad. coend The coend of a functor F : C op × C → X {\displaystyle
|
{"page_id": 2390225, "title": "Glossary of category theory"}
|
A total lunar eclipse will occur at the Moon’s ascending node of orbit on Sunday, July 7, 2047, with an umbral magnitude of 1.7529. It will be a central lunar eclipse, in which part of the Moon will pass through the center of the Earth's shadow. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A total lunar eclipse occurs when the Moon's near side entirely passes into the Earth's umbral shadow. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. A total lunar eclipse can last up to nearly two hours, while a total solar eclipse lasts only a few minutes at any given place, because the Moon's shadow is smaller. Occurring about 3.4 days after perigee (on July 4, 2047, at 0:55 UTC), the Moon's apparent diameter will be larger. Totality will last 100 minutes 49 seconds, the second longest for this Saros series. == Visibility == The eclipse will be completely visible over eastern Australia, Antarctica, and the central and eastern Pacific Ocean, seen rising over east Asia and western Australia and setting over North and South America. == Eclipse details == Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse. == Eclipse season == This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each
|
{"page_id": 21961829, "title": "July 2047 lunar eclipse"}
|
The Rapid Eye Mount telescope (REM) is a fully automatic, 60 cm aperture telescope located at ESO's La Silla Observatory at 2,400 metres altitude on the edge of the Atacama Desert in Chile. The telescope's aim is to catch the afterglows of gamma-ray bursts (GRBs). REM is triggered by a signal from a high-energy satellite such as Swift and rapidly points to the detected location in the sky. It has been operated by the Italian National Institute for Astrophysics since 2002. == Telescope == The telescope has been designed to be a fast pointing instrument, and its relatively small size is balanced by accurate fast pointing at 10°/s. This velocity makes REM suitable for immediate response to random alerts. The telescope hosts two instruments: REMIR, an infrared imaging camera, and ROSS, a visible imager and slitless spectrograph. Thanks to a dichroic placed before the telescope focus, the two cameras can simultaneously observe the same field of view of 10×10 arcminutes. In the infrared range from 1 to 2.3 μm REMIR can use a (z′, J, H, K′) filter set. ROSS is equipped with a standard filter set (V, R, I) and a slitless Amici prism. The observing procedure is completely robotic; the nightly schedule is optimized for the observation of scheduled targets but is immediately overridden by GRB (or other) alerts. Typically REM can observe a new target within 30 seconds of notification. REM was installed in June 2003 and has been gathering data on GRBs and other sources since then. It also serves as a test bench for experimental instrumentation and equipment. In 2006 a wide-field camera parallel to the REM telescope, the TORTORA camera (Telescopio Ottimizzato per la Ricerca dei Transienti Ottici RApidi), was installed. TORTORA has a field of view of 24°×32°
|
{"page_id": 33127110, "title": "Rapid Eye Mount telescope"}
|
the roads to transport timber; archaeologist Neil Judd offered a similar hypothesis. == Archaeoastronomy == === Sun Dagger === Two whorl-shaped etchings near the top of Fajada Butte compose what is called the "Sun Dagger" petroglyph that is tucked behind the eponymous rock panels of the "Three-Slab Site". They are symbolically focal. It consists of two spirals—one principal and one ancillary. The latter left-hand spiral captured both spring and fall equinoxes; its artifice was revealed by a descending spear of light, filtered through the slabs, that shone upon it and split it in two. The former and larger whorl to its right was lit by the titular "sun dagger", which bisected it through another interplay of slab and sunlight. Light struck it, brilliantly, as the summer sun attains its solstice midday peak. The Chacoans were said to be marking, as Anna Sofaer, artist, "Sun Dagger" discoverer, and leading proponent puts it, "the middle of time". Each turn of the 9.25-turn large spiral was found to mark one year in the 18.6-year "lunar excursion cycle" of the rising mid-winter full moon. This record is kept by a slab-cast lunar shadow whose edge strikes in succession each ring. As the full "minimum moon" closest to the winter solstice rises, the shadow's edge precisely strikes the center of the larger spiral; it steps outward year by year, ring by ring, until it strikes the outermost edge of it during the full "maximum moon", again in mid-winter. Fajada Butte bears five other petroglyphs—including a carving of a "rattlesnake", other spirals, and a rectangle—that are conspicuously lit by contrasts between sunbeams and shadows during equinoxes or solstices. Public access to the butte was curtailed when, in 1989, erosion from modern foot traffic was found to be responsible for one of the three screening slabs at
|
{"page_id": 925271, "title": "Chaco Culture National Historical Park"}
|
to be a lack of general awareness surrounding health and data privacy. Terms of service agreements are often long and difficult to understand, leading users to agree to data collection without fully comprehending the implications. A well-designed UI and UX should prioritize transparency, providing clear and accessible privacy settings, easy-to-understand consent processes, and secure authentication methods. Unfortunately, formal assessment or peer review of mobile applications remains largely untested in the context of wearable devices. Enhancing privacy controls through better design can help users take ownership of their data and minimize risks associated with unauthorized access. == Issues and concerns == The FDA drafted guidance for low-risk devices advising that personal health wearables are general wellness products if they only collect data on weight management, physical fitness, relaxation or stress management, mental acuity, self-esteem, sleep management, or sexual function. This was due to the privacy risks surrounding the devices. As these devices are increasingly used and improved, they may soon be able to tell whether a person is showing certain health issues and suggest a course of action. As adoption of the devices grew, the FDA drafted this guidance to decrease the risk to patients in case an app does not function properly. The ethics are also debated: although the devices help track health and promote independence, gathering the information still involves an invasion of privacy. This is due to the huge amounts of data that have to be transferred, which could raise issues for both users and companies if a third party gets access to the data. There was an issue with Google Glass that was used by surgeons in order to track
|
{"page_id": 23770249, "title": "Wearable technology"}
|
most detail in thermophiles, particularly the orders Sulfolobales and Thermoproteales. Two groups of single-stranded DNA viruses that infect archaea have been recently isolated. One group is exemplified by the Halorubrum pleomorphic virus 1 (Pleolipoviridae) infecting halophilic archaea, and the other one by the Aeropyrum coil-shaped virus (Spiraviridae) infecting a hyperthermophilic (optimal growth at 90–95 °C) host. Notably, the latter virus has the largest currently reported ssDNA genome. Defenses against these viruses may involve RNA interference from repetitive DNA sequences that are related to the genes of the viruses. == Reproduction == Archaea reproduce asexually by binary or multiple fission, fragmentation, or budding; mitosis and meiosis do not occur, so if a species of archaea exists in more than one form, all have the same genetic material. Cell division is controlled in a cell cycle; after the cell's chromosome is replicated and the two daughter chromosomes separate, the cell divides. In the genus Sulfolobus, the cycle has characteristics that are similar to both bacterial and eukaryotic systems. The chromosomes replicate from multiple starting points (origins of replication) using DNA polymerases that resemble the equivalent eukaryotic enzymes. In Euryarchaeota the cell division protein FtsZ, which forms a contracting ring around the cell, and the components of the septum that is constructed across the center of the cell, are similar to their bacterial equivalents. In cren- and thaumarchaea, the cell division machinery Cdv fulfills a similar role. This machinery is related to the eukaryotic ESCRT-III machinery which, while best known for its role in cell sorting, also has been seen to fulfill a role in separation between divided cells, suggesting an ancestral role in cell division. Both bacteria and eukaryotes, but not archaea, make spores. 
Some species of Haloarchaea undergo phenotypic switching and grow as several different cell types, including thick-walled structures that
|
{"page_id": 19179592, "title": "Archaea"}
|
CS Camelopardalis (CS Cam; HD 21291) is a binary star in reflection nebula VdB 14, in the constellation Camelopardalis. It is a 4th magnitude star, and is visible to the naked eye under good observing conditions. It forms a group of stars known as the Camelopardalis R1 association, part of the Cam OB1 association. The near-identical supergiant CE Camelopardalis is located half a degree to the south. As a binary star, CS Cam is designated as Struve 385 (STF 385, Σ385). The primary component, CS Camelopardalis A, is a blue-white B-type supergiant with a mean apparent magnitude of 4.21m. The star was found to be a variable star when the Hipparcos data was analyzed. It was given its variable star designation in 1999. It is classified as an Alpha Cygni type variable star and its brightness varies from magnitude 4.19m to 4.23m. Its companion, CS Camelopardalis B, is a magnitude 8.7m blue giant star located 2.4 arcseconds from the primary. == References == == External links == Image CS Camelopardalis Nebula vdB 14 Van Den Bergh 14 and 15
|
{"page_id": 4973983, "title": "CS Camelopardalis"}
|
the way to go — they … Mar 6, 2022: I was reminded about the post on streaming apps by John Siracusa when I was browsing Apple Maps today. Do you know that after a search, it takes three … Mar 5, 2022: Some More Updates of Broadtail I’ve made some more changes to Broadtail over the last couple of weeks. The home page now shows a list of recently published videos below the … Mar 5, 2022: We had a bit of rain last night so the Currawongs were out in force today. It’s always a pleasure listening to their calls, especially when they’re … Mar 4, 2022: Super busy week this week. A couple of long days, with a bunch of long design and planning meetings that my voice is now hoarse. But only a couple of … Mar 4, 2022: Slack’s not great for resolving arguments. I’m in one now at work. Nothing heated, just a disagreement about a certain design choice. But … Mar 3, 2022: Two things in life there’s never enough of: time, and available USB ports. Mar 2, 2022: Speaking of nice development experiences, I took a look at the Playdate SDK yesterday. Docs and tools are really well polished. Managed to get a … Mar 2, 2022: Time and Money Spending a lot of time in Stripe recently. It’s a fantastic payment gateway and a pleasure to use, compared to something like PayPal which … Mar 1, 2022: Nothing so focuses the mind like a deadline, and the mandate to keep it. The easy deadlines are the ones imposed by others. Much harder, and one that … Feb 28, 2022: 🔗 Simulating Amazon DynamoDB unique constraints using transactions A technique to simulate a uniqueness constraint on a field not used in the
|
{"source": 1705, "title": "from dpo"}
|
, if at least some fraction of the total possible -tuples in intersect, then there must be at least some other fraction of all sets in that intersect. Meanwhile, it has been proven that is the fractional Helly number for convex sets in . While this focuses on convex sets, our research studies the fractional version of Helly’s theorem for linear partitions, which are defined as unions of subspaces that are in general position. Helly’s theorem for linear partitions (Arocha, Bracho, Montejano, 2007) states that in a finite family of linear partitions in , if every or fewer linear partitions in have a non-empty intersection, then all of them do. We upper bound the dual VC dimension of any finite family of linear partitions in by . Thus, by the theorem of fractional Helly for bounded VC-dimension (Matoušek, 2004), we conclude that the fractional Helly number of any finite family of linear partitions in is upper bounded by . (Received September 17, 2024) 1203-52-46360 Ayooluwanitemi Aitokhuehi , Benjamin Braiman , David Owen Horace Cutler , Tamas Darvas , Robert Deaton* , Prakhar Gupta , Jude Horsley , Vasanth Pidaparthy , Jen Tang . Hölder estimates for a family of quasi-metrics on the space of convex bodies. The space of convex, compact subsets of some convex body in finds its prototypical metric structure with the Hausdorff distance, under which it becomes a complete metric space. Alternatively, each convex body can be equipped with a function on pairs of its convex compact subsets, where this class of functions generalizes a quasi-metric on the compact convex subsets of the unit simplex arising from complex geometry. Given an arbitrary convex body , we show that the topology induced by the Hausdorff distance restricted to compact convex subsets of is equivalent to the topology generated
|
{"source": 3883, "title": "from dpo"}
|
problem in a finite field k = Fp is to find integers s and t such that the equation g^s · h^t = w^q holds for some w ∈ k, where q is a divisor of the order of the multiplicative group k×. If t is coprime to q we have then computed log_g(h) mod q since we have log_g(h) ≡ −s·t^(−1) mod q. Having computed log_g(h) modulo the different primes dividing the order of k× we can then recover log_g(h) using the Chinese Remainder Theorem. We are thus led to the problem of constructing q-th powers in k, i.e., relations of the form g^s · h^t = w^q. Index calculus techniques can be applied to this problem. If we choose R to be the ring of integers of a more general, non trivial number field, we are led to techniques related to the Number Field Sieve. Contrary to the techniques described above, the Number Field Sieve approach uses two factor bases: one consisting of small rational primes, the other consisting of algebraic primes of small norm. This is due to the fact that the Number Field Sieve approach uses two different maps φ: one is the natural projection φ1 : N → Fp, while the second one φ2 is a certain projection from the ring of algebraic integers R to Fp. In the sieving stage, pairs of smooth elements s1, s2 are collected which have the additional property that the equality φ1(s1) = φ2(s2) holds. Again we are led to a system of linear equations; solving this system will lead to the construction of a q-th power in Fp and will thus yield the solution of the discrete logarithm problem. Schirokauer pre-sented
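The recombination step described above — computing log_g(h) modulo each prime q dividing the group order and combining the residues with the Chinese Remainder Theorem — can be sketched as follows. This is a toy example: the per-subgroup logarithm is brute-forced where a real attack would use index calculus, and the parameters (p = 1019, g = 2) are illustrative choices of mine, not from the text:

```python
def crt(residues, moduli):
    """Chinese Remainder Theorem for pairwise-coprime moduli."""
    x, m = 0, 1
    for r, q in zip(residues, moduli):
        # Solve x + m*t ≡ r (mod q) for t, then extend the solution.
        t = ((r - x) * pow(m, -1, q)) % q
        x += m * t
        m *= q
    return x % m

def dlog_mod_q(g, h, p, n, q):
    """log_g(h) mod q, where q divides n = ord(g) in F_p^x.
    Brute force over the order-q subgroup (index calculus would go here)."""
    gq = pow(g, n // q, p)   # generator of the order-q subgroup
    hq = pow(h, n // q, p)
    for s in range(q):
        if pow(gq, s, p) == hq:
            return s
    raise ValueError("no discrete log found")

p = 1019                     # prime; |F_p^x| = 1018 = 2 * 509
g = 2                        # a generator of F_p^x
n = p - 1
secret = 777
h = pow(g, secret, p)

factors = [2, 509]           # primes dividing the group order
residues = [dlog_mod_q(g, h, p, n, q) for q in factors]
print(crt(residues, factors))   # recovers 777
```

This is exactly the Pohlig–Hellman structure: the hard work is confined to each prime-order subgroup, and CRT glues the partial logarithms back together.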
|
{"source": 5836, "title": "from dpo"}
|
These are useful in studying outer shell structure of nuclei. Transfer reactions can occur: from the projectile to the target - stripping reactions from the target to the projectile - pick-up reactions Examples: (α,n) and (α,p) reactions. Some of the earliest nuclear reactions studied involved an alpha particle produced by alpha decay, knocking a nucleon from a target nucleus. (d,n) and (d,p) reactions. A deuteron beam impinges on a target; the target nuclei absorb either the neutron or proton from the deuteron. The deuteron is so loosely bound that this is almost the same as proton or neutron capture. A compound nucleus may be formed, leading to additional neutrons being emitted more slowly. (d,n) reactions are used to generate energetic neutrons. The strangeness exchange reaction (K, π) has been used to study hypernuclei. The reaction 14N(α,p)17O performed by Rutherford in 1917 (reported 1919), is generally regarded as the first nuclear transmutation experiment. ==== Reactions with neutrons ==== Reactions with neutrons are important in nuclear reactors and nuclear weapons. While the best-known neutron reactions are neutron scattering, neutron capture, and nuclear fission, for some light nuclei (especially odd-odd nuclei) the most probable reaction with a thermal neutron is a transfer reaction: Some reactions are only possible with fast neutrons: (n,2n) reactions produce small amounts of protactinium-231 and uranium-232 in the thorium cycle which is otherwise relatively free of highly radioactive actinide products. 9Be + n → 2α + 2n can contribute some additional neutrons in the beryllium neutron reflector of a nuclear weapon. 7Li + n → T + α + n unexpectedly contributed additional yield in the Bravo, Romeo and Yankee shots of Operation Castle, the three highest-yield nuclear tests conducted by the U.S. === Compound nuclear reactions === Either a low-energy projectile is absorbed or a higher energy particle
|
{"page_id": 460322, "title": "Nuclear reaction"}
|
evaporators requires a driving force to move the film against gravity, and this causes a limitation because a sufficient temperature difference between the heating surfaces is required to provide the driving force. Limited product versatility: Another major limitation of rising film evaporators is the requirement for the products to be of low viscosity and have minimal fouling tendencies. Competitive process designs like plate-type evaporators can handle liquids that are more viscous with higher fouling tendencies because the inner parts are more easily accessible for cleaning and maintenance. == Main characteristic and assessments == Evaporators have the aim of concentrating a solution by vaporising the solvent. To assess the performance of a rising film evaporator, the capacity and efficiency of the evaporator are measured. Capacity is the amount of water vaporised per unit time, while the steam economy is the amount of solvent vaporised per unit mass of steam used. Hence, the main process attributes are the characteristics of the process that significantly affect these two areas. === Overall rate of heat transfer === Rising film evaporators use the same heat transfer principle as a general shell and tube heat exchanger, so the overall heat transfer rate is crucial in determining the performance of the evaporator. This factor will determine the capacity of the rising film evaporator. The fundamental general formula which gives the overall heat transfer rate is Q = U·A·ΔTlm, where Q is the heat transfer rate, U is the overall heat transfer coefficient, A is the overall heat transfer area, and ΔTlm is the temperature difference or log mean temperature difference. For a general shell and tube heat exchanger, U is given by the equation 1/U = 1/ho + 1/hod + do·ln(do/di)/(2·kw) + (do/di)·(1/hid) + (do/di)·(1/hi), where do and di are the outer and inner tube diameters, kw is the thermal conductivity of the tube wall, ho is the outside fluid film coefficient, hi is the inside fluid film coefficient, hod is the outside dirt coefficient (fouling factor), and hid is the inside dirt
|
{"page_id": 40801748, "title": "Rising film evaporator"}
|
to function properly. === Export === After processing is complete, mRNA needs to be transported from the cell nucleus to the cytoplasm. This is a three-step process involving the generation of a cargo-carrier complex in the nucleus, followed by translocation of the complex through the nuclear pore complex, and finally release of the cargo into the cytoplasm. The carrier is then subsequently recycled. The TAP/NXF1:p15 heterodimer is thought to be the key player in mRNA export. Over-expression of TAP in Xenopus laevis frogs increases the export of transcripts that are otherwise inefficiently exported. However TAP needs adaptor proteins because it is unable to interact directly with mRNA. The Aly/REF protein interacts and binds to the mRNA, recruiting TAP. === mRNA localization === mRNA localization is critical for regulation of gene expression by allowing spatially regulated protein production. Through mRNA localization, proteins are translated in their intended target site of the cell. This is especially important during early development, when rapid cell cleavages give different cells various combinations of mRNA, which can then lead to drastically different cell fates. RBPs are critical in the localization of this mRNA, ensuring that proteins are only translated in their intended regions. One of these proteins is ZBP1. ZBP1 binds to beta-actin mRNA at the site of transcription and moves with the mRNA into the cytoplasm. It then localizes this mRNA to the lamella region of several asymmetric cell types, where it can then be translated. In 2008 it was proposed that FMRP was involved in the stimulus-induced localization of several dendritic mRNAs in the neuronal dendrites of cultured hippocampal neurons. More recent studies of FMRP-bound RNAs present in microdissected dendrites of CA1 hippocampal neurons revealed no changes in localization in wild type versus FMRP-null mouse brains. === Translation === Translational regulation provides a rapid mechanism to control gene expression. Rather
|
{"page_id": 3131507, "title": "RNA-binding protein"}
|
planet. A superior planet is also distinct from a gas giant. == References ==
|
{"page_id": 243441, "title": "Inferior and superior planets"}
|
receiver to reply with a negative acknowledgement meaning "I did not receive your last message correctly, please resend it" (e.g., if the data was corrupted en route). Handshaking facilitates connecting relatively heterogeneous systems or equipment over a communication channel without the need for human intervention to set parameters. == Example == === TCP three-way handshake === Establishing a normal TCP connection requires three separate steps: The first host (Alice) sends the second host (Bob) a "synchronize" (SYN) message with its own sequence number x {\displaystyle x} , which Bob receives. Bob replies with a synchronize-acknowledgment (SYN-ACK) message with its own sequence number y {\displaystyle y} and acknowledgement number x + 1 {\displaystyle x+1} , which Alice receives. Alice replies with an acknowledgment (ACK) message with acknowledgement number y + 1 {\displaystyle y+1} , which Bob receives and to which he doesn't need to reply. In this setup, the synchronize messages act as service requests from one server to the other, while the acknowledgement messages return to the requesting server to let it know the message was received. The reason for the client and server not using a default sequence number such as 0 for establishing the connection is to protect against two incarnations of the same connection reusing the same sequence number too soon, which means a segment from an earlier incarnation of a connection might interfere with a later incarnation of the connection. === SMTP === The Simple Mail Transfer Protocol (SMTP) is the key Internet standard for email transmission. It includes handshaking to negotiate authentication, encryption and maximum message size. === TLS handshake === When a Transport Layer Security (SSL or TLS) connection starts, the record encapsulates a "control" protocol—the handshake messaging protocol (content type 22). This protocol is used to exchange all the information required by both
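The three-way handshake described above can be modeled in a few lines; the dictionary "segments" below are an illustrative sketch of mine, not real TCP packet layouts, and the random values stand in for each host's initial sequence number (ISN):

```python
import random

def three_way_handshake():
    """Toy model of the TCP three-way handshake:
    SYN (Alice -> Bob), SYN-ACK (Bob -> Alice), ACK (Alice -> Bob)."""
    x = random.randrange(2**32)   # Alice's initial sequence number
    y = random.randrange(2**32)   # Bob's initial sequence number

    syn = {"flags": "SYN", "seq": x}                    # Alice -> Bob
    syn_ack = {"flags": "SYN-ACK", "seq": y,            # Bob -> Alice
               "ack": (syn["seq"] + 1) % 2**32}
    ack = {"flags": "ACK",                              # Alice -> Bob
           "seq": syn_ack["ack"],
           "ack": (syn_ack["seq"] + 1) % 2**32}

    # Both sides now agree on the other's next expected sequence number,
    # and Bob does not need to reply to the final ACK.
    return syn, syn_ack, ack

for segment in three_way_handshake():
    print(segment["flags"])
```

Using random ISNs rather than a fixed default (such as 0) mirrors the protection mentioned above: it makes it unlikely that a segment from an earlier incarnation of the same connection is mistaken for part of the current one.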
|
{"page_id": 41229, "title": "Handshake (computing)"}
|
The Web Services Business Process Execution Language (WS-BPEL), commonly known as BPEL (Business Process Execution Language), is an OASIS standard executable language for specifying actions within business processes with web services. Processes in BPEL export and import information by using web service interfaces exclusively. == Overview == One can describe web service interactions in two ways: as executable business processes and as abstract business processes. An executable business process models the actual behavior of a participant in a business interaction. An abstract business process is a partially specified process that is not intended to be executed. Contrary to Executable Processes, an Abstract Process may hide some of the required concrete operational details. Abstract Processes serve a descriptive role, with more than one possible use case, including observable behavior and/or process template. WS-BPEL aims to model the behavior of processes, via a language for the specification of both Executable and Abstract Business Processes. By doing so, it extends the Web Services interaction model and enables it to support business transactions. It also defines an interoperable integration model that should facilitate the expansion of automated process integration both within and between businesses. Its development came out of the notion that programming in the large and programming in the small required different types of languages. As such, it is serialized in XML and aims to enable programming in the large. == Programming in the large/small == The concepts of programming in the large and programming in the small distinguish between two aspects of writing the type of long-running asynchronous processes that one typically sees in business processes: Programming in the large generally refers to the high-level state transition interactions of a process. BPEL refers to this concept as an Abstract Process. 
A BPEL Abstract Process represents a set of publicly observable behaviors in
|
{"page_id": 334947, "title": "Business Process Execution Language"}
|
José Costas Gual (19 January 1918 – 9 July 2011) was a Spanish amateur astronomer. == Biography == José Costas Gual founded the group Pro Divulgación Astronómica (PDA) on 24 September 1936, in the municipality of San Celoni, in the province of Barcelona. All the observations made by José Costas after that date have filled the "Diarios" of the PDA, including activities, observations, and thoughts related to astronomy. They contain more than 72 years of astronomical history, in more than 25 volumes, which are currently in the process of being digitized and published on his official page. Costas had a brief but intense relationship with the Spanish astronomer José Comas y Solá, until the death of Comas in December 1937. After 1959, Costas dedicated himself to polishing mirrors for small reflecting telescopes, of which more than 3500 were built, an activity which won him great popularity in Spain. == See also == List of astronomers == References == Oliver, Josep M. (1997). Historia de la Astronomía amateur en España (in Spanish). Equipo Sirius. ISBN 978-84-86639-84-6. == External links == Página oficial de Josep Costas
|
{"page_id": 33664929, "title": "José Costas Gual"}
|
strong. Victor is able to reverse engineer the alien boom tube technology and teleport the entire invading army, including Darkseid, away, saving the Earth; he then helps found the Justice League. Silas attempts to study his son from a scientific perspective, but Victor refuses, focusing instead on helping people as a superhero, which leaves the two at odds. After David Graves makes an attack against the Justice League, Cyborg learns that he walks the line between life and death after he sees a false apparition of his human self. The apparition tries to convince him that the real Victor died and that Cyborg is merely his body, animated by the robotics into believing it is still Victor. Victor sees through the ruse, though the encounter later leads him to question his humanity. Flash attempts to be there for Victor during this time of questioning. During the Throne of Atlantis storyline, Cyborg's father offers him an upgrade that would allow him to operate underwater at the price of his remaining lung, which Victor at first rejects. However, following the capture of the rest of the Justice League by Ocean Master, Cyborg reluctantly accepts the upgrade. This allows him and Mera to rescue the others. During the "Trinity War" storyline, Cyborg gets a visual of Shazam heading to Kahndaq, prompting Batman to assemble the Justice League, with help from Zatanna, to meet in Kahndaq and stop Shazam. Following the supposed death of Doctor Light in Kahndaq, Batman tells Superman that Cyborg and Martian Manhunter are performing an autopsy to prove his death was not Superman's fault. As Wonder Woman leads the Justice League Dark to look for Pandora, Cyborg is among the superheroes who remain at A.R.G.U.S. while Batman, Flash, Aquaman,
|
{"page_id": 676520, "title": "Cyborg (DC Comics)"}
|
him to spawn clones of himself. When the Vermin clones attack Spider-Man, he is saved by Kraven the Hunter. After Spider-Man, under the influence of Norman Osborn's sins, injects Kraven the Hunter with a compound containing pheromones, Kraven enters the sewer, where he is attacked by Vermin and his clones. Kraven the Hunter defeats most of them before being knocked out. During the "Gang War" storyline, Vermin and his clones attack the F.E.A.S.T. building amidst the gang wars. They are repelled by Spider-Boy. Vermin was later brought into the Sewer Enclave by Miles Morales' clone Shift. == Powers and abilities == Vermin's strength was enhanced by the experimental mutagenic process designed by Arnim Zola and forced upon him. He resembles a humanoid rat and possesses enhanced strength, durability, senses, and agility. Vermin has the ability to control rats and dogs within a two-mile (3 km) radius of himself. == Reception == In 2021, Comic Book Resources (CBR) ranked Vermin 5th in their "Marvel: 10 Characters Baron Zemo Created In The Comics" list. == Other versions == === Earth-71290 === An alternate universe variant of Edward Whelan from Earth-71290 appears in Spider-Society #2. This version works as an assistant to Ashley Kafka at Ravencroft. === Ultimate Marvel === An alternate universe variant of Edward Whelan / Vermin from Earth-1610 appears in All-New Ultimates #7. This version previously worked for S.H.I.E.L.D. until it was dissolved. Following this, he took to living in a sewer system, where he developed a psychic connection to Agent Crock; the two became tyrants until they encountered the Young Ultimates. In the ensuing fight, Shadowcat kills Crock, which kills Vermin as well due to their connection. == In other media == Vermin was originally meant to appear in Spider-Man (1995). Vermin appears as a boss in The Amazing Spider-Man (2012),
|
{"page_id": 30861584, "title": "Vermin (character)"}
|
convert to maghemite on exposure to heat. Temperatures sufficient to produce maghemite on a landscape scale indicate the influence of fire. Given the rarity of such natural phenomena in the modern day, magnetic susceptibility in Chernozem likely relates to control of fire by early humans. Humification can darken soils (melanization) absent a pyrogenic carbon component. Given the variety of pedogenic processes that can contribute to the formation of dark earth, the name Chernozem covers different types of black soils with the same appearance but different formation histories. == See also == Loam Dark earth Terra preta Vertisol Mollisol Soil organic matter == Notes == == References == IUSS Working Group WRB: World Reference Base for Soil Resources, fourth edition. International Union of Soil Sciences, Vienna 2022. ISBN 979-8-9862451-1-9. == Further reading == W. Zech, P. Schad, G. Hintermaier-Erhard: Soils of the World. Springer, Berlin 2022, Chapter 5.3.2. ISBN 978-3-540-30460-9 == External links == profile photos (with classification) WRB homepage IUSS profile photos (with classification) Archived 9 September 2018 at the Wayback Machine IUSS World of Soils
|
{"page_id": 491376, "title": "Chernozem"}
|
year 1507 in the presence of Cardinal Bernhard and the little corpse was found still uncorrupted. In 1647 it was transferred to Baden. The gravestone, still present in the castle-church at Pforzheim, declares explicitly, handed down under the exact date, that the child was killed by Jews: "Margaretha a Judeis occisa ob. feliciter Anno Domini MCCLXVII. Cal. Jul. fer. VI" (Sachs: Geschichte der Markgrafschaft BadenCarlsruhe[History of the Margravate of Baden-Carlsruhe], II, 1767, p. 15 and following -- Also briefly mentioned in the Zeitschrift für die Geschichte des Oberrheins [Magazine for the History of the Upper Rhine], IX, Karlsruhe, 1858, p. 271, Nr. 17). In a later report the question is raised in connection with this crime, as to why the Jews had the custom in every (!) nation in which they were living, of shedding Christian blood. So one should surely know that every year in each nation the relevant city or region would be chosen by lot, which would have to supply the Christian blood necessary for ritual purposes to the Jews (Thomas de Cantimpré: De vita instituenda, II, Chapters 29, 23)! Likewise around this time (1270) a Jew at St. Dié, who (21) had violated his Christian servant-girl after previously rendering her unconscious in order to gain her blood -- the Jewish compiler of this document speaks of an "operation" -- was brought before the court of the Duke of Lotharingia and condemned. His execution was done in this manner: tied to the tail of a horse, he was dragged to the gibbet and hanged upside-down. The contemporary report, however, brings out the following extremely typical turn of events: As the Jew, preparing himself at the place of execution, wanted to speak once more, to confess the reasons (!) for his crime, he was prevented from doing so
|
{"source": 959, "title": "from dpo"}
|
automatically scheduled across basic block boundaries according to data dependencies. Each VLIW word in a frame is uniquely identified and has a prefix that includes the identifier of the latest VLIW word that it depends on. Thus, we guarantee that all source operands for all ops in the word will be ready before it executes. This simple dependence check mechanism avoids the need for empty VLIW words when there are not enough useful ops to insert between a producer and consumer word, and also potential stalls on the first use of the result of a load instruction that misses in the cache. > Figure 4. Example code with its OoO execution data-flow and steps to build the VLIW Frame through the FCT and the VFU. Register handling. VLIW frames are atomic units of execution. Live-ins are the values that are used but not produced inside the frame and have to be read from the initial architectural state. To simplify register handling, live-ins are specified by their architectural register IDs. Live-outs are the last values written to any architectural register inside the frame, which have to be written back to the architectural state when the VLIW frame commits. Any value that is created and destroyed (i.e., its architectural register is overwritten) inside the frame is treated as a temporary register and statically assigned to a physical register in the VRF. The VRF is split into two blocks: one block Arch-out that holds live-outs and directly maps to the architectural registers and one block VRF-temp that holds temporary registers. Note that the micro-ops' sources and destinations are specified with both architectural and PRF IDs when sent to the VFU. To detect live-ins and live-outs, and to map PRF registers to VRF registers during frame creation, the VFU uses two simple tables: PRF-to-VRF and
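The live-in/live-out classification described above can be sketched in a few lines. The frame encoding below (micro-ops as destination/source architectural register names) is a simplification invented for illustration, not the paper's actual hardware data structures:

```python
def frame_liveness(ops):
    """Given a frame of micro-ops as (dest, [sources]) over architectural
    register names, return the live-in set (registers read before any write
    inside the frame) and the live-out map (index of the last op writing
    each architectural register). Registers written and then overwritten
    are the frame's temporaries."""
    written = set()
    live_ins = set()
    live_outs = {}
    for i, (dest, srcs) in enumerate(ops):
        for s in srcs:
            if s not in written:
                live_ins.add(s)      # must be read from the architectural state
        written.add(dest)
        live_outs[dest] = i          # last write wins; earlier writes are temporaries
    return live_ins, live_outs

# Hypothetical frame: r1 = r2 + r3; r2 = r1 * r1; r1 = r2 - r4
ops = [("r1", ["r2", "r3"]), ("r2", ["r1", "r1"]), ("r1", ["r2", "r4"])]
live_ins, live_outs = frame_liveness(ops)
```

Here r2, r3, and r4 are live-ins, while the final writes to r1 and r2 are the live-outs that must be committed to the architectural state.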
|
{"source": 2317, "title": "from dpo"}
|
2021 at 9:37 am]( Will add to the graph paper method: •Once you have your clean to-scale drawing of the space you’re working with, make a bunch of photocopies of it. Use these to sketch ideas. • Especially in an older house, check that the distance between, say, door frame and window frame at the ceiling is the same at the floor. If you aren’t sure what you want, I think the best thing is to look at a LOT of samples from each company. Go with the one whose designs evoke “Oh, this looks nice. I like how they solved this” responses from you. 1.  is monotonic. We also say that \({\textbf {f}}\) is strongly consistent with cumulative domination if \(\,{\textbf {f}}({\textbf {x}}) > {\textbf {f}}({\textbf {y}})\) whenever \({\textbf {x}}\) cumulative dominates \({\textbf {y}}\) and \(\,{\textbf {x}} \ne {\textbf {y}}\). Then we simply say that \(\,{\textbf {f}}\) is **strongly**_c_**-monotonic**. This property implies that \(\,{\textbf {f}}\) is _c_-monotonic thus monotonic. 6. The function \({\textbf {f}}\) is consistent with Axiom Z if \(\, {\epsilon }\in X^{T}\), \(\,\epsilon \succcurlyeq _c 0\) implies \(\,{\textbf {f}}(\epsilon ) \geqslant 0\). 7. The function \({\textbf {f}}\) is delay-averse if \(\,{\textbf {f}}( {\textbf {x}} ) \geqslant {\textbf {f}} ({\textbf {x}} - h \textbf{e}_i + h\textbf{e}_{i+1})\) when \(i\in \{1, \ldots , T-1\}\), \(h>0\), and \({\textbf {x}} - h \textbf{e}_i + h\textbf{e}_{i+1} \in X^T\). It is strongly delay-averse if we require \(\,{\textbf {f}}( {\textbf {x}} ) > {\textbf {f}} ({\textbf {x}} - h \textbf{e}_i + h\textbf{e}_{i+1})\) in the conclusion. Obviously, strongly delay-averse implies delay-averse. In cases such as \(X=\mathbb {R}_+^T\), \({\textbf {f}}\,\) is delay-averse if and only if \(\,{\textbf {f}}( {\textbf {x}} ) \geqslant {\textbf {f}} ({\textbf {x}} - h \textbf{e}_i + h\textbf{e}_{j})\) when \(i, j\in N\), \(j>i\), \(h>0\), and \({\textbf {x}} - h \textbf{e}_i + h\textbf{e}_{j} \in X^T\). 
We only need to apply a simple recursive argument to prove this equivalence. A comparative tool that is weaker than an evaluative function is a binary relation. A preorder on \(X^T\) is a reflexive and transitive binary relation _R_ on \(X^T\). It is complete when for all \(\,{\textbf {x}}, {\textbf {y}}\in X^T\), either \(\,{\textbf {x}}R{\textbf {y}}\) or \(\,{\textbf {y}} R {\textbf {x}}\) is true. The concepts above can be transferred _ceteris paribus_ to this language. For example, a binary relation _R_ on \(X^{T}\) is monotonic if \(\,{\textbf {x}} \geqslant {\textbf {y}}\) implies \(\,{\textbf {x}}
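As a concrete instance of the delay-aversion definition, exponential discounting with a factor strictly below one is strongly delay-averse: shifting a payoff h from period i to period i+1 strictly lowers the evaluation, since the shifted mass is multiplied by a smaller discount weight. A quick numerical check (the payoff stream and discount factor are arbitrary illustrative values):

```python
def discounted_value(x, d=0.9):
    """Exponentially discounted evaluation f(x) = sum_t d**t * x[t];
    with 0 < d < 1 this is a strongly delay-averse evaluative function."""
    return sum(d**t * xt for t, xt in enumerate(x))

x = [5.0, 3.0, 2.0]
h = 1.0
# Pure delay: move a payoff of h from period i=1 to period i+1=2.
delayed = [5.0, 3.0 - h, 2.0 + h]
v0 = discounted_value(x)        # 5 + 3*0.9 + 2*0.81 = 9.32
v1 = discounted_value(delayed)  # 5 + 2*0.9 + 3*0.81 = 9.23
```

The strict drop v0 > v1 is exactly the strongly delay-averse inequality, here with the difference h(d^1 − d^2) = 0.09.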
|
{"source": 6202, "title": "from dpo"}
|
studies in murine models have shown that the presence of XX or XY chromosomes in brain cells can lead to sex-specific differences in neuronal development and function, independent of gonadal hormone influence. === C. elegans (Roundworms) === The nematode Caenorhabditis elegans has provided key insights into CASI in hermaphroditic and male individuals. Sex determination in C. elegans is controlled by the X:A ratio (the number of X chromosomes relative to autosomes), which regulates a cascade of sex-specific gene expression. Each cell independently interprets this ratio, leading to cell-autonomous decisions about sexual differentiation. === Butterflies and Moths (Lepidoptera) === In Lepidoptera, sex determination involves a WZ/ZZ system, similar to birds. Studies have shown that the sex of individual cells is influenced by chromosomal composition, with evidence of CASI playing a significant role in the development of sex-specific traits, such as wing patterns and pheromone production. == Implications of Cell Autonomous Sex Identity == The discovery and study of cell autonomous sex identity have far-reaching implications across various fields of biology, medicine, and evolution. By highlighting the intrinsic properties of cells in determining sex-specific traits, CASI has challenged traditional hormone-centric models of sexual differentiation and opened new avenues of research and application. === Evolutionary Biology === CASI provides critical insights into the evolution of sex determination systems. The existence of cell-autonomous mechanisms suggests that sex-specific traits can evolve independently of hormonal influences, potentially allowing for greater plasticity in evolutionary pathways. This understanding helps explain the diversity of sex determination strategies observed across taxa, from chromosomal to environmental systems. 
=== Developmental Biology === CASI has redefined our understanding of sexual development by emphasizing the role of intrinsic cellular mechanisms. This has implications for studying developmental disorders related to sexual differentiation, such as androgen insensitivity syndrome and Turner syndrome, as it highlights the interplay
|
{"page_id": 78914959, "title": "Cell autonomous sex identity"}
|
Waffles is a collection of command-line tools for performing machine learning operations developed at Brigham Young University. These tools are written in C++, and are available under the GNU Lesser General Public License. == Description == The Waffles machine learning toolkit contains command-line tools for performing various operations related to machine learning, data mining, and predictive modeling. The primary focus of Waffles is to provide tools that are simple to use in scripted experiments or processes. For example, the supervised learning algorithms included in Waffles are all designed to support multi-dimensional labels, classification and regression, automatically impute missing values, and automatically apply necessary filters to transform the data to a type that the algorithm can support, such that arbitrary learning algorithms can be used with arbitrary data sets. Many other machine learning toolkits provide similar functionality, but require the user to explicitly configure data filters and transformations to make it compatible with a particular learning algorithm. The algorithms provided in Waffles also have the ability to automatically tune their own parameters (at the cost of additional computational overhead). Because Waffles is designed for scriptability, it deliberately avoids presenting its tools in a graphical environment. It does, however, include a graphical "wizard" tool that guides the user to generate a command that will perform a desired task. This wizard does not actually perform the operation, but requires the user to paste the command that it generates into a command terminal or a script. The idea motivating this design is to prevent the user from becoming "locked in" to a graphical interface. All of the Waffles tools are implemented as thin wrappers around functionality in a C++ class library. This makes it possible to convert scripted processes into native applications with minimal effort. 
Waffles was first released as an open source project
|
{"page_id": 32867182, "title": "Waffles (machine learning)"}
|
\oint dS_{\text{Sys}}=0} (as a cyclic process), ∮ d S Total = ∮ d S Res + ∮ d S Sys = 0. {\displaystyle \oint dS_{\text{Total}}=\oint dS_{\text{Res}}+\oint dS_{\text{Sys}}=0.} The Clausius inequality is a consequence of applying the second law of thermodynamics at each infinitesimal stage of heat transfer, and is thus in a sense a weaker condition than the Second Law itself. == Heat engine efficiency == In the heat engine model with two thermal reservoirs (hot and cold reservoirs), the limit of the efficiency of any heat engine η = W Q 1 {\displaystyle \eta ={\frac {W}{Q_{1}}}} , where W {\displaystyle W} and Q 1 {\displaystyle Q_{1}} are work done by the heat engine and heat transferred from the hot thermal reservoir to the engine, respectively, can be derived from the first law of thermodynamics (i.e., the law of conservation of energy) and the Clausius theorem or inequality. Respecting the abovementioned sign convention for heat, Q 1 + Q 2 = W → η = W Q 1 = 1 + Q 2 Q 1 {\displaystyle Q_{1}+Q_{2}=W\to \eta ={\frac {W}{Q_{1}}}=1+{\frac {Q_{2}}{Q_{1}}}} , where Q 2 {\displaystyle Q_{2}} is heat transferred from the engine to the cold reservoir. The Clausius inequality Q 1 T 1 + Q 2 T 2 ≤ 0 {\displaystyle {\frac {Q_{1}}{T_{1}}}+{\frac {Q_{2}}{T_{2}}}\leq 0} can be expressed as Q 2 Q 1 ≤ − T 2 T 1 {\displaystyle {\frac {Q_{2}}{Q_{1}}}\leq -{\frac {T_{2}}{T_{1}}}} . Substituting this inequality into the above equation yields η = W Q 1 ≤ 1 − T 2 T 1 {\displaystyle \eta ={\frac {W}{Q_{1}}}\leq 1-{\frac {T_{2}}{T_{1}}}} . This is the limit on heat engine efficiency; equality corresponds to the Carnot efficiency, that is, the efficiency of all reversible heat engines and the maximum efficiency
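The bound just derived is easy to evaluate numerically. A small sketch follows; the reservoir temperatures and heat values are arbitrary illustrative numbers, with heat rejected to the cold reservoir written as negative per the sign convention above:

```python
def carnot_limit(t_hot, t_cold):
    """Upper bound on heat-engine efficiency between two reservoirs
    at absolute temperatures t_hot > t_cold: eta <= 1 - T2/T1."""
    if t_cold <= 0 or t_hot <= t_cold:
        raise ValueError("need absolute temperatures with T_hot > T_cold > 0")
    return 1.0 - t_cold / t_hot

def efficiency(q_in, q_out):
    """Actual efficiency eta = 1 + Q2/Q1 from heat absorbed from the hot
    reservoir (q_in > 0) and heat rejected to the cold one (q_out < 0)."""
    return 1.0 + q_out / q_in

eta_max = carnot_limit(500.0, 300.0)   # 1 - 300/500 = 0.4
eta = efficiency(1000.0, -700.0)       # 1 - 700/1000 = 0.3
```

Here eta < eta_max, as the Clausius inequality requires; a reversible engine between the same reservoirs would reject only −600 units and achieve eta = 0.4 exactly.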
|
{"page_id": 4320083, "title": "Clausius theorem"}
|
Phantom eye syndrome (PES) comprises phantom pain in the eye and visual hallucinations after the removal of an eye (enucleation, evisceration). == Symptoms == Many patients experience one or more phantom phenomena after the removal of the eye: Phantom pain in the (removed) eye (prevalence: 26%) Non-painful phantom sensations Visual hallucinations. About 30% of patients report visual hallucinations of the removed eye. Most of these hallucinations consist of basic perceptions (shapes, colors). In contrast, visual hallucinations caused by severe visual loss without removal of the eye itself (Charles Bonnet syndrome) are less frequent (prevalence 10%) and often consist of detailed images. == Pathogenesis == === Causes === Triggers of phantom eye syndrome encompass a range of factors that can initiate or intensify phantom sensations and pain following eye removal. These triggers commonly include fatigue, stress, and fluctuations in lighting conditions. Some cases suggest that the duration of pain prior to eye removal, and the presence of preoperative conditions such as headache or eye pain, correlate with the likelihood of experiencing subsequent phantom sensations. === Phantom pain and non-painful phantom sensations === Phantom sensations in phantom eye syndrome encompass various tactile perceptions such as paresthesia, dysesthesia, and hyperpathia, excluding pain. These sensations can manifest in different forms, including kinetic, kinesthetic, or exteroceptive perceptions, and are commonly experienced by almost all PES patients. Some cases have highlighted the prevalence of phantom eye pain (PEP) in PES, with rates as high as 47% reported. PEP includes pain felt around the amputated eye (periocular pain), contributing to a higher prevalence compared to studies defining PEP solely as pain in the amputated eye. 
Frequency and characteristics of PEP vary, with paroxysmal episodes lasting for a few seconds or minutes being common, and weather conditions such as cold and humid weather serving as
|
{"page_id": 4478702, "title": "Phantom eye syndrome"}
|
even-toed ungulates. It is a member of the unranked clade Cetacea, with all the whales, dolphins, and porpoises, and further classified into Odontoceti, containing all the toothed whales and dolphins. It is the sole extant species of its genus, Physeter, in the family Physeteridae. Two species of the related extant genus Kogia, the pygmy sperm whale Kogia breviceps and the dwarf sperm whale K. sima, are placed either in this family or in the family Kogiidae. In some taxonomic schemes the families Kogiidae and Physeteridae are combined as the superfamily Physeteroidea (see the separate entry on the sperm whale family). Swedish ichthyologist Peter Artedi described it as Physeter catodon in his 1738 work Genera piscium, from the report of a beached specimen in Orkney in 1693 and two beached in the Netherlands in 1598 and 1601. The 1598 specimen was near Berkhey. The sperm whale is one of the species originally described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. He recognised four species in the genus Physeter. Experts soon realised that just one such species exists, although there has been debate about whether this should be named P. catodon or P. macrocephalus, two of the names used by Linnaeus. Both names are still used, although most recent authors now accept macrocephalus as the valid name, limiting catodon's status to a lesser synonym. Until 1974, the species was generally known as P. catodon. In that year, however, Dutch zoologists Antonius M. Husson and Lipke Holthuis proposed that the correct name should be P. macrocephalus, the second name in the genus Physeter published by Linnaeus concurrently with P. catodon. This proposition was based on the grounds that the names were synonyms published simultaneously, and, therefore, the ICZN Principle of the First Reviser should apply. In this instance,
|
{"page_id": 313530, "title": "Sperm whale"}
|
tons, requiring an initial thrust exceeding 1,000 tons, and assumed the use of a three-stage rocket. In a classified report, the agency described the event as a "stupendous scientific achievement" and concluded that the USSR had likely perfected an intercontinental ballistic missile (ICBM) capable of accurately targeting any location. In reality, the launch weight of the Soviet rocket was 267 metric tons, with an initial thrust of 410 tons and one and a half stages. The CIA's misjudgement was caused by extrapolating the parameters of the US Atlas rocket developed at the same time (launch weight 82 tons, initial thrust 135 tons, maximum payload of 70 kg for low Earth orbit). In part, the favourable data of the Soviet launcher was based on concepts proposed by the German rocket scientists headed by Helmut Gröttrup on Gorodomlya Island, such as, among other things, the rigorous weight saving, the control of the residual fuel quantities and a reduced thrust-to-weight ratio of 1.4 instead of the usual factor of 2. The CIA had heard about such details already in January 1954, when it interrogated Gröttrup after his return from the USSR, but did not take him seriously. ==== US reactions ==== The Soviet success raised a great deal of concern in the United States. For example, economist Bernard Baruch wrote in an open letter titled "The Lessons of Defeat" to the New York Herald Tribune: "While we devote our industrial and technological power to producing new model automobiles and more gadgets, the Soviet Union is conquering space. ... It is Russia, not the United States, who has had the imagination to hitch its wagon to the stars and the skill to reach for the moon and all but grasp it. America is worried. It should be." Eisenhower ordered project Vanguard to move up
|
{"page_id": 84237, "title": "Space Race"}
|
timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. However, he made no progress regarding the question of how exactly the Moon created the tides. Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. Dante references the Moon's influence on the tides in his Divine Comedy. Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi, in his Introductorium in astronomiam, taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. Some theorized that the influence was caused by lunar rays heating the ocean's floor. === Modern era === Simon Stevin in his 1608 De spiegheling der Ebbenvloet (The Theory of Ebb and Flood) dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research needed to be made.
|
{"page_id": 10610469, "title": "Theory of tides"}
|
≃ 3.5 {\displaystyle |{\bar {T}}_{max}|\simeq 3.5} for V S 2 = 2000 m / s {\displaystyle V_{S_{2}}=2000~m/s} (green curve), | T ¯ m a x | ≃ 6 {\displaystyle |{\bar {T}}_{max}|\simeq 6} for V S 2 = 5000 m / s {\displaystyle V_{S_{2}}=5000~m/s} (yellow curve). The red curve corresponds to a large velocity contrast between the layer and the half-space ( χ ¯ ≫ 1 {\displaystyle {\bar {\chi }}\gg 1} ); the amplification is thus very large. As displayed in Fig.3, the maximum amplification is reached at certain frequencies corresponding to the resonance of the sedimentary layer. The fundamental frequency of the layer (or 1st resonance frequency) may be easily calculated under the form: f 0 = V S 1 4 h {\displaystyle f_{0}={\frac {V_{S_{1}}}{4h}}} . The fundamental mode thus corresponds to a quarter wavelength resonance. The "quarter wavelength" approach can be used to estimate site amplifications due to the impedance contrast. When the sedimentary layers are not horizontal (e.g. sedimentary basin), the analysis is more complex since surface waves generated by the lateral heterogeneities (e.g. basin edges) should be accounted for. In such cases, it is possible to perform empirical studies but also theoretical analyses for simple geometries or numerical simulations for more complex cases. == Seismic site effects in sedimentary basins: the case of Caracas == In sedimentary basins, site effects also lead to the generation of surface waves at the basin edges. This phenomenon may significantly strengthen the amplification of the seismic motion. The aggravation of the amplification level when compared to the case of horizontal layering may be up to a factor of 5 or 10. It depends on the velocity contrast between the layers and the geometry of the basin. Such phenomena are named basin effects and we may consider the analogy with the vibrations
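The quarter-wavelength formula above is straightforward to evaluate. In the 1D case, the higher resonance modes follow as odd multiples of the fundamental, f_n = (2n+1)·V_S1/(4h); the layer parameters in the sketch below are illustrative only:

```python
def resonance_frequencies(vs, h, n_modes=3):
    """Resonance frequencies (Hz) of a soft horizontal layer of thickness
    h (m) and shear-wave velocity vs (m/s) over a stiff half-space:
    f_n = (2n+1) * vs / (4h), the quarter-wavelength condition."""
    return [(2 * n + 1) * vs / (4.0 * h) for n in range(n_modes)]

# Hypothetical soft layer: vs = 200 m/s, h = 25 m -> fundamental at 2 Hz
freqs = resonance_frequencies(200.0, 25.0)
```

For this layer the fundamental frequency is 2 Hz, with higher modes at 6 Hz and 10 Hz; a strong impedance contrast with the half-space concentrates the amplification at these frequencies.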
|
{"page_id": 43681280, "title": "Seismic site effects"}
|
then IV may identify the causal parameter of interest where OLS fails. Because there are multiple specific ways of using and deriving IV estimators even in just the linear case (IV, 2SLS, GMM), we save further discussion for the Estimation section below. == Graphical definition == IV techniques have been developed for a much broader class of non-linear models. General definitions of instrumental variables, using counterfactual and graphical formalism, were given by Pearl (2000; p. 248). The graphical definition requires that Z satisfy the following conditions: ( Z ⊥ ⊥ Y ) G X ¯ ( Z ⧸ ⊥ ⊥ X ) G {\displaystyle (Z\perp \!\!\!\perp Y)_{G_{\overline {X}}}\qquad (Z\not \!\!{\perp \!\!\!\perp }X)_{G}} where ⊥ ⊥ {\displaystyle \perp \!\!\!\perp } stands for d-separation and G X ¯ {\displaystyle G_{\overline {X}}} stands for the graph in which all arrows entering X are cut off. The counterfactual definition requires that Z satisfies ( Z ⊥ ⊥ Y x ) ( Z ⧸ ⊥ ⊥ X ) {\displaystyle (Z\perp \!\!\!\perp Y_{x})\qquad (Z\not \!\!{\perp \!\!\!\perp }X)} where Yx stands for the value that Y would attain had X been x and ⊥ ⊥ {\displaystyle \perp \!\!\!\perp } stands for independence. If there are additional covariates W then the above definitions are modified so that Z qualifies as an instrument if the given criteria hold conditional on W. The essence of Pearl's definition is: The equations of interest are "structural", not "regression". The error term U stands for all exogenous factors that affect Y when X is held constant. The instrument Z should be independent of U. The instrument Z should not affect Y when X is held constant (exclusion restriction). The instrument Z should not be independent of X. These conditions do not rely on specific functional form of the equations and are applicable therefore
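A minimal simulation of the just-identified linear case makes the point concrete (all coefficients below are invented for illustration): with an unobserved confounder u that drives both x and y, OLS is biased, while the simple IV (Wald) ratio recovers the structural coefficient because z is independent of u yet correlated with x:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                        # instrument: affects x, independent of u
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.6 * u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + 1.0 * u + rng.normal(size=n)    # true structural effect of x on y is 2.0

beta_ols = (x @ y) / (x @ x)   # biased: absorbs the x-u correlation (~2.3 here)
beta_iv = (z @ y) / (z @ x)    # just-identified IV (Wald) estimator (~2.0)
```

In population terms, beta_iv = cov(z, y)/cov(z, x) = 2.0 exactly, while beta_ols = 2.0 + cov(x, u)/var(x) = 2.3; the simulation reproduces both to within sampling noise.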
|
{"page_id": 1514405, "title": "Instrumental variables estimation"}
|
Psychology, 1, 296–303. Also republished as: Meehl, Paul E. (March 2000). "The dynamics of 'structured' personality tests, 1945". Journal of Clinical Psychology. 56 (3): 367–373. doi:10.1002/(sici)1097-4679(200003)56:3<367::aid-jclp12>3.0.co;2-u. PMID 10726672. Meehl, Paul E.; Hathaway, Starke R. (1946). "The K factor as a suppressor variable in the Minnesota Multiphasic Personality Inventory" (PDF). Journal of Applied Psychology. 30 (5): 525–564. doi:10.1037/h0053634. ISSN 1939-1854. PMID 20282179. MacCorquodale, Kenneth; Meehl, Paul E. (March 1948). "On a distinction between hypothetical constructs and intervening variables" (PDF). Psychological Review. 55 (2): 95–107. doi:10.1037/h0056029. PMID 18910284. Reprinted in Meehl 1991, pp. 249–264. Hathaway, Starke R.; Meehl, Paul E. (1951). An atlas for the clinical use of the MMPI. Minneapolis: University of Minnesota Press. ISBN 9780816600700. OCLC 166026. {{cite book}}: ISBN / Date incompatibility (help) A case history handbook for professional uses of the Minnesota Multiphasic Personality Inventory. Meehl, Paul E. (1954). Clinical versus statistical prediction: a theoretical analysis and a review of the evidence. Minneapolis: University of Minnesota Press. doi:10.1037/11281-000. ISBN 9780816600960. OCLC 374235. {{cite book}}: ISBN / Date incompatibility (help) Reprinted with new preface in 1996 by Jason Aronson (ISBN 978-0963878496) and in 2013 by Echo Point Books & Media (ISBN 978-0963878496). Cronbach, Lee J.; Meehl, Paul E. (1955). "Construct validity in psychological tests" (PDF). Psychological Bulletin. 52 (4): 281–302. doi:10.1037/h0040957. hdl:11299/184279. PMID 13245896. S2CID 5312179. Reprinted in Feigl, Herbert; Scriven, Michael, eds. (1956). The foundations of science and the concepts of psychology and psychoanalysis. Minnesota studies in the philosophy of science. Minneapolis: University of Minnesota Press. pp. 174–204. hdl:11299/184279. ISBN 9780816601226. OCLC 576505. 
{{cite book}}: ISBN / Date incompatibility (help) Also reprinted in Meehl 1973a, pp. 3–31 and Waller et al. 2006, pp. 9–30. Meehl, Paul E.; Rosen, Albert (May 1955). "Antecedent probability and the efficiency of psychometric signs, patterns, or cutting scores" (PDF). Psychological Bulletin. 52 (3):
|
{"page_id": 3125942, "title": "Paul E. Meehl"}
|
A*: R^{p×q} → S^n is given by A*(Y) = (1/2)(G^T Y H + H^T Y^T G). Solution: for Y ∈ R^{p×q}, we have ⟨Y, A(X)⟩ = tr(Y^T A(X)) = tr(Y^T G X H^T) = tr(H^T Y^T G X) = (1/2) tr((H^T Y^T G + G^T Y H) X) = ⟨X, (1/2)(H^T Y^T G + G^T Y H)⟩, where the second factor is symmetric, because tr(H^T Y^T G X) = tr(X H^T Y^T G) = tr(G^T Y H X^T) = tr(G^T Y H X) since X is symmetric. Semidefinite least squares problem: if f(X) = (1/2)‖X − B‖² where B ∈ S^n is a given matrix, Missing or unrecognized delimiter for \left Missing or unrecognized delimiter for \left Let Y := U × V, K := {0_U} × C ⊂ Y. Define G: X → Y by G(x) := (g(x), h(x)), x ∈ X. 7.1.1 COP problem. Then rewrite the OP into the more compact form (COP): min f(x) s.t. G(x) ∈ K. The Lagrangian function L: X × Y → R for COP is given by L(x, μ) =
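The adjoint identity derived above, ⟨Y, A(X)⟩ = ⟨X, A*(Y)⟩ for A(X) = G X H^T with X symmetric, can be checked numerically; the dimensions and random matrices below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 4, 3, 5

G = rng.normal(size=(p, n))
H = rng.normal(size=(q, n))

# A maps a symmetric X in S^n to G X H^T in R^{p x q}.
A = lambda X: G @ X @ H.T
# Candidate adjoint A* maps Y in R^{p x q} back into S^n.
A_star = lambda Y: 0.5 * (G.T @ Y @ H + H.T @ Y.T @ G)

# Random symmetric X and arbitrary Y.
M = rng.normal(size=(n, n))
X = 0.5 * (M + M.T)
Y = rng.normal(size=(p, q))

lhs = np.trace(Y.T @ A(X))       # <Y, A(X)>
rhs = np.trace(X @ A_star(Y))    # <X, A*(Y)>, using X = X^T
print(np.isclose(lhs, rhs))      # True
```

The check mirrors the trace manipulation in the derivation: cycling the trace moves X to the front, and symmetrizing the remaining factor costs nothing because X is symmetric.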
|
{"source": 1183, "title": "from dpo"}
|
fever of digital twins. Among different hand morphable models, MANO has been widely used in the vision and graphics community. However, MANO disregards textures and accessories, which largely limits its power to synthesize photorealistic hand data. In this paper, we extend MANO with Diverse Accessories and Rich Textures, namely DART. DART is composed of 50 daily 3D accessories which vary in appearance and shape, and 325 hand-crafted 2D texture maps covering different kinds of blemishes or make-ups. A Unity GUI is also provided to generate synthetic hand data with user-defined settings, e.g., pose, camera, background, lighting, textures, and accessories. Finally, we release DARTset, which contains large-scale (800K), high-fidelity synthetic hand images, paired with perfectly aligned 3D labels. Experiments demonstrate its superiority in diversity. As a complement to existing hand datasets, DARTset boosts generalization in both hand pose estimation and mesh recovery tasks. Raw ingredients (textures, accessories), Unity GUI, source code and DARTset are publicly available at dart2022.github.io. _Joseph DelPreto, Chao Liu, Yiyue Luo, Michael Foshey, Yunzhu Li, Antonio Torralba, Wojciech Matusik, Daniela Rus_ **tl;dr:** A multimodal dataset and recording framework use wearable sensors and synchronized ground-truth data to record humans performing kitchen tasks, with the goal of enabling insights into manipulation, task planning, and more capable robot assistants. , and the disk can quickly read or write large amounts of data. Often, accessing a page of information and reading it from a disk takes longer than examining all the information read. For this reason, in this chapter we shall look separately at the two principal components of the running time: > the number of disk accesses, and > the CPU (computing) time. We measure the number of disk accesses in terms of the number of pages of information that need to be read from or written to the disk. 
We note that disk-access time is not constant—it depends on the distance between the current track and the desired track and also
|
{"source": 5230, "title": "from dpo"}
|
problem was rather easy to deal with because we essentially swapped out the actual value passing on each layer for a lazy result set being passed around - because the code was clean. Sometimes you'll definitely need to massively re-engineer things though. whstl on March 1, 2023 | root | parent | next [–] "Whether you write clean or dirty code" I feel like there's a misunderstanding here. Casey is clearly not against writing non-capitalized clean code at all. His code in the end is "cleaner" than what he criticizes IMO. What he is criticizing here is capitalized (and possibly trademarked) "Clean Code", the book and philosophy spearheaded by Uncle Bob. bigbacaloa on March 1, 2023 | root | parent | prev | next [–] Maintainability and cleanliness are not the best virtues code can have. Far more important are that it work correctly and quickly. munk-a on March 1, 2023 | root | parent | next [–] I agree that correctness is pretty essential (as in - actually does what it says, though something that's mostly correct is almost always the bar... most software doesn't need to be entirely correct). But I am confused about "quickly" do you mean dev time or execution time? osigurdson on March 1, 2023 | root | parent | prev | next [–] >> Lack of concurrency/parallelism Definitely get the single-threaded house in order before attempting to speed up by running in parallel. yxhuvud on March 1, 2023 | root | parent | next [–] Depends. For example, if the slowness comes from sequentially emitting a lot of http requests, a lot of performance can be gotten from doing it concurrently. jbverschoor on March 1, 2023 | root | parent | prev | next [–] Well that's exactly the difference between systems programming and
|
{"source": 6594, "title": "from dpo"}
|
water sources, harming the natural ecology and thus indirectly affecting human populations through biomagnification and bioaccumulation. === Insect decline === Both the number of insects and the number of insect species have declined dramatically and continuously over past decades, causing much concern. Many causes are proposed to contribute to this decline; the most agreed upon are loss of habitat, intensification of farming practices, and insecticide usage. Domestic bees were declining some years ago, but the population and number of colonies have now risen both in the USA and worldwide. Wild species of bees are still declining. === Bird decline === Besides the effects of direct consumption of insecticides, populations of insectivorous birds decline due to the collapse of their prey populations. Spraying of especially wheat and corn in Europe is believed to have caused an 80 per cent decline in flying insects, which in turn has reduced local bird populations by one to two thirds. == Alternatives == Instead of using chemical insecticides to avoid crop damage caused by insects, there are many alternative options available now that can protect farmers from major economic losses. Some of them are: Breeding crops resistant, or at least less susceptible, to pest attacks. Releasing predators, parasitoids, or pathogens to control pest populations as a form of biological control. Chemical control, like releasing pheromones into the field to confuse the insects into not being able to find mates and reproduce. Integrated Pest Management: using multiple techniques in tandem to achieve optimal results. Push-pull technique: intercropping with a "push" crop that repels the pest, and planting a "pull" crop on the boundary that attracts and traps it. == Examples == Source: == See also == Fogger Index of pesticide articles Insecticide Resistance Action Committee Integrated pest management Pesticide application Sterile insect technique == References == == Further reading
|
{"page_id": 149463, "title": "Insecticide"}
|
arm of the elevator will also be reduced, which makes it more difficult to recover from a stalled condition. For helicopters in hover, the center of mass is always directly below the rotorhead. In forward flight, the center of mass will move forward to balance the negative pitch torque produced by applying cyclic control to propel the helicopter forward; consequently a cruising helicopter flies "nose-down" in level flight. === Astronomy === The center of mass plays an important role in astronomy and astrophysics, where it is commonly referred to as the barycenter. The barycenter is the point between two objects where they balance each other; it is the center of mass where two or more celestial bodies orbit each other. When a moon orbits a planet, or a planet orbits a star, both bodies are actually orbiting a point that lies away from the center of the primary (larger) body. For example, the Moon does not orbit the exact center of the Earth, but a point on a line between the center of the Earth and the Moon, approximately 1,710 km (1,062 miles) below the surface of the Earth, where their respective masses balance. This is the point about which the Earth and Moon orbit as they travel around the Sun. If the masses are more similar, e.g., Pluto and Charon, the barycenter will fall outside both bodies. === Rigging and safety === Knowing the location of the center of gravity when rigging is crucial, possibly resulting in severe injury or death if assumed incorrectly. A center of gravity that is at or above the lift point will most likely result in a tip-over incident. In general, the further the center of gravity below the pick point, the safer the lift. There are other things to consider, such as shifting
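The Earth–Moon figure quoted above follows from the standard two-body barycenter formula r1 = d · m2 / (m1 + m2); the masses and mean distance below are standard approximate values, not taken from the text:

```python
# Barycenter distance from the primary's center for two bodies
# separated by distance d: r1 = d * m2 / (m1 + m2).

def barycenter_offset(m1, m2, d):
    """Distance of the two-body barycenter from body 1's center."""
    return d * m2 / (m1 + m2)

M_EARTH = 5.972e24   # kg (approximate)
M_MOON = 7.342e22    # kg (approximate)
D_EM = 384_400e3     # mean Earth-Moon distance, m (approximate)
R_EARTH = 6_371e3    # mean Earth radius, m

r = barycenter_offset(M_EARTH, M_MOON, D_EM)
depth = R_EARTH - r
# Roughly 4670 km from Earth's center, i.e. about 1700 km below
# the surface, consistent with the figure in the text.
print(round(r / 1e3), round(depth / 1e3))
```

With more comparable masses (e.g. Pluto and Charon, mass ratio about 8:1) the same formula puts the barycenter outside the primary body.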
|
{"page_id": 173961, "title": "Center of mass"}
|
Embodied cognition represents a diverse group of theories which investigate how cognition is shaped by the bodily state and capacities of the organism. These embodied factors include the motor system, the perceptual system, bodily interactions with the environment (situatedness), and the assumptions about the world that shape the functional structure of the brain and body of the organism. Embodied cognition suggests that these elements are essential to a wide spectrum of cognitive functions, such as perception biases, memory recall, comprehension and high-level mental constructs (such as meaning attribution and categories) and performance on various cognitive tasks (reasoning or judgment). The embodied mind thesis challenges other theories, such as cognitivism, computationalism, and Cartesian dualism. It is closely related to the extended mind thesis, situated cognition, and enactivism. The modern version depends on understandings drawn from up-to-date research in psychology, linguistics, cognitive science, dynamical systems, artificial intelligence, robotics, animal cognition, plant cognition, and neurobiology. == Theory == Proponents of the embodied cognition thesis emphasize the active and significant role the body plays in the shaping of cognition and in the understanding of an agent's mind and cognitive capacities. In philosophy, embodied cognition holds that an agent's cognition, rather than being the product of mere (innate) abstract representations of the world, is strongly influenced by aspects of an agent's body beyond the brain itself. An embodied model of cognition opposes the disembodied Cartesian model, according to which all mental phenomena are non-physical and, therefore, not influenced by the body. With this opposition the embodiment thesis intends to reintroduce an agent's bodily experiences into any account of cognition. It is a rather broad thesis and encompasses both weak and strong variants of embodiment. 
In an attempt to reconcile cognitive science with human experience, the enactive approach to cognition defines "embodiment" as follows: By using
|
{"page_id": 33034640, "title": "Embodied cognition"}
|
works, which may gain temporary prominence but ultimately fail to stand up to scientific scrutiny. Harvey compared the book unfavourably to University undergraduate quality, and added "ecology is the most complex of sciences and Lomborg has never done a shred of work in the field". In a letter, Wilson said that the greatest regret he had about the book was "the time wasted by scientists correcting the misinformation created." In January 2002, a heading from Scientific American read, "Misleading Math about the Earth" and contained a set of essays written by scientists on the book. The article concluded that The Skeptical Environmentalist misrepresented both scientific evidence and opinion. The journal also refused Lomborg's request to publish a 32-page defense, offering instead a single page in the later May 2002 issue. The magazine later published his complete rebuttal on its website, along with the counter-rebuttals of John Rennie and John P. Holdren. Nature also published a harsh review of Lomborg's book, in which Stuart Pimm of the Center for Environmental Research and Conservation at Columbia University and Jeff Harvey of the Netherlands Institute of Ecology wrote: "Like bad term papers, Lomborg's text relies heavily on secondary sources. Out of around 2,000 references, about 5% come from news sources and 30% from web downloads — readily accessible, therefore, but frequently not peer reviewed." They continued that "the text employs the strategy of those who, for example, argue that gay men aren't dying of AIDS, that Jews weren't singled out by the Nazis for extermination, and so on." Peter Gleick was also highly critical, stating "there is nothing original or unique in Lomborg's book. Many of his criticisms have appeared in... previous works—and even in the work of environmental scientists themselves. What is new, perhaps, is the scope and variety of
|
{"page_id": 31492, "title": "The Skeptical Environmentalist"}
|
these systems remained in use well into the twentieth century. Harry Franklin Vickers was called the "Father of Industrial Hydraulics" by ASME. == Force and torque multiplication == A fundamental feature of hydraulic systems is the ability to apply force or torque multiplication in an easy way, independent of the distance between the input and output, without the need for mechanical gears or levers, either by altering the effective areas in two connected cylinders or the effective displacement (cc/rev) between a pump and motor. In normal cases, hydraulic ratios are combined with a mechanical force or torque ratio for optimum machine designs such as boom movements and track drives for an excavator. === Examples === ==== Two hydraulic cylinders interconnected ==== Cylinder C1 is one inch in radius, and cylinder C2 is ten inches in radius. If the force exerted on C1 is 10 lbf, the force exerted by C2 is 1000 lbf, because C2 has a hundred times the area (S = πr²) of C1. The downside is that you have to move C1 a hundred inches to move C2 one inch. The most common use for this is the classical hydraulic jack, where a pumping cylinder with a small diameter is connected to the lifting cylinder with a large diameter. ==== Pump and motor ==== If a hydraulic rotary pump with a displacement of 10 cc/rev is connected to a hydraulic rotary motor with 100 cc/rev, the shaft torque required to drive the pump is one-tenth of the torque available at the motor shaft, but the shaft speed (rev/min) of the motor is also only one-tenth of the pump shaft speed. This combination is actually the same type of force multiplication as the cylinder example, just that the linear force in this case is a
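The cylinder example reduces to a single area ratio (pressure is equal throughout the fluid, so force scales with piston area while travel scales inversely); a small sketch using the text's numbers:

```python
# Pascal's principle for two interconnected cylinders: equal pressure,
# so F_out / F_in = A_out / A_in = (r_out / r_in) ** 2 (pi cancels).

def output_force(input_force, r_in, r_out):
    """Force at the large piston for a given input force."""
    return input_force * (r_out / r_in) ** 2

def input_stroke(output_stroke, r_in, r_out):
    """Conservation of fluid volume: the small piston must travel
    the area ratio times farther than the large one."""
    return output_stroke * (r_out / r_in) ** 2

# The text's example: radii of 1 in and 10 in, 10 lbf applied to C1.
print(output_force(10, 1, 10))  # 1000.0 lbf at C2
print(input_stroke(1, 1, 10))   # 100.0 in of C1 travel per inch of C2
```

The pump–motor case is the same trade with displacement (cc/rev) taking the place of area: torque scales up by the displacement ratio while shaft speed scales down by it.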
|
{"page_id": 1372353, "title": "Hydraulic machinery"}
|
duplications (TSDs). TSDs in animals are typically 8–12 bp, slightly larger than the 4 bp TSDs found in plants. == Replication cycle == The retrozyme sequence is first transcribed by a polymerase in the host. The product is an oligomeric RNA sequence, a single transcript containing multiple copies of the retrozyme sequence. The hammerhead ribozyme motif then autocatalytically performs self-cleavage to separate the oligomeric transcript into several monomeric transcripts, each containing only one copy of the retrozyme sequence. This copy is an intermediate of the replication cycle, containing the opposite polarity of the original sequence with a 5'-hydroxyl end and a 2'-3'-cyclic phosphate end. A ligase protein in the host may then circularize this intermediate into a stable, circular RNA molecule. In plants, this ligase is a chloroplast tRNA ligase. Dependence on chloroplast tRNA ligase for circularization is also seen in the Avsunviroidae family of viroids. In animals, the ligase is an RtcB tRNA ligase. Reverse transcriptase activity is required from a different retrotransposon to generate a corresponding complementary DNA of the retrozyme RNA, and the polarity of this cDNA corresponds to the polarity of the original sequence. Plant and animal retrozymes rely on different retrotransposons to produce a cDNA copy of their RNA molecule. In plants, LTR retrotransposons of the Gypsy family are used. Although it is not clear which type of retrotransposons are relied on in animals, these could be classes such as LINEs or PLEs. After the DNA copy has been produced, the retrozyme sequence has the opportunity to re-insert itself into a genomic locus. == Relationships with mobile genetic elements == Retrozymes possess close similarities to types of mobile genetic elements (MGE), especially viroids, satellite RNAs (satRNAs), and Ribozyviria (a recently described realm of viruses). For one, the hammerhead ribozyme (HHR) motif is found in all these
|
{"page_id": 69827860, "title": "Retrozyme"}
|
The Gateway-to-Gateway Protocol (GGP) is an obsolete protocol defined for routing datagrams between Internet gateways. It was first outlined in 1982. The Gateway-to-Gateway Protocol was designed as an Internet Protocol (IP) datagram service similar to the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). However, it is classified as an Internet Layer protocol. GGP uses a minimum hop algorithm, in which it measures distance in router hops. A router is defined to be zero hops from directly connected networks, one hop from networks that are reachable through one other gateway. The protocol implements a distributed shortest-path methodology, and therefore requires global convergence of the routing tables after any change of link connectivity in the network. Each GGP message has a field header that identifies the message type and the format of the remaining fields. Because only core routers participated in GGP, and because core routers were controlled by a central authority, other routers could not interfere with the exchange. == See also == Distance-vector routing protocol Link-state routing protocol Router Information Protocol == References ==
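GGP's minimum-hop, distributed shortest-path behaviour can be sketched as repeated relaxation of hop counts until the routing tables converge (a Bellman-Ford style computation; the four-router topology below is invented for illustration):

```python
# Minimum-hop routing in the spirit of GGP: a router is 0 hops from
# itself / directly connected networks and learns longer paths from
# neighbours, relaxing until the tables reach global convergence.

def min_hops(links, source):
    """links: iterable of undirected (a, b) adjacencies.
    Returns a dict of hop counts from source to every node."""
    nodes = {n for link in links for n in link}
    dist = {n: float("inf") for n in nodes}
    dist[source] = 0
    changed = True
    while changed:                      # iterate to convergence
        changed = False
        for a, b in links:
            for u, v in ((a, b), (b, a)):
                if dist[u] + 1 < dist[v]:
                    dist[v] = dist[u] + 1
                    changed = True
    return dist

topology = [("A", "B"), ("B", "C"), ("C", "D"), ("A", "D")]
print(min_hops(topology, "A"))  # hop counts: A=0, B=1, C=2, D=1
```

Any link change restarts the relaxation, which is why the protocol requires global re-convergence of the routing tables after a topology change.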
|
{"page_id": 17441166, "title": "Gateway-to-Gateway Protocol"}
|
In electrical safety testing, portable appliance testing (PAT inspection or PAT testing) is a process by which electrical appliances are routinely checked for safety, commonly used in the United Kingdom, Ireland, New Zealand and Australia. In Australia and New Zealand it is commonly known as Test and Tag. The formal term for the process is In-service Inspection & Testing of Electrical Equipment. Testing involves a visual inspection of the equipment and verification that power cables are in good condition. Additionally, other tests may be done when required, such as a verification of earthing (grounding) continuity, a test of the soundness of insulation between the current-carrying parts, and a check for any exposed metal that could be touched. The formal limits for a pass/fail of these electrical tests vary somewhat depending on the category of equipment being tested. Other countries have similar procedures, for example, testing of equipment according to DGUV Vorschrift 3 in Germany. == Purpose == Health and safety regulations require that electrical appliances are safe and well maintained to prevent harm to workers. Many equipment manufacturers recommend testing at regular intervals to ensure continual safety, with the interval between tests varying based on both the type of appliance and the environment in which it is to be used. The European Low Voltage Directive governs the manufacture or importation of electrical appliances. Compliance with these standards has to be declared and indicated by the display of the CE mark on the product. The responsibility for testing lies with the manufacturer or the importer and is policed by Trading Standards. In Australia and New Zealand the standard used is AS/NZS3760. Testing equipment has been specifically developed for PAT inspections, based on the testing equipment used by manufacturers to ensure compliance with the British Standard Code of Practice and European product
|
{"page_id": 9090858, "title": "Portable appliance testing"}
|
stained with DCX have been shown to have a mature morphology, contrasting the idea that novel neurons are being generated within the adult brain. The role of new neurons in human adult brain function thus remains unclear. == Mechanism == === Adult neural stem cells === Neural stem cells (NSCs) are the self-renewing, multipotent cells that generate the main phenotypes of the nervous system. === Lineage reprogramming (trans-differentiation) === Emerging evidence suggests that neural microvascular pericytes, under instruction from resident glial cells, are reprogrammed into interneurons and enrich local neuronal microcircuits. This response is amplified by concomitant angiogenesis. == Model organisms of neurogenesis == === Planarian === Planarian are one of the earliest model organisms used to study regeneration with Pallas as the forefather of planarian studies. Planarian are a classical invertebrate model that in recent decades have been used to examine neurogenesis. The central nervous system of a planarian is simple, though fully formed with two lobes located in the head and two ventral nerve cords. This model reproduces asexually producing a complete and fully functioning nervous system after division allowing for consistent examination of neurogenesis. === Axolotl === The axolotl is less commonly used than other vertebrates, but is still a classical model for examining regeneration and neurogenesis. Though the axolotl has made its place in biomedical research in terms of limb regeneration, the model organism has displayed a robust ability to generate new neurons following damage. Axolotls have contributed as a bridge organism between invertebrates and mammals, as the species has the regenerative capacity to undergo complete neurogenesis forming a wide range of neuronal populations not limited to a small niche, yet the complexity and architecture is complex and analogous in many ways to human neural development. 
=== Zebrafish === Zebrafish have long been a classical developmental
|
{"page_id": 740746, "title": "Adult neurogenesis"}
|
P(Spam ∣ w_0 ∧ ⋯ ∧ w_{N−1}) = P(Spam) ∏_{n=0}^{N−1} P(w_n ∣ Spam) / Σ_Spam [P(Spam) ∏_{n=0}^{N−1} P(w_n ∣ Spam)]. The denominator appears to be a normalization constant. It is not necessary to compute it to decide if we are dealing with spam. For instance, an easy trick is to compute the ratio: P([Spam = true] ∣ w_0 ∧ ⋯ ∧ w_{N−1}) / P([Spam = false] ∣ w_0 ∧ ⋯ ∧ w_{N−1}) = P([Spam = true]) / P([Spam = false]) × ∏_{n=0}^{N−1} [P(w_n ∣ [Spam = true]) / P(w_n ∣ [Spam = false])]. This computation is faster and easier because it requires only 2N products. ==== Bayesian program ==== The Bayesian spam filter program is completely defined by: Pr { Ds { Sp(π) { Va: Spam, W_0, W_1 … W_{N−1}; Dc: P(Spam ∧ W_0 ∧ … ∧ W_n ∧ … ∧ W_{N−1}) = P(Spam) ∏_{n=0}^{N−1} P(W_n ∣ Spam); Fo: P(Spam): P([Spam = false]) = 0.25 P
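The ratio trick above can be sketched directly: compare posterior odds without ever computing the normalizing denominator, using only 2N products. The word-likelihood table below is invented for illustration; the prior follows the text's P([Spam = false]) = 0.25:

```python
# Posterior odds for a naive Bayes spam filter:
#   P(spam | words) / P(ham | words)
#     = [P(spam) / P(ham)] * prod_n P(w_n | spam) / P(w_n | ham)
# The denominator cancels, so only 2N products are needed.

# Hypothetical per-word likelihoods (not from the text).
likelihood = {
    "viagra": {"spam": 0.30, "ham": 0.001},
    "meeting": {"spam": 0.01, "ham": 0.10},
    "free": {"spam": 0.20, "ham": 0.02},
}

def posterior_odds(words, p_spam=0.75, p_ham=0.25):
    """Odds > 1 means the message is more likely spam than ham."""
    odds = p_spam / p_ham
    for w in words:
        odds *= likelihood[w]["spam"] / likelihood[w]["ham"]
    return odds

print(posterior_odds(["viagra", "free"]) > 1)   # True: classified as spam
print(posterior_odds(["meeting"]) > 1)          # False: classified as ham
```

In practice the product is usually accumulated as a sum of log-ratios to avoid floating-point underflow on long messages.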
|
{"page_id": 40888645, "title": "Bayesian programming"}
|
info needed and create a new Google Calendar entry.  1203-101-41026 Anastasiya Protasov* . Fostering Student Engagement by Incorporating Collaborative Programming in Math Courses. Within the mathematics discipline, teaching methods are broadly classified into two categories: traditional and interactive approaches. The traditional approach primarily involves lectures and demonstrations of problem-solving techniques, focusing on direct instruction and structured content delivery, and therefore doesn’t encourage individual participation. In contrast, the interactive approach emphasizes student engagement through collaborative activities and hands-on learning experiences. By following the interactive approach and integrating real-time coding into problem-solving, students have the opportunity to create visual representations of mathematical phenomena, enhancing their understanding and providing immediate feedback on their solutions. However, many students encounter challenges when integrating such methods, particularly due to a lack of programming skills and the novelty of mathematical theory. Based on this, I implemented and tested programming group work in my Linear Algebra class, experimenting with different group sizes to understand their impact on learning outcomes. By assessing students' confidence and comfort levels before assignments and tracking any subsequent improvements, I determined how this tailored teaching approach is beneficial. Ultimately, I discovered that paired learning is the most effective method for students to complete programming exercises. It successfully balances positive interdependence with student autonomy, allowing students to make significant progress and achieve the best learning outcomes. It's interesting to see that research on paired programming in professional settings often focuses on the dynamics between junior and senior roles. In the context of
|
{"source": 3883, "title": "from dpo"}
|
ed. J. Feigenbaum. Springer-Verlag, Berlin, 156–171. Knudsen, L.R. (1995). “Truncated and higher order differentials.” Fast Software Encryption, FSE’94, Lecture Notes in Computer Science, vol. 1008, ed. B. Preneel. Springer-Verlag, Berlin, 196–211. Knudsen, L.R. and T.A. Berson (1996). “Truncated differentials of SAFER.” Fast Software Encryption, FSE’96, Lecture Notes in Computer Science, vol. 1039, ed. D. Gollmann. Springer-Verlag, Berlin, 15–26. # TRUST MODELS # INTRODUCTION: Public-key infrastructure (PKI) manages trust in electronic transactions. The principal elements used for maintaining that trust are the contents of the certificates and the security safeguards in effect in the environments of the various parties involved. These two elements are derived by a risk management procedure from the business purpose of the exchanges, as captured in the certificate policy. Before discussing trust management in PKI, a definition of the word “trust” is required. Reference defines trust in the following way: “Generally, an entity can be said to “trust” a second entity when it (the first entity) makes the assumption that the second entity will behave exactly as the first entity expects.” The first entity makes this assumption about a relevant area of the second entity’s behaviour, and so the trust between them is limited to that specific area. In PKI the behaviour of interest is related to the distribution and use of public keys for electronic commerce. Different types of trust relationship are capable of conveying different types of assurance between the parties. A trust relationship based upon public-key cryptography technology is intended to ensure the authenticity of the second entity’s identifying descriptor and the enforceability of commitments undertaken by both entities. TRUST RELATIONSHIPS: Trust is a well-established concept, and there are many examples of conventional trust relationships, including those between a bank and its account
|
{"source": 5836, "title": "from dpo"}
|
Cell physiology is the biological study of the activities that take place in a cell to keep it alive. The term physiology refers to normal functions in a living organism. Animal cells, plant cells and microorganism cells show similarities in their functions even though they vary in structure. == General characteristics == There are two types of cells: prokaryotes and eukaryotes. Prokaryotes were the first of the two to develop and do not have a self-contained nucleus. Their mechanisms are simpler than later-evolved eukaryotes, which contain a nucleus that envelops the cell's DNA and some organelles. === Prokaryotes === Prokaryotes have DNA located in an area called the nucleoid, which is not separated from other parts of the cell by a membrane. There are two domains of prokaryotes: bacteria and archaea. Prokaryotes have fewer organelles than eukaryotes. Both have plasma membranes and ribosomes (structures that synthesize proteins and float free in cytoplasm). Two unique characteristics of prokaryotes are fimbriae (finger-like projections on the surface of a cell) and flagella (threadlike structures that aid movement). === Eukaryotes === Eukaryotes have a nucleus where DNA is contained. They are usually larger than prokaryotes and contain many more organelles. The nucleus, the feature of a eukaryote that distinguishes it from a prokaryote, contains a nuclear envelope, nucleolus and chromatin. In cytoplasm, endoplasmic reticulum (ER) synthesizes membranes and performs other metabolic activities. There are two types, rough ER (containing ribosomes) and smooth ER (lacking ribosomes). The Golgi apparatus consists of multiple membranous sacs, responsible for manufacturing and shipping out materials such as proteins. Lysosomes are structures that use enzymes to break down substances through phagocytosis, a process that comprises endocytosis and exocytosis. In the mitochondria, metabolic processes such as cellular respiration occur. 
The cytoskeleton is made of fibers that support the structure of the
|
{"page_id": 6227883, "title": "Cell physiology"}
|
(produced in the decay chain of 291Lv, 287Fl, and 283Cn) may be advantageous for future experiments. The experiments relied on the expectation that rutherfordium would be a 6d element in group 4 and should therefore form a volatile molecular tetrachloride, that would be tetrahedral in shape. Rutherfordium(IV) chloride is more volatile than its lighter homologue hafnium(IV) chloride (HfCl4) because its bonds are more covalent. A series of experiments confirmed that rutherfordium behaves as a typical member of group 4, forming a tetravalent chloride (RfCl4) and bromide (RfBr4) as well as an oxychloride (RfOCl2). A decreased volatility was observed for RfCl4 when potassium chloride is provided as the solid phase instead of gas, highly indicative of the formation of nonvolatile K2RfCl6 mixed salt. === Aqueous phase === Rutherfordium is expected to have the electron configuration [Rn]5f14 6d2 7s2 and therefore behave as the heavier homologue of hafnium in group 4 of the periodic table. It should therefore readily form a hydrated Rf4+ ion in strong acid solution and should readily form complexes in hydrochloric acid, hydrobromic or hydrofluoric acid solutions. The most conclusive aqueous chemistry studies of rutherfordium have been performed by the Japanese team at Japan Atomic Energy Research Institute using the isotope 261mRf. Extraction experiments from hydrochloric acid solutions using isotopes of rutherfordium, hafnium, zirconium, as well as the pseudo-group 4 element thorium have proved a non-actinide behavior for rutherfordium. A comparison with its lighter homologues placed rutherfordium firmly in group 4 and indicated the formation of a hexachlororutherfordate complex in chloride solutions, in a manner similar to hafnium and zirconium. 261mRf4+ + 6 Cl− → [261mRfCl6]2− Very similar results were observed in hydrofluoric acid solutions. 
Differences in the extraction curves were interpreted as a weaker affinity for fluoride ion and the formation of the hexafluororutherfordate ion, whereas hafnium
|
{"page_id": 25927, "title": "Rutherfordium"}
|
The Bunsen reaction is a chemical reaction in which water, sulfur dioxide, and iodine react to form sulfuric acid and hydrogen iodide: 2H2O + SO2 + I2 → H2SO4 + 2HI This reaction is the first step in the sulfur-iodine cycle for producing hydrogen. The products separate into two aqueous layers, with the sulfuric acid floating on top, and a mixture of hydrogen iodide and unreacted iodine on the bottom. While the two layers are generally considered immiscible, small amounts of sulfuric acid may still remain in the hydrogen iodide layer and vice versa. This can lead to unwanted side reactions, one of which precipitates out sulfur, which can obstruct the reaction vessel. The reaction is named after Robert Bunsen. He did not discover the reaction, but he described it in detail in 1853. A similar reaction is the basis for Karl Fischer titration. Note that at sufficiently high temperatures, concentrated H2SO4 may react with HI to give I2, SO2, and H2O, reversing the reaction. Many chemical processes are reversible reactions, such as ammonia production from N2 and H2, and removing the desired product shifts the equilibrium toward the products, in accordance with Le Chatelier's principle. == References ==
|
{"page_id": 12241503, "title": "Bunsen reaction"}
|
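As a quick check of the stoichiometry stated above, the Bunsen reaction 2H2O + SO2 + I2 → H2SO4 + 2HI must conserve every element. The following sketch (not part of the source article; the helper `atoms` and the composition dictionaries are illustrative) counts atoms on each side:

```python
# Sketch: verify the atom balance of 2 H2O + SO2 + I2 -> H2SO4 + 2 HI
# by summing element counts over (coefficient, composition) pairs.
from collections import Counter

def atoms(species):
    """Total element counts for a list of (coefficient, {element: count})."""
    total = Counter()
    for coeff, comp in species:
        for elem, n in comp.items():
            total[elem] += coeff * n
    return total

reactants = [(2, {"H": 2, "O": 1}),          # 2 H2O
             (1, {"S": 1, "O": 2}),          # SO2
             (1, {"I": 2})]                  # I2
products  = [(1, {"H": 2, "S": 1, "O": 4}),  # H2SO4
             (2, {"H": 1, "I": 1})]          # 2 HI

# Both sides carry 4 H, 4 O, 1 S, and 2 I, so the equation is balanced.
assert atoms(reactants) == atoms(products)
```

The same `atoms` helper applied to the reverse reaction (H2SO4 + 2HI → I2 + SO2 + 2H2O) gives identical totals, consistent with the reversibility noted above.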
longitudinal member called a keel. In fiberglass or composite hulls, the structure may resemble wooden or steel vessels to some extent, or be of a monocoque arrangement. In many cases, composite hulls are built by sandwiching thin fiber-reinforced skins over a lightweight but reasonably rigid core of foam, balsa wood, impregnated paper honeycomb, or other material. Perhaps the earliest proper hulls were built by the Ancient Egyptians, who by 3000 BC knew how to assemble wooden planks into a hull. == Hull shapes == Hulls come in many varieties and can have a composite shape (e.g., a fine entry forward and an inverted bell shape aft), but are grouped primarily as follows: Chined and hard-chined. Examples are the flat-bottom (chined), v-bottom, and multi-chine hull (several gentler hard chines, still not smooth). These types have at least one pronounced knuckle throughout all or most of their length. Moulded, round bilged or soft-chined. These hull shapes all have smooth curves. Examples are the round bilge, semi-round bilge, and s-bottom hull. === Planing and displacement hulls === Displacement hull: here the hull is supported exclusively or predominantly by buoyancy. Vessels that have this type of hull travel through the water at a limited rate that is defined by the waterline length, except for especially narrow hulls such as sailing multihulls that are less limited this way. Planing hull: here, the planing hull form is configured to develop positive dynamic pressure so that its draft decreases with increasing speed. The dynamic lift reduces the wetted surface and therefore also the drag. Such hulls are sometimes flat-bottomed, sometimes V-bottomed and, more rarely, round-bilged. The most common form is to have at least one chine, which makes for more efficient planing and can throw spray down. Planing hulls are more efficient at higher speeds, although they still require
|
{"page_id": 13755, "title": "Hull (watercraft)"}
|
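The statement that a displacement hull's speed is limited by its waterline length is often quantified with the traditional "hull speed" rule of thumb, v ≈ 1.34 × √LWL (speed in knots, waterline length in feet). This figure is a common naval-architecture approximation rather than something stated in the text above, so the sketch below should be read as illustrative only:

```python
import math

# Rule-of-thumb (not from the source text): a displacement hull's practical
# top speed scales with the square root of its waterline length,
# commonly approximated as v_knots ~= 1.34 * sqrt(LWL_feet).
def hull_speed_knots(waterline_length_ft: float) -> float:
    return 1.34 * math.sqrt(waterline_length_ft)

# A 25 ft waterline gives roughly 6.7 knots; a 100 ft waterline roughly 13.4.
print(round(hull_speed_knots(25.0), 1), round(hull_speed_knots(100.0), 1))
```

The square-root scaling reflects wave-making resistance: the hull's bow wave lengthens with speed, and a displacement hull cannot easily climb over its own wave system, which is also why planing hulls (which generate dynamic lift instead) escape this limit.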
Antennapedia (abbreviated Antp) is a Hox gene first discovered in Drosophila which controls the formation of legs during development. Loss-of-function mutations in the regulatory region of this gene result in the development of the second leg pair into ectopic antennae. By contrast, gain-of-function alleles convert antennae into ectopic legs. This is just one illustration of the tendency of organisms to exhibit variations on a theme: modulated repetition. Legs and antennae are related to one another as much as molars are to incisors, fingers are to toes, and arms are to legs. Antp also refers to a gene complex (ANT-C) in Drosophila ending with the Antp gene. It is responsible for formation and differentiation of the thoracic and head segments of the fly's body. == Origin of Antennapedia-class homeobox gene == The origin of the ancestor homeobox gene is an important aspect of the evolution of the Antp-class Hox genes. Early evolution of the Antp-class genes may have predated the divergence of cnidarians. However, the role that Antp plays in the spatial body development of cnidarians remains unclear. A widely accepted theory is that the ancestor Hox cluster containing three genes arose in the early metazoan era. It is suggested that Antennapedia arose from Evx, a non-Hox family of genes. This duplication event of Evx into the Antp-class probably occurred prior to cnidarian divergence, as there are cnidarians with Evx and without Hox-class genes and vice versa. == Antennapedia in arachnids == Recent studies have observed that down-regulation of the Antp gene in Parasteatoda tepidariorum leads to the development of a pair of ectopic legs, resulting in 10-legged mutant spiders. Drosophila Antp is thought to play an important role in ectopic leg or antenna placement, but not in abdominal leg suppression. However, recent research supported that leg
|
{"page_id": 2925248, "title": "Antennapedia"}
|
Transgranular fracture is a type of fracture that occurs through the crystal grains of a material. In contrast to intergranular fractures, which occur when a fracture follows the grain boundaries, this type of fracture traverses the material's microstructure directly through individual grains. This type of fracture typically results from a combination of high stresses and material defects, such as voids or inclusions, that create a path for crack propagation through the grains. A broad range of ductile or brittle materials, including metals, ceramics, and polymers, can experience transgranular fracture. When examined under scanning electron microscopy, this type of fracture reveals cleavage steps, river patterns, feather markings, dimples, and tongues. The fracture may change direction somewhat when entering a new grain in order to follow the new lattice orientation of that grain, but this is a less severe direction change than would be required to follow the grain boundary. This results in a fairly smooth-looking fracture with fewer sharp edges than one that follows the grain boundaries. This can be visualized as a jigsaw puzzle cut from a single sheet of wood with the wood grain showing. A transgranular fracture follows the grain in the wood, not the jigsaw edges of the puzzle pieces. This is in contrast to an intergranular fracture which, in this analogy, would follow the jigsaw edges, not the wood grain. == Mechanism of transgranular fracture == The mechanism of transgranular fracture may vary depending on the material and surrounding conditions under which the fracture occurs. However, some general steps are typically involved in the transgranular fracture process: Crack initiation: The first step in transgranular fracture is the initiation of a crack within the material. This can be caused by a range of factors, such as manufacturing defects, surface defects, or exposure to high-stress conditions. Crack
|
{"page_id": 3219248, "title": "Transgranular fracture"}
|