| source | text |
|---|---|
https://en.wikipedia.org/wiki/Richard%20B.%20Hoover | Richard Brice Hoover (born January 3, 1943) is a physicist who has authored 33 volumes and 250 papers on astrobiology, extremophiles, diatoms, solar physics, X-ray/EUV optics and meteorites. He holds 11 U.S. patents and was the 1992 NASA Inventor of the Year. He was employed at NASA's Marshall Space Flight Center in the United States from 1966 to 2012, where he worked on astrophysics and astrobiology. He established the Astrobiology Group there in 1997 and headed its astrobiology research until his retirement in late 2011. He conducted research on microbial extremophiles in the Antarctic, microfossils, and chemical biomarkers in Precambrian rocks and in carbonaceous chondrite meteorites. Hoover has on multiple occasions published claims of discovering fossilized microorganisms in select meteorites.
Early life
Hoover was born in Sikeston, Missouri on January 3, 1943. He obtained his B.Sc. degree with majors in physics, mathematics and French in 1964 from Henderson State University in Arkadelphia, Arkansas. He did graduate work in theoretical mathematics at Duke University on an NSF fellowship translating the Nicolas Bourbaki French volume on multi-dimensional vector spaces, and was completing his thesis on X-ray diffraction in the Physics Department of the University of Arkansas when he left the University in 1966 to join NASA.
Career
Working at the NASA Marshall Space Flight Center in Huntsville, Alabama, beginning in 1966, Hoover has taken part in astrobiological research carried out there since 1997. In 1998, he participated in two of the astrobiology proposals funded by the newly formed NASA Virtual Astrobiology Institute. He was co-investigator with David McKay (PI) of the NASA Johnson Space Center on the study of biomarkers and microfossils in meteorites, astromaterials and ancient terrestrial rocks, and collaborated with Kenneth Nealson (PI) from NASA Jet Propulsion Laboratory on the investigation of microbial extremophiles from some of the |
https://en.wikipedia.org/wiki/Wildlife%20disease | Disease is a decrease in the performance of an individual's normal functions and can be caused by many factors, not only infectious agents. A wildlife disease is a disease in which at least one of the hosts is a wildlife species. In many cases, wildlife hosts can act as a reservoir of diseases that spill over into domestic animals, people and other species. There are many relationships that must be considered when discussing wildlife disease, which are represented through the Epidemiological Triad Model. This model describes the relationship between a pathogen, a host and the environment. There are many routes by which a pathogen can infect a susceptible host, and once a host becomes infected it has the potential to infect other hosts. Environmental factors, meanwhile, affect pathogen persistence and spread through host movement and interactions with other species. Lyme disease illustrates the triad: changes in the environment have changed the distribution of Lyme disease and of its vector, the Ixodes tick.
Wildlife Disease Management
The challenges associated with wildlife disease management include environmental factors, the free movement of wildlife, and the effects of anthropogenic factors. Anthropogenic factors have driven significant changes in ecosystems and species distribution globally. These changes can be caused by the introduction of invasive species, habitat loss and fragmentation, and overall changes in the function of ecosystems. Because of these significant human-driven changes in the environment, there is a need for wildlife management, which manages the interactions between wildlife, domestic animals, and humans.
Wildlife species move freely within and between areas, coming into contact with domestic animals and humans and even invading new areas. These interactions can allow for disease transmission and disease spillover into new populations. Disease spillover can beco |
https://en.wikipedia.org/wiki/Fagopyrin | Fagopyrin is a term used for several closely related naturally occurring substances in the buckwheat plant.
Their chemical structure contains a naphthodianthrone skeleton similar to that of hypericin.
Fagopyrin is located almost exclusively in the cotyledons of the buckwheat herb. When ingested, fagopyrins cause sensitivity to light. |
https://en.wikipedia.org/wiki/Somite | The somites (outdated term: primitive segments) are a set of bilaterally paired blocks of paraxial mesoderm that form in the embryonic stage of somitogenesis, along the head-to-tail axis in segmented animals. In vertebrates, somites subdivide into the dermatomes, myotomes, sclerotomes and syndetomes that give rise to the vertebrae of the vertebral column, rib cage, part of the occipital bone, skeletal muscle, cartilage, tendons, and skin (of the back).
The word somite is sometimes also used in place of the word metamere. In this definition, the somite is a homologously-paired structure in an animal body plan, such as is visible in annelids and arthropods.
Development
The mesoderm forms at the same time as the other two germ layers, the ectoderm and endoderm. The mesoderm at either side of the neural tube is called paraxial mesoderm. It is distinct from the mesoderm underneath the neural tube, which is called the chordamesoderm that becomes the notochord. The paraxial mesoderm is initially called the "segmental plate" in the chick embryo or the "unsegmented mesoderm" in other vertebrates. As the primitive streak regresses and neural folds gather (to eventually become the neural tube), the paraxial mesoderm separates into blocks called somites.
Formation
The pre-somitic mesoderm commits to the somitic fate before mesoderm becomes capable of forming somites. The cells within each somite are specified based on their location within the somite. Additionally, they retain the ability to become any kind of somite-derived structure until relatively late in the process of somitogenesis.
The development of the somites depends on a clock mechanism as described by the clock and wavefront model. In one description of the model, oscillating Notch and Wnt signals provide the clock. The wave is a gradient of the fibroblast growth factor protein that is rostral to caudal (nose to tail gradient). Somites form one after the other down the length of the embryo from the head to t |
https://en.wikipedia.org/wiki/Basis%20pursuit%20denoising | In applied mathematics and statistics, basis pursuit denoising (BPDN) refers to a mathematical optimization problem of the form
$\min_x \tfrac{1}{2}\|y - Ax\|_2^2 + \lambda\|x\|_1,$
where $\lambda$ is a parameter that controls the trade-off between sparsity and reconstruction fidelity, $x$ is an $N \times 1$ solution vector, $y$ is an $M \times 1$ vector of observations, $A$ is an $M \times N$ transform matrix, and $M < N$. This is an instance of convex optimization and also of quadratic programming.
Some authors refer to basis pursuit denoising as the following closely related problem:
$\min_x \|x\|_1 \quad \text{subject to} \quad \|y - Ax\|_2 \le \delta,$
which, for any given $\delta$, is equivalent to the unconstrained formulation for some (usually unknown a priori) value of $\lambda$. The two problems are quite similar. In practice, the unconstrained formulation, for which most specialized and efficient computational algorithms are developed, is usually preferred.
Either type of basis pursuit denoising solves a regularization problem with a trade-off between having a small residual (making $y$ close to $Ax$ in terms of the squared error) and making $x$ simple in the $\ell_1$-norm sense. It can be thought of as a mathematical statement of Occam's razor, finding the simplest possible explanation $x$ capable of accounting for the observations $y$.
Exact solutions to basis pursuit denoising are often the best computationally tractable approximation of an underdetermined system of equations. Basis pursuit denoising has potential applications in statistics (see the LASSO method of regularization), image compression and compressed sensing.
When $\delta = 0$, this problem becomes basis pursuit.
Basis pursuit denoising was introduced by Chen and Donoho in 1994, in the field of signal processing. In statistics, it is well known under the name LASSO, after being introduced by Tibshirani in 1996.
Solving basis pursuit denoising
The problem is a convex quadratic problem, so it can be solved by many general solvers, such as interior-point methods. For very large problems, many specialized methods that are faster than interior-point methods have been proposed.
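To make the unconstrained formulation concrete, here is a minimal sketch of one simple specialized method, the iterative shrinkage-thresholding algorithm (ISTA), a basic proximal-gradient scheme; the matrix sizes, data and parameter value below are invented for illustration, not taken from the article.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||y - A x||_2^2 + lam * ||x||_1 by iterative
    shrinkage-thresholding (a basic proximal-gradient method)."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - t * (A.T @ (A @ x - y))        # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)  # soft-thresholding
    return x

# Toy underdetermined system (M = 20 < N = 50): recover a sparse x from y = A x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 41]] = [1.5, -2.0, 1.0]
x_hat = ista(A, A @ x_true, lam=0.1)
print(np.nonzero(np.abs(x_hat) > 0.1)[0])     # recovered support (expected: 3, 17, 41)
```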
Several popular |
https://en.wikipedia.org/wiki/GPS-aided%20GEO%20augmented%20navigation | The GPS-aided GEO augmented navigation (GAGAN) is an implementation of a regional satellite-based augmentation system (SBAS) by the Government of India. It is a system to improve the accuracy of a GNSS receiver by providing reference signals. The Airports Authority of India (AAI)'s efforts towards implementation of operational SBAS can be viewed as the first step towards introduction of modern communication, navigation and surveillance / air traffic management system over the Indian airspace.
The project has established fifteen Indian reference stations, three Indian navigation land uplink stations, three Indian mission control centres, and installed all associated software and communication links. It will help pilots navigate in Indian airspace with improved accuracy, which will be helpful for landing aircraft in marginal weather and on difficult approaches such as those at Mangalore International and Kushok Bakula Rimpochee airports.
Implementation
The project was created in three phases through 2008 by the Airports Authority of India with the help of the Indian Space Research Organisation's (ISRO) technology and space support. The goal is to provide a navigation system for all phases of flight over Indian airspace and the adjoining area. It is applicable to safety-to-life operations, and meets the performance requirements of international civil aviation regulatory bodies.
The space component became available after the successful launch of the GAGAN payload on the GSAT-8 communication satellite. This payload was also part of the GSAT-4 satellite, which was lost when the Geosynchronous Satellite Launch Vehicle (GSLV) failed during launch in April 2010. A final system acceptance test was conducted during June 2012, followed by system certification during July 2013.
All aircraft being registered in India after 1 July 2021 are mandated to be outfitted with GAGAN equipment.
Technology
To begin implementing a satellite-based augmentati |
https://en.wikipedia.org/wiki/Ranked%20society | A ranked society in anthropology is one that ranks individuals in terms of their genealogical distance from the chief; another term for a ranked society is a chiefdom. Closer relatives of the chief have higher rank or social status than more distant ones. Societies that follow this kind of structure associate rank with power, whereas other societies associate wealth with power. When individuals and groups rank about equally, competition for positions of leadership may occur. In some cases rank is assigned to entire villages rather than to individuals or families. The idea of a ranked society was criticized by Max Weber and Karl Marx. Ranks in a ranked society are the different levels, platforms, or social classes that determine a person's influence on political matters such as votes and decision making. A person's ranking also gives them societal power (power within their civilisation).
See also
Rankism
Social class |
https://en.wikipedia.org/wiki/Rigid%20cohomology | In mathematics, rigid cohomology is a p-adic cohomology theory introduced by Pierre Berthelot (1986). It extends crystalline cohomology to schemes that need not be proper or smooth, and extends Monsky–Washnitzer cohomology to non-affine varieties. For a scheme X of finite type over a perfect field k, there are rigid cohomology groups $H^i_{\mathrm{rig}}(X/K)$ which are finite-dimensional vector spaces over the field K of fractions of the ring of Witt vectors of k. More generally one can define rigid cohomology with compact supports, or with support on a closed subscheme, or with coefficients in an overconvergent isocrystal.
If X is smooth and proper over k the rigid cohomology groups are the same as the crystalline cohomology groups.
The name "rigid cohomology" comes from its relation to rigid analytic spaces.
Kedlaya used rigid cohomology to give a new proof of the Weil conjectures. |
https://en.wikipedia.org/wiki/Snowflake%20schema | In computing, a snowflake schema is a logical arrangement of tables in a multidimensional database such that the entity relationship diagram resembles a snowflake shape. The snowflake schema is represented by centralized fact tables which are connected to multiple dimensions. "Snowflaking" is a method of normalizing the dimension tables in a star schema. When it is completely normalized along all the dimension tables, the resultant structure resembles a snowflake with the fact table in the middle. The principle behind snowflaking is normalization of the dimension tables by removing low cardinality attributes and forming separate tables.
The snowflake schema is similar to the star schema. However, in the snowflake schema, dimensions are normalized into multiple related tables, whereas the star schema's dimensions are denormalized with each dimension represented by a single table. A complex snowflake shape emerges when the dimensions of a snowflake schema are elaborate, having multiple levels of relationships, and the child tables have multiple parent tables ("forks in the road").
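To make the normalization concrete, below is a minimal sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration. The low-cardinality category attribute is split out of the product dimension into its own table, so a query through the snowflaked dimension needs one extra join per normalized level.

```python
import sqlite3

# A snowflaked "product" dimension: the low-cardinality category attribute is
# normalized out of the dimension table into a separate table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_category (category_id INTEGER PRIMARY KEY, category_name TEXT);
CREATE TABLE dim_product  (product_id  INTEGER PRIMARY KEY, product_name TEXT,
                           category_id INTEGER REFERENCES dim_category(category_id));
CREATE TABLE fact_sales   (product_id  INTEGER REFERENCES dim_product(product_id),
                           amount      REAL);
INSERT INTO dim_category VALUES (1, 'Beverages');
INSERT INTO dim_product  VALUES (10, 'Coffee', 1), (11, 'Tea', 1);
INSERT INTO fact_sales   VALUES (10, 4.0), (11, 2.5), (10, 3.0);
""")
# Queries against a snowflake schema need one extra join per normalized level.
for row in con.execute("""
    SELECT c.category_name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product  p ON p.product_id  = f.product_id
    JOIN dim_category c ON c.category_id = p.category_id
    GROUP BY c.category_name
"""):
    print(row)   # ('Beverages', 9.5)
```

In a star schema, category_name would instead be stored directly on dim_product, trading redundancy for one fewer join.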
Common uses
Star and snowflake schemas are most commonly found in dimensional data warehouses and data marts where speed of data retrieval is more important than the efficiency of data manipulations. As such, the tables in these schemas are not normalized much, and are frequently designed at a level of normalization short of third normal form.
Data normalization and storage
Normalization splits up data to avoid redundancy (duplication) by moving commonly repeating groups of data into new tables. Normalization therefore tends to increase the number of tables that need to be joined in order to perform a given query, but reduces the space required to hold the data and the number of places where it needs to be updated if the data changes.
From a space storage point of view, dimensional tables are typically small compared to fact tables. This often negates the potential storage-space benef |
https://en.wikipedia.org/wiki/Stjepan%20Mohorovi%C4%8Di%C4%87 | Stjepan Mohorovičić (August 20, 1890 – February 13, 1980) was a Croatian physicist, geophysicist and meteorologist.
Biography
Mohorovičić was born in the town of Bakar. His father was the world-famous geophysicist Andrija Mohorovičić. He studied mathematics and physics at the University of Zagreb, where his professors included Vinko Dvořák and Andrija Mohorovičić, and later at Göttingen, where his professors included Arnold Sommerfeld, Woldemar Voigt and David Hilbert. He later received a doctorate from the University of Zagreb. Mohorovičić was an opponent of Einstein's theory of relativity; because of his longtime opposition to and criticism of the theory, he remained a high school professor his whole life. His work went largely ignored, especially in Croatia. He died in Zagreb.
Scientific work
His scientific interests included seismology, meteorology, astrophysics and theoretical physics. He began his career in seismology with his father. In 1913 he developed a new method for locating the hypocenter of an earthquake and gave an independent verification of the discontinuity theory put forward by his father. In 1916 he published the idea that smaller discontinuities exist in Earth's crust and mantle. He put forward his own theory about the composition and formation of the Moon and the explosive formation of lunar craters, and predicted the existence of a Moho layer on the Moon. The existence of a Moho layer on the Moon was confirmed in 1969 by seismic measurements made by the Apollo 11 crew.
Mohorovičić is often called "the father of positronium" because his most significant work is the prediction of the existence of positronium. Positronium is the bound state of an electron and a positron and therefore the lightest atom. It was experimentally discovered in 1951 by Martin Deutsch and became known as positronium. Mohorovičić in his paper also calculated spectra of positronium and predicted the existence of positronium in stars becau |
https://en.wikipedia.org/wiki/No%20instruction%20set%20computing | No instruction set computing (NISC) is a computing architecture and compiler technology for designing highly efficient custom processors and hardware accelerators by allowing a compiler to have low-level control of hardware resources.
Overview
NISC is a statically scheduled horizontal nanocoded architecture (SSHNA). The term "statically scheduled" means that operation scheduling and hazard handling are done by a compiler. The term "horizontal nanocoded" means that NISC does not have any predefined instruction set or microcode. The compiler generates nanocode which directly controls the functional units, registers and multiplexers of a given datapath (a toy nanocode interpreter is sketched after the list below). Giving low-level control to the compiler enables better utilization of datapath resources, which ultimately results in better performance. The benefits of NISC technology are:
Simpler controller: no hardware scheduler, no instruction decoder
Better performance: more flexible architecture, better resource utilization
Easier to design: no need for designing instruction-sets
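The following toy interpreter is a loose illustration only (the control-word fields and the four-register datapath are invented, not part of any real NISC toolchain): each horizontal control word is a record whose fields directly drive datapath elements, with no instruction decoding step.

```python
# Toy "horizontal nanocode" interpreter: each control word directly drives the
# datapath (register-file ports, ALU operation, write-back enable).
REGS = [0, 0, 0, 0]

ALU = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def step(cw):
    """Execute one control word: read two registers, apply the selected ALU
    function, and (optionally) write the result back."""
    result = ALU[cw["alu"]](REGS[cw["src1"]], REGS[cw["src2"]])
    if cw["wb"]:                       # write-back enable
        REGS[cw["dst"]] = result

# Nanocode a (hypothetical) compiler might emit for r3 = (r0 + r1) * r2:
REGS[:3] = [2, 3, 4]
program = [
    {"alu": "add", "src1": 0, "src2": 1, "dst": 3, "wb": True},
    {"alu": "mul", "src1": 3, "src2": 2, "dst": 3, "wb": True},
]
for cw in program:
    step(cw)
print(REGS[3])   # 20
```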
The instruction set and controller are the most tedious and time-consuming parts of a processor to design. By eliminating these two, the design of custom processing elements becomes significantly easier.
Furthermore, the datapath of NISC processors can even be generated automatically for a given application. Therefore, the designer's productivity is improved significantly.
Since NISC datapaths are very efficient and can be generated automatically, NISC technology is comparable to high level synthesis (HLS) or C to HDL synthesis approaches. In fact, one of the benefits of this architecture style is its capability to bridge these two technologies (custom processor design and HLS).
Zero instruction set computer
In computer science, zero instruction set computer (ZISC) refers to a computer architecture based solely on pattern matching and absence of (micro-)instructions in the classical sense. These chips are known for being thought of as comparable to the n |
https://en.wikipedia.org/wiki/Sodium%20methylparaben | Sodium methylparaben (sodium methyl para-hydroxybenzoate) is a compound with formula Na(CH₃(C₆H₄COO)O). It is the sodium salt of methylparaben.
It is a food additive with the E number E219 which is used as a preservative. |
https://en.wikipedia.org/wiki/Ratio%20of%20uniforms | The ratio of uniforms is a method initially proposed by Kinderman and Monahan in 1977 for pseudo-random number sampling, that is, for drawing random samples from a statistical distribution. Like rejection sampling and inverse transform sampling, it is an exact simulation method. The basic idea of the method is to use a change of variables to create a bounded set, which can then be sampled uniformly to generate random variables following the original distribution. One feature of this method is that the distribution to sample is only required to be known up to an unknown multiplicative factor, a common situation in computational statistics and statistical physics.
Motivation
A convenient technique to sample a statistical distribution is rejection sampling. When the probability density function of the distribution is bounded and has finite support, one can define a bounding box around it (a uniform proposal distribution), draw uniform samples in the box and return only the x coordinates of the points that fall below the function (see graph). As a direct consequence of the fundamental theorem of simulation, the returned samples are distributed according to the original distribution.
When the support of the distribution is infinite, it is impossible to draw a rectangular bounding box containing the graph of the function. One can still use rejection sampling, but with a non-uniform proposal distribution. It can be delicate to choose an appropriate proposal distribution, and one also has to know how to efficiently sample this proposal distribution.
The method of the ratio of uniforms offers a solution to this problem, by essentially using as proposal distribution the distribution created by the ratio of two uniform random variables.
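To make the idea concrete, the following sketch (not from the article) applies the ratio-of-uniforms method to the unnormalized standard-normal density f(x) = exp(-x²/2), for which the classical bounding box is 0 < u ≤ 1, |v| ≤ √(2/e); the acceptance test u ≤ √(f(v/u)) simplifies to v² ≤ -4u² ln u.

```python
import math, random

def normal_ratio_of_uniforms():
    """Draw one standard-normal sample via the ratio-of-uniforms method,
    using the unnormalized density f(x) = exp(-x**2 / 2)."""
    # Bounding box: 0 < u <= sup sqrt(f) = 1, |v| <= sup |x|*sqrt(f(x)) = sqrt(2/e).
    vmax = math.sqrt(2.0 / math.e)
    while True:
        u = 1.0 - random.random()              # uniform on (0, 1]
        v = random.uniform(-vmax, vmax)
        if v * v <= -4.0 * u * u * math.log(u):  # i.e. u <= sqrt(f(v/u))
            return v / u                         # the ratio follows f

samples = [normal_ratio_of_uniforms() for _ in range(10_000)]
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples) - mean ** 2
print(round(mean, 2), round(var, 2))   # approximately 0 and 1
```

Note that only the unnormalized f is needed, matching the remark above about densities known up to a multiplicative factor.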
Statement
The statement and the proof are adapted from the presentation by Gobet.
Complements
Rejection sampling in the bounded set
The above statement does not specify how one should perform the uniform sampling in the bounded set. However, the interest of thi |
https://en.wikipedia.org/wiki/Albert%20Muchnik | Albert Abramovich Muchnik (2 January 1934 – 14 February 2019) was a Russian mathematician who worked in the field of foundations and mathematical logic.
He received his Ph.D. from Moscow State Pedagogical Institute in 1959 under the advisorship of Pyotr Novikov. Muchnik's most significant contribution was on the subject of relative computability. He and Richard Friedberg independently introduced the priority method, which gave an affirmative answer to Post's problem regarding the existence of recursively enumerable Turing degrees between 0 and 0′. This result, now known as the Friedberg–Muchnik theorem, opened the study of the Turing degrees of the recursively enumerable sets, which turned out to possess a very complicated and non-trivial structure.
Muchnik also made significant contributions to Medvedev's theory of mass problems, introducing a generalisation of Turing degrees, called "Muchnik degrees", in 1963. Muchnik also elaborated Kolmogorov's proposal of viewing intuitionism as "calculus of problems" and proved that the lattice of Muchnik degrees is Brouwerian.
Muchnik was married to the Russian mathematician Nadezhda Ermolaeva. Their son Andrey Muchnik, who died in 2007, was also a mathematician working in foundations of mathematics. He died in February 2019.
Selected publications
A. A. Muchnik, On the unsolvability of the problem of reducibility in the theory of algorithms. Doklady Akademii Nauk SSSR (N.S.), vol. 108 (1956), pp. 194–197 |
https://en.wikipedia.org/wiki/Burning%20plasma | In plasma physics, a burning plasma is one in which most of the heating comes from fusion reactions involving thermal plasma ions. The Sun and similar stars are burning plasmas, and in 2020 the National Ignition Facility achieved a burning plasma. A closely related concept is that of an ignited plasma, in which all of the heating comes from fusion reactions.
The Sun
In the Sun and other similar stars, those fusion reactions involve hydrogen ions. The high temperatures needed to sustain fusion reactions are maintained by a self-heating process in which energy from the fusion reaction heats the thermal plasma ions via particle collisions. A plasma enters what scientists call the burning plasma regime when the self-heating power exceeds any external heating.
The Sun is a burning plasma that has reached fusion ignition, meaning the Sun's plasma temperature is maintained solely by energy released from fusion. The Sun has been burning hydrogen for 4.5 billion years and is about halfway through its life cycle.
Thermonuclear weapons
Thermonuclear weapons, also known as hydrogen bombs, are nuclear weapons that use energy released by a burning plasma's fusion reactions to produce part of their explosive yield. This is in contrast to pure-fission weapons, which produce all of their yield from a neutronic nuclear fission reaction. The first thermonuclear explosion, and thus the first man-made burning plasma, was the Ivy Mike test carried out by the United States in 1952. All high-yield nuclear weapons today are thermonuclear weapons.
The National Ignition Facility
It was announced in 2022 that a burning plasma had been achieved at the National Ignition Facility, a large laser-based inertial confinement fusion research device, located at the Lawrence Livermore National Laboratory in Livermore, California. The burning plasma created was sustained for approximately 100 trillionths of a second, and the process consumed more energy than it created by a factor of approximate |
https://en.wikipedia.org/wiki/Square%20root%20of%206 | The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as $\sqrt{6}$ and in exponent form as $6^{1/2}$.
It is an irrational algebraic number. Its decimal expansion begins 2.449489742783178…, which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about 0.0005. It takes two more digits (2.4495) to reduce the error by about half. The approximation 218/89 (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than 0.00006, or less than one part in 47,000.
Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers.
NASA has published more than a million decimal digits of the square root of six.
Rational approximations
The square root of 6 can be expressed as the periodic continued fraction
$\sqrt{6} = [2; 2, 4, 2, 4, 2, 4, \ldots]$.
The successive partial evaluations of the continued fraction, which are called its convergents, approach $\sqrt{6}$:
$\frac{2}{1}, \frac{5}{2}, \frac{22}{9}, \frac{49}{20}, \frac{218}{89}, \frac{485}{198}, \ldots$
Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …, and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, ….
Each convergent is a best rational approximation of $\sqrt{6}$; in other words, it is closer to $\sqrt{6}$ than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent.
The convergents, expressed as $\frac{x}{y}$, satisfy alternately the Pell equations $x^2 - 6y^2 = -2$ and $x^2 - 6y^2 = 1$.
When $\sqrt{6}$ is approximated with the Babylonian method, starting with $x_0 = 2$ and using $x_{n+1} = \tfrac{1}{2}\left(x_n + \tfrac{6}{x_n}\right)$, the $n$th approximant $x_n$ is equal to the $2^n$th convergent of the continued fraction.
The Babylonian |
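These numerical claims — the convergents, the alternating Pell relation, and the Babylonian correspondence — can be checked with exact rational arithmetic; here is a short sketch (variable names invented):

```python
from fractions import Fraction

def convergents(n):
    """First n convergents of sqrt(6) = [2; 2, 4, 2, 4, ...]."""
    terms = [2] + [2 if i % 2 == 0 else 4 for i in range(n - 1)]
    p_prev, p = 1, terms[0]          # numerators
    q_prev, q = 0, 1                 # denominators
    out = [Fraction(p, q)]
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append(Fraction(p, q))
    return out

cs = convergents(8)
print([(c.numerator, c.denominator) for c in cs])   # (2,1), (5,2), (22,9), ...
# Pell relation: numerator^2 - 6*denominator^2 alternates between -2 and 1.
print([c.numerator ** 2 - 6 * c.denominator ** 2 for c in cs])

# Babylonian iteration x_{n+1} = (x_n + 6/x_n) / 2 starting from x_0 = 2:
x = Fraction(2)
for n in range(1, 4):
    x = (x + 6 / x) / 2
    print(n, x, x == cs[2 ** n - 1])   # x_n equals the 2^n-th convergent
```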
https://en.wikipedia.org/wiki/Oka%E2%80%93Weil%20theorem | In mathematics, especially the theory of several complex variables, the Oka–Weil theorem is a result about the uniform convergence of holomorphic functions on Stein spaces due to Kiyoshi Oka and André Weil.
Statement
The Oka–Weil theorem states that if X is a Stein space and K is a compact $\mathcal{O}(X)$-convex subset of X, then every holomorphic function in an open neighborhood of K can be approximated uniformly on K by holomorphic functions on $X$ (in the classical case $X = \mathbb{C}^n$, by polynomials).
Applications
Since Runge's theorem may not hold for several complex variables, the Oka–Weil theorem is often used as an approximation theorem for several complex variables. The Behnke–Stein theorem was originally proved using the Oka–Weil theorem.
See also
Oka coherence theorem |
https://en.wikipedia.org/wiki/Driver%20scheduling%20problem | The driver scheduling problem (DSP) is a type of problem in operations research and theoretical computer science.
The DSP consists of selecting a set of duties (assignments) for the drivers or pilots of vehicles (e.g., buses, trains, boats, or planes) involved in the transportation of passengers or goods, within the constraints of various legislative and logistical criteria.
Criteria and modelling
This very complex problem involves several constraints related to labour and company rules and also different evaluation criteria and objectives. Being able to solve this problem efficiently can have a great impact on costs and quality of service for public transportation companies. There is a large number of different rules that a feasible duty might be required to satisfy, such as
Minimum and maximum stretch duration
Minimum and maximum break duration
Minimum and maximum work duration
Minimum and maximum total duration
Maximum extra work duration
Maximum number of vehicle changes
Minimum driving duration of a particular vehicle
Operations research has provided optimization models and algorithms that lead to efficient solutions for this problem. Among the most common models proposed to solve the DSP are the Set Covering and Set Partitioning models (SCP/SPP). In the SPP model, each work piece (task) is covered by exactly one duty. In the SCP model, it is possible to have more than one duty covering a given work piece.
In both models, the set of work pieces that needs to be covered is laid out in rows, and the set of previously defined feasible duties available for covering specific work pieces is arranged in columns. The DSP resolution, based on either of these models, is the selection of the set of feasible duties that guarantees that there is one (SPP) or more (SCP) duties covering each work piece while minimizing the total cost of the final schedule.
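The following toy sketch (the duties, costs and work pieces are invented) illustrates the difference between the two models by exhaustive search; real instances are solved with integer-programming techniques rather than enumeration.

```python
from itertools import combinations

# Toy instance: work pieces 0..4 must be covered by feasible duties;
# each duty = (set of work pieces it covers, cost).
duties = [
    ({0, 1}, 3), ({1, 2}, 3), ({2, 3}, 3), ({3, 4}, 3),
    ({0, 1, 2}, 5), ({2, 3, 4}, 5), ({0, 4}, 4),
]
pieces = {0, 1, 2, 3, 4}

def best_schedule(partition=False):
    """Exhaustively find the cheapest duty selection.
    partition=True  -> set partitioning (each piece covered exactly once)
    partition=False -> set covering    (each piece covered at least once)"""
    best = None
    for r in range(1, len(duties) + 1):
        for combo in combinations(duties, r):
            covered = [p for s, _ in combo for p in s]
            ok = set(covered) == pieces and (not partition or len(covered) == len(pieces))
            if ok:
                cost = sum(c for _, c in combo)
                if best is None or cost < best[0]:
                    best = (cost, combo)
    return best

print(best_schedule(partition=False))  # cheapest covering
print(best_schedule(partition=True))   # cheapest exact partition
```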
See also
Crew scheduling
Deadheading (employee) |
https://en.wikipedia.org/wiki/B1%20Free%20Archiver | B1 Free Archiver is a proprietary freeware multi-platform file archiver and file manager. B1 Archiver is available for Microsoft Windows, Linux, macOS, and Android. It has full support (compression, unpacking and encryption) for ZIP and its native B1 format. The program decompresses more than 20 popular archive formats. It creates split and encrypted archives.
B1 Free Archiver is translated into more than 30 languages. Translations are made by volunteers through the Crowdin localization management platform. The program can be used through graphical user interface or command line interface.
B1 Free Archiver is written in C++/Qt and is released under a proprietary license.
Features
B1 Free Archiver supports opening most popular archive formats (such as B1, ZIP, RAR, 7z, GZIP, TAR.GZ, TAR.BZ2 and ISO) but can create only .b1 and .zip archives. The utility can also create split archives, which consist of several parts each of a specified size, as well as password-protected archives encrypted with the 256-bit AES algorithm.
The desktop application supports editing an archive: adding new files, deleting files from the archive, and editing files directly in the archive.
B1 Free Archiver has full drag-and-drop support, keyboard shortcuts and hotkeys navigation.
The program works on Windows (XP and higher), Linux (Ubuntu/Debian, Fedora), macOS (10.6 and higher) and Android. There is also an online B1 Free Archiver decompression tool.
A main disadvantage is that files inside B1 archives do not retain timestamps. |
https://en.wikipedia.org/wiki/78%20%28number%29 | 78 (seventy-eight) is the natural number following 77 and followed by 79.
In mathematics
78 is:
the 4th sphenic number (discrete tri-prime), and the 4th of the form 2·3·r.
an abundant number with an aliquot sum of 90.
a semiperfect number, as a multiple of a perfect number.
the 12th triangular number.
a palindromic number in bases 5 (303₅), 7 (141₇), 12 (66₁₂), 25 (33₂₅), and 38 (22₃₈).
a Harshad number in bases 3, 4, 5, 6, 7, 13 and 14.
an Erdős–Woods number, since it is possible to find sequences of 78 consecutive integers such that each inner member shares a factor with either the first or the last member.
the dimension of the exceptional Lie group E6 and several related objects.
the smallest number that can be expressed as the sum of four distinct nonzero squares in more than one way: 78 = 1² + 2² + 3² + 8² = 1² + 4² + 5² + 6² = 2² + 3² + 4² + 7².
77 and 78 form a Ruth–Aaron pair.
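Several of the properties above can be verified with a short script (written for this list, not from the article):

```python
from itertools import combinations

n = 78
print(n == 12 * 13 // 2)       # 12th triangular number
print(n == 2 * 3 * 13)         # sphenic: product of three distinct primes
print(sum(d for d in range(1, n) if n % d == 0))   # aliquot sum: 90, so abundant

def digits(m, base):
    out = []
    while m:
        out.append(m % base)
        m //= base
    return out[::-1]

# Palindromic bases up to 38: expected [5, 7, 12, 25, 38].
print([b for b in range(2, 39) if digits(n, b) == digits(n, b)[::-1]])

# Representations as a sum of four distinct nonzero squares.
print([q for q in combinations(range(1, 9), 4)
       if sum(x * x for x in q) == n])   # (1,2,3,8), (1,4,5,6), (2,3,4,7)
```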
In science
The atomic number of platinum.
In other fields
78 is also:
In reference to gramophone records, 78 refers to those meant to be spun at 78 revolutions per minute. Compare: LP and 45 rpm. 33 + 45 = 78
A typical tarot deck contains 78 cards: the 21 trump cards, the Fool and the 56 suit cards
The Rule of 78s is a method of yearly interest calculation
The number used by Martin Truex Jr. and Furniture Row Racing to win the 2017 Monster Energy NASCAR Cup Series championship and 2016 Coca-Cola 600. The team and driver Regan Smith also won the 2011 Showtime Southern 500 with 78. The number is now used by owner-driver B.J. McLeod for Live Fast Motorsports. |
https://en.wikipedia.org/wiki/Paul%20Horn%20%28computer%20scientist%29 | Paul M. Horn (born August 16, 1946) is an American computer scientist and solid-state physicist who has made contributions to pervasive computing, pioneered the use of copper and self-assembly in chip manufacturing, and helped manage the development of deep computing, an important tool that provides business decision makers with the ability to analyze and develop solutions to very complex and difficult problems.
Horn was elected a member of the National Academy of Engineering in 2007 for leadership in the development of information technology products, ranging from microelectronics to supercomputing.
Early life and education
Horn was born on August 16, 1946, and graduated from Clarkson University in 1968 with a Bachelor of Science degree. He obtained his PhD from the University of Rochester in physics in 1973.
Career
Horn has, at various times, been Senior Vice President of the IBM Corporation and executive director of Research. While at IBM, he initiated the project to develop Watson, the computer that competed successfully in the quiz show Jeopardy!.
He is currently a New York University (NYU) Distinguished Scientist in Residence and NYU Stern Executive in Residence. He is also a professor at NYU Tandon School of Engineering. In 2009, he was appointed as the Senior Vice Provost for Research at NYU.
Awards
Industrial Research Institute (IRI) Medal in honor of his contributions to technology leadership, 2005
American Physical Society, George E. Pake Prize, 2002
Hutchison Medal from the University of Rochester, 2002
Distinguished Leadership award from the New York Hall of Science, 2000
Bertram Eugene Warren Award from the American Crystallographic Association, 1988 |
https://en.wikipedia.org/wiki/CYP14%20family | Cytochrome P450, family 14, also known as CYP14, is a nematode cytochrome P450 monooxygenase family. The first gene identified in this family is CYP14A1 from Caenorhabditis elegans. The function of most genes in this family is unknown. |
https://en.wikipedia.org/wiki/Hermann%20von%20Helmholtz | Hermann Ludwig Ferdinand von Helmholtz (31 August 1821 – 8 September 1894) was a German physicist and physician who made significant contributions in several scientific fields, particularly hydrodynamic stability. The Helmholtz Association, the largest German association of research institutions, is named in his honor.
In the fields of physiology and psychology, Helmholtz is known for his mathematics concerning the eye, theories of vision, ideas on the visual perception of space, color vision research, the sensation of tone, perceptions of sound, and empiricism in the physiology of perception. In physics, he is known for his theories on the conservation of energy and on the electrical double layer, work in electrodynamics, chemical thermodynamics, and on a mechanical foundation of thermodynamics. As a philosopher, he is known for his philosophy of science, ideas on the relation between the laws of perception and the laws of nature, the science of aesthetics, and ideas on the civilizing power of science.
Biography
Early years
Helmholtz was born in Potsdam, the son of the local gymnasium headmaster, Ferdinand Helmholtz, who had studied classical philology and philosophy, and who was a close friend of the publisher and philosopher Immanuel Hermann Fichte. Helmholtz's work was influenced by the philosophy of Johann Gottlieb Fichte and Immanuel Kant. He tried to trace their theories in empirical matters like physiology.
As a young man, Helmholtz was interested in natural science, but his father wanted him to study medicine. Helmholtz earned a medical doctorate at Medicinisch-chirurgisches Friedrich-Wilhelm-Institute in 1842 and served a one-year internship at the Charité hospital (because there was financial support for medical students).
Trained primarily in physiology, Helmholtz wrote on many other topics, ranging from theoretical physics, to the age of the Earth, to the origin of the Solar System.
University posts
Helmholtz's first academic position was as a |
https://en.wikipedia.org/wiki/Metal%20hose | A metal hose is a flexible metal line element. There are two basic types of metal hose that differ in their design and application: stripwound hoses and corrugated hoses.
Stripwound hoses have a high mechanical strength (e.g. tensile strength and tear strength). Corrugated hoses can withstand high pressure and provide maximum leak tightness on account of their material. Corrugated hoses also exhibit corrosion resistance and pressure tightness under the most extreme conditions, such as in aggressive seawater or at extreme temperatures such as found in space or when transporting cooled liquid gas. They are particularly well suited for conveying hot and cold substances.
With a history of more than one hundred years, metal hoses have given rise to other flexible line elements, including metal expansion joints, metal bellows and semi-flexible and flexible metal pipes. In Germany alone, there are about 3500 patents relating to metal hoses.
The origins
The first metal hose was technically a stripwound hose. It was invented in 1885 by the jewellery manufacturer Heinrich Witzenmann (1829–1906) of Pforzheim, Germany, together with the French engineer Eugène Levavassèur. The hose was modelled after the goose throat necklace, a piece of jewellery that consisted of interlacing metal strips. The original design of the hose was based on a helically coiled metal strip with an S-shaped profile. The profile interlocked along the windings of the helical coil. Due to a cavity between the interlocking profiles, this did not create a tight fit. The cavity was sealed by means of a rubber thread.
The result was a permanently flexible, leak-tight steel body of any length and diameter with a high mechanical strength. In France it was patented on 4 August 1885 with the patent number 170 479, and in Germany on 27 August 1885 with the German Reichspatent No. 34 871.
From 1886 to 1905, Heinrich Witzenmann continued to develop numerous noteworthy profiles for hose production which are stil |
https://en.wikipedia.org/wiki/Th%C3%A9%C3%A2tre%20D%27op%C3%A9ra%20Spatial | Théâtre D'opéra Spatial is an image created by the generative artificial intelligence platform Midjourney, using a prompt by Jason Matthew Allen. The painting became a news story when it won the 2022 Colorado State Fair's annual fine art competition on 5 September, becoming one of the first AI generated images to win such a prize.
Winning the fair's contest for "digital arts/digitally-manipulated photography" led to a backlash from artists who accused Allen of cheating. Allen responded: "I'm not going to apologize for it. I won, and I didn’t break any rules."
The two category judges were unaware that Midjourney used AI to generate images, although they later said that had they known this, they would have awarded Allen the top prize anyway.
The image, created using a text prompt, was upscaled using Gigapixel AI.
In September 2023 the US Copyright Office review board found that Théâtre D'Opéra Spatial was not eligible for copyright protection as the rules "exclude works produced by non-humans". This decision reaffirms previous guidance given in respect of AI by the Office and a recent court case (Thaler v Comptroller-General of Patents, Designs and Trademarks) that found against Thaler on the basis of a principle of human authorship that—though not enshrined in copyright law—is a working principle used by the US Copyright Office.
Allen insists he will fight on: "Allen was dogged in his attempt to register his work. He sent a written explanation to the Copyright Office detailing how much he'd done to manipulate what Midjourney conjured, as well as how much he fiddled with the raw image, using Adobe Photoshop to fix flaws and Gigapixel AI to increase the size and resolution. He specified that creating the painting had required at least 624 text prompts and input revisions."
Some legal writers support his claim and consider the decision to be a form of technological discrimination, comparing it with the treatment of photographs and the modern use of electronic cameras.
Any leg |
https://en.wikipedia.org/wiki/HIST1H4E | Histone H4 is a protein that in humans is encoded by the HIST1H4E gene.
Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. Two molecules of each of the four core histones (H2A, H2B, H3, and H4) form an octamer, around which approximately 146 bp of DNA is wrapped in repeating units, called nucleosomes. The linker histone, H1, interacts with linker DNA between nucleosomes and functions in the compaction of chromatin into higher order structures. This gene is intronless and encodes a member of the histone H4 family. Transcripts from this gene lack polyA tails but instead contain a palindromic termination element. This gene is found in the large histone gene cluster on chromosome 6. |
https://en.wikipedia.org/wiki/Sequoia%20%28supercomputer%29 | IBM Sequoia was a petascale Blue Gene/Q supercomputer constructed by IBM for the National Nuclear Security Administration as part of the Advanced Simulation and Computing Program (ASC). It was delivered to the Lawrence Livermore National Laboratory (LLNL) in 2011 and was fully deployed in June 2012. Sequoia was dismantled in 2020; its last appearance on the TOP500 list was at #22 in November 2019.
On June 14, 2012, the TOP500 Project Committee announced that Sequoia replaced the K computer as the world's fastest supercomputer, with a LINPACK performance of 17.17 petaflops, 63% faster than the K computer's 10.51 petaflops, having 123% more cores than the K computer's 705,024 cores. Sequoia is also more energy efficient, as it consumes 7.9 MW, 37% less than the K computer's 12.6 MW.
On June 17, 2013, Sequoia was at third position on the TOP500 ranking, behind Tianhe-2 and Titan. In June 2016, it slipped to fourth place, and in June 2017 to fifth place; it later dropped to sixth place.
Record-breaking science applications have been run on Sequoia, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run, while the Cardioid code, which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation.
The entire supercomputer ran on Linux, with CNK running on over 98,000 nodes and Red Hat Enterprise Linux running on 768 I/O nodes connected to the Lustre filesystem.
Dawn prototype
IBM built a prototype, called "Dawn", capable of 500 teraflops, using the Blue Gene/P design, to evaluate the Sequoia design. This system was delivered in April 2009 and entered the Top500 list at 9th place in June 2009.
Purpose
Sequoia was used primarily for nuclear weapons simulation, replacing the current Blue Gene/L and ASC Purple supercomp |
https://en.wikipedia.org/wiki/Economic%20restructuring | Economic restructuring is used to indicate changes in the constituent parts of an economy in a very general sense. In the western world, it is usually used to refer to the phenomenon of urban areas shifting from a manufacturing to a service sector economic base. It has profound implications for productive capacities and competitiveness of cities and regions. This transformation has affected demographics including income distribution, employment, and social hierarchy; institutional arrangements including the growth of the corporate complex, specialized producer services, capital mobility, informal economy, nonstandard work, and public outlays; as well as geographic spacing including the rise of world cities, spatial mismatch, and metropolitan growth differentials.
Demographic impact
As cities experience a loss of manufacturing jobs and growth of services, sociologist Saskia Sassen affirms that a widening of the social hierarchy occurs in which high-level, high-income salaried professional jobs expand in the service industries alongside a greater incidence of low-wage, low-skilled jobs, usually filled by immigrants and minorities. A "missing middle" eventually develops in the wage structure. Effects of this social polarization include the increasing concentration of poverty in large U.S. cities, the increasing concentration of black and Hispanic populations in large U.S. cities, and distinct social forms such as the underclass, the informal economy, and entrepreneurial immigrant communities. In addition, the declining manufacturing sector leaves behind strained blue-collar workers who endure chronic unemployment, economic insecurity, and stagnation due to the global economy's capital flight. Wages and unionization rates for manufacturing jobs also decline. One other qualitative dimension involves the feminization of the job supply as more and more women enter the labor force, usually in the service sector.
Both costs and benefits are associated with economic re |
https://en.wikipedia.org/wiki/Power%20amplifier%20classes | In electronics, power amplifier classes are letter symbols applied to different power amplifier types. The class gives a broad indication of an amplifier's characteristics and performance. The classes are related to the time period that the active amplifier device is passing current, expressed as a fraction of the period of a signal waveform applied to the input. A class-A amplifier conducts through the entire period of the signal, a class-B amplifier for only one half of the input period, and a class-C amplifier for much less than half of the input period. A class-D amplifier operates its output device in a switching manner; the fraction of the time that the device is conducting is adjusted so that a pulse-width-modulated output is obtained from the stage.
Additional letter classes are defined for special-purpose amplifiers, with additional active elements or particular power supply improvements; sometimes a new letter symbol is used by a manufacturer to promote its proprietary design.
By December 2010, classes AB and D dominated nearly all of the audio amplifier market, with the former favored in portable music players, home audio and cell phones owing to the lower cost of class-AB chips.
Power amplifier classes
Power amplifier circuits (output stages) are classified as A, B, AB and C for linear designs—and class D and E for switching designs. The classes are based on the proportion of each input cycle (conduction angle) during which an amplifying device passes current. The image of the conduction angle derives from amplifying a sinusoidal signal. If the device is always on, the conducting angle is 360°. If it is on for only half of each cycle, the angle is 180°. The angle of flow is closely related to the amplifier power efficiency.
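As a rough numerical illustration (the idealized device and the bias values are invented), the conduction angle can be computed by counting the fraction of a sine period during which the device is on:

```python
import math

def conduction_angle(bias: float, cutoff: float = 0.0, n: int = 360_000) -> float:
    """Fraction of a sine period during which an idealized device conducts,
    i.e. bias + sin(theta) > cutoff, expressed in degrees."""
    on = sum(1 for k in range(n) if bias + math.sin(2 * math.pi * k / n) > cutoff)
    return 360.0 * on / n

print(round(conduction_angle(bias=2.0)))   # 360 -> class A (always on)
print(round(conduction_angle(bias=0.0)))   # 180 -> class B (half the cycle)
print(round(conduction_angle(bias=-0.5)))  # ~120 -> class C (< half the cycle)
```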
In the illustrations below, a bipolar junction transistor is shown as the amplifying device. However, the same attributes are found with MOSFETs or vacuum tubes.
Class A
In a class-A amplifier, 100% of the input signal is used (conduction angle Θ = 360°). The activ |
https://en.wikipedia.org/wiki/Diameter%20%28protocol%29 | Diameter is an authentication, authorization, and accounting protocol for computer networks. It evolved from the earlier RADIUS protocol. It belongs to the application layer protocols in the internet protocol suite.
Diameter Applications extend the base protocol by adding new commands and/or attributes, such as those for use with the Extensible Authentication Protocol (EAP).
Comparison with RADIUS
The name is a play on words, derived from the RADIUS protocol, which is the predecessor (a diameter is twice the radius). Diameter is not directly backward compatible but provides an upgrade path for RADIUS. The main features provided by Diameter but lacking in RADIUS are:
Support for SCTP
Capability negotiation
Application layer acknowledgements; Diameter defines failover methods and state machines (RFC 3539)
Extensibility; new commands can be defined
Aligned on 32 bit boundaries
Also:
Like RADIUS, it is intended to work in both local and roaming AAA situations.
It uses TCP or SCTP, unlike RADIUS which uses UDP.
Unlike RADIUS, it includes no encryption but can be protected by transport-level security (IPsec or TLS).
The base size of the attribute-value (AV) identifier is 32 bits, unlike RADIUS, which uses 8 bits as the base AV identifier size (a sketch of AVP packing follows this list).
Like RADIUS, it supports stateless as well as stateful modes.
Like RADIUS, it supports application-layer acknowledgment and defines failover.
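As an illustration of the 32-bit alignment and the AVP layout defined in RFC 6733, here is a minimal sketch that packs one AVP; the example value is arbitrary, and the flags-only header (no Vendor-ID field) is assumed.

```python
import struct

def pack_avp(code: int, flags: int, data: bytes) -> bytes:
    """Pack a Diameter AVP per RFC 6733: 32-bit AVP Code, 8-bit Flags,
    24-bit Length (header + data, excluding padding), then the data
    padded out to the next 32-bit boundary."""
    length = 8 + len(data)                       # header is 8 bytes without Vendor-ID
    header = struct.pack("!IB", code, flags) + length.to_bytes(3, "big")
    padding = b"\x00" * (-len(data) % 4)         # pad to 32-bit alignment
    return header + data + padding

# Example: an Origin-Host AVP (code 264) with the M (mandatory) flag 0x40.
avp = pack_avp(264, 0x40, b"host.example.com")
print(len(avp), avp.hex())
```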
Diameter is used for many different interfaces defined by the 3GPP standards, with each interface typically defining new commands and attributes.
Applications
A Diameter Application is not a software application but is a protocol based on the Diameter base protocol defined in RFC 6733 and RFC 7075 (Obsoletes: RFC 3588). Each application is defined by an application identifier and can add new command codes and/or new mandatory AVPs (Attribute-Value Pair). Adding a new optional AVP does not require a new application.
Examples of Diameter applications:
Diameter Mobile IPv4 Application (MobileIP, RFC 4004) |
https://en.wikipedia.org/wiki/Accidental%20Adversaries | Accidental Adversaries is one of the ten system archetypes used in system dynamics modelling, or systems thinking. This archetype describes the degenerative pattern that develops when two subjects cooperating for a common goal, accidentally take actions that undermine each other's success. It is similar to the escalation system archetype in terms of pattern behaviour that develops over time.
Archetype description
The archetype describes a pattern in which two subjects have decided to work together because they will benefit from the alliance. Each takes actions believing they will benefit the other, and if the cooperation works, both benefit from it. Problems start arising when one or both of the subjects needs to fix a local gap in performance, perhaps due to external pressure. They initiate action to fix the gap and accidentally undermine each other's success. The result of these activities may produce a sense of resentment or frustration between the subjects, or it may even turn the subjects into adversaries (hence the archetype name), thereby destroying the alliance.
History
The original set of system archetypes were published in The Fifth Discipline by Peter Senge. The exact source of these generic structures is not known. However "Accidental Adversaries" has a clear origin. It is derived from observations made by Jennifer Kemeny, a colleague of Senge's and a contributor to the original archetype descriptions. In her consulting work in the late 1980s she was intrigued at how often potential strategic alliances were unsuccessful, or devolved into outright hostility. Such a recurring phenomenon suggested a structural cause. The essential elements of the archetype were first described during a session Kemeny facilitated. It was the first meeting of the first ever alliance between Walmart and Procter and Gamble. Employees diagrammed how their protective business policies caused unintentional damage to the other company – which responded with |
https://en.wikipedia.org/wiki/Kleiman%27s%20theorem | In algebraic geometry, Kleiman's theorem, introduced by Steven Kleiman (1974), concerns dimension and smoothness of the scheme-theoretic intersection after some perturbation of factors in the intersection.
Precisely, it states: given a connected algebraic group $G$ acting transitively on an algebraic variety $X$ over an algebraically closed field $k$ and morphisms of varieties $V_i \to X$, $i = 1, 2$, $G$ contains a nonempty open subset such that for each $g$ in the set,
either $gV_1 \times_X V_2$ is empty or has pure dimension $\dim V_1 + \dim V_2 - \dim X$, where $gV_1$ denotes the composition $V_1 \to X \xrightarrow{g} X$,
(Kleiman–Bertini theorem) if the $V_i$ are smooth varieties and if the characteristic of the base field $k$ is zero, then $gV_1 \times_X V_2$ is smooth.
Statement 1 establishes a version of Chow's moving lemma: after some perturbation of cycles on X, their intersection has expected dimension.
Sketch of proof
We write $f_i \colon V_i \to X$ for the given morphisms. Let $\sigma \colon G \times V_1 \to X$ be the composition $(1_G, f_1) \colon G \times V_1 \to G \times X$ followed by the group action $G \times X \to X$.
Let $\Gamma$ be the fiber product of $\sigma$ and $f_2 \colon V_2 \to X$; its set of closed points is
$\Gamma = \{ (g, v, w) \mid g \in G,\ v \in V_1,\ w \in V_2,\ g \cdot f_1(v) = f_2(w) \}$.
We want to compute the dimension of $\Gamma$. Let $p \colon \Gamma \to V_1 \times V_2$ be the projection. It is surjective since $G$ acts transitively on $X$. Each fiber of $p$ is a coset of stabilizers on $X$, and so
$\dim \Gamma = \dim V_1 + \dim V_2 + \dim G - \dim X$.
Consider the projection $q \colon \Gamma \to G$; the fiber of $q$ over $g$ is $gV_1 \times_X V_2$ and has the expected dimension unless it is empty. This completes the proof of Statement 1.
For Statement 2, since $G$ acts transitively on $X$ and the smooth locus of $X$ is nonempty (by characteristic zero), $X$ itself is smooth. Since $G$ is smooth, each geometric fiber of $p$ is smooth and thus $p$ is a smooth morphism. It follows that a general fiber of $q$ is smooth by generic smoothness.
Notes |
https://en.wikipedia.org/wiki/Weebly | Weebly is an American web hosting and web development company headquartered in San Francisco and is a subsidiary of Block, Inc. It was founded in 2006 by Chief Executive Officer David Rusenko, Chief Technology Officer Chris Fanini, and former Chief Product Officer Dan Veltri.
History
The company's primary focus was to create software that made it easy for individuals to build personal websites. Formal development of Weebly started in January 2006, with an invitational beta release announced in June 2006 and an official private-beta launch in September 2006.
Throughout its history, Weebly received funding from various investors, including angel investors and venture capital firms. In 2018, co-founder Dan Veltri left the company.
Features
In March 2007, Weebly re-launched with its "WYSIWYG" editing interface, "Pro" accounts and Google AdSense monetization features, as well as compatibility with Google Chrome and Safari. In 2010, the company added French, Italian, Spanish, and Chinese languages, followed by integrated JotForm software into its services. On October 1, 2015, Weebly Carbon was released to allow plugin integration among other features. In 2016, Weebly began focusing on its e-commerce offerings with the release of Weebly 4 and Weebly Promote, an integrated marketing tool.
As more sellers began using the platform, the company added features for doing taxes, integrations with Shippo, a Facebook Ad creator, email marketing and lead capture, abandoned-cart features, Mobile 5.0 to help sellers run their stores from anywhere, and deep integrations with Square payment processing.
Weebly initially faced criticism for its lack of CSS/HTML editing support, but this functionality was added in 2009.
Offices
The company expanded its offices, including a 36,000-square-foot warehouse in San Francisco, to accommodate its growing team. Additionally, Weebly opened a Berlin office in 2015/2016 to offer European-based support and marketing.
Acquisition |
https://en.wikipedia.org/wiki/Hyperplastic%20polyp | A hyperplastic polyp is a type of colorectal polyp.
Cancer risk
Most hyperplastic polyps are found in the distal colon and rectum. They have no malignant potential, which means that they are no more likely than normal tissue to eventually become a cancer.
Hyperplastic polyps on the right side of the colon do exhibit malignant potential. This occurs through multiple mutations that affect the DNA-mismatch-repair pathways; as a result, DNA mutations during replication are not repaired. This leads to microsatellite instability, which can eventually lead to malignant transformation in polyps on the right side of the colon.
Serrated polyposis syndrome
Serrated polyposis syndrome is a rare condition that has been defined by the World Health Organization as either:
≥5 serrated lesions/polyps proximal to the rectum, all ≥ 5 mm in size, with two lesions ≥10 mm
>20 serrated lesions/polyps of any size distributed throughout the large bowel, with ≥5 proximal to the rectum.
Histopathology
Histopathologically, there are two main types of hyperplastic polyps, which have genetic differences as well as different histologic structure, but no significant clinical differences. The two main types are the microvesicular mucin-rich type and the goblet cell-rich type. A rare mucin-poor type with eosinophilic cytoplasm was previously described, but it is no longer considered a distinct subtype.
Mucin-rich type
The luminal portion has a serrated ("saw tooth") appearance formed by tufts or folds of abundant apical cytoplasm. It contains glands with star-shaped lumina. The crypts are elongated but straight, narrow and hyperchromatic at the base. All crypts reach the muscularis mucosae. The basement membrane is frequently thickened.
Goblet cell-rich type
This type shows elongated, fat crypts and little to no serration. Therefore, these polyps may not be obvious without comparison to the adjacent normal intestinal wall.
They are filled with goblet cells, e |
https://en.wikipedia.org/wiki/Deterministic%20system | In mathematics, computer science and physics, a deterministic system is a system in which no randomness is involved in the development of future states of the system. A deterministic model will thus always produce the same output from a given starting condition or initial state.
In physics
Physical laws that are described by differential equations represent deterministic systems, even though the state of the system at a given point in time may be difficult to describe explicitly.
In quantum mechanics, the Schrödinger equation, which describes the continuous time evolution of a system's wave function, is deterministic. However, the relationship between a system's wave function and the observable properties of the system appears to be non-deterministic.
In mathematics
The systems studied in chaos theory are deterministic. If the initial state were known exactly, then the future state of such a system could theoretically be predicted. However, in practice, knowledge about the future state is limited by the precision with which the initial state can be measured, and chaotic systems are characterized by a strong dependence on the initial conditions. This sensitivity to initial conditions can be measured with Lyapunov exponents.
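As an illustration (a minimal sketch, not from the article), the logistic map with r = 4 is deterministic yet chaotic: two nearby initial states separate exponentially, and the long-run average of ln|f'(x)| recovers the map's known Lyapunov exponent ln 2 ≈ 0.693.

import math

# Logistic map x -> r*x*(1 - x): fully deterministic, chaotic for r = 4.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

# Two initial states differing by 1e-12 diverge exponentially fast.
x, y = 0.3, 0.3 + 1e-12
for step in range(1, 41):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

# Estimate the Lyapunov exponent as the average of ln|f'(x)| along an orbit;
# f'(x) = r*(1 - 2x), and the known value for r = 4 is ln 2 ~ 0.693.
x, acc, n = 0.3, 0.0, 100_000
for _ in range(n):
    acc += math.log(max(abs(4.0 * (1.0 - 2.0 * x)), 1e-300))
    x = logistic(x)
print(f"estimated Lyapunov exponent: {acc / n:.3f}")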
Markov chains and other random walks are not deterministic systems, because their development depends on random choices.
In computer science
A deterministic model of computation, for example a deterministic Turing machine, is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state.
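For instance (a minimal illustrative sketch, not from the article), a deterministic transition table maps each (state, symbol) pair to exactly one action, so the whole run is fixed by the input:

# A deterministic machine: the transition table maps (state, symbol) to
# exactly one (state, symbol, move), so the run is fully determined by the
# input. This toy machine flips bits and halts at the first blank.
TABLE = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", " "): ("halt", " ", 0),
}

def run(tape_str):
    tape = list(tape_str) + [" "]
    state, head = "flip", 0
    while state != "halt":
        state, tape[head], move = TABLE[(state, tape[head])]
        head += move
    return "".join(tape).rstrip()

print(run("0110"))  # always prints 1001: same input, same output, same states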
A deterministic algorithm is an algorithm which, given a particular input, will always produce the same output, with the underlying machine always passing through the same sequence of states. There may be non-deterministic algorithms that run on a deterministic machine, for example, an algorithm that relies on random choices. Generally, for such random choices, one |
https://en.wikipedia.org/wiki/Boundless%20%28video%20game%29 | Boundless is a massively multiplayer online sandbox game, developed by Guildford-based studio Wonderstruck Games. It was released through Early Access on Steam for Microsoft Windows and OS X on November 13, 2014. It was fully released on September 11, 2018, and features online cross-play across all regions with PlayStation 4.
Gameplay
Played in either first-person or third-person perspective, players control a customisable avatar around procedurally generated planets, made up of various types and shapes of blocks. Players interact with these blocks and discover ancient technologies to help craft tools, weapons and machines which can then be used to shape the world around them into buildings, vast cities and guilds, and eventually allow players to open warps and portals to other worlds.
Players are able to gather resources from their local environment, which are then used to survive and build equipment. Bases are built in the form of Beacons, which reserve an area for a particular player, protecting the land from being mined or otherwise edited by others.
Players must also deal with hostile wildlife and survival matters such as hunger to survive in the alien universe of the game. Players are able to explore solo, or to play together in groups. Large groups of players collaborate to build intricate projects or undertake large-scale hunts.
Development
In July 2014, the game was announced as a goal-funded project under the title of Oort Online. It quickly gained traction with its online audience as a browser-based game and reached its first funding goal in early August 2014. After quickly outgrowing the capabilities of the browser and the scope of the original game, it was decided that Oort Online would appeal to a wider audience under the Steam Greenlight system and was successfully greenlit in September 2014.
Oort Online underwent a name change in October 2015 to the current title Boundless. Also in October 2015, it was announced that Boundless had been backed |
https://en.wikipedia.org/wiki/Biosatellite%20program | NASA's Biosatellite program was a series of three uncrewed artificial satellites to assess the effects of spaceflight, especially radiation and weightlessness, on living organisms. Each was designed to reenter Earth's atmosphere and be recovered at the end of its mission.
Its primary goal was to determine the effects of space environment, particularly weightlessness, on life processes at three levels of organization: basic biochemistry of the cell; structure of growth of cells and tissues; and growth and form of entire plants and animals.
Biosatellite 1
Biosatellite 1, also known as Biosat 1 and Biosatellite A, was the first mission in the Biosatellite program. It was launched on December 14, 1966, by a Delta G rocket from Launch Complex 17A of the Cape Canaveral Air Force Station. Biosatellite 1 was the first in the series of Biosatellite satellites. It was inserted into an initial orbit of 296 km perigee, 309 km apogee and 33.5 degrees of orbital inclination, with a period of 90.5 minutes.
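The quoted period can be cross-checked from the perigee and apogee using Kepler's third law (a minimal sketch; the gravitational parameter and Earth radius below are standard constants, not values from the article):

import math

MU_EARTH = 3.986004418e5   # km^3/s^2, Earth's gravitational parameter
R_EARTH = 6371.0           # km, mean Earth radius

perigee, apogee = 296.0, 309.0                  # km above the surface
a = R_EARTH + (perigee + apogee) / 2.0          # semi-major axis in km
T = 2.0 * math.pi * math.sqrt(a**3 / MU_EARTH)  # orbital period in seconds
print(f"period = {T / 60.0:.1f} min")           # ~90.4 min, matching ~90.5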
Biosatellite 1 carried several specimens for the study of the effects of the space environment on biological processes. Prior to reentry, the entry capsule separated from the satellite bus properly, but its deorbit motor failed to ignite, leaving it stranded in a slowly decaying orbit. It re-entered and disintegrated on February 15, 1967.
Biosatellite 2
Biosatellite 2, also known as Biosat 2 and Biosatellite B, was the second mission in the Biosatellite program. It was launched on September 7, 1967, by a Delta G rocket from Launch Complex 17B of the Cape Canaveral Air Force Station.
Biosatellite 2 carried thirteen biological experiments involving insects, frog eggs, plants and microorganisms. The mission was ended early due to a tropical storm threat.
Despite returning approximately a day early, its 45 hours of earth-orbital flight enabled valid conclusions to be made in the thirteen experiments on board, by comparing the onboard samples to earthbound control organisms.
Biosatellite 3
|
https://en.wikipedia.org/wiki/Qualitative%20research | Qualitative research is a type of research that aims to gather and analyse non-numerical (descriptive) data in order to gain an understanding of individuals' social reality, including understanding their attitudes, beliefs, and motivation. This type of research typically involves in-depth interviews, focus groups, or observations in order to collect data that is rich in detail and context. Qualitative research is often used to explore complex phenomena or to gain insight into people's experiences and perspectives on a particular topic. It is particularly useful when researchers want to understand the meaning that people attach to their experiences or when they want to uncover the underlying reasons for people's behavior. Qualitative methods include ethnography, grounded theory, discourse analysis, and interpretative phenomenological analysis. Qualitative research methods have been used in sociology, anthropology, political science, psychology, communication studies, social work, folklore, educational research, information science and software engineering research.
Background
Qualitative research has been informed by several strands of philosophical thought and examines aspects of human life, including culture, expression, beliefs, morality, life stress, and imagination. Contemporary qualitative research has been influenced by a number of branches of philosophy, for example, positivism, postpositivism, critical theory, and constructivism.
The historical transitions or 'moments' in qualitative research, together with the notion of 'paradigms' (Denzin & Lincoln, 2005), have received widespread popularity over the past decades. However, some scholars have argued that the adoptions of paradigms may be counterproductive and lead to less philosophically engaged communities. In this regard, Pernecky proposed an alternative way to implementing philosophical concerns in qualitative inquiry so that researchers are able to maintain the needed intellectual mobility and elast |
https://en.wikipedia.org/wiki/OpenSC | OpenSC is a set of software tools and libraries to work with smart cards, with the focus on smart cards with cryptographic capabilities. OpenSC facilitates the use of smart cards in security applications such as authentication, encryption and digital signatures. OpenSC implements the PKCS #15 standard and the PKCS #11 API. For its reader backend OpenSC can use either CT-API or PC/SC.
It also provides some support for Common Data Security Architecture (CDSA) on macOS and Microsoft CryptoAPI on Windows, but this support is still a work in progress. |
https://en.wikipedia.org/wiki/Orion%20%28system-on-a-chip%29 | Orion is a system-on-a-chip manufactured by Marvell Technology Group and used in network-attached storage. Based on the ARMv5TE architecture, it has on-chip support for Ethernet, SATA and USB, and is used in hardware made by Hewlett-Packard and D-Link among others. It is supported by the Lenny release of Debian GNU/Linux. |
https://en.wikipedia.org/wiki/163rd%20meridian%20east | The meridian 163° east of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Asia, the Pacific Ocean, the Southern Ocean, and Antarctica to the South Pole.
The 163rd meridian east forms a great circle with the 17th meridian west.
From Pole to Pole
Starting at the North Pole and heading south to the South Pole, the 163rd meridian east passes through:
Arctic Ocean
East Siberian Sea
Russia: Chukotka Autonomous Okrug, then Magadan Oblast, then Kamchatka Krai
Sea of Okhotsk (Penzhin Bay)
Russia: Kamchatka Krai (Kamchatka Peninsula)
Bering Sea (Karaginsky Gulf)
Russia: Kamchatka Krai (Kamchatka Peninsula)
Bering Sea (Ozernoy Bay)
Russia: Kamchatka Krai (Kamchatka Peninsula)
Pacific Ocean
Federated States of Micronesia: island of Kosrae
Pacific Ocean (passing just east of Sikaiana atoll)
Coral Sea
https://en.wikipedia.org/wiki/Saha%20ionization%20equation | In physics, the Saha ionization equation is an expression that relates the ionization state of a gas in thermal equilibrium to the temperature and pressure. The equation is a result of combining ideas of quantum mechanics and statistical mechanics and is used to explain the spectral classification of stars. The expression was developed by physicist Meghnad Saha in 1920.
Description
For a gas at a high enough temperature (here measured in energy units, i.e. keV or J) and/or density, the thermal collisions of the atoms will ionize some of the atoms, making an ionized gas. When several or more of the electrons that are normally bound to the atom in orbits around the atomic nucleus are freed, they form an independent electron gas cloud co-existing with the surrounding gas of atomic ions and neutral atoms. With sufficient ionization, the gas can become the state of matter called plasma.
The Saha equation describes the degree of ionization for any gas in thermal equilibrium as a function of the temperature, density, and ionization energies of the atoms. The Saha equation only holds for weakly ionized plasmas for which the Debye length is small. This means that the screening of the Coulomb interaction of ions and electrons by other ions and electrons is negligible. The subsequent lowering of the ionization potentials and the "cutoff" of the partition function is therefore also negligible.
For a gas composed of a single atomic species, the Saha equation is written:

$$\frac{n_{i+1}\, n_e}{n_i} = \frac{2}{\lambda^3} \frac{g_{i+1}}{g_i} \exp\left[-\frac{\epsilon_{i+1} - \epsilon_i}{T}\right]$$

where:
$n_i$ is the density of atoms in the i-th state of ionization, that is, with i electrons removed,
$g_i$ is the degeneracy of states for the i-ions,
$\epsilon_i$ is the energy required to remove i electrons from a neutral atom, creating an i-level ion,
$n_e$ is the electron density,
$\lambda$ is the thermal de Broglie wavelength of an electron, $\lambda = \sqrt{h^2 / (2\pi m_e T)}$,
$m_e$ is the mass of an electron,
$T$ is the temperature of the gas (here in energy units),
$h$ is Planck's constant.
The expression $(\epsilon_{i+1} - \epsilon_i)$ is the energy required to remove the $(i+1)$-th electron. In the case where only one level of ionizati
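As a quick numerical illustration (a minimal sketch, not from the article; the degeneracy ratio 1/2 and the 13.6 eV ionization energy are standard hydrogen values, and the constants are CODATA values), the right-hand side of the equation above can be evaluated directly:

import math

H = 6.62607015e-34       # J s, Planck constant
M_E = 9.1093837015e-31   # kg, electron mass
EV = 1.602176634e-19     # J per electronvolt

def saha_rhs(T_eV, chi_eV=13.6, g_ratio=0.5):
    """Return n_{i+1}*n_e/n_i in m^-3 for hydrogen, T in energy units (eV)."""
    T = T_eV * EV                                 # temperature as an energy, J
    lam = H / math.sqrt(2.0 * math.pi * M_E * T)  # thermal de Broglie wavelength, m
    return 2.0 * g_ratio * math.exp(-chi_eV / T_eV) / lam**3

for T_eV in (0.5, 1.0, 2.0):
    print(f"T = {T_eV} eV: n_II*n_e/n_I = {saha_rhs(T_eV):.3e} m^-3")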
https://en.wikipedia.org/wiki/PRAM%20consistency | PRAM consistency (pipelined random access memory), also known as FIFO consistency, is a consistency model for shared-memory systems.
All processes see memory writes from one process in the order they were issued from the process.
Writes from different processes may be seen in a different order on different processes. Only the write order needs to be consistent, thus the name pipelined.
PRAM consistency is easy to implement. In effect it says that there are no guarantees about the order in which different processes see writes, except that two or more writes from a single source must arrive in order, as though they were in a pipeline.
P1: W(x)1
P2:        R(x)1  W(x)2
P3:               R(x)1  R(x)2
P4:               R(x)2  R(x)1
Time ---->
Fig: A valid sequence of events for PRAM consistency.
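A minimal sketch (not from the article) of how this can be modeled: each reader keeps one FIFO queue per writer, so writes from one process are applied in issue order while writes from different processes may be interleaved differently at different readers, reproducing the P3/P4 outcome above.

from collections import deque

class Reader:
    """A process that applies remote writes, one FIFO queue per writer."""
    def __init__(self):
        self.queues = {}   # writer id -> FIFO of pending (var, value) writes
        self.memory = {}   # variable -> last value applied

    def receive(self, writer, var, value):
        self.queues.setdefault(writer, deque()).append((var, value))

    def apply_next(self, writer):
        var, value = self.queues[writer].popleft()
        self.memory[var] = value

p3, p4 = Reader(), Reader()
for r in (p3, p4):
    r.receive("P1", "x", 1)   # P1: W(x)1
    r.receive("P2", "x", 2)   # P2: W(x)2

# P3 applies P1's write first; P4 applies P2's first. Both are legal under
# PRAM consistency because each per-writer FIFO order is respected.
p3.apply_next("P1"); p3.apply_next("P2")
p4.apply_next("P2"); p4.apply_next("P1")
print(p3.memory["x"], p4.memory["x"])   # -> 2 1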
The above sequence is not valid for causal consistency, because W(x)1 and W(x)2 are causally related, so all processes must read them in the same order. |
https://en.wikipedia.org/wiki/Molecular%20Cell | Molecular Cell is a peer-reviewed scientific journal that covers research on cell biology at the molecular level, with an emphasis on new mechanistic insights. It was established in 1997 and is published two times per month. Its 2021 impact factor is 19.328. Molecular Cell is a Cell Press journal (an imprint of Elsevier) and is a companion to Cell.
Launched in December 1997, Molecular Cell publishes a relatively small number of papers, up to 15 articles per issue. Coverage ranges from structure to human disease, concentrating on molecular analyses. Topics include gene expression, RNA processing, replication, recombination and repair, structure, chaperones, receptors, signal transduction, cell cycle, metabolism, and tumorigenesis.
The majority of papers published in Molecular Cell are articles in the format familiar from Cell. However, it also publishes short papers (up to 6 published pages) that make focused contributions on points of general interest.
External links
Academic journals established in 1997
Molecular and cellular biology journals
Biweekly journals
English-language journals
Cell Press academic journals |
https://en.wikipedia.org/wiki/Schwan%27s%20Company | Schwan's Company, formerly known as The Schwan Food Company, is a food company with approximately 7,500 employees. Having originated in the United States as a family-owned business, since 2019 the company has been a subsidiary of CJ CheilJedang of South Korea — with five major business units including Schwan's Consumer Brands, Schwan's Food Service, Strategic Partner Solutions and SFC Global Supply Chain. Schwan's Company no longer owns the home-delivery business that was known as Schwan's Home Service, which is currently rebranding to Yelloh.
The Schwan family maintains 100 percent ownership of Minnesota-based Schwan's Home Service, a privately held, independent entity traced to the company's home-delivery business launched by Marvin Schwan in 1952. Schwan's Home Service sells frozen foods from home delivery trucks, in grocery store freezers, by mail, and to the food service industry.
Schwan's Company is widely known for its consumer brands, including Freschetta Pizza, Tony's Frozen Pizza, Mrs. Smith's Pies, Edwards Frozen Pies, Pagoda Egg Rolls, and Red Baron Pizza.
Corporate divisions
The company's major business units include Schwan's Consumer Brands, Schwan's Food Service, and SFC Global Supply Chain.
Schwan's Consumer Brands markets frozen food products in grocery stores primarily in the Western Hemisphere.
Schwan's Food Service markets and distributes frozen-food products to the food service industry.
SFC Global Supply Chain is a manufacturing cooperative that coordinates the company's production processes and helps develop new products.
Company history
In 1952, Marvin Schwan (1929–1993) began home delivery of his family's homemade ice cream (Schwan's Dairy and Dairy Lunch) to rural western Minnesota. Schwan's expanded to cover the Midwestern United States and made a number of acquisitions, including the Holiday Ice Cream Company and Russell Dairy. In 1957, the product line was expanded to include juice concentrates, and in 1962, Schwan's began |
https://en.wikipedia.org/wiki/WSTR-TV | WSTR-TV (channel 64), branded on-air as Star 64 (stylized as STAR64), is a television station in Cincinnati, Ohio, United States, affiliated with MyNetworkTV. It is owned by Deerfield Media, which maintains a local marketing agreement (LMA) with Sinclair Broadcast Group, owner of dual CBS/CW affiliate WKRC-TV (channel 12), for the provision of advertising sales and other services. The two stations share studios on Highland Avenue in the Mount Auburn section of Cincinnati; WSTR's transmitter, Star Tower, is located in the city's College Hill neighborhood.
WSTR-TV began broadcasting in 1980 as WBTI, which broadcast a mix of commercial advertising-supported and subscription television (STV) programs. The STV programming was relegated into overnight hours (before being dropped altogether) at the start of 1985, making way for the station to become an independent station under the callsign WIII. After financial trouble, channel 64 stabilized under ABRY Communications before being purchased by Sinclair in 1996. It was briefly an affiliate of UPN before switching to The WB in 1998 and becoming part of MyNetworkTV in 2006. WKRC-TV produces dedicated morning and late evening newscasts for air on WSTR-TV. The station is one of Cincinnati's two ATSC 3.0 (NextGen TV) transmitters, serving the market's major commercial stations, which each broadcast some of WSTR-TV's subchannels on its behalf.
History
Construction and subscription television years
On June 30, 1977, the Federal Communications Commission (FCC) granted a construction permit to Buford Television of Ohio, Inc., for a new channel 64 television station in Cincinnati, Ohio. WBTI signed on the air on January 28, 1980. It broadcast with one million watts of power and operated from studios on Fishwick Drive in the Bond Hill area; the station's original transmitter was located on Chickasaw Street.
WBTI was conceived and began broadcasting as a hybrid. During the day, it was an advertiser-supported, general-entertainment |
https://en.wikipedia.org/wiki/On%20Aggression | On Aggression (German: Das sogenannte Böse. Zur Naturgeschichte der Aggression, "So-called Evil: on the natural history of aggression") is a 1963 book by the ethologist Konrad Lorenz; it was translated into English in 1966. As he writes in the prologue, "the subject of this book is aggression, that is to say the fighting instinct in beast and man which is directed against members of the same species." (Page 3)
The book was reviewed many times, both positively and negatively, by biologists, anthropologists, psychoanalysts and others. Much criticism was directed at Lorenz's extension of his findings on non-human animals to humans.
Publication
On Aggression was first published in German in 1963, and in English in 1966. It has been reprinted many times and translated into at least 12 languages.
Content
Programming
According to Lorenz, animals, particularly males, are biologically programmed to fight over resources. This behavior must be considered part of natural selection, as aggression leading to death or serious injury may eventually lead to extinction unless it has such a role.
However, Lorenz does not state that aggressive behaviors are in any way more powerful, prevalent, or intense than more peaceful behaviors such as mating rituals. Rather, he rejects the categorization of aggression as "contrary" to "positive" instincts like love, depicting it instead as a founding basis of other instincts and a part of animal communication.
Hydraulic model
Additionally, Lorenz addresses behavior in humans, including discussion of a "hydraulic" model of emotional or instinctive pressures and their release, shared by Freud's psychoanalytic theory, and the abnormality of intraspecies violence and killing. Lorenz claimed that "present-day civilized man suffers from insufficient discharge of his aggressive drive" and suggested that low levels of aggressive behaviour prevented higher level responses resulting from "damming" them. His 'hydraulic' model, of aggression as a force that builds relentlessly without cause unless released, remains |
https://en.wikipedia.org/wiki/%27t%20Hooft%E2%80%93Polyakov%20monopole |
In theoretical physics, the 't Hooft–Polyakov monopole is a topological soliton similar to the Dirac monopole but without the Dirac string. It arises in the case of a Yang–Mills theory with a gauge group G, coupled to a Higgs field which spontaneously breaks it down to a smaller group H via the Higgs mechanism. It was first found independently by Gerard 't Hooft and Alexander Polyakov.
Unlike the Dirac monopole, the 't Hooft–Polyakov monopole is a smooth solution with a finite total energy. The solution is localized around $r = 0$. Very far from the origin, the gauge group G is broken to H, and the 't Hooft–Polyakov monopole reduces to the Dirac monopole.
However, at the origin itself, the G gauge symmetry is unbroken and the solution is non-singular also near the origin. The Higgs field $H^a$ is proportional to $x_a f(|x|)$, where the adjoint indices are identified with the three-dimensional spatial indices. The gauge field at infinity is such that the Higgs field's dependence on the angular directions is pure gauge. The precise configuration for the Higgs field and the gauge field near the origin is such that it satisfies the full Yang–Mills–Higgs equations of motion.
Mathematical details
Suppose the vacuum is the vacuum manifold Σ. Then, for finite energies, as we move along each direction towards spatial infinity, the state along the path approaches a point on the vacuum manifold Σ. Otherwise, we would not have a finite energy. In topologically trivial 3 + 1 dimensions, this means spatial infinity is homotopically equivalent to the topological sphere $S^2$. So, the superselection sectors are classified by the second homotopy group of Σ, $\pi_2(\Sigma)$.
In the special case of a Yang–Mills–Higgs theory, the vacuum manifold is isomorphic to the quotient space G/H and the relevant homotopy group is π2(G/H). This does not actually require the existence of a scalar Higgs field. Most symmetry breaking mechanisms (e.g. technicolor) would also give rise to a 't Hooft–Polyakov monopole.
It is easy to |
https://en.wikipedia.org/wiki/Local%20convergence | In numerical analysis, an iterative method is called locally convergent if the successive approximations produced by the method are guaranteed to converge to a solution when the initial approximation is already close enough to the solution. Iterative methods for nonlinear equations and their systems, such as Newton's method, are usually only locally convergent.
An iterative method that converges for an arbitrary initial approximation is called globally convergent. Iterative methods for systems of linear equations are usually globally convergent.
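A minimal sketch of the distinction (the cubic below is a standard textbook example, not from the article): Newton's method on f(x) = x^3 - 2x + 2 converges from a starting point near the real root, but from x0 = 0 it falls into the well-known 0 -> 1 -> 0 cycle and never converges.

def newton(x, steps=50):
    # Newton iteration for f(x) = x^3 - 2x + 2, with f'(x) = 3x^2 - 2.
    for _ in range(steps):
        x = x - (x**3 - 2*x + 2) / (3*x**2 - 2)
    return x

print(newton(-2.0))   # -> -1.7693..., the real root: local convergence
print(newton(0.0))    # -> oscillates between 0 and 1, no convergence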
Numerical analysis
Iterative methods
Optimization algorithms and methods |
https://en.wikipedia.org/wiki/Unspecified%20behavior | Unspecified behavior is behavior that may vary on different implementations of a programming language. A program can be said to contain unspecified behavior when its source code may produce an executable that exhibits different behavior when compiled on a different compiler, or on the same compiler with different settings, or indeed in different parts of the same executable. While the respective language standards or specifications may impose a range of possible behaviors, the exact behavior depends on the implementation and may not be completely determined upon examination of the program's source code. Unspecified behavior will often not manifest itself in the resulting program's external behavior, but it may sometimes lead to differing outputs or results, potentially causing portability problems.
Definition
To enable compilers to produce optimal code for their respective target platforms, programming language standards do not always impose a certain specific behavior for a given source code construct. Failing to explicitly define the exact behavior of every possible program is not considered an error or weakness in the language specification, and doing so would be infeasible. In the C and C++ languages, such non-portable constructs are generally grouped into three categories: Implementation-defined, unspecified, and undefined behavior.
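A language-neutral illustration (a Python sketch; the C and C++ categories above are the article's subject, and this is an analogous case rather than an example from those standards): Python leaves the iteration order of a set unspecified, and CPython additionally randomizes string hashing per process, so the same program may print different orders on different runs or implementations.

# The language does not specify an iteration order for sets, and CPython
# randomizes string hashing per process, so two runs of this same script
# may legitimately print the elements in different orders.
items = {"alpha", "beta", "gamma", "delta"}
print(list(items))

# Portable code must not rely on that order; sorting pins it down.
print(sorted(items))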
The exact definition of unspecified behavior varies. In C++, it is defined as "behavior, for a well-formed program construct and correct data, that depends on the implementation." The C++ Standard also notes that the range of possible behaviors is usually provided. Unlike implementation-defined behavior, there is no requirement for the implementation to document its behavior. Similarly, the C Standard defines it as behavior for which the standard "provides two or more possibilities and imposes no further requirements on which is chosen in any instance". Unspecified behavior is different from undefined behavior. The latter is typi |
https://en.wikipedia.org/wiki/20%20new%20shekel%20banknote | The twenty new shekel note (₪20) is the lowest-value banknote of the Israeli new shekel. It was first issued as Series A in 1988, followed by Series B in 1999 and Series C in 2017.
Design
Security features (New Shekel Series C)
LOOK at the banknote
The transparent portrait: A watermark image of the portrait, identical to the portrait shown on the banknote obverse, with the denomination next to it. Hold the banknote up to the light and make sure that the portrait and the denomination are visible. This feature can be viewed from either side of the banknote.
The perforated numerals: Tiny holes forming the shape of the banknote's denomination (20) are perforated at the top part of the banknote. Hold the banknote up to the light and make sure you notice them. This feature can be viewed from either side of the banknote.
The window thread: A blue-purple security thread is embedded in the banknote and is revealed in three "windows" on the back of the banknote. Hold the banknote up to the light and make sure that the portrait and the nominal value are clearly visible in the windows. The thread will change its shade from blue to purple when tilting the banknote.
FEEL the banknote
The raised ink: The portrait, the Governor's signature, the Hebrew and Gregorian years, text in three languages, and the designated features for the blind on the banknote's margins are printed in intaglio. Touch these details with your finger, on both sides of the banknote, and you can feel the raised ink.
TILT the banknote
The glittering stripe: A transparent and glittering stripe is incorporated into the banknote vertically, next to the portrait. Tilt the banknote in various directions and make sure that the Menorah symbol and the denomination appear and disappear intermittently along the stripe.
The golden book: An artistic reflective foil element in the shape of an "open golden book". Tilt the banknote backward and forward and make sure that the "book" changes its color from gold to gree |
https://en.wikipedia.org/wiki/Computational%20audiology | Computational audiology is a branch of audiology that employs techniques from mathematics and computer science to improve clinical treatments and scientific understanding of the auditory system. Computational audiology is closely related to computational medicine, which uses quantitative models to develop improved methods for general disease diagnosis and treatment.
Overview
In contrast to traditional methods in audiology and hearing science research, computational audiology emphasizes predictive modeling and large-scale analytics ("big data") rather than inferential statistics and small-cohort hypothesis testing. The aim of computational audiology is to translate advances in hearing science, data science, information technology, and machine learning to clinical audiological care. Research to understand hearing function and auditory processing in humans as well as relevant animal species represents translatable work that supports this aim. Research and development to implement more effective diagnostics and treatments represent translational work that supports this aim.
For people with hearing difficulties, tinnitus, hyperacusis, or balance problems, these advances might lead to more precise diagnoses, novel therapies, and advanced rehabilitation options including smart prostheses and e-Health/mHealth apps. For care providers, it can provide actionable knowledge and tools for automating part of the clinical pathway.
The field is interdisciplinary and includes foundations in audiology, auditory neuroscience, computer science, data science, machine learning, psychology, signal processing, natural language processing, and vestibulology.
Applications
In computational audiology, models and algorithms are used to understand the principles that govern the auditory system, to screen for hearing loss, to diagnose hearing disorders, to provide rehabilitation, and to generate simulations for patient education, among others.
Computational models of hearing, speech and a |
https://en.wikipedia.org/wiki/Pinnation | Pinnation (also called pennation) is the arrangement of feather-like or multi-divided features arising from both sides of a common axis. Pinnation occurs in biological morphology, in crystals, such as some forms of ice or metal crystals, and in patterns of erosion or stream beds.
The term derives from the Latin word pinna meaning "feather", "wing", or "fin". A similar concept is "pectination", which is a comb-like arrangement of parts (arising from one side of an axis only). Pinnation is commonly referred to in contrast to "palmation", in which the parts or structures radiate out from a common point. The terms "pinnation" and "pennation" are cognate, and although they are sometimes used distinctly, there is no consistent difference in the meaning or usage of the two words.
Plants
Botanically, pinnation is an arrangement of discrete structures (such as leaflets, veins, lobes, branches, or appendages) arising at multiple points along a common axis. For example, once-divided leaf blades having leaflets arranged on both sides of a rachis are pinnately compound leaves. Many palms (notably the feather palms) and most cycads and grevilleas have pinnately divided leaves. Most species of ferns have pinnate or more highly divided fronds, and in ferns, the leaflets or segments are typically referred to as "pinnae" (singular "pinna"). Plants with pinnate leaves are sometimes colloquially called "feather-leaved". Most of the following definitions are from Jackson's Glossary of Botanical Terms:
Depth of divisions
pinnatifid and pinnatipartite: leaves with pinnate lobes that are not discrete, remaining sufficiently connected to each other that they are not separate leaflets.
pinnatisect: cut all the way to the midrib or other axis, but with the bases of the pinnae not contracted to form discrete leaflets.
pinnate-pinnatifid: pinnate, with the pinnae being pinnatifid.
Number of divisions
paripinnate: pinnately compound leaves in which leaflets are borne in pairs along |
https://en.wikipedia.org/wiki/Gluteome | The term gluteome is used to describe the entire set of all gluten-like proteins in grains, the consumption of which causes clinical manifestations in celiac patients. These proteins include gliadins and glutenins from wheat, secalins from rye, hordeins from barley, avenins from oats and potentially homologues from other related grain species. Since not all grain storage proteins have been identified yet, the term gluteome often refers to the complete set of the known sequences of gluten and gluten-like molecules.
Alternatively, the word gluteome can depict the entire complement of grain-storage proteins in a single grain species at a given time.
The discipline of science dedicated to studying the gluteome is referred to as gluteomics.
Gluten |
https://en.wikipedia.org/wiki/Jugate | A jugate consists of two portraits side by side to suggest, to the viewer, the closeness of each to the other. The word comes from the Latin, jugatus, meaning joined. On coins, it is commonly used for married couples, brothers, or a father and son.
Often these are presidential and vice-presidential candidates, although sometimes a state or local candidate is included with a presidential candidate. Jugates may be seen on medals, pinbacks, buttons, posters or other campaign items. If a third figure appears on the item, it is called a trigate.
Gallery |
https://en.wikipedia.org/wiki/Lara%20Croft | Lara Croft is a character and the main protagonist of the video game franchise Tomb Raider. She is presented as a highly intelligent and athletic British archaeologist who ventures into ancient tombs and hazardous ruins around the world. Created by a team at British developer Core Design that included Toby Gard, the character first appeared in the video game Tomb Raider in 1996.
Core Design handled the initial development of the character and the series. Inspired by strong female icons, Gard designed Lara Croft to counter stereotypical female characters. The company modified the character for subsequent titles, which included graphical improvements and gameplay additions. American developer Crystal Dynamics took over the series after the 2003 sequel Tomb Raider: The Angel of Darkness was received poorly. The new developer rebooted the character along with the video game series by altering her physical proportions and giving her additional ways of interacting with game environments. Croft has been voiced by six actresses in the video game series: Shelley Blond (1996, 2022), Judith Gibbins (1997–98, 2022), Jonell Elliott (1999–2003, 2022), Keeley Hawes (2006–14, 2021), Camilla Luddington (2013–present), and Abigail Stahlschmidt (2015).
Lara Croft has further appeared in video game spin-offs, printed adaptations, a series of animated short films, feature films, and merchandise related to the series. The promotion of the character includes a brand of apparel and accessories, action figures, and model portrayals. She has been licensed for third-party promotion, including television and print advertisements, music-related appearances, and as a spokesmodel.
Critics consider Lara Croft a significant game character in popular culture. She holds six Guinness World Records, has a strong fan following, and is among the first video game characters to be successfully adapted to film. Lara Croft is also considered a sex symbol, one of the earliest in the industry to achieve wid |
https://en.wikipedia.org/wiki/Vanishing%20scalar%20invariant%20spacetime | In mathematical physics, vanishing scalar invariant (VSI) spacetimes are Lorentzian manifolds with all polynomial curvature invariants of all orders vanishing. Although the only Riemannian manifold with VSI property is flat space, the Lorentzian case admits nontrivial spacetimes with this property. Distinguishing these VSI spacetimes from Minkowski spacetime requires comparing non-polynomial invariants or carrying out the full Cartan–Karlhede algorithm on non-scalar quantities.
All VSI spacetimes are Kundt spacetimes. An example with this property in four dimensions is a pp-wave. VSI spacetimes however also contain some other four-dimensional Kundt spacetimes of Petrov type N and III. VSI spacetimes in higher dimensions have similar properties as in the four-dimensional case. |
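For reference, the pp-wave example mentioned above can be written in Brinkmann coordinates (a standard form from the pp-wave literature, not quoted from this article):

$$ds^2 = H(u, x, y)\, du^2 + 2\, du\, dv + dx^2 + dy^2,$$

for which every polynomial curvature invariant vanishes regardless of the profile function $H$.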
https://en.wikipedia.org/wiki/Mickelberry%20Sausage%20Company%20plant%20explosion | The Mickelberry Sausage Company plant explosion was a massive explosion and fire that killed nine people in Chicago on February 7, 1968. The factory made meat products such as hot dogs, lard, and hams under the Mickelberry Old Farm brand. Mickelberry was founded in 1893. It was located near the corner of 49th Place and South Halsted Street in the city's Back of the Yards (Union Stock Yards) district. The dead included four Chicago firefighters; seventy people were injured.
A gasoline truck was making a delivery in the alley behind the plant when a valve on the truck was sheared off by a garbage can, allowing the gasoline to flow freely. The fuel leaked into the basement of the facility, where it ignited, causing a series of explosions.
The company's president ran back into the rubble, apparently to rescue those trapped inside, and was killed. |
https://en.wikipedia.org/wiki/Minneapolis%20wireless%20internet%20network | The city of Minneapolis, Minnesota, is covered by a citywide broadband wireless internet network, sometimes called Wireless Minneapolis. The network was first proposed in 2003, at which point only a few other cities nationwide had such systems in place. Local firm US Internet beat out EarthLink to build and operate the network, with a guaranteed ten-year, multimillion-dollar contract from the city itself as the network's anchor tenant. Construction began on the project in 2006, but encountered several delays. Most of the city was covered by the network by 2010, and USI Wireless, the subsidiary of US Internet responsible for the system, set up numerous free internet access points at public locations around Minneapolis.
The network, which offers speeds of one to six megabits per second at a rate of about $20 per month, had about 20,000 residential subscribers by the end of 2010. Municipally, the network is used by city inspectors and employees, with plans in place for the police and fire departments to use it in the future. In 2007, when the I-35W Mississippi River bridge collapsed, the wireless system helped coordinate rescuers and emergency services. The city and USI Wireless have won praise for the network, which has been singled out for being one of the few successful municipal wireless ventures nationwide among a number of stalled or failed projects.
Background
At the time when the wireless network was under consideration, various other American cities already had such networks or were in the process of constructing them. Chaska and Moorhead, both in Minnesota, had city-owned and -operated wireless networks, while Philadelphia was considering building its own and Corpus Christi, Texas, was experimenting with a specialized government-use-only network.
Before the network was built, Minneapolis's city services were run on a combination of fiber optics and other services, with city inspectors, who worked throughout the city, using Sprint Cellular while working in |
https://en.wikipedia.org/wiki/Cannabis%20strain | Cannabis strains are either pure or hybrid varieties of the plant genus Cannabis, which encompasses the species C. sativa, C. indica, and C. ruderalis.
Varieties are developed to intensify specific characteristics of the plant, or to differentiate the strain for the purposes of marketing or to make it more effective as a drug. Variety names are typically chosen by their growers, and often reflect properties of the plant such as taste, color, smell, or the origin of the variety. The Cannabis strains referred to in this article are primarily those varieties with recreational and medicinal use. These varieties have been cultivated to contain a high percentage of cannabinoids. Several varieties of cannabis, known as hemp, have a very low cannabinoid content, and are instead grown for their fiber and seed.
Major variety types
Taxonomic paradigm
The two species of the Cannabis genus that are most commonly grown are Cannabis indica and Cannabis sativa. A third species, Cannabis ruderalis, is very short and produces only trace amounts of tetrahydrocannabinol (THC), and thus is not commonly grown for industrial, recreational or medicinal use. However, because Cannabis ruderalis flowers independently of the photoperiod and according to age, it has been used to breed autoflowering strains.
Pure sativas are relatively tall (reaching as high as 4.5 meters), with long internodes and branches, and large, narrow-bladed leaves. Pure indica varieties are shorter and bushier, with wider leaflets, and are often favored by indoor growers for their compact size. Sativas bloom later than indicas, often taking a month or two longer to mature. The subjective effects of sativas and indicas are said to differ, but the ratio of tetrahydrocannabinol (THC) to cannabidiol (CBD) in most named drug varieties of both types is similar (averaging about 200:1). Unlike most commercially developed strains, indica landraces exhibit plants with varying THC/CBD ratios. Avidekel, a medical marijuana strain dev
https://en.wikipedia.org/wiki/Photon%20Factory | The Photon Factory (PF) is a synchrotron located at KEK, in Tsukuba, Japan, about fifty kilometres from Tokyo.
There are two major facilities: the Photon Factory itself, which is a 2.5 GeV synchrotron with a beam current of around 450 mA, and the PF-AR 'Advanced Ring for Pulsed X-Rays', which is a 6.5 GeV machine running in a single-bunch mode with a beam current of around 60 mA.
Its macromolecular crystallography beamline is used substantially for Japan's structural genomics project.
External links
Photon Factory - KEK IMSS website
Synchrotron radiation facilities |
https://en.wikipedia.org/wiki/Pasting%20lemma | In topology, the pasting or gluing lemma, and sometimes the gluing rule, is an important result which says that two continuous functions can be "glued together" to create another continuous function. The lemma is implicit in the use of piecewise functions. A more general version, under a weaker condition on the two sets, is given, for example, in the book Topology and Groupoids.
The pasting lemma is crucial to the construction of the fundamental group or fundamental groupoid of a topological space; it allows one to concatenate continuous paths to create a new continuous path.
Formal statement
Let $A$ and $B$ be both closed (or both open) subsets of a topological space $X$ such that $X = A \cup B$, and let $Y$ also be a topological space. If $f : X \to Y$ is continuous when restricted to both $A$ and $B$, then $f$ is continuous.
This result allows one to take two continuous functions defined on closed (or open) subsets of a topological space and create a new one.
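As a concrete illustration (a standard textbook example, not taken from this article), the absolute value function on $\mathbb{R}$ arises by pasting two continuous pieces defined on the closed sets $(-\infty, 0]$ and $[0, \infty)$, which agree at the single overlap point $0$:

$$f(x) = |x| = \begin{cases} -x, & x \in (-\infty, 0], \\ x, & x \in [0, \infty), \end{cases}$$

and the lemma guarantees that $f$ is continuous on all of $\mathbb{R}$.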
Proof: if $U$ is a closed subset of $Y$, then $A \cap f^{-1}(U)$ and $B \cap f^{-1}(U)$ are both closed, since each is the preimage of $U$ under the restriction of $f$ to $A$ and to $B$ respectively, which by assumption are continuous. Then their union, $f^{-1}(U)$, is also closed, being a finite union of closed sets.
A similar argument applies when $A$ and $B$ are both open.
The infinite analog of this result (where $X = \bigcup_i A_i$) is not true for closed sets $A_i$. For instance, the inclusion map from the integers to the real line (with the integers equipped with the cofinite topology) is continuous when restricted to each integer, but the inverse image of a bounded open set in the reals under this map is at most a finite set of points, so not open in $\mathbb{Z}$.
It is, however, true if the $A_i$ form a locally finite collection, since a union of locally finite closed sets is closed. Similarly, it is true if the $A_i$ are instead assumed to be open, since a union of open sets is open. |
https://en.wikipedia.org/wiki/Interleukin%2034 | Interleukin 34 (IL-34) is a protein belonging to a group of cytokines called interleukins. It was originally identified in humans by large-scale screening of secreted proteins; chimpanzee, murine, rat and chicken interleukin 34 orthologs have also been found. The protein is composed of 241 amino acids, is 39 kilodaltons in mass, and forms homodimers. IL-34 increases growth or survival of immune cells known as monocytes; it elicits its activity by binding the colony stimulating factor 1 receptor.
Messenger RNA (mRNA) expression of human IL-34 is most abundant in spleen but occurs in several other tissues: thymus, liver, small intestine, colon, prostate gland, lung, heart, brain, kidney, testes, and ovary. The discovery of IL-34 protein in the red pulp of the spleen suggests involvement in growth and development of myeloid cells, consistent with its activity on monocytes. |
https://en.wikipedia.org/wiki/Hamaker%20constant | In molecular physics, the Hamaker constant (denoted $A$; named for H. C. Hamaker) is a physical constant that can be defined for a van der Waals (vdW) body–body interaction:

$$A = \pi^2 C \rho_1 \rho_2,$$

where $\rho_1$ and $\rho_2$ are the number densities of the two interacting kinds of particles, and $C$ is the London coefficient in the particle–particle pair interaction. The magnitude of this constant reflects the strength of the vdW-force between two particles, or between a particle and a substrate.

The Hamaker constant provides the means to determine the interaction parameter $C$ from the vdW-pair potential $w(r) = -C/r^6$.
Hamaker's method and the associated Hamaker constant ignores the influence of an intervening medium between the two particles of interaction. In 1956 Lifshitz developed a description of the vdW energy but with consideration of the dielectric properties of this intervening medium (often a continuous phase).
The Van der Waals forces are effective only up to several hundred angstroms. When the interactions are too far apart, the dispersion potential decays faster than $1/r^6$; this is called the retarded regime, and the result is a Casimir–Polder force.
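Plugging representative numbers into the definition above (a minimal sketch; the London coefficient and number densities are assumed order-of-magnitude values, not data from the article) lands near the typical 1e-19 J scale of Hamaker constants:

import math

C = 1.0e-77           # J m^6, London coefficient (assumed illustrative value)
rho1 = rho2 = 3.0e28  # m^-3, number densities (assumed illustrative values)

A = math.pi**2 * C * rho1 * rho2           # A = pi^2 * C * rho1 * rho2
print(f"Hamaker constant A = {A:.2e} J")   # ~9e-20 J, the expected scale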
See also
Hamaker theory
Intermolecular forces
van der Waals Forces |
https://en.wikipedia.org/wiki/Oneirology | In the field of psychology, the subfield of oneirology (; from Greek ὄνειρον, oneiron, "dream"; and -λογία, -logia, "the study of") is the scientific study of dreams. Current research seeks correlations between dreaming and current knowledge about the functions of the brain, as well as understanding of how the brain works during dreaming as pertains to memory formation and mental disorders. The study of oneirology can be distinguished from dream interpretation in that the aim is to quantitatively study the process of dreams instead of analyzing the meaning behind them.
History
In the 19th century, two advocates of this discipline were the French sinologist Marquis d'Hervey de Saint Denys and Alfred Maury. The field gained momentum in 1952, when Nathaniel Kleitman and his student Eugene Aserinsky discovered regular cycles in sleep. A further experiment by Kleitman and William C. Dement, then another medical student, demonstrated the particular period of sleep during which electrical brain activity, as measured by an electroencephalograph (EEG), closely resembled that of waking, and in which the eyes dart about actively. This kind of sleep became known as rapid eye movement (REM) sleep, and Kleitman and Dement's experiment found a correlation of 0.80 between REM sleep and dreaming.
Field of work
Research into dreams includes exploration of the mechanisms of dreaming, the influences on dreaming, and disorders linked to dreaming. Work in oneirology overlaps with neurology and can vary from quantifying dreams, to analyzing brain waves during dreaming, to studying the effects of drugs and neurotransmitters on sleeping or dreaming. Though debate continues about the purpose and origins of dreams, there could be great gains from studying dreams as a function of brain activity. For example, knowledge gained in this area could have implications in the treatment of certain mental illnesses.
Mechanisms of dreaming
Dreaming occurs mainly during REM sleep, and brain scans recording brain |
https://en.wikipedia.org/wiki/Hypergeometric%20function%20of%20a%20matrix%20argument | In mathematics, the hypergeometric function of a matrix argument is a generalization of the classical hypergeometric series. It is a function defined by an infinite summation which can be used to evaluate certain multivariate integrals.
Hypergeometric functions of a matrix argument have applications in random matrix theory. For example, the distributions of the extreme eigenvalues of random matrices are often expressed in terms of the hypergeometric function of a matrix argument.
Definition
Let $p \ge 0$ and $q \ge 0$ be integers, and let $X$ be an $n \times n$ complex symmetric matrix. Then the hypergeometric function of a matrix argument $X$ and parameter $\alpha > 0$ is defined as

$${}_pF_q^{(\alpha)}(a_1, \ldots, a_p; b_1, \ldots, b_q; X) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{1}{k!} \cdot \frac{(a_1)_\kappa^{(\alpha)} \cdots (a_p)_\kappa^{(\alpha)}}{(b_1)_\kappa^{(\alpha)} \cdots (b_q)_\kappa^{(\alpha)}} \cdot C_\kappa^{(\alpha)}(X),$$

where $\kappa \vdash k$ means $\kappa$ is a partition of $k$, $(a)_\kappa^{(\alpha)}$ is the generalized Pochhammer symbol, and $C_\kappa^{(\alpha)}(X)$ is the "C" normalization of the Jack function.
Two matrix arguments
If $X$ and $Y$ are two complex symmetric matrices, then the hypergeometric function of two matrix arguments is defined as:

$${}_pF_q^{(\alpha)}(a_1, \ldots, a_p; b_1, \ldots, b_q; X, Y) = \sum_{k=0}^{\infty} \sum_{\kappa \vdash k} \frac{1}{k!} \cdot \frac{(a_1)_\kappa^{(\alpha)} \cdots (a_p)_\kappa^{(\alpha)}}{(b_1)_\kappa^{(\alpha)} \cdots (b_q)_\kappa^{(\alpha)}} \cdot \frac{C_\kappa^{(\alpha)}(X)\, C_\kappa^{(\alpha)}(Y)}{C_\kappa^{(\alpha)}(I)},$$

where $I$ is the identity matrix of size $n$.
Not a typical function of a matrix argument
Unlike other functions of matrix argument, such as the matrix exponential, which are matrix-valued, the hypergeometric function of (one or two) matrix arguments is scalar-valued.
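In the 1 x 1 case the Jack function reduces to a power of the scalar argument and the series collapses to the classical hypergeometric series. A minimal numerical sketch of that degenerate case (not from the article; assumes SciPy is available) checks the confluent series against SciPy's implementation:

from math import factorial
from scipy.special import hyp1f1

def hyp1f1_series(a, b, x, terms=60):
    """Classical 1F1(a; b; x) summed directly from its series definition."""
    total, poch_a, poch_b = 0.0, 1.0, 1.0   # (a)_0 = (b)_0 = 1
    for k in range(terms):
        total += (poch_a / poch_b) * x**k / factorial(k)
        poch_a *= a + k   # build (a)_{k+1} from (a)_k
        poch_b *= b + k
    return total

print(hyp1f1_series(1.5, 2.5, 0.7))  # direct series
print(hyp1f1(1.5, 2.5, 0.7))         # SciPy reference value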
The parameter α
In many publications the parameter $\alpha$ is omitted. Also, in different publications different values of $\alpha$ are being implicitly assumed. For example, in the theory of real random matrices (see, e.g., Muirhead, 1984), $\alpha = 2$, whereas in other settings (e.g., in the complex case; see Gross and Richards, 1989), $\alpha = 1$. To make matters worse, in random matrix theory researchers tend to prefer a parameter called $\beta$ instead of $\alpha$, which is used in combinatorics.

The thing to remember is that

$$\alpha = \frac{2}{\beta}.$$

Care should be exercised as to whether a particular text is using the parameter $\alpha$ or $\beta$, and which particular value of that parameter is assumed.

Typically, in settings involving real random matrices, $\beta = 1$ and thus $\alpha = 2$. In settings involving complex random matrices, one has $\beta = 2$ and $\alpha = 1$. |
https://en.wikipedia.org/wiki/Secure64%20Software | Secure64 Software Corporation is a software development company headquartered in Fort Collins, CO, USA, building server applications.
History
Secure64 was founded in 2002 and began full-scale development in 2005. Its founders include Bill Worley, CTO, a former chief scientist of Hewlett Packard and lead developer of PA-RISC and PA-WideWord technologies. Secure64 has filed for several patents.
Technology
SourceT Micro OS
The SourceT Micro OS executes on standard Itanium server hardware, and provides the foundation for Secure64 software applications. Secure64 uses the term "micro OS" to describe SourceT, because, although it shares attributes of traditional microkernels and monolithic kernels, it does not fit the classical definition of either.
Like microkernels, SourceT adheres to the principles that minimal code should execute in kernel mode (currently fewer than 4,000 lines of code in SourceT), and that applications and operating system services such as file systems, device drivers and protocol stacks should not execute in kernel mode. However, like monolithic kernel architectures, SourceT's operating system services are accessed through system service calls rather than through interprocess communication with user-mode servers.
Unlike general-purpose operating systems, which are designed to execute on a wide variety of hardware platforms, SourceT was specifically designed to take advantage of some of the unique security and performance features of the Itanium microprocessor to create a high performance, highly secure architecture. These unique Itanium features include:
Completely independent read/write/execute privileges on memory pages
Hardware controlled memory compartments with protection IDs
Separation of control information from data on system stacks
Inability to execute code from system stacks
High performance from instruction-level parallelism
Self-Protecting Network Stack
Secure64 has a patent pending for the queued, non-blocking and self-prote |
https://en.wikipedia.org/wiki/Saramur%C4%83 | Saramură is a Romanian "sauce" for marinating. It can be found in nature, as salt-bearing water or salt-water springs, or it can be made at home by boiling water with salt and other ingredients (depending on individual taste: garlic, pepper, chili, etc.). The sauce is used in the home (to preserve foods such as meat and fish), in agriculture, and in the tanning industry. After being marinated and dried, the foods can be smoked or cooked, or they can be eaten as they are. Recipes vary greatly, the common element being grilled meat or fish (sometimes on a salt bed). Usually the dish includes vegetables, mamaliga (polenta), potatoes, etc. (The Romanian word itself means "brine".)
Lipovans would call the dish rassol, e.g., saramură de carp (carp saramura) is called karp rassol. Saramură de carp is often translated as "carp in brine" or "salted carp".
See also
List of smoked foods |
https://en.wikipedia.org/wiki/Beltrami%E2%80%93Klein%20model | In geometry, the Beltrami–Klein model, also called the projective model, Klein disk model, and the Cayley–Klein model, is a model of hyperbolic geometry in which points are represented by the points in the interior of the unit disk (or n-dimensional unit ball) and lines are represented by the chords, straight line segments with ideal endpoints on the boundary sphere.
The Beltrami–Klein model is named after the Italian geometer Eugenio Beltrami and the German Felix Klein while "Cayley" in Cayley–Klein model refers to the English geometer Arthur Cayley.
The Beltrami–Klein model is analogous to the gnomonic projection of spherical geometry, in that geodesics (great circles in spherical geometry) are mapped to straight lines.
This model is not conformal, meaning that angles and circles are distorted, whereas the Poincaré disk model preserves these.
In this model, lines and segments are straight Euclidean segments, whereas in the Poincaré disk model, lines are arcs that meet the boundary orthogonally.
History
This model made its first appearance for hyperbolic geometry in two memoirs of Eugenio Beltrami published in 1868, first for dimension $n = 2$ and then for general $n$; these essays proved the equiconsistency of hyperbolic geometry with ordinary Euclidean geometry.
The papers of Beltrami remained little noticed until recently and the model was named after Klein ("The Klein disk model"). This happened as follows. In 1859 Arthur Cayley used the cross-ratio definition of angle due to Laguerre to show how Euclidean geometry could be defined using projective geometry. His definition of distance later became known as the Cayley metric.
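In the disk model, the resulting hyperbolic distance between two points $p$ and $q$ takes the cross-ratio form (a standard formula, stated here for reference rather than quoted from the article): if the chord through $p$ and $q$ meets the boundary circle at ideal points $a$ and $b$, ordered $a$, $p$, $q$, $b$, then

$$d(p, q) = \frac{1}{2} \ln \frac{|aq|\, |pb|}{|ap|\, |qb|},$$

where $|\cdot|$ denotes Euclidean distance.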
In 1869, the young (twenty-year-old) Felix Klein became acquainted with Cayley's work. He recalled that in 1870 he gave a talk on the work of Cayley at the seminar of Weierstrass and he wrote:
"I finished with a question whether there might exist a connection between the ideas of Cayley and Lobachevsky. I was given the answer that these t |
https://en.wikipedia.org/wiki/Opengear | Opengear is a global computer network technology company headquartered in Edison, New Jersey, U.S., with R&D operations in Brisbane, Qld, Australia and production in Sandy, UT.
The company develops and manufactures "smart out-of-band infrastructure management" products aimed at allowing customers to securely access, control and automatically troubleshoot and repair their IT infrastructure remotely, including network and data-center management, for resilient operation.
Opengear solutions provide always-available wired and wireless secure remote access, with failover capabilities to automatically restore site connectivity. This enables technical staff to provision, maintain and repair infrastructure from anywhere at any time, as if they were physically present, reducing both operational costs and the risk of downtime.
In December 2019, Opengear was acquired by Digi International.
Products
Opengear's management products include IM7200 advanced console servers that streamline management of network, server, and power infrastructure in data centers and colocation facilities; and ACM7000 remote management gateways that deliver secure remote monitoring, access and control of distributed networks and remote sites. The Lighthouse Centralized Management platform then provides a single point of scalable, secure management for these Opengear appliances and connected devices. The Opengear NetOps Console Server combines out-of-band management and NetOps tools in a single appliance, minimizing human intervention and simplifying repetitive tasks.
All Opengear products provide a secure alternate out-of-band path to the managed infrastructure, enabling accessibility even during system or network outage. They monitor, access, and control all critical infrastructure at all local and remote sites, from applications, computers and networking equipment, to security cameras, power supplies and door sensors - to proactively detect faults and remediate before t |
https://en.wikipedia.org/wiki/Project%20Euclid | Project Euclid is a collaborative partnership between Cornell University Library and Duke University Press which seeks to advance scholarly communication in theoretical and applied mathematics and statistics through partnerships with independent and society publishers. It was created to provide a platform for small publishers of scholarly journals to move from print to electronic in a cost-effective way.
Through a combination of support by subscribing libraries and participating publishers, Project Euclid has made 70% of its journal articles available as open access. As of 2010, Project Euclid provided access to over one million pages of open-access content.
Mission and goals
Project Euclid's stated mission is to advance scholarly communication in the field of theoretical and applied mathematics and statistics. Through a "mixture of open access, subscription, and hosted subscription content it provides a way for small publishers (especially societies) to host their math or statistics content".
History
In 1999, Cornell University Library received a grant from the Andrew W. Mellon Foundation for the development of an online publishing service designed to support the transition for small, non-commercial mathematics journals from paper to digital distribution. Duke University Press, which had experience in putting its own math journals online and a similar interest in assisting non-commercial math journals, worked as Cornell's partner in developing the grant application and then in developing Project Euclid's publishing model.
Cornell launched Project Euclid in May 2003 with nineteen journals. In July 2008, Cornell University Library and Duke University Press established a joint venture and began co-managing Project Euclid. Duke assumed responsibility for "marketing, financial, and order fulfillment workflows" while Cornell continued to provide and support Project Euclid's IT infrastructure.
Currently, Project Euclid hosts both open access journals and monographs, |
https://en.wikipedia.org/wiki/Doppler%20parameter | The Doppler parameter, or Doppler broadening parameter, usually denoted as $b$, is a parameter commonly used in astrophysics to characterize the width of observed spectral lines of astronomical objects. It is defined as
$b = \sqrt{2}\,\sigma$,
where $\sigma$ is the one-dimensional velocity dispersion. Given this parameter, the velocity distribution of the line-emitting/absorbing atoms and ions, approximated by a Gaussian, can be rewritten as
$P(v)\,\mathrm{d}v = \dfrac{1}{\sqrt{\pi}\,b}\exp\!\left(-\dfrac{v^{2}}{b^{2}}\right)\mathrm{d}v$,
where $P(v)\,\mathrm{d}v$ is the probability of the velocity along the line of sight lying in the interval $(v, v + \mathrm{d}v)$.
The line width is also often specified in terms of the FWHM (full width at half maximum), which is
$\mathrm{FWHM} = 2\sqrt{\ln 2}\,b$.
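The relations above are easy to sanity-check numerically. The following is a minimal sketch (function names are illustrative, not from the source), taking a one-dimensional velocity dispersion of 20 km/s:
```python
import numpy as np

def doppler_b(sigma):
    """Doppler parameter from the 1-D velocity dispersion: b = sqrt(2) * sigma."""
    return np.sqrt(2.0) * sigma

def fwhm(b):
    """Full width at half maximum of the Gaussian profile: 2 * sqrt(ln 2) * b."""
    return 2.0 * np.sqrt(np.log(2.0)) * b

def line_profile(v, b):
    """P(v): Gaussian probability density of the line-of-sight velocity v."""
    return np.exp(-(v / b) ** 2) / (np.sqrt(np.pi) * b)

b = doppler_b(20.0)                      # km/s
print(b, fwhm(b))                        # ~28.3 km/s, ~47.1 km/s
v = np.linspace(-300.0, 300.0, 20001)    # km/s grid
print((line_profile(v, b) * (v[1] - v[0])).sum())  # ~1.0, a proper density
```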
Distribution
The Doppler parameters of Lyman-alpha forest absorption lines are in the range 10–100 km s−1, with a median value that decreases with redshift. Analyses of the HST/COS dataset of low-redshift quasars yield a comparable median Doppler parameter.
See also
Doppler broadening
Doppler spectroscopy |
https://en.wikipedia.org/wiki/Korean%20Mathematical%20Olympiad | The Korean Mathematical Olympiad is a mathematical olympiad held by the Korean Mathematical Society (KMS) in the Republic of Korea.
History
In 1988, only high school students were tested, and middle school students had to take the high school exam. From the 11th exam, a separate test for middle school students was introduced (JKMO, Junior Korean Mathematical Olympiad). At the 53rd International Mathematical Olympiad (IMO), the Korean delegation scored 209 of a possible 252 points, won six gold medals, and ranked first for the first time in its history. Since then, Korean mathematicians have made outstanding achievements in advanced mathematical research and at the International Mathematical Olympiad.
Business Background
In order to obtain excellent results at the International Mathematical Olympiad, the Korean Mathematical Society holds the Korean Mathematical Olympiad and, through the operation of its seasonal schools, discovers gifted students and educates them to contribute to the development of mathematics, science, and engineering in Korea.
Students who have won higher than a bronze prize in a High School Level Secondary Test or those who have completed a Middle School Level Winter School are entitled to take a Final Test. This consists of 6 questions in a narrative form. It is taken for two days at the end of March, with students required to solve 3 problems in 4 hours and 30 minutes each day. Awards are divided into Grand Prize, Excellence Prize, and Encouragement Prize, and 13 people are selected as representative candidates. Six students are selected from the candidates to represent Korea in the IMO.
Gauss Part and Euler Part
Since 2017, KMO has been divided into two parts: Gauss Part and Euler Part. The Gauss Part is open to all students under the age of 20 and the Euler Part is available for high school students other than science high school students.
Final representative selection process
About 12 to 13 students (twice the number of final candidates) are |
https://en.wikipedia.org/wiki/Elementary%20charge | The elementary charge, usually denoted by $e$, is a fundamental physical constant, defined as the electric charge carried by a single proton or, equivalently, the magnitude of the negative electric charge carried by a single electron, which has charge $-1\,e$.
In the SI system of units, the value of the elementary charge is exactly defined as $e = 1.602176634 \times 10^{-19}$ coulombs, or 160.2176634 zeptocoulombs (zC). Since the 2019 redefinition of SI base units, the seven SI base units are defined by seven fundamental physical constants, of which the elementary charge is one.
In the centimetre–gram–second system of units (CGS), the corresponding quantity is approximately $4.803 \times 10^{-10}$ statcoulombs.
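As a quick unit-bookkeeping sketch (the SI value is exact by definition; the factor 1 C = 2.99792458×10⁹ statC is the standard SI-to-Gaussian charge conversion):
```python
# The elementary charge, converted between the unit systems quoted above.
e_si = 1.602176634e-19             # coulombs, exact by the 2019 SI definition
C_TO_STATC = 2.99792458e9          # statcoulombs per coulomb (Gaussian CGS)

print(e_si * 1e21, "zC")           # 160.2176634 zeptocoulombs
print(e_si * C_TO_STATC, "statC")  # ~4.803e-10 statcoulombs
```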
Robert A. Millikan and Harvey Fletcher's oil drop experiment first directly measured the magnitude of the elementary charge in 1909, differing from the modern accepted value by just 0.6%. Under assumptions of the then-disputed atomic theory, the elementary charge had also been indirectly inferred to ~3% accuracy from blackbody spectra by Max Planck in 1901 and (through the Faraday constant) at order-of-magnitude accuracy by Johann Loschmidt's measurement of the Avogadro number in 1865.
As a unit
In some natural unit systems, such as the system of atomic units, e functions as the unit of electric charge. The use of elementary charge as a unit was promoted by George Johnstone Stoney in 1874 for the first system of natural units, called Stoney units. Later, he proposed the name electron for this unit. At the time, the particle we now call the electron was not yet discovered and the difference between the particle electron and the unit of charge electron was still blurred. Later, the name electron was assigned to the particle and the unit of charge e lost its name. However, the unit of energy electronvolt (eV) is a remnant of the fact that the elementary charge was once called electron.
In other natural unit systems, the unit of charge is defined as $\sqrt{\varepsilon_0 \hbar c}$, with the result that
$e = \sqrt{4\pi\alpha}\,\sqrt{\varepsilon_0 \hbar c} \approx 0.303\,\sqrt{\varepsilon_0 \hbar c}$, where $\alpha$ is the fine-structure constant, $c$ is the speed of light, $\varepsilon_0$ is |
https://en.wikipedia.org/wiki/MATH%20domain | The MATH domain, in molecular biology, is a binding domain that was defined originally by a region of homology between otherwise functionally unrelated domains, the intracellular TRAF-C domains of TRAF proteins and a C-terminal region of extracellular meprins A and B.
Although apparently functionally unrelated, intracellular TRAFs and extracellular meprins share a conserved region of about 180 residues, the meprin and TRAF homology (MATH) domain. Meprins are mammalian tissue-specific metalloendopeptidases of the astacin family implicated in developmental, normal and pathological processes by hydrolysing a variety of proteins. Various growth factors, cytokines, and extracellular matrix proteins are substrates for meprins. They are composed of five structural domains: an N-terminal endopeptidase domain, a MAM domain, a MATH domain, an EGF-like domain and a C-terminal transmembrane region. Meprin A and B form membrane-bound homo-tetramers, whereas homo-oligomers of meprin A are secreted. A proteolytic site adjacent to the MATH domain, only present in meprin A, allows the release of the protein from the membrane.
TRAF proteins were first isolated through their ability to interact with TNF receptors. They promote cell survival by the activation of downstream protein kinases and, ultimately, transcription factors of the NF-κB and AP-1 family. The TRAF proteins are composed of 3 structural domains: a RING finger in the N-terminal part of the protein, one to seven TRAF zinc fingers in the middle and the MATH domain in the C-terminal part. The MATH domain is necessary and sufficient for self-association and receptor interaction. Through structural analysis, two consensus sequences recognised by the TRAF domain have been defined: a major one, [PSAT]x[QE]E and a minor one, PxQxxD.
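The two consensus motifs quoted above translate directly into regular expressions over one-letter amino-acid codes, with "x" as a wildcard. A minimal sketch (the peptide string is made up for illustration, not a real receptor sequence):
```python
import re

MAJOR = re.compile(r"[PSAT].[QE]E")   # major consensus: [PSAT]x[QE]E
MINOR = re.compile(r"P.Q..D")         # minor consensus: PxQxxD

peptide = "GGSVQETLLGGPAQEEDMNRS"     # hypothetical peptide, illustration only
print([m.group() for m in MAJOR.finditer(peptide)])  # ['SVQE', 'PAQE']
print(bool(MINOR.search(peptide)))                   # True ('PAQEED')
```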
The structure of the TRAF2 protein reveals a trimeric self-association of the MATH domain. The domain forms a new, eight-stranded antiparallel beta-sandwich structure. A coiled-coil region adjace |
https://en.wikipedia.org/wiki/Peter%20Mansfield | Sir Peter Mansfield (9 October 1933 – 8 February 2017) was a British physicist who was awarded the 2003 Nobel Prize in Physiology or Medicine, shared with Paul Lauterbur, for discoveries concerning Magnetic Resonance Imaging (MRI). Mansfield was a professor at the University of Nottingham.
Early life
Mansfield was born in Lambeth, London on 9 October 1933, to Sidney George (b. 1904, d. 1966) and Lillian Rose Mansfield (b. 1905, d. 1984; née Turner). Mansfield was the youngest of three sons; his brothers were Conrad (b. 1925) and Sidney (b. 1927).
Mansfield grew up in Camberwell. During World War II he was evacuated from London, initially to Sevenoaks and then twice to Torquay, Devon, where he was able to stay with the same family on both occasions. On returning to London after the war he was told by a school master to take the 11+ exam. Having never heard of the exam before, and having no time to prepare, Mansfield failed to gain a place at the local Grammar school. His mark was, however, high enough for him to go to a Central School in Peckham. At the age of 15 he was told by a careers teacher that science wasn't for him. He left school shortly afterwards to work as a printer's assistant.
At the age of 18, having developed an interest in rocketry, Mansfield took up a job with the Rocket Propulsion Department of the Ministry of Supply in Westcott, Buckinghamshire. Eighteen months later he was called up for National Service.
Education
After serving in the army for two years, Mansfield returned to Westcott and started studying for A-levels at night school. Two years later he was admitted to study physics at Queen Mary College, University of London.
Mansfield graduated with a BSc from Queen Mary in 1959. His final-year project, supervised by Jack Powles, was to construct a portable, transistor-based spectrometer to measure the Earth's magnetic field. Towards the end of this project Powles offered Mansfield a position in his NMR (Nuclear Magnetic Resonance) research group. Powles' |
https://en.wikipedia.org/wiki/Epidermal%20growth%20factor | Epidermal growth factor (EGF) is a protein that stimulates cell growth and differentiation by binding to its receptor, EGFR. Human EGF is 6-kDa and has 53 amino acid residues and three intramolecular disulfide bonds.
EGF was originally described as a secreted peptide found in the submaxillary glands of mice and in human urine. EGF has since been found in many human tissues, including platelets, submandibular gland (submaxillary gland), and parotid gland. Initially, human EGF was known as urogastrone.
Structure
In humans, EGF has 53 amino acids (sequence NSDSECPLSHDGYCLHDGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR), with a molecular mass of around 6 kDa. It contains three disulfide bridges (Cys6-Cys20, Cys14-Cys31, Cys33-Cys42).
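As a quick consistency check on the sequence quoted above, one can verify the residue count and the cysteine positions that pair into the three bridges (a sketch):
```python
seq = "NSDSECPLSHDGYCLHDGVCMYIEALDKYACNCVVGYIGERCQYRDLKWWELR"
print(len(seq))                                          # 53 residues
cys = [i + 1 for i, aa in enumerate(seq) if aa == "C"]   # 1-based positions
print(cys)  # [6, 14, 20, 31, 33, 42] -> bridges 6-20, 14-31, 33-42
```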
Function
EGF, via binding to its cognate receptor, results in cellular proliferation, differentiation, and survival.
Salivary EGF, which seems to be regulated by dietary inorganic iodine, also plays an important physiological role in the maintenance of oro-esophageal and gastric tissue integrity. The biological effects of salivary EGF include healing of oral and gastroesophageal ulcers, inhibition of gastric acid secretion, stimulation of DNA synthesis as well as mucosal protection from intraluminal injurious factors such as gastric acid, bile acids, pepsin, and trypsin and to physical, chemical and bacterial agents.
Biological sources
Epidermal growth factor can be found in platelets, urine, saliva, milk, tears, and blood plasma. It can also be found in the submandibular and parotid glands. The production of EGF has been found to be stimulated by testosterone.
Mechanism
EGF acts by binding with high affinity to epidermal growth factor receptor (EGFR) on the cell surface. This stimulates ligand-induced dimerization, activating the intrinsic protein-tyrosine kinase activity of the receptor (see the second diagram). The tyrosine kinase activity, in turn, initia |
https://en.wikipedia.org/wiki/Tube%20lemma | In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact.
Statement
The lemma uses the following terminology:
If $X$ and $Y$ are topological spaces and $X \times Y$ is the product space, endowed with the product topology, a slice in $X \times Y$ is a set of the form $\{x\} \times Y$ for $x \in X$.
A tube in $X \times Y$ is a subset of the form $U \times Y$, where $U$ is an open subset of $X$. It contains all the slices $\{x\} \times Y$ for $x \in U$.
Tube lemma: Let $Y$ be a compact space. If $N$ is an open subset of $X \times Y$ containing a slice $\{x\} \times Y$, then there exists a tube $U \times Y$ with $\{x\} \times Y \subseteq U \times Y \subseteq N$.
Using the concept of closed maps, this can be rephrased concisely as follows: if $X$ is any topological space and $Y$ a compact space, then the projection map $\pi : X \times Y \to X$ is closed.
Examples and properties
1. Consider $\mathbb{R} \times \mathbb{R}$ in the product topology, that is, the Euclidean plane, and the open set $N = \{(x, y) \in \mathbb{R} \times \mathbb{R} : |xy| < 1\}$. The open set $N$ contains $\{0\} \times \mathbb{R}$ but contains no tube, so in this case the tube lemma fails. Indeed, if $W \times \mathbb{R}$ is a tube containing $\{0\} \times \mathbb{R}$ and contained in $N$, then $W$ must be a subset of $(-1/y, 1/y)$ for all $y > 0$, which means $W = \{0\}$, contradicting the fact that $W$ is open in $\mathbb{R}$ (because $W \times \mathbb{R}$ is a tube). This shows that the compactness assumption is essential.
2. The tube lemma can be used to prove that if $X$ and $Y$ are compact spaces, then $X \times Y$ is compact, as follows:
Let $\mathcal{A}$ be an open cover of $X \times Y$. For each $x \in X$, cover the slice $\{x\} \times Y$ by finitely many elements of $\mathcal{A}$ (this is possible since $\{x\} \times Y$ is compact, being homeomorphic to $Y$).
Call the union of these finitely many elements $N_x$.
By the tube lemma, there is an open set of the form $U_x \times Y$ containing $\{x\} \times Y$ and contained in $N_x$.
The collection of all $U_x$ for $x \in X$ is an open cover of $X$ and hence has a finite subcover $U_{x_1}, \dots, U_{x_n}$. Thus the finite collection $U_{x_1} \times Y, \dots, U_{x_n} \times Y$ covers $X \times Y$.
Using the fact that each $U_{x_i} \times Y$ is contained in $N_{x_i}$ and each $N_{x_i}$ is the finite union of elements of $\mathcal{A}$, one gets a finite subcollection of $\mathcal{A}$ that covers $X \times Y$.
3. By part 2 and induction, one can show that the finite product of compact spaces is compact.
4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products.
Proof
The tube lemma follows from the generalized tube lemma by taking $A = \{x\}$ and $B = Y$.
It therefore |
https://en.wikipedia.org/wiki/Average%20high%20cost%20multiple | In unemployment insurance (UI) in the United States, the average high-cost multiple (AHCM) is a commonly used actuarial measure of Unemployment Trust Fund adequacy. Technically, AHCM is defined as reserve ratio (i.e., the balance of UI trust fund expressed as % of total wages paid in covered employment) divided by average cost rate of three high-cost years in the state's recent history (typically 20 years or a period covering three recessions, whichever is longer). In this definition, cost rate for any duration of time is defined as benefit cost divided by total wages paid in covered employment for the same duration, usually expressed as a percentage.
Intuitively, the AHCM provides an estimate of the length of time (measured in number of years) the current reserve in the trust fund (without taking into account future revenue income) can pay out benefits at a historically high payout rate. For example, if a state's AHCM is 1.0 immediately prior to a recession, and if the incoming recession is of the average magnitude of the past three recessions, then the state is expected to be able to pay one year of UI benefits using the money already in its trust fund alone. If the AHCM is 0.5, then the state is expected to be able to pay out six months of benefits when a similar recession hits.
Example
As of December 31, 2009, a state has a balance of $500 million in its UI trust fund. The total wages of its covered employment is $40 billion. The reserve ratio for this state on this day is $500/$40000 = 1.25%. Historically, the state experienced three highest-cost years in 1991, 2002, and 2009, when the cost rates were 1.50, 1.80, and 3.00, respectively. The average high-cost rate for this state is therefore 2.10. Thus, the average high-cost multiple is 1.25/2.10 = 0.595.
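The arithmetic in this example is simple enough to package as a small function; the following is a sketch (the function and argument names are illustrative, not an official implementation):
```python
def average_high_cost_multiple(balance, covered_wages, high_cost_rates):
    """AHCM = reserve ratio / average of the highest historical cost rates.
    Balance and wages in the same currency unit; cost rates in percent."""
    reserve_ratio = 100.0 * balance / covered_wages          # percent of wages
    avg_high_cost = sum(high_cost_rates) / len(high_cost_rates)
    return reserve_ratio / avg_high_cost

# The worked example: $500M balance, $40B covered wages, and high-cost
# years of 1.50%, 1.80% and 3.00%.
print(round(average_high_cost_multiple(500e6, 40e9, [1.50, 1.80, 3.00]), 3))
# 0.595 -> about seven months of benefits at a historically high payout rate
```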
US average high-cost multiple
The following chart shows US average high-cost multiple from 1957 to 2009.
External links
UI Data Summary
ET Financial Data Handbook 394
State UI Trust Fund Calculator
Ben |
https://en.wikipedia.org/wiki/Bed%20skirt | A bed skirt, sometimes spelled bedskirt, a bed ruffle, a dust ruffle in North America, a valance, or a valance sheet in the British Isles, is a piece of decorative fabric that is placed between the mattress and the box spring of a bed. The purpose of a bed skirt is to give a stylish appearance to a bed without exposing the sides of the box spring or any space under the bed that may be used for storage. Additionally, decorative bed boots may be used to cover legs and enhance decor when bed skirts do not reach the floor.
Historical use
Historically, bed skirts were used to block drafts, which could chill the undersides of beds as well as to prevent dust from accumulating under the bed. |
https://en.wikipedia.org/wiki/BPIFA1 | BPI fold containing family A, member 1 (BPIFA1), also known as Palate, lung, and nasal epithelium clone (PLUNC), is a protein that in humans is encoded by the BPIFA1 gene. It was also formerly known as "Secretory protein in upper respiratory tracts" (SPURT). The BPIFA1 gene sequence predicts 4 transcripts (splice variants); 3 mRNA variants have been well characterized. The resulting BPIFA1 is a secreted protein, expressed at very high levels in the mucosa of the airways (olfactory and respiratory epithelium) and salivary glands; at high levels in oropharyngeal epithelium, including tongue and tonsils; and at moderate levels in many other tissue types and glands, including the pituitary, testis, lung, bladder, blood, prostate, digestive tract (tongue, stomach, intestinal epithelium) and pancreas. The protein can be detected on the apical side of epithelial cells and in airway surface liquid, nasal mucus, and sputum.
Superfamily
BPIFA1 is a member of a BPI fold protein superfamily defined by the presence of the bactericidal/permeability-increasing protein fold (BPI fold) which is formed by two similar domains in a "boomerang" shape. This superfamily is also known as the BPI/LBP/PLUNC family or the BPI/LPB/CETP family. The BPI fold creates apolar binding pockets that can interact with hydrophobic and amphipathic molecules, such as the acyl carbon chains of lipopolysaccharide found on Gram-negative bacteria, but members of this family may have many other functions.
Genes for the BPI/LBP/PLUNC superfamily are found in all vertebrate species, including distant homologs in non-vertebrate species such as insects, mollusks, and roundworms. Within that broad grouping is the BPIF gene family whose members encode the BPI fold structural motif and are found clustered on a single chromosome, e.g., Chromosome 20 in humans, Chromosome 2 in mouse, Chromosome 3 in rat, Chromosome 17 in pig, Chromosome 13 in cow. The BPIF gene family is split into two groupings, B |
https://en.wikipedia.org/wiki/Hepatitis%20C%20virus%20nonstructural%20protein%204A | Nonstructural protein 4A (NS4A) is a viral protein found in the hepatitis C virus. It acts as a cofactor for the enzyme NS3. |
https://en.wikipedia.org/wiki/Brain%20morphometry | Brain morphometry is a subfield of both morphometry and the brain sciences, concerned with the measurement of brain structures and changes thereof during development, aging, learning, disease and evolution. Since autopsy-like dissection is generally impossible on living brains, brain morphometry starts with noninvasive neuroimaging data, typically obtained from magnetic resonance imaging (MRI). These data are born digital, which allows researchers to analyze the brain images further by using advanced mathematical and statistical methods such as shape quantification or multivariate analysis. This allows researchers to quantify anatomical features of the brain in terms of shape, mass, volume (e.g. of the hippocampus, or of the primary versus secondary visual cortex), and to derive more specific information, such as the encephalization quotient, grey matter density and white matter connectivity, gyrification, cortical thickness, or the amount of cerebrospinal fluid. These variables can then be mapped within the brain volume or on the brain surface, providing a convenient way to assess their pattern and extent over time, across individuals or even between different biological species. The field is rapidly evolving along with neuroimaging techniques—which deliver the underlying data—but also develops in part independently from them, as part of the emerging field of neuroinformatics, which is concerned with developing and adapting algorithms to analyze those data.
Background
Terminology
The term brain mapping is often used interchangeably with brain morphometry, although mapping in the narrower sense of projecting properties of the brain onto a template brain is, strictly speaking, only a subfield of brain morphometry. On the other hand, though much more rarely, neuromorphometry is also sometimes used as a synonym for brain morphometry (particularly in the earlier literature, e.g. Haug 1986), though technically it is only one of its subfields.
Biology
The morphology and f |
https://en.wikipedia.org/wiki/Yeast%20in%20winemaking | The role of yeast in winemaking is the most important element that distinguishes wine from fruit juice. In the absence of oxygen, yeast converts the sugars of the fruit into alcohol and carbon dioxide through the process of fermentation. The more sugars in the grapes, the higher the potential alcohol level of the wine if the yeast are allowed to carry out fermentation to dryness. Sometimes winemakers will stop fermentation early in order to leave some residual sugars and sweetness in the wine such as with dessert wines. This can be achieved by dropping fermentation temperatures to the point where the yeast are inactive, sterile filtering the wine to remove the yeast or fortification with brandy or neutral spirits to kill off the yeast cells. If fermentation is unintentionally stopped, such as when the yeasts become exhausted of available nutrients and the wine has not yet reached dryness, this is considered a stuck fermentation.
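The sugar-to-alcohol relationship described above is often approximated with a rule of thumb of roughly 16.5–17 grams of sugar per litre of must per 1% of alcohol by volume; this figure is an industry heuristic, not from the source, and varies with yeast strain and conditions. A minimal sketch:
```python
def potential_abv(sugar_g_per_l, g_per_percent=16.83):
    """Rough potential alcohol (% ABV) if the must ferments to dryness.
    The conversion factor is a winemaking rule of thumb, not an exact constant."""
    return sugar_g_per_l / g_per_percent

print(round(potential_abv(220.0), 1))  # ~13.1% ABV for a fairly ripe must
```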
The most common yeast associated with winemaking is Saccharomyces cerevisiae, which has been favored due to its predictable and vigorous fermentation capabilities, tolerance of relatively high levels of alcohol and sulfur dioxide, as well as its ability to thrive in normal wine pH between 2.8 and 4. Despite its widespread use, which often includes deliberate inoculation from cultured stock, S. cerevisiae is rarely the only yeast species involved in a fermentation. Grapes brought in from harvest are usually teeming with a variety of "wild yeast" from the Kloeckera and Candida genera. These yeasts often begin the fermentation process almost as soon as the grapes are picked, when the weight of the clusters in the harvest bins begins to crush the grapes, releasing the sugar-rich must. While additions of sulfur dioxide (often added at the crusher) may limit some of the wild yeast activities, these yeasts will usually die out once the alcohol level reaches about 15% due to the toxicity of alcohol on the yeast cells' physiology, while the more alcohol to |
https://en.wikipedia.org/wiki/Locally%20finite%20poset | In mathematics, a locally finite poset is a partially ordered set P such that for all x, y ∈ P, the interval [x, y] consists of finitely many elements.
Given a locally finite poset P we can define its incidence algebra. Elements of the incidence algebra are functions ƒ that assign to each interval [x, y] of P a real number ƒ(x, y). These functions form an associative algebra with a product defined by
$(f * g)(x, y) = \sum_{x \le z \le y} f(x, z)\,g(z, y).$
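To make the convolution concrete, here is a minimal sketch on the divisibility poset of the integers 1 through 12 (all names are illustrative); it builds the zeta function of the poset and its convolution inverse, the Möbius function:
```python
from itertools import product

elements = range(1, 13)               # the divisibility poset on {1,...,12}
leq = lambda x, y: y % x == 0         # x <= y  iff  x divides y

def convolve(f, g):
    """Incidence-algebra product: (f*g)(x,y) = sum over x<=z<=y of f(x,z)g(z,y)."""
    return lambda x, y: sum(f(x, z) * g(z, y)
                            for z in elements if leq(x, z) and leq(z, y))

zeta  = lambda x, y: 1 if leq(x, y) else 0   # zeta function of the poset
delta = lambda x, y: 1 if x == y else 0      # identity of the algebra

def mobius(x, y):
    """Moebius function, via the recursion that makes mobius * zeta = delta."""
    if x == y:
        return 1
    if not leq(x, y):
        return 0
    return -sum(mobius(x, z) for z in elements
                if leq(x, z) and leq(z, y) and z != y)

mz = convolve(mobius, zeta)
print(all(mz(x, y) == delta(x, y) for x, y in product(elements, repeat=2)))  # True
print(mobius(1, 12))  # 0: recovers the classical Moebius function, mu(12) = 0
```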
There is also a definition of incidence coalgebra.
In theoretical physics a locally finite poset is also called a causal set and has been used as a model for spacetime. |
https://en.wikipedia.org/wiki/Hydnum%20repandum | Hydnum repandum, commonly known as the sweet tooth, wood hedgehog or hedgehog mushroom, is a basidiomycete fungus of the family Hydnaceae. First described by Carl Linnaeus in 1753, it is the type species of the genus Hydnum. The fungus produces fruit bodies (mushrooms) that are characterized by their spore-bearing structures—in the form of spines rather than gills—which hang down from the underside of the cap. The cap is dry, colored yellow to light orange to brown, and often develops an irregular shape, especially when it has grown closely crowded with adjacent fruit bodies. The mushroom tissue is white with a pleasant odor and a spicy or bitter taste. All parts of the mushroom stain orange with age or when bruised.
A mycorrhizal fungus, Hydnum repandum is broadly distributed in Europe where it fruits singly or in close groups in coniferous or deciduous woodland. This is a choice edible species, although mature specimens can develop a bitter taste. It has no poisonous lookalikes. Mushrooms are collected and sold in local markets of Europe and Canada.
Taxonomy
First officially described by Carl Linnaeus in his 1753 Species Plantarum, Hydnum repandum was sanctioned by Swedish mycologist Elias Fries in 1821. The species has been shuffled among several genera: Hypothele by French naturalist Jean-Jacques Paulet in 1812; Dentinum by British botanist Samuel Frederick Gray in 1821; Tyrodon by Finnish mycologist Petter Karsten in 1881; Sarcodon by French naturalist Lucien Quélet in 1886. After a 1977 nomenclatural proposal by American mycologist Ronald H. Petersen was accepted, Hydnum repandum became the official type species of the genus Hydnum. Previously, supporting arguments for making H. repandum the type were made by Dutch taxonomist Marinus Anton Donk (1958) and Petersen (1973), while Czech mycologist Zdeněk Pouzar (1958) and Canadian mycologist Kenneth Harrison (1971) thought that H. imbricatum should be the type.
Several forms and varieties of H. repandum have |
https://en.wikipedia.org/wiki/FPG-9 | The FPG-9 Foam Plate Glider is a simple, hand-launched glider made from a 9 inch (23 cm) foam dinner plate, featuring a moveable rudder and elevons, allowing for an inexpensive way to teach basic flight mechanics.
The model was created by Jack Reynolds, a volunteer at the Academy of Model Aeronautics' (AMA) National Model Aviation Museum. Originally the model was used as a hands-on activity for museum visitors and museum outreach. In 2004, the AMA incorporated the model into Aerolab, an instructional program developed for middle school physical science and math programs, that uses simple flying model aircraft as tools to teach Force and Motion.
Today, besides the AMA, numerous other groups are now using the FPG-9. It may be built and flown to satisfy an elective activity for the Boy Scouts of America Aviation Merit Badge. The Children's Museum of Indianapolis uses it as part of their CSI: Flight Adventures Project, an educational program highlighting the use of model aircraft as scientific tools for research for grades 3 - 5. The National Museum of the United States Air Force also uses the model as part of their educational programming to include Project Soar (Science in Ohio through Aerospace Resources). |
https://en.wikipedia.org/wiki/Born%E2%80%93Infeld%20model | In theoretical physics, the Born–Infeld model is a particular example of what is usually known as a nonlinear electrodynamics. It was historically introduced in the 1930s to remove the divergence of the electron's self-energy in classical electrodynamics by introducing an upper bound of the electric field at the origin.
Overview
Born–Infeld electrodynamics is named after physicists Max Born and Leopold Infeld, who first proposed it. The model possesses a whole series of physically interesting properties.
In analogy to a relativistic limit on velocity, Born–Infeld theory proposes a limiting force via a limited electric field strength. A maximum electric field strength produces a finite electric field self-energy, which, when attributed entirely to the mass of the electron, fixes the value of the maximum field.
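For reference (the excerpt above does not spell it out), the Born–Infeld Lagrangian density is commonly written in Gaussian units as

$$\mathcal{L} = \frac{b^{2}}{4\pi}\left(1 - \sqrt{1 - \frac{\mathbf{E}^{2} - \mathbf{B}^{2}}{b^{2}} - \frac{(\mathbf{E} \cdot \mathbf{B})^{2}}{b^{4}}}\right),$$

where $b$ is the limiting field strength; expanding the square root for weak fields recovers the Maxwell Lagrangian $(\mathbf{E}^{2} - \mathbf{B}^{2})/8\pi$.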
Born–Infeld electrodynamics displays good physical properties concerning wave propagation, such as the absence of shock waves and birefringence. A field theory showing this property is usually called completely exceptional, and Born–Infeld theory is the only completely exceptional regular nonlinear electrodynamics.
This theory can be seen as a covariant generalization of Mie's theory and very close to Albert Einstein's idea of introducing a nonsymmetric metric tensor with the symmetric part corresponding to the usual metric tensor and the antisymmetric to the electromagnetic field tensor.
The compatibility of Born–Infeld theory with high-precision atomic experimental data requires a value of a limiting field some 200 times higher than that introduced in the original formulation of the theory.
Since 1985 there has been a revival of interest in Born–Infeld theory and its nonabelian extensions, as they were found in some limits of string theory. It was discovered by E.S. Fradkin and A.A. Tseytlin that the Born–Infeld action is the leading term in the low-energy effective action of the open string theory expanded in powers of derivatives of gauge field strength.
Equations
We will use |
https://en.wikipedia.org/wiki/Ocean%20heat%20content | Ocean heat content (OHC) is the energy absorbed and stored by oceans. To calculate the ocean heat content, it is necessary to measure ocean temperature at many different locations and depths. Integrating the areal density of ocean heat over an ocean basin or entire ocean gives the total ocean heat content. Between 1971 and 2018, the rise in ocean heat content accounted for over 90% of Earth’s excess thermal energy from global heating. The main driver of this increase was anthropogenic forcing via rising greenhouse gas emissions. By 2020, about one third of the added energy had propagated to depths below 700 meters. In 2022, the world’s oceans were again the hottest in the historical record and exceeded the previous 2021 record maximum. The four highest ocean heat observations occurred in the period 2019–2022. The North Pacific, North Atlantic, the Mediterranean, and the Southern Ocean all recorded their highest heat observations for more than sixty years. Ocean heat content and sea level rise are important indicators of climate change.
Ocean water absorbs solar energy efficiently. It has far greater heat capacity than atmospheric gases. As a result, the top few meters of the ocean contain more thermal energy than the entire Earth's atmosphere. Since before 1960, research vessels and stations have sampled sea surface temperatures and temperatures at greater depth all over the world. Since 2000, an expanding network of nearly 4000 Argo robotic floats has measured temperature anomalies, or the change in ocean heat content. Ocean heat content has been increasing at a steady or accelerating rate since at least 1990. The net rate of change in the upper 2000 meters from 2003 to 2018 corresponded to an annual mean energy gain of 9.3 zettajoules. It is challenging to measure temperatures over decades with sufficient accuracy and spatial coverage, which gives rise to the uncertainty in the figures.
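To put the quoted 9.3 zettajoules per year into more familiar units, here is a back-of-the-envelope sketch (the Earth's total surface area of about 5.1×10¹⁴ m² is an assumption for illustration, not stated in the text):
```python
# Convert an annual ocean heat gain into a planetary-mean heating rate.
ZJ = 1e21                        # joules per zettajoule
SECONDS_PER_YEAR = 365.25 * 24 * 3600
EARTH_SURFACE_M2 = 5.1e14        # assumed total surface area of Earth

rate = 9.3 * ZJ / SECONDS_PER_YEAR / EARTH_SURFACE_M2
print(round(rate, 2), "W/m^2")   # ~0.58 W/m^2, averaged over the whole planet
```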
Changes in ocean heat content have far-reaching consequences for the planet's |
https://en.wikipedia.org/wiki/Agence%20nationale%20de%20s%C3%A9curit%C3%A9%20sanitaire%20de%20l%27alimentation%2C%20de%20l%27environnement%20et%20du%20travail | The French Agency for Food, Environmental and Occupational Health & Safety (ANSES) is a French government agency whose main mission is to assess health risks in food, the environment and work, with the aim of enlightening public policy-making. ANSES is accountable to the French Ministries of Health, Agriculture, the Environment, Labour and Consumer Affairs. |
https://en.wikipedia.org/wiki/This%20is%20a%20magazine | This is a magazine is an experimental art publication founded in 2002 by Karen ann Donnachie, Andy Simionato & Sons. "One of the best known" flip-book style online magazines, it has also published in a variety of formats, including a PowerPoint edition and Animated GIF collections, as well as video peep-shows and sound-objects. Alongside the internet-specific episodes, a series of hard-cover printed books or compendia and multi-media projects such as Everything Will Be OK have been published.
The project is considered "highly influential" in Lauren Parker's Victoria & Albert Museum book on internet publications, Interplay and "a publishing phenomenon" by Adrian Shaughnessy in The UK Journal for Made Images, Varoom.
In an interview for UK magazine Graphics International, New York designer and artist Stefan Sagmeister said "I just think This is a magazine is one of the most interesting publications out there...I followed it from the start and was consistently surprised by its inventiveness"
"Redefining what it means to call something a magazine", This is a magazine'''s editorial style combines artworks by internet artists with found internet ephemera. Over the past decade the project has been exhibited in a number of museums and galleries and has been the subject for publications on art, design, and new media for its ability to move beyond traditional models of curation, publication production, and distribution.
Selected contributing artists
Miltos Manetas, GR
Sergei Sviatchenko, DK
Jodi (art collective), NL
Angelo Plessas, GR
Animal Collective, USA
Brody Condon, USA
David Shrigley, UK
Rafaël Rozendaal, NE
James Victore, USA
Jason Salavon, USA
Jon Burgerman, UK
Lorna Mills, USA
Jon Rafman, CA
Yoshi Sodeoka, JP
Lia (artist), AT
Mirko Ilić, USA
Petra Cortright, USA
MTAA, USA
Nadín Ospina, CB
Pamela Anderson, USA
Starfuckers, IT
Antonio Riello, IT
Compendia
Pink Laser Beam, 2009
Who I think I am, 2007
Everything Will Be OK (book) with DVD |
https://en.wikipedia.org/wiki/Categorical%20quantum%20mechanics | Categorical quantum mechanics is the study of quantum foundations and quantum information using paradigms from mathematics and computer science, notably monoidal category theory. The primitive objects of study are physical processes, and the different ways that these can be composed. It was pioneered in 2004 by Samson Abramsky and Bob Coecke. Categorical quantum mechanics is entry 18M40 in MSC2020.
Mathematical setup
Mathematically, the basic setup is captured by a dagger symmetric monoidal category: composition of morphisms models sequential composition of processes, and the tensor product describes parallel composition of processes. The role of the dagger is to assign to each state a corresponding test. These can then be adorned with more structure to study various aspects. For instance:
A dagger compact category allows one to distinguish between an "input" and "output" of a process. In the diagrammatic calculus, it allows wires to be bent, allowing for a less restricted transfer of information. In particular, it allows entangled states and measurements, and gives elegant descriptions of protocols such as quantum teleportation. In quantum theory, it being compact closed is related to the Choi-Jamiołkowski isomorphism (also known as process-state duality), while the dagger structure captures the ability to take adjoints of linear maps.
Considering only the morphisms that are completely positive maps, one can also handle mixed states, allowing the study of quantum channels categorically.
Wires are always two-ended (and can never be split into a Y), reflecting the no-cloning and no-deleting theorems of quantum mechanics.
Special commutative dagger Frobenius algebras model the fact that certain processes yield classical information, that can be cloned or deleted, thus capturing classical communication.
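These abstract ingredients can be made concrete in the dagger compact category of finite-dimensional Hilbert spaces, where processes are matrices, sequential composition is matrix multiplication, the tensor product is the Kronecker product, and the dagger is the conjugate transpose. A minimal sketch (all names are illustrative):
```python
import numpy as np

# Processes as matrices: sequential composition is the matrix product,
# parallel composition the Kronecker product, the dagger the adjoint.
compose = lambda g, f: g @ f           # run f, then g
tensor = np.kron                       # processes side by side
dagger = lambda f: f.conj().T          # adjoint: swaps states and tests

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
ket00 = np.array([1, 0, 0, 0])

# Preparing an entangled (Bell) state: H on the first wire, then CNOT.
bell = compose(CNOT, tensor(H, I2)) @ ket00
print(bell)                                    # [0.707 0 0 0.707]

# Unitarity is the dagger equation: f-dagger composed with f = identity.
print(np.allclose(compose(dagger(CNOT), CNOT), np.eye(4)))  # True
```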
In early works, dagger biproducts were used to study both classical communication and the superposition principle. Later, these two features have been separ |
https://en.wikipedia.org/wiki/Tapan%20Sarkar | Tapan Kumar Sarkar (August 2, 1948 – March 12, 2021) was an Indian-American electrical engineer and Professor Emeritus at the Department of Electrical Engineering and Computer Science at Syracuse University. He was best known for his contributions to computational electromagnetics and antenna theory.
Sarkar was the recipient of IEEE Electromagnetics Award in 2020.
Biography
Sarkar was born on August 2, 1948, in Kolkata, India. He obtained his Bachelor of Technology from IIT Kharagpur and Master of Engineering from University of New Brunswick in 1969 and 1971, respectively. He received his Master of Science and Doctor of Philosophy degrees from Syracuse University in 1975.
Between 1975 and 1976, Sarkar worked for TACO Division of General Instrument. Between 1976 and 1985, he was a faculty member at Rochester Institute of Technology; he also briefly held a research fellowship position at Gordon McKay Laboratory for Applied Sciences in Harvard University in between 1977 and 1978. In 1985, he became a professor at Syracuse University and held the position until his death. He died on March 12, 2021, in Syracuse, New York.
Sarkar served as an associate editor for the IEEE Transactions on Electromagnetic Compatibility between 1986 and 1989 and for the IEEE Transactions on Antennas and Propagation between 2004 and 2010. He was the 2014 president of the IEEE Antennas & Propagation Society and the vice president of the Applied Computational Electromagnetics Society (ACES). Sarkar also served as a board member for journals such as Digital Signal Processing, Journal of Electromagnetic Waves and Applications and Microwave and Optical Technology Letters.
Sarkar was the president of OHRN Enterprises, Inc., an incorporated business specializing in computer services and system analysis.
Research and awards
Sarkar's research interests focused on "numerical solutions of operator equations arising in electromagnetics and signal processing with application to system design." He is the auth |
https://en.wikipedia.org/wiki/Amazo | Amazo is a supervillain appearing in American comic books published by DC Comics. The character was created by Gardner Fox and Mike Sekowsky and first appeared in The Brave and the Bold #30 (June 1960) as an adversary of the Justice League of America. Since debuting during the Silver Age of Comic Books, the character has appeared in comic books and other DC Comics-related products, including animated television series, trading cards and video games. Traditionally, Amazo is an android created by the villain scientist Professor Ivo and gifted with technology that allows him to mimic the abilities and powers of superheroes he fights (usually the Justice League), as well as make copies of their weapons (though these copies are less powerful than the originals). His default powers are often those of Flash, Aquaman, Martian Manhunter, Wonder Woman, and Green Lantern (the Justice League founding members that he first fought). He is similar and often compared with the later created Marvel android villain Super-Adaptoid (introduced 1966).
In the New 52 timeline of DC Comics, Amazo begins as the A-Maze Operating System and then becomes an android capable of duplicating superhuman powers. Later on, a sentient Amazo Virus infects research scientist Armen Ikarus and takes over his mind. With Ikarus as a host, the Amazo Virus infects other people, granting them super-powers and controlling their minds before they die within 24 hours.
In live-action media, multiple Amazo robots appeared in the Arrowverse crossover event Elseworlds.
Publication history
Amazo first appeared in a one-off story in The Brave and the Bold #30 (June 1960) and returned as an opponent of the Justice League of America in Justice League of America #27 (May 1964) and #112 (August 1974), plus a briefer appearance in #65 when another antagonist weaponized Amazo and other items from the JLA trophy room. Other significant issues included an encounter with a depowered Superman in Action Comics #480-483 (Februar |
https://en.wikipedia.org/wiki/Luca%20Pagano | Luca Pagano (born July 28, 1978 in Treviso) is an Italian-born poker player who finished third place in the Barcelona Open, a European Poker Tour (EPT) event, in 2004. Since then he has reached six more EPT final tables, finishing in the money 20 times, placing him top of the EPT All-Time Leaderboard. He has also placed in two events at the 2006 World Series of Poker.
He was a Team PokerStar PRO member for almost 15 years and, as of 2017, his total live tournament winnings exceed $2,200,000.
Biography
Pagano studied information science at Ca' Foscari University of Venice; his managerial skills soon took over, and he profitably ran the family's nightlife business.
His love of poker came a little later, during a trip to Slovenia where he played at the casino in Nova Gorica.
He has been a professional poker player since 2004, the year he finished third at the EPT Barcelona Open. His rising career took him to major tournaments in prominent cities, and PokerStars chose him as the Italian face of its Team Pro. Pagano and PokerStars parted ways by mutual agreement in May 2017.
Pagano added many successes to his record and built up a very solid bankroll, rapidly becoming the Italian player with the most in-the-money placements in history.
Together with poker, Pagano is an entrepreneur, Angel Investor, TV commentator and host, main host of the Italian talent and reality show La Casa degli Assi, sponsored by PokerStars and co-owner of Italian professional eSports Team QLASH.
European Poker Tour
In the European Poker Tour, Pagano has recorded 20 in-the-money placements with 7 final tables. Despite this perseverance, he has never taken first place; his best EPT result remains his third place in Barcelona in 2004.
On September 11, 2008, Pagano was named "The Player of the Year" at the European Poker Tour Awards.
Pagano holds the first position in the EPT All-Time |
https://en.wikipedia.org/wiki/Crotonyl-CoA | Crotonyl-coenzyme A is an intermediate in the fermentation of butyric acid, and in the metabolism of lysine and tryptophan. It is important in the metabolism of fatty acids and amino acids.
Crotonyl-CoA and reductases
Before a 2007 report by Alber and coworkers, crotonyl-CoA carboxylases and reductases (CCRs) were known for reducing crotonyl-CoA to butyryl-CoA. The report by Alber and coworkers concluded that a specific CCR homolog, from the bacterium Rhodobacter sphaeroides, was able to reduce crotonyl-CoA to (2S)-ethylmalonyl-CoA, a favorable reaction.
Role of crotonyl-CoA in transcription
Post-translational modification of histones, either by acetylation or crotonylation, is important for the active transcription of genes. Histone crotonylation is regulated by the concentration of crotonyl-CoA, which can change based on environmental cell conditions or genetic factors. |
https://en.wikipedia.org/wiki/Creatine | Creatine is an organic compound with the formula C4H9N3O2. It exists in various tautomers in solutions (among which are the neutral form and various zwitterionic forms). Creatine is found in vertebrates, where it facilitates recycling of adenosine triphosphate (ATP), primarily in muscle and brain tissue. Recycling is achieved by converting adenosine diphosphate (ADP) back to ATP via donation of phosphate groups. Creatine also acts as a buffer.
History
Creatine was first identified in 1832 when Michel Eugène Chevreul isolated it from the basified water-extract of skeletal muscle. He later named the crystallized precipitate after the Greek word for meat, κρέας (kreas). In 1928, creatine was shown to exist in equilibrium with creatinine. Studies in the 1920s showed that consumption of large amounts of creatine did not result in its excretion. This result pointed to the ability of the body to store creatine, which in turn suggested its use as a dietary supplement.
In 1912, Harvard University researchers Otto Folin and Willey Glover Denis found evidence that ingesting creatine can dramatically boost the creatine content of the muscle. In the late 1920s, after finding that the intramuscular stores of creatine can be increased by ingesting creatine in larger than normal amounts, scientists discovered phosphocreatine (creatine phosphate), and determined that creatine is a key player in the metabolism of skeletal muscle. The substance creatine is naturally formed in vertebrates.
The discovery of phosphocreatine was reported in 1927. In the 1960s, creatine kinase (CK) was shown to phosphorylate ADP using phosphocreatine (PCr) to generate ATP. It follows that ATP, not PCr is directly consumed in muscle contraction. CK uses creatine to "buffer" the ATP/ADP ratio.
While creatine's influence on physical performance has been well documented since the early twentieth century, it came into public view following the 1992 Olympics in Barcelona. An August 7, 1992 article in The T |
https://en.wikipedia.org/wiki/Neutron%20emission | Neutron emission is a mode of radioactive decay in which one or more neutrons are ejected from a nucleus. It occurs in the most neutron-rich/proton-deficient nuclides, and also from excited states of other nuclides as in photoneutron emission and beta-delayed neutron emission. As only a neutron is lost by this process the number of protons remains unchanged, and an atom does not become an atom of a different element, but a different isotope of the same element.
Neutrons are also produced in the spontaneous and induced fission of certain heavy nuclides.
Spontaneous neutron emission
As a consequence of the Pauli exclusion principle, nuclei with an excess of protons or neutrons have a higher average energy per nucleon. Nuclei with a sufficient excess of neutrons have a greater energy than the combination of a free neutron and a nucleus with one less neutron, and therefore can decay by neutron emission. Nuclei which can decay by this process are described as lying beyond the neutron drip line.
Two examples of isotopes that emit neutrons are beryllium-13 (which decays to beryllium-12) and helium-5 (which decays to helium-4); both have extremely short mean lives.
In tables of nuclear decay modes, neutron emission is commonly denoted by the abbreviation n.
[Table: Neutron emitters to the left of lower dashed line (see also: Table of nuclides)]
Double neutron emission
Some neutron-rich isotopes decay by the emission of two or more neutrons. For example, hydrogen-5 and helium-10 decay by the emission of two neutrons, hydrogen-6 by the emission of three or four neutrons, and hydrogen-7 by the emission of four neutrons.
Photoneutron emission
Some nuclides can be induced to eject a neutron by gamma radiation. One such nuclide is 9Be; its photodisintegration is significant in nuclear astrophysics, pertaining to the abundance of beryllium and the consequences of the instability of 8Be. This also makes this isotope useful as a |
https://en.wikipedia.org/wiki/Boston%20Internet%20Exchange | The Boston Internet Exchange is an Internet exchange point in Boston, Massachusetts, USA. The Boston IX is owned and operated by Markley Group, hosted in Markley's One Summer Street, Boston datacenter and their Prince Avenue, Lowell datacenter.
The domain bostonix.net was registered by Patrick Gilmore in 2010 and was donated to the exchange. In 2012 the Boston IX went online with the Free Software Foundation as its first participant.
The exchange point supports IPv4 and IPv6 unicast peering, as well as private virtual network interconnects.
Some consider the Boston IX to be the successor of the now defunct Boston MXP started by MAI.net and Vincent Bono. Initially in 2007, Barton Bruce from Global NAPs and TowardEX Technologies had planned to replace the aging Boston MXP due to Global NAPs closing down its business.
See also
Internet Exchange Point
List of Internet exchange points |
https://en.wikipedia.org/wiki/Piet%20Bergveld | Piet Bergveld (born 26 January 1940) is a Dutch electrical engineer. He was professor of biosensors at the University of Twente between 1983 and 2003. He is the inventor of the ion-sensitive field-effect transistor (ISFET) sensor. Bergveld's work has focused on electrical engineering and biomedical technology.
Career
Bergveld was born in Oosterwolde, Friesland on 26 January 1940. In 1960 he started studying electrical engineering at the Eindhoven University of Technology; he would have preferred to study biomedical engineering, but that was not available. Between 1964 and 1965 he did a master's degree at the Philips Natuurkundig Laboratorium. In the latter half of the 1960s Bergveld started working as a scientific employee at the Technische Hogeschool Twente (which later became the University of Twente). Intrigued by discovering and measuring the origin of electronic activity in the human brain, Bergveld started working on a new technique. In 1970, he completed the development of the ion-sensitive field-effect transistor (ISFET) sensor. It was based on his earlier research on the MOSFET (metal–oxide–semiconductor field-effect transistor), which he realized could be adapted into a biosensor for electrochemical and biological applications. In 1973, he earned his PhD at Twente, with a dissertation which delved deeper into the possibilities of ISFET sensors.
Bergveld worked at the University of Twente from 1965 until he took up emeritus status in February 2003. He had been a full professor since 1983. At the university he was one of the driving forces for increased biomedical technology research and one of the founding fathers of the MESA+ research institute.
In 1995 Bergveld was awarded a prize by minister Hans Wijers. He was elected a member of the Royal Netherlands Academy of Arts and Sciences in 1997. In April 2003 Bergveld was made a Knight in the Order of the Netherlands Lion.
See also
ISFET (Ion-sensitive field-effect transistor)
ChemFET (Chemical field-effect |