Mathematics Subject Classification codes:
39B05 General
39B22 Equations for real functions
39B32 Equations for complex functions
39B42 Matrix and operator equations
39B52 Equations for functions with more general domains and/or ranges
39B82 Stability, separation, extension, and related topics

A conjecture of Gy. Petruska. V. Totik (1990)
A Cosine Functional Equation with Restricted Argument (Short Communication). L.B. Etigson (1974)
A Functional Equation with Differences. Gyula Maksa (1976)
A Functional Inequality for Entire Functions Generalizing the Sine Functional Equation (Short Communication). Lawrence Etigson (1974)
A generalization of a theorem of S. Gołąb and M. Kucharzewski. Jan Luchter (1971)
A generalization of Ostrowski's theorem on ultrametric inequalities. J.L. García-Roig (1991)
A necessary and sufficient condition for continuity of additive functions. Jaroslav Smítal (1976)
A note on a pentomino functional equation. Shigeru Haruki (1973)
A note on certain functional determinants. Jaromír Simsa (1992)
A note on certain functional determinants (Summary).
A Proof of the Equivalence of the Equation f(x + y - xy) + f(xy) = f(x) + f(y) and Jensen's Functional Equation (Short Communication). Halina Swiatak (1970)
A remark on k-systems in groups. Michael M. Parmenter (1999)
A special solution of the inhomogeneous Schröder equation. Bogdan Choczewski (1998)
B. A. Reznick (1974)
Aczél's Uniqueness Theorem and Cellular Internity. J.B. Miller (1970)
Aczél's Uniqueness Theorem and Cellular Internity (Short Communication).
Addition Formulae for Field-Valued Continuous Functions on Topological Groups. Christopher L. Morgan (1974)
All General Solutions of Finite Equations. Dragić Banković (1990)
An algebraic functional equation. T.M. Mills (1974)
Convective Heat Transfer: UFAD Ventilated & Occupied Room

The goal of this case study is to validate the Convective Heat Transfer analysis type on the SimScale platform. Validation is performed against experimental data obtained by Chen et al. [1], who analyzed particle transport and distribution in ventilated rooms. Their results can be used here because the particles were concluded to behave as a passive scalar and thus had no impact on the fluid behavior or the room temperature. The setup consists of a room with a UFAD (Underfloor Air Distribution) ventilation system. The heat sources are four human simulators and six ceiling lights. Cool air is fed to the room through inlets on the floor, while the humans and the lights heat up the surrounding air, creating convection currents. Because the hottest air rises to the top of the room before cooling down, the exhaust is placed at the ceiling, ensuring that the hottest air is always leaving the room. The rationale behind UFAD systems is that, because of this arrangement, the air fed to the room can have a higher temperature while still maintaining lower room temperatures, thus saving energy. Chen et al. [1] ran experiments to validate their own simulations for particle transport and distribution, while this study focuses solely on validating the CFD results. Fig.1. On the left is the geometry used for this case; on the right is the geometry provided by Chen et al. [1]. Although the figure provided by Chen et al. [1] has a scale, the only dimensions given in the paper are those of the room. Based on these and other schematic figures provided by Chen et al. [1], it was possible to closely estimate the dimensions of all geometries; they are presented in Table 1.
Table 1: Dimensions for all elements of the simulation (in meters).

Element          x     y    z
Room             4.8   4.2  2.4
Human Simulator  0.38  0.9  0.38
Inlets           0.25  N/A  0.25
Exhaust          0.25  N/A  0.25
Lamps            1.5   N/A  0.1

Analysis Type: Convective Heat Transfer (k-ε turbulence model). Domain: the computational domain was the room with the dimensions specified in Table 1. A hex-dominant mesh was generated with refinements on the lamp, human simulator, inlet, and exhaust surfaces, and boundary layer inflation was added on all surfaces to resolve the thermal boundary layer correctly. The mesh is shown in Figures 2-4. Fig.2. Hex-dominant computational grid. Fig.3. Close-up of surface refinements for the exhaust. Fig.4. Close-up of refinements for the human simulators: surface refinement plus boundary layer inflation. The simulation was a Convective Heat Transfer simulation with a k-ε turbulence model. The boundary conditions are specified below. Chen et al. [1] provided the power generation of each human simulator as a whole, so to run the simulation without a volumetric heat source this power was distributed across the faces of the simulators. The Boussinesq approximation was used because no significant temperature differences were expected.
Inlet: constant volumetric flow rate of 0.0472 m³/s per inlet, at a constant temperature of 293 K
Outlet: constant pressure outlet, gauge pressure = 0 Pa
Human Simulators (top): no-slip walls with turbulent heat flux, power = 9.6 W
Human Simulators (sides): no-slip walls with turbulent heat flux
Lamps: no-slip walls with turbulent heat flux
Walls: all walls are no-slip, each with a different surface temperature:
Wall (+X): 297.7 K
Wall (-X): 298 K
Wall (+Z): 298.5 K
Wall (-Z): 298.3 K
Ceiling: 298.7 K
Floor: 297 K

Relaxation factors:
Field p_rgh = 0.7
Equation U = 0.3
Equation T = 0.5
Equation k = 0.7
Equation epsilon = 0.7

Divergence schemes:
div(phi,U) = Gauss linearUpwind limitedGrad

All other numerics settings were left at their default values. Chen et al. [1] probed seven lines across the middle of the room (at z = 0) and for each line registered 7 points at y = {0.1, 0.3, 0.6, 1.1, 1.4, 1.7, 2.2} m. The locations of these seven lines were obtained from Figure 5 (provided by Chen et al. [1]). Fig.5. Probe line locations provided by Chen et al. [1]. The locations of interest for this study were V1 to V3. Fig.6. Velocity results for probe line 1 (blue = SimScale, red = Chen et al. [1]). Fig.7. Velocity results for probe line 2. A comparison of the velocity profiles reveals that the SimScale results follow the same trend as the experimental results provided by Chen et al. [1]. Because many dimensions had to be estimated and tested, discrepancies were expected. Additionally, the results provided by Chen et al. [1] lack symmetry across the ZY plane, which should not be the case given that the geometry is symmetric. These errors in the experimental results add to the discrepancies between SimScale and Chen et al. [1]; the key point is that both results follow the same trend.
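Since the solver settings above use OpenFOAM terminology (p_rgh, Gauss linearUpwind), the relaxation factors and divergence scheme correspond roughly to dictionary entries like the following. This is only a sketch for orientation, assuming an OpenFOAM-style case setup, not the exact files generated by the platform:

```
// fvSolution (sketch): under-relaxation factors as listed above
relaxationFactors
{
    fields
    {
        p_rgh       0.7;
    }
    equations
    {
        U           0.3;
        T           0.5;
        k           0.7;
        epsilon     0.7;
    }
}

// fvSchemes (sketch): divergence scheme for the momentum equation
divSchemes
{
    div(phi,U)  Gauss linearUpwind limitedGrad;
}
```

The low relaxation factor on U (0.3) trades convergence speed for stability, which is typical for buoyancy-driven flows.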
Results for velocity and temperature are shown next. Temperature was normalized using the following formula:

T_norm = (T - T_supply) / (T_exhaust - T_supply)

with T_supply = 293 K and T_exhaust the temperature at the exhaust. Fig.9. Normalized temperature results for probe line 1 (blue = SimScale, red = Chen et al. [1]). Fig.10. Normalized temperature results for probe line 2. A comparison of the temperature profiles reveals that the SimScale results follow the trend of the experimental results provided by Chen et al. [1] even more closely than the velocity results do. As explained above, some error was expected. Having such close agreement for the temperature indicates that the differences in the velocity profiles come from the estimated inlet dimensions, which affect velocity but not temperature.

Reference:
[1] Z. Zhang and Q. Chen, "Experimental measurements and numerical simulations of particle transport and distribution in ventilated rooms", Atmospheric Environment, vol. 40, no. 18, pp. 3396-3408, 2006.
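As a minimal sketch, the normalization above can be written as follows. T_supply = 293 K comes from the inlet boundary condition; the T_exhaust default used here is only a placeholder, since the measured exhaust temperature is not quoted in the text:

```python
def normalize_temperature(T, T_supply=293.0, T_exhaust=298.0):
    """Normalized temperature: T_norm = (T - T_supply) / (T_exhaust - T_supply).

    T_supply = 293 K is the inlet supply temperature; the T_exhaust
    default is a placeholder value, not from the study.
    """
    return (T - T_supply) / (T_exhaust - T_supply)

# Supply air maps to 0, exhaust air maps to 1:
print(normalize_temperature(293.0))  # 0.0
print(normalize_temperature(298.0))  # 1.0
```

By construction, probe values between supply and exhaust temperature fall in [0, 1], which is what makes the profiles comparable across experiments.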
All horses are the same color - Wikipedia

Paradox arising from an incorrect proof by mathematical induction

"Horse paradox" redirects here. For a Chinese white horse paradox, see When a white horse is not a horse.

All horses are the same color is a falsidical paradox that arises from a flawed use of mathematical induction to prove the statement "All horses are the same color".[1] There is no actual contradiction, as the argument has a crucial flaw that makes it incorrect. This example was originally raised by George Pólya in a 1954 book in different terms: "Are any n numbers equal?" or "Any n girls have eyes of the same color", as an exercise in mathematical induction.[2] It has also been restated as "All cows have the same color".[3] The "horses" version of the paradox was presented in 1961 in a satirical article by Joel E. Cohen. It was stated as a lemma, which in particular allowed the author to "prove" that Alexander the Great did not exist and had an infinite number of limbs.[4]

The argument

[Figure: All horses are the same color paradox, induction step failing for n = 1]

The argument is a proof by induction. First we establish a base case for one horse (n = 1). We then prove that if n horses have the same color, then n + 1 horses must also have the same color.

Base case: One horse

The case with just one horse is trivial. If there is only one horse in the "group", then clearly all horses in that group have the same color.

Inductive step

Assume that n horses always are the same color. Consider a group consisting of n + 1 horses. First, exclude one horse and look only at the other n horses; all these are the same color, since n horses always are the same color. Likewise, exclude some other horse (not identical to the one first removed) and look only at the other n horses.
By the same reasoning, these, too, must be of the same color. Therefore, the first horse that was excluded is of the same color as the non-excluded horses, who in turn are of the same color as the other excluded horse. Hence the first horse excluded, the non-excluded horses, and the last horse excluded are all of the same color, and we have proven that if n horses have the same color, then n + 1 horses will also have the same color. We already saw in the base case that the rule ("all horses have the same color") was valid for n = 1. The inductive step proved here implies that since the rule is valid for n = 1, it must also be valid for n = 2, which in turn implies that the rule is valid for n = 3, and so on. Thus in any group of horses, all horses must be the same color.[2][5]

Explanation

The argument above makes the implicit assumption that the set of n + 1 horses has size at least 3,[3] so that the two proper subsets of horses to which the induction assumption is applied necessarily share a common element. This is not true at the first step of induction, i.e., when n + 1 = 2. Let the two horses be horse A and horse B. When horse A is removed, it is true that the remaining horses in the set are the same color (only horse B remains). The same is true when horse B is removed. However, the statement "the first horse in the group is of the same color as the horses in the middle" is meaningless, because there are no "horses in the middle" (no common elements between the two sets). Therefore, the above proof has a broken logical link. The proof forms a falsidical paradox; it seems to show by valid reasoning something that is manifestly false, but in fact the reasoning is flawed.

^ Łukowski, Piotr (2011). Paradoxes. Springer. p. 15.
^ a b Pólya, George (1954). Induction and Analogy in Mathematics. Princeton University Press. p. 120.
^ a b Thomas VanDrunen, Discrete Mathematics and Functional Programming, Franklin, Beedle and Associates, 2012, Section "Induction Gone Awry".
^ Cohen, Joel E. (1961), "On the nature of mathematical proofs", Worm Runner's Digest, III (3). Reprinted in A Random Walk in Science (R. L. Weber, ed.), Crane, Russak & Co., 1973, pp. 34-36.
^ "All Horses are the Same Color". Harvey Mudd College Department of Mathematics. Archived from the original on 12 April 2019. Retrieved 6 January 2013.
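The failure of the inductive step at n + 1 = 2 can be checked directly. The sketch below (plain Python; the function name is illustrative) compares the two subsets obtained by removing one horse each, which is exactly where the argument needs a shared "middle" horse:

```python
def overlap_after_removals(herd):
    """Return the horses common to the two subsets used in the inductive step."""
    first, last = herd[0], herd[-1]
    without_first = [h for h in herd if h != first]
    without_last = [h for h in herd if h != last]
    return [h for h in without_first if h in without_last]

# For three or more horses the two subsets share a "middle" horse...
print(overlap_after_removals(["A", "B", "C"]))  # ['B']
# ...but for exactly two horses the overlap is empty, so nothing links
# the color of horse A to the color of horse B.
print(overlap_after_removals(["A", "B"]))  # []
```

The empty overlap for a two-horse herd is the broken logical link: the inductive step is valid for every n ≥ 2, but not for n = 1, and the base case alone cannot bridge that gap.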
System Objects for Classification and Code Generation - MATLAB & Simulink - MathWorks India

Train and Optimize Classification Models
Save Classification Model to Disk
Create System Object for Prediction
Define Prediction Functions for Code Generation
Compile MATLAB Function to MEX File
Predict Labels by Using System Objects in Simulink

This example shows how to generate C code from a MATLAB® System object™ that classifies images of digits by using a trained classification model. This example also shows how to use the System object for classification in Simulink®. The benefit of using System objects over MATLAB functions is that System objects are more appropriate for processing large amounts of streaming data. For more details, see What Are System Objects?. This example is based on Code Generation for Image Classification, which is an alternative workflow to Digit Classification Using HOG Features (Computer Vision Toolbox).

Load the digit images data set.

load digitimages.mat

images is a 28-by-28-by-3000 array of uint16 integers. Each page is a raster image of a digit. Each element is a pixel intensity. Corresponding labels are in the 3000-by-1 numeric vector Y. For more details, enter Description at the command line.

Store the number of observations and the number of predictor variables. Create a data partition that specifies to hold out 20% of the data. Extract training and test set indices from the data partition.

n = size(images,3);
p = numel(images(:,:,1));
cvp = cvpartition(n,'Holdout',0.20);
idxTrn = training(cvp);
idxTest = test(cvp);

Rescale the pixel intensities so that they range in the interval [0,1] within each image. Specifically, suppose p_ij is pixel intensity j within image i. For image i, rescale all of its pixel intensities by using this formula:

\hat{p}_{ij} = \frac{p_{ij} - \min_j(p_{ij})}{\max_j(p_{ij}) - \min_j(p_{ij})}
X = double(images);
for i = 1:n
    minX = min(min(X(:,:,i)));
    maxX = max(max(X(:,:,i)));
    X(:,:,i) = (X(:,:,i) - minX)/(maxX - minX);
end

For code generation, the predictor data for training must be in a table of numeric variables or a numeric matrix. Reshape the data to a matrix such that predictor variables correspond to columns and images correspond to rows. Because reshape takes elements column-wise, transpose its result.

X = reshape(X,[p,n])';

Cross-validate an ECOC model of SVM binary learners and a random forest based on the training observations. Use 5-fold cross-validation. For the ECOC model, specify predictor standardization and optimize classification error over the ECOC coding design and the SVM box constraint. Explore all combinations of these values:

For the ECOC coding design, use one-versus-one and one-versus-all.
For the SVM box constraint, use three logarithmically spaced values from 0.1 to 100.

For all models, store the 5-fold cross-validated misclassification rates.

coding = {'onevsone' 'onevsall'};
boxconstraint = logspace(-1,2,3);
cvLossECOC = nan(numel(coding),numel(boxconstraint)); % For preallocation
for i = 1:numel(coding)
    for j = 1:numel(boxconstraint)
        t = templateSVM('BoxConstraint',boxconstraint(j),'Standardize',true);
        CVMdl = fitcecoc(X(idxTrn,:),Y(idxTrn),'Learners',t,'KFold',5,...
            'Coding',coding{i});
        cvLossECOC(i,j) = kfoldLoss(CVMdl);
        fprintf('cvLossECOC = %f for model using %s coding and box constraint=%f\n',...
            cvLossECOC(i,j),coding{i},boxconstraint(j))
    end
end

cvLossECOC = 0.058333 for model using onevsone coding and box constraint=0.100000
cvLossECOC = 0.050000 for model using onevsone coding and box constraint=100.000000
cvLossECOC = 0.120417 for model using onevsall coding and box constraint=0.100000
cvLossECOC = 0.127917 for model using onevsall coding and box constraint=100.000000

For the random forest, vary the maximum number of splits by using the values in the sequence {3^2, 3^3, ..., 3^m}, where m is the largest integer such that 3^m is no greater than n - 1. To reproduce random predictor selections, specify 'Reproducible',true.

maxNumSplits = 3.^(2:floor(log(n-1)/log(3)));
cvLossRF = nan(numel(maxNumSplits),1);
for i = 1:numel(maxNumSplits)
    t = templateTree('MaxNumSplits',maxNumSplits(i),'Reproducible',true);
    CVMdl = fitcensemble(X(idxTrn,:),Y(idxTrn),'Method','bag','Learners',t,...
        'KFold',5);
    cvLossRF(i) = kfoldLoss(CVMdl);
    fprintf('cvLossRF = %f for model using %d as the maximum number of splits\n',...
        cvLossRF(i),maxNumSplits(i))
end

cvLossRF = 0.319167 for model using 9 as the maximum number of splits
cvLossRF = 0.192917 for model using 27 as the maximum number of splits
cvLossRF = 0.015000 for model using 243 as the maximum number of splits
cvLossRF = 0.009583 for model using 2187 as the maximum number of splits

For each algorithm, determine the hyperparameter indices that yield the minimal misclassification rates.

minCVLossECOC = min(cvLossECOC(:))
minCVLossECOC = 0.0500
linIdx = find(cvLossECOC == minCVLossECOC,1);
[bestI,bestJ] = ind2sub(size(cvLossECOC),linIdx);
bestCoding = coding{bestI}
bestCoding = 'onevsone'
bestBoxConstraint = boxconstraint(bestJ)
bestBoxConstraint = 100
minCVLossRF = min(cvLossRF(:))
minCVLossRF = 0.0096
linIdx = find(cvLossRF == minCVLossRF,1);
[bestI,bestJ] = ind2sub(size(cvLossRF),linIdx);
bestMNS = maxNumSplits(bestI)
bestMNS = 2187

The random forest achieves a smaller cross-validated misclassification rate. Train an ECOC model and a random forest using the training data.
Supply the optimal hyperparameter combinations. t = templateSVM('BoxConstraint',bestBoxConstraint,'Standardize',true); MdlECOC = fitcecoc(X(idxTrn,:),Y(idxTrn),'Learners',t,'Coding',bestCoding); t = templateTree('MaxNumSplits',bestMNS); MdlRF = fitcensemble(X(idxTrn,:),Y(idxTrn),'Method','bag','Learners',t); Create a variable for the test sample images and use the trained models to predict test sample labels. testImages = X(idxTest,:); testLabelsECOC = predict(MdlECOC,testImages); testLabelsRF = predict(MdlRF,testImages); MdlECOC and MdlRF are predictive classification models, but you must prepare them for code generation. Save MdlECOC and MdlRF to your present working folder using saveLearnerForCoder. saveLearnerForCoder(MdlECOC,'DigitImagesECOC'); saveLearnerForCoder(MdlRF,'DigitImagesRF'); Create two System objects, one for the ECOC model and the other for the random forest, that: Load the previously saved trained model by using loadLearnerForCoder. Make sequential predictions by the step method. Enforce no size changes to the input data. Enforce double-precision, scalar output. type ECOCClassifier.m % Display contents of ECOCClassifier.m file classdef ECOCClassifier < matlab.System % ECOCCLASSIFIER Predict image labels from trained ECOC model % ECOCCLASSIFIER loads the trained ECOC model from % |'DigitImagesECOC.mat'|, and predicts labels for new observations % based on the trained model. The ECOC model in % |'DigitImagesECOC.mat'| was cross-validated using the training data % in the sample data |digitimages.mat|. 
    properties
        CompactMdl % The compacted, trained ECOC model
    end

    methods(Access = protected)
        function setupImpl(obj)
            % Load ECOC model from file
            obj.CompactMdl = loadLearnerForCoder('DigitImagesECOC');
        end

        function y = stepImpl(obj,u)
            y = predict(obj.CompactMdl,u);
        end

        function flag = isInputSizeMutableImpl(obj,index)
            % Return false if input size is not allowed to change while
            % system is running
            flag = false;
        end
    end
end

type RFClassifier.m % Display contents of RFClassifier.m file

classdef RFClassifier < matlab.System
    % RFCLASSIFIER Predict image labels from trained random forest
    %   RFCLASSIFIER loads the trained random forest from
    %   |'DigitImagesRF.mat'|, and predicts labels for new observations based
    %   on the trained model. The random forest in |'DigitImagesRF.mat'|
    %   was cross-validated using the training data in the sample data
    %   |digitimages.mat|.

    properties
        CompactMdl % The compacted, trained random forest
    end

    methods(Access = protected)
        function setupImpl(obj)
            % Load random forest from file
            obj.CompactMdl = loadLearnerForCoder('DigitImagesRF');
        end

        function y = stepImpl(obj,u)
            y = predict(obj.CompactMdl,u);
        end

        function flag = isInputSizeMutableImpl(obj,index)
            % Return false if input size is not allowed to change while
            % system is running
            flag = false;
        end
    end
end

Note: If you click the button located in the upper-right section of this page and open this example in MATLAB®, then MATLAB® opens the example folder. This folder includes the files used in this example. For System object basic requirements, see Define Basic System Objects.

Define two MATLAB functions called predictDigitECOCSO.m and predictDigitRFSO.m. The functions:

Include the code generation directive %#codegen.
Accept image data commensurate with X.
Predict labels using the ECOCClassifier and RFClassifier System objects, respectively.
Return predicted labels.

type predictDigitECOCSO.m % Display contents of predictDigitECOCSO.m file

function label = predictDigitECOCSO(X) %#codegen
%PREDICTDIGITECOCSO Classify digit in image using ECOC Model System object
%   PREDICTDIGITECOCSO classifies the 28-by-28 images in the rows of X
%   using the compact ECOC model in the System object ECOCClassifier, and
%   then returns class labels in label.
classifier = ECOCClassifier;
label = step(classifier,X);

type predictDigitRFSO.m % Display contents of predictDigitRFSO.m file

function label = predictDigitRFSO(X) %#codegen
%PREDICTDIGITRFSO Classify digit in image using RF Model System object
%   PREDICTDIGITRFSO classifies the 28-by-28 images in the rows of X
%   using the compact random forest in the System object RFClassifier, and
%   then returns class labels in label.
classifier = RFClassifier;
label = step(classifier,X);

Compile the prediction function that achieves better test-sample accuracy to a MEX file by using codegen. Specify the test set images by using the -args argument.

if(minCVLossECOC <= minCVLossRF)
    codegen predictDigitECOCSO -args testImages
else
    codegen predictDigitRFSO -args testImages
end

Verify that the generated MEX file produces the same predictions as the MATLAB function.

if(minCVLossECOC <= minCVLossRF)
    mexLabels = predictDigitECOCSO_mex(testImages);
    verifyMEX = sum(mexLabels == testLabelsECOC) == numel(testLabelsECOC)
else
    mexLabels = predictDigitRFSO_mex(testImages);
    verifyMEX = sum(mexLabels == testLabelsRF) == numel(testLabelsRF)
end

verifyMEX is 1, which indicates that the predictions made by the generated MEX file and the corresponding MATLAB function are the same.

Create a video file that displays the test-set images frame-by-frame.

v = VideoWriter('testImages.avi','Uncompressed AVI');
v.FrameRate = 1;
open(v);
dim = sqrt(p)*[1 1];
for j = 1:size(testImages,1)
    writeVideo(v,reshape(testImages(j,:),dim));
end
close(v);

Define a function called scalePixelIntensities.m that converts RGB images to grayscale, and then scales the resulting pixel intensities so that their values are in the interval [0,1].

type scalePixelIntensities.m % Display contents of scalePixelIntensities.m file

function x = scalePixelIntensities(imdat)
%SCALEPIXELINTENSITIES Scales image pixel intensities
%   SCALEPIXELINTENSITIES scales the pixel intensities of the image such
%   that the result x is a row vector of values in the interval [0,1].
imdat = rgb2gray(imdat);
minimdat = min(min(imdat));
maximdat = max(max(imdat));
x = (imdat - minimdat)/(maximdat - minimdat);

Load the Simulink® model slexClassifyAndDisplayDigitImages.slx.

SimMdlName = 'slexClassifyAndDisplayDigitImages';
open_system(SimMdlName);

The figure displays the Simulink® model. At the beginning of simulation, the From Multimedia File block loads the video file of the test-set images. For each image in the video:

The From Multimedia File block converts and outputs the image to a 28-by-28 matrix of pixel intensities.
The Process Data block scales the pixel intensities by using scalePixelIntensities.m, and outputs a 1-by-784 vector of scaled intensities.
The Classification Subsystem block predicts labels given the processed image data. The block chooses the System object that minimizes classification error. In this case, the block chooses the random forest. The block outputs a double-precision scalar label.
The Data Type Conversion block converts the label to an int32 scalar.
The Insert Text block embeds the predicted label on the current frame.
The To Video Display block displays the annotated frame.

sim(SimMdlName)

The model displays all 600 test-set images and their predictions quickly. The last image remains in the video display. You can instead generate predictions and display them with corresponding images one-by-one by clicking the Step Forward button. If you also have a Simulink® Coder™ license, then you can generate C code from slexClassifyAndDisplayDigitImages.slx in Simulink® or from the command line using slbuild (Simulink). For more details, see Generate C Code for a Model (Simulink Coder).

loadLearnerForCoder | saveLearnerForCoder | predict

Digit Classification Using HOG Features (Computer Vision Toolbox)
Physics - The 5/2 enigma in a spin?

Department of Physics, 104 Davey Lab, Pennsylvania State University, University Park, PA 16802, USA
Department of Theoretical Physics, Tata Institute of Fundamental Research, Homi Bhabha Road, Colaba, Mumbai 400005, India

Theorists and experimentalists have long been trying to explain the existence of a fractional quantum Hall effect where it is not expected.

Figure 1: Plateaus in the Hall resistance, R_H, versus magnetic field occur at quantized values of the resistance, in units of h/e^2. Each plateau is associated with a minimum in the longitudinal resistance R_L. (After [1].)

When a magnetic field pierces a (mostly) clean, two-dimensional electron system at very low temperatures, a plot of the Hall resistance of the electrons versus magnetic field looks like a staircase. The Hall resistance on each of the steps in the staircase, now famous as the manifestation of the integer and fractional quantum Hall effects, is precisely quantized at h/e^2 divided by either an integer or a simple fraction. More than 70 fractions have so far been observed. In this zoo of fractions, it may seem odd that the discovery in 1987 of a resistance plateau (Fig. 1) centered at filling factor 5/2 [1] could spark a more than twenty-year effort to explain it. (The filling factor denotes the number of quantized Landau levels, formed by the magnetic field, that are filled.) Why all the hubbub about a single fraction? To appreciate this, we recall how the observation of more and more fractions has led to realizing the existence of quasiparticles with bizarre properties. In the first report of what came to be known as the fractional quantum Hall effect, the Hall resistance plateau was quantized at h/e^2 divided by the fraction 1/3.
The theory explaining this effect led to the concept of "fractionalized" excitations that no longer behave like particles with the charge of an electron but instead like complex particles with fractional charge e/3. They are also thought to provide a realization of particles obeying fractional statistics, called "anyons." Anyons do not follow the normal exchange rules of Fermi or Bose particles, whose wave functions are, respectively, odd or even under exchange. Instead, their exchange introduces a complex phase factor. Subsequent experiments demonstrated a more extensive phenomenology, producing Hall resistance plateaus corresponding to long sequences of fractions given by the expression ν = n/(2pn ± 1), with n and p being integers. Their theoretical explanation revealed that electrons capture quantized vortices to form particles known as "composite fermions." Composite fermions do not see the external magnetic field but rather a much reduced effective magnetic field; they have, in a manner of speaking, swallowed up part of the external magnetic field. The integer quantum Hall effect for these composite fermions manifests as the fractional quantum Hall effect for electrons at these fractions. The model that takes composite fermions as weakly interacting does not produce fractional Hall states with an even denominator, so new conceptual input is required. The leading theoretical contender for the 5/2 state, first put forth in 1991 by Gregory Moore (now at Rutgers University) and Nicholas Read (at Yale), predicts that the quasiparticles in this state obey non-Abelian braid statistics [2-4], and thus are even more complex than the Abelian anyons of the odd-denominator states. While an esoteric concept, non-Abelian particles could have a real application in constructing qubits for fault-tolerant topological quantum computation [5]. Moore and Read's theory describes the 5/2 state in terms of a p-wave paired state of fully spin-polarized composite fermions.
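The sequences ν = n/(2pn ± 1) are the standard composite-fermion (Jain) fractions, and enumerating a few terms makes the even-denominator puzzle concrete. A small sketch (function name is illustrative):

```python
from fractions import Fraction

def jain_fractions(p, n_max):
    """Filling factors nu = n / (2*p*n +/- 1) for the composite-fermion sequences."""
    plus = [Fraction(n, 2 * p * n + 1) for n in range(1, n_max + 1)]
    minus = [Fraction(n, 2 * p * n - 1) for n in range(1, n_max + 1)]
    return plus, minus

plus, minus = jain_fractions(p=1, n_max=3)
print(plus)   # [Fraction(1, 3), Fraction(2, 5), Fraction(3, 7)]
print(minus)  # [Fraction(1, 1), Fraction(2, 3), Fraction(3, 5)]
```

Every fraction these sequences produce has an odd denominator, which is exactly why a plateau at 5/2, an even-denominator filling, cannot be explained by weakly interacting composite fermions alone.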
This means that a test of whether this model is applicable could be found in an experiment that measures the spin polarization of the 5/2 state. Writing in Physical Review Letters, Michael Stern and colleagues at the Weizmann Institute of Science in Israel, with collaborators at the CNRS in Grenoble, France [6], probe the spin polarization by photoluminescence spectroscopy and find that, for their experimental conditions, the results are consistent with a state that is not spin polarized. This suggests that the physics of the 5/2 state may be even richer than previously thought, with a phase diagram containing both spin-polarized and spin-singlet states. At filling factor 5/2, the lowest Landau level is fully occupied (contributing 2 to the filling factor) and the remaining electrons occupy half of one spin band of the second Landau level. The physical picture that, over the years, has been developed to describe this state is based on several concepts [2-4]. The picture assumes that the state arises from a Fermi sea of composite fermions, as is the case at the half-filled lowest Landau level. While no quantized Hall resistance plateau is seen at filling factor 1/2, there is a fractional quantum Hall effect at the filling of 5/2. To explain this difference, theory postulates that a pairing instability at 5/2 opens a gap in the composite fermion sea. If composite fermions are fully spin polarized, they must undergo a p-wave pairing. In the weak pairing phase, the Abrikosov vortices of the paired state support zero-energy solutions that are symmetric combinations of the composite fermion creation and annihilation operators, called Majorana composite fermions. These particles obey non-Abelian statistics [2-4], and the Pfaffian wave functions of Moore and Read [2], which were motivated by conformal field theory, provide a concrete representation of this physics.
The conceptual novelty of these ideas and the implications they may have for future technology have inspired a great deal of theoretical and experimental work aimed at finding definitive evidence for pairing of composite fermions as well as for the non-Abelian character of the excitations at 5/2. Stern et al. directly determined the spin polarization of the 5/2 state (which occurs at a magnetic field determined by the electron density in their sample) with the technique of photoluminescence spectroscopy. This involves measuring the energy of the photon emitted by a conduction-band electron in the lowest Landau level when it recombines with a valence-band hole. The difference between the energies of the spectral lines for the two circular polarizations is given by the sum of the Zeeman energies of the electron in the conduction band and the hole in the valence band, plus a term that denotes the interaction of the lowest Landau level hole with the rest of the electrons. This term can be shown to be proportional to the spin polarization of the fermions in the fractional quantum Hall state. The principal finding of Stern et al. is that this term is vanishingly small at 5/2; that is, the fermions are not polarized. Other experiments have searched for the spin polarization of the 5/2 state. For example, inelastic light scattering can detect the long-wavelength spin wave mode at the Zeeman energy, which indicates a broken-symmetry state with nonzero spin polarization. Rhone et al. [7] recently used this technique to investigate the 5/2 state, but the tell-tale spin wave was absent, indicating, like Stern et al.'s experiment, that the state is not fully polarized. Another approach is to apply a magnetic field within the plane of the two-dimensional electron gas, without changing the perpendicular field from satisfying the ν = 5/2 condition.
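Written out schematically (the symbols here are chosen for illustration; the article elides its exact notation), the photoluminescence splitting described above is

```latex
\Delta E \;=\; E_Z^{\mathrm{e}} \,+\, E_Z^{\mathrm{h}} \,+\, \Sigma ,
\qquad \Sigma \propto P ,
```

where E_Z^e and E_Z^h are the Zeeman energies of the conduction-band electron and the valence-band hole, Σ is the interaction of the lowest-Landau-level hole with the remaining electrons, and P is the spin polarization of the fractional quantum Hall state. In this notation, Stern et al.'s finding is that Σ, and hence P, is vanishingly small at ν = 5/2.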
This additional parallel field increases the Zeeman energy of the spins, and if the composite fermions are spin polarized, the gap is expected to either increase or stay constant, depending on whether the excitation involves a spin flip or not—an effect that transport measurements should be able to detect. Several experiments have found, instead, that the transport gap of the 5/2 state decreases in a parallel field [8,9]. While initially taken as evidence that the state is not spin polarized, the decay of the fractional quantum Hall effect in these experiments is now understood as being caused either by a competing stripe phase or by physics related to the transverse thickness of the quantum wells. In a recent unpublished work, Wei Pan and collaborators [10] at Sandia National Laboratories, the University of Florida, and Princeton University have taken an alternative approach: they vary the Zeeman energy by changing the magnetic field, but keep the two-dimensional system at ν = 5/2 by controlling the sample density through a capacitive coupling to a front-gate electrode. They have found that the magnetic field dependence of the gap, which increases with increasing field over the range they studied, is consistent with a fully spin-polarized state. A significant observation in their earlier work, and by others, is that the 5/2 state persists up to remarkably high magnetic fields [11,12]. This is relevant because the Zeeman energy at these fields is large compared to the gap, strongly suggesting that the state here is fully spin polarized. The issue of spin polarization is clearly of relevance to the physics of the 5/2 state. When it occurs at sufficiently high magnetic fields, the state must surely be fully spin polarized. Even for small fields, numerical calculations make a good case for full spin polarization [13], but different experimental methods appear to lead to different conclusions. Certain open issues regarding Stern et al.'s experiment ought to be noted, however.
The polarization maxima and minima are slightly offset from integer fillings, the origin of which is unclear. More importantly, the valence hole can be a strong perturbation, and may produce skyrmionlike excitations, as proposed by Wójs [14], thereby leading to a local spin depolarization. If true, it is possible that the conclusion relates to the local spin polarization in the vicinity of the hole rather than the global spin polarization of the 5/2 state. A Knight shift measurement by Tiemann et al. [15] finds a fully spin polarized state. Nonetheless, the experiments of Refs. [6,7] suggest the interesting possibility of two distinct states, one which is spin polarized and one that is not. If so, this would raise obvious questions about the origin of the spin unpolarized state, the phase boundary separating it from the polarized state, and the nature of the phase transition. It may lead the community to reconsider a spin-singlet pairing scenario that was proposed immediately following the discovery of the 5/2 state [16], as well as the role of disorder in creating spin depolarization [14]. A resolution to these questions would be important for identifying the region where non-Abelian statistics may occur. Other aspects of the physical picture leading to non-Abelian anyons at 5/2 are also being tested. Several experiments [17–20] reported that the charge of the quasiparticle in the 5/2 state was e/4, as expected in Moore and Read's theory. The charge is not, however, a proof of non-Abelian statistics. Experiments that measure the interference between quasiparticles traversing different paths along the sample edges [19] and the tunneling conductance across a narrow junction [18] have sought to discriminate between various theoretical possibilities and also to reveal signatures of non-Abelian statistics. These experiments have all been analyzed assuming a fully polarized ground state.
Finally, we note that the 5/2 enigma belongs to a larger mystery surrounding the fractional quantum Hall effect in the second Landau level. The other fractions observed in the second Landau level have counterparts in the lowest Landau level, but detailed calculations show, surprisingly, that the model of weakly interacting composite fermions, which successfully explains the lowest Landau level fractions, is not adequate for these second Landau level fractions. A resolution of the second Landau level fractional quantum Hall effect is likely to lead to much exciting physics. Thanks are due to Steve Simon and Arek Wójs for their insightful comments, and to the DOE for financial support under Grant No. DE-SC0005042.
R. L. Willett et al., Phys. Rev. Lett. 59, 1776 (1987)
G. Moore and N. Read, Nucl. Phys. B 360, 362 (1991)
C. Nayak and F. Wilczek, Nucl. Phys. B 479, 529 (1996)
C. Nayak et al., Rev. Mod. Phys. 80, 1083 (2008)
M. Stern, P. Plochocka, V. Umansky, D. K. Maude, M. Potemski, and I. Bar-Joseph, Phys. Rev. Lett. 105, 096801 (2010)
T. D. N. Rhone et al., http://meetings.aps.org/link/BAPS.2010.MAR.Y2.3; and (unpublished)
J. P. Eisenstein et al., Phys. Rev. Lett. 61, 997 (1988)
C. R. Dean et al., Phys. Rev. Lett. 101, 186806 (2008)
W. Pan et al. (unpublished)
C. Zhang et al., Phys. Rev. Lett. 104, 166801 (2010)
W. Pan et al., Solid State Commun. 119, 641 (2001)
R. Morf, Phys. Rev. Lett. 80, 1505 (1998)
A. Wójs et al., Phys. Rev. Lett. 104, 086801 (2010)
L. Tiemann, G. Gamez, N. Kumada, and K. Muraki, in Proceedings of the 19th International Conference on the Application of High Magnetic Fields in Semiconductor Physics and Nanotechnology (HMF-19), Fukuoka, Japan, 2010 (unpublished)
F. D. M. Haldane and E. H. Rezayi, Phys. Rev. Lett. 60, 956 (1988)
M. Dolev et al., Nature 452, 829 (2008); arXiv:0911.3023v2
R. L. Willett, L. N. Pfeiffer, and K. W. West, Proc. Natl. Acad. Sci. USA 106, 8853 (2009)
I. P. Radu et al., Science 320, 899 (2008)
V. Venkatachalam, A. Yacoby, L. Pfeiffer, and K. W. West, arXiv:1008.1979 (2010)
Jainendra K. Jain received his Ph.D. in 1985 from Stony Brook University, where he returned in 1989 as a faculty member, before moving to Pennsylvania State University in 1998 as the Erwin W. Mueller Professor of Physics. He enjoys thinking about the collective physics of low-dimensional systems, especially the fractional quantum Hall effect. Among his honors are the Oliver E. Buckley Prize of the American Physical Society and the Distinguished Alumnus Award of the Indian Institute of Technology, Kanpur. He was elected a Fellow of the American Academy of Arts and Sciences in 2008, and holds visiting positions at IISER, Kolkata, and IAS, Korea. He has written the book Composite Fermions (Cambridge University Press, 2007).
Optical Probing of the Spin Polarization of the ν = 5/2 Quantum Hall State
M. Stern, P. Plochocka, V. Umansky, D. K. Maude, M. Potemski, and I. Bar-Joseph
54C56 Shape theory
54C15 Retraction
54C35 Function spaces
54C50 Special sets defined by functions
54C60 Set-valued maps
54C65 Selections
54C70 Entropy
A counterexample concerning products in the shape category J. Dydak, S. Mardešić (2005) We exhibit a metric continuum X and a polyhedron P such that the Cartesian product X × P fails to be the product of X and P in the shape category of topological spaces. Jan Dijkstra (1996) We construct a hereditary shape equivalence that raises transfinite inductive dimension from ω to ω+1. This shows that ind and Ind do not admit a geometric characterisation in the spirit of Alexandroff's Essential Mapping Theorem, answering a question asked by R. Pol. A note on the theory of shape of compacta A suspension theorem for the proper homotopy and strong shape theories C. Elvira-Donazar, L. J. Hernandez-Paricio (1995) A Whitehead theorem in CG-shape Algebraic theory of fundamental dimension [Book] All CAT(0) boundaries of a group of the form H × K are CE equivalent Christopher Mooney (2009) M. Bestvina has shown that for any given torsion-free CAT(0) group G, all of its boundaries are shape equivalent. He then posed the question of whether they satisfy the stronger condition of being cell-like equivalent. In this article we prove that the answer is "Yes" in the situation where the group in question splits as a direct product with infinite factors. We accomplish this by proving an interesting theorem in shape theory. An intrinsic characterization of the shape of paracompacta by means of non-continuous single-valued maps. Kieboom, R.W. (1994) An introduction to shape theory for the nonspecialist J.
Segal (1986) An R-stable ANR which is not FR-stable Approximate polyhedra, resolutions of maps and shape fibrations Sibe Mardešić (1981) Bimorphisms in pro-homotopy and proper homotopy Jerzy Dydak, Francisco Ruiz del Portal (1999) A morphism of a category which is simultaneously an epimorphism and a monomorphism is called a bimorphism. The category is balanced if every bimorphism is an isomorphism. In the paper properties of bimorphisms of several categories are discussed (pro-homotopy, shape, proper homotopy) and the question of those categories being balanced is raised. Our most interesting result is that a bimorphism f: X → Y of tow(H₀) is an isomorphism if Y is movable. Recall that tow(H₀) is the full subcategory of pro-H₀ consisting of... Cp-movably regular convergences Čech methods and the adjoint functor theorem Renato Betti (1985) Chapman's category isomorphism for arbitrary ARs P. Mrozik (1985) Classifying finite-sheeted covering mappings of paracompact spaces. Vlasta Matijevic (2003) The main result of the present paper is a classification theorem for finite-sheeted covering mappings over connected paracompact spaces. This theorem is a generalization of the classical classification theorem for covering mappings over a connected locally pathwise connected semi-locally 1-connected space in the finite-sheeted case. To achieve the result we use the classification theorem for overlay structures which was recently proved by S. Mardesic and V. Matijevic (Theorems 1 and 4 of [5]). Constructing exotic retracts, factors of manifolds, and generalized manifolds via decompositions Coshape-invariant functors and Mackey's induced representation theorem Heinrich Kleisli (1981) Counting shape and homotopy types among fundamental absolute neighborhood retracts: an elementary approach. M.A. Morón, F.R. Ruiz del Portal (1993) Cp-movable at infinity spaces, compact ANR divisors and property UVWn. Z. Cerin (1978)
July, 2003 Singular solutions of the Briot-Bouquet type partial differential equations In 1990, Gérard-Tahara [2] introduced the Briot-Bouquet type partial differential equation t ∂_t u = F(t, x, u, ∂_x u), and they determined the structure of singular solutions provided that the characteristic exponent ρ(x) satisfies ρ(0) ∉ {1, 2, ...}. In this paper the author determines the structure of singular solutions in the case ρ(0) ∈ {1, 2, ...}. Hiroshi YAMAZAWA. "Singular solutions of the Briot-Bouquet type partial differential equations." J. Math. Soc. Japan 55 (3) 617 - 632, July, 2003. https://doi.org/10.2969/jmsj/1191418992 Keywords: characteristic exponent, singular solution, the Briot-Bouquet type partial differential equations
03E02 Partition relations
03E30 Axiomatics of classical set theory and its fragments
03E47 Other notions of set-theoretic definability
03E50 Continuum hypothesis and Martin's axiom
03E57 Generic absoluteness and forcing axioms
03E65 Other hypotheses and axioms
03E72 Fuzzy set theory
A Čech function in ZFC Fred Galvin, Petr Simon (2007) A nontrivial surjective Čech closure function is constructed in ZFC. James Cummings, Mirna Džamonja, Saharon Shelah (1995) A dichotomy for P-ideals of countable sets Stevo Todorčević (2000) A dichotomy concerning ideals of countable subsets of some set is introduced and proved compatible with the Continuum Hypothesis. The dichotomy has influence not only on the Suslin Hypothesis or the structure of Hausdorff gaps in the quotient algebra P(ℕ)/fin but also on some higher order statements like for example the existence of Jensen square sequences. A fine hierarchy of partition cardinals F. Drake (1974) A free group of piecewise linear transformations Grzegorz Tomkowicz (2011) We prove the following conjecture of J. Mycielski: There exists a free nonabelian group of piecewise linear, orientation and area preserving transformations which acts on the punctured disk {(x,y) ∈ ℝ²: 0 < x² + y² < 1} without fixed points. A new construction of a Kurepa tree with no Aronszajn subtree A non-special ω₂-tree with special ω₁-subtrees Lajos Soukup (1990) A note on a question of Abe Douglas Burke (2000) Assuming large cardinals, we show that every κ-complete filter can be generically extended to a V-ultrafilter with well-founded ultrapower. We then apply this to answer a question of Abe. A note on almost disjoint refinement A note on automorphisms and partitions B. Węglorz (1988) A note on Boolean algebras Isaac Gorelic (1994) We show that splitting of elements of an independent family of infinite regular size will produce a full size independent set.
A note on forcing with ideals and Hechler forcing Anastasis Kamburelis (2002) A note on nowhere dense sets in ω* A note on the intersection ideal ℳ ∩ 𝒩 Tomasz Weiss (2013) We prove among other theorems that it is consistent with ZFC that there exists a set X ⊆ 2^ω which is not meager additive, yet it satisfies the following property: for each F_σ measure zero set F, X + F belongs to the intersection ideal ℳ ∩ 𝒩. A note on the Ramsey-type theorem of Erdös Ondřej Zindulka (1990) A note on the structure of WUR Banach spaces Spiros A. Argyros, Sophocles Mercourakis (2005) We present an example of a Banach space E admitting an equivalent weakly uniformly rotund norm and such that there is no linear, one-to-one and bounded Φ: E → c₀(Γ) for any set Γ. This answers a problem posed by Fabian, Godefroy, Hájek and Zizler. The space E is actually the dual space Y* of a space Y which is a subspace of a WCG space. A note on the λ-Shelah property
Bounding boxes augmentation for object detection - Albumentations Documentation
Bounding boxes augmentation for object detection¶
Different annotations formats¶
Bounding boxes are rectangles that mark objects on an image. There are multiple formats of bounding box annotations. Each format uses its specific representation of bounding box coordinates. Albumentations supports four formats: pascal_voc, albumentations, coco, and yolo. Let's take a look at each of those formats and how they represent coordinates of bounding boxes. As an example, we will use an image from the dataset named Common Objects in Context. It contains one bounding box that marks a cat. The image width is 640 pixels, and its height is 480 pixels. The width of the bounding box is 322 pixels, and its height is 117 pixels. The bounding box has the following (x, y) coordinates of its corners: top-left is (x_min, y_min) or (98px, 345px), top-right is (x_max, y_min) or (420px, 345px), bottom-left is (x_min, y_max) or (98px, 462px), bottom-right is (x_max, y_max) or (420px, 462px). As you see, coordinates of the bounding box's corners are calculated with respect to the top-left corner of the image, which has (x, y) coordinates (0, 0).
An example image with a bounding box from the COCO dataset
pascal_voc¶
pascal_voc is a format used by the Pascal VOC dataset. Coordinates of a bounding box are encoded with four values in pixels: [x_min, y_min, x_max, y_max]. x_min and y_min are coordinates of the top-left corner of the bounding box. x_max and y_max are coordinates of the bottom-right corner of the bounding box. Coordinates of the example bounding box in this format are [98, 345, 420, 462].
albumentations¶
albumentations is similar to pascal_voc, because it also uses four values [x_min, y_min, x_max, y_max] to represent a bounding box. But unlike pascal_voc, albumentations uses normalized values. To normalize values, we divide the coordinates in pixels for the x- and y-axis by the width and the height of the image. Coordinates of the example bounding box in this format are [98 / 640, 345 / 480, 420 / 640, 462 / 480], which are [0.153125, 0.71875, 0.65625, 0.9625]. Albumentations uses this format internally to work with bounding boxes and augment them.
coco¶
coco is the format used by the Common Objects in Context (COCO) dataset. In coco, a bounding box is defined by four values in pixels: [x_min, y_min, width, height]. They are the coordinates of the top-left corner along with the width and height of the bounding box.
yolo¶
In yolo, a bounding box is represented by four values [x_center, y_center, width, height]. x_center and y_center are the normalized coordinates of the center of the bounding box. To make coordinates normalized, we take pixel values of x and y, which mark the center of the bounding box on the x- and y-axis. Then we divide the value of x by the width of the image and the value of y by the height of the image. width and height represent the width and the height of the bounding box. They are normalized as well. Coordinates of the example bounding box in this format are [((420 + 98) / 2) / 640, ((462 + 345) / 2) / 480, 322 / 640, 117 / 480], which are [0.4046875, 0.840625, 0.503125, 0.24375].
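The arithmetic behind these four formats can be sketched in a few lines of plain Python (the function names are ours for illustration; they are not part of the Albumentations API):

```python
def pascal_voc_to_albumentations(box, img_w, img_h):
    """[x_min, y_min, x_max, y_max] in pixels -> normalized corners."""
    x_min, y_min, x_max, y_max = box
    return [x_min / img_w, y_min / img_h, x_max / img_w, y_max / img_h]

def pascal_voc_to_coco(box):
    """Corners in pixels -> [x_min, y_min, width, height] in pixels."""
    x_min, y_min, x_max, y_max = box
    return [x_min, y_min, x_max - x_min, y_max - y_min]

def pascal_voc_to_yolo(box, img_w, img_h):
    """Corners in pixels -> normalized [x_center, y_center, width, height]."""
    x_min, y_min, x_max, y_max = box
    return [(x_min + x_max) / 2 / img_w,
            (y_min + y_max) / 2 / img_h,
            (x_max - x_min) / img_w,
            (y_max - y_min) / img_h]

# The cat bounding box from the 640x480 example image:
box = [98, 345, 420, 462]
print(pascal_voc_to_albumentations(box, 640, 480))  # [0.153125, 0.71875, 0.65625, 0.9625]
print(pascal_voc_to_coco(box))                      # [98, 345, 322, 117]
print(pascal_voc_to_yolo(box, 640, 480))            # [0.4046875, 0.840625, 0.503125, 0.24375]
```

Running the sketch on the cat box reproduces the numbers quoted above.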
How different formats represent coordinates of a bounding box
Bounding boxes augmentation¶
Just like with image and mask augmentation, the process of augmenting bounding boxes consists of four steps: 1. You import the required libraries. 2. You define an augmentation pipeline. 3. You read images and bounding boxes from the disk. 4. You pass an image and bounding boxes to the augmentation pipeline and receive augmented images and boxes. Some transforms in Albumentations don't support bounding boxes. If you try to use them, you will get an exception. Please refer to this article to check whether a transform can augment bounding boxes. Here is an example of a minimal declaration of an augmentation pipeline that works with bounding boxes:
transform = A.Compose([
    ...,
], bbox_params=A.BboxParams(format='coco'))
Note that unlike image and mask augmentation, Compose now has an additional parameter bbox_params. You need to pass an instance of A.BboxParams to that argument. A.BboxParams specifies settings for working with bounding boxes. format sets the format for bounding box coordinates. It can be either pascal_voc, albumentations, coco, or yolo. This value is required because Albumentations needs to know the coordinates' source format for bounding boxes to apply augmentations correctly. Besides format, A.BboxParams supports a few more settings. Here is an example of Compose that shows all available settings with A.BboxParams:
transform = A.Compose([
    ...,
], bbox_params=A.BboxParams(format='coco', min_area=1024, min_visibility=0.1, label_fields=['class_labels']))
min_area and min_visibility¶
min_area and min_visibility parameters control what Albumentations should do with the augmented bounding boxes if their size has changed after augmentation. The size of bounding boxes could change if you apply spatial augmentations, for example, when you crop a part of an image or when you resize an image. min_area is a value in pixels. If the area of a bounding box after augmentation becomes smaller than min_area, Albumentations will drop that box. So the returned list of augmented bounding boxes won't contain that bounding box.
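The dropping rule that min_area enforces, together with the min_visibility ratio check described below, can be mimicked in plain Python (a sketch of the logic only; the real filtering happens inside Albumentations):

```python
def keep_box(original_area, augmented_area, min_area=0.0, min_visibility=0.0):
    """Return True if an augmented box survives the min_area / min_visibility checks.

    Areas are in pixels; min_visibility is the surviving fraction of the
    original area. This only mimics the filtering rule described in the text.
    """
    if augmented_area < min_area:
        return False  # box became too small in absolute terms
    if original_area > 0 and augmented_area / original_area < min_visibility:
        return False  # too little of the original box remains visible
    return True

# A 322x117 box cropped down to a 40x20 remnant:
print(keep_box(322 * 117, 40 * 20))                      # True: no thresholds set
print(keep_box(322 * 117, 40 * 20, min_area=1024))       # False: 800 px^2 < 1024
print(keep_box(322 * 117, 40 * 20, min_visibility=0.1))  # False: only ~2% remains
```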
min_visibility is a value between 0 and 1. If the ratio of the bounding box area after augmentation to the area of the bounding box before augmentation becomes smaller than min_visibility, Albumentations will drop that box. So if the augmentation process cuts out most of the bounding box, that box won't be present in the returned list of the augmented bounding boxes. Here is an example image that contains two bounding boxes. Bounding box coordinates are declared using the coco format.
An example image with two bounding boxes
First, we apply the CenterCrop augmentation without declaring the min_area and min_visibility parameters. The augmented image contains two bounding boxes.
An example image with two bounding boxes after applying augmentation
Next, we apply the same CenterCrop augmentation, but now we also use the min_area parameter. Now, the augmented image contains only one bounding box, because the other bounding box's area after augmentation became smaller than min_area, so Albumentations dropped that bounding box.
An example image with one bounding box after applying augmentation with 'min_area'
Finally, we apply the CenterCrop augmentation with the min_visibility parameter. After that augmentation, the resulting image doesn't contain any bounding box, because the visibility of all bounding boxes after augmentation is below the threshold set by min_visibility.
An example image with zero bounding boxes after applying augmentation with 'min_visibility'
Class labels for bounding boxes¶
Besides coordinates, each bounding box should have an associated class label that tells which object lies inside the bounding box. There are two ways to pass a label for a bounding box. Let's say you have an example image with three objects: dog, cat, and sports ball. Bounding box coordinates in the coco format for those objects are [23, 74, 295, 388], [377, 294, 252, 161], and [333, 421, 49, 49].
An example image with 3 bounding boxes from the COCO dataset 1.
You can pass labels along with bounding boxes coordinates by adding them as additional values to the list of coordinates.¶ For the image above, bounding boxes with class labels will become [23, 74, 295, 388, 'dog'], [377, 294, 252, 161, 'cat'], and [333, 421, 49, 49, 'sports ball']. Class labels could be of any type: integer, string, or any other Python data type. For example, integer values as class labels will look like the following: [23, 74, 295, 388, 18], [377, 294, 252, 161, 17], and [333, 421, 49, 49, 37]. Also, you can use multiple class values for each bounding box, for example [23, 74, 295, 388, 'dog', 'animal'], [377, 294, 252, 161, 'cat', 'animal'], and [333, 421, 49, 49, 'sports ball', 'item']. 2. You can pass labels for bounding boxes as a separate list (the preferred way).¶ For example, if you have three bounding boxes like [23, 74, 295, 388], [377, 294, 252, 161], and [333, 421, 49, 49], you can create a separate list with values like ['dog', 'cat', 'sports ball'], or [18, 17, 37], that contains class labels for those bounding boxes. Next, you pass that list with class labels as a separate argument to the transform function. Albumentations needs to know the names of all those lists with class labels to join them with augmented bounding boxes correctly. Then, if a bounding box is dropped after augmentation because it is no longer visible, Albumentations will drop the class label for that box as well. Use the label_fields parameter to set names for all arguments in transform that will contain label descriptions for bounding boxes (more on that in Step 4). Step 3. Read images and bounding boxes from the disk.¶ Bounding boxes can be stored on the disk in different serialization formats: JSON, XML, YAML, CSV, etc. So the code to read bounding boxes depends on the actual format of data on the disk. After you read the data from the disk, you need to prepare bounding boxes for Albumentations.
Albumentations expects that bounding boxes will be represented as a list of lists. Each list contains information about a single bounding box. A bounding box definition should have at least four elements that represent the coordinates of that bounding box. The actual meaning of those four values depends on the format of bounding boxes (either pascal_voc, albumentations, coco, or yolo). Besides four coordinates, each definition of a bounding box may contain one or more extra values. You can use those extra values to store additional information about the bounding box, such as a class label of the object inside the box. During augmentation, Albumentations will not process those extra values. The library will return them as is along with the updated coordinates of the augmented bounding box. Step 4. Pass an image and bounding boxes to the augmentation pipeline and receive augmented images and boxes.¶ As discussed in Step 2, there are two ways of passing class labels along with bounding boxes coordinates: 1. Pass class labels along with coordinates.¶ So, if you have coordinates of three bounding boxes, you can add a class label for each bounding box as an additional element of the list along with the four coordinates. A list with bounding boxes and their coordinates will then look like the following: [23, 74, 295, 388, 'dog'], [377, 294, 252, 161, 'cat'], [333, 421, 49, 49, 'sports ball'], or with multiple labels per each bounding box: [23, 74, 295, 388, 'dog', 'animal'], [377, 294, 252, 161, 'cat', 'animal'], [333, 421, 49, 49, 'sports ball', 'item']. You can use any data type for declaring class labels. It can be string, integer, or any other Python data type. Next, you pass an image and bounding boxes for it to the transform function and receive the augmented image and bounding boxes.
transformed = transform(image=image, bboxes=bboxes)
transformed_bboxes = transformed['bboxes']
Example input and output data for bounding boxes augmentation 2.
Pass class labels in a separate argument to transform (the preferred way).¶ Let's say you have coordinates of three bounding boxes. You can create a separate list that contains class labels for those bounding boxes:
class_labels = ['dog', 'cat', 'sports ball']
Then you pass both bounding boxes and class labels to transform. Note that to pass class labels, you need to use the name of the argument that you declared in label_fields when creating an instance of Compose in step 2. In our case, we set the name of the argument to class_labels.
transformed = transform(image=image, bboxes=bboxes, class_labels=class_labels)
Example input and output data for bounding boxes augmentation with a separate argument for class labels
Note that label_fields expects a list, so you can set multiple fields that contain labels for your bounding boxes. So if you declare Compose like
transform = A.Compose([
    ...,
], bbox_params=A.BboxParams(format='coco', label_fields=['class_labels', 'class_categories']))
you can pass both lists to transform:
class_categories = ['animal', 'animal', 'item']
transformed = transform(image=image, bboxes=bboxes, class_labels=class_labels, class_categories=class_categories)
transformed_class_categories = transformed['class_categories']
Using Albumentations to augment bounding boxes for object detection tasks
How to use Albumentations for detection tasks if you need to keep all bounding boxes
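The way labels travel with their boxes can be illustrated with a small plain-Python sketch (the crop logic below is a hypothetical stand-in, not the Albumentations implementation): values after the four coordinates pass through untouched, and when a box is dropped its labels disappear with it.

```python
def crop_boxes(bboxes, crop_x, crop_y, crop_w, crop_h):
    """Shift coco-format boxes into a crop window; drop boxes fully outside.

    Any values after the first four (e.g. class labels) are passed through
    unchanged. Hypothetical illustration only, not the library's algorithm.
    """
    result = []
    for box in bboxes:
        x, y, w, h, *extra = box
        nx, ny = x - crop_x, y - crop_y
        # keep only boxes that still overlap the crop window
        if nx + w <= 0 or ny + h <= 0 or nx >= crop_w or ny >= crop_h:
            continue
        result.append([nx, ny, w, h, *extra])
    return result

bboxes = [[23, 74, 295, 388, 'dog'], [377, 294, 252, 161, 'cat'],
          [333, 421, 49, 49, 'sports ball']]
# Crop a 300x480 window starting at x=330: the 'dog' box lies entirely
# to the left of the window and is dropped, label included.
print(crop_boxes(bboxes, 330, 0, 300, 480))
# [[47, 294, 252, 161, 'cat'], [3, 421, 49, 49, 'sports ball']]
```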
Plot or return output power spectrum of time series model or disturbance spectrum of linear input/output model - MATLAB spectrum - MathWorks 한국
y(t) = He(t)
y(t) = Gu(t) + He(t)
The substitution z = e^{jωT_s} maps the unit circle to the real frequency axis. The function plots the spectrum only for frequencies smaller than the Nyquist frequency π/T_s, and uses the default value of 1 time unit when T_s is unspecified.
Physics - Particle assembly from fluids
Particle assembly from fluids
Advances in the technology of microfluidics should make it possible to assemble complex structures out of individual particles. Credit: Schneider et al.
Figure 1: Flow-driven assembly, one particle at a time, forms distinct letters.
Microfluidic technologies, which refer to a wide class of methods for controlling fluid flows at the scale of a few hundred microns and smaller, are leading to new approaches for addressing biophysical and biochemical problems at the scale of individual cells, and to innovative uses of droplets, bubbles, and small particles. Most applications of microfluidics focus on channeling or screening minuscule amounts of liquid and whatever chemicals or particles are carried within it. But the same technology that allows us to control liquids on the micron scale can be helpful in the assembly of complex objects out of smaller components. This is the idea behind a proposal for a microfluidic assembly device that Tobias Schneider, Shreyas Mandre, and Michael Brenner at Harvard University present in a paper appearing in Physical Review Letters [1]. They propose basic principles, supported by numerical simulations, for using fluid motion to control the trajectories of individual particles so as to assemble the particles into more complicated aggregates. One route to organizing complex structures out of micron-scale particles is to specially design each component to recognize its mate (or mates) through selective interactions and assemble them piece by piece. For example, particles floating at a liquid-air interface can self-assemble due to capillary forces (a "capillary bond"). Here, the discrete control comes from spatially patterning different wettabilities over the surface of the particles [2].
Alternatively, thermal fluctuations can promote the binding of colloid particles containing "programmed" DNA strands [3] or the interlocking of appropriately shaped "lock-and-key" colloids [4]. Finally, small numbers of particles in close proximity can be manipulated with electric fields [5] and laser tweezers [6]. In their work, Schneider et al. explore the possibility of using the force of moving fluid to bring two or many particles together into a static configuration. They consider a flat chamber—somewhat like a closed petri dish—into or out of which fluid can flow through evenly spaced inlets along the rim (Fig. 1). The flow rates of the inlets can be independently controlled. Particles introduced at the various inlets are then manipulated by the flow, and when the particles get close together, the authors assume they are irreversibly bound together, as can be achieved by chemical means. With this simple model in mind, Schneider et al. ask: How many flow rates are needed to manipulate particles? By design, the team focuses their attention on microfluidic configurations where inertial effects are typically small. This "low Reynolds number" approximation is central to their mathematical characterization of the fluid dynamics. The fluid motion in this limit depends linearly on all of the "knobs" available for generating and manipulating the flow through the inlets. In this picture, if there are no external forces on the individual particles, they tend to move with the local fluid velocity, which in turn can be regulated by the flow rates specified at various entrances to and exits from the microfluidic device. Suppose the goal is to organize particles by manipulating all of them simultaneously. In one algorithm, the particle trajectories are taken as given, and one solves for the flow rates at each inlet that make the specified particle trajectories possible. Schneider et al.
find that this approach, while feasible for a few particles, cannot manipulate a large number of particles. Instead, they propose a sequential algorithm where the particles are assembled one at a time into a larger organized aggregate. To understand how a finite number of controls (i.e., the flow rates at the boundary inlets), in principle, allow a sequential algorithm for assembly, consider the two-dimensional case. To control the position of the large aggregate requires specifying two coordinates, i.e., this step corresponds to two degrees of freedom. The location of the free particle to be added to the aggregate also requires two degrees of freedom. Control of the aggregate in order to properly locate the free particle at a specified position around the periphery requires creating a stagnation point in the flow about the aggregate (technically, this step requires a velocity field that varies linearly with position measured about the aggregate), which involves the orientation of the flow and its magnitude, i.e., there are two more degrees of freedom. Finally, the system must conserve the fluid volume, so the controlled flow rates must add to zero. Thus, there are seven degrees of freedom and consequently seven inlets or flow rates are required. The same arguments applied to the three-dimensional case suggest that correspondingly more flow rates are required to sequentially structure an aggregate. What kinds of two-dimensional assemblies might be possible with a fluidic manipulator like this? As one example among many possibilities, Schneider et al. give an instructional demonstration where letters of the alphabet are organized by manipulating independent particles and assembling them into recognizable shapes. In this way, they show their sequential algorithm can organize letters such as I, B, and M (Fig. 1). Consequently, they suggest that in two dimensions these ideas should allow the construction of arbitrarily shaped aggregates.
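In our own notation (not from the paper), the two-dimensional inlet count argued above can be summarized as:

```latex
N_{\text{inlets}}
  = \underbrace{2}_{\text{aggregate position}}
  + \underbrace{2}_{\text{free-particle position}}
  + \underbrace{2}_{\text{stagnation-point flow}}
  + \underbrace{1}_{\text{volume conservation}}
  = 7
```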
The authors recognize that there may be limitations to this approach, and some of them may prove daunting. For example, for micron-size (or smaller) particles Brownian motion is important, and such thermal noise may prove a significant barrier if the particle trajectories are highly sensitive. The latter is possible since it is well known that coupled nonlinear ordinary differential equations, such as those that describe the particle dynamics in these systems, can be chaotic, meaning they are sensitive to small perturbations. Moreover, the Harvard group's model is based on a set of assumptions, each of which neglects certain physical effects, some of which are known to have systematic influences on particle motion in microfluidic systems. As but one example, small inertial effects can position small particles by causing them to drift across streamlines [7]. Finally, in these confined systems, hydrodynamic interactions, i.e., the movement of one particle being influenced by a boundary or by another particle, may create unforeseen difficulties. Instabilities of this kind plague the stable formation of microfluidic crystals [8]. In spite of potential hurdles, the idea of microfluidic assembly, one particle at a time, suggests intriguing possibilities, and other particle-scale manipulation approaches fall into the same category. Imagine an assembly line where particles of different material types, sizes, and shapes, as well as different cells, are available for particle-by-particle assembly into a complex aggregate. At the least, such an approach may make new kinds of biological assays possible; at its most speculative, new kinds of machines may become possible.

The author thanks Dr. Bo Sun for helpful feedback.

1. T. M. Schneider, S. Mandre, and M. P. Brenner, Phys. Rev. Lett. 106, 094503 (2011)
2. N. Bowden, A. Terfort, J. Carbeck, and G. M. Whitesides, Science 276, 233 (1997)
3. C. A. Mirkin, R. L. Letsinger, R. C. Mucic, and J. J. Storhoff, Nature 382, 607 (1996)
4. S. Sacanna, W. T. M. Irvine, P. M. Chaikin, and D. J. Pine, Nature 464, 575 (2010)
5. A. E. Cohen, Phys. Rev. Lett. 94, 118102 (2005)
6. D. R. Grier and Y. Roichman, Appl. Opt. 45, 880 (2006)
7. D. Di Carlo, Lab Chip 9, 3038 (2009)
8. T. Beatus, T. Tlusty, and R. Bar-Ziv, Nature Phys. 2, 743 (2006)

Howard A. Stone is the Donald R. Dixon '69 and Elizabeth W. Dixon Professor in Mechanical and Aerospace Engineering at Princeton University. He received his S.B. degree in chemical engineering from the University of California, Davis, and a Ph.D. in chemical engineering from Caltech. From 1989 to 2009 he was on the faculty in the School of Engineering and Applied Sciences at Harvard University. His research interests are in fluid dynamics, especially as they arise in research and applications at the interface of engineering, chemistry, and physics. He was the first recipient of the G. K. Batchelor Prize in Fluid Dynamics, awarded in August 2008. In 2009 he was elected to the National Academy of Engineering, and in 2008 he was recognized as an Outstanding Referee by the American Physical Society.

Algorithm for a Microfluidic Assembly Line
Tobias M. Schneider, Shreyas Mandre, and Michael P. Brenner
34C11 Growth, boundedness
34C05 Location of integral curves, singular points, limit cycles
34C07 Theory of limit cycles of polynomial and analytic vector fields (existence, uniqueness, bounds, Hilbert's 16th problem and ramifications)
34C10 Oscillation theory, zeros, disconjugacy and comparison theory
34C12 Monotone systems
34C14 Symmetries, invariants
34C15 Nonlinear oscillations, coupled oscillators
34C20 Transformation and reduction of equations and systems, normal forms
34C23 Bifurcation
34C25 Periodic solutions
34C26 Relaxation oscillations
34C27 Almost and pseudo-almost periodic solutions
34C28 Complex behavior, chaotic systems
34C37 Homoclinic and heteroclinic solutions
34C40 Equations and systems on manifolds
34C41 Equivalence, asymptotic equivalence
34C45 Invariant manifolds
34C46 Multifrequency systems
34C55 Hysteresis
34C60 Qualitative investigation and simulation of models
A class of growth and bounds to solutions of a differential equation. Knežević-Miljanović, Julka (1999)
A class of nonlinear differential equations on the space of symmetric matrices. Dragan, Vasile, Freiling, Gerhard, Hochhaus, Andreas, Morozan, Toader (2004)
A counterexample in the theory of linear singularly perturbed systems Raúl Naulin (1991)
A modification and comparison of Filippov and Viktorovskij generalized solutions Jaroslav Pelant (1979)
A note on bounded, periodic and stable solutions of differential equations J.
Knežević-Miljanović (1987)
A note on comparison theorems for third-order linear differential equations Jozef Rovder (1976)
A note on existence of bounded solutions of an $n$-th order ODE on the real line Bohumil Krajc (1998)
A note on linear differential equations of the second order with the same basic central dispersion of the first kind Miroslav Laitoch (1981)
A note on linear perturbations of oscillatory second order differential equations Renato Manfrin (2010)
Under suitable hypotheses on $\gamma(t)$, $\lambda(t)$, $q(t)$ we prove some stability results which relate the asymptotic behavior of the solutions of $u'' + \gamma(t)u' + (q(t) + \lambda(t))u = 0$ to the asymptotic behavior of the solutions of $u'' + q(t)u = 0$.
A note on the existence of $\Psi$-bounded solutions for a system of differential equations on $\mathbb{R}$
A note on the qualitative behaviour of some second order nonlinear differential equations. Nápoles V., Juan E. (2002)
A note to a certain nonlinear differential equation of the third-order Vladimír Vlček (1988)
A remark on the oscillation of ordinary second order nonlinear differential equations J. Detki (1981)
A Role of Abel's Equation in the Stability Theory of Differential Equations F. NEUMAN (1971)
A singular initial value problem for second and third order differential equations Wojciech Mydlarczyk (1995)
hyad.es | iPython Notebook viewer for GNOME Sushi

I made a .ipynb notebook viewer for gnome-sushi, which I'm releasing into the wild as this GitHub Gist (under the GPL, per the original).

Update 2020: Thanks to a recent pull request, this can now be installed by placing it in ~/.local/share/sushi/viewers.

- If for some reason you don't have write access to /tmp, you're gonna have a bad time (can be ameliorated by changing the tmpdir constant).
- No thumbnailer (seems like a lot more work that I don't have time for right now — note to self: https://developer.gnome.org/integration-guide/stable/thumbnailer.html.en).
- Without internet access, it will only display a white screen. This is because Jupyter nbconvert uses MathJax, jQuery and require.min.js, which are externally hosted (but it dumps half an MB of CSS into the preamble anyway). If any of these resources returns a 404, the WebKit webview crashes. This is also why I create a file custom.css in the tmpdir. I could alternatively remove references to these with some sed/awk magic, but then e.g. \LaTeX rendering would break. Interestingly, 404s returned from CSS URLs (e.g. fontawesome) don't crash the webview.
- I'm assuming a MIME type of application/x-ipynb+json — change as necessary.
Set initial conditions for a transient structural model - MATLAB structuralIC - MathWorks España

Examples: Specify Initial Velocity; Specify Nonconstant Initial Displacement by Using Function Handle; Use Static Solution as Initial Condition

structuralIC(structuralmodel,'Displacement',u0,'Velocity',v0)
structuralIC(___,RegionType,RegionID)
structuralIC(structuralmodel,Sresults)
structuralIC(structuralmodel,Sresults,iT)
struct_ic = structuralIC(___)

structuralIC(structuralmodel,'Displacement',u0,'Velocity',v0) sets initial displacement and velocity for the entire geometry.
structuralIC(___,RegionType,RegionID) sets initial displacement and velocity for a particular geometry region, using the arguments from the previous syntax.
structuralIC(structuralmodel,Sresults) sets initial displacement and velocity using the solution Sresults from a previous structural analysis on the same geometry. If Sresults is obtained by solving a transient structural problem, then structuralIC uses the solution Sresults for the last time-step.
structuralIC(structuralmodel,Sresults,iT) uses the solution Sresults for the time-step iT from a previous structural analysis on the same geometry.
struct_ic = structuralIC(___) returns a handle to the structural initial conditions object.

Specify initial velocity values for the entire geometry and for a particular face. Create a geometry and include it in the model. Plot the geometry. Specify zero initial velocity on the entire geometry. When you specify only the initial velocity or initial displacement, structuralIC assumes that the omitted parameter is zero. For example, here the initial displacement is also zero.
structuralIC(structuralmodel,'Velocity',[0;0;0])
InitialDisplacement: []
Update the initial velocity on face 2 to model impulsive excitation.
structuralIC(structuralmodel,'Face',2,'Velocity',[0;60;0])

Specify the initial z-displacement to be dependent on the coordinates x and y. Create the geometry and include it in the model. Plot the geometry. Specify zero initial displacement on the entire geometry.
structuralIC(structuralmodel,'Displacement',[0;0;0])
InitialVelocity: []
Now change the initial displacement in the z-direction on face 2 to a function of the coordinates x and y:

$u(0) = \begin{bmatrix} 0 \\ 0 \\ x^2 + y^2 \end{bmatrix}$

Write the following function file and save it to a location on your MATLAB® path.
function uinit = initdisp(location)
uinit(3,:) = location.x.^2 + location.y.^2;
Pass the initial displacement to your structural model.
structuralIC(structuralmodel,'Face',2,'Displacement',@initdisp)
InitialDisplacement: @initdisp

Use a static solution as an initial condition for a dynamic structural model.
'SurfaceTraction', ... [0;1E6;0]);

structuralmodel — Transient structural model
Transient structural model, specified as a StructuralModel object. The model contains the geometry, mesh, structural properties of the material, body loads, boundary loads, boundary conditions, and initial conditions.

u0 — Initial displacement
Initial displacement, specified as a numeric vector or function handle. A numeric vector must contain two elements for a 2-D model and three elements for a 3-D model. The elements represent the components of the initial displacement. Use a function handle to specify a spatially varying initial displacement. The function must return a two-row matrix for a 2-D model and a three-row matrix for a 3-D model. Each column of the matrix corresponds to the initial displacement at the coordinates provided by the solver. For details, see More About.
Example: structuralIC(structuralmodel,'Face',[2,5],'Displacement',[0;0;0.01])

v0 — Initial velocity
Initial velocity, specified as a numeric vector or function handle.
A numeric vector must contain two elements for a 2-D model and three elements for a 3-D model. The elements represent the components of the initial velocity. Use a function handle to specify a spatially varying initial velocity. The function must return a two-row matrix for a 2-D model and a three-row matrix for a 3-D model. Each column of the matrix corresponds to the initial velocity at the coordinates provided by the solver. For details, see More About.
Example: structuralIC(structuralmodel,'Face',[2,5],'Displacement',[0;0;0.01],'Velocity',[0;60;0])

When you apply multiple initial condition assignments, the solver uses these precedence rules to determine the initial condition:
- For multiple assignments to the same geometric region, the solver uses the last applied setting.
- For separate assignments to a geometric region and the boundaries of that region, the solver uses the specified assignment on the region and chooses the assignment on the boundary as follows: the solver gives an 'Edge' assignment precedence over a 'Face' assignment, even if you specify a 'Face' assignment after an 'Edge' assignment. The precedence levels are 'Vertex' (highest precedence), 'Edge', 'Face', 'Cell' (lowest precedence).
- For an assignment made with the results object, the solver uses that assignment instead of all previous assignments.

Sresults — Structural model solution
StaticStructuralResults object | TransientStructuralResults object
Structural model solution, specified as a StaticStructuralResults or TransientStructuralResults object. Create Sresults by using solve.
Example: structuralIC(structuralmodel,Sresults,21)

struct_ic — Handle to initial conditions
Handle to initial conditions, returned as a GeometricStructuralICs or NodalStructuralICs object. See GeometricStructuralICs Properties and NodalStructuralICs Properties.
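The precedence rules for competing geometric assignments can be sketched as a small resolver. This is a toy Python illustration of the stated rules (last-applied wins within a region type; 'Vertex' beats 'Edge' beats 'Face' beats 'Cell'), not MathWorks code, and it ignores the special case of results-object assignments:

```python
# Lower number = higher precedence, per the documented ordering.
PRECEDENCE = {"Vertex": 0, "Edge": 1, "Face": 2, "Cell": 3}

def resolve_ic(assignments):
    """Return the winning assignment from a list applied in order.

    A later assignment wins only if its region type has equal or higher
    precedence than the current winner's; within the same region type,
    the last applied setting therefore wins.
    """
    best = None
    for a in assignments:
        if best is None or PRECEDENCE[a["region"]] <= PRECEDENCE[best["region"]]:
            best = a
    return best

# An 'Edge' assignment keeps precedence even if a 'Face' assignment comes later:
winner = resolve_ic([{"region": "Edge", "value": 2},
                     {"region": "Face", "value": 3}])
```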
structuralIC associates the structural initial condition with the geometric region in the case of a geometric assignment, or the nodes in the case of a results-based assignment. StructuralModel | structuralProperties | structuralDamping | structuralBodyLoad | structuralBoundaryLoad | structuralBC | structuralSEInterface | solve | reduce | findStructuralIC | GeometricStructuralICs Properties | NodalStructuralICs Properties
How to calculate the price of a LP token - LP-Swap Academy

How to calculate the price of a LP token
Recall that LP tokens, for instance WBNB-CAKE LP tokens, are given to liquidity providers in exchange for the pair of tokens WBNB and CAKE they lock into the pool. How do we calculate the value, or price in dollars, of a WBNB-CAKE LP token? Perhaps counter-intuitively, the answer is not the mean price of WBNB and CAKE. The price $p^{\mathrm{LP}}$ (in dollars) of a WBNB-CAKE LP token depends on five parameters: the price $p^{\mathrm{WBNB}}$ of WBNB, the price $p^{\mathrm{CAKE}}$ of CAKE, the quantity $x^{\mathrm{CAKE}}$ of CAKE in the pool, the quantity $x^{\mathrm{WBNB}}$ of WBNB in the pool, and the total number $Q$ of WBNB-CAKE LP tokens representing the pool. The WBNB-CAKE LP tokens represent equal shares of the pool, so from the total value inside the pool we deduce the price of each WBNB-CAKE LP token:

$p^{\mathrm{LP}}=\frac{p^{\mathrm{WBNB}}\times x^{\mathrm{WBNB}} + p^{\mathrm{CAKE}} \times x^{\mathrm{CAKE}}}{Q}$
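The formula can be checked with a few lines of Python; the pool numbers in the example call are invented for illustration:

```python
def lp_token_price(p_a, x_a, p_b, x_b, q_total):
    """Price of one LP token: total pool value divided by the LP token supply."""
    return (p_a * x_a + p_b * x_b) / q_total

# Hypothetical WBNB-CAKE pool, for illustration only:
price = lp_token_price(p_a=300.0, x_a=1000.0,    # WBNB at $300, 1000 WBNB in pool
                       p_b=3.0,   x_b=100000.0,  # CAKE at $3, 100000 CAKE in pool
                       q_total=20000.0)          # 20000 LP tokens outstanding
# (300*1000 + 3*100000) / 20000 = 30 dollars per LP token
```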
===RSF file format===
As previously mentioned, the lowest level of Madagascar is the RSF file format, which is the format used to exchange information between Madagascar programs. Conceptually, the RSF file format is one of the easiest to understand, as RSF files are simply regularly sampled hypercubes of information. For reference, a hypercube is a hyper dimensional cube (or array) that can best be visualized as an <math>N</math>-dimensional array, where <math>N</math> is between 1 and 9 in Madagascar.

RSF hypercubes are defined by two files, the header and the binary file. The header file contains information about the dimensionality of the hypercube as well as the data contained within the hypercube. Information contained in the header file includes the following:
*number of elements on all axes,
*the origin of the axes,
*the sampling interval of elements on the axes,
*the type of elements in the axes (i.e. float, integer),
*the size of the elements (e.g. single or double precision),
*and the location of the actual binary file.
Since we often want to view this information about files without deciphering it, we store the header file as an ASCII text file in the local directory, usually with the suffix '''.rsf''' . At any time, you can view or edit the contents of the header files using a text editor such as gedit, VIM, or Emacs. The binary file is a file stored remotely (i.e. in a separate directory) that contains the actual hypercube data. Because the hypercube data can be very large (tens of GB or TB), we usually store the binary files in a remote directory with the suffix '''.rsf@''' . The remote directory is specified by the user using the '''DATAPATH''' environmental variable. The advantage of doing this is that we can store the large binary data file on a fast remote filesystem if we want, and we can avoid working in local directories. Sometimes though we need to store RSF files for archiving or to transfer to other machines. Fortunately, we can avoid transferring the header and binary separately by using the combined header/binary format for RSF files. Files can be constructed using the combined header/binary format by specifying additional parameters on the command line, in particular '''out=stdout''' , for any Madagascar program. The output file will then be header/binary combined, which allows you to transfer the file without fear of losing either the header or the binary. Be careful though: header/binary combined files can be very large, and might slow down your local filesystem. A best practice is to only use combined header/binary files when absolutely necessary for file transfers. Note: header/binary combined files are automatically converted to header/binary separate files when processed by a Madagascar program.
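Since the header is plain ASCII key=value text, it is easy to inspect programmatically. A rough Python sketch of a header parser (the key names in the example, such as n1, d1, o1, and in, follow the usual RSF conventions, but treat the parser itself as illustrative; real headers may carry comments and repeated keys):

```python
import re

def parse_rsf_header(text):
    """Parse key=value pairs from an RSF header string into a dict.

    Values may be bare tokens (n1=512) or quoted strings (in="/path/file.rsf@").
    Later occurrences of a key simply overwrite earlier ones.
    """
    params = {}
    for key, val in re.findall(r'(\w+)=("[^"]*"|\S+)', text):
        params[key] = val.strip('"')
    return params

# Hypothetical header contents, for illustration:
header = 'n1=512 o1=0 d1=0.004 esize=4 in="/scratch/data.rsf@" data_format="native_float"'
info = parse_rsf_header(header)
```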
===Additional documentation===

[Figure (fig:rsfformat): Cartoon of the RSF file format. The header file points to the binary file, which can be separate from one another. The header file, which is text, is very small compared to the binary file.]

Because the header and binary are separated from one another, it is possible to lose either the header or the binary for a particular RSF file. If the header is lost, then we can simply reconstruct it using our previous knowledge of the data and a text editor. However, if we lose the binary file, then we cannot reconstruct the data regardless of what we do. Therefore, you should try to avoid losing either the header or the binary data. The best way to avoid data loss is to make your research reproducible so that your results can be replicated later.
Physics - Diagramming Quantum Weirdness
January 24, 2022 • Physics 15, 11
Physicists try to develop a realist interpretation of quantum mechanics by drawing diagrams that represent causal connections and observer-acquired information.
Physicists are developing a new formulation of quantum mechanics that could provide a causal explanation for nonclassical behavior, such as the correlations between entangled particles in Bell experiments. When faced with weird quantum effects, many physicists are content to just calculate the probabilities of experimental outcomes using the Schrödinger wave equation. But does that approach explain why particles behave like waves or how entanglement is maintained over large distances? Or is the math just a computational tool that hides our lack of a fundamental understanding of quantum phenomena? Researchers working on the foundations of physics are hoping to reveal a causal explanation of quantum mechanics by diagramming the elements within an experiment. Everyone would agree that the Schrödinger equation and similar mathematical representations are powerful tools for predicting measurement outcomes (see Viewpoint: Quantum Mechanics Must Be Complex). However, most physicists believe that a measurement corresponds to "something" that exists regardless of being observed. "Deep down they are realists; they think there's a world out there," says David Schmid from the International Centre for Theory of Quantum Technologies in Poland. That might not seem surprising, but the problem is that Bell's theorem—laid out in 1964 by the physicist John Stewart Bell—ruled out "local realism" (see Viewpoint: Closing the Door on Einstein and Bohr's Quantum Debate). The Bell result means that a particle can't carry with it, locally, a "hidden variable" that determines measurement outcomes.
D. Schmid et al., arXiv:2009.03297
This diagram breaks down the causal relationships and inferences within a generic Bell experiment.
Two particles (represented by $S$ and $S'$) are distributed to Alice on the left and Bob on the right. Alice chooses a measurement setting (X) and records an outcome (A), while Bob has his own setting (Y) and records his own outcome (B). The underlying causal structure is shown with vertical lines, whereas knowledge about the systems is shown with horizontal lines.
So, what causes the particle to behave the way it does? Schmid and his colleagues are working on a new mathematical formalism that they hope will lead to a "quantum realism" that respects the Bell theorem while also appealing to our classical intuition. Schmid presented this work at the 20th European Conference on Foundations of Physics held last fall in Paris. The formalism that Schmid and co-workers are developing can be understood as a method for interpreting quantum experiments such as the Bell test. The Bell test involves entangling two particles and then distributing them to two observers, Alice and Bob, who are separated by a large distance. Alice chooses to measure some property of her particle—say, the spin along the horizontal direction—and Bob independently chooses to measure the spin of his particle in another direction. The statistics of the two outcomes show a correlation, which can be captured by the Schrödinger equation. But this math doesn't explain particular outcomes, like Bob recording spin-up during one run of the experiment (see Viewpoint: Causality in the Quantum World). "It's unclear whether the wave equation actually gives a causal explanation of what's going on," Schmid says.
Causal explanations are desirable, Schmid says, because they let us understand how interventions lead to changes, distinguishing correlation from causation. For example, knowing that taking a drug is correlated with recovering from an illness does not imply causation, nor does it teach us what would happen if the drug were used to treat a different illness. But extracting a causal explanation of how the drug works can enable us to answer such questions. Physicists and philosophers have proposed several types of causal explanations for quantum physics. One option is to assume that particles communicate with each other through some sort of superluminal signal (what Albert Einstein called "spooky action at a distance"). Another possibility is a superdeterministic framework in which all the parts of the experiment (including Alice and Bob's measurement choices) were pre-determined by a single cause that occurred long ago. Other quantum interpretations include many-worlds theories (in which the Universe "splits" into two paths at each measurement) and relational quantum mechanics (in which reality is subjective, with Alice and Bob experiencing different worlds). "None of these interpretations have gained a majority following among physicists," Schmid says. The reason for this reluctance, he thinks, is that these interpretations require accepting a radical idea that doesn't fit into our everyday experience of the world. "Each of these interpretations picks a poison and asks you to drink it," Schmid says. That "poison" could be violations of relativity theory or the existence of many worlds. Schmid and his colleagues are looking for a more "palatable" alternative—a quantum realism that resembles classical realism.
These diagrams depict three interpretations of a Bell test experiment. The left diagram shows a "hidden variable" model, in which each particle has a property $(\Lambda, \Lambda')$ that determines what Alice and Bob will measure (A, B).
This model has been ruled out by experiments. The middle diagram shows a superluminal model, in which Alice's particle is measured and then communicates instantaneously with Bob's particle to influence its measurement. The right diagram shows a superdeterministic model, in which a preliminary cause sets not only the properties of the particles but also the choices (X, Y) that Alice and Bob make when measuring.
The team's approach is to allow for new types of mathematical inputs and operations in descriptions of the experiments. The common description of the Bell test, for example, would represent a particle by a classical random variable, say 0 or 1. That variable would serve as an input to a function that describes the measurement devices that Alice and Bob use. Schmid and colleagues abandon classical random variables and conventional functions and propose replacing them with other mathematical entities. This strategy is similar to recent proposals for defining quantum causal models. But there is, so far, no consensus on what the alternative mathematical entities are. "We don't yet have precise mathematical proposals," Schmid says, "but we have a framework for describing what the scope of possibilities is." The researchers' framework comes from a method for diagramming quantum experiments, which is similar to how an engineer might diagram an electrical circuit. The mathematical entities—the inputs and operations—are represented by lines and boxes. This pictorial method makes clear the distinction between causal links (which are oriented vertically) and subjective inferences (which are oriented horizontally).
The former makes up the causal structure of a realist model, while the latter embodies the information that observers like Alice and Bob are able to glean from their measurements. The idea of diagramming quantum physics has been around for decades, says philosopher Alexei Grinbaum from CEA-Saclay in France. He is impressed by the effort of Schmid and colleagues to incorporate causal relationships and a realist picture within their diagrams. But he questions whether this complex mixture will reveal new insights. “If a result isn’t simple, people won’t get it,” Grinbaum says. Schmid and his colleagues are trying to distill a simplified picture from their diagrams. They have so far devised diagram-based rules resembling the logical rules that are the basis of classical realism. They have also shown that their formalism can avoid the constraints placed on quantum realism by the Bell test and other so-called no-go theorems (see Viewpoint: Mind the (Quantum) Context). “In the end, we hope to be left with a theory that is nonclassical but still has the basic features that let you reason causally about the world in the ways that you want to,” Schmid says.
Resume training of Gaussian kernel classification model - MATLAB resume - MathWorks Australia

UpdatedMdl = resume(Mdl,X,Y) continues training with the same options used to train Mdl, including the training data (predictor data in X and class labels in Y) and the feature expansion. The training starts at the current estimated parameters in Mdl. The function returns a new binary Gaussian kernel classification model UpdatedMdl.
UpdatedMdl = resume(Mdl,Tbl,ResponseVarName) continues training with the predictor data in Tbl and the true class labels in Tbl.ResponseVarName.
UpdatedMdl = resume(Mdl,Tbl,Y) continues training with the predictor data in table Tbl and the true class labels in Y.

Train a binary kernel classification model with relaxed convergence control training options by using the name-value pair arguments 'BetaTolerance' and 'GradientTolerance'.
[Mdl,FitInfo] = fitckernel(XTrain,YTrain,'Verbose',1, ...
Predict the test-set labels, construct a confusion matrix for the test set, and estimate the classification error for the test set.
Continue training by using resume with modified convergence control training options.
[UpdatedMdl,UpdatedFitInfo] = resume(Mdl,XTrain,YTrain, ...
The classification error decreases after resume updates the classification model with smaller convergence tolerances. The first training terminates after three iterations because the gradient magnitude becomes less than 1e-1. The second training terminates after 20 iterations because the gradient magnitude becomes less than 1e-2.

Y — Class labels used to train Mdl
categorical array | character array | string array | logical vector | vector of numeric values | cell array of character vectors
Class labels used to train Mdl, specified as a categorical, character, or string array, a logical or numeric vector, or a cell array of character vectors. resume should run only on the same training data and observation weights used to train Mdl.
The resume function uses the same training options used to train Mdl, including the feature expansion.
Example: UpdatedMdl = resume(Mdl,X,Y,'GradientTolerance',1e-5) resumes training with the same options used to train Mdl, except for the absolute gradient tolerance.
If you supply weights, resume normalizes the weights to sum up to the value of the prior probability in the respective class.

At iteration $t$, let $B_t = [\beta_t' \; b_t]$ collect the coefficients and bias. Training terminates when the relative change in $B_t$ satisfies $\left\| \frac{B_t - B_{t-1}}{B_t} \right\|_2 < \text{BetaTolerance}$, or when the gradient of the objective, $\nabla \mathcal{L}_t$, satisfies $\| \nabla \mathcal{L}_t \|_\infty = \max | \nabla \mathcal{L}_t | < \text{GradientTolerance}$.

The default value is 1000 if the transformed data fits in memory (Mdl.ModelParameters.BlockSize), which you specify by using the name-value pair argument when training Mdl. Otherwise, the default value is 100.

UpdatedMdl — Updated kernel classification model
Updated kernel classification model, returned as a ClassificationKernel model object.

Objective function minimization technique: 'LBFGS-fast', 'LBFGS-blockwise', or 'LBFGS-tall'. For details, see Algorithms of fitckernel.
LossFunction — Loss function. Either 'hinge' or 'logit', depending on the type of linear classification model. See Learner of fitckernel.
Lambda — Regularization term strength. See Lambda of fitckernel.
History — History of optimization information. This field also includes the optimization information from training Mdl. This field is empty ([]) if you specify 'Verbose',0 when training Mdl. For details, see Verbose and Algorithms of fitckernel.
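The two stopping tests described in the documentation (relative change in the coefficient vector against BetaTolerance, and the sup-norm of the gradient against GradientTolerance) can be sketched in a few lines of Python. This is illustrative only; the actual LBFGS loop lives inside fitckernel/resume:

```python
import numpy as np

def converged(beta_prev, beta_curr, grad, beta_tol, grad_tol):
    """Stop when the relative change in coefficients is below beta_tol,
    or the sup-norm (max absolute entry) of the gradient is below grad_tol."""
    rel_change = np.linalg.norm(beta_curr - beta_prev) / np.linalg.norm(beta_curr)
    return rel_change < beta_tol or np.max(np.abs(grad)) < grad_tol

# A small gradient triggers the GradientTolerance test even though the
# coefficients are still moving (all numbers here are made up):
done = converged(np.array([1.0, 2.0]), np.array([1.5, 2.5]),
                 grad=np.array([1e-3, -2e-3]), beta_tol=1e-4, grad_tol=1e-2)
```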
fitckernel uses a random feature expansion to approximate the Gaussian kernel

G\left({x}_{1},{x}_{2}\right)=〈\phi \left({x}_{1}\right),\phi \left({x}_{2}\right)〉\approx T\left({x}_{1}\right)T\left({x}_{2}\right)\text{'},

where \phi maps x\in {ℝ}^{p} into a high-dimensional space and T:{ℝ}^{p}\to {ℝ}^{m} is the random feature map

T\left(x\right)={m}^{-1/2}\mathrm{exp}\left(iZx\text{'}\right)\text{'},

where Z\in {ℝ}^{m×p} is a random matrix whose entries are sampled from N\left(0,{\sigma }^{-2}\right).

The default value for the 'IterationLimit' name-value pair argument is relaxed to 20 when working with tall arrays. resume uses a block-wise strategy. For details, see Algorithms of fitckernel.

See Also: ClassificationKernel | fitckernel | predict
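As a rough illustration of this random feature expansion, the Monte Carlo sketch below (plain NumPy, not MathWorks' actual implementation, which uses structured Fastfood-style transforms) shows that the feature inner product T(x1)·T(x2)' approximates the Gaussian kernel. The sample points and bandwidth are arbitrary.

```python
import numpy as np

# Sketch: approximate G(x1, x2) = exp(-||x1 - x2||^2 / (2 sigma^2))
# with the feature map T(x) = m^(-1/2) * exp(i Z x')', Z entries ~ N(0, sigma^-2).
rng = np.random.default_rng(0)
p, m, sigma = 2, 200_000, 1.0
Z = rng.normal(scale=1.0 / sigma, size=(m, p))  # Z in R^(m x p)

def T(x):
    # Length-m complex feature vector for one point x in R^p
    return np.exp(1j * (Z @ x)) / np.sqrt(m)

x1 = np.array([0.3, -0.1])
x2 = np.array([0.1, 0.4])
approx = np.real(T(x1) @ np.conj(T(x2)))                   # feature inner product
exact = np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma**2))   # true Gaussian kernel
```

The approximation error shrinks like m^(-1/2), which is why m (the expansion dimension) trades accuracy against memory.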
\mathbf{R}\left(p\right)=x\left(p\right)\,\mathbf{i}+y\left(p\right)\,\mathbf{j}+z\left(p\right)\,\mathbf{k}

where p is the parameter along the curve, and R is the "arrow" from the origin to the point \left(x\left(p\right),y\left(p\right),z\left(p\right)\right) on the curve. Differentiation with respect to the parameter produces a vector \mathbf{R}\prime \left(p\right) that is tangent to the curve. The Frenet-Serret formalism is then an analysis of the properties of a space curve showing that two scalars, curvature and torsion, completely determine the curve (up to rigid motion).
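As a concrete check of this machinery, curvature and torsion can be computed from the standard formulas κ = |R′×R″|/|R′|³ and τ = (R′×R″)·R‴/|R′×R″|². This sketch uses a hand-picked test curve, the helix R(p) = (a cos p, a sin p, b p), for which both scalars equal a/(a²+b²) = 1/2 when a = b = 1:

```python
import numpy as np

def curvature_torsion(r1, r2, r3):
    """Curvature and torsion from the first three derivatives of R(p)."""
    cr = np.cross(r1, r2)
    kappa = np.linalg.norm(cr) / np.linalg.norm(r1) ** 3
    tau = np.dot(cr, r3) / np.dot(cr, cr)
    return kappa, tau

# Helix R(p) = (a cos p, a sin p, b p), derivatives evaluated at p = 0.7
a, b, p = 1.0, 1.0, 0.7
r1 = np.array([-a * np.sin(p),  a * np.cos(p), b])    # R'
r2 = np.array([-a * np.cos(p), -a * np.sin(p), 0.0])  # R''
r3 = np.array([ a * np.sin(p), -a * np.cos(p), 0.0])  # R'''
kappa, tau = curvature_torsion(r1, r2, r3)            # both 1/2 for this helix
```

The result is independent of p, reflecting the fact that a helix has constant curvature and torsion.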
GROW price prediction - LP-Swap Academy

Growing.fi is a portfolio management tool on the Binance Smart Chain. It includes a dashboard for tracking your portfolio and multiple farms for earning yields on your holdings. The central piece of the Growing ecosystem is the GROW token (you can buy it on LP-Swap). Its current price is Loading.... We predict it to be Loading... in 1 month. Keep reading to learn why. For any question / suggestion, send us a message on Telegram.

GROW utilities

The fundamental value of the GROW token is determined by its utilities.

1. Unlocking dashboard features. If you are PRO (i.e. you hold 1 GROW), you unlock premium dashboard features such as the chart of the evolution of your portfolio through time.
2. Sharing farm profits. When withdrawing from Growing farms, 30% of profit is redistributed to GROW stakers.
3. Incentivizing farm usage. GROW rewards are distributed when you deposit and stake in Growing farms. If you are PRO (i.e. you hold 1 GROW), you get more GROW rewards.

GROW supply evolution

Since \text{price} = \frac{\text{market cap}}{\text{total supply}}, it is key to understand how the supply of GROW evolves.

1. Deposit bonus

If a Growing farm has a 1x deposit bonus multiplier: for every 1 BNB of equivalent value deposited, 0.01 GROW is minted and rewarded as a deposit bonus. Deposit bonuses are locked for 3 days. If you are PRO (i.e. you hold 1 GROW), you get 30% more deposit bonus.

Deposit bonus calculator (for a PRO user):

\frac{\text{deposit} \times \text{multiplier} \times 0.01 \times 1.3 \times \text{GROW price}}{\text{BNB price}}

— GROW price = Loading... — BNB price = Loading...

2. Staking rewards

Each week, 1000 GROW are minted and rewarded to stakers as staking rewards.

3. Profit rewards

If you stake in a Growing farm (e.g. Growing CAKE) and claim your yield profit (CAKE): 30% of your yield profit is converted to BNB and redistributed to the GROW staking pool. For each 1 BNB of profit redistributed, 3 GROW are minted and sent to you as profit rewards.
If you are PRO (i.e. you hold 1 GROW), you get 50% more profit rewards. For a PRO user, each unit of claimed yield is therefore worth:

0.7\times\text{yield} + \frac{0.3\times\text{yield}\times 3 \times 1.5 \times \text{GROW price}}{\text{BNB price}}

4. Dev fund

For every newly minted GROW token, another 0.1 GROW is minted for the Dev Fund.

1. TVL prediction

We first predict Total Value Locked (TVL), on which our predictions of Market Capitalization (MC) and GROW supply will depend. Growing currently has Loading... of TVL. As a comparison, the top 3 yield optimizers (Autofarm, Beefy, PancakeBunny) have TVLs between $500M and $2B. Currently, Growing has only 4 unaudited farms, yet they have reached a TVL of Loading.... This is likely because:

All 4 of their farms (CAKE, BANANA, BNB, BUSD) offer the highest yields on BSC.
The deposit bonus incentivizes using Growing farms.
Without being audited yet, they gained trust by extensively testing their smart contracts during a test phase, being very active in their Telegram, and offering bug bounty rewards.

As they are going to add new farms and be audited by Certik soon, we believe the TVL of Growing could reach $100M in 1 month.

2. MC prediction

GROW currently has a Market Capitalization (MC) of Loading.... As a comparison, the MCs of the tokens of the top 3 yield optimizers (AUTO, BIFI, BUNNY) are between $75M and $200M. There are currently 3 reasons to buy GROW: to get access to Growing's premium dashboard features, to get bigger deposit and farming rewards, and to earn 30% of Growing farms' profit. We believe this last reason will be the one driving the MC up. Assuming (i) Growing has $100M of TVL and (ii) the average daily yield across farms is 0.2%, then $0.2M of profit will be redistributed to GROW stakers each day. If the GROW MC stays at Loading... and all GROW are staked, then GROW stakers would earn a daily yield of Loading.... Such a high yield will entice people to buy GROW. The GROW MC will then increase and the GROW daily yield will decrease.
We believe an equilibrium would be found when the GROW daily yield reaches 0.7%. Therefore, for (i) $100M of TVL, (ii) 0.2% average daily yield and (iii) 0.7% GROW daily yield, the GROW MC would be $29M:

\text{MC} = \frac{\text{avg daily yield} \times \text{TVL}}{\text{GROW daily yield}}

3. GROW supply prediction

GROW currently has a total supply of Loading.... There are 4 sources of GROW emission:

Deposit bonus. In 1 month, Loading... GROW will be emitted, assuming (i) an average 1x bonus multiplier, (ii) Loading... of capital deposited in 1 month to reach $100M, (iii) all users are PRO, (iv) BNB price stays constant.
Staking rewards. In 1 month, 4286 (= 1000 × 30 / 7) GROW will be emitted.
Profit rewards. In 1 month, Loading... GROW will be emitted, assuming (i) an average Loading... TVL during the month, (ii) an average 0.2% daily yield, (iii) all users are PRO, (iv) BNB price stays constant.
Dev fund. For each minted GROW token, another 0.1 GROW is minted for the Dev Fund.

During the next month, at most Loading... GROW should be emitted. So the total supply in 1 month should be at most Loading... GROW.

4. GROW price prediction

We predict GROW price to be Loading... in 1 month.
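The article's market-cap arithmetic can be reproduced in a few lines of Python. This is a sketch of the article's own formulas and assumptions ($100M TVL, 0.2% average daily farm yield, 0.7% equilibrium GROW staker yield), not financial advice:

```python
def predicted_market_cap(tvl, avg_daily_yield, grow_daily_yield):
    # MC = (average daily yield x TVL) / GROW daily yield
    return avg_daily_yield * tvl / grow_daily_yield

# Assumptions stated in the article
mc = predicted_market_cap(100e6, 0.002, 0.007)   # ~$28.6M, rounded to ~$29M in the text
staking_emission_per_month = 1000 * 30 / 7       # weekly 1000 GROW -> ~4286 GROW/month
```

Note that the $29M figure in the text is the result of rounding 100e6 × 0.002 / 0.007 ≈ $28.57M upward.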
Matrix Norm - Maple Help

display a graphical interface to the MatrixNorm function

MatrixNorm(M)

The MatrixNorm(M) calling sequence displays a Maplet application that returns the matrix norm of M. A definition of the matrix norm is given in the Maplet application. By using the Norm drop-down box, control the matrix norm returned. By using the Matrix has real values check box, control the algorithm used to calculate the matrix norm. If the matrix has real entries, a faster algorithm can be used. The default value is determined by using the value of has(M, I). By using the Evaluate result check box, control whether the Maplet application returns the matrix norm of M or the calling sequence required to calculate the matrix norm in the worksheet. The default behavior is to evaluate the result, that is, return the norm. The MatrixNorm sample Maplet worksheet demonstrates how to write a Maplet application that functions similarly to the Maplet application displayed by this routine.

with(Maplets[Examples][LinearAlgebra]):
MatrixNorm(⟨⟨1,3⟩|⟨2,5⟩⟩)

MatrixNorm Sample Maplet
A blow-up result for a viscoelastic system in {ℝ}^{n} Kafini, Mohammad, Messaoudi, Salim A. (2007) A blowup result in a multidimensional semilinear thermoelastic system. Messaoudi, Salim A. (2001) A boundary Harnack principle for infinity-Laplacian and some related results. Bhattacharya, Tilak (2007) A cell complex structure for the space of heteroclines for a semilinear parabolic equation. Robinson, Michael (2009) A class of nonlinear elliptic variational inequalities: Qualitative properties and existence of solutions. Korkut, Luka, Pašić, Mervan, Žubrinić, Darko (2002) A comparison principle for a class of subparabolic equations in Grushin-type spaces. Bieske, Thomas (2007) A comparison principle for an American option on several assets: index and spread options. Laurence, Peter, Stredulinsky, Edward (2003) A comparison theorem for nonlinear operators W. Allegretto (1971) L\left(u\right)=\sum_{i=1}^{n}\left[\left(1+u_t^2\right)\left(u_{x_i x_i}+u_{y_i y_i}\right)+\left(u_{x_i}^2+u_{y_i}^2\right)u_{tt}+2\left(u_{y_i}-u_{x_i}u_t\right)u_{x_i t}-2\left(u_{x_i}+u_{y_i}u_t\right)u_{y_i t}\right]+k\left(x,y,t\right)\left(1+|Du|^2\right)^{3/2}=0 A control on the set where a Green's function vanishes E. Fabes, N. Garofalo, S. Salsa (1990) A critical growth rate for harmonic and subharmonic functions in the open ball in {R}^{n} Krzysztof Samotij (1987) A Fujita-type theorem for the Laplace equation with a dynamical boundary condition. Amann, H., Fila, M.
(1997) A generalized maximum principle and estimates of max vrai u for nonlinear parabolic boundary value problems Jozef Kačur (1976) A Liouville-type theorem for the p-laplacian with potential term Yehuda Pinchover, Achilles Tertikas, Kyril Tintarev (2008) A nonexistence result for Yamabe type problems on thin annuli Mohamed Ben Ayed, Khalil El Mehdi, Mokhless Hammami (2002) A nonexistence theorem for an equation with critical Sobolev exponent in the half space. H. Berestycki, M. Gross, F. Pacella (1992) Luis A Caffarelli, Jean-Michel Roquejoffre (2002) A note on nodal non-radially symmetric solutions to Emden-Fowler equations. Ramos, Miguel, Zou, Wenming (2009) A note on the Poisson-Boltzmann equation A Krzywicki, T Nadzieja (1991)
United States customary units - Simple English Wikipedia, the free encyclopedia

U.S. customary units is the system of units of measurement used to measure things in the United States and U.S. territories. The system of Imperial units is similar and in some parts identical. Length or distance units include the inch, foot, yard and mile. Land units include square miles (2,589,998.47 square meters) and acres (4,046.87 square meters). Common volume units are the teaspoon, tablespoon (3 teaspoons), fluid ounce (two tablespoons), cup (8 fluid ounces), pint (2 cups, or 16 fluid ounces), quart (2 pints, or 32 fluid ounces) and US gallon (16 cups, 128 fluid ounces, or about 3.8 liters). A barrel is the unit used to measure oil. Temperature is measured in degrees Fahrenheit (°F). Here is a formula to convert from °C to °F:

{\frac {9}{5}}C+32

Units of weight and mass include the pound (453.6 grams), which contains 16 ounces. This should not be confused with the British pound, which is a type of money. The different uses of the word pound can cause confusion. Different sizes of ounce are also in use. Some people have been trying to replace these units with the metric system since the 1820s. Much infrastructure in the United States and British Empire was built in past centuries using the old measures. During the 20th century some sectors such as science, medicine and the military of the United States converted to metric, but Americans still use the old units for daily purposes. On the other hand, world trade is conducted using the metric system and, except for the US, the world uses the metric system for almost all purposes.
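The temperature conversion formula can be checked against a couple of familiar reference points (freezing and boiling of water, and the point where the two scales coincide):

```python
def celsius_to_fahrenheit(c):
    # °F = (9/5) * °C + 32
    return 9 / 5 * c + 32
```

For example, celsius_to_fahrenheit(0) gives 32 and celsius_to_fahrenheit(100) gives 212.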
Length
1 point (p) ≈ 0.4 mm
1 pica (P̸) = 12 points ≈ 4.2 mm
1 inch (in) = 6 picas = 2.54 cm
1 foot (ft) = 12 inches = 30.48 cm
1 yard (yd) = 3 feet = 0.9144 m
1 mile (mi) = 1760 yards = 1.609344 km

Area
1 square (survey) foot (sq ft or ft²) ≈ 0.0929 m²
1 square chain (sq ch or ch²) = 4356 sq ft ≈ 404.7 m²
1 acre = 10 sq ch ≈ 4047 m²
1 section = 640 acres ≈ 2.6 km²
1 (survey) township (twp) = 36 sections ≈ 93.2 km²

Volume
1 cubic inch (cu in or in³) ≈ 16 mL
1 cubic foot (cu ft or ft³) = 1728 cu in ≈ 28 L
1 cubic yard (cu yd or yd³) = 27 cu ft ≈ 764.55 L
1 acre-foot (acre ft) = 1613.333 cu yd ≈ 1,233,000 L

Liquid volume
1 minim (min) = 1 drop ≈ 0.06 mL
1 US fluid dram (fl dr) = 60 min ≈ 3.7 mL
1 teaspoon (tsp) = 80 min ≈ 4.93 mL
1 tablespoon (Tbsp) = 3 tsp ≈ 14.79 mL
1 US fluid ounce (fl oz) = 2 Tbsp ≈ 29.57 mL
1 US shot (jig) = 3 Tbsp ≈ 44 mL
1 US gill (gi) = 4 fl oz ≈ 118.29 mL
1 US cup (cp) = 2 gi ≈ 236.6 mL
1 (liquid) US pint (pt) = 2 cp ≈ 473.18 mL
1 (liquid) US quart (qt) = 2 pt ≈ 0.946 L
1 (liquid) US gallon (gal) = 4 qt ≈ 3.785 L
1 (liquid) barrel (bbl) = 31.5 gal ≈ 119.24 L
1 oil barrel (bbl) = 42 gal ≈ 158.99 L
1 hogshead = 63 gal ≈ 238.48 L

Dry volume
1 (dry) pint (pt) ≈ 0.55 L
1 (dry) quart (qt) = 2 pt ≈ 1.1 L
1 (dry) gallon (gal) = 4 qt ≈ 4.4 L
1 peck (pk) = 2 gal ≈ 8.8 L
1 bushel (bu) = 4 pk ≈ 35.2 L
1 (dry) barrel (bbl) ≈ 3.281 bu ≈ 115.6 L

Mass and weight
1 grain (gr) ≈ 64.8 mg
1 dram (dr) = 27 11⁄32 gr ≈ 1.77 g
1 ounce (oz) = 16 dr ≈ 28.35 g
1 pound (lb) = 16 oz ≈ 453.59 g
1 US hundredweight (cwt) = 100 lb ≈ 45.36 kg
1 long hundredweight = 112 lb ≈ 50.80 kg
1 short ton = 20 US cwt ≈ 907.18 kg
1 long ton = 20 long cwt ≈ 1016.05 kg
1 pennyweight (dwt) = 24 gr ≈ 1.555 g
1 troy ounce (oz t) = 20 dwt ≈ 31.10 g
1 troy pound (lb t) = 12 oz t ≈ 373.24 g

↑ Where "C" means "Celsius".
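Because the US liquid-volume units are defined as exact multiples of one another, and the US fluid ounce is exactly 29.5735295625 mL by definition, the gallon's metric value can be derived by chaining the definitions:

```python
FL_OZ_ML = 29.5735295625        # 1 US fluid ounce in mL (exact by definition)
cup_ml = 8 * FL_OZ_ML           # 1 cup = 2 gills = 8 fl oz
pint_ml = 2 * cup_ml            # 1 pint = 2 cups
quart_ml = 2 * pint_ml          # 1 quart = 2 pints
gallon_l = 4 * quart_ml / 1000  # 1 gallon = 4 quarts = 3.785411784 L exactly
```

The same chaining works for any pair of units in the table, since every row is a fixed ratio.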
blog - nixies.us I’ve used the KRC-86B bluetooth module in several projects. It is tedious trying to figure out which one I am about to connect to, because they all have the same name, so it would be great to be able to rename it. It would also be nice if the name meant something. This is a summary of this instructable. Things have changed a little since it was written, so this little post summarizes the state of play at the moment. It is still well worth watching the video in the instructable and reading the comments. To rename the module you need to modify one of the settings stored on it. This is done using the SPI interface that is part of the module. I’ve documented the pins in the image below – on the module itself they are marked as NC. KRC-86B bluetooth module pin assignments So all you have to do is to send commands over SPI! This might sound a little daunting, but actually it is quite simple. The CSR8630 chip on the module was developed by Qualcomm, and they developed windows software to do the heavy lifting. The software is called BlueSuite. Qualcomm no longer provide this, but you can get multiple versions here. You should use the latest version there, which is bluesuite.win.2.6_installer_2.6.11.1937.zip. Next you will need a USB to SPI converter. I used this one: You will need to figure out some way to attach this to the KRC-86B. Check the actual pinouts on whatever USB adapter you use. Install the BlueSuite software. Connect the 3V3 pin to Vcc on the module Connect a 10k pullup resistor between Vcc and SPI_EN to enable the SPI interface on the module. Connect MISO to MISO, MOSI to MOSI, CLK to CLK and CSB to CSB. Plug the PLC1688 (or whatever) into your PC. Run pstool (which is what is installed by the BlueSuite installer). It should detect the module and you can just hit OK. If it doesn’t detect it, troubleshoot your connections. Make a backup of the module – File>Dump Check it changed to what you want using your phone And that is it. Pretty simple. 
Many thanks to sjowett. I spent years searching for this.

I’m writing this as much for my own use as anything else – I keep having to re-invent this solution every time I fix a vintage radio. Most vintage radios are AM only, and in my part of the world, the only thing on AM is talk radio. Even if that weren’t the case, let’s be honest, most of what people listen to these days is streamed. I don’t like my radios to be shelf-queens, I want to be able to use them and honestly, the sound from the bigger radios is great. One solution to this is to add an aux-in so that you can plug in an external source and play it through the radio. This certainly works (and you can use most of what is here to do that), but you still need a Bluetooth dongle and leads, and you have to charge it up, turn it on, and switch the radio to aux-in. This solution wires the Bluetooth ‘dongle’ into the radio directly, so that it comes on when the radio comes on. This shows a high-level view of what we are going to do:

Bluetooth Mod Overview

Let’s look at the radio block first. The modification here is the same regardless of what external source we are going to wire in. We are going to wire our new audio source across the volume control. Find the lead going to the volume control that is not connected to ground and does not go to the audio output stage. We will cut this wire and route it to a double-pole double-throw switch. A double-pole double-throw switch is simply a switch with two positions (double-throw) that switches two sources at the same time (double-pole). In one position the switch will connect that wire back to the volume control. In the other position it will connect our new audio source instead. NOTE: It is a good idea to use shielded cable to make the connections between the radio and the switch. Connect the shield to chassis ground.
In the diagram above you can see (hopefully) that when our new audio source is connected, the original one is connected to ground rather than just left floating. This helps stop sound from the RF/IF stages of the radio from bleeding through to the speaker. The next block to look at is the one marked Bluetooth. You will see that the right and left channels are connected together via two 1k resistors. These resistors prevent the output of one channel from destroying the other. The next block we will talk about is the auto-transformer. This can serve two purposes. The first is to boost the output of the Bluetooth module so that it has about the same loudness for a given volume setting as the radio itself. The particular model I am using is the TY-141P. If it is wired as shown in the diagram it gives a 1:2 voltage boost, which seems about right on all the radios I have modded. Triad make another transformer – the TY-142P – that can give a boost of 1:2.24 or 1:4.48 depending on how it is wired. The second purpose is that it can act as an isolation transformer to decouple the new audio source from the radio. This can help to remove ground-loop hum that can arise depending on how the new source is powered. In some scenarios, the chassis ground (GNDD) may be the same as the power supply ground (GND). NOTE: The point where the auto-transformer is connected to chassis ground can be important. I have found that if you connect it to the volume ground terminal, you will hear hash from the Bluetooth receiver when listening to broadcasts. Experiment with different locations. If all else fails, use a three-pole double-throw switch to disconnect the power to the Bluetooth unit when listening to broadcasts.

Bluetooth Module Details

My preferred Bluetooth module is the KRC-86B, pictured below:

KRC-86B Bluetooth Module

There are a lot of possible connections that we aren’t going to use.
We are just going to provide it with a Vcc of +5V and use OUTL and OUTR like so: KRC-86B Connections The LED indicates the state of the module. If it is connected it will be steady. If it is not connected it will be flashing, and if there is no power, it will be off! The capacitor is just a decoupling capacitor. Both of these are normally provided with the module. The KRC-86B needs 3.7V to 5V. You could provide this with a battery pack or an external 5V supply, and you could wire this power source into the switch so that it only powered the module when it was selected as an input, but you would need a three-pole double-throw switch to do that. The module only draws at most 80mA, so let’s see if we can provide that directly from the radio. We will probably need to convert some AC voltage into a DC voltage. Some radios have a transformer that provides the filament voltage, and these are probably the cleanest way to power the device. We can just convert the 6.3V AC into 5V DC. Another alternative I have used is to connect my power supply in parallel with the filament of a vacuum tube. The basic idea is to rectify the AC, smooth it and then feed this through a 5V voltage regulator. 8V-12V is ideal for using a bridge rectifier; however, 6.3V won’t provide a high enough rectified and smoothed voltage to feed a 5V voltage regulator, so in this case we need to use a voltage doubler as in the circuit below: Here’s what it looks like. In this case, there was a handy post I could screw it to: This is what it looks like underneath: Some nifty point-to-point wiring Finally, here is what the module, switch and auto-transformer look like in one of the radios I modded. In this case the switch was already there from a mod that had been made in the dim and distant past: Bluetooth mod installed in a Philco 38-7 chairside radio This is the fifth post about a nixie clock project that is powered by a vacuum tube power supply, rather than the more common 12V or 5V wall wart.
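A quick idealized estimate shows why 6.3 V AC is marginal for a plain bridge rectifier but comfortable with a doubler. The ~0.7 V silicon diode drop is an assumption, and ripple and regulator dropout are ignored, so treat these as ballpark numbers only:

```python
import math

vac = 6.3                           # filament winding, RMS
v_diode = 0.7                       # assumed drop per silicon diode
v_peak = vac * math.sqrt(2)         # ~8.9 V peak
v_bridge = v_peak - 2 * v_diode     # ~7.5 V before ripple: marginal for a 5 V regulator
v_doubler = 2 * (v_peak - v_diode)  # ~16.4 V: plenty of headroom (minus ripple)
```

A classic 7805-style regulator wants roughly 7 V or more at its input even at the ripple troughs, which is why the bridge figure is borderline and the doubler is the safer route.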
After part 3, I took a brief detour to talk about the controller hardware, and I had left off talking about the lack of a 5V supply. I had originally hoped to power everything from the mains transformer, but my choice of vacuum tubes made that impossible, because they pulled the 5V and 6V3 supplies up to a high potential, and my circuitry needs to share a 0V rail with the HV supply. The other aspect of all of this, which I hadn’t managed to resolve, was that I wanted to be able to turn off the HV from the control circuitry. I had mulled over just severing the input to the rectifiers (because I wanted to keep the 6V3 winding alive for the 5V DC rail), but now that I couldn’t use that anyway, the problem was solved. I would just use a separate 5V supply and use an SSR (Solid State Relay) to control the mains input to the transformer. I went with a fully enclosed chassis-mount 5V supply from CUI. CUI PSK-S6C-5-T One final thing I added was a varistor across the input to protect against voltage spikes. One thing I wish I had added, but didn’t, was an inrush current suppressor. So anyway, this is the final final circuit: Final Final Power Supply You’ll notice that this circuit includes a voltmeter and an ammeter. I had originally intended to add a voltage/current display using CD13 nixie tubes, which are very small, but I decided this was a step too far. In the end I went for a couple of analog meters because I could put them in the high side, unlike digital meters. I felt that they also matched the aesthetics better than digital meters would. They aren’t just for show. I wanted a simple way of monitoring the voltage and current over time so it would be more obvious if things were starting to drift. You might notice that the voltmeter is for AC (the ~ symbol under the V). I took one of these apart to see what was inside, and it is just a diode to rectify the voltage, so it would be fine with DC too, but I would have to add a resistor in series to re-calibrate it for DC.
These were the smallest, old-style meters I could find, and they included internal illumination. I drove the bulbs off the 6V3 circuit, so they are a little dim, but that is OK by me.

{V}_{c}={V}_{out}\frac{{R}_{2}}{{R}_{1}+{R}_{2}}

{V}_{out}={V}_{r}\frac{{R}_{1}+{R}_{2}}{{R}_{2}}

{R}_{set}=\frac{{V}_{gs}-{V}_{gs\left(on\right)}}{{I}_{d}}

About 7 months ago, I watched a video by glasslinger about adding a nixie tube display to a vintage radio. This was something I had wanted to do for a while, but I didn’t know how to read the tuned frequency from the radio. There were many interesting parts to the video (I recommend you watch it), but one of the things I learned was that vacuum tube radios that had transformers actually generated voltages with those transformers that were high enough to drive Nixies. I spent a little while trying to find a radio with an input transformer like that, and in the meantime, I started researching vacuum tube power-supply design. That was when I finally realized that these things were still being made for guitar (and HiFi, but especially guitar) amplifiers. So you could actually buy new transformers and new vacuum tubes and new chokes that you could use in a new power supply. That is when I got really serious about this. At this point, two sites influenced my initial design. The first was ‘Fun With Tubes’, which had several simple designs. The second was ‘DIY Audio Projects’, which went into much greater depth about tube selection and circuit calculations. In particular, the latter had a nice chart showing the voltage drop that different vacuum rectifier tubes had. Some of them were quite startling – for example the 5Y3, used by the first site, had a voltage drop of almost 45 volts for a plate current of 100mA, compared with only 15V for the 6CA4. What’s more, the 6CA4 is still in production.
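The two divider relations above are inverses of each other, which is easy to confirm numerically. The resistor and reference values in this sketch are hypothetical illustration values, not taken from the clock:

```python
def v_sense(v_out, r1, r2):
    # V_c = V_out * R2 / (R1 + R2): what the divider feeds back
    return v_out * r2 / (r1 + r2)

def v_out_from_ref(v_ref, r1, r2):
    # V_out = V_r * (R1 + R2) / R2: the output the regulator settles at
    return v_ref * (r1 + r2) / r2

r1, r2, v_ref = 33e3, 2.2e3, 1.25   # hypothetical divider and reference
vo = v_out_from_ref(v_ref, r1, r2)  # 20.0 V with these values
```

Running v_sense on that output returns exactly the 1.25 V reference, closing the feedback loop on paper.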
The DIY Audio Projects site showed a long series of manual calculations to determine properties of that simple un-regulated power supply. I entered these in a spreadsheet to make life easier; however, I later found a little application called psud2 (power supply designer 2) that did the same thing using a simulator and a few values from the data sheets. The important thing is that the maximum 6CA4 plate current (which occurs on a cold start) does not exceed the maximum allowed plate current on the data sheet. Taking a hint from the DIY Audio Projects page, I decided to go with a one-stage choke filter to smooth the output of the rectifier tube. Confusingly (for me) everyone refers to this as the ‘input filter’. I guess because it is the input to whatever comes after this stage – usually an amplifier. So this is what I ended up with (from psud2): Unregulated vacuum tube power supply Obviously(?) this is just the output stage. On the input side is 125V 60Hz, a 1A slow-blow fuse in the hot line and a neon indicator across hot and neutral, so I could tell when it was on. Something I omitted was a resistor to drain the capacitors when power was removed. Those capacitors can hold a charge for a long time. I soon added one. Experience is a wonderful thing. Here is a chart from psud2 showing the expected output: The next stage was to select some actual components and bread-board it. I ended up going with a 269BX transformer from Hammond, a 156R choke (also from Hammond) and a new 6CA4 from JJ Electronics. All the rest are just suitably spec’d parts from DigiKey and a bunch of terminal strips, screws, fuses, switches, fuse holders etc. from my local electronics store. I would show you a picture of the breadboard, but I will save that for part 2, when I discuss how to deal with the potential for voltage sag.
65F15 Eigenvalues, eigenvectors 65F18 Inverse eigenvalue problems 65F20 Overdetermined systems, pseudoinverses 65F22 Ill-posedness, regularization 65F25 Orthogonalization 65F30 Other matrix algorithms 65F35 Matrix norms, conditioning, scaling 65F60 Matrix exponential and similar matrix functions A Class of Direct Methods for Linear Systems. E. Spedicato, J. Abaffy, C. Broyden (1984) A Direct Method for Sparse Least Squares Problems with Lower and Upper Bounds. Ake Björck (1989) A Finite Element - Capacitance Method for Elliptic Problems on Regions Partitioned into Subregions. A footnote on quaternion block-tridiagonal systems. Costa, Cecília, Serôdio, Rogério (1999) A look-ahead algorithm for the solution of general Hankel systems. Roland W. Freund, Hongyuan Zha (1993) A necessary and sufficient criterion to guarantee feasibility of the interval Gaussian algorithm for a class of matrices Günter Mayer, Lars Pieper (1993) A necessary and sufficient criterion to guarantee feasibility of the interval Gaussian algorithm for a class of matrices. We apply the interval Gaussian algorithm to an n×n interval matrix \left[A\right] the comparison matrix 〈\left[A\right]〉 of which is irreducible and diagonally dominant. We derive a new necessary and sufficient criterion for the feasibility of this method extending a recently given sufficient criterion. A nested decomposition algorithm for parallel computations of very large sparse systems. Šiljak, D.D., Zečević, A.I. (1995) A New Method for the Solution of A x = b. D.S. Henderson, A. Wassyng (1977/1978) A Note on Partial Pivoting and Gaussian Elimination. M. van Veldhuizen (1977/1978) A note on symplectic block reflectors. Sadkane, Miloud, Salam, Ahmed (2008) A note on the inversion of Sylvester matrices in control systems. Li, Hongkui, Li, Ranran (2011) A parallel algorithm for solving band systems and matrix inversion Ladislav Halada (1984) A parallel Cholesky algorithm for the solution of symmetric linear systems.
Khazal, R.R., Chawla, M.M. (2004) A parallel QR-factorization/solver of quasiseparable matrices. Vandebril, Raf, Van Barel, Marc, Mastronardi, Nicola (2008) F. Dubeau, J. Savoie (1991) A remark on Jordan elimination Jan Vinař (1976) A stable and optimal complexity solution method for mixed finite element discretizations Brandts, Jan, Stevenson, Rob (2002) Jan Brandts, Rob Stevenson (2002) We outline a solution method for mixed finite element discretizations based on dissecting the problem into three separate steps. The first handles the inhomogeneous constraint, the second solves the flux variable from the homogeneous problem, whereas the third step, adjoint to the first, finally gives the Lagrangian multiplier. We concentrate on aspects involved in the first and third step mainly, and advertise a multi-level method that allows for a stable computation of the intermediate and final...
A branching method for studying stability of a solution to a delay differential equation. Dolgii, Yu.F., Nidchenko, S.N. (2005) A class of Hamiltonian systems with increasing periods. Renate Schaaf (1985) A contribution to the problem of coexistence of two periodical solutions of the Hill's equation Dragan Dimitrovski, Vladimir Rajović (2007) A counterexample to the periodic orbit conjecture A Nearly-Periodic Boundary Value Problem for Second Order Differential Equations G. L. Karakostas, P. K. Palamides (2002) A nonlinear periodic system with nonsmooth potential of indefinite sign Michael E. Filippakis, Nikolaos S. Papageorgiou (2006) In this paper we consider a nonlinear periodic system driven by the vector ordinary p-Laplacian and having a nonsmooth locally Lipschitz potential, which is positively homogeneous. Using a variational approach which exploits the homogeneity of the potential, we establish the existence of a nonconstant solution. A note on existence of bounded solutions of an A note on nonuniform nonresonance for jumping nonlinearities Sergio Invernizzi (1986) Bahman Mehri (1977) A note on periodic solutions of second order nonautonomous singular coupled systems. Cao, Zhongwei, Yuan, Chengjun, Jiang, Daqing, Wang, Xiaowei (2010) A note on rapid convergence of approximate solutions for second order periodic boundary value problems Rahmat A. Khan, Bashir Ahmad (2005) In this paper, we develop a generalized quasilinearization technique for a nonlinear second order periodic boundary value problem and obtain a sequence of approximate solutions converging uniformly and quadratically to a solution of the problem. Then we improve the convergence of the sequence of approximate solutions by establishing the convergence of order k ... A Note on the Convergence of Solutions of a System of Differential Equations. (Short Communication). Ioan Muntean (1970)
This problem is a checkpoint for addition and subtraction of mixed numbers. It will be referred to as Checkpoint 5. 5 \frac { 1 } { 2 } + 4 \frac { 2 } { 3 } 1 \frac { 5 } { 6 } + 2 \frac { 1 } { 5 } 9 \frac { 1 } { 3 } - 4 \frac { 1 } { 5 } 10-8\frac{2}{3} Answers and extra practice for the Checkpoint problems are located in the back of your printed textbook or in the Reference Tab of your eBook. If you have an eBook for Core Connections, Course 1, login and then click the following link: Checkpoint 5: Addition and Subtraction of Mixed Numbers
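One way to check your answers to the four problems above is with Python's exact fractions module, which avoids any rounding:

```python
from fractions import Fraction

def mixed(whole, num=0, den=1):
    # Represent a mixed number exactly: 5 1/2 -> mixed(5, 1, 2) -> Fraction(11, 2)
    return Fraction(whole) + Fraction(num, den)

a = mixed(5, 1, 2) + mixed(4, 2, 3)   # 61/6  = 10 1/6
b = mixed(1, 5, 6) + mixed(2, 1, 5)   # 121/30 = 4 1/30
c = mixed(9, 1, 3) - mixed(4, 1, 5)   # 77/15 = 5 2/15
d = mixed(10) - mixed(8, 2, 3)        # 4/3   = 1 1/3
```

Converting a result like Fraction(61, 6) back to mixed form (divmod(61, 6) gives 10 remainder 1) reproduces the textbook-style answer 10 1/6.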
Which compound from the following is a disaccharide?
(d) Melitriose
What is the name of the carbohydrate containing a total of four carbon atoms and one aldehyde group?
(a) Aldotetrose (b) Aldopentose (c) Ketotetrose (d) Ketopentose
Which structure of protein has a β-pleated sheet shape?
Which vitamin is insoluble in water and fat?
(b) B complex
(a) A, G, C and T bases are present in DNA
(b) A and T are joined together by two hydrogen bonds in DNA
(c) A and C are purine bases.
(d) T and U are pyrimidine bases
By which linkage are two monosaccharides joined together in a disaccharide?
(b) Phosphodiester (c) Glycosidic (d) Disulphide
Which of the following units are present in α-(+)-lactose?
β-(D)-(+)-galactose + α-(D)-(+)-glucose
β-(D)-(+)-galactose + β-(D)-(+)-glucose
α-(D)-(+)-galactose + α-(D)-(+)-glucose
α-(D)-(+)-galactose + β-(D)-(+)-glucose
MCQs of Dual Nature of Radiation and Matter | GUJCET MCQ
Dual Nature of Radiation and Matter MCQs
Cathode rays _____
(a) are the atoms moving towards the cathode.
(b) are electromagnetic waves.
(c) are negative ions travelling from cathode to anode.
(d) are electrons emitted by the cathode and travelling towards the anode.
The velocity of the photoelectron emitted in the photoelectric effect depends on the properties of the photosensitive surface and _____
(b) state of polarization of incident light
(c) time for which the light is incident
(d) intensity of incident light
Photoelectric effect represents that
(a) electron has a wave nature
(b) light has a particle nature
Cathode rays travelling in the direction from east to west enter an electric field directed from north to south. They will deflect in _____
If the photoelectric effect is not seen with ultraviolet radiation in a given metal, photoelectrons may be emitted with the _____
Photons of energy 1 eV and 2.5 eV successively illuminate a metal whose work function is 0.5 eV. The ratio of the maximum speeds of the emitted electrons is _____
In quantum mechanics, a particle _____
(a) can be regarded as a group of harmonic waves.
(b) can be regarded as a single wave of definite wavelength only
(c) can be regarded as only a pair of two harmonic waves
(d) is a point-like object with mass
Which of the following physical quantities has the dimensions of the Planck constant (h)?
Which of the following statements is not true for a photon?
(a) Photon produces pressure.
(b) Photon has energy hf.
(c) Photon has momentum hf/c.
(d) Rest mass of photon is zero.
The de Broglie wavelength of a particle moving with velocity 2.25 × 10⁸ m s⁻¹ is the same as the wavelength of a photon. The ratio of the kinetic energy of the particle to the energy of the photon is _____ (velocity of light = 3 × 10⁸ m s⁻¹)
(a) 1/8 (b) 3/8 (c) 5/8 (d) 7/8
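The two numerical questions above can be checked with a short computation — a sketch using Einstein's photoelectric equation, ½mv²max = E − W, and the de Broglie relation p = h/λ (variable names and the algebraic shortcuts are my own):

```python
import math

# Photoelectric effect: (1/2) m v_max^2 = E_photon - W, so the ratio of
# maximum speeds depends only on the kinetic energies (all values in eV).
E1, E2, W = 1.0, 2.5, 0.5
speed_ratio = math.sqrt((E1 - W) / (E2 - W))
print(speed_ratio)        # 0.5, i.e. a 1:2 ratio

# De Broglie: a particle and a photon sharing one wavelength lam have the
# same momentum p = h/lam, so
#   KE_particle / E_photon = (1/2 p v) / (p c) = v / (2 c)
v, c = 2.25e8, 3.0e8      # speeds in m/s
print(v / (2 * c))        # 0.375 = 3/8
```

Note that the electron mass and Planck constant cancel out of both ratios, which is why these MCQs can be answered without plugging in any physical constants.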
Bonding (1,1) - KlimaDAO
Bonding is the process of trading assets to the protocol for KLIMA. The protocol will quote you an amount of KLIMA for your asset, and the vesting period for the trade. Today, the protocol takes in:
Reserve Assets:
BCT (Base Carbon Tonne; trades on SushiSwap)
MCO2 (Moss Carbon Credit Token; trades on Quickswap)
Liquidity Assets:
KLIMA/BCT SushiSwap LP tokens
KLIMA/USDC SushiSwap LP tokens
KLIMA/MCO2 Quickswap LP tokens
Why should I bond?
Bonding allows you to buy KLIMA at a lower cost basis. Because the protocol can sell at a discount to the market price (as it can mint KLIMA at IV), bonding when there is a positive discount may be advantageous compared to simply buying KLIMA on the market and staking.
Bonding Dynamics:
The protocol quotes the price of a bond based on the intrinsic value of KLIMA and the premium charged on bonds:
Bond Price = 1 + Premium
This premium depends on two factors: the total debt of the system, and an externally controllable variable (the BCV). This ties the price of the bond to the amount of bonds outstanding: the more bonds there are (more demand), the higher the premium and the lower the discount, and vice versa.
Premium = Debt Ratio * BCV
Debt Ratio = Bonds Outstanding / KLIMA Supply
Carbon Custodied:
Carbon Custodied (CC) refers to the tokenized carbon offsets held in the treasury reserves. For each KLIMA token minted, there is some amount of carbon offsets in the treasury. Any excess reserves above the 1 tonne/KLIMA Intrinsic Value will eventually be paid out to stakers via rebase rewards.
Because an LP share can fluctuate in value in terms of offset tonnage as offsets are bought and sold out of the pool, we mark it down to reflect the fact that not all the carbon will be left in the pool in extreme market conditions. This is formalized below:
CC_l = (LP / Total LP) * 2sqrt(K)
where K is the constant product of the LP pool.
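The pricing relations above can be sketched in a few lines of Python. This is an illustrative sketch with made-up numbers; the function names are mine, not the protocol's actual contracts:

```python
import math

def debt_ratio(bonds_outstanding: float, klima_supply: float) -> float:
    """Debt Ratio = Bonds Outstanding / KLIMA Supply."""
    return bonds_outstanding / klima_supply

def bond_price(debt_ratio: float, bcv: float) -> float:
    """Bond Price = 1 + Premium, with Premium = Debt Ratio * BCV."""
    premium = debt_ratio * bcv
    return 1 + premium

def lp_carbon_custodied(lp_tokens: float, total_lp: float, k: float) -> float:
    """CC_l = (LP / Total LP) * 2*sqrt(K), K the pool's constant product."""
    return (lp_tokens / total_lp) * 2 * math.sqrt(k)

# Illustrative numbers only, not real protocol state:
dr = debt_ratio(bonds_outstanding=5_000, klima_supply=1_000_000)   # 0.005
print(bond_price(dr, bcv=100))                 # 1.5x intrinsic value
print(lp_carbon_custodied(10, 1_000, k=4_000_000))
```

The sketch makes the feedback loop visible: more bonds outstanding raise the debt ratio, which raises the premium and hence the quoted bond price, throttling demand.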
Naked carbon assets that represent one tonne of carbon per token (BCT, MCO2, etc.) contribute a carbon custodied value of 1 tonne per token, though the KLIMA token itself ends up backed by a weighted average of the price and quantity of the different carbon assets that constitute Carbon Custodied. The bonding mechanism not only brings carbon assets into the treasury; it also allows the protocol to acquire liquidity to facilitate market operations. This assists in creating a decentralized carbon market, and provides passive revenue generation through the LP fees.
How many tokens you get when removing liquidity - LP-Swap Academy
How many tokens you get when removing liquidity
In a liquidity pool, for example one that contains x^{\mathrm{BUSD}} tokens BUSD and x^{\mathrm{WBNB}} tokens WBNB, a liquidity provider owns a share of the BUSD and WBNB in the pool that is proportional to their number of LP tokens. For instance, if you own 1 BUSD-WBNB LP token out of a total of 100 BUSD-WBNB LP tokens shared among the liquidity providers, you will get x^{\mathrm{BUSD}}/100 tokens BUSD and x^{\mathrm{WBNB}}/100 tokens WBNB when removing your liquidity from the pool.
However, the ratio between the quantities x^{\mathrm{BUSD}} and x^{\mathrm{WBNB}} may evolve with time, and so the quantities of each token that you obtain when removing your liquidity may differ from the quantities you provided initially! There are two mechanisms that explain the variations of both tokens in the pool:
When liquidity providers remove or add liquidity.
When a user swaps tokens using the pool, for instance BUSD against WBNB: they deposit some BUSD into the pool and in exchange withdraw WBNB from the pool.
In the first case, tokens are removed or added exactly in the proportion x^{\mathrm{BUSD}}/x^{\mathrm{WBNB}} of the pool. Therefore these operations do not change the relative proportions of each token in the pool, and so there is no change in the quantities of tokens a liquidity provider will obtain from removing their liquidity.
In the case of swaps, let us say that before the swap the BUSD-WBNB pool contains x^{\mathrm{BUSD}}_{pre} and x_{pre}^{\mathrm{WBNB}} tokens BUSD and WBNB respectively. After your swap, there will be x^{\mathrm{BUSD}}_{post} and x_{post}^{\mathrm{WBNB}} such tokens. These quantities are related by the constant product rule:
x_{post}^{\mathrm{BUSD}} \times x_{post}^{\mathrm{WBNB}} = x_{pre}^{\mathrm{BUSD}} \times x_{pre}^{\mathrm{WBNB}}.
In other words, the product of the quantities of BUSD and WBNB tokens inside the pool is constant.
x^{\mathrm{BUSD}} \times x^{\mathrm{WBNB}} = \mathrm{constant}.
For example, we see that if the quantity of BUSD in the pool increases, then the quantity of WBNB decreases. The exact quantity of WBNB obtained by swapping in this pool also depends on a liquidity provider fee, as we will see next with the swap formula in the article
In particular, we see that after the swap we have x_{post}^{\mathrm{BUSD}}/x_{post}^{\mathrm{WBNB}} > x_{pre}^{\mathrm{BUSD}}/x_{pre}^{\mathrm{WBNB}}. Thus a liquidity provider who redeems their tokens from the pool after the swap will get more BUSD tokens and fewer WBNB tokens than they initially provided.
There are pools with more sophisticated rules than the constant product rule, which can lead to very different and interesting behaviours. In fact, one of the major updates of Uniswap V3 is to introduce pools with a new rule called Concentrated Liquidity, which, roughly speaking, enables different token prices to coexist inside a single pool.
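The pro-rata redemption and the constant product rule can be sketched in Python. Illustrative numbers only; the liquidity-provider fee is ignored here, as the article defers it to the swap formula:

```python
def remove_liquidity(my_lp, total_lp, pool_busd, pool_wbnb):
    """A liquidity provider redeems a pro-rata share of each token."""
    share = my_lp / total_lp
    return share * pool_busd, share * pool_wbnb

def swap_busd_for_wbnb(amount_in, pool_busd, pool_wbnb):
    """Constant product rule (fee ignored): post-swap balances keep
    x_BUSD * x_WBNB equal to its pre-swap value K."""
    k = pool_busd * pool_wbnb
    new_busd = pool_busd + amount_in
    new_wbnb = k / new_busd
    wbnb_out = pool_wbnb - new_wbnb
    return wbnb_out, new_busd, new_wbnb

# 1 LP token out of 100, pool holding 10,000 BUSD and 25 WBNB:
print(remove_liquidity(1, 100, 10_000, 25))      # (100.0, 0.25)

# Swapping 1,000 BUSD in: the BUSD/WBNB ratio of the pool rises,
# so a later redemption returns more BUSD and less WBNB.
out, b, w = swap_busd_for_wbnb(1_000, 10_000, 25)
print(round(out, 4), b, round(w, 4))
```

Running the swap and then `remove_liquidity` on the post-swap balances reproduces the article's point: the product of the reserves is unchanged while their ratio shifts toward the deposited token.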
Physics - Two Traps are Better than One
Institute for Experimental Physics, University of Innsbruck, Technikerstraße 25/4 (Victor-Franz-Hess-Haus), 6020 Innsbruck, Austria
An atom trap consisting of both rapidly rotating electric fields and static laser fields keeps ions securely locked into place in a latticelike potential.
Figure 1: Two trap technologies combine to keep ions locked in place. (Top) The potential an ion experiences in a quadrupole trap is relatively flat on the submicrometer scale. An external or stray electric field can easily displace the ion by hundreds of nanometers, limiting the stability of the trap. (Bottom) An optical lattice superimposed on the ion-trap potential provides a periodic structure on the scale of optical wavelengths. An ion is localized at a single valley of the lattice and remains pinned even in the presence of the electric field.
Over the past 60 years, atomic physicists have developed sophisticated techniques to trap and isolate single neutral atoms and ions. Now they are tweaking these techniques so they can re-assemble isolated particles into pairs, triplets, and larger arrays. These carefully controlled multiparticle systems could be used to simulate the behavior of solids, or function as prototype quantum bits for storing information and performing computations. An important technological step toward preparing ions for these types of experiments is now reported in Physical Review Letters by Leon Karpa and colleagues at the Massachusetts Institute of Technology, Cambridge [1].
The team combined two key trapping technologies—optical lattices and Coulomb potentials—into a hybrid trap that keeps ions anchored into one position for up to times longer than existing lattice traps, even in the presence of external or stray electric fields. Ultimately, this highly stable trap could be used to capture arrays of ions to explore many-body solid-state effects.
Techniques for storing charged particles were first developed in the 1950s, when Wolfgang Paul and Hans Dehmelt independently designed and developed ion traps. To this day, Paul’s quadrupole trap, which uses a combination of alternating (ac) and static (dc) fields to confine strings of ions, is a key component of mass spectrometers. A common type of quadrupole trap is the linear Paul trap, in which ac fields generate a two-dimensional, saddle-shaped potential that rotates at tens of millions of revolutions per second (radio frequencies). The rotation is too fast for the ions to follow, so they experience an effective bowl-shaped potential that traps them at the minimum. Static fields along the third dimension provide additional confinement.
By the early 1980s, physicists had confined single barium and magnesium ions in quadrupole traps. (Simultaneous advances in laser cooling, which slows ions and neutral atoms as they are loaded into a trap, played an essential role in these early trapping experiments.) Over the following decades, researchers demonstrated they could manipulate and read out the quantum states of individual ions with ever-increasing precision. Today, the world’s most accurate clock is based on the frequency of a trapped aluminum ion, and many proposed quantum-computing platforms rely on trapped ions. Given this success, why are ion trappers turning to new traps? An important goal is to gain more control over the shape of the trapping potential, especially on short length scales.
Achieving this would offer greater freedom to position the ions in tailored configurations and explore different interactions between ions. While quadrupole traps create deep trapping potentials, on the order of electron volts ( kelvin), they only have a simple parabolic structure. Shaping the dc fields can produce more complex trap geometries. But the length scales of these potentials are still on the order of hundreds of micrometers, which limits how closely together two ions can be positioned if each is in its own local minimum.
In contrast, an optical lattice trap, which is formed by superimposing two counterpropagating laser beams to make a standing wave, can trap atoms a few hundred nanometers apart. In these traps, the particles are locked by dipolar forces into the local minima of the standing wave generated by the laser field. The trap shapes can be further tailored by tuning the angles and wavelengths of the beams. These traps work particularly well for isolating and manipulating neutral atoms. But with ions, the dipole forces from the laser fields are generally too weak to overcome the effects of stray electric fields on the charged particles. Researchers at the Max Planck Institute for Quantum Optics and the University of Freiburg in Germany have successfully trapped an ion in a focused Gaussian beam for a few milliseconds (ms) [2]. In 2012, the same group trapped an ion for up to microseconds in an all-optical lattice by first catching the particle in a standard quadrupole trap, then gradually turning off the radio-frequency fields while ramping up the laser beams [3].
Hybrid traps combine the stability of quadrupole traps with the flexibility of optical traps. Moreover, the radio-frequency and optical fields produce a combination of Coulomb and dipole forces, respectively, on the ion, which can be used to simulate many-body Hamiltonians.
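To make the length scales concrete: two counterpropagating beams of wavelength λ produce a standing wave whose wells repeat every λ/2. A minimal sketch (the cos² form and the 532-nm wavelength are illustrative assumptions, not values from the experiment):

```python
import math

def lattice_potential(x, wavelength, depth):
    """Sketch of a standing-wave dipole potential, U(x) = -depth * cos^2(k x).

    With k = 2*pi/wavelength, the wells (potential minima) repeat every
    wavelength/2 — a few hundred nanometers for optical light.
    """
    k = 2 * math.pi / wavelength
    return -depth * math.cos(k * x) ** 2

wavelength = 532e-9          # illustrative laser wavelength in meters
site_spacing = wavelength / 2
print(site_spacing)          # 2.66e-07 m: neighboring wells ~266 nm apart

# Adjacent wells, one lattice site apart, have the same minimum depth:
u0 = lattice_potential(0.0, wavelength, depth=1.0)
u1 = lattice_potential(site_spacing, wavelength, depth=1.0)
assert math.isclose(u0, u1)
```

This is why lattice traps can hold particles hundreds of nanometers apart, versus the hundreds of micrometers set by shaped dc fields in a quadrupole trap.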
For example, one candidate model, the Frenkel-Kontorova Hamiltonian, describes nearest-neighbor interactions within a sinusoidal potential and exhibits both classical and quantum phase transitions [4]. There have already been a few experiments with hybrid traps. A team at Aarhus University in Denmark, for example, tried leaving their quadrupole trap on and showed the ion stayed within a single well of the optical lattice [5]. Several experiments used separate, but overlapping, quadrupole and laser traps for ions and atoms, respectively, to explore collisions between the charged and neutral particles at ultracold temperatures where quantum effects are dominant [6–9]. The idea is that if a single ion is trapped in a sea of atoms, its fluorescence can be a sensitive probe of how it interacts chemically with the surrounding particles. These experiments, however, grappled with the effects of micromotion, which is the residual motion of the ion at the radio frequency used to drive the quadrupole trap.
Like the researchers at Aarhus University, the MIT team uses a quadrupole trap in combination with an optical trap. The radio-frequency fields confine an ion tightly in two dimensions, while along the third axis the static fields are relaxed so that the ions see a relatively flat landscape. Karpa et al. introduce a one-dimensional lattice along this axis by driving an optical resonator with a laser field (Fig. 1). The key step forward is that their hybrid trap locks an ion into more localized positions for a longer period of time, thanks to a built-in cooling mechanism. Specifically, an additional laser field removes one vibrational quantum after another from the trapped ion, a technique known as Raman sideband cooling. The cooling cycle leaves the ion in its vibrational ground state. The cooler ion stays spatially localized to within about nanometers (nm) and is pinned at a valley of the standing-wave structure.
The team also shows they can use the same cooling and trapping method for up to three ions at a time. To prove that their trap was stiff—meaning an ion wouldn’t be easily disrupted by local variations in the electric field—Karpa et al. pushed on the ions with a time-varying electric field. In the absence of the optical lattice, the ions swung back and forth with the field over hundreds of nm, but with the lattice on, they remained fixed in place. Only when Karpa et al. slowed down the oscillations in the field could the ions respond to the force. The researchers determined that the single-site pinning lasted for up to , which is times longer than the trap’s vibrational period. The vibrational period—classically, the time it takes an ion to roll back and forth in its potential well—sets the minimum time scale for which it makes sense to say that the ion is trapped.
In addition to other challenges, researchers working with hybrid traps will need to address the ion micromotion at radio frequencies. Although all-optical traps offer a cleaner option, they are presently limited to trapping times because laser fluctuations heat the ions [3]. Karpa et al. have already proposed one potential solution to micromotion: a hybrid trap design that uses static, as opposed to radio-frequency, fields. Exploring proposals such as this one could open the door to new approaches in a host of fields, ranging from ultracold quantum chemistry to quantum computing architectures to simulations of solid-state physics.
Correction (1 November 2013): Paragraph 5, sentence 4, “manipulating neutral ions” changed to “manipulating neutral atoms.”
L. Karpa, A. Bylinskii, D. Gangloff, M. Cetina, and V. Vuletić, “Suppression of Ion Transport due to Long-Lived Subwavelength Localization by an Optical Lattice,” Phys. Rev. Lett. 111, 163002 (2013) C. Schneider, M. Enderlein, T. Huber, and T. Schaetz, “Optical Trapping of an Ion,” Nature Photon. 4, 772 (2010) M. Enderlein, T. Huber, C. Schneider, and T.
Schaetz, “Single Ions Trapped in a One-Dimensional Optical Lattice,” Phys. Rev. Lett. 109, 233004 (2012) M. Johanning, A. Varón, and C. Wunderlich, “Quantum Simulations with Cold Trapped Ions,” J. Phys. B 42, 154009 (2009) R. B. Linnet, I. D. Leroux, M. Marciante, A. Dantan, and M. Drewsen, “Pinning an Ion with an Intracavity Optical Lattice,” Phys. Rev. Lett. 109, 233005 (2012) A. T. Grier, M. Cetina, F. Oručević, and V. Vuletić, “Observation of Cold Collisions between Trapped Ions and Trapped Atoms,” Phys. Rev. Lett. 102, 223201 (2009) C. Zipkes, S. Palzer, C. Sias, and M. Köhl, “A Trapped Single Ion Inside a Bose-Einstein Condensate,” Nature 464, 388 (2010) S. Schmid, A. Härter, and J. Hecker Denschlag, “Dynamics of a Cold Trapped Ion in a Bose-Einstein Condensate,” Phys. Rev. Lett. 105, 133202 (2010) W. G. Rellergert, S. T. Sullivan, S. Kotochigova, A. Petrov, K. Chen, S. J. Schowalter, and E. R. Hudson, “Measurement of a Large Chemical Reaction Rate between Ultracold Closed-Shell {}^{40} Ca Atoms and Open-Shell {}^{174} Yb {}^{+} Ions Held in a Hybrid Atom-Ion Trap,” Phys. Rev. Lett. 107, 243201 (2011)
Tracy Northup is a senior scientist at the University of Innsbruck’s Institute for Experimental Physics. She received her Ph.D. in 2008 from the California Institute of Technology, followed by postdoctoral study in Innsbruck, where she held a Marie Curie fellowship and is currently an Elise Richter fellow. Her research focuses on optical cavities as quantum interfaces between ions and photons.
Leon Karpa, Alexei Bylinskii, Dorian Gangloff, Marko Cetina, and Vladan Vuletić
Physics - Weighing Dark Matter Halos with the Cosmic Microwave Background
AstroParticule et Cosmologie, Université Paris Diderot, CNRS/IN2P3, CEA/Irfu, Obs de Paris, Sorbonne Paris Cité, France
Gravitational lensing by foreground dark matter halos leaves an observable imprint on the cosmic microwave background, which can be used to determine their masses.
Figure 1: Astronomers observe distortions in images of background objects due to gravitational bending of light by massive foreground objects, called lenses. For the first time, researchers have used the cosmic microwave background (CMB) to measure the gravitational lensing by dark matter halos, as depicted on the left. The observed distortions allow them to determine (in an average sense) the mass in the dark matter halo surrounding the galaxies. The results are consistent with previous lensing measurements performed with distant galaxies as the background object, as represented on the right.
Weighing something that is partially invisible is very challenging. But this is exactly what astronomers contend with when estimating masses of galaxies and their clusters. These structures contain significant amounts of still-mysterious dark matter residing in a “halo” extending beyond the standard, baryonic (and mostly luminous) matter. Gravitational lensing allows astronomers to get around this weighing problem. The effect arises when photons emitted by some distant (background) object are bent by the gravitational force of some intervening (foreground) object, known as the lens.
Astronomers observe gravitational lensing as a distortion or brightening of the image of the background source, and from that, they estimate the total mass of the lens. Distant galaxies are typically used as the background objects [1], but astrophysicists suggested nearly a decade ago [2] that the cosmic microwave background (CMB) could be a particularly suitable alternative. Mathew Madhavacheril of Stony Brook University, New York, and his colleagues have now put this idea into practice, using high-resolution CMB data from the Atacama Cosmology Telescope Polarimeter (ACTpol) [3]. The team found evidence for gravitational lensing by dark matter halos on the CMB anisotropies and from that derived an estimate of their average mass. With further refinement, this approach could take advantage of the CMB’s ubiquity to provide accurate mass estimates for more objects and at greater distances (higher redshifts) than the background of distant galaxies can do. The CMB originates from a time when the Universe was very hot and dense and photons were trapped by ionized plasma. This epoch lasted for the first 380,000 years after the big bang and only ended when the overall temperature of the Universe dropped low enough for atoms to form. This process, which is referred to as recombination, freed photons to travel nearly unhindered through space. Some of them have journeyed all the way to our detectors, providing us with an image of the Universe billion years ago. These CMB photons—which were stretched into the microwave region by cosmic expansion—provide a temperature reading of the early Universe, which is very uniform across the sky with only small deviations (hot and cold spots) at the level of part in . However, the CMB image is not a pristine record of the early Universe, as the post-recombination travel of the CMB photons was not truly uneventful. Indeed, the Universe has undergone a huge transformation over the last billion years. 
Small density fluctuations have grown into a complex web of structures, including galaxies, clusters, and filaments, some of which eventually reionized the gas in the Universe. These changes left an imprint on the CMB photons through, for example, interactions with intracluster plasma (the Sunyaev-Zel’dovich effect), gravitational redshifts, and reionization effects. In addition, CMB photon trajectories were bent by the gravitational pull of structures forming in the Universe, as predicted by Einstein’s general relativity. Studies of this CMB lensing have generated some of the most exciting recent discoveries in the CMB field. Astronomers have detected distortions not only in the anisotropies of the CMB total intensity [4] but also in the polarization patterns of this radiation [4,5]. CMB lensing has so far been observed on angular scales in excess of a few arcminutes, implying that the lenses in this case are the largest structures in the Universe (i.e., filaments). These recent results have established CMB lensing as a promising new probe of large-scale matter distribution in the Universe.
Madhavacheril et al. [3] complement and extend the analysis of CMB lensing by studying the effect on smaller, arcminute angular scales, where the lensing objects are either massive galaxies or their groups (clusters). They use data collected by the ACTpol experiment—a telescope with high angular ( -arcminute) resolution, operating from a vantage point in the Chilean Atacama Desert. The method involves statistically measuring distortions in the CMB hot and cold spots by the gravitational mass of a foreground object, just as the methods using distant galaxies as the background sources involve distortions in their optical images (see Fig. 1).
Using the CMB as the background source offers some interesting advantages over distant galaxies. For one, the time of the CMB’s emission is very well known, as are its statistical properties, which helps in quantifying the gravitational distortion.
The CMB also comes to us from all directions and can be available for many more objects of interest. Nevertheless, the detected signal has to be disentangled from the other potential post-recombination effects, which may affect the information carried by the CMB photons. For the time being, the CMB data are still too noisy to showcase fully the CMB lensing potential. The analysis presented by Madhavacheril et al. resorts, therefore, to a stacking technique, averaging together postage-stamp-like images centered at the positions of known galaxies. Nearly optically selected galaxies from the SDSS/BOSS survey have been used for this purpose [6]. Once the averaging is done, the resulting map is cross-correlated with different models that describe the distribution of dark matter around galaxies or their clusters. The researchers find that the models with a realistic dark matter halo provide a significantly better fit to the data than models with no dark matter. They estimated the average total halo mass associated with the galaxies used in the stacking process and found it was broadly in agreement with other weak lensing estimates produced using distant galaxies as the background sources [7].
The authors performed several checks to be sure they weren’t seeing a false signal. For example, they modeled potential contamination due to the thermal Sunyaev-Zel’dovich effect or point sources and quantified how these would affect their analysis. None of these systematic errors explained the data, so the authors contend that they are genuinely seeing the gravitational effects of dark matter halos. While more work and, in particular, more high-quality data are clearly needed, this first step is encouraging for the prospects of CMB lensing at arcminute scales. Researchers studying the CMB are currently busy trying to detect the primordial large-angular-scale B-mode polarization signal.
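A toy sketch of the stacking idea (entirely synthetic numbers, not ACTpol data): postage-stamp cutouts centered on known positions are averaged, so a signal common to every cutout survives while uncorrelated noise averages down like 1/sqrt(n):

```python
import random

random.seed(0)
SIZE, HALF = 200, 5

# Toy sky map: pure Gaussian noise...
sky = [[random.gauss(0.0, 1.0) for _ in range(SIZE)] for _ in range(SIZE)]

# ...plus the same faint depression stamped at each known "galaxy"
# position, standing in for the lensing distortion.
positions = [(20 + 20 * i, 20 + 20 * j) for i in range(9) for j in range(9)]
for x, y in positions:
    for dy in range(-HALF, HALF):
        for dx in range(-HALF, HALF):
            sky[y + dy][x + dx] -= 1.0

def stack(sky_map, centers, half):
    """Average postage-stamp cutouts centered on the given positions."""
    n, size = len(centers), 2 * half
    out = [[0.0] * size for _ in range(size)]
    for x, y in centers:
        for dy in range(size):
            for dx in range(size):
                out[dy][dx] += sky_map[y - half + dy][x - half + dx] / n
    return out

s = stack(sky, positions, HALF)
print(len(s), len(s[0]))   # 10 10
print(s[HALF][HALF] < 0)   # the stacked center shows the common dip
```

With 81 cutouts, the per-pixel noise in the stack drops to roughly a ninth of the single-image noise, while the common dip keeps its full depth — the same logic that lets a sub-noise lensing distortion emerge from many stacked CMB patches.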
As a by-product, this effort will produce high signal-to-noise maps of the CMB total intensity, and as the ACTpol work demonstrates, these maps may provide their own new and interesting results. See e.g., J. A. Tyson, R. A. Wenk, and F. Valdes, “Detection of Systematic Gravitational Lens Galaxy Image Alignments - Mapping Dark Matter in Galaxy Clusters,” Astrophys. J. 349, L1 (1990) See e.g., U. Seljak and M. Zaldarriaga, “Lensing-induced Cluster Signatures in the Cosmic Microwave Background,” Astrophys. J. 538, 57 (2000); W. Hu, S. DeDeo, and C. Vale, “Cluster Mass Estimators from CMB Temperature and Polarization Lensing,” New J. Phys. 9, 441 (2007) Mathew Madhavacheril et al. (Atacama Cosmology Telescope Collaboration), “Evidence of Lensing of the Cosmic Microwave Background by Dark Matter Halos,” Phys. Rev. Lett. 114, 151302 (2015) See e.g., K. M. Smith, O. Zahn, and O. Doré, “Detection of Gravitational Lensing in the Cosmic Microwave Background,” Phys. Rev. D 76, 043510 (2007); S. Das et al., “The Atacama Cosmology Telescope: A Measurement of the Cosmic Microwave Background Power Spectrum at 148 and 218 GHz from the 2008 Southern Survey,” Astrophys. J. 729, 62 (2011); A. van Engelen et al., “A Measurement of Gravitational Lensing of the Microwave Background Using South Pole Telescope Data,” 756, 142 (2012); P. A. R. Ade et al. (Planck Collaboration), “Planck 2013 Results. XVII. Gravitational Lensing by Large-Scale Structure,” Astron. Astrophys. 571, A17 (2014) D. Hanson et al., “Detection of B -Mode Polarization in the Cosmic Microwave Background with Data from the South Pole Telescope,” Phys. Rev. Lett. 111, 141301 (2013); P. A. R. Ade et al. (POLARBEAR Collaboration), “Evidence for Gravitational Lensing of the Cosmic Microwave Background Polarization from Cross-Correlation with the Cosmic Infrared Background,” 112, 131302 (2014); R. 
Keisler et al., “Measurements of Sub-degree B -mode Polarization in the Cosmic Microwave Background from 100 Square Degrees of SPTpol Data,” arXiv:1503.02315 (2015); A. van Engelen et al., “The Atacama Cosmology Telescope: Lensing of CMB Temperature and Polarization Derived from Cosmic Infrared Background Cross-Correlation,” arXiv:1412.0626 (2014) L. Anderson et al., “The Clustering of Galaxies in the SDSS-III Baryon Oscillation Spectroscopic Survey: Baryon Acoustic Oscillations in the Data Release 9 Spectroscopic Galaxy Sample,” Mon. Not. R. Astron. Soc. 427, 3435 (2012) C. Heymans et al., “CFHTLenS: The Canada–France–Hawaii Telescope Lensing Survey,” Mon. Not. R. Astron. Soc. 427, 146 (2012); H. Miyatake et al., “The Weak Lensing Signal and the Clustering of BOSS Galaxies I: Measurements,” arXiv:1311.1480 (2013) Mathew Madhavacheril et al. (Atacama Cosmology Telescope Collaboration)
35P05 General topics in linear spectral theory
35P10 Completeness of eigenfunctions, eigenfunction expansions
35P15 Estimation of eigenvalues, upper and lower bounds
35P20 Asymptotic distribution of eigenvalues and eigenfunctions
35P25 Scattering theory
35P30 Nonlinear eigenvalue problems, nonlinear spectral theory
A boundary value problem for the wave equation. Iraniparast, Nezam (1999)
A characterization of balls using the domain derivative. Didenko, Andriy, Emamizadeh, Behrouz (2006)
A class of generalized integral operators. Bekkara, Samir, Messirdi, Bekkai, Senoussaoui, Abderrahmane (2009)
A directional compactification of the complex Fermi surface and isospectrality. D. Bättig (1989/1990)
A discreteness criterion for the spectrum of the Laplace–Beltrami operator on quasimodel manifolds. Svetlov, A.V. (2002)
Michele Correggi (2008/2009)
We study a two-particle quantum system given by a test particle interacting in three dimensions with a harmonic oscillator through a zero-range potential. We give a rigorous meaning to the Schrödinger operator associated with the system by applying the theory of quadratic forms and defining suitable families of self-adjoint operators. Finally we fully characterize the spectral properties of such operators.
Jean Dolbeault, Maria J. Esteban, Eric Séré (2001/2002)
A non-homogeneous Hardy-like inequality has recently been found to be closely related to the knowledge of the lowest eigenvalue of a large class of Dirac operators in the gap of their continuous spectrum.
Bodo Dittmar, Maren Hantke (2011)
For simply connected planar domains with the maximal conformal radius 1 it was proven in 1954 by G. Pólya and M. Schiffer that for the eigenvalues λ of the fixed membrane for any n the following inequality holds [...] where λ^{(0)} are the eigenvalues of the unit disk.
The aim of the paper is to give a sharper version of this inequality and, for the sum of all reciprocals, to derive formulas which in some cases allow this sum to be calculated exactly. Alcuni risultati di teoria spettrale per l'operatore di Laplace in regioni non limitate con frontiera non regolare Antonio Bove (1977) An addition theorem for the manifolds with the Laplacian having discrete spectrum. Kuz'minov, V.I., Shvedov, I.A. (2006) An Elliptic Boundary Value Problem occurring in Magnetohydrodynamics. R. Mennicken, M. Faierman, M. Möller (1993) An inverse eigenvalue problem for an arbitrary multiply connected bounded region in ℝ² Zayed, E.M.E. (1991) An inverse eigenvalue problem for an arbitrary multiply connected bounded region: An extension to higher dimensions. An inverse problem for a general doubly-connected bounded domain with impedance boundary conditions. An Isoperimetric Inequality for the Principal Eigenvalue of a Periodic-Parabolic Problem. Analyse semi-classique pour l'équation de Harper B. Helffer, J. Sjöstrand (1986/1987) M. Doumic (2010) We study the mathematical properties of a general model of cell division structured with several internal variables. We begin with a simpler and specific model with two variables, solve the eigenvalue problem under strong or weak assumptions, and deduce from it the long-time convergence. The main difficulty comes from the natural degeneracy of the birth terms, which we overcome with a regularization technique. We then extend the results to the case with several parameters and recall the link between this...
2-distance 4-colorability of planar subcubic graphs with girth at least 22 Oleg V. Borodin, Anna O. Ivanova (2012) The trivial lower bound for the 2-distance chromatic number χ₂(G) of any graph G with maximum degree Δ is Δ+1. It is known that χ₂(G) = Δ+1 if the girth g of G is at least 7 and Δ is large enough. There are graphs with arbitrarily large Δ and g ≤ 6 having χ₂(G) ≥ Δ+2. We prove the 2-distance 4-colorability of planar subcubic graphs with g ≥ 22. 2-distance coloring of sparse planar graphs. Borodin, O.V., Ivanova, A.O., Neustroeva, T.K. (2004) Csilla Bujtás, E. Sampathkumar, Zsolt Tuza, M.S. Subramanya, Charles Dominic (2010) A 3-consecutive C-coloring of a graph G = (V,E) is a mapping φ:V → ℕ such that every path on three vertices has at most two colors. We prove general estimates on the maximum number χ̄_{3CC}(G) of colors in a 3-consecutive C-coloring of G, and characterize the structure of connected graphs with χ̄_{3CC}(G) ≥ k for k = 3 and k = 4. A categorification for the chromatic polynomial. Helme-Guizon, Laure, Rong, Yongwu (2005) A characterization of locating-total domination edge critical graphs Mostafa Blidia, Widad Dali (2011) For a graph G = (V,E) without isolated vertices, a subset D of vertices of V is a total dominating set (TDS) of G if every vertex in V is adjacent to a vertex in D. The total domination number γₜ(G) is the minimum cardinality of a TDS of G. A subset D of V which is a total dominating set is a locating-total dominating set, or just an LTDS of G, if for any two distinct vertices u and v of V(G)∖D, N_G(u) ∩ D ≠ N_G(v) ∩ D. The locating-total domination number γ_L^t(G) is the minimum cardinality of a locating-total dominating set... A class of 4-pseudomanifolds similar to the lense spaces. Anna Donati (1987) H. R. Maimani, M. R. Pournaki, S.
Yassemi (2010) A graph is called weakly perfect if its chromatic number equals its clique number. In this note a new class of weakly perfect graphs is presented and an explicit formula for the chromatic number of such graphs is given. A classification for maximal nonhamiltonian Burkard-Hammer graphs Ngo Dac Tan, Chawalit Iamjaroen (2008) A graph G = (V,E) is called a split graph if there exists a partition V = I∪K such that the subgraphs G[I] and G[K] of G induced by I and K are empty and complete graphs, respectively. In 1980, Burkard and Hammer gave a necessary condition for a split graph G with |I| < |K| to be hamiltonian. We will call a split graph G with |I| < |K| satisfying this condition a Burkard-Hammer graph. Further, a split graph G is called a maximal nonhamiltonian split graph if G is nonhamiltonian but G+uv is... A clone-theoretic formulation of the Erdős-Faber-Lovász conjecture Lucien Haddad, Claude Tardif (2004) The Erdős-Faber-Lovász conjecture states that if a graph G is the union of n cliques of size n no two of which share more than one vertex, then χ(G) = n. We provide a formulation of this conjecture in terms of maximal partial clones of partial operations on a set. A generalization of combinatorial Nullstellensatz. Lasoń, Michał (2010) A generalization of the dichromatic polynomial of a graph. Farrell, E.J. (1981)
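An illustrative aside (our own sketch, not from any of the papers listed above): the 2-distance chromatic number χ₂(G) from the Borodin-Ivanova entry is simply the chromatic number of the square of G, so for tiny graphs it can be checked by brute force. The graph below and all helper names are hypothetical.

```python
# Brute-force chi_2(G): vertices at distance <= 2 must receive distinct colors.
# Illustrative only; exponential-time, suitable for toy graphs.
from itertools import product

def is_2distance_coloring(edges, n, coloring):
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for u in range(n):
        near = set(adj[u])                 # distance-1 neighbors
        for w in adj[u]:
            near |= adj[w]                 # distance-2 neighbors
        near.discard(u)
        if any(coloring[u] == coloring[w] for w in near):
            return False
    return True

def chi2(edges, n):
    for k in range(1, n + 1):
        for coloring in product(range(k), repeat=n):
            if is_2distance_coloring(edges, n, coloring):
                return k
    return n

path4 = [(0, 1), (1, 2), (2, 3)]           # path on 4 vertices, Delta = 2
print(chi2(path4, 4))                      # 3, attaining the bound Delta + 1
```

For this path, the trivial lower bound Δ+1 = 3 is attained, consistent with the Δ+1 behavior for large-girth graphs described in the entry.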
A canonical trace associated with certain spectral triples. Paycha, Sylvie (2010) A characterization of commutators with Hilbert transforms D. Przeworska-Rolewicz (1972) A Kleinecke-Shirokov type condition with Jordan automorphisms Matej Brešar, Ajda Fošner, Maja Fošner (2001) Let φ be a Jordan automorphism of an algebra 𝒜. The situation when an element a ∈ 𝒜 satisfies ½(φ(a) + φ⁻¹(a)) = a is considered. The result which we obtain implies the Kleinecke-Shirokov theorem and Jacobson’s lemma. A lower bound of the norm of the operator X → AXB + BXA. Mohamed Barraa, Mohamed Boumazgour (2001) A new characterization of Anderson’s inequality in C₁. S. Mecheri (2007) Let ℋ be a separable infinite dimensional complex Hilbert space, and let ℒ(ℋ) denote the algebra of all bounded linear operators on ℋ into itself. Let A = (A₁, A₂, ⋯, Aₙ) and B = (B₁, B₂, ⋯, Bₙ) be n-tuples of operators in ℒ(ℋ); we define the elementary operator Δ_{A,B}: ℒ(ℋ) → ℒ(ℋ) by Δ_{A,B}(X) = ∑ᵢ₌₁ⁿ AᵢXBᵢ − X. In this paper, we characterize the class of pairs of operators A, B ∈ ℒ(ℋ) satisfying Putnam-Fuglede’s property, i.e., the class of pairs of operators A, B ∈ ℒ(ℋ) for which ∑ᵢ₌₁ⁿ BᵢTAᵢ = T implies ∑ᵢ₌₁ⁿ Aᵢ*TBᵢ* = T for all T ∈ C₁(ℋ) (trace class operators). The main result is the equivalence between this property and the fact that... A note on a pair of derivations of semiprime rings. Chaudhry, Muhammad Anwar, Thaheem, A.B. (2004) A note on compact semiderivations Matej Brešar, Yuri Turovskii (2005) Let 𝓐 be a Banach algebra without nonzero finite dimensional ideals. Then every compact semiderivation on 𝓐 is a quasinilpotent operator mapping 𝓐 into its radical.
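A finite-dimensional illustration (our own example, not from the papers above) of the Kleinecke-Shirokov phenomenon mentioned in the Brešar-Fošner-Fošner entry: if A commutes with the commutator C = AB − BA, then C is nilpotent (quasinilpotent in the Banach-algebra setting).

```python
# Concrete 2x2 check with pure-Python matrix arithmetic (integer entries, exact).

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def commutator(X, Y):
    XY, YX = matmul(X, Y), matmul(Y, X)
    n = len(X)
    return [[XY[i][j] - YX[i][j] for j in range(n)] for i in range(n)]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
C = commutator(A, B)          # C = AB - BA
print(commutator(A, C))       # [[0, 0], [0, 0]]: A commutes with C
print(matmul(C, C))           # [[0, 0], [0, 0]]: C is nilpotent, C^2 = 0
```

In infinite dimensions "nilpotent" weakens to "quasinilpotent", which is the content of the Kleinecke-Shirokov theorem.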
A note on the commutator of two operators on a locally convex space. We study the commutator C = AB − BA of two bounded operators A and B acting on a locally convex topological vector space. If AC − CA = 0, then C is a quasinilpotent operator, and we prove that if AC − CA is a compact operator, then C is a Riesz operator. A note on the range of generalized derivation. Mohamed Amouch (2006) Let L(H) denote the algebra of bounded linear operators on a complex separable and infinite dimensional Hilbert space H. For A, B ∈ L(H), the generalized derivation δA,B associated with (A, B) is defined by δA,B(X) = AX − XB for X ∈ L(H). In this note we give some sufficient conditions on A and B under which the intersection between the closure of the range of δA,B with respect to the given topology and the kernel of δA*,B* vanishes. Let L(H) denote the algebra of all bounded linear operators on a separable infinite dimensional complex Hilbert space H into itself. Given A ∈ L(H), we define the elementary operator Δ_A: L(H) → L(H) by Δ_A(X) = AXA − X. In this paper we study the class of operators A ∈ L(H) which have the following property: ATA = T implies AT*A = T* for all trace class operators T ∈ C₁(H). Such operators are termed generalized quasi-adjoints. The main result is the equivalence between this character and the fact that the ultraweak closure of the range of Δ_A is closed under taking... A result on two one-parameter groups of automorphisms. A. van Daele, A.B. Thaheem (1982) A Short Proof of Radjavi's Theorem on Self-Commutators. A.R. Sourour (1979) A stability result for p-harmonic systems with discontinuous coefficients. Stroffolini, Bianca (2001) Absolute continuity and hyponormal operators. Putnam, C.R. (1981) Additivity of maps on triangular algebras. Cheng, Xuehan, Jing, Wu (2008)
A bifurcation theory for periodic solutions of nonlinear dissipative hyperbolic equations A certain type of partial differential equations on tori Michal Fečkan (1992) The existence of classical solutions for some partial differential equations on tori is shown. A condition on the potential for the existence of doubly periodic solutions of a semi-linear fourth-order partial differential equation. Chang, Chen (2000) A further improved tanh function method exactly solving the (2+1)-dimensional dispersive long wave equations. Zhang, Sheng, Xia, Tie-cheng (2008) A Hopf bifurcation generated by variation of the domain Vegas, José M. (1982) A linear and weakly nonlinear equation of a beam: the boundary-value problem for free extremities and its periodic solutions Naděžda Krylová, Otto Vejvoda (1971) A Massera type criterion for a partial neutral functional differential equation. Hernández M., Eduardo (2002) A mathematical aspect for Liesegang phenomena Ohnishi, Isamu, Mimura, Masayasu (2007) A maximum principle for systems with variational structure and an application to standing waves Nicholas D. Alikakos, Giorgio Fusco (2015) We establish via variational methods the existence of a standing wave together with an estimate on the convergence to its asymptotic states for a bistable system of partial differential equations on a periodic domain. The main tool is a replacement lemma which has as a corollary a maximum principle for minimizers. A monotonicity method for solving hyperbolic problems with hysteresis Pavel Krejčí (1988) A nonlinear second order problem with a nonlocal boundary condition. Amster, P., De Nápoli, P.
(2006) A note on asymptotic expansion for a periodic boundary condition The aim of this contribution is to present a new result concerning the asymptotic expansion of solutions of the heat equation with periodic Dirichlet-Neumann boundary conditions with the period going to zero in 3 A note on the existence of positive solutions of one-dimensional p-Laplacian boundary value problems Yuji Liu (2010) This paper is concerned with the existence of positive solutions of a multi-point boundary value problem for a higher-order differential equation with one-dimensional p-Laplacian. Examples are presented to illustrate the main results. The result in this paper generalizes those in existing papers. A note to a bifurcation result of H. Kielhöfer for the wave equation Otto Vejvoda, Pavel Krejčí (1991) A modification of a classical number-theoretic theorem on Diophantine approximations is used for generalizing H. Kielhöfer's result on bifurcations of nontrivial periodic solutions to nonlinear wave equations. A note to the theory of periodic solutions of a parabolic equation Dana Lauerová (1980) A Riesz representation formula for super-biharmonic functions. Abkar, Ali, Hedenmalm, Håkan (2001) A river water quality model for time varying BOD discharge concentration. Oppenheimer, Seth F., Adrian, Donald Dean, Alshawabkeh, Akram (1999) A two-species cooperative Lotka-Volterra system of degenerate parabolic equations. Sun, Jiebao, Zhang, Dazhi, Wu, Boying (2011) Almost periodic solutions of nonlinear hyperbolic equations with time delay. Poorkarimi, Hushang, Wiener, Joseph (2001)
Marouane Rabaoui (2008) In this article, we prove a generalisation of the Bochner-Godement theorem. Our result deals with Olshanski spherical pairs (G,K) defined as inductive limits of increasing sequences of Gelfand pairs (G(n),K(n))_{n≥1}. By using the integral representation theory of G. Choquet on convex cones, we establish a Bochner-type representation of any element φ of the set 𝒫♮(G) of K-biinvariant continuous functions of positive type on G. A Fourier Transform for Compact Nilmanifolds with Flat Orbits. Ole A. Nielsen, Michael Rains (1985) A new approach to function spaces on quasi-metric spaces. A simple-minded computation of heat kernels on Heisenberg groups Françoise Lust-Piquard (2003) We compute the heat kernel on the classical and nonisotropic Heisenberg groups, and on the free step two nilpotent groups N_{n,2}, by an elementary method, in particular without using Laguerre calculus. Waldemar Hebisch, Adam Sikora (1990) A spectral gap property for subgroups of finite covolume in Lie groups Bachir Bekka, Yves Cornulier (2010) Let G be a real Lie group and H a lattice or, more generally, a closed subgroup of finite covolume in G. We show that the unitary representation λ_{G/H} of G on L²(G/H) has a spectral gap, that is, the restriction of λ_{G/H} to the orthogonal complement of the constants in L²(G/H) does not have almost invariant vectors. This answers a question of G. Margulis. We give an application to the spectral geometry of locally symmetric Riemannian spaces of infinite volume. A Spectral Paley-Wiener Theorem. William O. Bray (1993) (Fragmentary abstract: concerns f on ℂⁿ with f(z)e^{|z|²/4} bounded by B; spherical means f×μ_r(z) = 0 for r > B + |z|, z ∈ ℂⁿ; for n = 1, f×μ_r(z) = μ_r×f(z) = 0 for r > B + |z|.) A Unified Approach to concrete plancherel theory of homogeneous spaces. Ronald L.
Lipsman (1997) A weak type (1,1) estimate for a maximal operator on a group of isometries of a homogeneous tree Michael G. Cowling, Stefano Meda, Alberto G. Setti (2010) We give a simple proof of a result of R. Rochberg and M. H. Taibleson that various maximal operators on a homogeneous tree, including the Hardy-Littlewood and spherical maximal operators, are of weak type (1,1). This result extends to corresponding maximal operators on a transitive group of isometries of the tree, and in particular to (nonabelian finitely generated) free groups. A weighted Plancherel formula II. The case of the ball Genkai Zhang (1992) The group SU(1,d) acts naturally on the Hilbert space L²(B, dμ_α) (α > −1), where B is the unit ball of ℂ^d and dμ_α is the weighted measure (1 − |z|²)^α dm(z). It is proved that the irreducible decomposition of the space has finitely many discrete parts and a continuous part. Each discrete part corresponds to a zero of the generalized Harish-Chandra c-function in the lower half plane. The discrete parts are studied via invariant Cauchy-Riemann operators. The representations on the discrete parts are equivalent to actions on some holomorphic... A weighted Plancherel formula. III. The case of the hyperbolic matrix ball. Jaak Peetre, Gen Kai Zhang (1992) Addendum to: Crofton formulae and geodesic distance in hyperbolic spaces. Robertson, Guyan (1998) Amenable unitary representations of locally compact groups.
Compatibility Issues in Maple 7 The following is a brief description of the compatibility issues that affect users upgrading from Maple 6 to Maple 7. Obsolete Packages and Functions Numerics-related Changes New Keywords and Names No updtsrc Utility in Maple 7 The interp, ratinterp, thiele and spline functions are obsolete. The functionality of these functions is now provided by the CurveFitting[PolynomialInterpolation], CurveFitting[RationalInterpolation], CurveFitting[ThieleInterpolation] and CurveFitting[Spline] functions, respectively. The record constructor has been renamed Record for consistency with other, similar constructors. For this release, the name record will continue to work, but is now deprecated in favor of the new spelling. By default, in the exact computation environment (i.e., no floating-point numbers or computations involved) the numeric events overflow and underflow cause the corresponding exception to be raised rather than an infinity or 0 to be returned. This is similar to the Maple 6 behavior for the division_by_zero numeric event, and as with that event, this behavior can be controlled by installing a numeric event handler. See NumericEvent and NumericEventHandler. The UseHardwareFloats environment variable, which was Boolean valued (i.e., took only the values true or false) in Maple 6, now allows a third value, deduced, which is also the new default value. When UseHardwareFloats = deduced, hardware or software floats will be used for certain computations depending on whether the value of the environment variable Digits is less than evalhf(Digits). The semantics of undefined have been slightly changed in order to deal with certain loop termination conditions. In particular, undefined = undefined returns true. For details on these and other changes to Maple's numerics, see updates,Maple7,numerics.
assuming, subset, implies, and xor are new Maple operator keywords. The MeijerG function of Maple 6 was using a nonstandard definition of this function. As of Maple 7, this function has a much more standard definition; Maple 6's variation of the function is still present in Maple 7, but under the name ModifiedMeijerG. See MeijerG for additional details. The one-argument form of the save command, e.g., save filename, is obsolete. The list of names to save must now be given explicitly. The behavior of codegen[maple2intrep], codegen[C], and codegen[fortran] has changed. If these functions are used to translate a procedure f which calls another procedure g, then they can determine the return type of g accurately only if the return type of g is declared; otherwise, a default return type of float is assumed. The codegen[C] function now generates standard math library function names in all cases, even when the precision=single option is provided. The new mode=single option must be used to generate single precision math function names such as "logf". The codegen[fortran] function now produces generic math function names and double precision variables and constants as defaults. The LinearAlgebra[LinearSolve] function now accepts an additional argument of the form methodoptions=list. This argument must be provided when the programming layer routine, LinearAlgebra:-LA_Main:-LinearSolve, is used. See Linear Algebra Programming Submodule Calling Sequences. To extend the size of factorials that can be computed, factorials are no longer evaluated at "compile-time" (during automatic simplification), and there is no longer any (practical) limit on the size of an integer argument to factorial. This change means that you can now delay factorial computations by using unevaluation quotes. Substituting into an unevaluated factorial expression now requires an evaluation to effect the computation.
Small factorials are still computed in the kernel, but the built-in procedure factorial now calls a new library procedure Factorial to handle arguments that are not small integers. For example:
'20!';
    20!
subs( n = 5, 'n!' );
    5!
length( 40000! ); # not possible in earlier releases of Maple
    166714
X11 resources on UNIX systems start with Maple 7. There are no incompatibilities between the syntax of Maple 6 and Maple 7. Therefore, the updtsrc utility is not required for this release. For help porting code files from Maple versions prior to Maple 6 to Maple 7, please visit the Support section of our Web site, www.maplesoft.com.
2^(2^n); x_i; ∑_{i=0}^∞ 1/(2^(2^i) x_i) = 1/(2x₀) + 1/(4x₁) + 1/(16x₂) + ⋯ 2^64: the number of distinct values representable in a single word on a 64-bit processor. Or, the number of values representable in a doubleword on a 32-bit processor. Or, the number of values representable in a quadword on a 16-bit processor, such as the original x86 processors.[4] 2^x · C(n,x). 2^46 = 70 368 744 177 664. a^n + b^n = (a^p)^m + (b^p)^m; a^(2n) + b^(2n); a^(2n) + b^(2n) = (a^n + b^n i)·(a^n − b^n i)
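Two of the facts above are easy to check directly (an illustrative Python snippet, not part of the original article; the values a = 3, b = 2, n = 4 are arbitrary):

```python
# Check the value of 2**46, and the Gaussian-integer factorization
# a^(2n) + b^(2n) = (a^n + b^n i)(a^n - b^n i), using Python's complex type.

print(2 ** 46)  # 70368744177664

a, b, n = 3, 2, 4
lhs = a ** (2 * n) + b ** (2 * n)      # 3^8 + 2^8 = 6561 + 256 = 6817
z = complex(a ** n, b ** n)            # a^n + b^n * i
rhs = (z * z.conjugate()).real         # (a^n + b^n i)(a^n - b^n i) = |z|^2
print(lhs == int(rhs))
```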
Algorithmic Ground-State Cooling of Weakly Coupled Oscillators Using Quantum Logic Steven A. King, Lukas J. Spieß, Peter Micke, Alexander Wilzewski, Tobias Leopold, José R. Crespo López-Urrutia, Piet O. Schmidt The majority of ions and other charged particles of spectroscopic interest lack the fast, cycling transitions that are necessary for direct laser cooling. In most cases, they can still be cooled sympathetically through their Coulomb interaction with a second, coolable ion species confined in the same potential. If the charge-to-mass ratios of the two ion types are too mismatched, the cooling of certain motional degrees of freedom becomes difficult. This limits both the achievable fidelity of quantum gates and the spectroscopic accuracy. Here, we introduce a novel algorithmic cooling protocol for transferring phonons from poorly to efficiently cooled modes. We demonstrate it experimentally by simultaneously bringing two motional modes of a Be⁺–Ar¹³⁺ mixed Coulomb crystal close to their zero-point energies, despite the weak coupling between the ions. We reach the lowest temperature reported for a highly charged ion, with a residual temperature of only T ≲ 200 μK in each of the two modes, corresponding to a residual mean motional phonon number of ⟨n⟩ ≲ 0.4. Combined with the lowest observed electric-field noise in a radio-frequency ion trap, these values enable an optical clock based on a highly charged ion with fractional systematic uncertainty below the 10⁻¹⁸ level. Our scheme is also applicable to (anti)protons, molecular ions, macroscopic charged particles, and other highly charged ion species, enabling reliable preparation of their motional quantum ground states in traps.
Adiabatic process - Simple English Wikipedia, the free encyclopedia An adiabatic process is a thermodynamic process where a fluid becomes warmer or cooler without getting heat from, or giving it to, something else. Usually the temperature instead changes because of changes in pressure. Adiabatic cooling is the usual cause of clouds. When warm, humid air rises due to convection or another cause, water condenses to make clouds, and in some cases precipitation. Convection also causes cold air to sink. This warms it adiabatically, often destroying a cloud and sometimes causing any precipitation to evaporate before hitting the ground. If we take some fluid in an insulated system and expand or compress the system very fast, the system won't be able to gain or lose any heat (from outside of the system). (In reality, you can't get a perfectly insulated system. But in some cases, the heat transfer is so low that we can call the process adiabatic.) From the first law of thermodynamics, we know that,[1] ΔU = ΔQ − ΔW, where ΔU is the change of internal energy, ΔQ is the change of heat, and ΔW is the change of work. Since no heat transfer occurs in an adiabatic process, ΔQ becomes 0. So, for an adiabatic process, the equation is ΔW = −ΔU. Again, we know that ΔW = pΔV (since W = Fs, F = pA, and V = As), where V is volume, F is force, p is pressure, A is the area, and s is the distance. So, the equation stands as ΔU = −pΔV. Which means that by doing work (changing volume), one can change the internal energy (and so the temperature) in an adiabatic process.
Adiabatic process for ideal gas (for reversible adiabatic process) In an adiabatic process, increasing the volume will result in a decrease of the internal energy. In the case of an adiabatic process (see the section above),
(1) pΔV = −ΔU
Again, at constant volume, the heat capacity of one mole of gas is C_v = ΔQ/ΔT, where ΔT is the change of temperature and T is the temperature. But at constant volume (as no work is done), ΔQ = ΔU, so C_v = ΔU/ΔT. From the first equation (1),
(2) pΔV = −C_v ΔT
Now, the ideal gas law for one mole of gas is pV = RT, where R is the gas constant. By differentiating the last equation, pΔV + VΔp = RΔT, so ΔT = (pΔV + VΔp)/R. Now from equation (2), we get C_v ((pΔV + VΔp)/R) + pΔV = 0. Using R = C_p − C_v (Mayer's relation),
(3) C_v ((pΔV + VΔp)/(C_p − C_v)) + pΔV = 0
where C_p is the heat capacity of the gas at constant pressure. Simplifying equation (3) gives VΔp + pΔV (C_p/C_v) = 0. Writing C_p/C_v = γ,
(4) VΔp + γ pΔV = 0
By dividing both sides of equation (4) by pV,
(5) Δp/p + γ ΔV/V = 0
Integrating (integration leaves a constant) equation (5), ln p + γ ln V = (constant). Writing the constant as ln k,
(6) ln p + γ ln V = ln k
so pV^γ = (constant) = k. This is the relation between pressure and volume in an adiabatic process.
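The relation pV^γ = constant derived above can be checked numerically (an illustrative sketch, not part of the article; γ = 5/3 for a monatomic ideal gas and the initial pressure and volume are assumed values):

```python
# Integrate dp/p = -gamma * dV/V in small volume steps (forward Euler) and
# verify that p * V**gamma stays approximately constant during compression.

gamma = 5.0 / 3.0
p, V = 1.0e5, 1.0            # assumed initial pressure (Pa) and volume (m^3)
invariant0 = p * V ** gamma

dV = -1.0e-6                 # small compression step
for _ in range(500_000):     # V goes from 1.0 down to 0.5
    p += -gamma * p / V * dV
    V += dV

print(round(V, 6))                                   # 0.5
print(abs(p * V ** gamma / invariant0 - 1) < 1e-3)   # True: invariant preserved
```

Halving the volume raises the pressure by a factor of 2^γ ≈ 3.17, consistent with equation (6).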
↑ Mandl, Franz (8 January 1991). Statistical Physics (2nd ed.). John Wiley & Sons. ISBN 978-0471915331.
Train Custom LQR Agent - MATLAB & Simulink - MathWorks 한국 The system dynamics are x_{t+1} = A x_t + B u_t with linear feedback u_t = −K x_t. The control objective is to minimize the quadratic cost J = ∑_{t=0}^∞ (x_t′ Q x_t + u_t′ R u_t), where
A = [1.05 0.05 0.05; 0.05 1.05 0.05; 0 0.05 1.05], B = [0.1 0 0.2; 0.1 0.5 0; 0 0 0.5]
Q = [10 3 1; 3 5 4; 1 4 9], R = [0.5 0 0; 0 0.5 0; 0 0 0.5]
For this environment, the reward at time t is r_t = −x_t′ Q x_t − u_t′ R u_t, which is the negative of the quadratic cost. Therefore, maximizing the reward minimizes the cost. The initial conditions are set randomly by the reset function. For the LQR problem, the Q-function for a given control gain K is Q_K(x,u) = [x; u]′ H_K [x; u], where H_K = [H_xx H_xu; H_ux H_uu] is a symmetric, positive definite matrix. The control law that maximizes Q_K is u = −(H_uu)⁻¹ H_ux x, and the feedback gain is K = (H_uu)⁻¹ H_ux. The matrix H_K contains m = n(n+1)/2 distinct element values, where n is the sum of the number of states and the number of inputs.
Denote θ as the vector corresponding to these m elements, where the off-diagonal elements in H_K are multiplied by two. The Q-function is then represented in terms of θ, which contains the parameters to be learned: Q_K(x,u) = θ′(K) φ(x,u), where φ(x,u) is the quadratic basis function in terms of x and u. The LQR agent starts with a stabilizing controller K₀. To get an initial stabilizing controller, place the poles of the closed-loop system A − BK₀ inside the unit circle. To create a custom agent, you must create a subclass of the rl.agent.CustomAgent abstract class. For the custom LQR agent, the defined custom subclass is LQRCustomAgent. For more information, see Create Custom Reinforcement Learning Agents. Create the custom LQR agent using Q, R, and K₀. The agent does not require information on the system matrices A and B. Because the linear system has three states and three inputs, the total number of learnable parameters is m = 21. To ensure satisfactory performance of the agent, set the number of parameter estimates N_p to be greater than twice the number of learnable parameters. In this example, the value is N_p = 45. To get good estimation results for θ, you must apply a persistently excited exploration model to the system. In this example, encourage model exploration by adding white noise to the controller output: u_t = −K x_t + e_t. In general, the exploration model depends on the system models. The optimal reward is given by J_optimal = −x₀′ P x₀.
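The θ/φ parameterization described above can be sketched in a few lines (a minimal illustration in Python rather than MATLAB; the matrix H and vector z below are made-up values, not the ones from this example):

```python
# For z = [x; u] of length n, the symmetric matrix H_K has m = n(n+1)/2 distinct
# entries; theta collects them with off-diagonal entries doubled, so that
# Q_K(x, u) = z' H z = theta' phi(z), where phi(z) stacks z_i * z_j for i <= j.

def basis(z):
    """Quadratic basis phi(z): products z_i * z_j for i <= j."""
    n = len(z)
    return [z[i] * z[j] for i in range(n) for j in range(i, n)]

def theta_from_H(H):
    """Flatten symmetric H into theta, doubling off-diagonal entries."""
    n = len(H)
    return [(1 if i == j else 2) * H[i][j] for i in range(n) for j in range(i, n)]

def quad_form(H, z):
    n = len(z)
    return sum(z[i] * H[i][j] * z[j] for i in range(n) for j in range(n))

# Tiny check with n = 3 (made-up symmetric H and sample point z):
H = [[2.0, 0.5, 0.1],
     [0.5, 1.0, 0.3],
     [0.1, 0.3, 4.0]]
z = [1.0, -2.0, 0.5]
theta = theta_from_H(H)
lhs = quad_form(H, z)                              # z' H z
rhs = sum(t * p for t, p in zip(theta, basis(z)))  # theta' phi(z)
print(len(theta))                                  # m = n(n+1)/2 = 6
print(abs(lhs - rhs) < 1e-12)                      # True
```

For the 3-state, 3-input system in this example, n = 6 and the same construction gives m = 21 learnable parameters, matching the count stated above.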
VAR Model Forecasting, Simulation, and Analysis - MATLAB & Simulink - MathWorks 日本 simulate assumes the multivariate innovations are jointly Gaussian distributed with covariance matrix Σ. simulate yields pseudorandom, Monte Carlo sample paths. Time series of forecast mean square error matrices are based on Σ, the innovations covariance matrix. The forecasts generated by forecast are also deterministic, but the mean square error matrices are based on Σ and the known response values in the forecast horizon. The forecast recursion is ŷ_t = Φ̂₁ ŷ_{t−1} + ... + Φ̂_p ŷ_{t−p} + ĉ + δ̂ t + β̂ x_t, with presample responses ŷ_s. simulate generates random time series based on the model using random paths of multivariate Gaussian innovations distributed with a mean of zero and a covariance of Σ. [2] Pesaran, H. H. and Y. Shin. “Generalized Impulse Response Analysis in Linear Multivariate Models.” Economics Letters. Vol. 58, 1998, 17–29.
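As a minimal sketch of the deterministic forecast recursion above (in Python rather than MATLAB, with made-up VAR(1) coefficients and no trend or exogenous terms):

```python
# Minimum-MSE forecast of a VAR(1) with constant, y_t = Phi*y_{t-1} + c + e_t:
# iterate yhat_t = Phi * yhat_{t-1} + c from the last observed value.

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def var1_forecast(Phi, c, y_last, horizon):
    path, y = [], y_last
    for _ in range(horizon):
        y = [m + ci for m, ci in zip(mat_vec(Phi, y), c)]
        path.append(y)
    return path

Phi = [[0.5, 0.1],
       [0.0, 0.8]]     # assumed stable coefficient matrix
c = [1.0, 0.0]         # assumed constant term
y0 = [2.0, 1.0]        # last observed response
fc = var1_forecast(Phi, c, y0, 2)
print(fc[0])           # one-step-ahead forecast, approximately [2.1, 0.8]
```

Replacing the innovations with draws L·ε, where L is a Cholesky factor of Σ, turns the same recursion into the Monte Carlo simulation that simulate performs.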
Time-Dependent Global Attractor for a Class of Nonclassical Parabolic Equations Fang-hong Zhang, "Time-Dependent Global Attractor for a Class of Nonclassical Parabolic Equations", Journal of Applied Mathematics, vol. 2014, Article ID 748321, 6 pages, 2014. https://doi.org/10.1155/2014/748321 Fang-hong Zhang1 1Department of Mathematics, Longqiao College of Lanzhou Commercial College, Lanzhou, Gansu 730101, China Academic Editor: Gilles Lubineau Based on the recent theory of time-dependent global attractors in the works of Conti et al. (2013) and di Plinio et al. (2011), we prove the existence of time-dependent global attractors as well as the regularity of the time-dependent global attractor for a class of nonclassical parabolic equations. Let be a bounded set of with smooth boundary . For any , we consider the following nonclassical equation: where are assigned data and is a decreasing bounded function satisfying and is such that The nonlinearity , with , is assumed to satisfy the inequality along with the dissipation condition where is the first eigenvalue of in and . The classical reaction-diffusion equation has a long history in mathematical physics and appears in many mathematical models. It arises in several problems of hydrodynamics and heat transfer theory, as well as in solid-fluid interactions and, of course, in standard situations of mass diffusion and flow through porous media [1, 2]. In 1980, Aifantis in [1] pointed out that the classical diffusion equation does not suffice to describe transport in media with two temperatures or two diffusivities, or in cases where the diffusing substance behaves as a viscous fluid. It turns out that new terms appear in the classical diffusion equation when such effects are considered.
In particular, the mixed spatiotemporal derivative consistently appears in several generalized reaction-diffusion models [1, 3–6], and this is our motivation for studying (1), which contains, in addition, the nonlinear term and inhomogeneous term , along with the time-dependent parameter . The presence of this term has some important consequences for the character of the solutions of the partial differential equation under consideration. For example, the classical reaction-diffusion equation has a smoothing effect: although the initial data only belongs to a weaker topology space, the solution will belong to a stronger topology space with higher regularity. However, for (1), if the initial data belongs to , then the solution with is always in and has no higher regularity because of the term . In the case when is a positive constant, the asymptotic behavior of solutions to (1) has been extensively studied by several authors in [2, 7–16] and the references therein. In the general case of time dependence, that is, , the longtime behavior of the nonclassical equation has not been considered so far. In this paper, we borrow some ideas from the following previous contributors: Conti et al. in [17], who introduced the theory of time-dependent global attractors and applied the theory to the wave equations; Caraballo et al., who introduced a one-parameter family of Banach spaces in the context of cocycles for nonautonomous and random dynamical systems in [18] as well as time-dependent spaces [19] in the context of stochastic partial differential equations; and Flandoli and Schmalfuss in [20], who introduced a family of metric spaces depending on a parameter and applied it to the stochastic form of the Navier-Stokes equations. In this paper, based on the recent theory of time-dependent global attractors of Conti et al. [17] and di Plinio et al.
[21], we prove the existence of time-dependent global attractors as well as the regularity of the time-dependent global attractor for a class of nonclassical parabolic equations as described by (1). The paper is organized as follows. In Section 2, we present some preliminaries, establish the necessary notation and function spaces to be used in the subsequent analysis, and give some useful lemmas. In Section 3, we prove the existence of time-dependent global attractors for the nonclassical parabolic equation and its regularity. Our main results are Theorems 14 and 16.

In this section, we introduce some notation and definitions, along with a lemma. We set , with inner product and norm . For , we define the hierarchy of (compactly) nested Hilbert spaces: Then, for and , we introduce the time-dependent spaces endowed with the time-dependent product norms: The symbol is always omitted whenever zero. In particular, the time-dependent phase space where we settle the problem is Then, we have the compact embeddings: with injection constants independent of . Note that the spaces are all the same as linear spaces, and the norms and are equivalent for any fixed , . According to (5), we have the following lemma.

Lemma 1. The following inequalities hold for some and :

3. Existence of the Time-Dependent Global Attractor

3.1. Well-Posedness

For any , we rewrite problem (1) as Using the Galerkin approximation method, we can obtain the following result concerning the existence and uniqueness of solutions; see, for example, [2, 7, 8, 15, 16].

Lemma 2. Under the assumptions of (2)–(5), for any , there is a unique solution of (1), on any interval with , Furthermore, for , let be two initial conditions such that and denote by the corresponding solutions to problem (11). Then the following estimates hold: for some constant .
According to Lemma 2 above, the family of maps with acting as where is the unique solution of (11) with initial time and initial condition , defines a strongly continuous process on the family .

3.2. Time-Dependent Absorbing Set

Definition 3. A time-dependent absorbing set for the process is a uniformly bounded family with the following property: for every there exists such that

Lemma 4. Under the assumptions of (2)–(5), for , , let be the solution of (1); then, there exist positive constants , and an increasing positive function such that

Proof. Multiplying (11) by , we obtain Noting that and using (3) and the Young and Poincaré inequalities, for small, we infer that By the Gronwall lemma, we have This completes the proof.

Lemma 5 (time-dependent absorbing set). Under the assumptions of (2)–(5), there exists a constant , such that the family is a time-dependent absorbing set for .

Proof. From the proof of Lemma 4, for , there exists , provided that . This concludes the proof of the existence of the time-dependent absorbing set.

We can assume that the time-dependent absorbing set is positively invariant (namely, for all ). Indeed, calling the entering time of such that we can substitute with the invariant absorbing family:

3.3. Time-Dependent Global Attractor

As introduced in [17], for , let be a family of normed spaces; we consider the collection When we say that the process is asymptotically compact.

Definition 6. One calls a time-dependent global attractor the smallest element of ; that is, the family such that for any element .

Theorem 7 (see [17]). If is asymptotically compact, then the time-dependent attractor exists and coincides with the set . In particular, it is unique.

According to Definition 6, the existence of the time-dependent global attractor will be proved by a direct application of the abstract Theorem 7. Precisely, in order to show that the process is asymptotically compact, we will exhibit a pullback attracting family of compact sets.
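The Gronwall argument used in Lemma 4 (and repeatedly below) turns a differential inequality $y' \le -\lambda y + c$ into the decay bound $y(t) \le y(0)e^{-\lambda t} + c/\lambda$, which is what produces the absorbing set. A small numerical sketch (illustrative only; the constants are arbitrary and not from the paper) checks this bound:

```python
import numpy as np

# Hedged illustration of the Gronwall lemma used in Lemma 4: if
# y'(t) <= -lam * y(t) + c, then y(t) <= y0 * exp(-lam * t) + c / lam.
# We integrate the extremal ODE y' = -lam*y + c with forward Euler and
# check the bound at every step (constants are arbitrary, for illustration).
lam, c, y0 = 2.0, 0.5, 10.0
dt, T = 1e-3, 5.0

t_grid = np.arange(0.0, T, dt)
y = np.empty_like(t_grid)
y[0] = y0
for i in range(1, len(t_grid)):
    y[i] = y[i - 1] + dt * (-lam * y[i - 1] + c)

bound = y0 * np.exp(-lam * t_grid) + c / lam
# A small tolerance absorbs the Euler discretization error.
ok = np.all(y <= bound + 1e-6)
```

Whatever the initial value $y_0$, the trajectory eventually enters (and stays in) a neighborhood of $c/\lambda$ — the ODE analogue of an absorbing set.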
To this aim, the strategy classically consists in finding a suitable decomposition of the process into the sum of a decaying part and a compact one.

3.3.1. The First Decomposition of the System Equations

For the nonlinearity , following [11, 15, 17], we decompose as follows: where satisfy, for some , Noting that is a time-dependent absorbing set for , then for each initial data , we decompose as where and solve the following equations, respectively:

Lemma 8. Under the assumptions of (2)–(5), (27)–(30), there exists such that

Proof. Multiplying (32) by , we obtain Using (28) and noting that , by the Young and Poincaré inequalities, for small, we infer By the Gronwall lemma, we complete the proof.

Remark 9. From Lemmas 4 and 8, we have the uniform bound

Lemma 10. Under the assumptions of (2)–(5), (27)–(30), there exists such that

Proof. Multiplying (33) by , we obtain In view of Remark 9 and the growth of and , using the embedding , we have Noting that , by the Young and Poincaré inequalities, for small, we infer By the Gronwall lemma, we complete the proof.

Remark 11. From Lemma 10, we immediately have the following regularity result: is bounded in (with a bound independent of ).

Theorem 12 (see [17]). If is a -closed process for some , which possesses a time-dependent global attractor , then is invariant.

Remark 13 (see [17]). If the process is closed, it is -closed, for any . Note that if the process is a continuous (or even norm-to-weak continuous) map for all , then the process is closed.

Theorem 14 (existence of the time-dependent global attractor). Under the assumptions of (2)–(5), the process generated by problem (1) admits an invariant time-dependent global attractor .

According to Lemma 10, we consider the family , where is compact by the compact embedding ; besides, since the injection constants are independent of , is uniformly bounded.
Hence, according to Lemmas 5, 8, and 10, is pullback attracting, and the process is asymptotically compact, which proves the existence of the unique time-dependent global attractor. In order to state the invariance of the time-dependent global attractor, due to the strong continuity of the process stated in Lemma 2, according to Remark 13, the process is closed, and it is -closed, for some ; then by Theorem 12, we know that the time-dependent global attractor is invariant.

3.4. Regularity of the Time-Dependent Global Attractor

3.4.1. The Second Decomposition of the System Equations

We fix and each initial data , decomposing as where and solve the following equations, respectively: As a particular case of Lemma 8, we learn that

Lemma 15. Under the assumptions of (2)–(5), for some , one has the uniform bound

Proof. Multiplying (46) by , we obtain We denote noting that Using (2) and the Young and Poincaré inequalities, for small, we infer Denoting by a generic constant depending on the size of in , using the invariance of the attractor, we find Exploiting the embeddings , , we get this yields ; noting that , we infer By the Gronwall lemma, we can get (48) immediately. Therefore, we have the following regularity result.

Theorem 16 (regularity of the time-dependent global attractor). Under the assumptions of (2)–(5), the time-dependent global attractor , is bounded in , with a bound independent of . In fact, we define according to inequalities (47) and (48), for all , we have where denotes the Hausdorff semidistance in ; that is, From Theorem 14, we know that the time-dependent global attractor is invariant; this means that Hence, ; that is, is bounded in , with a bound independent of .

The author expresses her sincere thanks to the anonymous reviewer for his/her careful reading of the paper and for valuable comments and suggestions. She also thanks the editors for their kind help.

E. C. Aifantis, “On the problem of diffusion in solids,” Acta Mechanica, vol. 37, no.
3-4, pp. 265–296, 1980.
R. Temam, Infinite-Dimensional Dynamical Systems in Mechanics and Physics, Springer, New York, NY, USA, 1997.
J. L. Lions and E. Magenes, Non-Homogeneous Boundary Value Problems and Applications, Springer, Berlin, Germany, 1972.
K. Kuttler and E. C. Aifantis, “Existence and uniqueness in nonclassical diffusion,” Quarterly of Applied Mathematics, vol. 45, no. 3, pp. 549–560, 1987.
K. Kuttler and E. Aifantis, “Quasilinear evolution equations in nonclassical diffusion,” SIAM Journal on Mathematical Analysis, vol. 19, no. 1, pp. 110–120, 1988.
E. C. Aifantis, “Gradient nanomechanics: applications to deformation, fracture, and diffusion in nanopolycrystals,” Metallurgical and Materials Transactions A, vol. 42, no. 10, pp. 2985–2998, 2011.
V. K. Kalantarov, “On the attractors for some non-linear problems of mathematical physics,” Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta imeni V. A. Steklova Akademii Nauk SSSR (LOMI), vol. 152, pp. 50–54, 1986.
Y. Xiao, “Attractors for a nonclassical diffusion equation,” Acta Mathematicae Applicatae Sinica, vol. 18, no. 2, pp. 273–276, 2002.
C. Y. Sun, S. Y. Wang, and C. K. Zhong, “Global attractors for a nonclassical diffusion equation,” Acta Mathematica Sinica, English Series, vol. 23, no. 7, pp. 1271–1280, 2007.
S. Y. Wang, D. S. Li, and C. K. Zhong, “On the dynamics of a class of nonclassical parabolic equations,” Journal of Mathematical Analysis and Applications, vol. 317, no. 2, pp. 565–582, 2006.
C. Sun and M. Yang, “Dynamics of the nonclassical diffusion equations,” Asymptotic Analysis, vol. 59, no. 1-2, pp. 51–81, 2008.
Y. F. Liu and Q. Z. Ma, “Exponential attractors for a nonclassical diffusion equation,” Electronic Journal of Differential Equations, vol. 9, pp. 1–9, 2009.
Q. Z. Ma, Y. F. Liu, and F. H. Zhang, “Global attractors in $H^1(\mathbb{R}^N)$ for nonclassical diffusion equations,” Discrete Dynamics in Nature and Society, vol. 2012, Article ID 672762, 16 pages, 2012.
H. Q. Wu and Z. Y. Zhang, “Asymptotic regularity for the nonclassical diffusion equation with lower regular forcing term,” Dynamical Systems, vol. 26, no. 4, pp. 391–400, 2011.
L.-X. Pan and Y.-F. Liu, “Robust exponential attractors for the non-autonomous nonclassical diffusion equation with memory,” Dynamical Systems, vol. 28, no. 4, pp. 501–517, 2013.
F.-H. Zhang and Y.-F. Liu, “Pullback attractors in $H^1(\mathbb{R}^N)$ for non-autonomous nonclassical diffusion equations,” Dynamical Systems, vol. 29, no. 1, pp. 106–118, 2014.
M. Conti, V. Pata, and R. Temam, “Attractors for processes on time-dependent spaces. Applications to wave equations,” Journal of Differential Equations, vol. 255, no. 6, pp. 1254–1277, 2013.
T. Caraballo, M. J. Garrido-Atienza, and B. Schmalfuss, “Existence of exponentially attracting stationary solutions for delay evolution equations,” Discrete and Continuous Dynamical Systems B, vol. 18, no. 2-3, pp. 271–293, 2007.
T. Caraballo, P. E. Kloeden, and B. Schmalfuß, “Exponentially stable stationary solutions for stochastic evolution equations and their perturbation,” Applied Mathematics and Optimization, vol. 50, no. 3, pp. 183–207, 2004.
F. Flandoli and B. Schmalfuss, “Random attractors for the 3D stochastic Navier-Stokes equation with multiplicative white noise,” Stochastics and Stochastics Reports, vol. 59, no. 1-2, pp. 21–45, 1996.
F. di Plinio, G. S. Duane, and R. Temam, “Time-dependent attractor for the oscillon equation,” Discrete and Continuous Dynamical Systems A, vol. 29, no. 1, pp. 141–167, 2011.

Copyright © 2014 Fang-hong Zhang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Five color theorem - Wikipedia

A Five-Color Map

The five color theorem is a result from graph theory that given a plane separated into regions, such as a political map of the counties of a state, the regions may be colored using no more than five colors in such a way that no two adjacent regions receive the same color. The five color theorem is implied by the stronger four color theorem, but is considerably easier to prove. It is based on a failed attempt at the four color proof by Alfred Kempe in 1879. Percy John Heawood found the error 11 years later, and proved the five color theorem based on Kempe's work.

1 Outline of the proof by contradiction
2 Linear time five-coloring algorithm

Outline of the proof by contradiction

First of all, one associates a simple planar graph G to the given map, namely one puts a vertex in each region of the map, then connects two vertices with an edge if and only if the corresponding regions share a common border. The problem is then translated into a graph coloring problem: one has to paint the vertices of the graph so that no edge has endpoints of the same color. Because G is simple planar, i.e. it may be embedded in the plane without intersecting edges, has no two vertices sharing more than one edge, and has no loops, it can be shown (using the Euler characteristic of the plane) that it must have a vertex shared by at most five edges. (Note: this is the only place where the five-color condition is used in the proof. If this technique is used to prove the four color theorem, it will fail on this step. In fact, the icosahedral graph is 5-regular and planar, and thus does not have a vertex shared by at most four edges.) Find such a vertex, and call it v. Now remove v from G. The graph G' obtained this way has one fewer vertex than G, so we can assume by induction that it can be colored with only five colors.
If the coloring did not use all five colors on the five neighboring vertices of v, then v can be colored in G with a color not used by the neighbors. So now look at the five vertices v1, v2, v3, v4, v5 that were adjacent to v in cyclic order (which depends on how we draw G), and assume they are colored with colors 1, 2, 3, 4, 5 respectively. Now consider the subgraph G_{1,3} of G' consisting of the vertices that are colored with colors 1 and 3 only and the edges connecting them. To be clear, each edge connects a color 1 vertex to a color 3 vertex (such a chain is called a Kempe chain). If v1 and v3 lie in different connected components of G_{1,3}, we can swap the 1 and 3 colors on the component containing v1 without affecting the coloring of the rest of G'. This frees color 1 for v, completing the task. If on the contrary v1 and v3 lie in the same connected component of G_{1,3}, we can find a path in G_{1,3} joining them that consists of only color 1 and 3 vertices. Now turn to the subgraph G_{2,4} of G' consisting of the vertices that are colored with colors 2 and 4 only and the edges connecting them, and apply the same arguments as before. Then either we are able to reverse the 2-4 coloration on the component of G_{2,4} containing v2 and paint v with color 2, or we can connect v2 and v4 with a path that consists of only color 2 and 4 vertices.
Such a path would intersect the 1-3 colored path we constructed before, since v1 through v5 were in cyclic order. This is clearly absurd as it contradicts the planarity of the graph. So G can in fact be five-colored, contrary to the initial presumption.

Linear time five-coloring algorithm

In 1996, Robertson, Sanders, Seymour, and Thomas described a quadratic four-coloring algorithm in their "Efficiently four-coloring planar graphs".[1] In the same paper they briefly describe a linear-time five-coloring algorithm, which is asymptotically optimal. The algorithm as described here operates on multigraphs and relies on the ability to have multiple copies of edges between a single pair of vertices. It is based on Wernicke's theorem, which states the following:

Wernicke's theorem: Assume G is planar, nonempty, has no faces bounded by two edges, and has minimum degree 5. Then G has a vertex of degree 5 which is adjacent to a vertex of degree at most 6.

We will use a representation of the graph in which each vertex maintains a circular linked list of adjacent vertices, in clockwise planar order. In concept, the algorithm is recursive, reducing the graph to a smaller graph with one less vertex, five-coloring that graph, and then using that coloring to determine a coloring for the larger graph in constant time. In practice, rather than maintain an explicit graph representation for each reduced graph, we will remove vertices from the graph as we go, adding them to a stack, then color them as we pop them back off the stack at the end. We will maintain three stacks:

S4: Contains all remaining vertices with either degree at most four, or degree five and at most four distinct adjacent vertices (due to multiple edges).

S5: Contains all remaining vertices that have degree five, five distinct adjacent vertices, and at least one adjacent vertex with degree at most six.
Sd: Contains all vertices deleted from the graph so far, in the order that they were deleted. In the first step, we collapse all multiple edges to single edges, so that the graph is simple. Next, we iterate over the vertices of the graph, pushing any vertex matching the conditions for S4 or S5 onto the appropriate stack. Next, as long as S4 is non-empty, we pop v from S4 and delete v from the graph, pushing it onto Sd, along with a list of its neighbors at this point in time. We check each former neighbor of v, pushing it onto S4 or S5 if it now meets the necessary conditions. When S4 becomes empty, we know that our graph has minimum degree five. If the graph is empty, we go to the final step 5 below. Otherwise, Wernicke's Theorem tells us that S5 is nonempty. Pop v off S5, delete it from the graph, and let v1, v2, v3, v4, v5 be the former neighbors of v in clockwise planar order, where v1 is the neighbor of degree at most 6. We check if v1 is adjacent to v3 (which we can do in constant time due to the degree of v1). There are two cases: If v1 is not adjacent to v3, we can merge these two vertices into a single vertex. To do this, we remove v from both circular adjacency lists, and then splice the two lists together into one list at the point where v was formerly found. Provided that v maintains a reference to its position in each list, this can be done in constant time. It's possible that this might create faces bounded by two edges at the two points where the lists are spliced together; we delete one edge from any such faces. After doing this, we push v3 onto Sd, along with a note that v1 is the vertex that it was merged with. Any vertices affected by the merge are added or removed from the stacks as appropriate. Otherwise, v2 lies inside the face outlined by v, v1, and v3. Consequently, v2 cannot be adjacent to v4, which lies outside this face. We merge v2 and v4 in the same manner as v1 and v3 above. At this point S4, S5, and the graph are empty. 
We pop vertices off Sd. If the vertex was merged with another vertex in step 3, the vertex that it was merged with will already have been colored, and we assign it the same color. This is valid because we only merged vertices that were not adjacent in the original graph. If we had removed it in step 2 because it had at most 4 adjacent vertices, all of its neighbors at the time of its removal will have already been colored, and we can simply assign it a color that none of its neighbors is using.

^ Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1996), "Efficiently four-coloring planar graphs" (PDF), Proc. 28th ACM Symposium on Theory of Computing (STOC), New York: ACM Press.
Heawood, P. J. (1890), "Map-Colour Theorems", Quarterly Journal of Mathematics, Oxford, vol. 24, pp. 332–338
Retrieved from "https://en.wikipedia.org/w/index.php?title=Five_color_theorem&oldid=1084439028"
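The Kempe-chain swap at the heart of the proof outline can be sketched directly. The following is an illustrative sketch (not the linear-time algorithm above; the graph representation and helper names are our own): it recolors the connected component of the {a, b}-colored subgraph containing a given vertex, which is exactly the "swap the 1 and 3 colors on the component containing v1" step.

```python
from collections import deque

# Illustrative sketch of a Kempe-chain swap (not the linear-time algorithm
# above): given a proper coloring, swap colors a and b on the connected
# component of the {a, b}-colored subgraph that contains vertex `start`.
def kempe_swap(adj, coloring, start, a, b):
    """adj: dict vertex -> set of neighbors; coloring: dict vertex -> color."""
    if coloring[start] not in (a, b):
        return coloring
    new_coloring = dict(coloring)
    seen = {start}
    queue = deque([start])
    while queue:                       # BFS over the {a, b}-colored component
        u = queue.popleft()
        new_coloring[u] = b if coloring[u] == a else a
        for w in adj[u]:
            if w not in seen and coloring[w] in (a, b):
                seen.add(w)
                queue.append(w)
    return new_coloring

# Tiny example: a path 0-1-2 colored 1, 3, 1, plus a separate edge 3-4
# colored 1, 3. Swapping on the component of vertex 0 leaves 3 and 4 alone.
adj = {0: {1}, 1: {0, 2}, 2: {1}, 3: {4}, 4: {3}}
coloring = {0: 1, 1: 3, 2: 1, 3: 1, 4: 3}
swapped = kempe_swap(adj, coloring, 0, 1, 3)
```

Because the swap is confined to one connected component of the two-colored subgraph, the result is still a proper coloring — the fact the proof exploits to free a color for v.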
MCQs of Electrochemistry | GUJCET MCQ

Which reaction takes place in an electrochemical cell?

Which cell differs from the principle point of view? (a) Storage cell

The electrochemical cell stops working after some time. Why? (b) The difference of the cell potentials of both electrodes becomes zero. (c) By reversing the direction of the reaction taking place in the cell. (d) Due to change in concentration

What is used for the measurement of the accurate potential of an electrochemical cell?

If M, N, O, P and Q are in increasing order of their standard potentials under standard conditions of their standard half cells, then by the combination of which two half cells will the maximum cell potential be obtained? (b) M and Q (c) M and P (d) M and O

$E^{0}_{red}$ = ? (a) $E^{0}_{oxi}$ (b) $-E^{0}_{red}$ (c) $-E^{0}_{oxi}$ (d) $E^{0}_{redox}$

The solution of silver nitrate becomes coloured when pieces of nickel are added to it because (a) Nickel is oxidised (b) Silver is oxidised (c) Nickel is reduced (d) Silver is precipitated

The values of the standard reduction potentials of the metals X, Y and Z are 0.34 V, 0.80 V and -0.45 V. Give their order of strength as reducing agents. (a) Z > Y > X (b) Z > X > Y (c) X > Y > Z (d) Y > Z > X

The resistance of any uniform conductor is (a) in inverse proportion to its length (b) in direct proportion to its length (c) in inverse proportion to the square of its area of cross section (d) in direct proportion to the area of its cross section

Which is more corroded when the iron plates of steamers are connected with a block of Zn metal and kept in contact with sea water? (d) Neither of the metals
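The half-cell-combination and reducing-agent questions above both follow from $E^{0}_{cell} = E^{0}_{cathode} - E^{0}_{anode}$ (in reduction potentials): the largest cell potential pairs the most positive with the most negative standard reduction potential, and reducing strength runs opposite to reduction potential. A small sketch (our own helper, using the X, Y, Z values quoted above):

```python
from itertools import combinations

# E_cell = E_cathode(red) - E_anode(red); the maximum cell potential pairs
# the highest and lowest standard reduction potentials. Values below are
# the X, Y, Z potentials quoted in the MCQ above (in volts).
E0 = {"X": 0.34, "Y": 0.80, "Z": -0.45}

def best_cell(potentials):
    """Return ((cathode, anode), E_cell) maximizing the cell potential."""
    pairs = {}
    for a, b in combinations(potentials, 2):
        cathode, anode = (a, b) if potentials[a] > potentials[b] else (b, a)
        pairs[(cathode, anode)] = potentials[cathode] - potentials[anode]
    best = max(pairs, key=pairs.get)
    return best, pairs[best]

pair, emf = best_cell(E0)
# Y (0.80 V) as cathode with Z (-0.45 V) as anode gives 1.25 V.
# Reducing strength is opposite to reduction potential: Z > X > Y.
reducing_order = sorted(E0, key=E0.get)  # strongest reducing agent first
```

This reproduces answer (b) of the reducing-agent question: Z > X > Y.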
Long-term stability estimates and existence of a global attractor in a finite element approximation of the Navier-Stokes equations with numerical sub-grid scale modeling - Badia et al 2010a - Scipedia

S. Badia, R. Codina, J. Gutiérrez-Santacreu

Variational multiscale methods lead to stable finite element approximations of the Navier-Stokes equations, both dealing with the indefinite nature of the system (pressure stability) and the velocity stability loss for high Reynolds numbers. These methods enrich the Galerkin formulation with a sub-grid component that is modelled. In fact, the effect of the sub-grid scale on the captured scales has been proved to dissipate the proper amount of energy needed to approximate the correct energy spectrum. Thus, they also act as effective large-eddy simulation turbulence models and allow one to compute flows without the need to capture all the scales in the system. In this article, we consider a dynamic sub-grid model that enforces the sub-grid component to be orthogonal to the finite element space in the $L^2$ sense. We analyze the long-term behavior of the algorithm, proving the existence of appropriate absorbing sets and a compact global attractor. The improvements with respect to a finite element Galerkin approximation are the long-term estimates for the sub-grid component, which are translated into effective pressure and velocity stability. Thus, the stabilization introduced by the sub-grid model into the finite element problem is not deteriorated for infinite time intervals of computation.

S. Badia, R. Codina and J. Gutiérrez-Santacreu, Long-term stability estimates and existence of a global attractor in a finite element approximation of the Navier-Stokes equations with numerical sub-grid scale modeling, SIAM J. Numer. Anal. (2010). Vol. 48 (3), pp.
1013-1037. URL: https://www.scipedia.com/public/Badia_et_al_2010a

Keywords: Navier-Stokes problem • Long-term stability • Absorbing set • Global attractor • Stabilized finite element methods • Orthogonal sub-grid scales • Dynamic subgrid scales
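The orthogonality constraint described in the abstract — forcing the sub-grid component into the $L^2$-orthogonal complement of the finite element space — amounts, in a discrete setting, to applying the projector $I - B(B^{\top}B)^{-1}B^{\top}$ for a basis matrix $B$ of the resolved space. A hedged numerical sketch (the matrices and names are our own, not from the paper):

```python
import numpy as np

# Hedged sketch: split a discrete field into a resolved part in span(B)
# (a stand-in for the finite element space) and a sub-grid part orthogonal
# to it, mimicking the orthogonal sub-grid scale decomposition described
# in the abstract.
rng = np.random.default_rng(1)

n, m = 8, 3
B = rng.standard_normal((n, m))          # basis of the resolved (FE) space
u = rng.standard_normal(n)               # full discrete field

# Orthogonal projector onto span(B); its complement yields the sub-grid part.
P = B @ np.linalg.solve(B.T @ B, B.T)
u_resolved = P @ u                        # captured (finite element) part
u_subgrid = u - u_resolved                # sub-grid part, orthogonal to span(B)

# The sub-grid component is orthogonal to every resolved basis function,
# and the decomposition is energy-consistent (a discrete Pythagoras identity).
orth_residual = np.linalg.norm(B.T @ u_subgrid)
energy_gap = abs(u @ u - (u_resolved @ u_resolved + u_subgrid @ u_subgrid))
```

The energy identity is what makes such decompositions attractive for long-term stability analysis: the resolved and sub-grid energies add up exactly, with no cross terms.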
HasSelfLoop - Maple Help

HasSelfLoop - test if graph has a self loop
NumberOfSelfLoops - count number of self loops in graph
SelfLoops - construct list of self loops in graph

HasSelfLoop(G)
HasSelfLoop(G, v)
NumberOfSelfLoops(G)
SelfLoops(G)

If v is a vertex of the graph, HasSelfLoop(G, v) returns true if the graph G has an edge or arc from v to itself, and false otherwise. The NumberOfSelfLoops(G) command returns the number of self-loops in G. The SelfLoops(G) command returns a set of self-loops in G. Because the data structure for a graph is an array of sets of neighbors, the test for self-loop existence checks each neighbor set, and the cost is O(n) where n is the number of vertices.

> with(GraphTheory):
> G := Graph({[1,2], [2,3], [3,3], [3,4], [4,1]})
G := Graph 1: a directed unweighted graph with 4 vertices, 4 arc(s), and 1 self-loop(s)
> HasSelfLoop(G, 2)
false
> HasSelfLoop(G, 3)
true
> NumberOfSelfLoops(G)
1

The GraphTheory[HasSelfLoop], GraphTheory[NumberOfSelfLoops] and GraphTheory[SelfLoops] commands were introduced in Maple 2020.
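The O(n) scan described above — probing each vertex's neighbor set for the vertex itself — can be sketched outside Maple as well. The following is a Python analogue of the documented behaviour (our own code, not Maple's), using the directed graph from the example:

```python
# Python analogue of the documented O(n) self-loop scan: the graph is stored
# as a mapping of neighbor sets, and each set is probed for its own key.
def self_loops(adj):
    """Return the set of vertices that carry a self-loop."""
    return {v for v, nbrs in adj.items() if v in nbrs}

def has_self_loop(adj, v=None):
    """Check one vertex if v is given, otherwise the whole graph."""
    if v is not None:
        return v in adj[v]
    return bool(self_loops(adj))

# The directed graph from the Maple example: arcs 1->2, 2->3, 3->3, 3->4, 4->1.
G = {1: {2}, 2: {3}, 3: {3, 4}, 4: {1}}
```

Each membership test on a set is expected O(1), so scanning all n neighbor sets costs O(n), matching the cost quoted in the help page.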
A Banach fixed point theorem for topological spaces Ivan Kupka (1992) A Basic Fixed Point Theorem Lech Pasicki (2006) The paper contains a fixed point theorem for stable mappings in metric discus spaces (Theorem 10). A consequence is Theorem 11 which is a far-reaching extension of the fundamental result of Browder, Göhde and Kirk for non-expansive mappings. A Certain Class Of Maps And Fixed Point Theorems Ljubomir Ćirić (1976) A certain class of maps and fixed point theorems. Lj.B. Ciric (1976) A class of Fan-Browder type fixed-point theorems and its applications in topological spaces. Chen, Yi-An, Zhang, Yi-Ping (2010) A common end point theorem for set-valued generalized $(\psi, \varphi)$-weak contraction. Abbas, Mujahid, Đorić, Dragan (2010) A common fixed point theorem B. E. Rhoades (1976) Kiyoshi Iseki (1975) A common fixed point theorem for commuting expanding maps on nilmanifolds. Tauraso, Roberto (2005) A common fixed point theorem for compatible mappings on a normed vector space. Pathak, H.K., Fisher, Brian (1996) A common fixed point theorem for compatible mappings on a normed vector space H. K. Pathak, Brian Fisher (1997) A common fixed point theorem is proved for two pairs of compatible mappings on a normed vector space. A Common Fixed Point Theorem for Expansive Mappings under Strict Implicit Conditions on b-Metric Spaces Mohamed Akkouchi (2011) In the setting of a b-metric space (see [Czerwik, S.: Contraction mappings in b-metric spaces Acta Math. Inform. Univ. Ostraviensis 1 (1993), 5–11.] and [Czerwik, S.: Nonlinear set-valued contraction mappings in b-metric spaces Atti Sem. Mat. Fis. Univ. Modena 46, 2 (1998), 263–276.]), we establish two general common fixed point theorems for two mappings satisfying the (E.A) condition (see [Aamri, M., El Moutawakil, D.: Some new common fixed point theorems under strict contractive conditions Math.... A common fixed point theorem for multivalued Ćirić type mappings with new type compatibility.
Altun, Ishak (2009) A common fixed point theorem for weakly compatible mappings. Babu, Gutti Venkata Ravindranadh, Negash, Alemayehu Geremew (2010) A common fixed point theorem for weakly compatible mappings in non-Archimedean Menger PM-spaces Amit Singh, R. C. Dimri, Sandeep Bhatt (2011) A common fixed point theorem in $D^{*}$-metric spaces Sedghi, Shaban, Shobe, Nabi, Zhou, Haiyun (2007) A common fixed point theorem of Meir and Keeler type. Cho, Y.J., Murthy, P.P., Jungck, G. (1993) A common fixed point theorem of weakly commuting mappings. Popa, Valeriu (1990)
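Most of the results listed above extend the classical Banach contraction principle: a contraction $T$ on a complete metric space has a unique fixed point, obtained as the limit of the iterates $x_{n+1}=T(x_n)$. A minimal numerical illustration (our own example, not from any of the listed papers; $T(x)=\cos x$ maps $[\cos 1, 1]$ into itself with $|T'(x)| \le \sin 1 < 1$ there, so the iteration converges):

```python
import math

# Banach fixed point iteration: for a contraction T on a complete metric
# space, the iterates x_{n+1} = T(x_n) converge to the unique fixed point.
# Example (ours): T(x) = cos(x), contractive on [cos 1, 1], which it maps
# into itself.
def fixed_point(T, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- T(x) until successive iterates agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = T(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(math.cos, 1.0)
# x_star solves cos(x) = x (approximately 0.739085).
```

The linear convergence rate is governed by the contraction constant (here about sin 1 ≈ 0.84 per step), which is exactly the quantity the generalized contraction conditions in the papers above relax or replace.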
2013 Further Results on the Traveling Wave Solutions for an Integrable Equation Chaohong Pan, Zhengrong Liu The objective of this paper is to extend some results of pioneers for the nonlinear equation {m}_{t}=\frac{1}{2}{\left(1/{m}^{k}\right)}_{xxx}-\frac{1}{2}{\left(1/{m}^{k}\right)}_{x} introduced by Qiao. The equivalent relationship of the traveling wave solutions between the integrable equation and the generalized KdV equation is revealed. Moreover, when k=-p/q \left(p\ne q,\ p,q\in {ℤ}^{+}\right) , we obtain some explicit traveling wave solutions by the bifurcation method of dynamical systems. Chaohong Pan, Zhengrong Liu. "Further Results on the Traveling Wave Solutions for an Integrable Equation." J. Appl. Math. 2013, 1-6, 2013. https://doi.org/10.1155/2013/681383
A generalization of the Liouville theorem to polyharmonic functions January, 2001 Toshihide FUTAMURA, Kyoko KISHI, Yoshihiro MIZUTA The aim of this note is to generalize the Liouville theorem to polyharmonic functions u on {ℝ}^{n} . We give a condition on spherical means to assure that u is a polynomial. Toshihide FUTAMURA, Kyoko KISHI, Yoshihiro MIZUTA. "A generalization of the Liouville theorem to polyharmonic functions." J. Math. Soc. Japan 53 (1), 113-118, January, 2001. https://doi.org/10.2969/jmsj/05310113 Keywords: Almansi expansion, Green's formula, mean value property, polyharmonic functions
MCQs of Chemical Kinetics | GUJCET MCQ MCQs of Chemical Kinetics Which unit of time is selected for fast reactions? (d) Nanosecond. The reaction rate increases with increase in temperature, because _____ (a) Energy barrier decreases (b) Threshold energy increases (c) Activation energy increases (d) The number of molecules undergoing effective collision increases. For the reaction n1A + n2B → Products; rate = K[A]^3 [B]^0. If the concentration of A is doubled and the concentration of B is halved, then the reaction rate _____ (a) Increases by four times (b) Increases by eight times (c) Is doubled (d) Becomes ten times. Which of the following is indicated by the stoichiometry of the reaction? (a) Order of reaction (b) Mechanism of reaction (c) The number of intermediate compounds (d) The relative mole numbers of reactants and products. What is the SI unit of reaction rate? (a) mol sec^{-1} (b) mol m^{-3} sec^{-1} (c) mol dm sec^{-1} (d) mol lit^{-1} If the order of reaction is 'n', then what will be the unit of its rate constant? {\mathrm{lit}}^{\mathrm{n}} {\mathrm{mol}}^{-\mathrm{n}} {\mathrm{sec}}^{-1} ; {\left(\mathrm{mol}\ {\mathrm{lit}}^{-1}\right)}^{\mathrm{n}-1} {\mathrm{sec}}^{-1} ; {\left[\frac{\mathrm{lit}}{\mathrm{mol}}\right]}^{\mathrm{n}-1} {\mathrm{sec}}^{-1} ; {\mathrm{mol}}^{\mathrm{n}} {\mathrm{lit}}^{-\mathrm{n}} {\mathrm{sec}}^{-1} If the relation between half reaction time and {\left[\mathrm{R}\right]}_{0} is {\mathrm{t}}_{1/2}\propto \frac{1}{{\left[\mathrm{R}\right]}_{0}^{\mathrm{n}-1}} , then what will be the order of reaction?
\frac{\mathrm{n}-2}{2} ; \frac{1}{\mathrm{n}-2} The orders of the following reactions are respectively _____ {\mathrm{H}}_{2}\left(\mathrm{g}\right)+{\mathrm{Cl}}_{2}\left(\mathrm{g}\right)\stackrel{\mathrm{hv}}{\to }2\mathrm{HCl}\left(\mathrm{g}\right) ; {\mathrm{H}}_{2}\left(\mathrm{g}\right)+{\mathrm{Br}}_{2}\left(\mathrm{g}\right)\to 2\mathrm{HBr}\left(\mathrm{g}\right) ; {\mathrm{H}}_{2}\left(\mathrm{g}\right)+{\mathrm{I}}_{2}\left(\mathrm{g}\right)\to 2\mathrm{HI}\left(\mathrm{g}\right) (c) 0, 1.5, 2 (d) 2, 1.5, 2 If two reactants are taking part in the reaction, then the reaction will never be (c) Monomolecular (d) Bimolecular. On which of the following factors does the rate of reaction depend? (a) Molecular mass of reactant (b) Atomic mass of reactant (c) Equivalent weight of reactant (d) Active mass of reactant.
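The rate-law question above (rate = K[A]^3 [B]^0 with [A] doubled and [B] halved) can be verified with a short numerical sketch; the function name `rate` and the concentration values are illustrative, not from the quiz page:

```python
# Rate law from the question: rate = K [A]^3 [B]^0
def rate(K, A, B):
    return K * A**3 * B**0

r1 = rate(1.0, 1.0, 1.0)      # original concentrations
r2 = rate(1.0, 2.0, 0.5)      # [A] doubled, [B] halved
print(r2 / r1)                # -> 8.0
```

Since B enters with order zero, halving it has no effect, and doubling A multiplies the rate by 2^3 = 8, i.e. the rate increases eight times.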
20E36 Automorphisms of infinite groups [For automorphisms of finite groups, see ] 20E42 Groups with a BN -pair; buildings A certain property of abelian groups Witold Seredyński, Jacek Świątkowski (1993) A conjugacy theorem for subgroups of G{L}_{n} containing the group of diagonal matrices N. A. Vavilov (1987) A Finitely Presented Solvable Group that is not Residually Finite. Gilbert Baumslag (1973) A matrix characterization of the maximal groups in {\beta }_{X} Darald J. Hartfiel, Carlton J. Maxson (1975) A New Bound for the Genus of a Nilpotent Group Claude Lemaire (1976) A Note on Baer Groups of Finite Rank. A note on infinite aS-groups Reza Nikandish, Babak Miraftab (2015) Let G be a group. If every nontrivial subgroup of G has a proper supplement, then G is called an aS-group. We study some properties of aS-groups. For instance, it is shown that a nilpotent group G is an aS-group if and only if G is a subdirect product of cyclic groups of prime orders. We prove that if G is an aS-group which satisfies the descending chain condition on subgroups, then G is finite. Among other results, we characterize all abelian groups for which every nontrivial quotient group is an aS-group.... A simple proof of Baer's and Sato's theorems on lattice-isomorphisms between groups Mario Mainardis (1992) A simple proof is given of a well-known result on the existence of lattice-isomorphisms between locally nilpotent quaternionfree modular groups and abelian groups. Abelian quasinormal subgroups of groups Stewart E. Stonehewer, Giovanni Zacher (2004) Let G be any group and let A be an abelian quasinormal subgroup of G . If n is any positive integer, either odd or divisible by 4 , then we prove that the subgroup {A}^{n} is also quasinormal in G . Actions of groups on lattices. Tărnăuceanu, M.
(2002) Adjoint groups and the Mal'cev correspondence (a tale of four functors) Affinities of groups Roland Schmidt (1984) Alcuni aspetti della teoria reticolare dei gruppi infiniti Carmela Musella (2000) Almost finitely presented soluble groups. Artinian and noetherian factorized groups Bernhard Amberg (1976) Automorphisms of Locally Nilpotent FC-Groups. Stewart E. Stonehewer (1976)
MCQs of Electromagnetic Induction | GUJCET MCQ MCQs of Electromagnetic Induction When the electric current in a coil steadily changes from +2 A to -2 A in 0.05 s, an induced emf of 8.0 V is generated in it. Then the self-inductance of the coil is _____ H. Coils X and Y are joined in a circuit in such a way that when the change of current in X is 2 A, the change in the magnetic flux in Y is 0.4 Wb. The mutual inductance of the system of two coils is _____ H. A coil is placed in a time-varying magnetic field. Electrical power is dissipated in the form of Joule heat due to the current induced in the coil. If the number of turns were to be quadrupled and the wire radius halved, the electrical power dissipated would be _____ . The self-inductances of two solenoids A and B having equal length are the same. If the numbers of turns in solenoids A and B are 100 and 200 respectively, the ratio of the radii of their cross-sections will be _____ . A coil of surface area 100 cm^2 having 50 turns is held perpendicular to a magnetic field of intensity 0.02 Wb m^{-2}. The resistance of the coil is 2 Ω. If it is removed from the magnetic field in 1 s, the induced charge in the coil is _____ . (b) 0.5 C (c) 0.05 C The magnetic flux linked with a coil is changing with time t (second) according to \Phi =6{t}^{2}-5t+1 , where Φ is in Wb. At t = 0.5 s, the induced current in the coil is _____ . (the resistance of the circuit is 10 Ω) The mutual inductance of a system of two coils is 5 mH. The current in the first coil varies according to the equation I = I0 sin ωt, where I0 = 10 A and ω = 100π rad s^{-1}. The value of the maximum induced emf in the second coil is _____ . (a) 2π V (b) 5π V (c) π V (d) 4π V A magnet is moving towards a coil along its axis and the emf induced in the coil is ε. If the coil also starts moving towards the magnet with the same speed, the induced emf will be _____ .
\frac{\epsilon }{2} (c) 2ε (d) 4ε A square conducting coil of area 100 cm^2 is placed normally inside a uniform magnetic field of 103 Wb m^{-2}. The magnetic flux linked with the coil is _____ Wb. The distance between the two extreme points of the two wings of an aeroplane is 50 m. It is flying at a speed of 360 km h^{-1} in the horizontal direction. If the vertical component of the earth's magnetic field at that place is 2 × 10^{-4} Wb m^{-2}, the induced emf between these two end points is _____ V.
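The aeroplane-wing question above is a direct application of the motional-emf formula ε = B v L. The following numeric sketch (variable names are mine, not from the quiz page) evaluates it with the given values:

```python
# Motional emf between the wingtips: emf = B_v * v * L
B_v = 2e-4               # vertical field component in Wb m^-2 (tesla)
v = 360 * 1000 / 3600    # 360 km/h converted to m/s -> 100 m/s
L = 50                   # wingspan in metres
emf = B_v * v * L
print(emf)               # approximately 1.0 V
```

So the induced emf between the two wingtips comes out to about 1 V.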
Annual Percentage Rate (APR) Definition , and yet another for balance transfers from another card. Issuers also charge high-rate penalty APRs to customers for late payments or violating other terms of the cardholder agreement. There’s also the introductory APR—a low or 0% rate—with which many credit card companies try to entice new customers to sign up for a card. \begin{aligned} &\text{APY} = (1 + \text{Periodic Rate} ) ^ n - 1 \\ &\textbf{where:} \\ &n = \text{Number of compounding periods per year} \\ \end{aligned} ​APY=(1+Periodic Rate)n−1where:n=Number of compounding periods per year​ \begin{aligned} &( ( 1 + .0006273 ) ^ {365} ) - 1 = .257 \\ \end{aligned} ​((1+.0006273)365)−1=.257​ \begin{aligned} &\text{APR} = \left ( \left ( \frac{ \frac{ \text{Fees} + \text{Interest} }{ \text {Principal} } }{ n } \right ) \times 365 \right ) \times 100 \\ &\textbf{where:} \\ &\text{Interest} = \text{Total interest paid over life of the loan} \\ &\text{Principal} = \text{Loan amount} \\ &n = \text{Number of days in loan term} \\ \end{aligned} ​APR=((nPrincipalFees+Interest​​)×365)×100where:Interest=Total interest paid over life of the loanPrincipal=Loan amountn=Number of days in loan term​
Fluctuating Nature of Light-Enhanced d-Wave Superconductivity: A Time-Dependent Variational Non-Gaussian Exact Diagonalization Study Yao Wang, Tao Shi, Cheng-Chien Chen Engineering quantum phases using light is a novel route to designing functional materials, where light-induced superconductivity is a successful example. Although this phenomenon has been realized experimentally, especially for the high- {T}_{c} cuprates, the underlying mechanism remains mysterious. Using the recently developed variational non-Gaussian exact diagonalization method, we investigate a particular type of photoenhanced superconductivity by suppressing a competing charge order in a strongly correlated electron-electron and electron-phonon system. We find that the d -wave superconductivity pairing correlation can be enhanced by a pulsed laser, consistent with recent experiments based on gap characterizations. However, we also find that the pairing correlation length is heavily suppressed by the pump pulse, indicating that light-enhanced superconductivity may be of fluctuating nature. Our findings also imply a general behavior of nonequilibrium states with competing orders, beyond the description of a mean-field framework.
Kenmotsu type representation formula for surfaces with prescribed mean curvature in the hyperbolic 3-space October, 2000 Reiko AIYAMA, Kazuo AKUTAGAWA Our primary object of this paper is to give a representation formula for surfaces with prescribed mean curvature in the hyperbolic 3-space of curvature -1 in terms of their normal Gauss maps. For CMC (constant mean curvature) surfaces, we derive another representation formula in terms of their adjusted Gauss maps. These formulas are hyperbolic versions of the Kenmotsu representation formula for surfaces in the Euclidean 3-space. As an application, we give a construction of complete simply connected CMC H \left(|H|<1\right) surfaces embedded in the hyperbolic 3-space. Reiko AIYAMA, Kazuo AKUTAGAWA. "Kenmotsu type representation formula for surfaces with prescribed mean curvature in the hyperbolic 3-space." J. Math. Soc. Japan 52 (4), 877-898, October, 2000. https://doi.org/10.2969/jmsj/05240877 Keywords: Harmonic maps, normal Gauss maps, representation formula, surfaces in the hyperbolic 3-space
35L02 First-order hyperbolic equations 35L03 Initial value problems for first-order hyperbolic equations 35L04 Initial-boundary value problems for first-order hyperbolic equations 35L25 Higher-order hyperbolic equations 35L51 Second-order hyperbolic systems 35L52 Initial value problems for second-order hyperbolic systems 35L53 Initial-boundary value problems for second-order hyperbolic systems 35L57 Initial-boundary value problems for higher-order hyperbolic systems 35L60 Nonlinear first-order hyperbolic equations 35L70 Nonlinear second-order hyperbolic equations 35L71 Semilinear second-order hyperbolic equations 35L72 Quasilinear second-order hyperbolic equations 35L75 Nonlinear higher-order hyperbolic equations 35L76 Semilinear higher-order hyperbolic equations 35L77 Quasilinear higher-order hyperbolic equations 35L81 Singular hyperbolic equations 35L85 Linear hyperbolic unilateral problems and linear hyperbolic variational inequalities 35L86 Nonlinear hyperbolic unilateral problems and nonlinear hyperbolic variational inequalities 35L87 Unilateral problems and variational inequalities for hyperbolic systems 35L90 Abstract hyperbolic equations A Carleman estimates based approach for the stabilization of some locally damped semilinear hyperbolic equations Louis Tebou (2008) First, we consider a semilinear hyperbolic equation with a locally distributed damping in a bounded domain. The damping is located on a neighborhood of a suitable portion of the boundary. Using a Carleman estimate [Duyckaerts, Zhang and Zuazua, Ann. Inst. H. Poincaré Anal. Non Linéaire (to appear); Fu, Yong and Zhang, SIAM J. Contr. Opt. 46 (2007) 1578–1614], we prove that the energy of this system decays exponentially to zero as the time variable goes to infinity. Second, relying on another Carleman... {C}^{1,1} Hart F. Smith (1998) In this article we give a construction of the wave group for variable coefficient, time dependent wave equations, under the hypothesis that the coefficients of the principal term possess two bounded derivatives in the spatial variables, and one bounded derivative in the time variable. We use this construction to establish the Strichartz and Pecher estimates for solutions to the Cauchy problem for such equations, in space dimensions n=2 and n=3 . A simple construction of a parametrix for a regular hyperbolic operator Fausto Segala (1988) A Simple Example of Localized Parametric Resonance for the Wave Equation Colombini, Ferruccio, Rauch, Jeffrey (2008) 2000 Mathematics Subject Classification: 35L05, 35P25, 47A40. The problem studied here was suggested to us by V. Petkov. Since the beginning of our careers, we have benefitted from his insights in partial differential equations and mathematical physics. In his writings and many discussions, the conjunction of deep analysis and specially interesting problems has been a source of inspiration for us. The research of J. Rauch is partially supported by the U.S. National Science Foundation under grant NSF-DMS-0104096... A stability estimate for a solution to the problem of determination of two coefficients of a hyperbolic equation. Glushkova, D.I., Romanov, V.G. (2003) A stability estimate for a solution to the wave equation with the Cauchy data on a timelike cylindrical surface. Romanov, V.G. (2005) A Theorem of Global Existence of Solutions to Nonlinear Wave Equations in Four Space Dimensions.
Jianmin Gao (1990) About the influence of oscillations on Strichartz-type decay estimates. Reissig, M., Yagdjian, K. (2000) Abstract linear hyperbolic equations and asymptotic equivalence as t\to +\infty Arosio, Alberto (1982) An analytic method for the initial value problem of the electric field system in vertical inhomogeneous anisotropic media Valery Yakhno, Ali Sevimlican (2011) The time-dependent system of partial differential equations of the second order describing the electric wave propagation in vertically inhomogeneous electrically and magnetically biaxial anisotropic media is considered. A new analytical method for solving an initial value problem for this system is the main object of the paper. This method consists in the following: the initial value problem is written in terms of Fourier images with respect to lateral space variables, then the resulting problem... Michael Dreher (2003) An iteration method for nonlinear second order evolution equations Konrad Gröger (1976) Analysis of an algorithm for the Galerkin-characteristic method. Rodolfo Bermejo (1991/1992) Application of a noncompactness technique to an investigation of the existence of solutions to a nonlocal multivalued Darboux problem. Byszewski, Ludwik, Papageorgiou, Nikolaos S. (1999) Asymptotic and numerical description of the kink/antikink interaction. Omel'yanov, Georgii A., Segundo-Caballero, Israel (2010) Asymptotic Energy Propagation and Scattering of Waves in Waveguides with Cylinders. William C. Lydord (1976)
Some of the axes and graphs below have errors, and some do not. For each one, decide if it contains errors. If there are errors, redraw the graph correctly. This one is wrong. Think about how we indicate exactly where each value on the axes is represented. This one is wrong too. Where is 0 on the graph? On each axis, the distance between 0 and 3 should be the same as the distance between 3 and 6. How could you redraw this one correctly? This one is correct!
MCQs of Continuity and Differentiability | GUJCET MCQ Continuity and Differentiability MCQs MCQs of Continuity and Differentiability {\left[\frac{d}{dx}{\mathrm{sec}}^{-1}x\right]}_{x=-3} = _____ . \frac{1}{\sqrt{{x}^{2}-1}} ; -\frac{1}{\sqrt{{x}^{2}-1}} ; \frac{1}{6\sqrt{2}} ; \frac{-1}{6\sqrt{2}} \frac{d}{dx}{x}^{x} = _____ \left(x>0\right) . {x}^{x-1} ; {x}^{x} ; {x}^{x}\left(1+\mathrm{log}\ x\right) \frac{d}{dx}\left({\mathrm{sin}}^{-1}x+{\mathrm{cos}}^{-1}x\right) = _____ \left(\left|x\right|<1\right) . \frac{2}{\sqrt{1-{x}^{2}}} ; \frac{1}{\sqrt{1-{x}^{2}}} ; does not exist \frac{d}{dx}{a}^{a} = _____ \left(a>0\right) . {a}^{a}\left(1+\mathrm{log}\ a\right) ; {a}^{a} ; does not exist \frac{d}{dx}{e}^{5x} = _____ . {e}^{5x} ; 5{e}^{5x} ; 5x{e}^{5x-1} \frac{d}{dx}\mathrm{log}\left|x\right| = _____ \left(x\ne 0\right) . \frac{1}{\left|x\right|} ; \frac{1}{x} ; does not exist ; {e}^{x} \frac{d}{dx}{\mathrm{sin}}^{3}x = _____ . 3{\mathrm{sin}}^{2}x ; 3{\mathrm{cos}}^{2}x ; 3{\mathrm{sin}}^{2}x\ \mathrm{cos}x ; -3{\mathrm{cos}}^{2}x\ \mathrm{sin}x \frac{d}{dx}{\mathrm{tan}}^{n}x = _____ . n{\mathrm{tan}}^{n-1}x ; n{\mathrm{tan}}^{n-1}x\ {\mathrm{sec}}^{2}x ; n{\mathrm{sec}}^{2n}x ; n{\mathrm{tan}}^{n-1}x\ {\mathrm{sec}}^{n-1}x If f\left(x\right)=\left\{\begin{array}{ll}x & x\in \left(0,1\right)\\ 1 & x\ge 1\end{array}\right. then: f is continuous at x=1 only ; f is discontinuous at x=1 only ; f is continuous on {\mathrm{R}}^{+} ; f is not defined for x=1 If f\left(x\right)=\left\{\begin{array}{ll}ax+b & 1\le x<5\\ 7x-5 & 5\le x<10\\ bx+3a & x\ge 10\end{array}\right. is continuous, \left(a,b\right) = _____ .
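Derivative questions like the ones above can be sanity-checked numerically with a central finite difference. This sketch (the helper names and the sample point x = 2 are mine, not from the quiz page) verifies d/dx x^x = x^x (1 + log x):

```python
import math

# f(x) = x^x, whose derivative is x^x * (1 + ln x) for x > 0
def f(x):
    return x ** x

# Central finite-difference approximation of f'(x)
def numeric_derivative(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

x = 2.0
analytic = x**x * (1 + math.log(x))
print(abs(numeric_derivative(f, x) - analytic) < 1e-4)  # -> True
```

The same pattern works for checking the sin^3 x and tan^n x items, since a central difference agrees with the true derivative up to O(h^2).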
32A07 Special domains (Reinhardt, Hartogs, circular, tube) 32A10 Holomorphic functions 32A15 Entire functions 32A17 Special families of functions 32A18 Bloch functions, normal functions 32A20 Meromorphic functions 32A22 Nevanlinna theory (local); growth estimates; other inequalities 32A26 Integral representations, constructed kernels (e.g. Cauchy, Fantappiè-type kernels) 32A27 Local theory of residues 32A30 Other generalizations of function theory of one complex variable (should also be assigned at least one classification number from Section 30) {H}^{p} -spaces, Nevanlinna spaces 32A36 Bergman spaces 32A37 Other spaces of holomorphic functions (e.g. bounded mean oscillation (BMOA), vanishing mean oscillation (VMOA)) 32A40 Boundary behavior of holomorphic functions 32A50 Harmonic analysis of several complex variables 32A65 Banach algebra techniques A lower bound on the radius of analyticity of a power series in a real Banach space Timothy Nguyen (2009) Let F be a power series centered at the origin in a real Banach space with radius of uniform convergence ϱ. We show that F is analytic in the open ball B of radius ϱ/√e, and furthermore, the Taylor series of F about any point a ∈ B converges uniformly within every closed ball centered at a contained in B. A note on matrix transformations of holomorphic Dirichlet series. Lê Hai Khôi (1999) A pointwise growth estimate for analytic functions in tubes. Carmichael, Richard D., Hayashi, Elmer K. (1980) In 1996, Braaksma and Faber established the multi-summability, on suitable multi-intervals, of formal power series solutions of locally analytic, nonlinear difference equations, in the absence of “level {1}^{+} ”. Combining their approach, which is based on the study of corresponding convolution equations, with recent results on the existence of flat (quasi-function) solutions in a particular type of domains, we prove that, under very general conditions, the formal solution is accelero-summable. Its sum... 
Jacob Sznajdman (2010) We give an elementary proof of the Briançon-Skoda theorem. The theorem gives a criterion for when a function \phi belongs to an ideal I of the ring of germs of analytic functions at 0\in {ℂ}^{n} ; more precisely, the ideal membership is obtained if a function associated with \phi and I is locally square integrable. If I can be generated by m elements, it follows in particular that \overline{{I}^{min\left(m,n\right)}}\subset I , where \overline{J} denotes the integral closure of an ideal J . An expansion theorem for nonanalytic functions in several complex variables. Mario O. González (1973) Let X be a complex Banach space, and denote by B\left(R\right)\subset X the ball of radius R centered at 0 . We consider the following approximation problem: given 0<r<R , ϵ>0 and f holomorphic in B\left(R\right) , does there always exist a function g , holomorphic on X , with |f-g|<ϵ on B\left(r\right) ? We show that this is indeed the case if X is the space {l}^{1} of summable sequences. Approximation of entire functions of two complex variables in Banach spaces. Approximation of holomorphic functions of infinitely many variables II Let X be a Banach space and B\left(R\right)\subset X the ball of radius R centered at 0 . Can any holomorphic function on B\left(R\right) be approximated by entire functions, uniformly on smaller balls B\left(r\right) ? We answer this question in the affirmative for a large class of Banach spaces. Jean-Pierre Vigué (1978) Comparing modules of differential operators by their evaluation on polynomials Herwig Hauser (1989) Continuity of intersection of analytic sets P. Tworzewski, T. Winiarski (1983) Convergence Sets of a Formal Power Series. N. Levenberg, R.E. Molzon (1988) Convergence sets of divergent power series. Avinash Sathaye (1976) Deformaciones de gérmenes analíticos equivariantes. José Ferrer Llop, Fernando Puerta Sales (1981) Der Hauptsatz über Iteration im Ring der formalen Potenzreihen. Günther H. Mehring (1987) Sławomir Cynk, Piotr Tworzewski (1994) {ℂ}^{m} {ℂ}^{m+1}
MCQs of Semiconductor Electronics: Materials, Devices and Simple Circuits | GUJCET MCQ Semiconductor Electronics: Materials, Devices and Simple Circuits MCQs MCQs of Semiconductor Electronics: Materials, Devices and Simple Circuits The densities of electrons and holes in an intrinsic semiconductor are ne and nh respectively. Which of the following options is true? {\mathrm{n}}_{\mathrm{h}}>{\mathrm{n}}_{\mathrm{e}} ; {\mathrm{n}}_{\mathrm{e}}>{\mathrm{n}}_{\mathrm{h}} ; {\mathrm{n}}_{\mathrm{e}}={\mathrm{n}}_{\mathrm{h}} ; {\mathrm{n}}_{\mathrm{h}}>>{\mathrm{n}}_{\mathrm{e}} An amplifier has voltage gain equal to 200 and its input signal is 0.5 cos(313 t) V. The output signal will be equal to _____ volt. 100 \mathrm{cos}\left(313\mathrm{t}+90°\right) ; 100 \mathrm{cos}\left(313\mathrm{t}+180°\right) ; 100 \mathrm{cos}\left(493\mathrm{t}\right) ; 0.5 \mathrm{cos}\left(313\mathrm{t}+180°\right)
Lorentz transformations, light cones – ebvalaim.log Briefly about rotations A coordinate system being transformed by a rotation (click to see the animation) Rotations are probably quite familiar to everyone. You can just grab an object and move it around, you can spin something on a stick, the wheels of a bike or merry-go-rounds are rotating. Everyone learns to recognize things regardless of how rotated they are since early childhood. We understand rotations intuitively and we know what to expect of them. Nevertheless, in order to understand the similarities and differences between rotations and Lorentz transformations in more depth, we have to take a look at rotations on a slightly more abstract, mathematical level. Actually, what are transformations, whether in space or in space-time? Well, simply put, a transformation is something that can take a point, let's call it A, and it will give us another point, let's call it A'. If we are on a plane, we can describe a given point for example with a pair of coordinates (x,y) - a transformation will change it into some (x', y'). In space-time we would usually describe a point with a set of four coordinates: (t, x, y, z), and this will become (t', x', y', z') after a transformation. We can use formulas to describe a rotation on a plane by an angle \varphi the following way: x' = x \cos \varphi - y \sin \varphi , y' = x \sin \varphi + y \cos \varphi . A few properties follow directly from these equations. If x=0 and y=0 , then x'=0 and y'=0 - or, in other words, the rotation doesn't affect the origin. If we give the point (0,0) to a rotation to be transformed, it will give us back the same, unchanged (0,0). The distance of a point from the origin is the same before and after the rotation: x^2 + y^2 = x'^2 + y'^2 (I recommend calculating this yourself from the equations above as an exercise - you just need to remember the Pythagorean trigonometric identity: \sin^2 \varphi + \cos^2 \varphi = 1 ). The Pythagorean theorem tells us that the distance of a point (x,y) from the origin (0,0) is \sqrt{x^2 + y^2} .
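The distance-invariance property can also be checked numerically. This is a minimal sketch (the point (3, 4) and the angle 0.7 are arbitrary illustrative values, not from the original post):

```python
import math

# Rotation by angle phi: x' = x cos(phi) - y sin(phi), y' = x sin(phi) + y cos(phi)
def rotate(x, y, phi):
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))

x, y = 3.0, 4.0
xp, yp = rotate(x, y, 0.7)
# The squared distance from the origin is unchanged by the rotation:
print(abs((x**2 + y**2) - (xp**2 + yp**2)) < 1e-12)  # -> True
```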
The value in the square root doesn't change under a rotation, so the point after the transformation will be at the same distance from (0,0). To expand on the previous point somewhat - if we have two points A and B, whose coordinates differ by \Delta x \Delta y , and we transform them into A' and B', whose coordinates will differ by \Delta x' \Delta y' , then still (\Delta x)^2 + (\Delta y)^2 = (\Delta x')^2 + (\Delta y')^2 The second bullet point above also means that rotations transform circles centered at (0,0) into themselves - however you rotate a circle, it looks the same. A circle is a set of points at a given distance from the center - so if we have a point on a circle, at a given distance from the center, it will be at the same distance from the center after the rotation, so also on the same circle. You can actually see that in the animation above. Why am I writing about all this? It should become clear in a moment - when we start talking about Lorentz transformations. A coordinate system being transformed by a Lorentz transformation (click to see the animation) The Lorentz transformations aren't as intuitive to people as rotations. In a sense, we also deal with them since childhood (they are the transformations describing the relationships between moving observers, after all), but it's definitely much less visible. Full Lorentz transformations work in a 4-dimensional space-time, but just like in the previous article, we will limit ourselves to two dimensions for simplicity - the dimensions being time and one spatial dimension. Such a 2-dimensional space-time is very similar to a plane, and you can also describe the points in it with two coordinates - but we will be using (t,x) instead of (x,y). Just like with rotations, we can write formulas that describe Lorentz transformations.
In the context of the theory of relativity they are usually written like below: t' = t \cosh \eta - x \sinh \eta , x' = x \cosh \eta - t \sinh \eta . What exactly \eta is, is not important for now (it has something to do with the velocity of the observer); right now we will focus on the similarities with rotations. And there are a lot of similarities, indeed! Like in rotations, sines and cosines appear - except hyperbolic, not "regular" (you can read more about hyperbolic functions here). Also like with rotations, the point (0,0) is transformed into (0,0) - the transformation doesn't touch it. And again like in rotations, there is a value associated with every point that the transformation doesn't change. Let us remind ourselves: rotations didn't change the distance of points from the origin, nor, what follows, the square of the distance, equal to x^2 + y^2 . According to the hint for the calculations, it is related to the Pythagorean trigonometric identity, which is the fact that for any angle \varphi , \sin^2 \varphi + \cos^2 \varphi = 1 . Well, there is actually a hyperbolic identity as well: for any \eta , \cosh^2 \eta - \sinh^2 \eta = 1 holds. And also because of that identity, the Lorentz transformations don't change the value t^2 - x^2 (or, if we want to measure time and distance in different units: (ct)^2 - x^2 - the equations written above simply assume c=1 ). This value is called the space-time interval. Just like in rotations, not only the interval between a point and the origin is conserved, but also between any two points: (c\Delta t)^2 - (\Delta x)^2 = (c\Delta t')^2 - (\Delta x')^2 As it turns out, the space-time interval has many properties not unlike those of distance. The main difference is that the square of the distance between two different points is always positive - the interval, on the other hand, can be either positive, negative or even zero. Since it is conserved, any points separated by a positive interval will also be separated with a positive interval after the transformation - and the same holds for negative and zero intervals.
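The invariance of the interval can be verified numerically, just like the distance was for rotations. A minimal sketch (the event (2, 1.5) and the rapidity 0.8 are arbitrary illustrative values, and c = 1 as in the formulas above):

```python
import math

# Lorentz boost with rapidity eta (c = 1):
# t' = t cosh(eta) - x sinh(eta), x' = x cosh(eta) - t sinh(eta)
def boost(t, x, eta):
    return (t * math.cosh(eta) - x * math.sinh(eta),
            x * math.cosh(eta) - t * math.sinh(eta))

t, x = 2.0, 1.5
tp, xp = boost(t, x, 0.8)
# The space-time interval t^2 - x^2 is unchanged by the boost:
print(abs((t**2 - x**2) - (tp**2 - xp**2)) < 1e-9)  # -> True
```

The only change compared to the rotation check is the hyperbolic functions and the minus sign in the conserved quantity.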
What does it mean? In order to solve this riddle, let us consider the meaning of a zero interval. Assume that we have two points, A: (t_1, x_1) and B: (t_2, x_2) , which have a zero interval between them: c^2 (t_2 - t_1)^2 - (x_2 - x_1)^2 = 0 . Transforming this equation, we can get: \frac{|x_2 - x_1|}{|t_2 - t_1|} = c . Let us remember that points in space-time are events. The event A happened at the place x_1 at the time t_1 , and the event B happened at the place x_2 at the time t_2 . |x_2 - x_1| is thus the distance between events A and B, and |t_2 - t_1| is the time that passed since A until B, or the other way round. Dividing the distance by the time we get the speed we would need to move at in order to cover this distance in this time - so in order to get from A to B, you have to move at c - the speed of light. And now the most important thing - the Lorentz transformations don't change the interval! This means that if we look from the point of view of a different observer - which corresponds to transforming A and B into A' and B' with a Lorentz transformation - the interval between A' and B' will also be zero! This means that if something moves at the speed of light in one frame of reference, it will be moving at the speed of light in all frames of reference. This is the famous invariance of the speed of light. Let us take another look at the animated picture above. You can see two dark-yellow-brownish, oblique lines. These are the lines that correspond to moving at the speed of light. You can see that they are staying in place regardless of how the picture is transformed. A similar thing applies to the cyan hyperbolas. Just as rotations don't affect circles, because they are the sets of points at a constant distance from the origin, the Lorentz transformations don't affect hyperbolas - the sets of points at a constant interval from the origin.
I'll refrain from going into the details of the analysis, but just like the lines correspond to a zero interval from the origin, the top and bottom hyperbolas correspond to positive intervals, and the left and right hyperbolas - to negative intervals. All in all, we can look at our space-time as divided into four quadrants by the light lines - all points in the upper and lower quadrants are separated by a positive interval from the origin, and all points in the left and right quadrants - by a negative interval. Since the Lorentz transformations don't change the interval, no point from either quadrant can ever be transformed into a point in another one! This limitation becomes slightly weaker, though, if we add some spatial dimensions. Adding a second spatial dimension, which we can imagine as rotating the picture around the time axis (the vertical one), will change the light lines into a light cone and will divide the space-time into three regions instead of four quadrants. A light cone These three regions are: the upper part of the cone - the future; the lower part of the cone - the past; and everything to the sides - the so-called "elsewhere" - these are the events that can't be reached from the origin by moving at subluminal speeds. One could ask - why is the future not just the upper half of the diagram, and the past - the lower half? After all, the points in the upper half all have time coordinates greater than zero, and the points in the lower half - below zero... It's a very good question. Let us take another look at the animation, and specifically at what happens to the points in the left and right quadrants. The animation shows the points being transformed one way and the other way, in alternating cycles. As the transformation distorts the picture, pretty much every point in the left and right quadrants sometimes gets to the upper half, and sometimes to the lower.
This means that a point with a negative time coordinate can get transformed into a point with a positive one, and vice versa - so it can get "moved" from the future into the past, or the other way round! It cannot be said, then, that any of these points is in the future or in the past - it depends on the observer! This only applies to the points from "elsewhere", though - the points from the upper quadrant (the upper part of the cone) are in the future of all observers, and the points in the lower quadrant (the lower part of the cone) - in the past of all observers (careful, though: of all observers that are at (0,0) - the observers at other points have their own cones, slightly shifted relative to this one). Why Lorentz transformations? I've said a lot about the Lorentz transformations so far, but nothing about how we know that they in fact govern our reality. Well - as you can expect, we have reasons to think that. It's not as if someone just came up with the idea and everyone took it at their word. The problem is, it is pretty complicated to show where the transformations come from. To be more precise - it's quite easy to derive the Lorentz transformations once you assume that the speed of light is independent of the observer. This is how it was done in high school when I was a student (although I have no idea if it is still done this way, nor if it's even still a part of the school curriculum...). There are some further complications if we don't want to just believe that (even though the fact that the speed of light is constant is rather well documented) - some more effort is required then, but it is still possible; you can read more about it here. That's all for this part. I'm still not sure what the next one will be about. The long-term plan was to move slowly towards explaining black holes and effects related to them, so the next post will probably be about curvature.
Another possibility is a slightly deeper dive into Special Relativity - like analysing the twin paradox, for example. If you have some other topic you would like to see covered - leave a comment, and I'll be sure to consider it. Any comments about the clarity of the text will also be appreciated! I'll gladly get to know what is not clear and improve it - I'd like the articles to be as easy to understand as possible.
In parts (a) through (c) below, refer to the previous problem. You will find the length of the line segments in problem 5-82 by substituting given values for the variables. For example, if x = 3 units in part (a) of problem 5-82, the line segment would be 3 + 1 + 3 = 7 units. Find the length of the line segment in part (a) of problem 5-82 using x = 4\frac{1}{2} : 4\frac{1}{2}+1+4\frac{1}{2}=4+4+1+\left(\frac{1}{2}+\frac{1}{2}\right) = 10\text{ units} Find the length of the line segment in part (b) of problem 5-82 using m = 4 : 4 + 4 + 4 + 5 = 17\text{ units} Find the length of the line segment in part (c) of problem 5-82 using y = 5.5 : 5.5 + 2 + 5.5 + 2 = 15\text{ units}
InsertTask - Maple Help Home : Support : Online Help : Programming : Document Tools : InsertTask insert a task into a worksheet InsertTask(task, minimal=m) task : string ; name of the task to insert m : truefalse ; whether to insert minimal content (true) or default content (false) (default: true) The InsertTask function inserts a task into the worksheet. The m option indicates whether to insert the task's minimal content (true) or default content (false). The default is true. Note: If a command to insert a task into a worksheet is re-executed, the re-inserted task will overwrite the section of the worksheet where the task was originally inserted. In order to make it clear that this is what will happen, the background shading of a task inserted in this way is set to gray. \mathrm{with}⁡\left(\mathrm{DocumentTools}\right): \mathrm{InsertTask}⁡\left("my task"\right) The DocumentTools[InsertTask] command was introduced in Maple 16.
Miquel Royo, Massimiliano Stengel We develop a fundamental theory of the long-range electrostatic interactions in two-dimensional crystals by performing a rigorous study of the nonanalyticities of the Coulomb kernel. We find that the dielectric functions are best represented by 2×2 matrices, with nonuniform macroscopic potentials that are two-component hyperbolic functions of the out-of-plane coordinate z . We demonstrate our arguments by deriving the long-range interatomic forces in the adiabatic regime, where we identify a formerly overlooked dipolar coupling involving the out-of-plane components of the dynamical charges. The resulting formula is exact up to an arbitrary multipolar order, which we illustrate in practice via the explicit inclusion of dynamical quadrupoles. By performing numerical tests on monolayer BN, {\mathrm{SnS}}_{2} , and {\mathrm{BaTiO}}_{3} membranes, we show that our method allows for a drastic improvement in the description of the long-range electrostatic interactions, with comparable benefits to the quality of the interpolated phonon band structure.
Physics - Controlling for the “look-elsewhere effect” August 29, 2011 • Physics 4, s127 When searching for new physics at the Large Hadron Collider, researchers have to keep track of how many places they look. One of the primary purposes of the Large Hadron Collider (LHC) is to search for the unexpected. Usually this means looking for a “bump” in the data—an excess of events over a known background level. Such an excess could be due to new physics. However, experimentalists must account for the fact that if they look at enough areas of parameter space, they are certain to see statistical fluctuations. To control for this “look-elsewhere effect,” the data must be normalized for the number of places searched in which a fluctuation could be observed. The Compact Muon Solenoid (CMS) collaboration at the LHC has now performed many searches for new physics, and seen very few potential bumps. But, in a paper appearing in Physical Review Letters, in which they analyze data taken in 2010, CMS reports a small excess of events that could correspond to a pair of new giga-electron-volt particles, each decaying into three hadronic jets. According to theory, supersymmetric partners of the gluon, called gluinos, can produce such events if a symmetry known as parity is violated, but at a rate lower than what CMS has seen. After taking into account the “look-elsewhere effect,” the statistical significance of the bump is only marginal. As of now, the collaboration has taken many times more data than they did last year. Analyzing these new data should show whether CMS is indeed just seeing a fluctuation. – Robert Garisto Search for Three-Jet Resonances in pp Collisions at \sqrt{\mathbit{s}}=7\text{\hspace{0.17em}}\text{\hspace{0.17em}}\mathrm{TeV}
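The correction itself is easy to sketch: if a fluctuation this large has local probability p in one search region, the chance of seeing one somewhere among N independent regions is much higher. The minimal Python sketch below uses illustrative numbers, not the actual CMS values.

```python
def global_p(p_local, n_trials):
    """Probability that at least one of n_trials independent search
    regions shows a fluctuation at least as large as the local one."""
    return 1.0 - (1.0 - p_local) ** n_trials

p_local = 1.35e-3  # roughly a 3-sigma local excess (one-sided)
print(global_p(p_local, 20))  # ~0.027: only about 2 sigma once 20 regions are searched
```

Real analyses estimate the effective number of trials more carefully (regions overlap and are correlated), but the qualitative effect is the same: a locally impressive bump becomes far less significant globally.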
Core - Maple Help Home : Support : Online Help : Mathematics : Group Theory : Core construct the core of a subgroup of a group Core( H, G ) If H is a subgroup of a group G, then the core of H in G is the largest normal subgroup of G that is contained in H. The Core( H, G ) command computes the core of the subgroup H of the permutation group G. \mathrm{with}⁡\left(\mathrm{GroupTheory}\right): G≔\mathrm{Group}⁡\left(\mathrm{Perm}⁡\left([[1,2,3,4,5]]\right),\mathrm{Perm}⁡\left([[1,2,3]]\right)\right) \textcolor[rgb]{0,0,1}{G}\textcolor[rgb]{0,0,1}{≔}〈\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\right)\textcolor[rgb]{0,0,1}{,}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\right)〉 H≔\mathrm{Subgroup}⁡\left({[[1,5,4,3,2]]},G\right) \textcolor[rgb]{0,0,1}{H}\textcolor[rgb]{0,0,1}{≔}〈\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{5}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\right)〉 C≔\mathrm{Core}⁡\left(H,G\right) \textcolor[rgb]{0,0,1}{C}\textcolor[rgb]{0,0,1}{≔}〈〉 \mathrm{Generators}⁡\left(C\right) [] \mathrm{GroupOrder}⁡\left(C\right) \textcolor[rgb]{0,0,1}{1} G≔\mathrm{SymmetricGroup}⁡\left(4\right) \textcolor[rgb]{0,0,1}{G}\textcolor[rgb]{0,0,1}{≔}{\textcolor[rgb]{0,0,1}{\mathbf{S}}}_{\textcolor[rgb]{0,0,1}{4}} H≔\mathrm{Subgroup}⁡\left({\mathrm{Perm}⁡\left([[1,2],[3,4]]\right),\mathrm{Perm}⁡\left([[1,4],[2,3]]\right)},G\right)
\textcolor[rgb]{0,0,1}{H}\textcolor[rgb]{0,0,1}{≔}〈\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\right)\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\textcolor[rgb]{0,0,1}{,}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\right)〉 C≔\mathrm{Core}⁡\left(H,G\right) \textcolor[rgb]{0,0,1}{C}\textcolor[rgb]{0,0,1}{≔}〈\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\right)\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\textcolor[rgb]{0,0,1}{,}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\right)〉 \mathrm{Generators}⁡\left(C\right) [\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{2}\right)\left(\textcolor[rgb]{0,0,1}{3}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\textcolor[rgb]{0,0,1}{,}\left(\textcolor[rgb]{0,0,1}{1}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{4}\right)\left(\textcolor[rgb]{0,0,1}{2}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{3}\right)] \mathrm{GroupOrder}⁡\left(C\right) \textcolor[rgb]{0,0,1}{4} The GroupTheory[Core] command was introduced in Maple 17.
60B20 Random matrices (probabilistic aspects; for algebraic aspects see ) A characterization of the domain of attraction of a normal distribution in a Hilbert space M. Kłosowska (1973) A general bounded continuous moment problem and its sets of uniqueness Josef Štěpán (1994) A limit theorem for functionals of a Poisson process A metric property of some random functions A *-mixing convergence theorem for convex set valued processes. de Korvin, A., Kleyle, R. (1987) A New Isoperimetric Inequality for Product Measure and the Tails of Sums of Independent Variables. M. Talagrand (1991) A noncompact Choquet theorem A non-Skorokhod topology on the Skorokhod space. Jakubowski, Adam (1997) A. Al-Hussaini (1974) A note on almost sure convergence and convergence in measure P. Kříž, Josef Štěpán (2014) The present article studies the conditions under which the almost everywhere convergence and the convergence in measure coincide. An application in the statistical estimation theory is outlined as well. A note on probabilistic representations of operator semigroups. D. Pfeifer (1984) A Simple Proof of the Majorizing Measure Theorem. A theorem on weak type estimates for Riesz transforms and martingale transforms Nicolas Th. Varopoulos (1981) The Riesz transforms of a positive singular measure \nu \in M\left({\mathbf{R}}^{n}\right) satisfy the weak type inequality m\left[\sum _{j=1}^{n}|{R}_{j}\nu |>\lambda \right]\ge \frac{C\parallel \nu \parallel }{\lambda },\phantom{\rule{3.33333pt}{0ex}}\lambda >0 m denotes Lebesgue measure and C is a positive constant only depending on m Admissible translates of stable measures Joel Zinn (1976) J. Kuelbs (1974) An isomorphic Dvoretzky's theorem for convex bodies Y. Gordon, O. Guédon, M. 
Meyer (1998) We prove that there exist constants C>0 and 0 < λ < 1 so that for all convex bodies K in {ℝ}^{n} with non-empty interior and all integers k so that 1 ≤ k ≤ λn/ln(n+1), there exists a k-dimensional affine subspace Y of {ℝ}^{n} such that d\left(Y\cap K,{B}_{2}^{k}\right)\le C\left(1+\sqrt{k/\mathrm{ln}\left(n/\left(k\mathrm{ln}\left(n+1\right)\right)\right)}\right) . This formulation of Dvoretzky’s theorem for large dimensional sections is a generalization with a new proof of the result due to Milman and Schechtman for centrally symmetric convex bodies. A sharper estimate holds for the n-dimensional simplex. Analytically normed spaces. V. Dobric (1987)
Physics - The Quantum Hall Effect Gets More Practical State Key Laboratory of Low-Dimensional Quantum Physics, Department of Physics, Tsinghua University, Beijing 100084, China Thin films of magnetic topological insulators can exhibit a nearly ideal quantum Hall effect without requiring an applied magnetic field. Ke He/Tsinghua University; Image on Homepage: A. J. Bestwick et al. [1]; Figure 1: (Left) The quantum Hall effect (QHE) occurs in a two-dimensional electron system under a large applied magnetic field. The transverse resistance ( {\rho }_{x\phantom{\rule{0}{0ex}}y} ) takes on quantized values while the longitudinal resistance ( {\rho }_{x\phantom{\rule{0}{0ex}}x} ) vanishes. (Right) The quantum anomalous Hall effect has quantum Hall features without an applied field. The transverse resistance is precisely quantized at h/{e}^{2} , where h is Planck’s constant and e the elementary charge, and the longitudinal resistance vanishes at zero field. The quantum Hall effect is the striking quantization of resistance observed under a large applied magnetic field in two-dimensional electron systems like graphene. In a quantum Hall system, the transverse resistance (measured across the width of the sample) takes on quantized values h/\left(\nu {e}^{2}\right) , where h is Planck’s constant, e the elementary charge, and \nu an integer or a fraction. The extreme precision with which the Hall resistance can be measured has important applications in metrology, providing today’s standard definition of the ohm. Another key feature of the effect is that the longitudinal resistance (measured along the length of the sample) vanishes: electrons can be transported without dissipation along the edges of the sample.
Quantum Hall systems could thus act as perfect wires with little energy consumption. From a technological perspective, a dissipationless current is an exciting prospect. But the quantum Hall effect is generally only possible at impractically low temperatures and under strong external magnetic fields. Two independent studies, one by a team led by David Goldhaber-Gordon at Stanford University, California [1], the other by Jagadeesh S. Moodera at the Massachusetts Institute of Technology, Cambridge, and co-workers [2], have now demonstrated that thin films of topological insulators can exhibit a nearly ideal “quantum anomalous Hall effect,” that is, a quantum Hall effect at zero external field [3–5]. The materials feature, in the absence of an applied field, a perfect quantization of the transverse resistance and a longitudinal resistance as low as 1 ohm. Quantum Hall transport can be seen in analogy to atomic physics. In an atom, electrons move around the nucleus without losing their energy, a property guaranteed by the laws of quantum mechanics. A quantum Hall sample is like an atom, but much bigger, allowing electrons to travel a macroscopic distance along the sample edges without energy loss. Such dissipationless quantum Hall edge states result from the unique topological properties of the band structure induced by the magnetic field, which protects electrons from localization or backscattering. The external field required for observing the quantum Hall effect is typically as large as several tesla. Early theoretical studies [3,4] suggested that a quantum anomalous Hall effect could be possible in materials naturally possessing a topologically nontrivial band structure similar to that induced by the magnetic field (see Fig. 1). The idea, first proposed in 1988 [3], was, however, never implemented until the discovery of topological insulators [6].
In a thin film of a ferromagnetic topological insulator, the combination of spontaneous magnetization and electrons with topological properties could take over the role of an external magnetic field in producing quantum Hall states [4]. In 2013, the quantum anomalous Hall effect was first experimentally observed in thin films of a Cr-doped topological insulator [5]. But the longitudinal resistance at zero field was of the order of several kilo-ohms, suggesting that dissipative channels other than the quantum Hall edge states provided a significant contribution to conduction. The resistance only dropped to zero in an applied magnetic field of several tesla, no weaker than that needed for the usual quantum Hall effect [5]. Two mechanisms are likely to contribute to the residual longitudinal resistance of a quantum anomalous Hall sample. First, if the ferromagnetism is not uniform, small regions with different or weaker magnetization can scatter edge electrons into dissipative channels such as surface and bulk states. Second, the dissipative conduction channels can independently carry part of the electrical current. Loosely speaking, the first mechanism adds a resistance in series to the edge-state resistance, while the second creates a parallel resistive channel. This suggests two possible approaches to reduce the zero-field dissipation. The first approach is to use materials with better ferromagnetic order. This is the approach followed by Moodera’s team. The authors used a V-doped magnetic topological insulator material, which has an exceptionally large coercivity (the field that reverses the magnetization of a ferromagnetic material) at 25 millikelvin [2]. With such a large coercivity, the film at zero field is in a highly ordered ferromagnetic state. This eliminates the regions of weak and heterogeneous ferromagnetism that deteriorate the quantum Hall edge states.
The robust ferromagnetism of the V-doped films allowed the authors to achieve a very low longitudinal resistance, as well as a quantization of the transverse resistance to within 6 parts in 10,000. The second approach is based on minimizing the impact of parallel dissipative electron channels by localizing them. Dissipative channels are thus made to behave like a very large resistance in parallel to the very small resistance of the edge states. The overall sample resistance would thus be dominated by the low resistance. This is the strategy followed by Goldhaber-Gordon’s group, using Cr-doped films. These samples show a very different magnetic field dependence of resistance compared to previous work [5], with the longitudinal resistance dropping to a very low value at zero field. The authors further reduce the resistance by exploiting the cooling effect induced by demagnetization, and they obtain a precise quantization in transverse resistance within 1 part in 10,000. The motion of electrons in a two-dimensional system can be frozen or promoted by quantum interference between different scattering paths, leading to localization or antilocalization, respectively. In a magnetic topological insulator film such as the one used by the authors, the degree of disorder, doping level, and the magnetic properties will control the crossover between different localization or antilocalization regimes [7], each exhibiting a different magnetic field dependence of the longitudinal resistance. Bestwick et al.’s films are evidently tuned to a regime where dissipative electrons are frozen at zero field, but further studies are needed to clarify the exact localization mechanism at play. The extremely low longitudinal resistance demonstrated in the two experiments indicates that the dissipationless edge states dominate the transport properties at zero magnetic field.
Importantly, the effect provides a chance to study the interplay between the quantum Hall effect and other effects that cannot withstand a strong magnetic field, like superconductivity. A superconducting quantum Hall system is predicted to be a chiral topological superconductor [8], which can be used to realize topological quantum computing—a quantum computing approach that is naturally robust against quantum decoherence. But the large field needed for the quantum Hall effect would destroy most superconducting states. The zero-field quantum anomalous Hall effect now opens the door for such studies. The results are a big step forward towards practical applications of dissipationless quantum Hall edge states. Physicists now need to figure out how to raise the temperature needed to enter the quantum anomalous Hall effect regime, which no study has, so far, been able to increase above 100 millikelvin. This research is published in Physical Review Letters and Nature Materials. A. J. Bestwick, E. J. Fox, Xufeng Kou, Lei Pan, Kang L. Wang, and D. Goldhaber-Gordon, “Precise Quantization of the Anomalous Hall Effect near Zero Magnetic Field,” Phys. Rev. Lett. 114, 187201 (2015) C. -Z. Chang et al., “High-Precision Realization of Robust Quantum Anomalous Hall State in a Hard Ferromagnetic Topological Insulator,” Nature Mater. 14, 473 (2015) F. D. M. Haldane, “Model for a Quantum Hall Effect without Landau Levels: Condensed-Matter Realization of the “Parity Anomaly”,” Phys Rev Lett. 61, 2015 (1988); M. Onoda and N. Nagaosa, “Quantized Anomalous Hall Effect in Two-Dimensional Ferromagnets: Quantum Hall Effect in Metals,” Phys. Rev. Lett. 90, 206601 (2003) X. -L. Qi, Y. -S. Wu, and S. C. Zhang, “Topological Quantization of the Spin Hall Effect in Two-Dimensional Paramagnetic Semiconductors,” Phys. Rev. B 74, 085308 (2006) C. -Z. Chang et al., “Experimental Observation of the Quantum Anomalous Hall Effect in a Magnetic Topological Insulator,” Science 340, 167 (2013) M. Z. 
Hasan and C. L. Kane, “Topological Insulators,” Rev. Mod. Phys. 82, 3045 (2010); X. -L. Qi and S. -C. Zhang, “Topological Insulators and Superconductors,” 83, 1057 (2011) H. -Z. Lu, S. Shi, and S. -Q. Shen, “Competition between Weak Localization and Antilocalization in Topological Surface States,” Phys Rev Lett. 107, 076801 (2011); M. Liu et al., “Crossover between Weak Antilocalization and Weak Localization in a Magnetically Doped Topological Insulator,” 108, 036805 (2011) X. -L. Qi, Taylor L. Hughes, and S. -C. Zhang, “Chiral Topological Superconductor from the Quantum Hall State,” Phys Rev B 82, 184516 (2010) Ke He is an associate professor of Department of Physics, Tsinghua University, China. He received his Ph.D. in physics from the Institute of Physics, Chinese Academy of Sciences and has worked at the Department of Physics and Institute for Solid State Physics of the University of Tokyo in Japan. His current research focuses on topological-insulator-related materials and quantum phenomena. Precise Quantization of the Anomalous Hall Effect near Zero Magnetic Field A. J. Bestwick, E. J. Fox, Xufeng Kou, Lei Pan, Kang L. Wang, and D. Goldhaber-Gordon MagnetismMaterials ScienceSuperconductivity
facet - Maple Help Home : Support : Online Help : Mathematics : Geometry : 3-D Euclidean : Polyhedra : facet define a faceting of a given polyhedron facet(gon, case, n) gon : the name of the facetted polyhedron to be created case : the case polyhedron The case of a star-polyhedron or compound is the smallest convex solid that can contain it. The star-polyhedron or compound may be constructed by faceting its case, which involves removal of solid pieces. Note that it can also be constructed by stellating its core. See the geom3d:-stellate command for more information. Maple currently supports the faceting process for five polyhedra: the octahedron, cuboctahedron, icosidodecahedron, small rhombicuboctahedron and small rhombiicosidodecahedron. For the octahedron, there are two values of n: 0 and 1. For the other four polyhedra, there are three values of n: 0, 1 and 2. To access the information relating to a facetted polyhedron gon, use the following function calls: center(gon) returns the center of the case polyhedron case. \mathrm{with}⁡\left(\mathrm{geom3d}\right): Define the 1-st faceting of a cuboctahedron with center (0,0,0) and radius 2 \mathrm{facet}⁡\left(\mathrm{i1},\mathrm{cuboctahedron}⁡\left(c,\mathrm{point}⁡\left(o,0,0,0\right),2\right),1\right) \textcolor[rgb]{0,0,1}{\mathrm{i1}} \mathrm{coordinates}⁡\left(\mathrm{center}⁡\left(\mathrm{i1}\right)\right) [\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0}\textcolor[rgb]{0,0,1}{,}\textcolor[rgb]{0,0,1}{0}] \mathrm{form}⁡\left(\mathrm{i1}\right) \textcolor[rgb]{0,0,1}{\mathrm{facetted_cuboctahedron3d}} \mathrm{schlafli}⁡\left(\mathrm{i1}\right) \textcolor[rgb]{0,0,1}{\mathrm{facetted}}\textcolor[rgb]{0,0,1}{⁡}\left([[\textcolor[rgb]{0,0,1}{3}]\textcolor[rgb]{0,0,1}{,}[\textcolor[rgb]{0,0,1}{4}]]\right) \mathrm{draw}⁡\left(\mathrm{i1},\mathrm{style}=\mathrm{patch},\mathrm{orientation}=[-145,132],\mathrm{lightmodel}=\mathrm{light4},\mathrm{shading}=\mathrm{XY},\mathrm{title}=\mathrm{`facetted cuboctahedron - 1`}\right)
What is APR and APY - LP-Swap Academy You don't know what APR or APY is, or the difference between the two? Then you might not be earning as much as you could 😉 . Read this article and become a more profitable investor. Memento (read below first) Amount I_n after n days with an initial amount I_0 and no compounding: I_n = I_0 \left(1 + n \frac{\text{APR}}{365}\right) Amount I_n after n days with an initial amount I_0 and daily compounding: I_n = I_0 \left(1 + \frac{\text{APR}}{365}\right)^n Relationship between APY and APR: \text{APY} = \left(1 + \frac{\text{APR}}{365}\right)^{365} - 1 Let's say you bought some CAKE. You can buy CAKE on LP-Swap. If your CAKE sits in your wallet, then you are not earning as much as you could! PancakeSwap has numerous Syrup Pools where you can deposit CAKE and earn tokens. For example, if you deposit your CAKE in the CAKE pool, you will earn CAKE: Here, I deposited ("staked") 10 CAKE (≈ $380.10 at screenshot time), waited several days and have now earned 1 CAKE (≈ $38.01). The APR tells you how much CAKE you will earn if you stake 1 CAKE for 1 year in the pool. Here, {\text{APR} = 90.97\%} . So if I stake 10 CAKE for 1 year, I will earn a reward of: 10 \times 90.97\% = 9.097 \text{ CAKE} The APR of the CAKE pool might change over time because the amount of CAKE staked might vary over time while the amount of CAKE rewards for the whole pool is constant. So the more CAKE staked, the less CAKE rewards per CAKE staked. You don't have to wait 1 year to start collecting rewards. In fact, you can collect rewards every second (but it would be expensive in transaction fees)! In my case, if I collect rewards after 1 day, I will earn a reward of: 10 \times \frac{90.97\%}{365} = 0.0249 \text{ CAKE} More generally, if you stake I_0\text{ CAKE} , then after n days, you will earn a reward of: I_0 \times \frac{\text{APR}}{365} \times n \;\; \text{CAKE} assuming APR stays constant.
When you stake CAKE in the CAKE Syrup Pool, there is a magic way to drastically increase the amount of CAKE you earn: it is called compounding and consists of collecting rewards and staking them again. The important thing to understand is that your CAKE rewards are proportional to your CAKE staked. So by compounding (i.e. collecting rewards and staking them again), you increase your amount of CAKE staked and therefore increase the amount of CAKE rewards you will earn. Let's say I stake 10\text{ CAKE} at an APR of 90.97%. During the first day, I will earn: 10 \times \frac{90.97\%}{365} = 0.0249 \text{ CAKE} I then compound (i.e. collect my rewards and stake them again). I now stake {10 + 0.0249 = 10.0249\text{ CAKE}} . During the second day, I will now earn: 10.0249 \times \frac{90.97\%}{365} = 0.0250 \text{ CAKE} whereas I would only have earned 0.0249\text{ CAKE} if I had not compounded. More generally, if I stake I_0\text{ CAKE} , wait 1 day and compound, then my stake will be: I_0 + I_0 \times \frac{\text{APR}}{365} = I_0 \left(1 + \frac{\text{APR}}{365}\right) If I wait another day and compound, my stake will be: I_0 \left(1+\frac{\text{APR}}{365}\right)^2 After n days of compounding, my stake will be: I_0 \left(1 + \frac{\text{APR}}{365}\right)^n If I subtract the initial stake I_0 , I get the amount of rewards earned after n days: I_0 \left(\left(1 + \frac{\text{APR}}{365}\right)^n - 1\right) The APY gives you the amount of rewards you will earn per year if you compound. Assuming you compound daily, this is the formula to calculate the APY from the APR: \text{APY} = \left(1+\frac{\text{APR}}{365}\right)^{365} - 1 From this formula, it is not obvious that APY might be much bigger than APR, so let's calculate the APY for the CAKE pool.
With \text{APR} = 90.97\% : \text{APY} = \left(1+\frac{90.97\%}{365}\right)^{365} - 1 = 148.07\% If you initially staked $1000 worth of CAKE and compounded rewards every day for 1 year, you would have earned $1480 worth of CAKE rewards, instead of $909 if you had not compounded. This is a big increase!! But you might be thinking: "Well, compounding every day is exhausting...". You are lucky because PancakeSwap has a pool, called "Auto CAKE", that does the compounding for you! Deposit your CAKE in this pool and it will do the rest: it will stake your CAKE in the "Manual CAKE" pool and do the compounding multiple times per day for you. Lastly, you might wonder why the displayed APY is 143.84% whereas we calculated 148.07%. It is because PancakeSwap takes 2% of your rewards at each compound.
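The formulas in this article can be checked with a few lines of Python (a sketch using the 90.97% APR example value from above; the 2% pool fee is ignored here):

```python
def apy_from_apr(apr, periods=365):
    """APY for a given APR, compounded `periods` times per year."""
    return (1 + apr / periods) ** periods - 1

def reward(stake, apr, days, compound=True):
    """CAKE earned after `days` days, with or without daily compounding."""
    if compound:
        return stake * ((1 + apr / 365) ** days - 1)
    return stake * apr / 365 * days

apr = 0.9097
print(f"APY: {apy_from_apr(apr):.2%}")           # about 148%
print(reward(10, apr, 365), reward(10, apr, 365, compound=False))
```

For a 10 CAKE stake held a full year, the two printed values reproduce the roughly 14.8 CAKE (compounded) versus 9.097 CAKE (not compounded) difference discussed above.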
Solve Differential Algebraic Equations (DAEs) - MATLAB & Simulink - MathWorks Switzerland What is a Differential Algebraic Equation? Consistent Initial Conditions Imposing Nonnegativity Differential algebraic equations are a type of differential equation where one or more derivatives of dependent variables are not present in the equations. Variables that appear in the equations without their derivative are called algebraic, and the presence of algebraic variables means that you cannot write down the equations in the explicit form y\text{'}=f\left(t,y\right) . Instead, you can solve DAEs with these forms: The ode15s and ode23t solvers can solve index-1 linearly implicit problems with a singular mass matrix M\left(t,y\right)y\text{'}=f\left(t,y\right) , including semi-explicit DAEs of the form \begin{array}{c}y\text{'}=f\left(t,y,z\right)\\ 0=g\left(t,y,z\right)\text{\hspace{0.17em}}.\end{array} In this form, the presence of algebraic variables leads to a singular mass matrix, since there are one or more zeros on the main diagonal. My\text{'}=\left(\begin{array}{cccc}y{\text{'}}_{1}& 0& \cdots & 0\\ 0& y{\text{'}}_{2}& 0& ⋮\\ ⋮& 0& \ddots & 0\\ 0& \cdots & 0& 0\end{array}\right)\text{\hspace{0.17em}}. By default, solvers automatically test the singularity of the mass matrix to detect DAE systems. If you know about singularity ahead of time then you can set the MassSingular option of odeset to 'yes'. With DAEs, you can also provide the solver with a guess of the initial conditions for y{\text{'}}_{0} using the InitialSlope property of odeset. This is in addition to specifying the usual initial conditions for {y}_{0} in the call to the solver. The ode15i solver can solve more general DAEs in the fully implicit form f\left(t,y,y\text{'}\right)=0\text{\hspace{0.17em}}. In the fully implicit form, the presence of algebraic variables leads to a singular Jacobian matrix. 
This is because at least one of the columns in the matrix is guaranteed to contain all zeros, since the derivative of that variable does not appear in the equations:

J = \frac{\partial f}{\partial y'} = \left(\begin{array}{ccc}\frac{\partial f_1}{\partial y'_1} & \cdots & \frac{\partial f_1}{\partial y'_n}\\ \vdots & \ddots & \vdots\\ \frac{\partial f_m}{\partial y'_1} & \cdots & \frac{\partial f_m}{\partial y'_n}\end{array}\right)

The ode15i solver requires that you specify initial conditions for both y'_0 and y_0. Also, unlike the other ODE solvers, ode15i requires the function encoding the equations to accept an extra input: odefun(t,y,yp).

DAEs arise in a wide variety of systems because physical conservation laws often have forms like x + y + z = 0. If x, x', y, and y' are defined explicitly in the equations, then this conservation equation is sufficient to solve for z without having an expression for z'.

Consistent Initial Conditions

When you are solving a DAE, you can specify initial conditions for both y'_0 and y_0. The ode15i solver requires both initial conditions to be specified as input arguments. For the ode15s and ode23t solvers, the initial condition for y'_0 is optional (but can be specified using the InitialSlope option of odeset). In both cases, it is possible that the initial conditions you specify do not agree with the equations you are trying to solve. Initial conditions that conflict with one another are called inconsistent. The treatment of the initial conditions varies by solver:

ode15s and ode23t — If you do not specify an initial condition for y'_0, then the solver automatically computes consistent initial conditions based on the initial condition you provide for y_0. If you specify an inconsistent initial condition for y'_0, then the solver treats the values as guesses, attempts to compute consistent values close to the guesses, and continues on to solve the problem.
ode15i — The initial conditions you supply to the solver must be consistent, and ode15i does not check the supplied values for consistency. The helper function decic computes consistent initial conditions for this purpose.

Differential Index

DAEs are characterized by their differential index, which is a measure of their singularity. By differentiating equations you can eliminate algebraic variables, and if you do this enough times then the equations take the form of a system of explicit ODEs. The differential index of a system of DAEs is the number of derivatives you must take to express the system as an equivalent system of explicit ODEs. Thus, ODEs have a differential index of 0.

An example of an index-1 DAE is

y(t) = k(t).

For this equation, you can take a single derivative to obtain the explicit ODE form

y' = k'(t).

In contrast, the system

\begin{array}{l}y'_1 = y_2\\ 0 = k(t) - y_1\end{array}

has differential index 2. These equations require two derivatives to be rewritten in the explicit ODE form

\begin{array}{l}y'_1 = k'(t)\\ y'_2 = k''(t).\end{array}

The ode15s and ode23t solvers only solve DAEs of index 1. If the index of your equations is 2 or higher, then you need to rewrite the equations as an equivalent system of index-1 DAEs. It is always possible to take derivatives and rewrite a DAE system as an equivalent system of index-1 DAEs. Be aware that if you replace algebraic equations with their derivatives, then you might have removed some constraints. If the equations no longer include the original constraints, then the numerical solution can drift. If you have Symbolic Math Toolbox™, then see Solve Differential Algebraic Equations (DAEs) (Symbolic Math Toolbox) for more information.

Most of the options in odeset work as expected with the DAE solvers ode15s, ode23t, and ode15i.
One notable exception is the NonNegative option. The NonNegative option is not supported for the implicit solvers (ode15s, ode23t, ode23tb) applied to problems with a mass matrix. Therefore, you cannot use this option to impose nonnegativity constraints on a DAE problem, which necessarily has a singular mass matrix. For more details, see [1].

[1] Shampine, L.F., S. Thompson, J.A. Kierzenka, and G.D. Byrne. "Non-negative solutions of ODEs." Applied Mathematics and Computation 170, no. 1 (November 2005): 556–569. https://doi.org/10.1016/j.amc.2004.12.011

See Also: ode15s | ode23t | ode15i | odeset | Equation Solving (Symbolic Math Toolbox)
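The index-reduction procedure described above can be sketched outside MATLAB as well. A minimal SymPy check, assuming k(t) = sin t for concreteness, differentiates the algebraic constraint of the index-2 example twice to reach explicit ODE form:

```python
import sympy as sp

t = sp.symbols('t')
k = sp.sin(t)                  # assumed forcing function for this sketch
y1 = sp.Function('y1')(t)
y2 = sp.Function('y2')(t)

# index-2 system:  y1' = y2,  0 = k(t) - y1
constraint = k - y1

# first differentiation: 0 = k'(t) - y1' = k'(t) - y2  =>  y2 = k'(t)
d1 = sp.diff(constraint, t).subs(sp.diff(y1, t), y2)
y2_expr = sp.solve(d1, y2)[0]

# second differentiation gives the explicit ODE  y2' = k''(t)
y2p_expr = sp.diff(y2_expr, t)

print(y2_expr)    # cos(t)
print(y2p_expr)   # -sin(t)
```

This matches the y'_1 = k'(t), y'_2 = k''(t) form above; note that the derived system no longer enforces the original constraint 0 = k(t) − y_1, which is exactly the drift caveat the text warns about.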
Get Expression - Maple Help
Home : Support : Online Help : Programming : Maplets : Examples : Get Expression

display a Maplet application that requests an expression

GetExpression(opts)

opts - equation(s) of the form option=value where option is one of caption, title, or width; specify options for the Maplet application

The GetExpression() calling sequence displays a Maplet application that prompts the user to enter an expression. This expression is returned to the Maple session. If the user does not enter an expression, an exception is raised.

The GetExpression sample Maplet worksheet describes how to write a Maplet application that behaves similarly to this routine by using the Maplets[Elements] package.

caption — Specifies the text that prompts the user for an expression. By default, the caption is Enter an expression:.
title — Specifies the Maplet application title. By default, the title is Get Expression.
width — Specifies the input field width. By default, the width is 30 characters.

> with(Maplets[Examples]):
> f := GetExpression():
    f := y^2 - 4*y
> g := GetExpression():
    g := diff(y(x), x) - y(x)^2

See Also: GetExpression Sample Maplet
LHSolve - Maple Help
Home : Support : Online Help : Mathematics : Differential Equations : Lie Symmetry Method : Commands for PDEs (and ODEs) : LieAlgebrasOfVectorFields : LHPDE : LHSolve

attempts to solve an LHPDE system of finite type

LHSolve( obj, output = out, consts = c)

out - (optional) a string: either "solution", "basis", or "lhpde"

The LHSolve method attempts to solve the linear homogeneous PDEs in an LHPDE object. If solving is successful, the method returns a list of equations as the general solution.

By specifying output = "basis", the returned output will be a list of lists of equations, representing each solution in a basis.

By specifying output = "lhpde", the returned output will be a new LHPDE object that is fully integrated. For a returned LHPDE object S that involves constants of integration, these constants are treated as additional dependent variables of S. The default names are _C1, _C2, .... By specifying consts = alpha (i.e. a single name), the constants of integration will be named α1, α2, α3, .... By specifying consts = [alpha, beta, phi, ...] (i.e. a list of names), the constants of integration will be named α, β, φ, ....

This is a front-end to the existing pdsolve command (for partial DE systems) and the dsolve command (for ordinary DE systems) of finite type. The method throws an exception if the LHPDE system is not of finite type.
> with(LieAlgebrasOfVectorFields):
> Typesetting:-Settings(userep = true):
> Typesetting:-Suppress({alpha, beta, eta, xi}(x, y)):
> S := LHPDE([diff(xi(x,y), y, y) = 0, diff(eta(x,y), x) = -diff(xi(x,y), y), diff(eta(x,y), y) = 0, diff(xi(x,y), x) = 0], indep = [x, y], dep = [xi, eta])
    S := [ξ_{y,y} = 0, η_x = -ξ_y, η_y = 0, ξ_x = 0], indep = [x, y], dep = [ξ, η]
> IsFiniteType(S)
    true
> LHSolve(S)
    [ξ = -_C1 y + _C3, η = _C1 x + _C2]
> LHSolve(S, consts = [alpha, beta, delta])
    [ξ = -α y + δ, η = α x + β]
> LHSolve(S, output = "basis")
    [[ξ = -y, η = x], [ξ = 0, η = 1], [ξ = 1, η = 0]]
> LHSolve(S, output = "lhpde", consts = alpha)
    [ξ = -y α1 + α3, η = x α1 + α2], indep = [x, y], dep = [ξ, η, α1, α2, α3]

The LHSolve command was introduced in Maple 2020.
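As a cross-check outside Maple, a short SymPy sketch (with c1, c2, c3 standing in for _C1, _C2, _C3) verifies that the general solution returned above does satisfy all four defining PDEs:

```python
import sympy as sp

x, y, c1, c2, c3 = sp.symbols('x y c1 c2 c3')

# general solution reported by LHSolve
xi = -c1*y + c3
eta = c1*x + c2

# residuals of the four PDEs: xi_yy = 0, eta_x = -xi_y, eta_y = 0, xi_x = 0
residuals = [
    sp.diff(xi, y, 2),
    sp.diff(eta, x) + sp.diff(xi, y),
    sp.diff(eta, y),
    sp.diff(xi, x),
]
print(residuals)   # [0, 0, 0, 0]
```

Every residual vanishes identically in c1, c2, c3, confirming that the three constants span the full solution space, consistent with the three-element basis in the output above.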
MCQs of Magnetism and Matter | GUJCET MCQ

Magnetism and Matter MCQs

A magnet of magnetic dipole moment 5.0 A m² is lying in a uniform magnetic field of 7 × 10⁻⁴ T such that its dipole moment vector makes an angle of 30° with the field. The work done in increasing this angle from 30° to 45° is about _____ J.
(b) 24.74 × 10⁻⁴
(c) 30.3 × 10⁻⁴

A bar magnet is oscillating in Earth's magnetic field with periodic time T. If a similar magnet with the same mass and volume has a magnetic dipole moment 4 times that of this magnet, then its periodic time will be _____.
T/2

A circular loop carrying current I is replaced by a bar magnet of equivalent magnetic dipole moment. The point on the loop is lying _____.
(a) on the equatorial plane of the magnet
(b) on the axis of the magnet
(d) except equatorial plane or axis of bar magnet

When a current carrying loop is replaced by an equivalent magnetic dipole,
(a) the distance l between the poles is fixed.
(b) the pole strength p of each pole is fixed.
(c) the dipole moment is reversed.
(d) the product pl is fixed.

Let r be the distance of a point on the axis of a bar magnet from its centre. The magnetic field at r is always proportional to _____.
(a) 1/r²
(b) 1/r³
(c) 1/r
(d) not necessarily 1/r³ at all points

Magnetic meridian is a plane _____.
(a) perpendicular to the magnetic axis of Earth.
(b) perpendicular to the geographic axis of Earth.
(c) passing through the magnetic axis of Earth.
(d) passing through the geographic axis of Earth.

At a geomagnetic pole, a magnetic needle allowed to rotate in the horizontal plane will
(a) stay in the north-south direction only
(b) stay in any position
(c) stay in the east-west direction only
(d) become rigid, showing no movement

The horizontal and vertical components of the magnetic field of Earth are the same at some place on the surface of Earth.
The magnetic dip angle at this place will be _____.

Magnetic field lines inside a bar magnet _____.
(a) are not present
(b) are parallel to the cross-sectional area of the magnet
(c) are in the direction from N-pole to S-pole
(d) are in the direction from S-pole to N-pole

In a non-uniform magnetic field, a diamagnetic substance experiences a resultant force
(a) from the region of strong magnetic field to the region of weak magnetic field.
(b) perpendicular to the magnetic field.
(c) from the region of weak magnetic field to the region of strong magnetic field.
(d) which is zero.
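For the first question in this set, the work needed to rotate a dipole in a uniform field from θ₁ to θ₂ is W = MB(cos θ₁ − cos θ₂). A quick numeric check in Python:

```python
import math

M = 5.0         # magnetic dipole moment, A m^2
B = 7e-4        # field strength, T
th1 = math.radians(30)
th2 = math.radians(45)

# work done against the field: W = M*B*(cos th1 - cos th2)
W = M * B * (math.cos(th1) - math.cos(th2))
print(f"{W:.2e} J")   # ~5.6e-04 J
```

Note that the value computed this way (about 5.6 × 10⁻⁴ J) matches neither of the two options that survived extraction, so it presumably corresponds to one of the unlisted choices.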
hyad.es | Chasing the Terminator on SQ23

Chasing the Terminator on SQ23

I am on SQ23, a direct flight from New York City to Singapore, and I am procrastinating on my writing work by watching the plane chase the day-night terminator on the in-flight progress screen. (I also watched the sun pseudo-rising for quite a while before being told to shut the window by the cabin crew.) SQ23 departs from NYC at 2140h (local time) and lands in Singapore at 0416h (local time)1 — at least, according to the projected time on the in-flight display. The date at the time of departure is 22nd November.

# for convenience we consider times relative to journey start in units of hours.
# our reference meridian for the problem will also be the starting location.
T = 18 # hours

Unfortunately, the real SQ23 doesn't actually follow the geodesic between JFK and SIN owing to political considerations (and, I'm guessing, because there are good tailwinds to be had by sticking to specific latitudes). However, supposing it did, and given the information above:

Does SQ23 ever catch up with the terminator? (the Sun still hasn't fully risen for me, so I don't actually know this yet; however, odds are looking good right now)2

If so, how many hours of astronomical daytime (i.e. altitude of Sun > 0) am I going to get?

BONUS CHALLENGE: I have to do this problem without internet access because I didn't pay for in-flight WiFi…

We need to find the latitudes and longitudes of both SQ23 and the terminator as a function of time. Unfortunately, since we're not at the equinox, the latitude of the terminator is also a function of longitude, which will have to be supplied by that of SQ23. Therefore, it makes more sense to parameterise the trajectory of SQ23 first.

1. SQ23

Now, I am lazy and don't remember the precise formula to find the appropriate parameterisation of a geodesic passing through two arbitrary latitude and longitude anchor points3.
However, we realise that Singapore and New York City have a time difference of 12 hours when both are on Daylight Savings, which is why it is a reasonable approximation that the geodesic passes over the North Pole in the first place. This is obviously not fully correct (and we're deviating from the geodesic anyway), but I will assume that this is the case to greatly simplify our subsequent calculations.

Now, I need to get hold of the latitudes of Singapore and NYC somehow. Unfortunately, Mathematica's location-based functionality all requires Internet connectivity. However, I have quite fortunately set up redshift in two different places, so these numbers are in my redshift.conf file. In particular,

;New Haven
;Singapore
;lat=1.3677
;lon=103.7500

This also backs up our intuition about the geodesic being mostly latitudinal. Therefore we are travelling in mostly the latitudinal direction for about 138 degrees (49 deg to the North Pole, then 89 deg down). Assuming we are travelling at a constant speed, we now have latitude and longitude as a function of time.

λ_start = 41 * 2 * np.pi / 360
λ_end = 1 * 2 * np.pi / 360
Θ = np.pi - λ_start - λ_end
φ0 = 0  # reference meridian taken at the starting longitude

def λ_plane(t):
    λ = λ_start + t/T*Θ
    return np.where(λ > np.pi/2, np.pi - λ, λ)

# the longitude of the plane is either the starting longitude or that + π,
# depending on whether it's gone past the North Pole or not.
def φ_plane(t):
    λ = λ_start + t/T*Θ  # unreflected progress, to test for the pole crossing
    return np.where(λ > np.pi/2, φ0 + np.pi, φ0)

2. The terminator

Starting to get lazy now. Let's say that the Sun is at a declination of δ above the equator (which we get from the time of the year). With some algebra we find that the longitude offset φ_t between the mean solar noon meridian and the terminator at latitude λ satisfies

cos φ_t = − tan λ tan δ.

The longitude of the solar noon meridian is itself a function of time, so we can rearrange this to solve for λ as a function of time (up to appropriate choices of branch for the arctan).
# Today, Nov 22, is 30 + 31 + 1 days after Sept 21,
# the autumnal equinox.
# The declination of the sun goes as
# sin δ = -sin(23.5 deg) * sin(62 / 365.24 * 2π)
# (negative because winter)
sin_δ = - np.sin(23.5 * 2 * np.pi / 360) * np.sin(62 / 365.24 * 2 * np.pi)
δ = np.arcsin(sin_δ)

# at t = 0, local time is 2140, meaning it is solar noon
# at a longitude 9.6 hr west (1 hr = 15 deg) of the starting point.
# The solar noon meridian moves west at a rate of 1 hr per hr.
def Φ_sun(t):
    return (φ0 - 9.6 - t)*15/360*2*np.pi

def λ_offset(Δφ):
    return np.arctan(-np.cos(Δφ)/np.tan(δ))

def λ_terminator(t):
    return λ_offset(Φ_sun(t) - φ_plane(t))

I plot the latitude of our idealised SQ23 and that of the terminator along its current meridian as a function of time.

plt.plot(t, λ_plane(t), label="Ideal SQ23")
plt.plot(t, λ_terminator(t), label="Terminator geodesic latitude")
plt.ylabel(r"Latitude/rad")
plt.xlabel(r"t/h")
plt.legend()

SQ23 begins its journey north of the terminator at night; we can see that our idealised SQ23 remains in darkness throughout its entire journey.

Editing this post now on the ground, I can say that SQ23 actually deviated quite significantly from this idealised trajectory (in particular, it passed south of Svalbard!), going around the North Pole in the westerly direction, and therefore briefly experienced daylight. Perhaps this is why the plane landed almost an hour ahead of schedule: it must have benefited quite significantly from tailwinds in the northern polar jet stream.

1. keep in mind that Singapore is on permanent Daylight Savings, so this should really be 0316h "local" time. ↩
2. Update, 3 hours later: the Sun has indeed risen on this non-ideal path! ↩
3. although the laptop I'm using has Mathematica installed, which could probably remember it for me if I tried hard enough. However, I get the sense that relying on Mathematica would be cheating. ↩
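The same conclusion can be reached more directly by computing the Sun's altitude along the route. This self-contained re-derivation uses the same idealised over-the-pole track and declination as above; the 12-hour jump in local solar time at the pole crossing encodes the longitude flip:

```python
import numpy as np

T = 18.0                             # flight time, hours
lam_start = np.radians(41.0)         # departure latitude
lam_end = np.radians(1.0)            # arrival latitude
theta = np.pi - lam_start - lam_end  # total polar angle traversed

# solar declination 62 days after the autumnal equinox, as above
delta = np.arcsin(-np.sin(np.radians(23.5)) * np.sin(62 / 365.24 * 2 * np.pi))

t = np.linspace(0.0, T, 2000)
lam_raw = lam_start + t / T * theta
lat = np.where(lam_raw > np.pi / 2, np.pi - lam_raw, lam_raw)

# local solar time: 21.6 h at departure, advancing 1 h per hour of flight,
# minus a 12 h jump when the track crosses the pole and the longitude flips
solar_time = 21.6 + t - np.where(lam_raw > np.pi / 2, 12.0, 0.0)
hour_angle = (solar_time - 12.0) * np.pi / 12.0

# standard altitude formula: sin(alt) = sin λ sin δ + cos λ cos δ cos H
sin_alt = (np.sin(lat) * np.sin(delta)
           + np.cos(lat) * np.cos(delta) * np.cos(hour_angle))
print(sin_alt.max() < 0)   # the idealised SQ23 never sees daylight
```

The maximum of sin(alt) stays below zero for the whole 18 hours, agreeing with the latitude-comparison plot.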
SDE with Mean-Reverting Drift (SDEMRD) model - MATLAB - MathWorks Switzerland

An SDEMRD object represents models of the form

dX_t = S(t)\left[L(t) - X_t\right]dt + D\left(t, X_t^{\alpha(t)}\right)V(t)\,dW_t.

This is a special case of the general linear drift-rate form F(t, X_t) = A(t) + B(t)X_t, with A(t) = S(t)L(t) and B(t) = -S(t), and with diffusion G(t, X_t) = D\left(t, X_t^{\alpha(t)}\right)V(t). For example, with S(t) = 0.2, L(t) = 0.1, \alpha(t) = 1/2, and V(t) = 0.05:

dX_t = 0.2\left(0.1 - X_t\right)dt + 0.05\,X_t^{1/2}\,dW_t.
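As an illustration of the example model (a plain Euler–Maruyama sketch in Python, not the MATLAB API; the full-truncation `np.maximum(X, 0)` inside the square root is our assumption for keeping the scheme well defined near zero):

```python
import numpy as np

rng = np.random.default_rng(0)

S, L, sigma = 0.2, 0.1, 0.05   # speed, level, and volatility from the example
T, n, paths = 50.0, 5000, 500
dt = T / n

X = np.full(paths, 0.1)        # start every path at the mean-reversion level
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt), paths)
    # dX = S*(L - X)*dt + sigma*sqrt(X)*dW, with X truncated at 0 in the sqrt
    X = X + S * (L - X) * dt + sigma * np.sqrt(np.maximum(X, 0.0)) * dW

print(X.mean())   # stays near L = 0.1
```

The cross-path mean at the final time stays close to the mean-reversion level L = 0.1, as expected for this square-root diffusion.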
Physics - Non-Abelian anyons: New particles for less than a billion?

Non-Abelian anyons: New particles for less than a billion?

The potential discovery of anyons in a fractional quantum Hall device tests the limits of what is known about particles confined to two dimensions.

Credit: (c), (d) R. L. Willett et al. [1]

Figure 1: The distinct quantum states resulting from particle worldlines in (a) and (b) do not interfere with each other if they follow non-Abelian statistics. (c) Electron micrograph of the interferometric device. (d) Aharonov-Bohm oscillations with sweeping of the gate voltage at filling factor ν = 5/2, for e/4 and e/2 quasiparticles.

Half a century ago, Richard Feynman famously proclaimed, “There’s plenty of room at the bottom,” while attempting to foresee the developments in what nowadays has become nanoscience and nanotechnology. He also said, “This field…will not tell us much of fundamental physics (in the sense of, ‘What are the strange particles?’) but…it might tell us much of great interest about the strange phenomena that occur in complex situations.” Well, as it turns out, such strange phenomena may, in fact, involve very unusual new particles. It is well known that the wave function of two identical bosons is symmetric upon their exchange, while it is antisymmetric for two fermions. However, particles confined to two dimensions aren’t limited to obeying the exchange rules of bosons and fermions. In two dimensions, swapping the positions of two particles in a clockwise manner is distinct from doing it counterclockwise. In three or more dimensions, one could continuously deform such trajectories into each other, rendering the two operations identical.
Experimentally, when you confine electrons to an atomically thin layer by essentially electrostatic means, the low-energy collective excitations of the system behave as two-dimensional particles (although the higher-energy excitations are still conventional electrons). In particular, they can act as anyons—particles whose braiding statistics are neither bosonic nor fermionic. Naturally, one has to go to very low temperatures in search of such quasiparticles, and this is exactly the regime in which the fractional quantum Hall effect—the primary “playground” for finding anyons—is observed. Writing in Physical Review B, Robert Willett and colleagues from Bell Laboratories in the US present experiments that demonstrate the existence of a very special kind of anyon: “non-Abelian” anyons [1]. “Conventional” Abelian anyons have been conjectured to exist in a number of fractional quantum Hall states, yet despite several attempts, their braiding statistics have never been established conclusively. The reason they are called Abelian is that the exchange of two such anyons leads to a nontrivial phase acquired by their wave function, and a series of exchanges merely results in multiple phase factors whose order is irrelevant. Unfortunately, the relative simplicity of their statistics is also the reason why they are so difficult to detect experimentally. The problem is that phase factors may arise from other sources, such as the Aharonov-Bohm effect, the dynamical phase, etc. Hence distilling the statistical phase contribution from the overall phase is quite difficult [2]. Non-Abelian anyons—the ones that Willett and co-workers are after—are even more interesting objects. A ground state of a system with many such particles cannot be unique: exchanging the particles mixes the degenerate ground states. Formally, this is equivalent to multiplying a vector constructed out of such degenerate ground states by a matrix.
Since matrix multiplication is generally noncommutative, the order in which such anyons are exchanged now matters. Surprisingly, the experimental signature of non-Abelian anyons is expected to be much cleaner than that for Abelian anyons. Upon exchange, non-Abelian particles may end up in a different quantum state that will not interfere with their initial state [Fig. 1(a) and (b)]. By contrast with Abelian statistics, whose observation hinges on extracting the precise phase information from an interference pattern between different particle trajectories, non-Abelian statistics should manifest itself through the suppression of the amplitude of the fringes. Moreover, for certain types of anyons—like those expected to exist in the ν = 5/2 fractional quantum Hall state [3–5] (ν being the filling factor) studied by Willett and co-workers—the interference fringes may vanish altogether [6–8]. A specific prediction for the ν = 5/2 state, dubbed an “even-odd effect,” states that two trajectories of non-Abelian quasiparticles with an electric charge of e/4 should not interfere with one another if there is an odd number of similar quasiparticles encircled by these trajectories; for an even number of encircled quasiparticles the interference pattern should be seen again [7,8]. There are, however, other theories of the ν = 5/2 state which predict that the e/4 particles are Abelian and hence no even-odd effect should occur. Therefore, a careful interferometric study is needed to confirm the non-Abelian nature of these excitations, and precisely such an attempt has been made by Willett et al. [1]. Their main finding is an aperiodic alternating pattern of Aharonov-Bohm oscillations with frequencies corresponding to the quasiparticle charges of e/4 and e/2. This observation is consistent with the non-Abelian nature of the excitations; the idea is that by using a gate voltage to change the area inside the interferometer [Fig. 1(c)], one can alternate between even and odd numbers of enclosed particles.
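The noncommutativity at the heart of non-Abelian statistics can be illustrated numerically. The two matrices below are the standard braid generators for four Ising anyons acting on their two-dimensional fusion space, written up to overall phase factors (the phase convention is an assumption of this sketch, not taken from the article):

```python
import numpy as np

# Ising-anyon braid generators on the 2D fusion space, up to overall phases
s1 = np.diag([1.0, 1.0j])
s2 = np.array([[1.0, -1.0j],
               [-1.0j, 1.0]]) / np.sqrt(2)

ab = s1 @ s2
ba = s2 @ s1
print(np.allclose(ab, ba))   # False: the exchanges do not commute
```

Because s1·s2 ≠ s2·s1, the final state of the system depends on the order in which the exchanges are performed, which is exactly what distinguishes non-Abelian from Abelian anyons.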
According to the prediction, this should repeatedly turn on and off the interference of e/4 excitations moving along the edge. However, the physics of the quantum Hall edge (which, along with two quantum point contacts, forms an interferometric loop) is more complicated: aside from e/4 anyons, it also supports e/2 excitations, whose statistics are Abelian and which should hence result in a separate interference pattern, irrespective of the number of quasiparticles inside the loop. There are no reliable estimates for the relative strength of e/4 vs e/2 tunneling, although some attempts have been made [9,10]. On general grounds, one would expect the e/4 tunneling current to dominate at low temperatures, yet there is a separate important consideration: e/2 quasiparticles are expected to maintain their coherence over longer distances, which would increase their relative contribution to the interference fringes [9,11]. In fact, the latter effect would explain why, at higher temperatures, the interference signal has been found to be completely dominated by the e/2 excitations [12]. At lower temperatures, the interference pattern should then be expected to consist of a sum of ever-present e/2 oscillations and on-and-off e/4 oscillations, rather than display an actual alternation of the two frequencies. The apparent alternation seen by Willett et al. [see Fig. 1(d)] may well be the result of the particular nonuniform way the Fourier transform of the signal has been normalized in this study. Perhaps the most convincing argument in support of the non-Abelian nature of the observed pattern is the apparent interchange between e/4 and e/2 periods of oscillations when the magnetic field is changed by the amount needed to generate one extra excitation inside the interferometric loop. Willett et al. have performed a number of important checks, such as the scaling of the oscillation period with the applied magnetic field (for several quantum Hall states), which strongly indicate the Aharonov-Bohm nature of the observed oscillations.
The temporal stability of the alternation pattern has also been studied, indicating extremely long lifetimes of the spatial distribution of localized quasiparticles. Together with the earlier study [12], these results make a good case for the existence of non-Abelian anyons in the ν = 5/2 state, yet it may be too early to declare victory. First and foremost, there is the issue of reproducibility: all published data has so far come from a single sample, albeit subjected to several different preparation cycles. Secondly, the temporal stability of the phase (as opposed to the periodicity) of the Aharonov-Bohm oscillations needs to be studied, in particular since the non-Abelian explanation comes with its own source of such phase instability. The temperature scaling of the relative strength of the e/4 vs e/2 signals might be able to clarify the role of decoherence; this calls for more data at temperatures below the lowest for which data have so far been reported. Hopefully, going to lower temperatures will also result in sharper Fourier transform peaks, making the identification of the oscillation frequencies unambiguous. The work by Willett and collaborators is an indication that the amazing theoretical possibility of non-Abelian anyonic statistics may, in fact, be realized in the ν = 5/2 fractional quantum Hall state. One hopes that follow-up experiments will be able to unambiguously answer Feynman’s question “What are the strange particles?” in the context of these solid-state systems. The consequences for our understanding of nature, and perhaps even new technological advances [13,14] that so far have mostly occupied the minds of theorists, cannot be overestimated.

[1] R. L. Willett, L. N. Pfeiffer, and K. W. West, Phys. Rev. B 82, 205301 (2010)
[2] F. E. Camino, W. Zhou, and V. J. Goldman, Phys. Rev. B 72, 075342 (2005)
[3] G. Moore and N. Read, Nucl. Phys. B 360, 362 (1991)
[4] M. Greiter, X. G. Wen, and F. Wilczek, Nucl. Phys. B 374, 567 (1992)
[5] C. Nayak and F. Wilczek, Nucl. Phys. B 479, 529 (1996)
[6] E. Fradkin, C. Nayak, A. Tsvelik, and F. Wilczek, Nucl. Phys. B 516, 704 (1998)
[7] A. Stern and B. I. Halperin, Phys. Rev. Lett. 96, 016802 (2006)
[8] P. Bonderson, A. Kitaev, and K. Shtengel, Phys. Rev. Lett. 96, 016803 (2006)
[9] W. Bishara, P. Bonderson, C. Nayak, K. Shtengel, and J. K. Slingerland, Phys. Rev. B 80, 155303 (2009)
[10] H. Chen, Z.-X. Hu, K. Yang, E. H. Rezayi, and X. Wan, Phys. Rev. B 80, 235305 (2009)
[11] X. Wan, Z.-X. Hu, E. H. Rezayi, and K. Yang, Phys. Rev. B 77, 165316 (2008)
[12] R. L. Willett, L. N. Pfeiffer, and K. W. West, Proc. Natl. Acad. Sci. U.S.A. 106, 8853 (2009)
[13] S. Das Sarma, M. Freedman, and C. Nayak, Phys. Rev. Lett. 94, 166802 (2005)

Kirill Shtengel is an Associate Professor in the Department of Physics and Astronomy at the University of California, Riverside. He received his Ph.D. from UCLA in 1999 for his work in statistical mechanics. Following postdoctoral appointments at UC Irvine, Microsoft Research, and Caltech, in 2005 he joined UC Riverside as a faculty member. His current research interests include topological phases of matter and their potential applications for quantum information processing.

R. L. Willett, L. N. Pfeiffer, and K. W. West
Semiconductor Physics | Strongly Correlated Materials
G2 manifold

In differential geometry, a G₂ manifold is a seven-dimensional Riemannian manifold with holonomy group contained in G₂. The group G₂ is one of the five exceptional simple Lie groups. It can be described as the automorphism group of the octonions, or equivalently, as a proper subgroup of the special orthogonal group SO(7) that preserves a spinor in the eight-dimensional spinor representation, or lastly as the subgroup of the general linear group GL(7) which preserves the non-degenerate 3-form φ, the associative form. The Hodge dual ψ = ∗φ is then a parallel 4-form, the coassociative form. These forms are calibrations in the sense of Reese Harvey and H. Blaine Lawson,[1] and thus define special classes of 3- and 4-dimensional submanifolds.

G₂-manifolds are 7-dimensional, Ricci-flat, orientable spin manifolds. In addition, any compact manifold with holonomy equal to G₂ has finite fundamental group, non-zero first Pontryagin class, and non-zero third and fourth Betti numbers.

That G₂ might possibly be the holonomy group of certain Riemannian 7-manifolds was first suggested by the 1955 classification theorem of Marcel Berger, and this remained consistent with the simplified proof later given by Jim Simons in 1962.
Although not a single example of such a manifold had yet been discovered, Edmond Bonan nonetheless made a useful contribution by showing that, if such a manifold did in fact exist, it would carry both a parallel 3-form and a parallel 4-form, and that it would necessarily be Ricci-flat.[2] The first local examples of 7-manifolds with holonomy G₂ were finally constructed around 1984 by Robert Bryant, and his full proof of their existence appeared in the Annals of Mathematics in 1987.[3] Next, complete (but still noncompact) 7-manifolds with holonomy G₂ were constructed by Bryant and Simon Salamon in 1989.[4] The first compact 7-manifolds with holonomy G₂ were constructed by Dominic Joyce in 1994. Compact G₂ manifolds are therefore sometimes known as "Joyce manifolds", especially in the physics literature.[5] In 2013, M. Firat Arikan, Hyunjoo Cho, and Sema Salur showed that any manifold with a spin structure, and hence a G₂-structure, admits a compatible almost contact metric structure, and they constructed an explicit compatible almost contact structure for manifolds with G₂-structure.[6] In the same paper, it was shown that certain classes of G₂-manifolds admit a contact structure. In 2015, a new construction of compact G₂ manifolds, due to Alessio Corti, Mark Haskins, Johannes Nordström, and Tommaso Pacini, combined a gluing idea suggested by Simon Donaldson with new algebro-geometric and analytic techniques for constructing Calabi–Yau manifolds with cylindrical ends, resulting in tens of thousands of diffeomorphism types of new examples.[7]

Connections to physics

These manifolds are important in string theory. They break the original supersymmetry to 1/8 of the original amount.
For example, M-theory compactified on a G₂ manifold leads to a realistic four-dimensional (11 − 7 = 4) theory with N = 1 supersymmetry. The resulting low-energy effective supergravity contains a single supergravity supermultiplet, a number of chiral supermultiplets equal to the third Betti number of the G₂ manifold, and a number of U(1) vector supermultiplets equal to the second Betti number. Recently it was shown that almost contact structures (constructed by Sema Salur et al.)[6] play an important role in G₂ geometry.[8]

^ Harvey, Reese; Lawson, H. Blaine (1982), "Calibrated geometries", Acta Mathematica, 148: 47–157, doi:10.1007/BF02392726, MR 0666108.
^ Bonan, Edmond (1966), "Sur les variétés riemanniennes à groupe d'holonomie G2 ou Spin(7)", Comptes Rendus de l'Académie des Sciences, 262: 127–129.
^ Bryant, Robert L. (1987), "Metrics with exceptional holonomy", Annals of Mathematics, 126 (2): 525–576, doi:10.2307/1971360, JSTOR 1971360.
^ Bryant, Robert L.; Salamon, Simon M. (1989), "On the construction of some complete metrics with exceptional holonomy", Duke Mathematical Journal, 58: 829–850, doi:10.1215/s0012-7094-89-05839-0, MR 1016448.
^ Joyce, Dominic D. (2000), Compact Manifolds with Special Holonomy, Oxford Mathematical Monographs, Oxford University Press, ISBN 0-19-850601-5.
^ a b Arikan, M. Firat; Cho, Hyunjoo; Salur, Sema (2013), "Existence of compatible contact structures on G₂-manifolds", Asian Journal of Mathematics, 17 (2): 321–334, arXiv:1112.2951, doi:10.4310/AJM.2013.v17.n2.a3.
^ Corti, Alessio; Haskins, Mark; Nordström, Johannes; Pacini, Tommaso (2015), "G2-manifolds and associative submanifolds via semi-Fano 3-folds", Duke Mathematical Journal, 164: 1971–2092.
^ de la Ossa, Xenia; Larfors, Magdalena; Magill, Matthew (2021), "Almost contact structures on manifolds with a G2 structure", arXiv:2101.12605.
Becker, Katrin; Becker, Melanie; Schwarz, John H. (2007), "Manifolds with G2 and Spin(7) holonomy", String Theory and M-Theory: A Modern Introduction, Cambridge University Press, pp. 433–455, ISBN 978-0-521-86069-7.
Fernandez, M.; Gray, A. (1982), "Riemannian manifolds with structure group G2", Ann. Mat. Pura Appl., 132: 19–45, doi:10.1007/BF01760975.
Karigiannis, Spiro (2011), "What Is . . . a G2-Manifold?" (PDF), AMS Notices, 58 (4): 580–581.
Shells of twisted flag varieties and the Rost invariant
S. Garibaldi, V. Petrov, N. Semenov

We introduce two new general methods to compute the Chow motives of twisted flag varieties and settle a 20-year-old conjecture of Markus Rost about the Rost invariant for groups of type E₇.

S. Garibaldi, V. Petrov, N. Semenov, "Shells of twisted flag varieties and the Rost invariant," Duke Mathematical Journal 165 (2), 285–339, 1 February 2016. https://doi.org/10.1215/00127094-3165434
Received: 22 October 2013; Revised: 10 January 2015; Published: 1 February 2016
Keywords: Chow motives, equivariant Chow groups, linear algebraic groups, Rost invariant, twisted flag varieties
Ground expression

In mathematical logic, a ground term of a formal system is a term that does not contain any variables. Similarly, a ground formula is a formula that does not contain any variables. In first-order logic with identity, the sentence Q(a) ∨ P(b) is a ground formula, with a and b being constant symbols. A ground expression is a ground term or ground formula.

Consider the following expressions in first-order logic over a signature containing the constant symbols 0 and 1, the unary function symbol s (successor), and the binary function symbol + (addition):
- s(0), s(s(0)), s(s(s(0))), … and 0+1, 0+1+1, … are ground terms;
- x+s(1) and s(x) are terms, but not ground terms, since they contain the variable x;
- s(0) = 1 and 0+0 = 0 are ground formulas.

What follows is a formal definition for first-order languages. Let a first-order language be given, with C the set of constant symbols, V the set of variables, F the set of function symbols, and P the set of predicate symbols.

Ground terms: the elements of C are ground terms, and if f ∈ F is an n-ary function symbol and α₁, α₂, …, αₙ are ground terms, then f(α₁, α₂, …, αₙ) is a ground term. Every ground term arises in this way. Roughly speaking, the Herbrand universe is the set of all ground terms.

Ground atom: a ground predicate, ground atom or ground literal is an atomic formula all of whose argument terms are ground terms; that is, if p ∈ P is an n-ary predicate symbol and α₁, α₂, …, αₙ are ground terms, then p(α₁, α₂, …, αₙ) is a ground atom. Roughly speaking, the Herbrand base is the set of all ground atoms, while a Herbrand interpretation assigns a truth value to each ground atom in the base.
Ground formula: every ground atom is a ground formula; if p is a ground formula, so is ¬p; and if p and q are ground formulas, then so are p∨q, p∧q, and p→q. Quantified formulas ∀x p and ∃x p are not ground, since they contain the variable x.

See also: Open formula; Sentence (mathematical logic)

Hodges, Wilfrid (1997), A Shorter Model Theory, Cambridge University Press, ISBN 978-0-521-58713-6
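The recursive definition of ground terms can be checked mechanically. Below is a minimal Python sketch; the term representation is our own for illustration (variables are strings prefixed with "?", constants are plain strings, and a compound term is a tuple of a function symbol followed by its arguments):

```python
def is_ground(term):
    # A term is ground iff it contains no variables (recursive definition).
    if isinstance(term, str):
        return not term.startswith("?")       # "?x" is a variable, "0" a constant
    _func, *args = term                       # compound term: (symbol, arg1, ...)
    return all(is_ground(arg) for arg in args)

# s(s(0)) is a ground term; s(x) and x + s(1) are not.
print(is_ground(("s", ("s", "0"))))           # True
print(is_ground(("s", "?x")))                 # False
print(is_ground(("+", "?x", ("s", "1"))))     # False
```

Extending the same recursion over atomic formulas and the logical connectives yields a check for ground formulas as defined above.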
Exterior angle theorem

Exterior angle of a triangle is greater than either of the remote interior angles. The exterior angle theorem is Proposition 1.16 in Euclid's Elements, which states that the measure of an exterior angle of a triangle is greater than either of the measures of the remote interior angles. This is a fundamental result in absolute geometry because its proof does not depend upon the parallel postulate.

In several high school treatments of geometry, the term "exterior angle theorem" has been applied to a different result,[1] namely the portion of Proposition 1.32 which states that the measure of an exterior angle of a triangle is equal to the sum of the measures of the remote interior angles. This result, which depends upon Euclid's parallel postulate, will be referred to as the "high school exterior angle theorem" (HSEAT) to distinguish it from Euclid's exterior angle theorem. Some authors refer to the "high school exterior angle theorem" as the strong form of the exterior angle theorem and "Euclid's exterior angle theorem" as the weak form.[2]

Exterior angles

A triangle has three corners, called vertices. The sides of a triangle (line segments) that come together at a vertex form two angles (four angles if you consider the sides of the triangle to be lines instead of line segments).[3] Only one of these angles contains the third side of the triangle in its interior, and this angle is called an interior angle of the triangle.[4] In the picture below, the angles ∠ABC, ∠BCA and ∠CAB are the three interior angles of the triangle. An exterior angle is formed by extending one of the sides of the triangle; the angle between the extended side and the other side is the exterior angle. In the picture, angle ∠ACD is an exterior angle.
Euclid's exterior angle theorem

The proof of Proposition 1.16 given by Euclid is often cited as one place where Euclid gives a flawed proof.[5][6][7]

Euclid proves the exterior angle theorem by:
- constructing the midpoint E of segment AC,
- drawing the ray BE,
- constructing the point F on ray BE so that E is (also) the midpoint of segment BF,
- drawing the segment FC.

By congruent triangles we can conclude that ∠BAC = ∠ECF, and ∠ECF is smaller than ∠ECD; since ∠ECD = ∠ACD, it follows that ∠BAC is smaller than ∠ACD, and the same can be done for the angle ∠CBA by bisecting BC.

The flaw lies in the assumption that the point F lies "inside" the angle ∠ACD. No reason is given for this assertion, but the accompanying diagram makes it look like a true statement. When a complete set of axioms for Euclidean geometry is used (see Foundations of geometry), this assertion of Euclid can be proved.[8]

Invalid in spherical geometry

Small triangles may behave in a nearly Euclidean manner, but the exterior angles at the base of the large triangle are 90°, contradicting Euclid's exterior angle theorem. The exterior angle theorem is not valid in spherical geometry, nor in the related elliptical geometry. Consider a spherical triangle one of whose vertices is the North Pole and whose other two vertices lie on the equator. The sides of the triangle emanating from the North Pole (great circles of the sphere) both meet the equator at right angles, so this triangle has an exterior angle that is equal to a remote interior angle. The other interior angle (at the North Pole) can be made larger than 90°, further emphasizing the failure of this statement. However, since Euclid's exterior angle theorem is a theorem in absolute geometry, it is automatically valid in hyperbolic geometry.
High school exterior angle theorem

The high school exterior angle theorem (HSEAT) says that the size of an exterior angle at a vertex of a triangle equals the sum of the sizes of the interior angles at the other two vertices of the triangle (remote interior angles). So, in the picture, the size of angle ACD equals the size of angle ABC plus the size of angle CAB.

The HSEAT is logically equivalent to the Euclidean statement that the sum of the angles of a triangle is 180°. If it is known that the sum of the measures of the angles in a triangle is 180°, then the HSEAT is proved as follows:

b + d = 180°
b + d = b + a + c
∴ d = a + c.

On the other hand, if the HSEAT is taken as a true statement, then:

d = a + c
b + d = 180°
∴ b + a + c = 180°,

proving that the sum of the measures of the angles of a triangle is 180°.

Illustration of proof of the HSEAT. The Euclidean proof of the HSEAT (and simultaneously the result on the sum of the angles of a triangle) starts by constructing the line parallel to side AB passing through point C and then using the properties of corresponding angles and alternate interior angles of parallel lines to get the conclusion as in the illustration.[9] The HSEAT can be extremely useful when trying to calculate the measures of unknown angles in a triangle.

Notes:
^ Henderson & Taimiņa 2005, p. 110
^ Wylie, Jr. 1964, p. 101 & p. 106
^ One line segment is considered the initial side and the other the terminal side. The angle is formed by going counterclockwise from the initial side to the terminal side. The choice of which line segment is the initial side is arbitrary, so there are two possibilities for the angle determined by the line segments.
^ This way of defining interior angles does not presuppose that the sum of the angles of a triangle is 180 degrees.
^ Heath 1956, Vol. 1, p. 316

References:
Henderson, David W.; Taimiņa, Daina (2005), Experiencing Geometry: Euclidean and Non-Euclidean with History (3rd ed.), Pearson/Prentice-Hall, ISBN 0-13-143748-8

HSEAT references:
Geometry Common Core, Pearson Education: Upper Saddle River, ©2010, pages 171–173, United States.
Wheater, Carolyn C. (2007), Homework Helpers: Geometry, Franklin Lakes, NJ: Career Press, pp. 88–90, ISBN 978-1-56414-936-7.
A big symmetric planar set with small category projections
Krzysztof Ciesielski, Tomasz Natkaniec (2003)
We show that under appropriate set-theoretic assumptions (which follow from Martin's axiom and the continuum hypothesis) there exists a nowhere meager set A ⊂ ℝ such that (i) the set {c ∈ ℝ: π[(f+c) ∩ (A×A)] is not meager} is meager for each continuous nowhere constant function f: ℝ → ℝ, (ii) the set {c ∈ ℝ: (f+c) ∩ (A×A) = ∅} is nowhere meager for each continuous function f: ℝ → ℝ. The existence of such a set also follows from the principle CPA, which...

A cardinal preserving immune partition of the ordinals
M. Stanley (1995)

A characterization of well-orders

A method and tools for digital document and image reconstruction
Maja Jovanović, Aleksandar Perović, Nenad Andonovski, Aleksandar Jovanović (2007)

A partial order where all monotone maps are definable
Martin Goldstern, Saharon Shelah (1997)
It is consistent that there is a partial order (P,≤) of size ℵ₁ such that every monotone function f: P → P is first order definable in (P,≤).

A polarized partition relation and failure of GCH at singular strong limit
The main result is that for λ a strong limit singular cardinal failing the continuum hypothesis (i.e. 2^λ > λ⁺), a polarized partition theorem holds.

A topology generated by eventually different functions
G. Labędzki (1996)

A type of βN with ℵ₀ relative types
R. Solomon (1973)

A universal null graph whose domain has positive measure
G. V. Cox (1982)

About An Equivalent Of The Continuum Hypothesis
Žikica Perović (1982)

Adding a random or a Cohen real: topological consequences and the effect on Martin's axiom
J.
Roitman (1979)

Hartley Slater (2005)
Maddy's 1990 arguments against the theory of aggregates are weakened by the reversal she made in 1997. The present communication examines this theory in light of that reversal, as well as of recent research on "New axioms for mathematics". If set theory is the part–whole theory of singletons, identifying singletons with their single members reduces set theory to the theory of aggregates. However if...

Almost disjoint families and property (a)
Paul Szeptycki, Jerry Vaughan (1998)
We consider the question: when does a Ψ-space satisfy property (a)? We show that if |A| < p then the Ψ-space Ψ(A) satisfies property (a), but in some Cohen models the negation of CH holds and every uncountable Ψ-space fails to satisfy property (a). We also show that in a model of Fleissner and Miller there exists a Ψ-space of cardinality p which has property (a). We extend a theorem of Matveev relating the existence of certain closed discrete subsets with the failure of property (a).

Applications of some strong set-theoretic axioms to locally compact T₅ and hereditarily scwH spaces
Peter J. Nyikos (2003)
Under some very strong set-theoretic hypotheses, hereditarily normal spaces (also referred to as T₅ spaces) that are locally compact and hereditarily collectionwise Hausdorff can have a highly simplified structure. This paper gives a structure theorem (Theorem 1) that applies to all such ω₁-compact spaces and another (Theorem 4) to all such spaces of Lindelöf number ≤ ℵ₁. It also introduces an axiom (Axiom F) on crowding of functions, with consequences (Theorem 3) for the crowding of countably compact...

Axioms which imply GCH
Jan Mycielski (2003)
We propose some new set-theoretic axioms which imply the generalized continuum hypothesis, and we discuss some of their consequences.
Baire category in spaces of probability measures

Between Martin's Axiom and Souslin's Hypothesis
Kenneth Kunen, Franklin Tall (1979)
Abner J. Salgado (2013)
For a two phase incompressible flow we consider a diffuse interface model aimed at addressing the movement of three-phase (fluid-fluid-solid) contact lines. The model consists of the Cahn–Hilliard–Navier–Stokes system with a variant of the Navier slip boundary conditions. We show that this model possesses a natural energy law. For this system, a new numerical technique based on operator splitting and fractional time-stepping is proposed. The method is shown to be unconditionally stable. We present...

Kohr, Mirela (2000)

limsup_{R→0⁺} (1/R) ∫_{Q_R(x₀,t₀)} |curl u × u/|u||² dx dt ≤ ε_*, for some ε_* > 0

A finite element convergence analysis for 3D Stokes equations in case of variational crimes
Petr Knobloch (2000)
We investigate a finite element discretization of the Stokes equations with nonstandard boundary conditions, defined in a bounded three-dimensional domain with a curved, piecewise smooth boundary. For tetrahedral triangulations of this domain we prove, under general assumptions on the discrete problem and without any additional regularity assumptions on the weak solution, that the discrete solutions converge to the weak solution. Examples of appropriate finite element spaces are given.

A Fortin operator for two-dimensional Taylor–Hood elements
Richard S. Falk (2008)
A standard method for proving the inf-sup condition implying stability of finite element approximations for the stationary Stokes equations is to construct a Fortin operator. In this paper, we show how this can be done for two-dimensional triangular and rectangular Taylor–Hood methods, which use continuous piecewise polynomial approximations for both velocity and pressure.

A free boundary problem for compressible viscous fluids.
Alberto Valli, Paolo Secchi (1983)

A global existence result for the compressible Navier-Stokes-Poisson equations in three and higher dimensions
Zhensheng Gao, Zhong Tan (2012)
The paper is dedicated to the global well-posedness of the barotropic compressible Navier-Stokes-Poisson system in the whole space ℝ^N with N ≥ 3. The global existence and uniqueness of the strong solution is shown in the framework of hybrid Besov spaces. The initial velocity has the same critical regularity index as for the incompressible homogeneous Navier-Stokes equations. The proof relies on a uniform estimate for a mixed hyperbolic/parabolic linear system with a convection term.

F. M. Guillén-González, J. V. Gutiérrez-Santacreu (2013)
In this work we study a fully discrete mixed scheme, based on continuous finite elements in space and a linear semi-implicit first-order integration in time, approximating an Ericksen–Leslie nematic liquid crystal model by means of a Ginzburg–Landau penalized problem. Conditional stability of this scheme is proved via a discrete version of the energy law satisfied by the continuous problem, and conditional convergence towards generalized Young measure-valued solutions to the Ericksen–Leslie problem...

V. V. Yurinsky (2008)
This article is dedicated to localization of the principal eigenvalue (PE) of the Stokes operator acting on solenoidal vector fields that vanish outside a large random domain modeling the pore space in a cubic block of porous material with disordered micro-structure. Its main result is an asymptotically deterministic lower bound for the PE of the sum of a low compressibility approximation to the Stokes operator and a small scaled random potential term, which is applied to produce a similar bound...

A multidomain spectral collocation method for the Stokes problem.
G.
Sacchi Landriani, H. Vandeven (1990/1991)
Sequential cancellation criterion

The sequential cancellation criterion is an extension of the cancellation criterion to sequential multi-winner voting methods. This criterion requires that, whenever a voter likes each of the candidates elected so far the same amount as another voter does, their ballots be capable of cancelling each other out in the next round.

While the cancellation criterion itself can be applied to multi-winner methods, it is incompatible with proportional representation, which arguably makes it a bad criterion to pass in multi-winner contexts. The sequential cancellation criterion fixes this by only requiring cancellation when two voters have given the same amount of support to all candidates elected so far.

To give a more formal definition of the sequential cancellation criterion, some terms and notation will need to be defined first. An incomplete list of winners is a (possibly empty) list of candidates that is smaller than the number of seats k in the election. The notation m(b_1, b_2, \dots, b_n)[1:i] denotes the list of the first i winners elected by the sequential method m given the ballots b_1, b_2, \dots, b_n, and b(c) denotes the amount of support that ballot b gives to candidate c.

A sequential voting method m passes the sequential cancellation criterion if for every ballot b and incomplete list of winners W, there exists some ballot b' such that:

- For every candidate c in W, b'(c) = b(c).
- For any list of ballots b_1, b_2, \dots, b_n such that m(b_1, b_2, \dots, b_n)[1:\vert W \vert] = W and m(b_1, b_2, \dots, b_n, b, b')[1:\vert W \vert] = W, it holds that m(b_1, b_2, \dots, b_n, b, b')[1:\vert W \vert+1] = m(b_1, b_2, \dots, b_n)[1:\vert W \vert+1].

This definition assumes m is deterministic and passes the anonymity criterion (though there is an extension that also applies to non-deterministic methods). Because no candidates are elected before the first round, the sequential cancellation criterion requires that ballots obey the cancellation criterion during that round.
This also means that it reduces to the cancellation criterion for single-winner methods, since they only have one round. In most cases, a pair of ballots that cancels in one round won't cancel in previous or subsequent rounds. This means it's possible that adding a cancelling pair of ballots to the election changes the candidates elected in earlier rounds; when this happens, the ballots aren't required to cancel.

The sequential cancellation criterion was primarily designed with proportional methods in mind, but it also behaves nicely in the context of bloc voting. In fact, the sequential cancellation criterion and the cancellation criterion are equivalent for bloc methods. Below is an informal sketch of the relevant proof.

First, it must be shown that if a bloc method passes the sequential cancellation criterion, it passes the cancellation criterion as well. A bloc method works by applying a single-winner method to elect a candidate, removing that candidate from the ballots, and then repeating those two steps until all seats are filled. The sequential cancellation criterion requires that a voting method pass cancellation in the first round, so the single-winner method employed by the bloc method must pass cancellation. Since the bloc method uses this same method in every round, it will pass cancellation in every round. Thus, the bloc method itself must pass cancellation.

Next, it must be shown that if a bloc method passes the cancellation criterion, it also passes the sequential cancellation criterion. Consider some arbitrary ballot b and incomplete list of winners W, and let b_{-W} denote the ballot b with the candidates in W removed. Since the single-winner method employed passes cancellation, there must exist some ballot b'_{-W} that cancels with b_{-W}. Let b' be this cancelling ballot, modified so that it maps each candidate c in W to b(c). If b and b' are added to the election and the first \vert W \vert winners are still W, then once the candidates in W have been removed the added ballots reduce to the cancelling pair b_{-W} and b'_{-W}. This means the next winner elected will also remain unchanged if b and b' are added. Therefore, this bloc method must pass sequential cancellation.
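The bloc construction described in the proof sketch can be made concrete. Below is a minimal Python sketch (all names and the 0–5 score range are our own, for illustration) using single-winner score voting, which passes cancellation: a ballot b and the "reversed" ballot b'(c) = MAX_SCORE − b(c) add the same total to every candidate, so together they can never change a winner.

```python
MAX_SCORE = 5  # assumed score range: 0 to 5

def score_winner(ballots, candidates):
    # Single-winner score voting: highest total wins; ties broken alphabetically.
    totals = {c: sum(b.get(c, 0) for b in ballots) for c in candidates}
    return max(sorted(candidates), key=lambda c: totals[c])

def bloc_method(ballots, candidates, k):
    # Bloc voting: repeatedly elect a winner with the single-winner method,
    # remove that candidate from the pool, and repeat until k seats are filled.
    winners, pool = [], sorted(candidates)
    while len(winners) < k:
        w = score_winner(ballots, pool)
        winners.append(w)
        pool.remove(w)
    return winners

ballots = [{"A": 5, "B": 3, "C": 0}, {"A": 1, "B": 4, "C": 2}]
b = {"A": 5, "B": 0, "C": 1}
b_prime = {c: MAX_SCORE - s for c, s in b.items()}  # cancelling ballot

print(bloc_method(ballots, ["A", "B", "C"], 2))                 # ['B', 'A']
print(bloc_method(ballots + [b, b_prime], ["A", "B", "C"], 2))  # ['B', 'A']
```

Restricting both ballots to the candidates remaining after a round still leaves them summing to a constant on every remaining candidate, so the same pair cancels in every round, which is what the bloc-method equivalence relies on.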
Comparison to vote unitarity

Vote unitarity is an alternate proposal for defining one person, one vote in the context of proportional representation. Unlike the sequential cancellation criterion, it does not attempt to extend the cancellation criterion (and thus does not reduce to the cancellation criterion for single-winner methods). Instead, it takes a different approach centered around constraining the spending of ballot weights. Below is a table comparing vote unitarity with the sequential cancellation criterion and the cancellation criterion.

| | Cancellation | Sequential Cancellation | Vote Unitarity |
|---|---|---|---|
| Applies to all voting methods | Yes | No, only sequential methods (including single-winner methods) | No, only sequential proportional methods |
| Compatible with proportional representation | No | Yes | Yes |
| Doesn't require access to internal mechanics | Yes | Yes | No, requires access to ballot weights |
| If two methods always produce the same results, either both pass or both fail | Yes | Yes, but candidates must always be elected in the same order | No |
Compute SNR using the sonar equation - MATLAB sonareqsnr

Estimate the SNR of a signal transmitted by a source with a source level of 130 dB//1 μPa and reflected from a target with 25 dB//1 m² target strength. The noise level is 45 dB//1 μPa, the receive array directivity is 25 dB, and the one-way transmission loss is 60 dB.

Sonar source level, specified as a scalar. Source level is the ratio of the source intensity to a reference intensity, converted to dB. The reference intensity is the intensity of a sound wave having a root-mean-square (rms) pressure of 1 μPa. Units are in dB//1 μPa.

TL = 10\log\frac{I_{\text{s}}}{I(R)}
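The numbers in the example can be checked directly from the active sonar equation, SNR = SL − 2·TL + TS − (NL − DI), with all quantities in dB. The following Python sketch (function name ours, not MATLAB's) mirrors the computation:

```python
def sonar_eq_snr(sl, tl, ts, nl, di):
    """Active sonar equation, all quantities in dB:
    SNR = SL - 2*TL + TS - (NL - DI)."""
    return sl - 2 * tl + ts - (nl - di)

# Values from the example: SL = 130 dB, one-way TL = 60 dB,
# TS = 25 dB, NL = 45 dB, DI = 25 dB.
snr = sonar_eq_snr(sl=130, tl=60, ts=25, nl=45, di=25)
print(snr)  # 15 (dB)
```

The transmission loss appears twice because the signal traverses the source-to-target path in both directions before reaching the receiver.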
Physics - Elusive Superconducting Superhydride Synthesized

CaH₆ (Image credit: Hongbo Wang/Jilin University)

Among the various routes that physicists hope might lead to practical, room-temperature superconductors, one of the most fruitful involves hydrogen-rich binary compounds containing rare-earth or actinide elements. One such compound, lanthanum superhydride, has already been shown to superconduct at temperatures up to 260 K, but only under pressures greater than 170 GPa (see Viewpoint: Pushing Towards Room-Temperature Superconductivity). Now, Liang Ma and colleagues from Jilin University in China have expanded the search by synthesizing a new type of superhydride containing an alkaline-earth metal instead of a rare-earth metal or an actinide [1]. The researchers say that the synthesis of the new material, clathrate calcium hydride (CaH₆), opens the door to a class of superconductors that is currently underexplored.

The structure and superconducting properties of CaH₆ were first predicted in 2012. However, subsequent attempts to synthesize the compound failed to overcome obstacles such as the high reactivity between calcium and hydrogen, which, when brought together at low pressures, can result in hydrides with low hydrogen content. In their new work, Ma and colleagues solved this issue by using ammonia borane (NH₃BH₃) as a hydrogen source, which allowed them to synthesize the compound by direct reaction between calcium and hydrogen at high temperature and pressure. In the team's experiments, the synthesized CaH₆ exhibited superconducting properties very close to theoretical predictions, achieving a critical temperature of 215 K at 172 GPa. Although this temperature is below the record set by lanthanum superhydride (under a similarly impractical pressure), the researchers hope that experiments with other alkaline-earth-metal superhydrides will eventually lead to room-temperature superconductivity under more easily achievable conditions.

L.
Ma et al., "High-temperature superconducting phase in clathrate calcium hydride CaH6 up to 215 K at a pressure of 172 GPa," Phys. Rev. Lett. 128, 167001 (2022). Authors: Liang Ma, Kui Wang, Yu Xie, Xin Yang, Yingying Wang, Mi Zhou, Hanyu Liu, Xiaohui Yu, Yongsheng Zhao, Hongbo Wang, Guangtao Liu, and Yanming Ma.
Impermanent loss and how to compute it

We consider an investment into WBNB-CAKE LP tokens. Impermanent Loss (IL) is the difference between the revenue of an investment into the LP tokens and the revenue of a pure investment into the volatile assets (WBNB and CAKE).

Recall that the variation of the price of the LP token is proportional to \sqrt{r \times s}, where r and s are the variations of the prices of WBNB and CAKE respectively. Therefore an investment in WBNB-CAKE LP tokens generates revenues proportional to \sqrt{r \times s}. However we should also take into account that there is a yield farming interest r^{\mathrm{LP}} per LP token, derived from any source: trading fees, subsidies, reward tokens. If this interest is reinvested into the pool for compounding and not removed as a payout, then the total revenue is given by

\sqrt{r\times s}\times (r^{\mathrm{LP}}+1).

On the other hand, for a pure 50/50 investment in WBNB and CAKE, the revenue is proportional to the variations r and s of the prices of WBNB and CAKE:

\frac{r+s}{2}.

Impermanent Loss is therefore

\mathrm{IL}=\sqrt{r\times s}\times (r^{\mathrm{LP}}+1)-\frac{r+s}{2}={\color{blue}\sqrt{r\times s}-\frac{r+s}{2}} + {\color{green}\sqrt{r\times s}\times r^{\mathrm{LP}}}.

The last equality is particularly helpful for quantifying Impermanent Loss. If we neglect the yield farming interest, r^{\mathrm{LP}}=0, then the impermanent loss simply consists of the blue term. By the arithmetic-geometric mean inequality, this term is always negative (whenever r \neq s):

{\color{blue} \sqrt{r\times s}-\frac{r+s}{2}}<0

Therefore, in the absence of yield farming interest, an investment into LP tokens is always losing against a pure investment into each token of the pair. The larger the difference between the variations r and s of the prices of WBNB and CAKE, the bigger the Impermanent Loss.
This is why the yield farming interest r^{\mathrm{LP}}>0 is crucial: the yield farming revenue {\color{green}\sqrt{r\times s}\times r^{\mathrm{LP}}} can compensate for the loss {\color{blue}\sqrt{r\times s}-\frac{r+s}{2}}<0. Next, let us assume that the price of one token does not change, say s=1. This happens for instance when CAKE is replaced with a stable coin. Then the revenue of a pure investment in WBNB is given by the price evolution r. Therefore the investment in LP tokens beats the pure investment if: \sqrt{r}\times (r^{\mathrm{LP}}+1) \geq r \Longleftrightarrow (r^{\mathrm{LP}}+1)^2\geq r. Similarly, the investment in LP tokens is sub-optimal whenever: \sqrt{r}\times (r^{\mathrm{LP}}+1) \leq r \Longleftrightarrow (r^{\mathrm{LP}}+1)^2\leq r. Note that the investment in LP tokens can even tolerate significant drops in asset price, remaining profitable down to \sqrt{r}\times (r^{\mathrm{LP}}+1) \geq 1 \Longleftrightarrow r\geq \frac{1}{(r^{\mathrm{LP}}+1)^2}. The situation is summarised in this graphic:
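These formulas can be checked numerically. A minimal sketch (the function names are illustrative, not part of LP-Swap):

```python
import math

def impermanent_loss(r, s, r_lp=0.0):
    """Impermanent loss of an LP position versus holding the two assets.

    r, s  : price variation factors of the two pooled tokens
    r_lp  : yield farming interest per LP token (compounded into the pool)
    """
    lp_revenue = math.sqrt(r * s) * (1.0 + r_lp)   # LP-token revenue
    hold_revenue = (r + s) / 2.0                   # 50/50 buy-and-hold revenue
    return lp_revenue - hold_revenue

def lp_beats_hold_when_stable(r, r_lp):
    """With s = 1 (stable-coin pair), LP beats holding iff (1 + r_lp)^2 >= r."""
    return (1.0 + r_lp) ** 2 >= r

# Without yield farming, AM-GM makes the loss strictly negative when r != s.
assert impermanent_loss(2.0, 0.5) < 0
# With equal variations there is no impermanent loss.
assert impermanent_loss(1.0, 1.0) == 0.0
```

The breakeven condition from the text falls out directly: `lp_beats_hold_when_stable(1.1, 0.05)` holds because (1.05)^2 = 1.1025 ≥ 1.1.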
p([[1, 2], [3, 4]]);  versus  p(Matrix([[1, 2], [3, 4]]));
LinearAlgebra:-Rank(Array([[1, 2], [3, 4]]));
    2
LinearAlgebra:-Determinant([[6, 7], [8, 9]]);
    -2
p(Array([[1, 2], [3, 4]]));
    Matrix
p(Vector[row]([5, 6, 7]));
    Matrix
p([[1, 2, 3], [4, 5, 6]]);
    Matrix
Adding a ~ prefix to the m::~Matrix parameter specification in the example above tells Maple it will accept something similar to a Matrix. You can now pass in a Vector.
Behind the scenes, a conversion to the expected type takes place:
A := Array(2..3, 6..7):
ArrayDims(A);
    2..3, 6..7
p(A);
    1..2, 1..2
p(<1, 2; 3, 4>);
    1..2, 1..2
Frac(1.5);
    1/2
Frac(3/2);
    1/2
p("a string");
    "a string"
p(`a name`);
    "a name"
p(expect + an + Error);
Butterworth filter design - MATLAB butter - MathWorks Australia Lowpass Butterworth Transfer Function Bandstop Butterworth Filter Highpass Butterworth Filter [b,a] = butter(n,Wn) [b,a] = butter(n,Wn,ftype) [z,p,k] = butter(___) [A,B,C,D] = butter(___) [___] = butter(___,'s') [b,a] = butter(n,Wn) returns the transfer function coefficients of an nth-order lowpass digital Butterworth filter with normalized cutoff frequency Wn. [b,a] = butter(n,Wn,ftype) designs a lowpass, highpass, bandpass, or bandstop Butterworth filter, depending on the value of ftype and the number of elements of Wn. The resulting bandpass and bandstop designs are of order 2n. [z,p,k] = butter(___) designs a lowpass, highpass, bandpass, or bandstop digital Butterworth filter and returns its zeros, poles, and gain. This syntax can include any of the input arguments in previous syntaxes. [A,B,C,D] = butter(___) designs a lowpass, highpass, bandpass, or bandstop digital Butterworth filter and returns the matrices that specify its state-space representation. [___] = butter(___,'s') designs a lowpass, highpass, bandpass, or bandstop analog Butterworth filter with cutoff angular frequency Wn. Design a 6th-order lowpass Butterworth filter with a cutoff frequency of 300 Hz, which, for data sampled at 1000 Hz, corresponds to 0.6\pi rad/sample. fc = 300; fs = 1000; [b,a] = butter(6,fc/(fs/2)); Design a 6th-order Butterworth bandstop filter with normalized edge frequencies of 0.2\pi and 0.6\pi rad/sample. Plot its magnitude and phase responses. Use it to filter random data. [b,a] = butter(3,[0.2 0.6],'stop'); Design a 9th-order highpass Butterworth filter. Specify a cutoff frequency of 300 Hz, which, for data sampled at 1000 Hz, corresponds to 0.6\pi rad/sample. [z,p,k] = butter(9,300/500,'high'); Design a 20th-order Butterworth bandpass filter with a lower cutoff frequency of 500 Hz and a higher cutoff frequency of 560 Hz. Specify a sample rate of 1500 Hz. Use the state-space representation. Design an identical filter using designfilt. 
[A,B,C,D] = butter(10,[500 560]/750); legend(fvt,'butter','designfilt') Wn — Cutoff frequency Cutoff frequency, specified as a scalar or a two-element vector. The cutoff frequency is the frequency at which the magnitude response of the filter is 1/\sqrt{2}. If Wn is scalar, then butter designs a lowpass or highpass filter with cutoff frequency Wn. If Wn is the two-element vector [w1 w2], where w1 < w2, then butter designs a bandpass or bandstop filter with lower cutoff frequency w1 and higher cutoff frequency w2. For digital filters, the cutoff frequencies must lie between 0 and 1, where 1 corresponds to the Nyquist rate—half the sample rate or π rad/sample. For analog filters, the cutoff frequencies must be expressed in radians per second and can take on any positive value. 'bandpass' specifies a bandpass filter of order 2n if Wn is a two-element vector. 'bandpass' is the default when Wn has two elements. 'stop' specifies a bandstop filter of order 2n if Wn is a two-element vector. H\left(z\right)=\frac{B\left(z\right)}{A\left(z\right)}=\frac{b(1)+b(2)\,z^{-1}+\cdots+b(n+1)\,z^{-n}}{a(1)+a(2)\,z^{-1}+\cdots+a(n+1)\,z^{-n}}. H\left(s\right)=\frac{B\left(s\right)}{A\left(s\right)}=\frac{b(1)\,s^{n}+b(2)\,s^{n-1}+\cdots+b(n+1)}{a(1)\,s^{n}+a(2)\,s^{n-1}+\cdots+a(n+1)}. H\left(z\right)=k\,\frac{\left(1-z(1)\,z^{-1}\right)\left(1-z(2)\,z^{-1}\right)\cdots\left(1-z(n)\,z^{-1}\right)}{\left(1-p(1)\,z^{-1}\right)\left(1-p(2)\,z^{-1}\right)\cdots\left(1-p(n)\,z^{-1}\right)}. 
H\left(s\right)=k\,\frac{\left(s-z(1)\right)\left(s-z(2)\right)\cdots\left(s-z(n)\right)}{\left(s-p(1)\right)\left(s-p(2)\right)\cdots\left(s-p(n)\right)}. x\left(k+1\right)=A\,x\left(k\right)+B\,u\left(k\right),\quad y\left(k\right)=C\,x\left(k\right)+D\,u\left(k\right). \dot{x}=A\,x+B\,u,\quad y=C\,x+D\,u. [b,a] = butter(n,Wn,ftype); % This is an unstable filter [z,p,k] = butter(n,Wn,ftype); % Display and compare results Butterworth filters have a magnitude response that is maximally flat in the passband and monotonic overall. This smoothness comes at the price of decreased rolloff steepness. Elliptic and Chebyshev filters generally provide steeper rolloff for a given filter order. butter uses a five-step algorithm: It finds the lowpass analog prototype poles, zeros, and gain using the function buttap. It converts the poles, zeros, and gain into state-space form. If required, it transforms the lowpass prototype into a bandpass, highpass, or bandstop filter with the desired frequency constraints. For digital filter design, it uses bilinear to convert the analog filter into a digital filter through a bilinear transformation with frequency prewarping. Careful frequency adjustment enables the analog filters and the digital filters to have the same frequency response magnitude at Wn or at w1 and w2. It converts the state-space filter back to its transfer function or zero-pole-gain form, as required. besself | buttap | buttord | cheby1 | cheby2 | designfilt | ellip | filter | maxflat | sosfilt
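The analog prototype step that butter delegates to buttap can be sketched in a few lines of Python (a rough illustration of the pole placement only, not the MATLAB implementation): the order-n Butterworth prototype places its n poles uniformly on the left half of the unit circle in the s-plane.

```python
import cmath
import math

def butter_prototype_poles(n):
    """Poles of the order-n analog lowpass Butterworth prototype.

    The poles lie on the unit circle in the left half of the s-plane,
    at angles pi*(2k + n - 1)/(2n) for k = 1..n.
    """
    return [cmath.exp(1j * math.pi * (2 * k + n - 1) / (2 * n))
            for k in range(1, n + 1)]

poles = butter_prototype_poles(4)
# Every pole has magnitude 1 and a strictly negative real part,
# which is what makes the prototype stable and maximally flat.
assert all(abs(abs(p) - 1.0) < 1e-12 for p in poles)
assert all(p.real < 0 for p in poles)
```

For n = 1 this yields the single pole s = -1, i.e. the familiar first-order prototype H(s) = 1/(s + 1).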
A bijection between planar constellations and some colored Lagrangian trees. Chauve, Cedric (2003) A characterization of hypergraphs generated by arborescences Maciej M. Sysło (1979) Michael A. Henning (2002) A Roman dominating function (RDF) on a graph G = (V,E) is a function f: V → {0,1,2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. The weight of f is w(f) = \sum_{v\in V} f(v). The Roman domination number is the minimum weight of an RDF in G. It is known that for every graph G, the Roman domination number of G is bounded above by twice its domination number. Graphs which have Roman domination number equal to twice their domination number are called... A combinatorial derivation of the number of labeled forests. Callan, David (2003) A combinatorial proof of Postnikov's identity and a generalized enumeration of labeled trees. Seo, Seunghyun (2005) A distance between isomorphism classes of trees A few remarks on the history of MST-problem Jaroslav Nešetřil (1997) On the background of Borůvka’s pioneering work we present a survey of the development related to the Minimum Spanning Tree Problem. We also complement the historical paper Graham-Hell [GH] by a few remarks and provide an update of the extensive literature devoted to this problem. A formula for all minors of the adjacency matrix and an application R. B. Bapat, A. K. Lal, S. Pati (2014) We supply a combinatorial description of any minor of the adjacency matrix of a graph. This description is then used to give a formula for the determinant and inverse of the adjacency matrix, A(G), of a graph G, whenever A(G) is invertible, where G is formed by replacing the edges of a tree by path bundles. A Generalization of the Matrix-Tree Theorem. 
Mordechai Lewin (1982) Daniele D’Angeli, Alfredo Donno (2009) A Link Between Ordered Sets And Trees On The Rectangle Tree Hypothesis Đuro Kurepa (1982) Michael Poschen, Lutz Volkmann (2006) Let ir(G) and γ(G) be the irredundance number and domination number of a graph G, respectively. The number of vertices and leaves of a graph G are denoted by n(G) and n₁(G). If T is a tree, then Lemańska [4] presented in 2004 the sharp lower bound γ(T) ≥ (n(T) + 2 - n₁(T))/3. In this paper we prove ir(T) ≥ (n(T) + 2 - n₁(T))/3 for an arbitrary tree T. Since γ(T) ≥ ir(T) is always valid, this inequality is an extension and improvement of Lemańska's result. ...
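The Roman domination definition above lends itself to a brute-force check on small graphs. A sketch (not from any of the cited papers; it enumerates all 3^n labelings, so it is only usable for tiny examples):

```python
from itertools import product

def roman_domination_number(n_vertices, edges):
    """Minimum weight of a Roman dominating function, by brute force.

    f maps each vertex to 0, 1, or 2; every vertex with f(v) = 0 must
    have a neighbor u with f(u) = 2.
    """
    adj = {v: set() for v in range(n_vertices)}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    best = 2 * n_vertices  # f ≡ 2 is always a valid RDF
    for f in product((0, 1, 2), repeat=n_vertices):
        if all(f[v] != 0 or any(f[u] == 2 for u in adj[v])
               for v in range(n_vertices)):
            best = min(best, sum(f))
    return best

# Path 0-1-2: assigning 2 to the middle vertex dominates both endpoints,
# so the Roman domination number is 2 (twice the domination number, 1).
assert roman_domination_number(3, [(0, 1), (1, 2)]) == 2
```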
14G17 Positive characteristic ground fields 14G50 Applications to coding theory and cryptography A Classical Diophantine Problem and Modular Forms of Weight 3/2. J.B. Tunnell (1983) A counterexample in the theory of local zeta functions. Martin, Roland (1995) A Gauss-Bonnet theorem for motivic cohomology. S. Turner (1990) A motivic Chebotarev density theorem. Dhillon, Ajneet, Mináč, Ján (2006) John L. Boxall (1986) In this paper we apply the results of our previous article on the p-adic interpolation of logarithmic derivatives of formal groups to the construction of p-adic L-functions attached to certain elliptic curves with complex multiplication. Our results are primarily concerned with curves with supersingular reduction. Franziska Heinloth (2007) We consider zeta functions with values in the Grothendieck ring of Chow motives. Investigating the λ-structure of this ring, we deduce a functional equation for the zeta function of abelian varieties. Furthermore, we show that the property of having a rational zeta function satisfying a functional equation is preserved under products. A Note on Height Pairings, Tamagawa Numbers, and the Birch and Swinnerton-Dyer Conjecture. S. Bloch (1980) A remark on a theorem of Chowla-Cowles. Hideji Ito (1982) A Remark on the Sato-Tate Conjecture. A.P. Ogg (1969/1970) Abelian varieties-Galois representation and properties of ordinary reduction Rutger Noot (1995) Absolute derivations and zeta functions. Kurokawa, Nobushige, Ochiai, Hiroyuki, Wakayama, Masato (2003) Spencer Bloch (1984) Algebraische Zyklen auf Hilbert-Blumenthal-Flächen. G. Harder, R.P. Langlands (1986) An analogue of the Weierstrass ζ-function in characteristic p José Felipe Voloch (1997) An explicit factorisation of the zeta functions of Dwork hypersurfaces Philippe Goutet (2010) Another Look at p-Adic L-Functions for Totally Real Fields. Nicholas M. Katz (1981) Chern classes, adeles and L-functions. A.N. 
Parshin (1983) Class numbers of quadratic fields and Shimura's correspondence. Jan Nekovář (1990)
Physics - Superconductivity Dome Rises from Damped Phonons January 25, 2022 • Physics 15, s11 An unexpectedly simple extension to a theory explains the critical temperature anomalies observed in ferroelectric superconductors. M. Baggioli/Shanghai Jiao Tong University; ktsdesign/Shutterstock Ever since ferroelectric materials were first discovered, scientists have sought to understand how their properties relate to superconductivity. A recently discovered curiosity, for instance, is that these materials exhibit a puzzling “dome” in their critical superconducting temperature near their ferroelectric transitions. Now, Chandan Setty at Rice University, Texas, and colleagues provide a surprisingly simple explanation for this puzzle that uses lattice dynamics and phonons [1]. The superconducting dome is puzzling because of what the ferroelectric transition does to a material’s phonons. According to Bardeen-Cooper-Schrieffer (BCS) theory, phonons mediate the binding of electrons into superconductivity-producing Cooper pairs. But as a material nears the ferroelectric transition—as a result of doping or the application of strain, for example—these phonons are damped and have their lifetimes reduced. In a conventional superconductor, such phonon damping lowers the critical superconducting temperature; in a ferroelectric material, the critical temperature instead rises to a peak at the ferroelectric “instability point,” where the damping is greatest. Setty and colleagues explain the dome by expanding BCS theory so that it describes the phonon-damping process more accurately. In particular, they show that, at the instability point, this damping is anharmonic. As a result of this anharmonicity, anti-Stokes phonon-electron scattering is suppressed more than Stokes phonon-electron scattering. 
Since anti-Stokes scattering inhibits Cooper-pair formation, while Stokes scattering promotes it, the effect is an increase in the critical temperature around the instability point. Anharmonic damping occurs in all ferroelectric materials—as well as other systems with so-called soft-mode structural instabilities—explaining why experimenters have observed superconducting domes across a wide range of conditions. Setty and colleagues hope that their new understanding will provide scientists with another tool to engineer superconductivity in such materials. C. Setty et al., “Superconducting dome in ferroelectric-type materials from soft mode instability,” Phys. Rev. B 105, L020506 (2022). Authors: Chandan Setty, Matteo Baggioli, and Alessio Zaccone.
(177049) 2003 EE16 Wikipedia 9.93475×10⁻⁵ AU (1.486217×10⁴ km) 5.1×10¹⁰ kg (assumed) (177049) 2003 EE16, provisionally known as 2003 EE16, is an Apollo near-Earth asteroid and potentially hazardous object.[2] It was discovered on 8 March 2003 by LPL/Spacewatch II at an apparent magnitude of 20 using a 1.8-meter (71 in) reflecting telescope.[1] It has an estimated diameter of 320 meters (1,050 ft).[3] The asteroid was listed on the Sentry Risk Table with a Torino Scale rating of 1 on 2 April 2003.[3] Many of the virtual impactors were located near the nominal orbital solution and the asteroid has a low inclination relative to Earth's orbit.[4] Observations by the Very Large Telescope (VLT) 8-meter facilities on 22 May and 19 June 2003, when 2003 EE16 was very dim with an apparent magnitude between 24–25,[note 1] refined the orbit.[4] It was removed from the Sentry Risk Table on 28 May 2003.[5] 2003 EE16 has the smallest Earth minimum orbit intersection distance (MOID) of any known potentially hazardous asteroid.[6] The Earth MOID is 0.0000475 AU (7,110 km; 4,420 mi).[6] Asteroids with a smaller Earth MOID are less than ~100 meters in diameter, such as 2013 XY8 and 2010 TD54. Earth impactors 2008 TC3 and 2014 AA had small Earth MOID values as they were on their impact approach when discovered. Close approaches to Earth[7] 2014-07-01 0.0966 AU (14,450,000 km; 8,980,000 mi) (37.6 LD) 2149-07-06 0.0518 AU (7,750,000 km; 4,820,000 mi) (20.2 LD) ^ At an apparent magnitude of 24, the asteroid was roughly 10 million times fainter than can be seen with the naked eye: (\sqrt[5]{100})^{24-6.5} = 10^{7} ^ a b "MPEC 2003-E34 : 2003 EE16". IAU Minor Planet Center. 9 March 2003. Retrieved 3 February 2014. (K03E16E) ^ a b c d "JPL Small-Body Database Browser: 177049 (2003 EE16)" (2013-03-12 last obs and observation arc=10.8 years). Jet Propulsion Laboratory. Retrieved 7 April 2016. ^ a b c "Current Impact Risks (2003 EE16)". Near-Earth Object Program. NASA. 
2 April 2003. Archived from the original on 2 April 2003. ^ a b "2003 EE16". Spaceguard Central Node. 15 July 2003. Archived from the original on 21 February 2014. Retrieved 3 February 2014. ^ "Date/Time Removed". NASA/JPL Near-Earth Object Program Office. Archived from the original on 2 June 2002. Retrieved 3 February 2014. ^ a b "JPL Small-Body Database Search Engine: H <= 22 (mag) and Earth MOID < 0.0027 (AU)". JPL Solar System Dynamics. Retrieved 3 February 2013. ^ "JPL Close-Approach Data: 177049 (2003 EE16)" (2013-03-12 last obs and observation arc=10.8 years). Retrieved 3 February 2014. (177049) 2003 EE16 at NeoDyS-2, Near Earth Objects—Dynamic Site (177049) 2003 EE16 at ESA–space situational awareness (177049) 2003 EE16 at the JPL Small-Body Database
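The brightness arithmetic in the footnote can be verified directly: five magnitudes correspond to a factor of exactly 100 in brightness, so 24 − 6.5 = 17.5 magnitudes give (100^{1/5})^{17.5} = 10^7. A quick sketch:

```python
def brightness_ratio(m_faint, m_bright):
    """Brightness ratio implied by two apparent magnitudes.

    Five magnitudes correspond to a factor of exactly 100, so one
    magnitude is a factor of 100**(1/5) ~= 2.512 (the Pogson ratio).
    """
    return (100 ** (1 / 5)) ** (m_faint - m_bright)

# Magnitude-24 asteroid vs. the ~6.5 naked-eye limit: ten million times fainter.
assert abs(brightness_ratio(24, 6.5) - 1e7) < 1.0
```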
Physics - Strong Staggered Flux Lattices for Cold Atoms Extremely high magnetic fields have been simulated by laser manipulation of atoms trapped in an optical lattice. APS/Erich Mueller Figure 1: (Bottom) Schematic of the tight-binding model realized by Aidelsburger et al. in their cold-atom experiment [1]. Red and blue spheres (labeled by A and B) denote sites. Arrows in the middle of plaquettes show the direction of the effective magnetic field. (Top) Schematic of the optical lattice potential with period-two superlattice in the x direction used in the experiment. The large detuning between the red sites and blue sites prevents tunneling in the x direction. Two external lasers are introduced, whose detuning coincides with the energy mismatch, restoring tunneling. Phases from these lasers lead to the staggered effective magnetic field appearing in the tight-binding model. Electrons in magnetic fields move in circular orbits. These gyrations are key to a wide range of physical phenomena: superconducting vortices, quantum magnetoresistance oscillations, Hall effects, etc. Theoretical (see 25 April 2011 Viewpoint) and experimental (see 30 March 2009 Viewpoint) advances have begun to make possible “artificial magnetic fields” that allow this physics to be studied using cold neutral atoms. As they report in Physical Review Letters, Monika Aidelsburger at Ludwig Maximilian University, Germany, and colleagues have reached an important milestone in the creation of artificial magnetic fields [1], producing a quantum degenerate gas of interacting bosons that experiences the analog of a large staggered flux. 
Electrons on a lattice are often described by tight-binding models, where the charge carriers move by hopping between spatially localized single-particle states. Magnetic fields appear as phases on the hopping matrix elements [2]. In their experiment on neutral rubidium-87 atoms, Aidelsburger et al. mimic the effect of a magnetic field by using techniques from quantum optics (described below) to generate such phases. Their success opens up several new areas of study with cold atoms; most importantly the interplay of lattices and magnetic fields, analogs of the quantum Hall effects, and band-structure physics related to topological insulators. The “effective magnetic field” generated in these experiments is extremely strong. One of the most important dimensionless measures of the field strength in a lattice system is the magnetic flux through a single square unit cell or plaquette. For particles of charge q, this flux is \Phi = (\hbar/q)\,\phi, where \phi is the sum of the phases on the four hopping matrix elements encountered when hopping around the boundary of the plaquette, and \hbar is the reduced Planck constant. (In other contexts, this relation between phases and flux gives rise to the Aharonov-Bohm effect, whereby an interference experiment can be used to detect a magnetic field.) Since \phi is only defined modulo 2\pi, its largest possible magnitude is \pi. Here the atoms encountered a phase of this order. If Aidelsburger et al. were actually working with electrons (as opposed to atoms), and the lattice was of atomic dimensions, this would correspond to a magnetic field far beyond anything achievable in the laboratory. Such fields are impossible to achieve in condensed-matter experiments, but theorists speculate that they would lead to rich phenomena, for example, in a uniform field, the single-particle spectrum is fractal [3]. Hints of this fractal spectrum have been seen in high-mobility nanofabricated structures [4]. 
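The plaquette-flux bookkeeping described here (a phase sum defined only modulo 2π, hence bounded in magnitude by π) can be sketched as follows; this illustrates just the arithmetic, not the experiment's actual parameters:

```python
import math

def plaquette_phase(hop_phases):
    """Total phase accumulated hopping around a plaquette, wrapped to (-pi, pi].

    hop_phases: the four hopping phases encountered along the boundary.
    Because the phase is only defined modulo 2*pi, its magnitude never
    exceeds pi after wrapping.
    """
    total = sum(hop_phases)
    wrapped = math.fmod(total, 2 * math.pi)
    if wrapped > math.pi:
        wrapped -= 2 * math.pi
    elif wrapped <= -math.pi:
        wrapped += 2 * math.pi
    return wrapped

# Four quarter-turn phases sum to 2*pi, which is equivalent to zero flux.
assert abs(plaquette_phase([math.pi / 2] * 4)) < 1e-12
# Four pi/4 phases realize the maximal flux magnitude, pi.
assert abs(plaquette_phase([math.pi / 4] * 4) - math.pi) < 1e-12
```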
Adding interactions, as is readily done in the cold-atom setting, should make the physics even more interesting, and lead one into uncharted territory. How is the quantum Hall effect modified by Mott physics, or by the fractal single-particle spectrum? Future experiments will explore such questions. Even more striking, Aidelsburger et al. produced a staggered field: As illustrated in Fig. 1, their system acts as if the magnetic field oscillates from plaquette to plaquette. One consequence is an unusual band structure describing the atoms, with two overlapping bands that touch at two “Dirac points.” Small modifications to this lattice can introduce gaps between the two bands at these Dirac points, producing a band structure in which phases accumulate when one adiabatically moves around closed paths in quasimomentum space. These phases act similarly to an applied magnetic field, producing “anomalous Hall effects” and, under appropriate circumstances, giving rise to an insulator with nontrivial topological invariants [5]. Even though there is no net magnetic field, Aidelsburger’s lattice breaks time-reversal symmetry and is very similar to a model introduced by Haldane [6], which has inspired recent work on “topological insulators” (see 28 November 2011 Viewpoint). Similar staggered flux lattices spontaneously appear in models of spin liquids and strongly correlated electron systems [7]. It is very exciting to contemplate using cold atoms to study analogs of this physics in a more controlled setting. To generate their effective magnetic field, the authors first used lasers to create the potential illustrated in Fig. 1, where in the x direction there is a bias between alternate sites (labeled A and B). The energy difference between the A sites and B sites is large compared to the bandwidth, effectively “turning off” tunneling in the x direction. To turn tunneling back on, Aidelsburger et al. 
introduce two more lasers, carefully tuned so that an atom can resonantly absorb a photon from one beam, emit it into the other, and hop in the x direction. The lasers are skewed with respect to one another, so that the relative phase between the two beams varies in space. This phase difference translates into spatially dependent phases on the hopping matrix elements. By varying the angle between these beams the “flux” can be varied. An equivalent picture is that the two-photon process gives the atoms a transverse impulse when they jump from a row of A sites to B sites. Interpreting this impulse in terms of a Lorentz force gives the effective magnetic field. The direction of the field alternates because the direction one hops to get from A to B alternates. To demonstrate that they have created staggered fluxes, Aidelsburger et al. studied the momentum distribution of a Bose-Einstein condensate in this lattice. They found that as they changed the ratio of the strength of the hopping in the x and y directions, they could tune from a regime where condensation involved only a single quasimomentum to a regime where two different quasimomenta were involved—the latter occurring when one of the two hoppings was much weaker than the other. They found quantitative agreement with the tight-binding band structure and with detailed theoretical studies by Möller and Cooper [8]. By slightly modifying their setup, Aidelsburger et al. were also able to carry out dynamical experiments on an array of isolated four-site plaquettes, each of which contained a single atom. Such studies of small clusters are another important frontier [9]. The idea of using “Raman assisted hopping” to mimic a magnetic field was first suggested by Jaksch and Zoller [10]. As they pointed out, one produces a uniform field, rather than a staggered field, if one replaces the superlattice with a linear potential. Each hop in the positive x direction will then involve the same impulse. 
Alternative approaches are described in detail in a recent review article by Dalibard et al. [11]. There have been two other successful methods of generating analogs of magnetic fields for cold atoms. First, the Coriolis force has the same form as the Lorentz force, allowing experiments on rotating gases to explore vortex physics. In rotation experiments, the strength of the “magnetic field” is limited by instabilities caused by the centrifugal force. Second, a group of scientists at the National Institute of Standards and Technology has produced a continuum system where two-photon Raman transitions are used to emulate a vector potential [12]. In the coming years, artificial electromagnetic fields will play an ever-increasing role in cold-atom experiments. This material is based upon work supported by the National Science Foundation under grant No. PHY-1068165. M. Aidelsburger, M. Atala, S. Nascimbène, S. Trotzky, Y.-A. Chen, and I. Bloch, Phys. Rev. Lett. 107, 255301 (2011) R. Peierls, Z. Phys. 80, 763 (1933) D. Hofstadter, Phys. Rev. B 14, 2239 (1976) T. Feil et al., Phys. Rev. B 75, 075303 (2007); M. C. Geisler et al., Phys. Rev. Lett. 92, 256801 (2004); S. Melinte et al., 92, 036802 (2004) D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010) F. Haldane, Phys. Rev. Lett. 61, 2015 (1988) S. Chakravarty, R. B. Laughlin, D. K. Morr, and C. Nayak, Phys. Rev. B 63, 094503 (2001) G. Möller and N. R. Cooper, Phys. Rev. A 82, 063625 (2010) D. Blume, arXiv:1111.0941 (2011); Rep. Prog. Phys. (to be published) D. Jaksch and P. Zoller, New J. Phys. 5, 56 (2003) J. Dalibard, F. Gerbier, G. Juzeliūnas, and P. Öhberg, Rev. Mod. Phys. 83, 1523 (2011) Y.-J. Lin, R. L. Compton, K. Jiménez-García, J. V. Porto, and I. B. Spielman, Nature 462, 628 (2009) M. Aidelsburger, M. Atala, S. Nascimbène, S. Trotzky, Y.-A. Chen, and I. Bloch Atomic and Molecular Physics Topological Insulators
Concept of temperature — lesson. Science State Board, Class 9. The degree of hotness or coolness of a body is measured by its temperature. The higher the temperature, the hotter the body. Kelvin (\(K\)) is the SI unit of temperature; Celsius (\(°C\)) is used in everyday applications. A thermometer is used to determine the temperature. There are three temperature scales: the Fahrenheit scale, the Celsius or Centigrade scale, and the Absolute or Kelvin scale. Fahrenheit scale: The freezing and boiling points in Fahrenheit are \(32\) degrees Fahrenheit and \(212\) degrees Fahrenheit, respectively. The interval has been broken down into \(180\) segments. Celsius scale: The freezing and boiling points on the Celsius scale (also known as the centigrade scale) are \(0°C\) and \(100°C\), respectively. The interval has been broken down into \(100\) pieces. The formula to convert a Celsius scale to Fahrenheit scale is: F=\frac{9}{5}C+32 The formula for converting a Fahrenheit scale to Celsius scale is: C=\frac{5}{9}\left(F-32\right) Kelvin scale (Absolute scale): Kelvin scale is also known as the absolute scale. Absolute zero, or \(0\) \(K\) on the Kelvin scale, is the temperature at which a substance's molecules have the lowest possible energy. At \(273.16\) \(K\), the solid, liquid, and gaseous phases of water can coexist in equilibrium. One kelvin is defined as \(1/273.16\) of the triple point temperature of water. The formula for converting a Celsius scale to a Kelvin scale is: K=C+273.15 https://commons.wikimedia.org/wiki/File:Temperature_Scales.png
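The conversion formulas can be collected into a small script for practice (the function names are illustrative, not part of the lesson):

```python
def celsius_to_fahrenheit(c):
    """F = (9/5) C + 32"""
    return 9 / 5 * c + 32

def fahrenheit_to_celsius(f):
    """C = (5/9) (F - 32)"""
    return 5 / 9 * (f - 32)

def celsius_to_kelvin(c):
    """K = C + 273.15"""
    return c + 273.15

# Boiling point of water: 100 C = 212 F = 373.15 K.
assert abs(celsius_to_fahrenheit(100) - 212) < 1e-9
assert abs(fahrenheit_to_celsius(32) - 0) < 1e-9
assert abs(celsius_to_kelvin(100) - 373.15) < 1e-9
```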
Initial Launch - bDollar Protocol bDollar (BDO) token distribution bDollar Shares (sBDO) distribution Block countdown: https://bscscan.com/block/countdown/346900 Banks - Finished (bootstrap phase) Current Bank Pools BDO-TEN LP (Farming details can be found here) Retired Bank Pools (Please withdraw any funds in these pools): BDO-BNB LP BDO-BFI LP BDO-BUSD LP: 45x sBDO-BUSD LP: 30x BDO-BNB LP: 10x bBDO (bond tokens) are always on sale; however, a purchase during epoch expansion will result in a net loss. Hence, the platform UI only allows for purchase of bBDO when BDO falls below the 1 $BUSD peg. Early redemption of bBDO bonds for BDO (when BDO's TWAP < 1) will incur a 10% net loss, e.g. if BDO's TWAP < 1, exchanging bBDO bonds for BDO will be done in a 1:0.9 ratio. To encourage redemption of bBDO bonds for BDO when BDO's TWAP > 1 and incentivise profit-seekers who are doing bonds, bond redemption becomes more profitable with a higher BDO TWAP value: bBDO bonds convert to BDO in a 1:R ratio, where R is calculated by the formula below: R = 1.0 + min[0.3, (BDO_twap_price - 1.0) * coeff] Example 1. if BDO's TWAP = 1.05, the bBDO to BDO ratio would be 1:1.0325. Example 2. if BDO's TWAP = 1.10, the bBDO to BDO ratio would be 1:1.065. As you can tell, bBDO holders are able to profit more when they participate in bond redemption when BDO TWAP >> 1.
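The redemption-ratio formula can be sketched in Python. Note that the coefficient value 0.65 used below is inferred from the two worked examples (1.0325 = 1 + 0.05 × 0.65 and 1.065 = 1 + 0.10 × 0.65); it is an assumption, not a constant documented here:

```python
def redemption_ratio(twap, coeff=0.65):
    """bBDO -> BDO redemption ratio R = 1 + min(0.3, (twap - 1) * coeff).

    coeff=0.65 is inferred from the worked examples, not from the docs.
    Only meaningful during expansion (twap > 1); the premium is capped,
    so R never exceeds 1.3.
    """
    return 1.0 + min(0.3, (twap - 1.0) * coeff)

# Reproduce the two examples from the text.
assert abs(redemption_ratio(1.05) - 1.0325) < 1e-9
assert abs(redemption_ratio(1.10) - 1.065) < 1e-9
# The cap: even a very high TWAP yields at most 1:1.3.
assert redemption_ratio(2.0) == 1.3
```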
A Borel topos Donovan H. Van Osdol (1981) A category Ψ-density topology Władysław Wilczyński, Wojciech Wojdowski (2011) Ψ-density point of a Lebesgue measurable set was introduced by Taylor in [Taylor S.J., On strengthening the Lebesgue Density Theorem, Fund. Math., 1958, 46, 305–315] and [Taylor S.J., An alternative form of Egoroff’s theorem, Fund. Math., 1960, 48, 169–174] as an answer to a problem posed by Ulam. We present a category analogue of the notion and of the Ψ-density topology. We define a category analogue of the Ψ-density point of the set A at a point x as the Ψ-density point at x of the regular open... A class of Carathéodory outer measures in topological spaces Wilbur, John (1973) A class of sets whose distance set fills an interval Ken W. Lee (1979) A Convergence Theorem for Measures in Regular Hausdorff Spaces. Peter Gänssler (1971) A converse of the Arsenin–Kunugui theorem on Borel sets with σ-compact sections P. Holický, Miroslav Zelený (2000) Let f be a Borel measurable mapping of a Luzin (i.e. absolute Borel metric) space L onto a metric space M such that f(F) is a Borel subset of M if F is closed in L. We show that then f^{-1}(y) is a K_σ set for all except countably many y ∈ M, that M is also Luzin, and that the Borel classes of the sets f(F), F closed in L, are bounded by a fixed countable ordinal. This gives a converse of the classical theorem of Arsenin and Kunugui. As a particular case we get Taĭmanov’s theorem saying that the image of... A dominated convergence theorem for real-valued summable set functions Appling, William D.L. (1970) A general form of the Vitali theorem Miguel de Guzmán (1975) A generalization of the Nikodym boundedness theorem. Christopher Stuart (2007) In this note an internal property of a ring of sets, named the Nested Partition Property, is shown to imply the Nikodym Property. A wide range of examples are shown to have this property. 
A generalization of the Vitali covering theorem D. Sarkhel (1977) A measurable selection theorem A metric space associated with a probability space. Taylor, Keith F., Wang, Xikui (1993) A new type of affine Borel function. A Note on a Theorem of A. D. Alexandroff Zdena Riečanová (1971) A note on complex unions of subsets of the real line Jacek Cichoń, Andrzej Jasiński (2001) A note on determinacy of measures Pavel Pták, Josef Tkadlec (1988)