content
stringlengths
86
994k
meta
stringlengths
288
619
Savings Model Example This is an experimental feature! Experimental features are early versions for users to test before final release. We work hard to ensure that every available Simudyne SDK feature is thoroughly tested, but these experimental features may have minor bugs we still need to work on. If you have any comments or find any bugs, please share with support@simudyne.com.
{"url":"https://docs.simudyne.com/reference/system_dynamics/savings_model/","timestamp":"2024-11-09T03:48:35Z","content_type":"text/html","content_length":"729411","record_id":"<urn:uuid:8f7413a5-b53d-4711-b3c9-993b1da9150a>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00842.warc.gz"}
A7 size in point. Read here what the A7 size is in po. A7 size in point The most common A7 paper size in point is 210 x 298 po. To calculate the A7 size (and other A sizes) in point, you can use the calculator below. First, select the A-size and then choose the unit in which you want the calculation. The result is shown in the last column undersize. A point is the smallest unit of measurement in typography. It is mostly used for measuring font size and aligning on a page. Since the 80’s the DeskTop Publishing point has become the standard. The DTP point is defined as 1⁄72 of an inch. This is 1/72 ⋅ 25.4 mm ≈ 0.353 mm. A7 size in point calculator. With the point calculator, you first have to select the A-size with the desired unit. By default, this is set to point on this page. The result of the calculation is displayed in the last column. You now see the size of an A7 in point. Overview A-size in point. Overview of the A7 size in point (po). Size Point 4A0 4768.47 x 6741.63 2A0 3370.815 x 4768.47 A0 2384.235 x 3370.815 A1 1683.99 x 2384.235 A2 1190.7 x 1683.99 A3 841.995 x 1190.7 A4 595.35 x 841.995 A5 419.58 x 595.35 A6 297.675 x 419.58 A7 209.79 x 297.675 A8 147.42 x 209.79 A9 104.895 x 147.42 A10 73.71 x 104.895 Other units such as centimetres, pixels and meters. Choose from the units below for the A7 size in Micrometres (μm), Millimeters (mm), Centimetres (cm), Meters (m), Thou (th), Inches (in), Feet (ft), Yards (yd), Pixels, Pica and HPGL.
{"url":"https://www.a7-size.com/a7-size-in-point/","timestamp":"2024-11-02T14:56:27Z","content_type":"text/html","content_length":"39210","record_id":"<urn:uuid:f8c95548-7078-493c-a836-a97c2145912b>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00231.warc.gz"}
Differences in handling compiled functions between Plot3D and VectorPlot 7987 Views 3 Replies 0 Total Likes Differences in handling compiled functions between Plot3D and VectorPlot I have run into an interesting situation using a compiled function with VectorPlot (and StreamPlot) to plot 2D slices of a 3D vector field, wherein I get the following error: CompiledFunction::cfsa: Argument x at position 1 should be a machine-size real number. >> but the operation still produces the desired results. Consider a function ff (simplified for the example's sake) ff[{x_, y_, z_}] := {x, y, z} ffcomp = Compile[{{x, _Real}, {y, _Real}, {z, _Real}}, ff[{x, y, z}]] Plot3D[ff[{x, 0, y}][[{1, 3}]], {x, 0, 2}, {y, 0, 2}] Plot3D[ffcomp[x, 0, y][[{1, 3}]], {x, 0, 2}, {y, 0, 2}] produce identical results with no error messages. VectorPlot (or StreamPlot) VectorPlot[ff[{x, 0, y}][[{1, 3}]], {x, 0, 2}, {y, 0, 2}] VectorPlot[ffcomp[x, 0, y][[{1, 3}]], {x, 0, 2}, {y, 0, 2}] Produce the same results but the second case gives the error message shown above. What is the difference in how VectorPlot and Plot3D treat their arguments that causes this? BTW, working from a suggestion I found on stackexchange, if I define the function ff2[x_?NumericQ, y_?NumericQ, z_?NumericQ] := ffcomp[x, y, z] VectorPlot[ff2[x, 0, y][[{1, 3}]], {x, 0, 2}, {y, 0, 2}] no error message is produced. So, in summary, while I can get Mathematica to do what I need it to, I'm still trying to fully understand how the world of Hold/Release/Evaluate work. 3 Replies Some Mathematica functions do a symbolic evaluation first. I believe there is an option for Compile, EvaluateSymbolically which can be set to True. Thanks Frank, However, setting EvaluateSymbolically to either True or False does not change the behavior. I have been able to use these function so far (within Show) by wrapping them in Quiet. I should have been more explicit. It's RuntimeOptions -> {"EvaluateSymbolically -> True} Be respectful. Review our Community Guidelines to understand your role and responsibilities. Community Terms of Use
{"url":"https://community.wolfram.com/groups/-/m/t/178134?sortMsg=Replies","timestamp":"2024-11-14T22:16:22Z","content_type":"text/html","content_length":"105109","record_id":"<urn:uuid:b20cf2a7-a51a-4da1-a787-5bfcece27bee>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00276.warc.gz"}
Selecting ultracapacitors for smoothing voltage deviations in local grids fed by transformer with tap-changer and distributed pv facilities Widespread use of photovoltaic (PV) small and middle-power plants close or inside existing townships and villages may cause significant deviations of the grid voltage. Owing to the oscillation of solar irradiation and corresponding power flows these voltage instabilities can damage equipment and must be prevented. Designated for the voltage regulation tap-changers in distribution transformers located in a significant distance of such settlements have a sluggish response time. As a possible answer for their delay is the smoothing energy of flows in PV power installation by intermittent capacitor low-pass filtering (LPF) located near those PV facilities. The application of ultracapacitors (UC) for LPF is remarkable due to their sustainability and relatively low costs of energy storage. The parameters selection of such appliances is a well-designed procedure for linear circuits. However, DC–AC inverters in PV facilities are represented by a power (instead of a voltage) source. As a result, the total circuit including such LPF becomes a non-linear and its transient process and consequently, its efficiency is difficult to assess requiring each time of development the UC storage an application complex numerical procedure. Engineers are usual to work with linear circuits that are describing fine by a time constant is designated as a multiplication of a capacitance times load equivalent resistance. In the case of PV DC–AC inverters, such an approach can be applied as well but a value of a time constant should be corrected. Considering a significant cost of UC storage, the non-optimal selection of a correcting coefficient may cause considerable loses. Submitted in the presented article is an original approximation procedure giving an efficiently approachable technique to select correcting coefficient for describing non-linear dynamic process by its linear analog. This way the development low-pass UC filtering in electrical systems with PV plants becomes more efficient and simpler task. טביעת אצבע להלן מוצגים תחומי המחקר של הפרסום 'Selecting ultracapacitors for smoothing voltage deviations in local grids fed by transformer with tap-changer and distributed pv facilities'. יחד הם יוצרים טביעת אצבע ייחודית.
{"url":"https://cris.ariel.ac.il/iw/publications/selecting-ultracapacitors-for-smoothing-voltage-deviations-in-loc-3","timestamp":"2024-11-07T04:17:43Z","content_type":"text/html","content_length":"63955","record_id":"<urn:uuid:42ed4a0c-b2d3-4fd1-be95-ffabe815d002>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00707.warc.gz"}
Operations on Cartesian vectors in 2-D and 3-D gmt vector [ table ] [ -Am[conf]|vector ] [ -C[i|o] ] [ -E ] [ -N ] [ -Svector ] [ -Ta|d|D|pazim|r[arg]|R|s|t[arg]|x ] [ -V[level] ] [ -bbinary ] [ -dnodata[+ccol] ] [ -eregexp ] [ -fflags ] [ -ggaps ] [ -hheaders ] [ -iflags ] [ -jflags ] [ -oflags ] [ -qflags ] [ -sflags ] [ -:[i|o] ] [ --PAR=value ] Note: No space is allowed between the option flag and the associated arguments. vector reads either (x, y), (x, y, z), (r, theta) or (lon, lat) [or (lat, lon); see -:] coordinates from the first 2-3 columns on standard input [or one or more tables]. If -fg is selected and only two items are read (i.e., lon, lat) then these coordinates are converted to Cartesian three-vectors on the unit sphere. Otherwise we expect (r, theta) unless -Ci is in effect. If no file is found we expect a single vector to be given as argument to -A; this argument will also be interpreted as an x/y[/z], lon/lat, or r/theta vector. The input vectors (or the one provided via -A) are denoted the prime vector(s). Several standard vector operations (angle between vectors, cross products, vector sums, and vector rotations) can be selected; most require a single second vector, provided via -S. The output vectors will be converted back to (lon, lat) or (r, theta) unless -Co is set, which requests (x, y[, z]) Cartesian coordinates. Required Arguments One or more ASCII [or binary, see -bi] file containing (lon, lat) [or (lat, lon) if -:] values in the first 2 columns (if -fg is given) or (r, theta), or perhaps (x, y[, z]) if -Ci is given). If no file is specified, vector, will read from standard input. Optional Arguments Specify a single, primary vector instead of reading data table(s); see table for possible vector formats. Alternatively, append m to read table and set the single, primary vector to be the mean resultant vector first. We also compute the confidence ellipse for the mean vector (azimuth of major axis, major axis, and minor axis; for geographic data the axes will be reported in km). You may optionally append the confidence level in percent [95]. These three parameters are reported in the final three output columns. Select Cartesian coordinates on input and output. Append i for input only or o for output only; otherwise both input and output will be assumed to be Cartesian [Default is polar r/theta for 2-D data and geographic lon/lat for 3-D]. Convert input geographic coordinates from geodetic to geocentric and output geographic coordinates from geocentric to geodetic. Ignored unless -fg is in effect, and is bypassed if -C is selected. Normalize the resultant vectors prior to reporting the output [No normalization]. This only has an effect if -Co is selected. Specify a single, secondary vector in the same format as the first vector. Required by operations in -T that need two vectors (average, bisector, dot product, cross product, and sum). Specify the vector transformation of interest via these directives: □ a: Compute the vector average. □ b: Determines the pole of the two points bisector. □ d: Compute dot product of the two vectors. □ D: Same as +d but returns the angle in degrees between the two vectors. □ p: The pole to the great circle specified by input vector and the circle’s azim (no second vector used). □ s: Evaluate the vector sum. □ r: Perform vector rotation (here, par is a single angle for 2-D Cartesian data and lon/lat/angle for a 3-D rotation pole and angle) □ R: Similar to r but will instead rotate the fixed secondary vector by the rotations implied by the input records. □ t: Translate the input point by a distance in the azimuth direction (append azimuth/distance[unit] for the same translation for all input points, or just append unit to read azimuth and distance (in specified unit [e]) from the third and fourth data column in the file. □ x: Compute the vectors or cross-product. If -T is not given then no transformation takes place; the output is determined by other options, such as -A, -C, -E, and -N. Notes for directive t : (1) If geographic coordinates we will perform a great circle calculation unless -je or -jf is selected; (2) if a distance is negative then we remove the sign and add 180 degrees to the azimuth. Select verbosity level [w]. (See full description) (See technical reference). -birecord[+b|l] (more …) Select native binary format for primary table input. [Default is 2 or 3 input columns]. -borecord[+b|l] (more …) Select native binary format for table output. [Default is same as input]. -d[i|o][+ccol]nodata (more …) Replace input columns that equal nodata with NaN and do the reverse on output. -e[~]“pattern” | -e[~]/regexp/[i] (more …) Only accept data records that match the given pattern. -f[i|o]colinfo (more …) Specify data types of input and/or output columns. -gx|y|z|d|X|Y|Dgap[u][+a][+ccol][+n|p] (more …) Determine data gaps and line breaks. -h[i|o][n][+c][+d][+msegheader][+rremark][+ttitle] (more …) Skip or produce header record(s). -icols[+l][+ddivisor][+sscale|d|k][+ooffset][,…][,t[word]] (more …) Select input columns and transformations (0 is first column, t is trailing text, append word to read one word only). -je|f|g (more …) Determine how spherical distances or coordinate transformations are calculated. -ocols[+l][+ddivisor][+sscale|d|k][+ooffset][,…][,t[word]] (more …) Select output columns and transformations (0 is first column, t is trailing text, append word to write one word only). -q[i|o][~]rows|limits[+ccol][+a|t|s] (more …) Select input or output rows or data limit(s) [all]. -:[i|o] (more …) Swap 1st and 2nd column on input and/or output. -^ or just - Print a short message about the syntax of the command, then exit (Note: on Windows just use -). -+ or just + Print an extensive usage (help) message, including the explanation of any module-specific option (but not the GMT common options), then exit. -? or no arguments Print a complete usage (help) message, including the explanation of all options, then exit. Temporarily override a GMT default setting; repeatable. See gmt.conf for parameters. ASCII Format Precision The ASCII output formats of numerical data are controlled by parameters in your gmt.conf file. Longitude and latitude are formatted according to FORMAT_GEO_OUT, absolute time is under the control of FORMAT_DATE_OUT and FORMAT_CLOCK_OUT, whereas general floating point values are formatted according to FORMAT_FLOAT_OUT. Be aware that the format in effect can lead to loss of precision in ASCII output, which can lead to various problems downstream. If you find the output is not written with enough precision, consider switching to binary output (-bo if available) or specify more decimals using the FORMAT_FLOAT_OUT setting. Note: Below are some examples of valid syntax for this module. The examples that use remote files (file names starting with @) can be cut and pasted into your terminal for testing. Other commands requiring input files are just dummy examples of the types of uses that are common but cannot be run verbatim as written. To determine the mean location of all points in the remote geographic file @ship_15.txt as well as the 95% confidence ellipse around that point, try: gmt vector @ship_15.txt -Am -fg Suppose you have a file with (lon, lat) called points.txt. You want to compute the spherical angle between each of these points and the location 133/34. Try: gmt vector points.txt -S133/34 -TD -fg > angles.txt To rotate the same points 35 degrees around a pole at 133/34, and output Cartesian 3-D vectors, use: gmt vector points.txt -Tr133/34/35 -Co -fg > reconstructed.txt To rotate the point 65/33 by all rotations given in file rots.txt, use: gmt vector rots.txt -TR -S64/33 -fg > reconstructed.txt To compute the cross-product between the two Cartesian vectors 0.5/1/2 and 1/0/0.4, and normalizing the result, try: gmt vector -A0.5/1/2 -Tx -S1/0/0.4 -N -C > cross.txt To rotate the 2-D vector, given in polar form as r = 2 and theta = 35, by an angle of 120, try: gmt vector -A2/35 -Tr120 > rotated.txt To find the mid-point along the great circle connecting the points 123/35 and -155/-30, use: gmt vector -A123/35 -S-155/-30 -Ta -fg > midpoint.txt To find the mean location of the geographical points listed in points.txt, with its 99% confidence ellipse, use: gmt vector points.txt -Am99 -fg > centroid.txt To find the pole corresponding to the great circle that goes through the point -30/60 at an azimuth of 105 degrees, use: gmt vector -A-30/60 -Tp105 -fg > pole.txt To translate all locations in the geographic file points.txt by 65 km to the NE on a spherical Earth, try: gmt vector points -Tt45/65k -fg > shifted.txt To determine the point that is 23 nautical miles along a geodesic with a bearing of 310 degrees from the origin at (8E, 50N), try: echo 8 50 | gmt vector -Tt310/23n -je For more advanced 3-D rotations as used in plate tectonic reconstructions, see the GMT “spotter” supplement.
{"url":"https://docs.generic-mapping-tools.org/dev/gmtvector.html","timestamp":"2024-11-04T02:33:48Z","content_type":"text/html","content_length":"33015","record_id":"<urn:uuid:b08306d3-6e1e-4c93-be88-c963e65c0442>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00572.warc.gz"}
Package org.checkerframework.dataflow.analysis package org.checkerframework.dataflow.analysis • Class A worklist is a priority queue of blocks in which the order is given by depth-first ordering to place non-loop predecessors ahead of successors. An abstract value used in the org.checkerframework.dataflow analysis. This interface defines a dataflow analysis, given a control flow graph and a transfer function. In calls to Analysis#runAnalysisFor, whether to return the store before or after the given node. The direction of an analysis instance. represents the result of a org.checkerframework.dataflow analysis by providing the abstract values given a node or a tree. This interface defines a backward analysis, given a control flow graph and a backward transfer function. An implementation of a backward analysis to solve a org.checkerframework.dataflow problem given a control flow graph and a backward transfer function. Interface of a backward transfer function for the abstract interpretation used for the backward flow analysis. This interface defines a forward analysis, given a control flow graph and a forward transfer function. An implementation of a forward analysis to solve a org.checkerframework.dataflow problem given a control flow graph and a forward transfer function. Interface of a forward transfer function for the abstract interpretation used for the forward flow analysis. Implementation of a with just one non-exceptional store. A store is used to keep track of the information that the org.checkerframework.dataflow analysis has accumulated at any given point in time. A flow rule describes how stores flow along one edge between basic blocks. Interface of a transfer function for the abstract interpretation used for the flow analysis. is used as the result type of the individual transfer functions of a UnusedAbstractValue is an AbstractValue that is not involved in any lub computation during dataflow analysis.
{"url":"https://checkerframework.org/releases/3.45.0/api/org/checkerframework/dataflow/analysis/package-summary.html","timestamp":"2024-11-04T03:56:40Z","content_type":"text/html","content_length":"19073","record_id":"<urn:uuid:3dfd30ff-0ace-4238-8dfd-ecdff7d35c0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00126.warc.gz"}
What Is 20 Percent Off of 30 Dollars? What Is 20 Percent Off of 30 Dollars? 20 percent of 30 dollars is 6 dollars. Calculating the Percent Off You’ve probably been shopping and seen something you wanted to buy that was on sale. It’s listed as 20 percent off. How can you figure out how much you’ll save? You need to figure out how much 20% off is on that particular product or service. Leave a Comment
{"url":"https://thestudyish.com/what-is-20-percent-off-of-30-dollars/","timestamp":"2024-11-13T06:15:27Z","content_type":"text/html","content_length":"52276","record_id":"<urn:uuid:49c7398f-3578-417a-9e04-036f4fe0d6dd>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00067.warc.gz"}
Priors and Semi-supervised Learning Priors and Semi-supervised Learning author: Jacob Schreiber contact: jmschreiber91@gmail.com Most classical machine learning algorithms either assume that an entire dataset is either labeled (supervised learning) or that there are no labels (unsupervised learning). However, frequently it is the case that some labeled data is present but there is a great deal of unlabeled data as well. A great example of this is that of computer vision where the internet is filled of pictures (mostly of cats) that could be useful, but you don’t have the time or money to label them all in accordance with your specific task. Typically what ends up happening is that either the unlabeled data is discarded in favor of training a model solely on the labeled data, or that an unsupervised model is initialized with the labeled data and then set free on the unlabeled data. Neither method uses both sets of data in the optimization process. Semi-supervised learning is a method to incorporate both labeled and unlabeled data into the training task, typically yield better performing estimators than using the labeled data alone. There are many methods one could use for semisupervised learning, and scikit-learn has a good write-up on some of these techniques. pomegranate natively implements semi-supervised learning by accepting a matrix of prior probabilities for most classes. Specifically, the prior probabilities give the probability that an example maps to each component in the model, e.g. one of the distributions in a mixture model. When this probability is 1.0, it effectively acts as a hard label, saying that some example must come from that distribution. If all examples have prior probabilities of 1.0 for some component, this is exactly supervised learning. If some examples have prior probabilities of 1.0 and others are uniform, this corresponds to semi-supervised learning. When you have soft evidence, these prior probabilities can be between the uniform distribution and 1.0 to indicate some amount of confidence. Note that prior probabilities are not the same as soft targets, in that the training is not aiming to classify each point as being proportionately from each distribution. For example, if the prior probabilities for an example given a model is [0.4, 0.6], the learning objective does not attempt to make the underlying distributions yield [0.4, 0.6] for this example, but rather that the likelihood probabilities across distributions are multiplied by these values via Bayes’ rule. Let’s take a look! %pylab inline import seaborn; seaborn.set_style('whitegrid') import torch %load_ext watermark %watermark -m -n -p numpy,scipy,torch,pomegranate Populating the interactive namespace from numpy and matplotlib numpy : 1.23.4 scipy : 1.9.3 torch : 1.13.0 pomegranate: 1.0.0 Compiler : GCC 11.2.0 OS : Linux Release : 4.15.0-208-generic Machine : x86_64 Processor : x86_64 CPU cores : 8 Architecture: 64bit Simple Setting Let’s first generate some data in the form of blobs that are close together. Generally one tends to have far more unlabeled data than labeled data, so let’s say that a person only has 50 samples of labeled training data and 4950 unlabeled samples. In pomegranate you a sample can be specified as lacking a label by providing the integer -1 as the label, just like in scikit-learn. Let’s also say there there is a bit of bias in the labeled samples to inject some noise into the problem, as otherwise Gaussian blobs are trivially modeled with even a few samples. from sklearn.datasets import make_blobs X, y = make_blobs(10000, n_features=2, centers=3, cluster_std=2) X_train = X[:5000].astype('float32') y_train = y[:5000] # Set the majority of samples to unlabeled. y_train[numpy.random.choice(5000, size=4950, replace=False)] = -1 # Inject noise into the problem X_train[y_train != -1] += 2.5 X_test = X[5000:].astype('float32') y_test = y[5000:] plt.figure(figsize=(6, 6)) plt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6', s=5) plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c', s=15) plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m', s=15) plt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r', s=15) The clusters of unlabeled data seem clear to us, and it doesn’t seem like the labeled data is perfectly faithful to these clusters. This form of distributional shift can typically happen in a semisupervised setting as well, as the data that is labeled is sometimes biased either because the labeled data was chosen as it was easy to label, or the data was chosen to be labeled in a biased maner. However, because we have lots of unlabeled data, we can usually overcome such shifts. Now let’s try fitting a simple Bayes classifier to the labeled data and compare the accuracy and decision boundaries to when we fit a Gaussian mixture model in a semi-supervised way. As mentioned before, we can do semi-supervised learning by passing in a matrix of prior probabilities that are uniform when the label is unknown, and when the label is known has a value of 1.0 for the class corresponding to the label. from pomegranate.gmm import GeneralMixtureModel from pomegranate.bayes_classifier import BayesClassifier from pomegranate.distributions import Normal idx = y_train != -1 model_a = BayesClassifier([Normal(), Normal(), Normal()]) model_a.fit(X_train[idx], y_train[idx]) y_hat_a = model_a.predict(X_test).numpy() print("Supervised Learning Accuracy: {}".format((y_hat_a == y_test).mean())) priors = torch.zeros(len(X_train), 3) for i, y in enumerate(y_train): if y != -1: priors[i, y] = 1.0 priors[i] = 1./3 dists = [] for i in range(3): dists.append(Normal().fit(X_train[y_train == i])) model_b = GeneralMixtureModel(dists) model_b.fit(X_train, priors=priors) y_hat_b = model_b.predict(X_test).numpy() print("Semisupervised Learning Accuracy: {}".format((y_hat_b == y_test).mean())) Supervised Learning Accuracy: 0.8842 Semisupervised Learning Accuracy: 0.9206 It seems like we get a big bump in test set accuracy when we use semi-supervised learning. Note that, due to the smooth nature of prior probabilities, distributions are not initialized according to the hard labels at first. You can manually do this as above, though, where you first fit the distributions to the labeled data and then fine-tune on the entire data set given the prior probabilities. Let’s visualize the data to get a better sense of what is happening here. def plot_contour(X, y, Z): plt.scatter(X[y == -1, 0], X[y == -1, 1], color='0.6', s=5) plt.scatter(X[y == 0, 0], X[y == 0, 1], color='c', s=15) plt.scatter(X[y == 1, 0], X[y == 1, 1], color='m', s=15) plt.scatter(X[y == 2, 0], X[y == 2, 1], color='r', s=15) plt.contour(xx, yy, Z) plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) x_min, x_max = X[:,0].min()-2, X[:,0].max()+2 y_min, y_max = X[:,1].min()-2, X[:,1].max()+2 xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1)) Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(16, 16)) plt.title("Training Data, Supervised Boundaries", fontsize=16) plot_contour(X_train, y_train, Z1) plt.title("Training Data, Semi-supervised Boundaries", fontsize=16) plot_contour(X_train, y_train, Z2) plt.title("Test Data, Supervised Boundaries", fontsize=16) plot_contour(X_test, y_test, Z1) plt.title("Test Data, Semi-supervised Boundaries", fontsize=16) plot_contour(X_test, y_test, Z2) The contours plot the decision boundaries between the different classes with the left figures corresponding to the partially labeled training set and the right figures corresponding to the test set. We can see that the boundaries learning using only the labeled data look a bit weird when considering the unlabeled data, particularly in that it doesn’t cleanly separate the cyan cluster from the other two. In addition, it seems like the boundary between the magenta and red clusters is a bit curved in an unrealistic way. We would not expect points that fell around (-18, -7) to actually come from the red class. Training the model in a semi-supervised manner cleaned up both of these concerns by learning better boundaries that are also flatter and more generalizable. Let’s next compare the training times to see how much slower it is to do semi-supervised learning than it is to do simple supervised learning. from sklearn.semi_supervised import LabelPropagation print("Supervised Learning") %timeit BayesClassifier([Normal(), Normal(), Normal()]).fit(X_train[idx], y_train[idx]) print("Semi-supervised Learning") %timeit GeneralMixtureModel([Normal(), Normal(), Normal()]).fit(X_train, priors=priors) print("Label Propagation (sklearn): ") %timeit -n 1 -r 1 LabelPropagation().fit(X_train, y_train) # Setting to 1 loop because it takes a long time Supervised Learning 3.13 ms ± 304 µs per loop (mean ± std. dev. of 7 runs, 100 loops each) Semi-supervised Learning 444 ms ± 59.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each) Label Propagation (sklearn): 1min ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each) /home/jmschr/anaconda3/lib/python3.9/site-packages/sklearn/semi_supervised/_label_propagation.py:316: ConvergenceWarning: max_iter=1000 was reached without convergence. It is quite a bit slower to do semi-supervised learning than simple supervised learning in this example. This is expected as the simple supervised update for Bayes classifier is a trivial MLE across each distribution whereas the semi-supervised case requires an iterative algorithm, EM, to converge. However, doing semi-supervised learning with EM is still much faster than fitting the label propagation estimator from sklearn. More Complicated Setting Although the previous setting demonstrated our point, it was still fairly simple. We can construct a more complex situation where there are complex Gaussian distributions and each component is a mixture of distributions rather than a single one. This will highlight both the power of semi-supervised learning, as well as pomegranate’s ability to stack models – in this case, stack a mixture within a mixture. First let’s generate some more complicated, noisier data. X = numpy.empty(shape=(0, 2)) X = numpy.concatenate((X, numpy.random.normal(4, 1, size=(3000, 2)).dot([[-2, 0.5], [2, 0.5]]))) X = numpy.concatenate((X, numpy.random.normal(3, 1, size=(6500, 2)).dot([[-1, 2], [1, 0.8]]))) X = numpy.concatenate((X, numpy.random.normal(7, 1, size=(8000, 2)).dot([[-0.75, 0.8], [0.9, 1.5]]))) X = numpy.concatenate((X, numpy.random.normal(6, 1, size=(2200, 2)).dot([[-1.5, 1.2], [0.6, 1.2]]))) X = numpy.concatenate((X, numpy.random.normal(8, 1, size=(3500, 2)).dot([[-0.2, 0.8], [0.7, 0.8]]))) X = numpy.concatenate((X, numpy.random.normal(9, 1, size=(6500, 2)).dot([[-0.0, 0.8], [0.5, 1.2]]))) X = X.astype('float32') x_min, x_max = X[:,0].min()-2, X[:,0].max()+2 y_min, y_max = X[:,1].min()-2, X[:,1].max()+2 y = numpy.concatenate((numpy.zeros(9500), numpy.ones(10200), numpy.ones(10000)*2)).astype('int32') idxs = numpy.arange(29700) X = X[idxs] y = y[idxs] X_train, X_test = X[:25000], X[25000:] y_train, y_test = y[:25000], y[25000:] y_train[numpy.random.choice(25000, size=24920, replace=False)] = -1 plt.figure(figsize=(6, 6)) plt.scatter(X_train[y_train == -1, 0], X_train[y_train == -1, 1], color='0.6', s=1) plt.scatter(X_train[y_train == 0, 0], X_train[y_train == 0, 1], color='c', s=10) plt.scatter(X_train[y_train == 1, 0], X_train[y_train == 1, 1], color='m', s=10) plt.scatter(X_train[y_train == 2, 0], X_train[y_train == 2, 1], color='r', s=10) Now let’s take a look at the accuracies that we get when training a model using just the labeled examples versus all of the examples in a semi-supervised manner. d1 = GeneralMixtureModel([Normal(), Normal()]) d2 = GeneralMixtureModel([Normal(), Normal()]) d3 = GeneralMixtureModel([Normal(), Normal()]) model_a = BayesClassifier([d1, d2, d3]) model_a.fit(X_train[y_train != -1], y_train[y_train != -1]) y_hat_a = model_a.predict(X_test).numpy() print("Supervised Learning Accuracy: {}".format((y_hat_a == y_test).mean())) priors = torch.zeros(len(X_train), 3) for i, y in enumerate(y_train): if y != -1: priors[i, y] = 1.0 priors[i] = 1./3 d1 = GeneralMixtureModel([Normal(), Normal()]).fit(X_train[y_train == 0]) d2 = GeneralMixtureModel([Normal(), Normal()]).fit(X_train[y_train == 1]) d3 = GeneralMixtureModel([Normal(), Normal()]).fit(X_train[y_train == 2]) model_b = GeneralMixtureModel([d1, d2, d3]) model_b.fit(X_train, priors=priors) y_hat_b = model_b.predict(X_test).numpy() print("Semisupervised Learning Accuracy: {}".format((y_hat_b == y_test).mean())) Supervised Learning Accuracy: 0.935531914893617 Semisupervised Learning Accuracy: 0.9846808510638297 As expected, the semi-supervised method performs better. Let’s visualize the landscape in the same manner as before in order to see why this is the case. xx, yy = numpy.meshgrid(numpy.arange(x_min, x_max, 0.1), numpy.arange(y_min, y_max, 0.1)) Z1 = model_a.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) Z2 = model_b.predict(numpy.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape) plt.figure(figsize=(16, 16)) plt.title("Training Data, Supervised Boundaries", fontsize=16) plot_contour(X_train, y_train, Z1) plt.title("Training Data, Semi-supervised Boundaries", fontsize=16) plot_contour(X_train, y_train, Z2) plt.title("Test Data, Supervised Boundaries", fontsize=16) plot_contour(X_test, y_test, Z1) plt.title("Test Data, Semi-supervised Boundaries", fontsize=16) plot_contour(X_test, y_test, Z2) Immediately, one would notice that the decision boundaries when using semi-supervised learning are smoother than those when using only a few samples. This can be explained mostly because having more data can generally lead to smoother decision boundaries as the model does not overfit to spurious examples in the dataset. It appears that the majority of the correctly classified samples come from having a more accurate decision boundary for the magenta samples in the left cluster. When using only the labeled samples many of the magenta samples in this region get classified incorrectly as cyan samples. In contrast, when using all of the data these points are all classified correctly. Hidden Markov models can take in priors just as easily as mixture models can. Just as the data must be 3D, the priors also must be 3D. from pomegranate.hmm import DenseHMM d1 = Normal([1.0], [1.0], covariance_type='diag') d2 = Normal([3.0], [1.0], covariance_type='diag') model = DenseHMM([d1, d2], [[0.7, 0.3], [0.3, 0.7]], starts=[0.4, 0.6]) We can first look at what a forward pass would look like without priors. X = torch.randn(1, 10, 1) tensor([[[0.9398, 0.0602], [0.9271, 0.0729], [0.6459, 0.3541], [0.9699, 0.0301], [0.9812, 0.0188], [0.9953, 0.0047], [0.9987, 0.0013], [0.9738, 0.0262], [0.9782, 0.0218], [0.9924, 0.0076]]]) Now let’s add in that one observation must map to one specific state. priors = torch.ones(1, 10, 2) / 2 priors[0, 5, 0], priors[0, 5, 1] = 0, 1 tensor([[[0.5000, 0.5000], [0.5000, 0.5000], [0.5000, 0.5000], [0.5000, 0.5000], [0.5000, 0.5000], [0.0000, 1.0000], [0.5000, 0.5000], [0.5000, 0.5000], [0.5000, 0.5000], [0.5000, 0.5000]]]) model.predict_proba(X, priors=priors) tensor([[[0.9398, 0.0602], [0.9267, 0.0733], [0.6427, 0.3573], [0.9620, 0.0380], [0.9070, 0.0930], [0.0000, 1.0000], [0.9930, 0.0070], [0.9732, 0.0268], [0.9782, 0.0218], [0.9924, 0.0076]]]) We can see that the model is forced to assign the 6th observation to distribution 1 even though it is trying to assign all of the observations to distribution 0. Using prior probabilities like this gives us a way to do semi-supervised learning on HMMs in a very flexible manner. If you have entire labeled sequences, you can pass in prior probabilities of 1.0 for each observation in the sequence, even if other sequences have no labels at all. If you have partially labeled sequences, you can easily train your model using priors on some, but not all, observations, and using those partial observations to inform everything else. In the real world (ack) there are frequently situations where only a small fraction of the available data has useful labels. Semi-supervised learning provides a framework for leveraging both the labeled and unlabeled aspects of a dataset to learn a sophisticated estimator. In this case, semi-supervised learning plays well with probabilistic models as normal maximum likelihood estimates can be done on the labeled data and expectation-maximization can be run on the unlabeled data using the same distributions. This notebook has covered how to implement semi-supervised learning in pomegranate using mixture models and hidden Markov models. All one has to do is set the priors of observations consistently with the labels and pomegranate will take care of the rest. This can be particularly useful when encountering complex, noisy, data in the real world that aren’t neat Gaussian blobs.
{"url":"https://pomegranate.readthedocs.io/en/latest/tutorials/C_Feature_Tutorial_4_Priors_and_Semi-supervised_Learning.html","timestamp":"2024-11-07T21:48:39Z","content_type":"text/html","content_length":"36782","record_id":"<urn:uuid:d87b3cfa-86dd-42de-9672-324b246287c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00034.warc.gz"}
How to Alphabetize a Column in Excel? This guide will show you how to sort Excel alphabetically using sorting and filtering functions to organize your data from A to Z. This feature is especially useful for large datasets whose manual processing – alphabetical information in Excel takes a long time. Alphabetizing of a column or list means arranging the list alphabetically in Excel. This can be done in two ways, in descending order or in descending order. Uses of Alphabetic sorting in Excel 1. It makes the data more sensible. 2. It gives you the ease to search values based on alphabetical order. 3. It also makes it easier for you to visually identify duplicate records in your data set. Method 1 – Alphabetize using options from Excel Ribbon This is one of the easiest ways to organize data in Excel. To use this method, follow these steps: First select the list you want to sort. • Next, navigate to the “Data” Tab on the Excel ribbon and click the “A-Z” icon for ascending order sort or the “Z-A” icon for descending sort. Sort a multi-column data table using this method: If there is a list with two columns, such as “Student Name” and “Role Number”. And you have to make this list based on “Student Name”. Then you should use the “Sequence” button instead of the “A-Z” and “Z-A” buttons. The sort button gives you more control over how you want to sort the list. It allows you to select only one sequence of columns, takes care of the headers of your table, and can calculate data based on font or text color. Follow the below steps to use this method: • First of all, select the table to be alphabetized. • After this click the “Sort” button, on the “Data” tab. • This will open a “Sort” dialog box, in the ‘Column’ dropdown select the column based on which you want to alphabetize your data. • In the ‘Sort On’ dropdown select the ‘values’ option. Using ‘Sort On’ dropdown you can sort your data based on cell colour, font colour or cell icons. In the ‘Order’ field select “A-Z” for Ascending sort or “Z-A” for descending sort. If your data is without a header row then uncheck the ‘My data has headers’ checkbox, otherwise, leave it checked. • Finally, click on the ‘Ok’ button and your data is sorted. Notice that the names are sorted but the corresponding roll numbers have not changed, so the data is still reliable. Method 2 – Alphabetizing a column using shortcut keys If you are someone who uses the keyboard more to perform the same tasks with the mouse, I will share here a list of keyboard shortcuts that will come in handy when arranging columns in Excel. Note Before using these keyboard shortcuts, make sure that you have already selected the data table. Method 3 – Follow the list using an Excel formula: In this method, we will use Excel formulas to sort the list alphabetically. The two formulas we use are COUNTIF and VLOOKUP. Many of you will think, “How do we sort the list using the CountIf function?” The trick behind this is that we can use the COUNTIF function to calculate values based on specific behavior. For example: Suppose we have a list with some alphabets 'o, l, n, m, p, q' in a range A1:A6. Now if we use a formula as: The result can be 3, because only three alphabets (l, m, n) mean “o”. This clearly shows that the COUNTIF function can give us a sort when used correctly. We will use this concept in our previous example. First, we’ll create a temporary column called “Subsequent” in our existing table. Next, we use the formula = COUNTIF ($ B $ 2: $ B $ 11, “<=” & B2) for the first student. And then we drag and drop this formula to fill it in its entirety. This formula indicates the order in which the items in the list are arranged. Now we only need to sort the data based on sorting and we will use the VLOOKUP function to do that. =VLOOKUP(<sort number>,A:B,2,0) Here, “type number” means numbers in ascending order from 1 to 10. For descending order, the numbers should be from 10-1. Similarly, for the second and third items, you can use the formula as :=VLOOKUP(2,A:B,2,0) After applying this VlookUp formula the list gets alphabetized. After applying this VlookUp formula the list gets alphabetized. Tip: Instead of manually entering the 1-10 in the above formula, you can also use the row function to ease your task. Row() gives you the row number of the current cell. So, with the use of the row function the formula would be: So, this was all about how to alphabetize in excel. Do share your thoughts about this topic. Related Blogs –
{"url":"https://computersolve.com/how-to-alphabetize-a-column-in-excel/","timestamp":"2024-11-14T18:50:43Z","content_type":"text/html","content_length":"144123","record_id":"<urn:uuid:128d021a-c5a8-450e-9098-a13f41b0988e>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00762.warc.gz"}
Basis sets¶ class mumott.methods.basis_sets.SphericalHarmonics(probed_coordinates=None, ell_max=0, enforce_friedel_symmetry=True, **kwargs)[source]¶ Basis set class for spherical harmonics, the canonical representation of polynomials on the unit sphere and a simple way of representing band-limited spherical functions which allows for easy computations of statistics and is suitable for analyzing certain symmetries. ☆ ell_max (int) – The bandlimit of the spherical functions that you want to be able to represent. A good rule of thumb is that ell_max should not exceed the number of detector segments minus 1. ☆ probed_coordinates (ProbedCoordinates) – Optional. A container with the coordinates on the sphere probed at each detector segment by the experimental method. Its construction from the system geometry is method-dependent. By default, an empty instance of ProbedCoordinates is created. ☆ enforce_friedel_symmetry (bool) – If set to True, Friedel symmetry will be enforced, using the assumption that points on opposite sides of the sphere are equivalent. This results in only even ell being used. ☆ kwargs – Miscellaneous arguments which relate to segment integrations can be passed as keyword arguments: Mode to integrate line segments on the reciprocal space sphere. Possible options are 'simpson', 'midpoint', 'romberg', 'trapezoid'. 'simpson', 'trapezoid', and 'romberg' use adaptive integration with the respective quadrature rule from scipy.integrate. 'midpoint' uses a single mid-point approximation of the integral. Default value is 'simpson'. Number of points used in the first iteration of the adaptive integration. The number increases by the rule N &larr; 2 * N - 1 for each iteration. Default value is 3. Tolerance for the maximum relative error between iterations before the integral is considered converged. Default is 1e-5. Maximum number of iterations. Default is 10. property csr_representation: tuple¶ The projection matrix as a stack of sparse matrices in CSR representation as a tuple. The information in the tuple consists of the 3 dense matrices making up the representation, in the order (pointers, indices, data). property ell_indices: ndarray[Any, dtype[int]]¶ The ell associated with each coefficient and its corresponding spherical harmonic. Updated when ell_max changes. The word ell is used to represent the cursive small L, also written \(\ell\), often used as an index for the degree of the Legendre polynomial in the definition of the spherical harmonics. property ell_max: int¶ The maximum ell used to represent spherical functions. The word ell is used to represent the cursive small L, also written \(\ell\), often used as an index for the degree of the Legendre polynomial in the definition of the spherical harmonics. property emm_indices: ndarray[Any, dtype[int]]¶ The emm associated with each coefficient and its corresponding spherical harmonic. Updated when ell_max changes. For consistency with ell_indices, and to avoid visual confusion with other letters, emm is used to represent the index commonly written \(m\) in mathematical notation, the frequency of the sine-cosine parts of the spherical harmonics, often called the spherical harmonic order. forward(coefficients, indices=None)[source]¶ Carries out a forward computation of projections from spherical harmonic space into detector space, for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients, of arbitrary shape so long as the last axis has the same size as ell_indices, and if indices is None or greater than one, the first axis should have the same length as indices ○ indices (Optional[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]]) – Optional. Indices of the tomographic projections for which the forward computation is to be performed. If None, the forward computation will be performed for all projections. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of values on the detector corresponding to the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. The assumption is made in this implementation that computations over several indices act on sets of images from different projections. For special usage where multiple projections of entire fields is desired, it may be better to use projection_matrix directly. This also applies to gradient(). generate_map(coefficients, resolution_in_degrees=5, map_half_sphere=True)¶ Generate a (theta, phi) map of the function modeled by the input coefficients. If map_half_sphere=True (default) a map of only the z>0 half sphere is returned. ○ coefficients (ndarray[Any, dtype[float]]) – One dimensional numpy array with length len(self) containing the coefficients of the function to be plotted. ○ resolution_in_degrees (int) – The resoution of the map in degrees. The map uses eqidistant lines in longitude and latitude. ○ map_half_sphere (bool) – If True returns a map of the z>0 half sphere. Return type tuple[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]] ○ map_intensity – Intensity values of the map. ○ map_theta – Polar cooridnates of the map. ○ map_phi – Azimuthal coordinates of the map. get_covariances(u, v, resolve_spectrum=False)[source]¶ Returns the covariances of the spherical functions represented by two coefficient arrays. ○ u (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The first coefficient array, of arbitrary shape except its last dimension must be the same length as the length of ell_indices. ○ v (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The second coefficient array, of the same shape as u. ○ resolve_spectrum (bool) – Optional. Whether to resolve the product according to each frequency band, given by the coefficients of each ell in ell_indices. Default value is False. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of the covariances of the spherical functions represented by u and v. Has the shape (u.shape[:-1]) if resolve_spectrum is False, and (u.shape[:-1] + (ell_max // 2 + 1,)) if resolve_spectrum is True, where ell_max is SphericalHarmonics.ell_max. Calling this function is equivalent to calling get_inner_product() with spectral_moments=np.unique(ell_indices[ell_indices > 0]) where ell_indices is SphericalHarmonics.ell_indices. See the note to get_inner_product() for mathematical details. get_inner_product(u, v, resolve_spectrum=False, spectral_moments=None)[source]¶ Retrieves the inner product of two coefficient arrays. The canonical inner product in a spherical harmonic representation is \(\sum_\ell N(\ell) \sum_m u_m^\ell v_m^\ell\), where \(N(\ell)\) is a normalization constant (which is unity for the \(4 \pi\) normalization). This inner product is a rotational invariant. The rotational invariance also holds for any partial sums over \(\ell\). One can define a function of \(\ell\) that returns such products, namely \(S(\ell, u, v) = N(\ell)\sum_m u_m^\ell v_m^\ell\), called the spectral power function. The sum \(\sum_{\ell = 1}S(\ell)\) is equal to the covariance of the band-limited spherical functions represented by \(u\) and \(v\), and each \(S(\ell, u, v)\) is the contribution to the covariance of the band \(\ell\). See also the SHTOOLS documentation for an excellent overview of this. ○ u (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The first coefficient array, of arbitrary shape and dimension, except the last dimension must be the same as the length of ell_indices. ○ v (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The second coefficient array, of the same shape as u. ○ resolve_spectrum (bool) – Optional. Whether to resolve the product according to each frequency band, given by the coefficients of each ell in ell_indices. Defaults to False, which means that the sum of every component of the spectrum is returned. If True, components are returned in order of ascending ell. The ell included in the spectrum depends on ○ spectral_moments (Optional[List[int]]) – Optional. List of particular values of ell to calculate the inner product for. Defaults to None, which is identical to including all values of ell in the calculation. If spectral_moments contains all nonzero values of ell and resolve_spectrum is False, the covariance of v and u will be calculated (the sum of the inner product over all non-zero ell If resolve_spectrum is True, the covariance per ell in spectral_moments, will be calculated, i.e., the inner products will not be summed over. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of the inner products of the spherical functions represented by u and v. Has the shape (u.shape[:-1]) if resolve_spectrum is False, (u.shape[:-1] + (ell_max // 2 + 1,)) if resolve_spectrum is True and spectral_moments is None, and finally the shape (u.shape[:-1] + (np.unique(spectral_moments).size,)) if resolve_spectrum is True and spectral_moments is a list of integers found in ell_indices Returns a dictionary of output data for a given array of spherical harmonic coefficients. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape and dimensions, except its last dimension must be the same length as ell_indices. Computations only operate over the last axis of coefficients, so derived properties in the output will have the shape (*coefficients.shape[:-1], ...). Return type Dict[str, Any] A dictionary containing two sub-dictionaries, basis_set and spherical_functions. basis_set contains information particular to SphericalHarmonics, whereas spherical_functions contains information about the spherical functions represented by the coefficients which are not specific to the chosen representation. In detail, the two sub-dictionaries basis_set and spherical_functions have the following members: The name of the basis set, i.e., 'SphericalHarmonicParameters' A copy of coefficients. A copy of ell_max. A copy of ell_indices. A copy of emm_indices. A copy of projection_matrix. The spherical means of each function represented by coefficients. The spherical variances of each function represented by coefficients. If ell_max is 0, all variances will equal zero. The traceless symmetric rank-2 tensor component of each function represented by coefficients, in 6-element form, in the order [xx, yy, zz, yz, xz, xy], i.e., by the Voigt convention. The matrix form can be recovered as r2_tensors[…, tensor_to_matrix_indices], yielding matrix elements [[xx, xy, xz], [xy, yy, yz], [xz, yz, zz]]. If ell_max is 0, all tensors have elements [1, 0, -1, 0, 0, 0]. A list of indices to help recover the matrix from the 2-element form of the rank-2 tensors, equalling precisely [[0, 5, 4], [5, 1, 3], [4, 3, 2]] The eigenvalues of the rank-2 tensors, sorted in ascending order in the last index. If ell_max is 0, the eigenvalues will always be (1, 0, -1) The eigenvectors of the rank-2 tensors, sorted with their corresponding eigenvectors in the last index. Thus, eigenvectors[..., 0] gives the eigenvector corresponding to the smallest eigenvalue, and eigenvectors[..., 2] gives the eigenvector corresponding to the largest eigenvalue. Generally, one of these two eigenvectors is used to define the orientation of a function, depending on whether it is characterized by a minimum (0) or a maximum (2). The middle eigenvector (1) is typically only used for visualizations. If ell_max is 0, the eigenvectors will be the Cartesian basis vectors. The estimated main orientations from the largest absolute eigenvalues. If ell_max is 0, the main orientation will be the x-axis. The strength or definiteness of the main orientation, calculated from the quotient of the absolute middle and signed largest eigenvalues of the rank-2 tensor. If 0, the orientation is totally ambiguous. The orientation is completely transversal if the value is -1 (orientation represents a minimum), and completely longitudinal if the value is 1 (orientation represents a maximum). If ell_max is 0, the main orientations are all totally ambiguous. A relative measure of the overall anisotropy of the spherical functions. Equals \(\sqrt{\sigma^2 / \mu}\), where \(\sigma^2\) is the variance and \(\mu\) is the mean. The places where \(\mu=0\) have been set to 0. If ell_max is 0, the normalized standard deviations will always be zero. The spectral powers of each ell in ell_indices, for each spherical function, sorted in ascending ell. If ell_max is 0, each function will have only one element, equal to the mean An array containing the corresponding ell to each of the last indices in power_spectra. Equal to np.unique(ell_indices). get_spherical_harmonic_coefficients(coefficients, ell_max=None)[source]¶ Convert a set of spherical harmonics coefficients to a different ell_max by either zero-padding or truncation and return the result. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients of arbitrary shape, provided that the last dimension contains the coefficients for one function. ○ ell_max (Optional[int]) – The band limit of the spherical harmonic expansion. Return type ndarray[Any, dtype[float]] gradient(coefficients, indices=None)[source]¶ Carries out a gradient computation of projections from spherical harmonic space into detector space, for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients (or residuals) of arbitrary shape so long as the last axis has the same size as the number of detector segments. ○ indices (Optional[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]]) – Optional. Indices of the tomographic projections for which the gradient computation is to be performed. If None, the gradient computation will be performed for all projections. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of gradient values based on the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. When solving an inverse problem, one should not to attempt to optimize the coefficients directly using the gradient one obtains by applying this method to the data. Instead, one must either take the gradient of the residual between the forward() computation of the coefficients and the data. Alternatively one can apply both the forward and the gradient computation to the coefficients to be optimized, and the gradient computation to the data, and treat the residual of the two as the gradient of the optimization coefficients. The approaches are algebraically equivalent, but one may be more efficient than the other in some circumstances. property integration_mode: str¶ Mode of integration for calculating projection matrix. Accepted values are 'simpson', 'romberg', 'trapezoid', and 'midpoint'. property projection_matrix: ndarray[Any, dtype[_ScalarType_co]]¶ The matrix used to project spherical functions from the unit sphere onto the detector. If v is a vector of spherical harmonic coefficients, and M is the projection_matrix, then M @ v gives the corresponding values on the detector segments associated with each projection. M[i] @ v gives the values on the detector segments associated with projection i. If r is a residual between a projection from spherical to detector space and data from projection i, then M[i].T @ r gives the associated gradient in spherical harmonic space. class mumott.methods.basis_sets.TrivialBasis(channels=1)[source]¶ Basis set class for the trivial basis, i.e., the identity basis. This can be used as a scaffolding class when implementing, e.g., scalar tomography, as it implements all the necessary functionality to qualify as a BasisSet. channels (int) – Number of channels in the last index. Default is 1. For scalar data, the default value of 1 is appropriate. For any other use-case, where the representation on the sphere and the representation in detector space are equivalent, such as reconstructing scalars of multiple q-ranges at once, a different number of channels can be set. property channels: int¶ The number of channels this basis supports. property csr_representation: tuple¶ The projection matrix as a stack of sparse matrices in CSR representation as a tuple. The information in the tuple consists of the 3 dense matrices making up the representation, in the order (pointers, indices, data). forward(coefficients, *args, **kwargs)[source]¶ Returns the provided coefficients with no modification. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients, of arbitrary shape, except the last index must specify the same number of channels as was specified for this basis. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] The provided :attr`coefficients` with no modification. The args and kwargs are ignored, but included for compatibility with methods that input other arguments. generate_map(coefficients, resolution_in_degrees=5, map_half_sphere=True)¶ Generate a (theta, phi) map of the function modeled by the input coefficients. If map_half_sphere=True (default) a map of only the z>0 half sphere is returned. ○ coefficients (ndarray[Any, dtype[float]]) – One dimensional numpy array with length len(self) containing the coefficients of the function to be plotted. ○ resolution_in_degrees (int) – The resoution of the map in degrees. The map uses eqidistant lines in longitude and latitude. ○ map_half_sphere (bool) – If True returns a map of the z>0 half sphere. Return type tuple[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]] ○ map_intensity – Intensity values of the map. ○ map_theta – Polar cooridnates of the map. ○ map_phi – Azimuthal coordinates of the map. get_inner_product(u, v)[source]¶ Retrieves the inner product of two coefficient arrays, that is to say, the sum-product over the last axis. ○ u (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The first coefficient array, of arbitrary shape and dimension. ○ v (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The second coefficient array, of the same shape as u. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] Returns a dictionary of output data for a given array of coefficients. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape and dimension. Computations only operate over the last axis of coefficents, so derived properties in the output will have the shape (*coefficients.shape[:-1], ...). Return type Dict[str, Any] A dictionary containing a dictionary with the field basis_set. In detail, the dictionary under the key basis_set contains: The name of the basis set, i.e., 'TrivialBasis' A copy of coefficients. The identity matrix of the same size as the number of chanenls. get_spherical_harmonic_coefficients(coefficients, ell_max=None)[source]¶ Convert a set of spherical harmonics coefficients to a different ell_max by either zero-padding or truncation and return the result. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients of arbitrary shape, provided that the last dimension contains the coefficients for one function. ○ ell_max (Optional[int]) – The band limit of the spherical harmonic expansion. Return type ndarray[Any, dtype[float]] gradient(coefficients, *args, **kwargs)[source]¶ Returns the provided coefficients with no modification. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape except the last index must specify the same number of channels as was specified for this basis. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] The provided :attr`coefficients` with no modification. The args and kwargs are ignored, but included for compatibility with methods that input other argumetns. property integration_mode: str¶ Mode of integration for calculating projection matrix. Accepted values are 'simpson', 'romberg', 'trapezoid', and 'midpoint'. property is_dirty: bool¶ property probed_coordinates: ProbedCoordinates¶ property projection_matrix¶ The identity matrix of the same rank as the number of channels specified. class mumott.methods.basis_sets.GaussianKernels(probed_coordinates=None, grid_scale=4, kernel_scale_parameter=1.0, enforce_friedel_symmetry=True, **kwargs)[source]¶ Basis set class for gaussian kernels, a simple local representation on the sphere. The kernels follow a pseudo-even distribution similar to that described by Y. Kurihara in 1965, except with offsets added at the poles. The Gaussian kernel at location \(\rho_i\) is given by \[N_i \exp\left[ -\frac{1}{2} \left(\frac{d(\rho_i, r)}{\sigma}\right)^2 \right]\] \[\sigma = \frac{\nu \pi}{2 (g + 1)}\] where \(\nu\) is the kernel scale parameter and \(g\) is the grid scale, and \[d(\rho, r) = \arctan_2(\Vert \rho \times r \Vert, \rho \cdot r),\] that is, the great circle distance from the kernel location \(\rho\) to the probed location \(r\). If Friedel symmetry is assumed, the expression is instead \[d(\rho, r) = \arctan_2(\Vert \rho \times r \Vert, \vert \rho \cdot r \vert)\] The normalization factor \(\rho_i\) is given by \[N_i = \sum_j \exp\left[ -\frac{1}{2} \left( \frac{d(\rho_i, \rho_j)}{\sigma} \right)^2 \right]\] where the sum goes over the coordinates of all grid points. This leads to an approximately even spherical function, such that a set of coefficients which are all equal is approximately isotropic, to the extent possible with respect to restrictions imposed by grid resolution and scale parameter. ☆ probed_coordinates (ProbedCoordinates) – Optional. A container with the coordinates on the sphere probed at each detector segment by the experimental method. Its construction from the system geometry is method-dependent. By default, an empty instance of mumott.ProbedCoordinates is created. ☆ grid_scale (int) – The size of the coordinate grid on the sphere. Denotes the number of azimuthal rings between the pole and the equator, where each ring has between 2 and 2 * grid_scale points along the azimuth. ☆ kernel_scale_parameter (float) – The scale parameter of the kernel in units of \(\frac{\pi}{2 (g + 1)}\), where \(g\) is grid_scale. ☆ enforce_friedel_symmetry (bool) – If set to True, Friedel symmetry will be enforced, using the assumption that points on opposite sides of the sphere are equivalent. ☆ kwargs – Miscellaneous arguments which relate to segment integrations can be passed as keyword arguments: Mode to integrate line segments on the reciprocal space sphere. Possible options are 'simpson', 'midpoint', 'romberg', 'trapezoid'. 'simpson', 'trapezoid', and 'romberg' use adaptive integration with the respective quadrature rule from scipy.integrate. 'midpoint' uses a single mid-point approximation of the integral. Default value is 'simpson'. Number of points used in the first iteration of the adaptive integration. The number increases by the rule N &larr; 2 * N - 1 for each iteration. Default value is 3. Tolerance for the maximum relative error between iterations before the integral is considered converged. Default is 1e-5. Maximum number of iterations. Default is 10. If True, makes matrix sparse by limiting the number of basis set elements that can map to each segment. Default is False. Number of basis set elements that can map to each segment, if enforce_sparsity is set to True. Default is 3. property csr_representation: tuple¶ The projection matrix as a stack of sparse matrices in CSR representation as a tuple. The information in the tuple consists of the 3 dense matrices making up the representation, in the order (pointers, indices, data). property enforce_friedel_symmetry: bool¶ If True, Friedel symmetry is enforced, i.e., the point \(-r\) is treated as equivalent to \(r\). forward(coefficients, indices=None)[source]¶ Carries out a forward computation of projections from Gaussian kernel space into detector space, for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients, of arbitrary shape so long as the last axis has the same size as kernel_scale_parameter, and if indices is None or greater than one, the first axis should have the same length as indices ○ indices (Optional[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]]) – Optional. Indices of the tomographic projections for which the forward computation is to be performed. If None, the forward computation will be performed for all projections. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of values on the detector corresponding to the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. The assumption is made in this implementation that computations over several indices act on sets of images from different projections. For special usage where multiple projections of entire fields are desired, it may be better to use projection_matrix directly. This also applies to gradient(). generate_map(coefficients, resolution_in_degrees=5, map_half_sphere=True)¶ Generate a (theta, phi) map of the function modeled by the input coefficients. If map_half_sphere=True (default) a map of only the z>0 half sphere is returned. ○ coefficients (ndarray[Any, dtype[float]]) – One dimensional numpy array with length len(self) containing the coefficients of the function to be plotted. ○ resolution_in_degrees (int) – The resoution of the map in degrees. The map uses eqidistant lines in longitude and latitude. ○ map_half_sphere (bool) – If True returns a map of the z>0 half sphere. Return type tuple[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]] ○ map_intensity – Intensity values of the map. ○ map_theta – Polar cooridnates of the map. ○ map_phi – Azimuthal coordinates of the map. get_amplitudes(coefficients, probed_coordinates=None)[source]¶ Computes the amplitudes of the spherical function represented by the provided coefficients at the probed_coordinates. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients of arbitrary shape, provided that the last dimension contains the coefficients for one spherical function. ○ probed_coordinates (Optional[ProbedCoordinates]) – An instance of mumott.core.ProbedCoordinates with its vector attribute indicating the points of the sphere for which to evaluate the Return type ndarray[Any, dtype[float]] get_inner_product(u, v)[source]¶ Retrieves the inner product of two coefficient arrays, that is to say, the sum-product over the last axis. ○ u (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The first coefficient array, of arbitrary shape and dimension, so long as the number of coefficients equals the length of this GaussianKernels instance. ○ v (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – The second coefficient array, of the same shape as u. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] Returns a dictionary of output data for a given array of basis set coefficients. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape and dimensions, except its last dimension must be the same length as the len of this instance. Computations only operate over the last axis of coefficients, so derived properties in the output will have the shape (*coefficients.shape [:-1], ...). Return type Dict[str, Any] A dictionary containing two sub-dictionaries, basis_set and spherical_harmonic_analysis. basis_set contains information particular to GaussianKernels, whereas spherical_harmonic_analysis contains an analysis of the spherical function using a spherical harmonic transform. In detail, the two sub-dictionaries basis_set and spherical_harmonic_analysis have the following members: The name of the basis set, i.e., 'GaussianKernels' A copy of coefficients. A copy of grid_scale. A copy of kernel_scale_parameter. A copy of enforce_friedel_symmetry. A copy of projection_matrix. An analysis of the spherical function in terms of spherical harmonics. See SphericalHarmonics.get_output for details. get_spherical_harmonic_coefficients(coefficients, ell_max=None)[source]¶ Computes the spherical harmonic coefficients of the spherical function represented by the provided coefficients using a Driscoll-Healy grid. For details on the Driscoll-Healy grid, see the SHTools page for a comprehensive overview. ○ coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape, provided that the last dimension contains the coefficients for one function. ○ ell_max (Optional[int]) – The bandlimit of the spherical harmonic expansion. By default, it is 2 * grid_scale. gradient(coefficients, indices=None)[source]¶ Carries out a gradient computation of projections from Gaussian kernel space into detector space for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients (or residuals) of arbitrary shape so long as the last axis has the same size as the number of detector segments. ○ indices (Optional[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]]) – Optional. Indices of the tomographic projections for which the gradient computation is to be performed. If None, the gradient computation will be performed for all projections. Return type ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]] An array of gradient values based on the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. When solving an inverse problem, one should not attempt to optimize the coefficients directly using the gradient one obtains by applying this method to the data. Instead, one must either take the gradient of the residual between the forward() computation of the coefficients and the data. Alternatively one can apply both the forward and the gradient computation to the coefficients to be optimized, and the gradient computation to the data, and treat the residual of the two as the gradient of the optimization coefficients. The approaches are algebraically equivalent, but one may be more efficient than the other in some circumstances. However, normally, the projection between detector and GaussianKernel space is only a small part of the overall computation, so there is typically not much to be gained from optimizing it. property grid: Tuple[ndarray[Any, dtype[float]], ndarray[Any, dtype[float]]]¶ Returns the polar and azimuthal angles of the grid used by the basis. ○ A Tuple with contents (polar_angle, azimuthal_angle), where the ○ polar angle is defined as \(\arccos(z)\). property grid_hash: str¶ Returns a hash of grid. property grid_scale: int¶ The number of azimuthal rings from each pole to the equator in the spherical grid. property integration_mode: str¶ Mode of integration for calculating projection matrix. Accepted values are 'simpson', 'romberg', 'trapezoid', and 'midpoint'. property is_dirty: bool¶ property kernel_scale_parameter: float¶ The scale parameter for each kernel. property probed_coordinates: ProbedCoordinates¶ property projection_matrix: ndarray[Any, dtype[_ScalarType_co]]¶ The matrix used to project spherical functions from the unit sphere onto the detector. If v is a vector of gaussian kernel coefficients, and M is the projection_matrix, then M @ v gives the corresponding values on the detector segments associated with each projection. M[i] @ v gives the values on the detector segments associated with projection i. If r is a residual between a projection from Gaussian kernel to detector space and data from projection i, then M[i].T @ r gives the associated gradient in Gaussian kernel space. property projection_matrix_hash: str¶ Returns a hash of projection_matrix. class mumott.methods.basis_sets.NearestNeighbor(directions, probed_coordinates=None, enforce_friedel_symmetry=True, **kwargs)[source]¶ Basis set class for nearest-neighbor interpolation. Used to construct methods similar to that presented in Schaff et al. (2015). By default this representation is sparse and maps only a single direction on the sphere to each detector segment. This can be changed; see kwargs. ☆ directions (NDArray[float]) – Two-dimensional Array containing the N sensitivity directions with shape (N, 3). ☆ probed_coordinates (ProbedCoordinates) – Optional. Coordinates on the sphere probed at each detector segment by the experimental method. Its construction from the system geometry is method-dependent. By default, an empty instance of mumott.ProbedCoordinates is created. ☆ enforce_friedel_symmetry (bool) – If set to True, Friedel symmetry will be enforced, using the assumption that points on opposite sides of the sphere are equivalent. ☆ kwargs – Miscellaneous arguments which relate to segment integrations can be passed as keyword arguments: Mode to integrate line segments on the reciprocal space sphere. Possible options are 'simpson', 'midpoint', 'romberg', 'trapezoid'. 'simpson', 'trapezoid', and 'romberg' use adaptive integration with the respective quadrature rule from scipy.integrate. 'midpoint' uses a single mid-point approximation of the integral. Default value is 'simpson'. Number of points used in the first iteration of the adaptive integration. The number increases by the rule N &larr; 2 * N - 1 for each iteration. Default value is 3. Tolerance for the maximum relative error between iterations before the integral is considered converged. Default is 1e-3. Maximum number of iterations. Default is 10. If True, limites the number of basis set elements that can map to each detector segemnt. Default is False. If enforce_sparsity is set to True, the number of basis set elements that can map to each detector segment. Default value is 1. property csr_representation: tuple¶ The projection matrix as a stack of sparse matrices in CSR representation as a tuple. The information in the tuple consists of the 3 dense matrices making up the representation, in the order (pointers, indices, data). property enforce_friedel_symmetry: bool¶ If True, Friedel symmetry is enforced, i.e., the point \(-r\) is treated as equivalent to \(r\). Caluculate the nearest neighbor sensitivity directions for an array of x-y-z vectors. probed_directions (ndarray[Any, dtype[float]]) – Array with length 3 along its last axis Return type ndarray[Any, dtype[int]] Array with same shape as the input except for the last dimension, which contains the index of the nearest-neighbor sensitivity direction. forward(coefficients, indices=None)[source]¶ Carries out a forward computation of projections from reciprocal space modes to detector channels, for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients, of arbitrary shape so long as the last axis has the same size as this basis set. ○ indices (Optional[ndarray[Any, dtype[int]]]) – Optional. Indices of the tomographic projections for which the forward computation is to be performed. If None, the forward computation will be performed for all projections. Return type ndarray[Any, dtype[float]] An array of values on the detector corresponding to the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. generate_map(coefficients, resolution_in_degrees=5, map_half_sphere=True)¶ Generate a (theta, phi) map of the function modeled by the input coefficients. If map_half_sphere=True (default) a map of only the z>0 half sphere is returned. ○ coefficients (ndarray[Any, dtype[float]]) – One dimensional numpy array with length len(self) containing the coefficients of the function to be plotted. ○ resolution_in_degrees (int) – The resoution of the map in degrees. The map uses eqidistant lines in longitude and latitude. ○ map_half_sphere (bool) – If True returns a map of the z>0 half sphere. Return type tuple[ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]] ○ map_intensity – Intensity values of the map. ○ map_theta – Polar cooridnates of the map. ○ map_phi – Azimuthal coordinates of the map. get_amplitudes(coefficients, probed_directions)[source]¶ Calculate function values of an array of coefficients. ○ coefficients (ndarray[Any, dtype[float]]) – Array of coefficients with coefficient number along its last index. ○ probed_directions (ndarray[Any, dtype[float]]) – Array with length 3 along its last axis. Return type ndarray[Any, dtype[float]] Array with function values. The shape of the array is (*coefficients.shape[:-1], *probed_directions.shape[:-1]). Calculate the value of the basis functions from an array of x-y-z vectors. probed_directions (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – Array with length 3 along its last axis Return type ndarray[Any, dtype[float]] Array with same shape as input array except for the last axis, which now has length N, i.e., the number of sensitivity directions. Returns a dictionary of output data for a given array of basis set coefficients. coefficients (ndarray[Any, dtype[TypeVar(_ScalarType_co, bound= generic, covariant=True)]]) – An array of coefficients of arbitrary shape and dimensions, except its last dimension must be the same length as the len of this instance. Computations only operate over the last axis of coefficients, so derived properties in the output will have the shape (*coefficients.shape [:-1], ...). Return type Dict[str, Any] A dictionary containing information about the optimized function. Calculate the second moments of the functions described by coefficients. coefficients (ndarray[Any, dtype[float]]) – An array of coefficients (or residuals) of arbitrary shape so long as the last axis has the same size as the number of detector channels. Return type ndarray[Any, dtype[float]] Array containing the second moments of the functions described by coefficients, formatted as rank-two tensors with tensor indices in the last 2 dimensions. get_spherical_harmonic_coefficients(coefficients, ell_max=None)[source]¶ Computes and rturns the spherical harmonic coefficients of the spherical function represented by the provided coefficients using a Driscoll-Healy grid. For details on the Driscoll-Healy grid, see the SHTools page for a comprehensive overview. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients of arbitrary shape, provided that the last dimension contains the coefficients for one function. ○ ell_max (Optional[int]) – The bandlimit of the spherical harmonic expansion. Return type ndarray[Any, dtype[float]] get_sub_geometry(direction_index, geometry, data_container=None)[source]¶ Create and return a geometry object corresponding to a scalar tomography problem for scattering along the sensitivity direction with index direction_index. If optionally a mumott.DataContainer is provided, the sinograms and weights for this scalar tomography problem will alse be returned. Used for an implementation of the algorithm descibed in [Schaff2015]. ○ direction_index (int) – Index of the sensitivity direction. ○ geometry (Geometry) – mumott.Geometry object of the full problem. ○ (optional) (data_container) – mumott.DataContainer compatible with Geometry from which a scalar dataset will be constructed. Return type tuple[Geometry, tuple[ndarray[Any, dtype[float]], ndarray[Any, dtype[float]]]] ○ sub_geometry – Geometry of the scalar problem. ○ data_tuple – Tuple containing two numpy arrays. data_tuple[0] is the data of the scalar problem. data_tuple[1] are the weights. gradient(coefficients, indices=None)[source]¶ Carries out a gradient computation of projections of projections from reciprocal space modes to detector channels, for one or several tomographic projections. ○ coefficients (ndarray[Any, dtype[float]]) – An array of coefficients (or residuals) of arbitrary shape so long as the last axis has the same size as the number of detector channels. ○ indices (Optional[ndarray[Any, dtype[int]]]) – Optional. Indices of the tomographic projections for which the gradient computation is to be performed. If None, the gradient computation will be performed for all projections. Return type ndarray[Any, dtype[float]] An array of gradient values based on the coefficients given. If indices contains exactly one index, the shape is (coefficients.shape[:-1], J) where J is the number of detector segments. If indices is None or contains several indices, the shape is (N, coefficients.shape[1:-1], J) where N is the number of tomographic projections for which the computation is performed. property grid: Tuple[ndarray[Any, dtype[float]], ndarray[Any, dtype[float]]]¶ Returns the polar and azimuthal angles of the grid used by the basis. ○ A Tuple with contents (polar_angle, azimuthal_angle), where the ○ polar angle is defined as \(\arccos(z)\). property grid_hash: str¶ Returns a hash of grid. property integration_mode: str¶ Mode of integration for calculating projection matrix. Accepted values are 'simpson', 'romberg', 'trapezoid', and 'midpoint'. property is_dirty: bool¶ property probed_coordinates: ProbedCoordinates¶ property projection_matrix: ndarray[Any, dtype[_ScalarType_co]]¶ The matrix used to project spherical functions from the unit sphere onto the detector. If v is a vector of gaussian kernel coefficients, and M is the projection_matrix, then M @ v gives the corresponding values on the detector segments associated with each projection. M[i] @ v gives the values on the detector segments associated with projection i. property projection_matrix_hash: str¶ Returns a hash of projection_matrix.
{"url":"https://mumott.org/moduleref/basis_sets.html","timestamp":"2024-11-09T11:00:02Z","content_type":"text/html","content_length":"268255","record_id":"<urn:uuid:ae402b3c-8c87-4160-b82d-56ec68cb164a>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00492.warc.gz"}
Over coffee, today's conversation was enthusiastic and heated at times. Zori got things off with a blast “I don't think graphs are of any use at all…” but she wasn't even able to finish the sentence before Yolanda uncharacteristically interrupted her with “You're off base on this one. I see lots of ways graphs can be used to model real world problems. The professor actually showed us examples back in our first class. But now that we're talking in more depth about graphs, things are even clearer.” Bob added, “These eulerian and hamiltonian cycle problems are certain to have applications in network routing problems.” Xing reinforced Bob with “Absolutely. There are important questions in network integrity and information exchange that are very much the same as these basic problems.” Alice piled on “Even the notion of chromatic number clearly has practical applications.” By this time, Zori realized her position was indefensible but she was reluctant to admit it. She offered only a “Whatever.” Things quieted down a bit and Dave said “Finding a hamiltonian cycle can't be all that hard, if someone guarantees that there is one. This extra information must be of value in the search.” Xing added “Maybe so. It seems natural that it should be easier to find something if you know it's there.” Alice asked “Does the same thing hold for chromatic number?” Bob didn't understand her question “Huh?” Alice continued, this time being careful not to even look Bob's way “I mean if someone tells you that a graph is \(3\)-colorable, does that help you to find a coloring using only three colors? ” Dave said “Seems reasonable to me.” After a brief pause, Carlos offered “I don't think this extra knowledge is of any help. I think these problems are pretty hard, regardless.” They went back and forth for a while, but in the end, the only thing that was completely clear is that graphs and their properties had captured their attention, at least for now.
{"url":"https://rellek.net/book-2016.1/s_graphs_discussion.html","timestamp":"2024-11-07T02:53:47Z","content_type":"text/html","content_length":"29385","record_id":"<urn:uuid:2e130662-4cfd-49e4-8866-cf0c0ed54589>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00647.warc.gz"}
Collision Theory This time we perform a more rigorous treatment of the energetics of reactions. The best part about doing theory is that you can construct models as if you’re playing with a constructor. We want to model the rate of chemical reaction . Let’s call it . We’ve established that the rate of collisions is proportional to relative number of molecules. Thus, if the number of molecules per unit volume is in units of molecules per m^3 (we use meters because they’re standard units). In practice, we like to deal with concentrations in mols, so it’d be better if we could convert those relative numbers to mol/L. Thus, . Or, . Therefore, our reaction rate becomes: Then we can recall saying that we could quantify the chance that two particles meet by looking at their sizes: bigger molecules have greater chance of meeting each other. ![[cross [email protected]]] A key mathematical insight is that molecule with radius collides with molecule with radius whenever the center of the molecule lies within the circle centered at the center of the molecule with radius . Therefore, out of all possible positions for the center of molecule , only correspond to collisions. This value is often called a collision cross-section . The greater this value - the more likely a collision will happen, so: then we can recall that we can increase number of collisions if molecules move faster. Okay, so then we should consider an average speed of molecules, which equals to (a result derived from kinetic theory of gases): where is something called reduced mass: Long story short, the use of allows us to find the relative speeds of molecules, not just absolute ones. Okay, then: Finally, we can recall that only collisions, which have enough energy lead to chemical reactions. The fraction of molecules which has energy above is: Finally, our reaction rate becomes: But this counts number of collisions in absolute numbers. In practice, we’re interested in number of mols of collisions, so we should divide both sides by : Let’s do dimensions analysis. has units of . the thing in square root should have units of (because it’s mean speed). The exponent is unitless, has units of , and product of concentrations has units . Overall, the product is . We can recognize that , then the units reduce to which is exactly the rate of change of concentration! Let’s rewrite the rate in these units now, if we define , and , we recover the the rate law! Steric Factor We might calculate predictions of values, and then compare those to experimental values and find that we might have some discrepancy. Oh no, what should we do? In situations like this, scientists like to introduce proportionality constants which fix everything. What’s missing from our model? Well, we have a key assumption, that all collisions with enough energy turn into reactions. But we never said anything about orientations right, and intuitively, the molecules should collide with proper orientation. So ideally, should be less than 1. We define this ration to be the steric factor. So we fix our rate law by adding this extra parameter: Let’s see some examples: Oh no, . What do we do? Pretend it’s all good :) In seriousness, if we consider as just the metric of proper orientation, I find it hard to justify values , but if we consider as a collection of everything missing from our fairly simple model, then sure, why not.
{"url":"https://chem165.ischemist.com/Kinetics/Collision-Theory","timestamp":"2024-11-14T05:27:23Z","content_type":"text/html","content_length":"115214","record_id":"<urn:uuid:006ef20d-e321-47ce-bddb-949354ae026e>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00797.warc.gz"}
Exponents and Scientific Notation In this lesson, we will learn how to use exponents and scientific notation. It is called scientific notation because scientists often work with large and small numbers and so they use exponents to make it easier for them to calculate numbers. Exponents and Scientific Notation Let's say that we have the number 126. In order to convert this into scientific notation, we need to first convert the number 126 into a decimal. If I do 1.26, it isn't the same numbers however (1.26) (100) does equal 126. So, the scientific notation for 126 is 1.26 x 10 to the power of 2! {The reason I wrote 10 to the power of 2 is because 10 X 10 equals 100} Sheet 1 Exponents and Scientific Notation No comments:
{"url":"http://www.broandsismathclub.com/2014/07/exponents-and-scientific-notation.html","timestamp":"2024-11-06T15:22:12Z","content_type":"text/html","content_length":"61682","record_id":"<urn:uuid:6471f02e-b5a0-4ce7-90dc-fae56b6a557a>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00841.warc.gz"}
How Many Dimes Are In One Dollar? A Complete Breakdown - Chronicle Collectibles (2024) Have you ever wondered exactly how many dimes are in one dollar? This is a common question for those learning about US currency or doing math problems involving money. If you’re short on time, here’s a quick answer to your question: There are 10 dimes in one dollar. This is because a dime is worth 10 cents, and there are 100 cents in one dollar. Understanding the Values of Coins in US Currency The penny is worth 1 cent The humble penny has been around for ages and still features prominently in US coinage today. Despite inflation making it buy less over time, the penny has retained its 1 cent value since the first US cents were minted in 1793. Some cool penny facts: While the penny doesn’t buy as much as it used to, it’s still indispensable for pricing and making change. Who among us hasn’t frantically dug through our couches searching for pennies to pay for something that cost $x.99? Love it or hate it, the penny is here to stay as an important 1-cent cog in our economy. The nickel is worth 5 cents Next up is the humble nickel, worth 5 cents. Did you know the original US nickels from 1866-1883 were made of nickel silver? Interesting! Some fast facts on today’s nickel: • The front has featured former presidents Thomas Jefferson and Franklin Roosevelt. While easy to overlook, nickels add up fast and are super handy for prices ending in 5 or 0 cents. They’re also fun to count by 5’s for math practice. Let’s hear it for the mighty nickel! The dime is worth 10 cents Moving up in value, we’ve got the Roosevelt dime. These small but hefty coins have been minted since 1796. Fun dime details: • The current Roosevelt design began in 1946. • Dimes comprise over 20% of all coins made by the US Mint. Dimes may be small, but they’re mighty! 10 cents goes a long way, from public transit fares to laundry machines. Dimes are also great for teaching decimals, and percentages, counting by 10’s, and visualizing 1/10 fractions. Kudos to the dime for being an MVP coin! The quarter is worth 25 cents Last but not least, we have the quarter, valued at 25 cents. Quarters have been used since 1796. Here are some cool quarter facts: • Designs on the back commemorate the 50 states. The quarter packs a bigger punch than smaller coins. It’s perfect for snack machines, parking meters, laundry, and arcades. The hefty quarter has long been used in commerce and collecting. Worth 25 times a penny, the trusty quarter remains indispensable across the US. As we can see, each coin plays an important role in everyday transactions. Despite their small size, pennies, nickels, dimes, and quarters collectively add up to big value. Understanding these base US coin values is essential knowledge for kids and adults alike. Whether making precise changes, saving up in a piggy bank, collecting, or teaching basic math, these coins are the foundation of American currency. Performing the Math: 10 Dimes per Dollar When it comes to figuring out how many dimes are in one dollar, the math is quite straightforward. Here’s a step-by-step breakdown of how to calculate the number of dimes in a dollar: There are 100 cents in a dollar The first key thing to know is that there are 100 pennies or cents in a dollar. This is the basic unit of U.S. currency. So if you have 100 pennies, you have a dollar. Each dime is worth 10 cents Next, it’s important to know the value of a dime. A dime is not just any coin – it has a specific value of 10 cents. So if you have 1 dime, you have 10 cents towards a dollar. So there must be 10 dimes in one dollar (100 cents divided by 10 cents per dime) Now we can put the two pieces together. Since there are 100 cents in a dollar, and each dime equals 10 cents, there must be 10 dimes in a dollar. We get this by dividing 100 cents by 10 cents per dime. The math tells us unambiguously that 10 dimes make 1 dollar. To recap: • There are 100 pennies or cents in a dollar • Each dime is worth 10 cents • So if you divide 100 cents by 10 cents per dime, you get 10 dimes 100 cents dividend d by 10 cents per dime = 10 dimes So the next time you have a bunch of dimes rattling around in your pocket or purse, remember that it takes 10 of them to make a dollar. Knowing this simple math can come in handy when counting up change or teaching kids about money. We hope this breakdown clearly explains how many dimes are in one dollar. Let us know if you have any other money math questions! Practical Examples and Word Problems If you have 5 dimes, how much money do you have? This is a straightforward conversion problem. Since there are 10 dimes in 1 dollar, we can calculate the dollar amount by dividing the number of dimes by 10. In this example, there are 5 dimes. To calculate the dollar amount: • Number of dimes: 5 • Dimes to dollars conversion rate: 10 dimes = 1 dollar • Calculation: 5 dimes / 10 dimes per dollar = 0.5 dollars Therefore, if you have 5 dimes, you have 50 cents, which is equal to half a dollar. Understanding these basic conversions from dimes to dollars is essential for managing money transactions and John has 37 dimes. How many dollars is that? Let’s go through the step-by-step process: 1. John has 37 dimes 2. There are 10 dimes in 1 dollar 3. To calculate the dollar amount, divide the number of dimes by 10: □ 37 dimes / 10 dimes per dollar = 3.7 dollars Therefore, if John has 37 dimes, he has $3.70 in dollar value. This demonstrates that you simply divide the number of dimes by 10 to convert to dollars. Sally has $5. How many dimes does she have? To tackle this problem, we need to work backward from the dollar amount to calculate the equivalent number of dimes: 1. Sally has $5 2. There are 10 dimes in $1 3. So there are 10 x 5 = 50 dimes in $5 Therefore, if Sally has $5, she has 50 dimes. This exemplifies converting dollars to dimes by multiplying the dollar amount by 10. Dollar Amount Equivalent in Dimes $1 10 dimes $5 10 x 5 = 50 dimes $10 10 x 10 = 100 dimes This table summarizes the straightforward conversion between dollars and dimes. Mastering these practical examples and word problems will help build your confidence with dime-dollar calculations! Tips for Remembering and Explaining This Concept Use visuals like coins or a number line Using visual representations can help in understanding how many dimes are in one dollar. These practical examples will cement the idea that there are 10 dimes in a dollar. For instance, you could line up 10 dimes in a row and show how they perfectly fit into the length of a dollar bill. Or, draw a simple number line with 0 at one end, 10 in the middle, and 100 at the other end, showing the intervals of 10 cents along the way. These types of hands-on learning tools can help learners visualize that a dime is 1/10 of a dollar. Relate it to everyday objects like eggs or candy bars One of the great ways to learn how many dimes are in one dollar is to relate dimes and dollars to familiar objects in sets of 10. For instance, a carton of eggs contains 12 eggs, which can be divided into 10 eggs and 2 extra eggs. Or a pack of 10 mini candy bars for a dollar – each mini bar represents one dime out of the whole dollar. Using these types of relatable examples from everyday life can help learners equate a dime to 1/10th of something whole. Show how dividing 100 cents by 10 cents per dime results in 10 Walking through the math of dividing 100 cents by 10 cents (the value of a dime) results in 10 can help learners logically understand the relationship. This shows that if a dollar equals 100 cents, and a dime equals 10 cents, then there must be 10 dimes that divide evenly into 100 cents. Doing a simple division problem like this can concretely demonstrate the calculation behind the number of dimes per dollar. Other tips could include using mnemonic devices, songs, or games to reinforce counting by tens. Relatable examples and hands-on learning are key when explaining this math concept. With visuals, everyday object comparisons, and basic division, learners can more easily learn how many dimes are in one dollar. Just remember that 10 dimes make up a whole dollar. Common Confusions and Mistakes Confusing dimes and pennies One of the most common mistakes people make when counting coins is confusing dimes and pennies. Since both coins are small and silver in color, it’s easy to mix them up if you’re not paying close attention. Here are some tips to avoid dime/penny confusion: • Remember that a dime is larger and thinner than a penny. • Look for Roosevelt (dimes) versus Lincoln (pennies) on the head side. • Keep dimes and pennies in separate sections of your coin purse. Forgetting how many cents are in a dollar It’s surprising how often people temporarily blank on the fact that there are 100 cents in a dollar. When trying to figure out how many dimes make up a dollar, it’s essential to recall there are 10 cents per dime and 100 cents total in a dollar. Some ways to help remember this key point: • Visualize a dollar bill split into 100 small sections. • Chant “10 cents” every time you pick up a dime. • Imagine trying to cram 100 pennies into your pocket. Miscounting groups of dimes When counting large quantities of dimes, it’s easy to lose track and miscount the total number you have. Strategies to avoid miscounting dimes: • Organize dimes into stacks of 10 before counting. • Double check your tally by re-counting the stacks. • Use a calculator to add up stacks as you go. • Line up dimes in rows of 10 and count the rows. With focus and these helpful tips, you can become an expert at counting coins accurately. Let’s repeat how many dimes are in one dollar – 10 dimes per dollar! How Many Dimes Are In One Dollar – Conclusion Now that we learned how many dimes are in one dollar – there are 10 dimes in a dollar, we are sure you’ll easily calculate your pocket change on a daily basis. By breaking down the values of coins, doing the math, looking at examples, and learning tips and common mistakes, you can confidently explain this concept and help others understand it too. Knowing basic equivalencies like this is useful for everyday math, counting money, and solving word problems. With this knowledge, you can tackle any question involving how many dimes are in a Get Expert Coin Valuation AdviceOur community of experts is here to help you.Get free and unbiased advice on the value of your coins.Learn from other collectors and share your own knowledge.Ask Your Question Now
{"url":"https://achuhei.com/article/how-many-dimes-are-in-one-dollar-a-complete-breakdown-chronicle-collectibles","timestamp":"2024-11-02T11:21:52Z","content_type":"text/html","content_length":"120138","record_id":"<urn:uuid:9a429a4f-2aa7-4e3a-9e3e-1e2e50b76ebe>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00378.warc.gz"}
snippet: Displays shoreline change data in various formats for short term time (1983 - 2006) horizons. summary: Displays shoreline change data in various formats for short term time (1983 - 2006) horizons. extent: [[-73.6703531790025,40.9848074819217],[-71.8115335622424,41.4293120905196]] accessInformation: CT DEEP, UCONN, CT SeaGrant, NOAA, USGS thumbnail: thumbnail/thumbnail.png typeKeywords: ["Data","Service","Map Service","ArcGIS Server"] Typically, shoreline change occurring over a short time span can be characterized by cyclic or episodic non-linear behavior, such as storm-induced shoreline retreat. High short-term variability increases the shoreline change rate uncertainty and the potential for rates of shoreline change that are statistically insignificant. In many locations, the short-term trend is calculated with only 3 shorelines. As noted above, uncertainty generally decreases with an increasing number of shoreline data points; thus the small number of description: shorelines in the short-term calculation can result in higher uncertainty. To supplement gaps in the short-term data, end point rates were calculated at each transect that did not intersect the minimum number of three shorelines required to calculate a linear regression rate. The end point rate is calculated by dividing the distance between shorelines by the time elapsed between the oldest (1983) and the most recent (2006) shoreline. End point rates represent the net change between the two shorelines divided by the elapsed time period. Unlike the linear regression method, end point rates do not have an associated expression (such as a confidence interval) of how scattered the shoreline positions are relative to an assumed linear trend. title: CT Shoreline Change Viewer type: Map Service tags: ["Shoreline","Connecticut"] culture: en-US name: Shoreline_Change_Short_Term guid: 535ABBC7-F8DE-4F31-9E4F-3527B72435E5 spatialReference: WGS_1984_Web_Mercator_Auxiliary_Sphere
{"url":"http://cteco.uconn.edu/ctmaps/rest/services/Coastal/Shoreline_Change_Short_Term/MapServer/info/iteminfo","timestamp":"2024-11-06T23:58:20Z","content_type":"text/html","content_length":"4741","record_id":"<urn:uuid:1d7a9e83-bed0-49a0-b957-ef712733980a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00425.warc.gz"}
design/tpl: Use \{emph=>dfn} for term introductions This uses \textsl rather than \emph. @ -1,7 +1,7 @@ \section{Classification System}\seclabel{class} A \emph{classification} is a user-defined abstraction that describes A \dfn{classification} is a user-defined abstraction that describes (``classifies'') arbitrary data. Classifications can be used as predicates, generating functions, and can be composed into more complex classifications. @ -9,12 +9,12 @@ Nearly all conditions in \tame{} are specified using classifications. \index{first-order logic!sentence} All classifications represent \emph{first-order sentences}---% All classifications represent \dfn{first-order sentences}---% that is, they contain no \emph{free variables}. they contain no \dfn{free variables}. this means that all variables within a~classification are \emph{tightly coupled} to the classification itself. \dfn{tightly coupled} to the classification itself. This limitation is mitigated through use of the template system. \begin{axiom}[Classification Introduction]\axmlabel{class-intro} @ -74,7 +74,7 @@ A $\land$-classification is pronounced ``conjunctive classification'', and $\lor$ ``disjunctive''.\footnote{% Conjunctive and disjunctive classifications used to be referred to, as \emph{universal} and \emph{existential}, as \dfn{universal} and \dfn{existential}, referring to fact that $\forall\Set{a_0,\ldots,a_n}(a) \equiv a_0\land\ldots\land a_n$, and similarly for $\exists$. @ -392,7 +392,7 @@ Then, & \equiv \true. Each \xmlnode{match} of a classification is a~\emph{predicate}. Each \xmlnode{match} of a classification is a~\dfn{predicate}. Multiple predicates are by default joined by conjunction: @ -231,7 +231,7 @@ For example, \indexsym{[\,]}{function, image} \index{function!image (\ensuremath{[\,]})} \index{function!as a set} The set of values over which some function~$f$ ranges is its \emph{image}, The set of values over which some function~$f$ ranges is its \dfn{image}, which is a subset of its codomain. In the example above, both the domain and codomain are the set of integers~$\Int$, @ -253,9 +253,9 @@ We therefore have \index{tuple (\ensuremath{()})} \index{relation|see {function}} An ordered pair $(x,y)$ is also called a \emph{$2$-tuple}. An ordered pair $(x,y)$ is also called a \dfn{$2$-tuple}. an \emph{$n$-tuple} is used to represent an $n$-ary function, an \dfn{$n$-tuple} is used to represent an $n$-ary function, where by convention we have $(x)=x$. So $f(x,y) = f((x,y)) = x+y$. If we let $t=(x,y)$, @ -264,7 +264,7 @@ If we let $t=(x,y)$, necessary and where parenthesis may add too much noise; this notation is especially well-suited to indexes, as in $f_1$. Binary functions are often written using \emph{infix} notation; Binary functions are often written using \dfn{infix} notation; for example, we have $x+y$ rather than $+(x,y)$. @ -322,7 +322,7 @@ Given that, we have $f\bicomp{[]} = f\bicomp{[A]}$ for functions returning \index{abstract algebra!monoid} \index{monoid|see abstract algebra, monoid} Let $S$ be some set. A \emph{monoid} is a triple $\Monoid S\bullet e$ Let $S$ be some set. A \dfn{monoid} is a triple $\Monoid S\bullet e$ with the axioms @ -438,7 +438,7 @@ A vector is a sequence of values, defined as a function of \index{index set (\ensuremath{\Fam{a}jJ})} Let $J\subset\Int$ represent an index set. A \emph{vector}~$v\in\Vectors^\Real$ is a totally ordered sequence of A \dfn{vector}~$v\in\Vectors^\Real$ is a totally ordered sequence of elements represented as a function of an element of its index set: v = \Vector{v_0,\ldots,v_j}^{\Real}_{j\in J} @ -454,7 +454,7 @@ We may omit the superscript such that $\Vectors^\Real=\Vectors$ Let $J\subset\Int$ represent an index set. A \emph{matrix}~$M\in\Matrices$ is a totally ordered sequence of A \dfn{matrix}~$M\in\Matrices$ is a totally ordered sequence of elements represented as a function of an element of its index set: M = \Vector{M_0,\ldots,M_j}^{\Vectors^\Real}_{j\in J}
{"url":"https://forge.mikegerwitz.com/employer/tame/commit/dfa37f5b779db3bceb1dc25010dd82f3de7bbb91","timestamp":"2024-11-09T23:47:04Z","content_type":"text/html","content_length":"125745","record_id":"<urn:uuid:cb49158d-b596-4235-8fad-88ae5af8effe>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00294.warc.gz"}
The Kaplan Method for Data Sufficiency Questions on the GMAT Data Sufficiency (DS) questions are unique to the GMAT. When first encountered they are cumbersome, confusing, and generally frustrating. Admittedly, Data Sufficiency questions often remain cumbersome, confusing, and generally frustrating, but such is the nature of the GMAT. After all, the better you do, the harder the test gets! However, a thorough understanding of the characteristics and attributes of these questions coupled with a proven method of attack will allow you to handle just about anything the GMAT has to offer. The prescribed task for Data Sufficiency questions is straightforward enough: based on the provided information, determine whether a posed question can be answered. The structure of these questions is unwaveringly consistent: a question is asked, two statements of additional information are provided, and the five answer choices that follow are always the same. Data Sufficiency Questions Types: Value and Yes/No. • A Value question asks for a numerical value (e.g., What is the value of x?). For information to be considered sufficient, that information must allow us to deduce only one value for x. • Yes/No questions (e.g., Is x odd?) merely want one of those answers, either of which is acceptable. For Yes/No DS questions, it is that last bit that presents somewhat of a dogleg in the conceptual understanding of this type. Simply put, test takers must recognize and accept that “no” is a sufficient answer. Consider the presented example “Is x odd?” Now, if, through the provided information, we learn that x is not odd and instead x is even, then we would be able to answer the question. The answer would be, “No. x is not odd. x is even.” Answering the question is all we need to be able to do. Let’s keep with the example for a moment longer. A very common misstep would be that, after learning xis definitively even, a person then concludes the provided information is insufficient. But why? We answered the question, didn’t we? Remember: if you can unequivocally answer a Yes/No DS question with either a “yes” or a “no,” you’ve got sufficiency. Insufficiency is the result of information that, when evaluated, culminates in a “sometimes yes” and/or a “sometimes no” response. Sometimes is not good enough. Always is mandatory. Kaplan Method for Data Sufficiency As with every question format on the GMAT, you have a proven step-by-step methodical approach. Do this for every Data Sufficiency question: 1. Analyze the Question [This first step consists of three separate parts.] □ Determine if it is a Value or Yes/No question type. □ Simplify the question. [For example, if the question is “What is the value of m if n = 3t – 2m?” then you would rearrange the given equation in the form of m = (3t –n)/2 since the question asks about m.] □ Identify what information, if it were provided, would be sufficient to answer the question. [In the above example, we would look for values of t and n, two additional equations using these variables, or a value for the expression (3t –n)/2.] 2. Evaluate the statements (aka, the provided information) using 12TEN. [I’ll break down what 12TENstands for in the answer choice discussion below.] Note what happens when using the Kaplan Method for Data Sufficiency: you do a lot of work before you look at the statements. Such an approach is essential if you truly want to beat GMAT DS questions. Data Sufficiency Answer Choices Now, let’s consider the answer choices. As stated, the five answer choices for DS questions are always the same. Step 2 of the Kaplan Method for Data Sufficiency uses a handy mnemonic that helps keep those answer choices straight as well as ensures you assess the statements in the proper order. Here’s the breakdown: (1) The first statement provides enough information to answer by itself, but the second statement does not; (2) The second statement provides enough information to answer by itself, but the first statement does not; (T) Only when the two statements are considered together does one have sufficient information to answer the question; (E) Either statement provides enough information to answer the question when considered individually; (N) Neither statement, when considered alone or together, provides sufficient information to answer the question. Additional information is necessary. As per the answer choices as well as Kaplan’s mnemonic, it is imperative that you evaluate the two statements individually before assessing them together. After evaluating Statement 1, regardless of whether it was sufficient or insufficient to answer the question, you must pretend as if you never saw it when you take a look at Statement 2. The only time you use the information in both statements together is if each individually were found to be insufficient on their own. At that point, you are only deciding between answer choices T and N. Applying The Kaplan Method Now let’s see how to apply the Kaplan Method to one of the most challenging question types on the GMAT: “Yes/No” Data Sufficiency questions. These require more practice (and critical thinking skills) than any other part of the test. Let’s begin with a sample GMAT question: The first step in the Kaplan Method for Data Sufficiency is to Analyze the Question Stem. First, we note that this is a Yes/No question. Because we are concerned with the sum of three consecutive positive integers, we can craft an equation by calling the smallest integer x. This makes the other integers (x + 1) and (x + 2), respectively. Therefore, the sum of the three consecutive integers can be written x + (x + 1) + (x + 2). By combining like terms we see that t = 3x + 3. We can then factor and get the equation t = 3(x + 1). Because this is a Yes/No question, we aren’t looking for the value of t or x. What we need to know is, “Is 3(x + 1) a multiple of 24?” Next, we evaluate the statements. Statement 1 tells us that the smallest integer is even. If x is even, then x + 1 will always be odd. And 3 times an odd integer is always odd as well. Odd numbers cannot have even factors, so because t is always odd, we know for certain that the answer to the question is No; t is not a multiple of 24. Inexperienced test-takers often forget a critical fact at this point. It is easy to look at this evaluation, see that the answer is No, and conclude that Statement 1 is therefore Insufficient. “No” is the same as Insufficient, right? Without thinking critically, you could fall into this trap! Apply critical thinking to evaluating statements The key to correctly evaluating Yes/No Data Sufficiency GMAT questions is to recognize that the answer itself doesn’t matter; whether the answer is definite is all that matters. Here, we know that t is ALWAYS odd, which means the answer to the question is ALWAYS NO. That is as sufficient as it gets! You don’t need anything else to have a definitive answer to the question posed. The best way to evaluate Statement 2 would be to imagine numbers that are multiples of 3, then ask whether they are always multiples of 24 (or whether they are never multiples of 24). It doesn’t take long to see that 15 is a multiple of 3 but is not a multiple of 24. However, 24 is also a multiple of 3, and it IS a multiple of 24. So one number yields a No answer and the other yields a Yes. Because this statement leads you to two possible answers, it is Insufficient. “Could be Yes, could be No” equals Insufficient every time. The takeaway here is that “Always No” is always Sufficient. Drill yourself on Yes/No questions to be sure this clicks; Yes/No questions require a definite answer, regardless of whether that answer is Yes or No. Jennifer Mathews Land has taught for Kaplan since 2009. She prepares students to take the GMAT, GRE, ACT, and SAT and was named Kaplan’s Alabama-Mississippi Teacher of the Year in 2010. Prior to joining Kaplan, she worked as a grad assistant in university archives, a copy editor for medical websites, and a dancing dinosaur at children’s parties. Jennifer holds a PhD and a master’s in library and information studies (MLIS) from the University of Alabama, and an AB in English from Wellesley College. When she isn’t teaching, she enjoys watching Alabama football and herding cats. 0 0 admin http://wpapp.kaptest.com/wp-content/uploads/2020/09/kaplan_logo_purple_726-4.png admin2022-08-30 12:29:132022-08-30 17:29:47The Kaplan Method for Data Sufficiency Questions on the GMAT
{"url":"https://wpapp.kaptest.com/study/gmat/land-your-score-yesno-data-sufficiency-questions/","timestamp":"2024-11-05T10:00:03Z","content_type":"text/html","content_length":"199278","record_id":"<urn:uuid:338889ac-89f3-4ef9-881a-31cc74b0bd23>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00188.warc.gz"}
Kadane's algorithm kadane's algorithm aims to find the maximum sub-array sum. In this post, we will discuss it and we'll solve a problem using it Hello everyone! in this post, we'll discuss kadane's algorithm. I'll try to keep it as simple as possible. The goal of this algorithm is to find a maximum value of the sub-array sum. Let's take an example to understand. arr = [7,2,-1,-3,9] the maximum sub array sum is = 7+2-1-3+9 = 14 How do we find this? well, the naive approach is to get all sub-array sums and find the maximum. Let's say we have an array [a1, a2, a3, a4] We'll start with a1 and find all the different sub-arrays Now, we get maximum of all i.e. max(a1+a2, a1+a2+a3, a1+a2+a3+a4, a2+a3, a2+a3+a3, a3+a4) Well, this works but let's analyze the time complexity. The pseudo-code looks like this result = -infinity for i 0...n for j in i+1...n result = max(result, a[i] + a[j]) return result This will give O(n^2) time complexity. Let's see how we can optimize this From the above image, we see that there are repeated computations i.e. overlapping sub-problems. If we find the maximum of a1+a2 and a1+a2+a3 gives a solution for a sub-problem. We can prove the fact by induction that when combining solutions of all sub-problems will give the overall solution i.e. optimal sub-structure. From these two facts, we can say that we're using a dynamic programming approach. The idea is to find the maximum of all nested sub-problems eventually we'll reach the solution. As depicted in the above image it is very clear that max(a1+a2, a1+a2+a3) gives a local maximum when we do this for all sub-problems we'll get the global maximum. at every j, local_max holds maximum sum A[i] + A[i+1] + .... A[j-1] for all i ∈ {1....j}. So, local_max + A[j] contains next local maximum. By the end we reach n it'll contain maximum of {1....n-1} elements which gives our result Let's write code def find_max_sub_array_sum(arr): local_max = arr[0] global_max = float('-inf') for i in range(1,n): local_max = max(arr[i], arr[i] + local_max) global_max = max(local_max, global_max) return global_max After learning kadane's algorithm, let's try to solve this problem https://leetcode.com/problems/best-time-to-buy-and-sell-stock/description/ From the problem description, we need to maximize the profit by finding optimal buy and sell prices. You cannot sell first and buy later. Example : [7, 1, 5, 3, 6, 4] Answer: we'll buy on day 2 at price 1 and sell on day 5 at price 6 with a total profit of $5. When I first saw this problem and the above example, I thought of a solution to find the index of minimum elements. From that index, traverse the whole array and find the maximum which will give our answer. But, I was wrong. Let's see how From the above chart, we can see that the minimum value is -1. After -1, the maximum value is 6. The total profit we get is 7. But, there's another case prior to a minimum which is 2 and 11 which gives a profit of 9. This proves that the above approach is wrong. We need to apply kadane's algorithm, but with a slight change. Here, we need to find the maximum sub-array sum of stock price differences. Let's see how this works. let A be the list of stock prices where A[i] represents stock price on the day i. let's assume A = [a1, a2, a3, a4] our aim is to maximize profit which means maximizing the difference, let B = [a1, a2-a1, a3-a2, a4-a3] let's say a2 and a4 give the maximum profit, so in B we need to find the sum of (a3-a2) + (a4-a3) = a4-a2. Hence, we need to find the largest sub-array sum of price differences def max_profit(prices): local_max = 0 global_max = 0 for i in range(1, len(prices)): local_max += (prices[i] - prices[i-1]) local_max = max(0, local_max) global_max = max(local_max, global_max) return global_max Code Explanation We can do this in two ways. First, find another array with the difference in prices and find a maximum subarray. For that, we need two iterations and another array. (or) find the difference of prices in the same iteration (refer to line no. 5 in the above code). The rest of the code is self-explanatory. Did you find this article valuable? Support Lokesh Sanapalli by becoming a sponsor. Any amount is appreciated!
{"url":"https://blog.lokesh1729.com/kadanes-algorithm","timestamp":"2024-11-09T22:56:58Z","content_type":"text/html","content_length":"132473","record_id":"<urn:uuid:7d637c9d-0b5f-4f5c-94bf-909246266a71>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00814.warc.gz"}
As was discussed in previous sections of the thesis, a module is required for dynamics. This module is required to determine the forward and inverse dynamics. The module is also required to determine the system state (position and velocity of joints) after joint torques have been applied for some period of time. As seen before, the forward and inverse dynamics have closed form solutions, and thus are very easy to incorporate. To find the reaction of the system under applied torques, some form of numerical integration is required. This integration has been done using the Runge-Kutta integrator (see Appendix on Runge-Kutta Integrator), with the forward dynamics equations. This combination allows the motion integration with a single callable subroutine. Details of the equations are available in the Dynamics Appendix, where they were derived using Macsyma. The Runge-Kutta integrator is also discussed in a previous appendix. init_inverse_dynamics() - This is a subroutine which must be called once, before any of the other dynamics subroutines are called. This routine is responsible for setting up global variables used in both the forward and inverse dynamics equations. inverse_dynamics() - This routine will use the system state, and accelerations, to calculate the joint torques required to produce that state. This routine is based upon the closed form Macsyma forward_dynamics() - This routine requires the system state, and the joint torques. Using the formulas derived with Macsyma, the resultant accelerations will be calculated and returned. dynamics() - This routine will use the Runge-Kutta integrator to find the effect of an applied torque over some time step. This routine requires initial system state, constant torque, and time step duration. The integrator will return the system state at the end of the time step. The Runge-Kutta integrator, will obtain the derivatives of the system state by calling the routine rk_derivs(). rk_derivs() - The Runge-Kutta integrator will call this module to determine the derivatives of the system state array. This function will in turn use the forward_dynamics() routine. This module is not intended for higher level use, and was only placed in this module, for the sake of collecting related program parts. * The inverse dynamics equations are available here for the two link * manipulator with a payload. These use the equations of motion, * combined with the Runge-Kutta integrator to provide motion over * time (with constant torque). These also provide the inverse dynamics. * A sample program is shown to help verify development, anf future use. * When using any of the routines in this module, inin_inverse_dynamics() #include "/usr/people/hugh/thesis/src/math/rk.c" /* Runge-Kutta Subroutines */ #define PI_FACT (3.141592654/180.0) /* degrees to rads */ * Definitions for the Inverse Dynamic Equations (this would be a good * time for object oriented programming) double gravity = GRAVITY, /* This is obvious */ l1 = LEN_1, /* Link lengths */ mp = MASS_PAYLOAD, /* Payload Mass */ m2 = MASS_2, /* Link Masses */ i_1, /* Link moments of inertia */ dyn1_tor, dyn2_tor, /* Global Torque Values */ _r1, _r2, _r3, _r4, _r5, _r6; /* Global Dynamics Constants */ * This is an example program to input a set of angles then find the * joint torques required to hold the arm there. * sscanf(aa, "%lf,%lf", &a1, &a2); * inverse_dynamics(a1, a2, 0.0, 0.0, 0.0, 0.0, &t1, &t2); * printf("ang %f, %f >> tor 1 = %f, tor 2 = %f \n", a1, a2, t1, t2); * This bit of code has come from macsyma, originally written in fortran. * In this optimized form this bit of code contains 16 adds/subtracts, * and 24 divides/multiplies. The actual inverse dynamics are * calculated when the other routine is called (after this is called once * at the start of a program. This was originally produced in fortran, * This routine only assigns values to the global variables _r1, _r2, etc. * These global variables have been devised by Macsyma, and will be used in * the other dynamics equations formulated by Macsyma. static double b1, b2, b3, b4, b5, _r1 = 4.0*b3*m2 + 2*b3*m1 + b1*b3; _r2 = mp*(4.0*pfx4+pfx3) + m2*(pfx4 + pfx3) + m1*pfx2 + pfx1 + 4*i_1; _r6 = m2*pfx4 + b1*pfx4 + pfx1; void inverse_dynamics(t1_0, t2_0, t1_1, t2_1, t1_2, t2_2, torque1, torque2) double t1_0, t2_0, t1_1, t2_1, t1_2, t2_2, *torque1, *torque2; * This subroutine should do near optimal calculations of the inverse * dynamics after the initialization routine has been called. This * contains 17 adds and subtracts, and 30 multiplies and divides. The * routine expects to be passed the current robot state variables, and * will return the required joint torques to produce those values. * This routine was originally in Fortran, and was converted to ’C’. * This routine has been written to use degrees, thus an additional 8 * multiplies have been added to convert from degrees to radians in the * Variables: t1_0, t2_0 - The joint angles (degrees) * t1_1, t2_1 - the angular velocities * t1_2, t2_2 - The angular Accelerations * Return: torque1, torque2 - the required joint torques (N/m) static double b1, b2, b3, b4, b5, torque1[0] = ((t2_2)*(_r6 + pfx11*pfx7) + (t1_2)*_r2 - pfx1* _r1 + (-b5 - b4)*t2_1*pfx8*t1_1 + (-b3 - b1)*t2_1*t2_1* pfx8 + b2*pfx8 + (pfx3 + (b5 + b4)*t1_2)*pfx7)/4.0; torque2[0] = (t1_2*_r6 + t2_2*_r6 + pfx8*(pfx11*t1_1*t1_1 + b2) + (pfx3 + pfx11*t1_2)*pfx7)/4.0; void dynamics(t_step, t1_0_o, t2_0_o, t1_1_o, t2_1_o, tor_1, tor_2, t1_0, t2_0, t1_1, t2_1, t1_2, t2_2) double t_step, t1_0_o, t2_0_o, t1_1_o, t2_1_o, tor_1, tor_2, *t1_0, *t2_0, *t1_1, *t2_1, *t1_2, *t2_2; * This routine will take an initial system state, and then do a time * step, and report the dynamic state at the end of the time step. This * routine uses the Runge-Kutta technique for ODE integration. Note that * the choice of time step becomes critical when it is too large for the * motion of the system. Also keep in mind that this routine is using a * variable time step Runge-Kutta. * The Runge-Kutta integrator also calls the function rk_derivs(). * Variables: t_step - The time over which the system is to be integrated * t1_0_o, t2_0_o - The start robot joint coordinates (degrees) * t1_1_o, t2_1_o - The start robot joint velocities * t1_2_o, t2_2_o - The start robot joint accelerations * tor_1, tor_2 - The constant robot joint torques (N/m) * Returns: t1_0, t2_0 - The final robot joint coordinates (degrees) * t1_1, t2_1 - The final robot joint velocities * t1_2, t2_2 - The final robot joint accelerations static double ystart[20], /* State variable array */ h1, /* suggested first time step */ hmin, /* Minimum time step size */ time, /* start current time step */ tend, /* end current time step */ static int i, /* utility variable */ nvar; /* Number of state variables */ n = nvar = 4; /* Number of variables in state array */ dyn1_tor = tor_1; /* Remember torque globally */ step = 0.2*t_step; /* A nominal gross time step */ ystart[0] = t1_1_o; /* Set up state variable array */ n2 = n/2; /* location of position variables in array */ eps = 0.00001; /* Pick a tolerance limit */ h1 = t_step*0.05; /* An initial step size */ hmin = t_step*0.00000001; /* Minimum step size */ /* Loop for gross time steps of integration */ for(time = start; time <= end; time = time + step){ tend = time + step; /* Find end of current time step */ rk_odeint(ystart, nvar, time, tend, eps, h1, hmin, &nok, &nbad); /* Do the Runge-Kutta magic */ t1_1[0] = ystart[0]; /* Recover new state variables */ forward_dynamics(t1_0[0], t2_0[0], t1_1[0], t2_1[0], tor_1, tor_2, t1_2, t2_2); /* Find Joint accelerations */ * The state variable derivative array for Runge-Kutta routines is updated * here. This uses the forward dynamic equations. * Variables: x - independant variable (not used here) * y - state variables array (degrees) * Returns: dydx - array of state derivatives dydx[2] = y[0]; /* Update velocity */ forward_dynamics(y[2], y[3], y[0], y[1], /* Find Accelerations */ dyn1_tor, dyn2_tor, &dydx[0], &dydx[1]); void forward_dynamics(t1_0, t2_0, t1_1, t2_1, tor_1, tor_2, t1_2, t2_2) double t1_0, t2_0, t1_1, t2_1, tor_1, tor_2, *t1_2, *t2_2; * These equations will use the position, velocity and torque, and produce * the instantaneous accelerations. This routine is not as optimized as the * previous inverse dynamics routine. The routine contains 33 adds/subtracts * and 57 multiplies/divides. 4 trigonometric functions are also included. * This routine also uses 10 multiplies/divides to convert degrees/radians. * Variables: t1_0, t2_0 - Joint positions (degrees) * t1_1, t2_1 - Joint velocities * tor_1, tor_2 - joint torques (N/m) * Returns: t1_2, t2_2 - joint accelerations static double b1, b2, b3, b4, b5, b6, b7, b8, b9, static double pfx1, pfx2, pfx3, pfx4, pfx5, pfx6, pfx7, pfx8, pfx9, pfx10, pfx11, pfx12; pfx1 = cos(t2_0*PI_FACT); /* Find common factors */ pfx4 = 1.0/(_r6*_r6 + pfx1*(2.0*_r5 - _r4)*_r6 - _r2*_r6 + pfx2*pfx3); /* Get joint 1 acceleration */ t1_2[0] = pfx4*((b9 + b6)*tor_2 - b6*tor_1 + pfx10*(pfx1*(-b2*pfx9 - pfx2*pfx8) - b4*pfx8) + pfx6 - b12*b8 - b4*b5 + b1*b2); b11 = pfx8*_r5; /* Find more common terms */ /* Get joint 2 acceleration */ t2_2[0] = -pfx4*((4.0*pfx1*_r4 + 4.0*_r2)*tor_2 + (-b9 - b6)*tor_1 + pfx1*pfx5*(-_r1*_r5 + b14 - b10) + pfx10*(pfx1*((b2 - b13)* pfx9 - b11*_r4) - b11*_r2 + (b10 - b14)*pfx9) + pfx6 + b5* (-pfx1*pfx2 - b4) + (-b3*b7 - b12)*b8 + b1*(b13 - b2)); t1_2[0] = t1_2[0] / PI_FACT; /* Convert back to degrees */
{"url":"https://engineeronadisk.com/V2/hugh_jack_masters/engineeronadisk-27.html","timestamp":"2024-11-15T00:26:02Z","content_type":"text/html","content_length":"36498","record_id":"<urn:uuid:5a0e7665-b052-4866-8fde-05d38d315871>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00827.warc.gz"}
Differentiator using Operational Amplifier - Applications of Operational Amplifier The circuit performs the mathematical operation of differentiation (i.e.) the output waveform is the derivative of the input waveform. The differentiator may be constructed from a basic inverting amplifier if an input resistor R[1] is replaced by a capacitor C[1]. Since the differentiator performs the reverse of the integrator function. Thus the output V[0] is equal to R[F] C[1] times the negative rate of change of the input voltage Vin with time. The –sign indicates a 180º phase shift of the output waveform V[0] with respect to the input signal. The below circuit will not do this because it has some practical problems. The gain of the circuit (RF /XC1) R with R in frequency at a rate of 20dB/decade. This makes the circuit unstable. Also input impedance XC1s with R in frequency which makes the circuit very susceptible to high frequency noise. From the above fig. fa = frequency at which the gain is 0dB and is given by Both stability and high frequency noise problems can be corrected by the addition of two components. R[1] and C[F]. This circuit is a practical differentiator. From Frequency fa to feedback the gain Rs at 20dB/decade after feedback the gain S at 20dB/ decade. This 40dB/ decade change in gain is caused by the R[1]C[1] and R[F]C[F] combinations. The gain limiting frequency fb is given by, Where R[1]C[1] = R[F]C[F] R[1]C[1] and R[F]C[F] help to reduce the effect of high frequency input, amplifier noise and offsets. All R[1]C[1] and R[F] C[F] make the circuit more stable by preventing the R in gain with frequency. The input signal will be differentiated properly, if the time period T of the input signal is larger than or equal to RFC[1] (i.e) T > RFC[1] generally, the value of Feedback and in turn R[1]C[1] and C[F] values should be selected such that R[F] C[1]>> R[1] C[1] A workable differentiator can be designed by implementing the following steps. 1. Select fa equal to the highest frequency of the input signal to be differentiated then assuming a value of C[1] < 1μf. Calculate the value of RF. 2. Choose fb = 20fa and calculate the values of R[1] and C[F] so that R[1] C[1] = R[F] C[F]. It is used in wave shaping circuits to detect high frequency components in an input signal and also as a rate of change and detector in FM modulators.
{"url":"https://www.brainkart.com/article/Differentiator-using-Operational-Amplifier_36007/","timestamp":"2024-11-07T20:07:13Z","content_type":"text/html","content_length":"38122","record_id":"<urn:uuid:83df2982-5218-4795-a893-7e5815c9b363>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00430.warc.gz"}
How to sum up values based on multiple criterions This is sample data. I want to sum up original estimates (column circled with RED) by column EPIC# when: 1. non-epic helper ="Yes" 2. current jira status <> "Closed" 3. Jira# = Epic#. So, for 1st four rows, I should have sum=24+60+40=128. Next, 3 rows since don't meet all the criteria should give sum=0. I tried couple ways, but since I can't get all values matching in vlookup, I can't sum them up. I did browse thru a few discussions but was unable to get this working. Any help is appreciated! • Since you are using parent rows you can simply would put =SUM(CHILDREN()) where the 128 and the 0 are (on your parent rows), and then i'd put =IF(MID([Jira#]@row, 6, 5) = MID([Epic#]@row, 6, 5), [Original Estimate]@row, 0) in that same column on the child rows. This may not be the best method but would work. Note that I accounted for 5 digit Jira/Epic numbers, you can increase that if needed. Hope this helps! • I have a lot of data in my sheet. This seems more work to align based on parent child rows and then update as & when more data is added. Would there be any other way out if I remove parent child relation, does that make it easier? Help Article Resources
{"url":"https://community.smartsheet.com/discussion/128145/how-to-sum-up-values-based-on-multiple-criterions","timestamp":"2024-11-13T22:47:07Z","content_type":"text/html","content_length":"395852","record_id":"<urn:uuid:1f0da9b0-3421-4ddd-961a-137793967e30>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00249.warc.gz"}
Possible to extract longitudinal confidence interval bounds from degradation stability test?Possible to extract longitudinal confidence interval bounds from degradation stability test? My team frequently uses the Degradation Stability Test analysis to determine the predicted (95% confidence) values and upper and lower bounds for a response (y value). At this point we've been running the degradation stability test for each y variable (same series of x time point values) and then manually changing the longitudinal prediction time, saving the predictions into a new data window and then extracting the predicted, upper, and lower bound values into a separate Excel file. I'm trying to write a script that can run the stability test for a number of pre-selected longitudinal prediction time points and save all of the data (predicted value, upper, and lower bound values) into a new data table. However, I keep running into errors and I'm not sure whether what I'm trying to achieve is possible in the current version of JMP (v17.1.1). I've attached my script below. // Use the current data table dt = Current Data Table(); // Define the longitudinal prediction times longitudinalPredictionTimes = {18, 24, 36, 48, 60}; // Create a new data table to store the results resultsTable = New Table( "Prediction Results", Add Rows( N Items( longitudinalPredictionTimes ) ), New Column( "Prediction Time", Numeric, "Continuous" ), New Column( "Prediction", Numeric, "Continuous" ), New Column( "Lower Bound", Numeric, "Continuous" ), New Column( "Upper Bound", Numeric, "Continuous" ) // Loop through each prediction time and perform the degradation analysis For( i = 1, i <= N Items( longitudinalPredictionTimes ), i++, predictionTime = longitudinalPredictionTimes[i]; // Perform degradation analysis degradationReport = dt << Degradation( Y( :"Purity (%)" ), Time( :"Timepoint (days)" ), Label( :Formulation ), Application( Stability Test ), Connect Data Markers( 0 ), Show Fitted Lines( 1 ), Show Spec Limits( 1 ), Show Median Curves( 0 ), Show Legend( 1 ), No Tab List( 0 ), Use Pooled MSE for Nonpoolable Model( 0 ), Set Censoring Time( . ), Show Residual Plot( 1 ), Show Inverse Prediction Plot( 1 ), Show Curve Interval( 1 ), Longitudinal Prediction Time( predictionTime ), Longitudinal Prediction Interval( Confidence Interval ), Longitudinal Prediction Alpha( 0.05 ), Inverse Prediction Interval( Confidence Interval ), Inverse Prediction Alpha( 0.05 ), Inverse Prediction Side( Lower One Sided ) // Extract the prediction, lower bound, and upper bound from the report predictionPlot = degradationReport[Outline Box( "Diagnostics and Predictions" )][Outline Box( "Prediction Plot" )]; prediction = predictionPlot[Number Col Box( "Predicted value of Purity (%) at Timepoint (days)=" || Char( predictionTime ) )][1]; lowerBound = predictionPlot[Number Col Box( "Lower 95% Confidence Limit" )][1]; upperBound = predictionPlot[Number Col Box( "Upper 95% Confidence Limit" )][1]; // Add the results to the new table resultsTable:Prediction Time[i] = predictionTime; resultsTable:Prediction[i] = prediction; resultsTable:Lower Bound[i] = lowerBound; resultsTable:Upper Bound[i] = upperBound; // Show the results table resultsTable << Show Window
{"url":"https://community.jmp.com/t5/Discussions/Possible-to-extract-longitudinal-confidence-interval-bounds-from/m-p/796235","timestamp":"2024-11-05T10:00:59Z","content_type":"text/html","content_length":"562732","record_id":"<urn:uuid:f84cded1-91e0-4160-89fe-02d9585a82fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00548.warc.gz"}
operating speed of ball mill formula After obtaining the millingball speed equation, the running speed of the ball can be calculated at any time. However, when the milling ball moves away from the inner wall of the spherical tank, its running speed will not continue to increase. ... the operating parameters required for the calculation are determined. The rotation speed during ... WhatsApp: +86 18838072829 The idea of using a mixture of balls and pebbles, at a mill speed suitable for ballmilling, was revisited in this investigation, using a normal spectrum of pebble sizes (1975 mm). Batch tests in a pilotscale mill ( m diameter) were used to compare ballmilling to various ball/pebble mixtures. WhatsApp: +86 18838072829 Then we use the cutting speed from the table to calculate RPM with one of the following formulas: Metric: RPM =1000 x cutting speed / pi x D (mm) where D is your tool's diameter in mm. Imperial: RPM =12 x cutting speed (feet per minute) / pi x D (inches) D is your tool's diameter in inches. WhatsApp: +86 18838072829 The operating speed of a ball mill should be __________ the critical speed. A. Less than. B. Much more than. C. At least equal to. D. Slightly more than. Answer: Option A. This Question Belongs to Chemical Engineering >> Mechanical Operations. WhatsApp: +86 18838072829 [1] Working Operations of ball mill In case of continuously operated ball mill, the material to be ground is fed from the left through a 60° cone and the product is discharged through a 30° cone to the right. WhatsApp: +86 18838072829 Effect of Speed and Filling on Power. In this section, a x ball mill is simulated to study the combined effect of mill speed and filling on the power draft of the mill. Mill operating speed and filling, among other things, are known to affect the power draft. Mill performance is at its best when these two operating parameters ... WhatsApp: +86 18838072829 The formula for calculating critical mill of speed: N c = / √ (D d) Where: N c = Critical Speed of Mill D = Mill Diameter d = Diameter of Balls Let's solve an example; Find the critical speed of mill when the mill diameter is 12 and the diameter of balls is 6. This implies that; D = Mill Diameter = 12 d = Diameter of Balls = 6 WhatsApp: +86 18838072829 However, in combination with traditional optimization methods, ball mill grinding speed can be used to control energy input and offset the influences of ore variability. Optimum ball mill operating conditions can be determined based on circuit design and operating dynamics for any given runofmine ore. WhatsApp: +86 18838072829 e. Rotation speed of the cylinder. Several types of ball mills exist. They differ to an extent in their operating principle. They also differ in their maximum capacity of the milling vessel, ranging from liters for planetary ball mills, mixer mills, or vibration ball mills to several 100 liters for horizontal rolling ball mills. WhatsApp: +86 18838072829 Figures in this table are generally slightly low compared to some reported plant data or other manufacturers' estimates, and are based on an empirical formula initially proposed by Bond (1961) designed to cover a wide range of mill dimensions, and the normal operating range of mill load (Vp = .35 to .50) and speed (Cs = .50 to .80). WhatsApp: +86 18838072829 The critical speed of ball mill is given by, where R = radius of ball mill; r = radius of ball. But the mill is operated at a speed of 15 rpm. Therefore, the mill is operated at 100 x 15/ = % of critical speed. WhatsApp: +86 18838072829 Development of operation strategies for variable speed ball mills. Creator. Liu, Sijia. Publisher. University of British Columbia. Date Issued. 2018. Description. Mineral processing productivity relates to a range of operating parameters, including production rate, product grind size, and energy efficiency. WhatsApp: +86 18838072829 It can also form a combined crushing with a ball mill. The present research is aimed at overflow VRM. ... so the sphericity formula is S = S sphere / S particle. The size of each type metaparticle is defined by length, width, and height. Table 3 gives the shape parameters of three metaparticles. The size of type1 is × × mm ... WhatsApp: +86 18838072829 Current operational results show that the SAG mill is operating at 5% ball charge level by volume and is delivering a K80 800 micron product as predicted. Power drawn at the pinion is 448 kW, kWh /tonne (SAG mill) and 570 kW, kWh/tonne (ball mill) when processing mtph, for a total of kWh/tonne or % above the ... WhatsApp: +86 18838072829 A mill is a device, often a structure, machine or kitchen appliance, that breaks solid materials into smaller pieces by grinding, crushing, or cutting. Such comminution is an important unit operation in many are many different types of mills and many types of materials processed in them. Historically mills were powered by hand or by animals (, via a hand crank), working ... WhatsApp: +86 18838072829 The invention belongs to the technical field of mineral processing, and particularly relates to a ball mill power calculation method, which is characterized by applying the following formula (9) as shown in the figure to obtain ball mill power, wherein in the formula, psi means media rotating speed (%); phi means media filling rate (%); delta means media loose density (t/m<3>); D means ball ... WhatsApp: +86 18838072829 When the filling rate of grinding medium is less than 35% in dry grinding operation, the power can be calculated by formula (17). n — mill speed, r/min; G" — Total grinding medium, T; η — Mechanical efficiency, when the center drive, η = ; when the edge drive, η = Rotation Speed Calculation of Ball Mill WhatsApp: +86 18838072829 The ultimate crystalline size of graphite, estimated by the Raman intensity ratio, of nm for the agate ballmill is smaller than that of nm for the stainless ballmill, while the milling ... WhatsApp: +86 18838072829 Critical speed formula of ball mill. Nc = 1/2π √g/R r The operating speed/optimum speed of the ball mill is between 50 and 75% of the critical speed. Also Read: Hammer Mill Construction and Wroking Principal. Take these Notes is, Orginal Sources: Unit OperationsII, KA Gavhane WhatsApp: +86 18838072829 2. Ball mill consist of a hollow cylindrical shell rotating about its axis. Axis of the shell horizontal or at small angle to the horizontal It is partially filled with balls made up of Steel,Stainless steel or rubber Inner surface of the shell is lined with abrasion resistant materials such as Manganese,Steel or rubber Length of the mill is approximately equal to its diameter Balls occupy ... WhatsApp: +86 18838072829 The apparent difference in capacities between grinding mills (listed as being the same size) is due to the fact that there is no uniform method of designating the size of a mill, for example: a 5′ x 5′ Ball Mill has a working diameter of 5′ inside the liners and has 20 per cent more capacity than all other ball mills designated as 5′ x 5′ where ... WhatsApp: +86 18838072829 Variables in Ball Mill Operation. ... Obviously no milling will occur when the media is pinned against the cylinder so operating speed will be some percentage of the CS. The formula for critical speed is CS = 1/2π √(g/(Rr) where g is the gravitational constant, R is the inside diameter of the mill and r is the diameter of one piece of media ... WhatsApp: +86 18838072829 The formula to calculate critical speed is given below. N c = /sqt(Dd) N c = critical speed of the mill. D = mill diameter specified in meters. d = diameter of the ball. In practice Ball Mills are driven at a speed of 5090% of the critical speed, the factor being influenced by economic consideration. WhatsApp: +86 18838072829 1. Designing 40cm cylinder 2. Going to grind ceramics. Ball Milling Ceramics Ceramic Processing Most recent answer Amgalan Bor National University of Mongolia There are a lot of equations to... WhatsApp: +86 18838072829 Ball mills. The ball mill is a tumbling mill that uses steel balls as the grinding media. The length of the cylindrical shell is usually times the shell diameter (Figure ). The feed can be dry, with less than 3% moisture to minimize ball coating, or slurry containing 2040% water by weight. WhatsApp: +86 18838072829 its application for energy consumption of ball mills in ceramic industry based on power feature deployment, Advances in Applied Ceramics, DOI: / WhatsApp: +86 18838072829 Mill Speed. Speed of ball mill is expressed as percentage of critical speed. Critical speed is the speed at which the centrifugal force is high enough that all media sticks to mill wall during rotation of the mill. Normal operating speed is about 75 % of critical speed WhatsApp: +86 18838072829
{"url":"https://amekon.pl/Apr/23-7588.html","timestamp":"2024-11-04T13:36:27Z","content_type":"application/xhtml+xml","content_length":"25017","record_id":"<urn:uuid:9c3f1666-8bfd-4333-b082-23929fa14fa9>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00309.warc.gz"}
Baseball violates the rules of mathematics!! (Looking for a roomate for STOC. Check out this site. Baseball Season started this week. I want to point out that Baseball violates mathematics in two ways. 1) By the rules of the game Home Plate is a right triangle with a square adjacent to it. And what are the dimensions of this right triangle? They are 12-12-17. BUT THERE CANNOT BE A 12-12-17 RIGHT 2) (Information in this point is from Bill James Article The Targeting Phenomenon .) A players batting average is what percent of the time he or she gets a hit (its a bit more complicated since some things don't count as at-bats: walks, sacrifices, hit-by-ball, maybe others). You might think that the higher the number the less players achieve that batting average. Let N(a) be the Number of players with batting average a over all of baseball history. You might think N(296) ≥ N(297) ≥ N(298) ≥ N(299) ≥ N(300) But you would be wrong. 1. N(296)=123 2. N(297)=139 3. N(298)=128 4. N(299)=107 5. N(300)=195 There so many more players batting 300 then 299!. There so many more players batting 300 then 298!. There so many more players batting 300 then 297!. There so many more players batting 300 then 296!. This would seem to violate the very laws of mathematics! Or of baseball! Or of baseball mathematics! Actually there is an explanation. Batting 300 has become a standard that players try to achieve. If you are batting 300 and it is the last week of the season you may become very selective on what balls you hit, you may ask to sit out a game, you will do whatever you can to maintain that 300. Similarly, if you are batting 296-299 then you will do whatever it takes to get up to 300. This happens with number-of-hits (with 200 as the magic number), Runs-batted-in (with 80,90, and 100 as magic numbers), for pitchers number-of-strikeouts (with 200 and 300 as magic numbers), and wins (with 20 as the magic number). If we all had 6 fingers on our hands instead of 5 then there would be different magic numbers. So what to do with this information? Model it and get a paper out. Hope to see it at next years STOC. 14 comments: 1. I doubt such a paper would get accepted at STOC. Maybe at "Innovations in Computer science" ;-) [Just making a cheap joke, I actually like the idea of ICS] 2. The Anonymous One10:37 AM, April 08, 2010 If not STOC then at least on Arxiv or ECCC. It would probably be widely read and discussed. 3. Is this in lifetime average or yearly average? Particularly in the latter case (and even anyways, given bench players/short careers) I'd expect this to be partially explainable by the fact that if you take two small random numbers, the ratio is more likely to be .3 then .299. You can get a .300 average in 10 at bats, you need 1000 (ignoring rounding for now) to get a .299. 4. Paul, Bill James claims that he doesn't see the same effect at .286, which would be 2/7. (You can read the article in Google Books.) I'm not sure I buy it, though, because I don't have the actual 5. within a reasonable realm of numbers, 12, 12, 17 is an integral triangle closest to a half square. you know \sqrt(2) is not integral. so there would be some approximation in describing it in the common language, unless you require a math exam to understand the rules of baseball. 6. 1) Its yearly averages. 2) I think the article did mention that 300 might be a more common average than 299, but that the difference is SO huge that this would not account for it. 3) GLAD you found it on Google Books. (I assumed it wasn't online since it wasn't at Bill James Site which does have other things online.) 7. Another place this phenomenon occurs: U.S. congressional (and maybe Presidential) elections. If, say, there is a Republican incumbent who won with (say) 58% of the vote in the last election, and the "national swing" looks to be around 4-5% in the Democrats' favor, you're far more likely to see a swing of close to 8% in that particular district, since it's a vulnerable seat and the national party will put extra resources into trying to win it. 8. Caveat: I don't have the same solid mathematical evidence as Bill James, but eyeballing my data this appears to be the case. I still don't know of a robust way to model it however. 9. Maybe the rules of baseball take into account the curvature of the earth-- it it's curved enough there could be such a triangle 10. Could the phenomenon about batting averages have anything in common with Benford's law? 11. How bad is the 12-12-17 triangle? (Note that 12²+12²=288, and 289=17².) Well, one right triangle is 12-12-16.97 (where 16.97 stands for the exact value 12√2). Another right triangle is 12.02-12.02-17 (where 12.02 stands for 17/√2). And the angle in the 12-12-17 triangle is cos⁻¹(-1/288), which is 90.2°. All considered, not too bad, I think. (In fact 17/12 is one of the continued-fraction convergents of √2, which are 3/2, 7/5, 17/12, 41/29, 99/70, 239/169, 577/408…) 12. You don't need to go to 1000 to get a .299 average. 29 out of 97 will work. 13. Now that LANCE is back, is there a way to discourage GASARCH from posting so often? 14. Another one that may violate common mathematical definitions: Slugging Percentage. A perfect Slugging Percentage is 4.000, if it were a *true* percentage, then wouldn't it be normalized to [0,1]?
{"url":"https://blog.computationalcomplexity.org/2010/04/baseball-violates-rules-of-mathematics.html","timestamp":"2024-11-08T01:36:50Z","content_type":"application/xhtml+xml","content_length":"198889","record_id":"<urn:uuid:01f0320f-98b1-424c-a1a1-3e14ed473360>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00638.warc.gz"}
The Personal Distribution of Income Full text: The Personal Distribution of Income IV. In a further stage we should cease to take the wealth distribution as given, and instead treat wealth and income as Joint variables in a process evolving over the generations. Propensity to save and rate of return would be the double link between the two random We shall not further refer to this last stage in the following paper, but shall try to fill some of the empty space of stage III. Property income We shall distinguish property income and earned income and deal with the case of property income first, because it is simpler than the general case. Instead of the matrix of income transitions used by Champernowne we have to imagine an analogous matrix Wealth-Income which shows for each amount of wealth the probability of different incomes. The basis of the analysis is thus the conditional dis tribution of income, given the wealth. Economically speaking this is the probability of a certain rate of return to wealth or profit rate. Prom this we can derive the distribution of income, provided we know the distribu tion of wealth. But the distribution of wealth is known: It follows the Pareto law (over a fairly wide range) and ftfl pattsex*n hea also Poen oacplaiaea 'jf • 1 L
{"url":"https://viewer.wu.ac.at/viewer/fulltext/AC14445996/9/","timestamp":"2024-11-06T14:40:47Z","content_type":"application/xhtml+xml","content_length":"68503","record_id":"<urn:uuid:78a7184d-466f-43b6-aa26-c32ca1488e87>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00021.warc.gz"}
Introducing Accelerate for Swift - WWDC19 - Videos - Apple Developer Streaming is available in most browsers, and in the Developer app. • Introducing Accelerate for Swift Accelerate framework provides hundreds of computational functions that are highly optimized to the system architecture your device is running on. Learn how to access all of these powerful functions directly in Swift. Understand how the power of vector programming can deliver incredible performance to your iOS, macOS, tvOS, and watchOS apps. Related Videos • Download Hello. My name is Simon Gladman and I'm with the Vector and Numerics group. In this presentation, I'll be talking about two topics. First, our new Swift Overlay for Accelerate. And second, measuring Accelerate's performance using the Linpack benchmark. Before we dive into the Swift Overlay, let's recap exactly what the Accelerate framework is. The primary purpose of Accelerate is to provide thousands of low-level math primitives that run on a CPU and support image and signal processing, vector arithmetic, linear algebra, and machine learning. Most of these primitives are hand tuned to the microarchitecture of the processor. This means we get excellent performance and this performance translates directly into energy savings. So, if you're an app developer and you use the Accelerate framework, not only will your application run faster, but you'll also use less battery life. We provide the primitives across all of Apple's platforms. This includes not only macOS and iOS but watchOS and tvOS as well. This means your users are going to have an overall better experience. Accelerate's libraries are immensely powerful but up until now, their interfaces weren't that friendly to Swift developers. We've looked at four libraries and created new Swift-friendly APIs to make using Acclerate in Swift projects really easy. The four libraries we focused on are vDSP that provides digital signal processing routines including arithmetic on large vectors, Fourier transforms, biquadratic filtering, and powerful type conversion. vForce that provides arithmetic and transcendental functions including trig and logarithmic routines. Quadrature, that's dedicated to the numerical integration of functions. And vImage, that provides a huge selection of image processing functions and integrates easily with core graphics and core video. Accelerate gets its performance benefits by using vectorization. To understand vectorization, let's first look at a simple calculation over the elements of an array using scalar code. If, for example, you're writing code that multiplies each element of one array with the corresponding element in another, and you're using a four loop, each pair of elements are separately loaded, multiplied together, and the results stored. So, after the first elements in A and B are multiplied together to calculate the first element in C, the second pair are processed. Then, the third. And, finally, the fourth. However, if you're processing the elements of an array using Accelerate, your calculation is performed on single instruction multiple data, or simD registers. These registers can perform the same instruction on multiple items of data by packing those multiple items into a single register. For example, a single 128-bit register can actually store four 32-bit floating point values. So, a vectorized multiply operation can simultaneously multiply four pairs of elements at a time. This means that not only will the task be quicker, it will also be significantly more energy efficient. The multiply function we just looked at part of Accelerate's digital signal processing library, vDSP. So, let's begin by looking at how the new Swift API simplifies using vDSP. vDSP provides vectorized digital signal processing functions including Fourier transforms, biquadratic filtering, convolution, and correlation. Furthermore, vDSP also provides some powerful, more general functions including element-wise arithmetic and type conversion. So, even if you don't have an immediate need to, for example, compute the coherence of two signals, you may find that vDSP's general computation routines offer a solution to improve your app's Let's take a look at some basic arithmetic. An example could be given four arrays of single-precision values, you need to calculate the element-wise sum of two of the array's the element-wise difference in the other two, and multiply those results with each other. Using a four loop is a perfectly reasonable solution to this problem and calculates the expected results. Here's how you perform that calculation using vDSP's classic API. Using vDSP is approximately three times faster than the four loop. Here's the same computation using our new Swift API for vDSP. We're exposing the new Swift-friendly functions through our vDSP namespace and you can see the function and parameter names explain the operation. Because the new functions work with familiar types including arrays and array slices rather than pointers, you no longer need to explicitly pass the count. So, the entire function call is clearer and more concise. Passing an initialized result array offers the best performance and you can obviously reuse that array in other operations for further performance benefits. However, we're also providing self-allocating functions. These make use of Swift's new ability to access an array's uninitialized buffer to return the result of a computation. Although not quite as fast as passing existing storage, it's still faster than the scalar approach and, in some cases, will simplify your code. Another common task that vDSP can vectorize is type conversion. This example converts an array containing double precision values to 16-bit unsigned integer values rounding toward zero. The scalar version uses map with explicit rounding. Again, this is a perfectly reasonable technique to use, but vDSP can vectorize this task to improve performance. In this example, vDSP is approximately four times faster than the previous scalar implementation. The new Swift version of the vDSP function offers a clear interface. The function accepts a source array. The integer type you ought to convert each element to, and an enumeration to specify the vDSP provides Fourier transforms for transforming one-dimensional and two-dimensional data between the time domain and the frequency domain. A forward Fourier transform of a signal decomposes it into its component sign waves. That's the frequency domain representation. Conversely, an inverse transform of that frequency domain representation recreates the original signal and that's the time domain representation. Fourier transforms have many uses in both signal and image processing. For example, once an audio signal has been forward transformed, you can easily reduce or increase certain frequencies to equalize the audio. The classic API is reasonably easy to follow if you're familiar with it. You begin by creating a setup object specifying the number of elements you want to transform and the direction. Then, after creating two arrays to receive results, you call the execute function. Once you're done, you need to remember to destroy the setup to free the resources allocated to it. The new API simplifies the instantiation of the setup object and the transform itself is a method with parameter names on the DFT instance. And now you don't need to worry about freeing the resources. We do that for you. And much like the vDSP functions we've looked at, there's a self-allocating version of the transform function that creates and returns the result's arrays for you. If you work with audio data, you may be familiar with biquadratic or biquad filtering. Biquad filters can be used to equalize audio to shape the frequency response, allowing you to, for example, remove either low or high frequencies. vDSP's biquad feature operates on single and multichannel signals, and uses a set of individual filter objects called sections. The filters are cascaded; that is, they are set up in a sequence and the entire signal passes through each filter in turn. The filters are defined by a series of coefficients that plug into the equation shown here. In this example, these values form a low pass filter; that is, a filter that reduces high frequencies. Here's the code using vDSP's classic API to create the biquad setup using the coefficients in the previous slide. And here's the code to apply that biquad filter to an array named signal, returning the result to an array named output. Let's look at the same functionality implemented with a new API. As you can see, the new API vastly simplifies the creation of the biquad structure. You simply pass the coefficients to the biquad initializer and specify the number of channels and sections. Applying the biquad filter to a signal is a single function call. Now, let's look at the new API we've created for Accelerate's library for fast mathematical operations on large arrays, vForce. vForce provides transcendental functions not included in vDSP. These include exponential, logarithmic, and trig operations. A typical example of vForce would be to calculate the square root of each element in a large array. The scalar version of this code could use map. vForce provides a vectorized function to calculate the square roots that in some situations can be up to 10 times faster than the scalar implementation. The new Swift overlay offers an API that's consistent with the new vDSP functions and provides the performance and energy efficiency benefits of vectorization. And much like we've seen earlier, there's a self-allocating version that returns an array containing the square roots of each element in the supplied array. Next, we'll take a look at the new API we've created for Quadrature. Quadrature is a historic term for determining the area under a curve. It provides an approximation of the definite integrative function over a finite or infinite interval. In this example, we'll use Quadrature to approximate the area of a semicircle, shown here in green, by integrating the functions shown. Much like the Biquad code for vDSP, there's a fair amount of code required to use the existing Quadrature API. The first step is to define a structure that describes a function to integrate. The second step is to define the integration options including the integration algorithm. Finally, with the function on options defined, you can perform the integration using the Quadrature integrate function. The new API simplifies the code. One great advantage is that you can specify the integrand, that is, the function to be integrated, as a trading closure rather than as a C function pointer. This means you can easily pass values into the integrand. Also note that integrators are now enumerations with associated values. So, there's no need to supply unnecessary points for interval or maximum intervals here. For example, you can pass the enumeration for the globally adaptive integrator specifying the points for interval and maximum intervals. Now, let's look at the new API we've created for Accelerate's image processing library, vImage. vImage is a library containing a rich collection of image processing tools. It's designed to work seamlessly with both core graphics and core video. It includes operations such as alpha blending, format conversions, histogram operations, convolution, geometry, and morphology. Our new Swift API introduces lots of new features that makes using vImage in Swift easier and more concise. We've implemented flags as an option set. vImages throw Swift errors. And we've hidden some of the requirements for mutability and working with unmanaged types. If you're working with core graphics images, there's a common workflow to get that image data into a vImage buffer. First, you need to create a description of the CG images format. Then, instantiate a vImage buffer. Initialize that buffer from the image. And finally, check for errors in a non-Swift way. And that's a lot of boilerplate code for a common operation. The new API wraps up all of that code into a single throwable initializer. However, since we're going to use a CG images format later, here's similar functionality implemented in two steps with a new API. We've added a new initializer to CG image format using a CG image, and an alternative buffer initializer that accepts a CG image and an explicit format description. Once you're finished working with a buffer, here's the classic vImage function to create a CG image from the buffer's contents. And our new API simplifies that operation too with a new create CG image method that uses the format we've just generated from the image. One important use case for vImage is converting between different domains and different formats. vImage's any-to-any convertors can convert between core video and core graphics, and convert between different core graphics formats. For example, you might want to convert a CMYK core graphics image to RGB. The existing API to create a convertor accepts the source and destination formats for the conversion and returns an unmanaged convertor. You take the managed reference of the convertor and pass that to the function that does the conversion. Our new API adds a new static make function to the existing convertor type that returns a convertor instance. The conversion is done with the convert method on the convertor instance. Finally, let's look at working with core video image formats. In a typical example, you may want to create an image format description from a core video pixel buffer and calculate its channel Here's the code required by the classic vImage API to create an image format description from a pixel buffer and get its channel count. The new API provides the same functionality in two lines of code. You create an instance of a core video image format from a pixel buffer using a new static make function. And simply access its channel count as a property. That was a quick tour of a fraction of the new API. Let's now take a look at Linpack Benchmark and see just how much faster and more energy efficient Accelerate can be. The Linpack Benchmark came out of the Linpack library which started as a set of routines for providing fast computational linear algebra. This was later subsumed by a library called LApack, which stands for Linear algebra package. LApack was developed to take advantage of these new things at the time called caches. LApack is comprised of many blocked algorithms. These algorit6thms are built on top of another library called BLAS, which stands for basic linear algebra subroutines. We'll talk more about BLAS later in this presentation. For now, keep in mind that the Linpack Benchmark runs on top of LApack, which runs on top of BLAS. The Linpack Benchmark measures how quickly a platform can solve a general system of linear equations. It is comprised of two steps. The matrix factorization step, followed by the backsole step. By fixing the algorithm, we're able to see how well different platforms are at running the algorithm. This provides us with a method of comparing different platforms. The Linpack Benchmark has evolved over time. Originally, it solved a 100 by 100 system, and later a 1000 by 1000 system. The variant most often used today is the no holds barred variant, where the problem size can be as large as you want. This is the variant we will be running today. We are now going to compare Linpack performance on an iPhone 10S. At the top in orange, we're going to run an unoptimized Linpack. This Linpack Benchmark does not make use of the accelerate framework. It relies on software that is not tuned to the process that it is running on. Let's see what that looks like. We are now going to compare that with using the Accelerate framework; that is, we're going to run the same benchmark on the same platform, but using the Accelerate framework which is tuned to the platform. We can see that by using the Accelerate framework, we are over 24 times faster. This will not only save time, but also energy, which improves battery life. We're now going to shift gears and take a look at the primary workhorse routine for the Linpack Benchmark called GEMM. As I mentioned earlier, Linpack, which runs on LApack, is built on top of BLAS. Within BLAS is a routine called GEMM, which stands for general matrix multiplier. This routine is used to implement several other blocked routines in BLAS, which are used inside the blocked algorithms at LApack, most notably the matrix factorization and solver routines. Because of this, GEMM is sometimes used as a proxy for performance. For this presentation, we are specifically going to look at the single-precision variant of GEMM. Here, we're going to compare the performance of the Eigen library with that of Accelerate. Both the Eigen library and the Accelerate framework will run on top of an iPhone 10S. Both will be performing a single-precision matrix multiplier. Let's see how well Eigen does. Eigen tops out at about 51 gigaflops. Now, let's see how well Accelerate does. We can see that the Accelerate framework is almost two and a half times faster than Eigen on the same platform. This is because the Accelerate framework is hand-tuned to the platform, allowing us to fully take advantage of what the platform can offer. So, if you're a developer, using Accelerate in your app will offer better performance. This performance translates into less energy, which means better battery life and an overall better experience for your users. In summary, Accelerate provides functions for performing large-scale mathematical computations and image calculations that are fast and energy efficient. And now we've added a Swift-friendly API that makes Accelerate's libraries super easy to work with so your users will benefit from that performance and energy efficiency. Please visit our site where we have samples, articles, and extensive reference material that covers the entire Accelerate framework. Thank you very much. • Looking for something specific? Enter a topic above and jump straight to the good stuff. An error occurred when submitting your query. Please check your Internet connection and try again.
{"url":"https://developer.apple.com/videos/play/wwdc2019/718/","timestamp":"2024-11-05T12:46:11Z","content_type":"text/html","content_length":"104210","record_id":"<urn:uuid:60e710ef-e69b-4de4-86c2-9ca52846a73e>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00085.warc.gz"}
Dynamics Seminar Caroline DavisIndiana University Topology and Combinatorics of Per_n(0) curves Thursday, October 24, 2024 - 2:55pm Classical complex dynamics began with an interest in the topology and combinatorics of the moduli space of quadratic polynomials {z^2+c}, notable also for its special subset, the Mandelbrot set. The moduli space of all quadratic rational maps rat_2 is isomorphic to \mathbb{C}^2, and we can also understand the space of quadratic polynomials as a special curve within rat_2 in which one critical point is marked as a fixed point. Other natural curves within rat_2 of longstanding interest are the Per_n(0) curves, in which one critical point is marked as in an n-cycle. In this talk, we speak about the topology and combinatorics of these curves and their bifurcation locus, paying particular attention to how structure from the Mandelbrot set (“matings” and “captures”) can shed light on questions like Per_n(0) is irreducible.
{"url":"http://www.mathlab.cornell.edu/m/node/11513","timestamp":"2024-11-09T07:54:34Z","content_type":"text/html","content_length":"28034","record_id":"<urn:uuid:974eb62e-dbde-4fad-b52a-e738fe8c0c15>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00115.warc.gz"}
What Are Space Frequency, Occupancy And Utilisation Rates And How Do I Calculate Them? Frequency, occupancy and utilisation are terms for measuring how well space (as in physical space, such as rooms) is being used. Each provides you with a different piece of information that helps you to understand how well your space (i.e. a room, a building, or even your institution) is being used and why. The Space Management Group (SMG) , provides some useful definitions so lets start with these: Frequency – SMG Definition “The frequency rate measures the proportion of time that space is used compared to its availability” This frequency definition outlines the two pieces of information you need to calculate a space’s frequency rate; 1. Space availability 2. How many times the space was used Space availability, is simply the length of time the space in question is available for use during the time frame you have selected. To help demonstrate this I am going to choose a teaching room as our example space and use a typical teaching week of Monday – Friday, 09-18:00, 45 hours per week as the teaching room’s availability. Therefore the space availability in this example is 45 hours. Next, you need to find out how many times this space is used during these 45 hours. Typically this is done by physically checking the space every hour throughout the space availability, in this case Monday-Friday 09-18:00, recording whether the room is in use or not each hour. For this example’s purpose, lets say that you carried out this check and found out the room was in use during 30 of the 45 hours the space was available that week – see room check sheet below. You now have the information for part 2) How many times was the space used. Now you have both both pieces of information you can calculate the frequency rate, to do this just divide the 2) how many times the space was used (30) by the space availability (45) and you get your space frequency rate – 66.67%. You now know that during the surveyed period the teaching room was used 66.67% of the time and therefore was empty, 33.33% of the time. See formulae below. Occupancy – SMG Definition “The occupancy rate measures how full the space is compared to its capacity” In this case, we are specifically looking at how well a room is used – whilst it is in use and this time there are three pieces of information you will need in order to calculate the occupancy rate; 1. Space capacity 2. Total number of persons occupying space 3. Number of hours space was in use The capacity of the room can be defined in a couple of ways, however the typical method is to use the “actual capacity” this being, how many people can actually use the space at one time. Using the previous example of a teaching room, the actual capacity would be how many people can comfortable sit in the room with allocated desk space given the furniture provision and layout. So, lets say when you check this teaching room it has 40 seats and 40 desks, therefore your actual space capacity is 40. Next step, is to find out 2) how many people are occupying the space whilst it is in use. As with the frequency rate, this requires checking the room over a set amount of time and in order to calculate the utilisation rate of this room (next step) you must ensure that both the frequency and occupancy rate are collected at the same time, for the same number of hours. Continuing with this example, as you check the room every hour during the 45 hour teaching week you will also need to record how many people are in the room when it is being used. Then once you have completed all 45 hours, add up how many people are occupying the space whilst it is in use to get your 2) Total number of persons occupying space and then divide this by the 1) Space capacity multiplied by the 3) Number of hours space was in use (you know this from your frequency calculations) and you have your occupancy rate – see room check sheet and formulae below. You now know, that on average when the room was used it was 74.17% occupied and therefore on average 26.83% of the capacity wasn’t used when the room was in use. Utilisation – SMG Definition “The utilisation rate is a function of a frequency rate and an occupancy rate” The finally part, is calculating your utilisation rate and this is the really simple bit, providing you have already got your frequency and occupancy rate for the space in question. So, continuing with our teaching room example, to calculate the utilisation rate all you have to do is multiply your frequency rate (66.67%) by your occupancy rate (74.17%) and you get your space utilisation rate (49.45%). See formulae below. By following these steps you should now be able to calculate any given rooms frequency, occupancy and utilisation rate. This information can be used in many many ways, to help you improve many aspects of your institution such as reduce you estates costs, improve your student experience, as well as increase staff and students numbers without having build new spaces. So, what do you think? Have you found this article useful? Is there anything you would like to add? Do you have any questions? Let us know via the comments box at the bottom of this article, it would be great to here from you and we'd be very interested to discuss this topic further with you.
{"url":"https://www.escentral.com/blog/what-are-space-frequency-occupancy-and-utilisation-rates-and-how-do-i-calculate-them","timestamp":"2024-11-09T14:16:39Z","content_type":"text/html","content_length":"455972","record_id":"<urn:uuid:c1a763fb-1123-46be-aca5-b809ac503172>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00749.warc.gz"}
Graph Quadratic Functions Learning Outcomes • Graph quadratic functions using tables and transformations • Identify important features of the graphs of quadratic functions Quadratic functions can also be graphed. It is helpful to have an idea about what the shape should be so you can be sure that you have chosen enough points to plot as a guide. Let us start with the most basic quadratic function, [latex]f(x)=x^{2}[/latex]. Graph [latex]f(x)=x^{2}[/latex]. Start with a table of values. Then think of each row of the table as an ordered pair. x f(x) [latex]−2[/latex] [latex]4[/latex] [latex]−1[/latex] [latex]1[/latex] [latex]0[/latex] [latex]0[/latex] [latex]1[/latex] [latex]1[/latex] [latex]2[/latex] [latex]4[/latex] Plot the points [latex](-2,4), (-1,1), (0,0), (1,1), (2,4)[/latex] Since the points are not on a line, you cannot use a straight edge. Connect the points as best as you can using a smooth curve (not a series of straight lines). You may want to find and plot additional points (such as the ones in blue here). Placing arrows on the tips of the lines implies that they continue in that direction forever. Notice that the shape is similar to the letter U. This is called a parabola. One-half of the parabola is a mirror image of the other half. The lowest point on this graph is called the vertex. The vertical line that goes through the vertex is called the line of reflection. In this case, that line is the y-axis. The equations for quadratic functions have the form [latex]f(x)=ax^{2}+bx+c[/latex] where [latex] a\ne 0[/latex]. In the basic graph above, [latex]a=1[/latex], [latex]b=0[/latex], and [latex]c=0[/ In the following video, we show an example of plotting a quadratic function using a table of values. Changing a changes the width of the parabola and whether it opens up ([latex]a>0[/latex]) or down ([latex]a<0[/latex]). If a is positive, the vertex is the lowest point, if a is negative, the vertex is the highest point. In the following example, we show how changing the value of a will affect the graph of the function. Match each function with its graph. a) [latex] \displaystyle f(x)=3{{x}^{2}}[/latex] b) [latex] \displaystyle f(x)=-3{{x}^{2}}[/latex] c) [latex] \displaystyle f(x)=\frac{1}{2}{{x}^{2}}[/latex] Show Solution If there is no b term, changing c moves the parabola up or down so that the y intercept is ([latex]0, c[/latex]). In the next example, we show how changes to c affect the graph of the function. Match each of the following functions with its graph. a) [latex] \displaystyle f(x)={{x}^{2}}+3[/latex] b) [latex] \displaystyle f(x)={{x}^{2}}-3[/latex] Show Solution Changing [latex]b[/latex] moves the line of reflection, which is the vertical line that passes through the vertex ( the high or low point) of the parabola. It may help to know how to calculate the vertex of a parabola to understand how changing the value of [latex]b[/latex] in a function will change its graph. To find the vertex of the parabola, use the formula [latex] \displaystyle \left( \frac{-b}{2a},f\left( \frac{-b}{2a} \right) \right)[/latex]. For example, if the function being considered is [latex]f(x)=2x^2-3x+4[/latex], to find the vertex, first calculate [latex]\dfrac{-b}{2a}[/latex] [latex]a = 2[/latex], and [latex]b = -3[/latex], therefore [latex]\dfrac{-b}{2a}=\dfrac{-(-3)}{2(2)}=\dfrac{3}{4}[/latex]. This is the [latex]x[/latex] value of the vertex. Now evaluate the function at [latex]x =\dfrac{3}{4}[/latex] to get the corresponding y-value for the vertex. [latex]f\left( \dfrac{-b}{2a} \right)=2\left(\dfrac{3}{4}\right)^2-3\left(\dfrac{3}{4}\right)+4=2\left(\dfrac{9}{16}\right)-\dfrac{9}{4}+4=\dfrac{18}{16}-\dfrac{9}{4}+4=\dfrac{9}{8}-\dfrac{9}{4}+4=\ The vertex is at the point [latex]\left(\dfrac{3}{4},\dfrac{23}{8}\right)[/latex]. This means that the vertical line of reflection passes through this point as well. It is not easy to tell how changing the values for [latex]b[/latex] will change the graph of a quadratic function, but if you find the vertex, you can tell how the graph will change. In the next example, we show how changing b can change the graph of the quadratic function. Match each of the following functions with its graph. a) [latex] \displaystyle f(x)={{x}^{2}}+2x[/latex] b) [latex] \displaystyle f(x)={{x}^{2}}-2x[/latex] Show Solution Note that the vertex can change if the value for c changes because the y-value of the vertex is calculated by substituting the x-value into the function. Here is a summary of how the changes to the values for a, b, and, c of a quadratic function can change it is graph. Properties of a Parabola For [latex] \displaystyle f(x)=a{{x}^{2}}+bx+c[/latex], where a, b, and c are real numbers, • The parabola opens upward if [latex]a > 0[/latex] and downward if [latex]a < 0[/latex]. • a changes the width of the parabola. The parabola gets narrower if [latex]|a|> 1[/latex] and wider if [latex]|a|<1[/latex]. • The vertex depends on the values of a, b, and c. The vertex is [latex]\left(\dfrac{-b}{2a},f\left( \dfrac{-b}{2a}\right)\right)[/latex]. In the last example, we showed how you can use the properties of a parabola to help you make a graph without having to calculate an exhaustive table of values. Graph [latex]f(x)=−2x^{2}+3x–3[/latex]. Show Solution The following video shows another example of plotting a quadratic function using the vertex. Creating a graph of a function is one way to understand the relationship between the inputs and outputs of that function. Creating a graph can be done by choosing values for x, finding the corresponding y values, and plotting them. However, it helps to understand the basic shape of the function. Knowing how changes to the basic function equation affect the graph is also helpful. The shape of a quadratic function is a parabola. Parabolas have the equation [latex]f(x)=ax^{2}+bx+c[/latex], where a, b, and c are real numbers and [latex]a\ne0[/latex]. The value of a determines the width and the direction of the parabola, while the vertex depends on the values of a, b, and c. The vertex is [latex] \displaystyle \left( \dfrac{-b}{2a},f\left( \dfrac{-b}{2a} \right) \right)[/
{"url":"https://courses.lumenlearning.com/intermediatealgebra/chapter/quadratic-functions/","timestamp":"2024-11-08T21:34:11Z","content_type":"text/html","content_length":"65521","record_id":"<urn:uuid:6104c96c-7d73-4f64-8d83-43e7b0aee731>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00242.warc.gz"}
The Long and Winding Road: The Story of Complex Exponential Smoothing - Open Forecasting The Long and Winding Road: The Story of Complex Exponential Smoothing The idea of using complex variables in modelling and forecasting was originally proposed by my father, Sergey Svetunkov. Based on that, we developed several models, which were then used in some of our research. We worked together in this direction and published several articles in Russian. My father even published a monograph “Complex-Valued Modeling in Economics and Finance” based on that Pre-PhD period This story started in 2010 when I worked as an Associate Professor at the Higher School of Economics (HSE) in Saint Petersburg, Russia. By then, I had defended my candidate thesis (in Russia, this is considered an equivalent to a PhD) on the topic of “Complex Variables Production Functions”, and I was teaching Microeconomics, Econometrics and Forecasting to undergraduate students. On my way to work (which would typically take an hour), I would typically read or write something. On one of those days, I came up with the basic formula for Complex Exponential Smoothing, assigning the error term to the imaginary part of the number and using Brown’s Simple Exponential Smoothing as a basis for the new forecasting method. Just for comparison, here is the Simple Exponential Smoothing: \hat{y}_{t+1} = \alpha y_t + (1-\alpha) \hat{y}_{t} . And here is what I came up with: \hat{y}_{t+1} + i \hat{\varsigma}_{t+1} = (\alpha_0 + i \alpha_1) (y_t + i \varsigma_t) + (1-\alpha_0 + i – i \alpha_1) (\hat{y}_{t} + i \hat{\varsigma}_{t}) . I’m not explaining this formula in this post (you can read about it here). It is here just for demonstration. It was and still is a complicated forecasting method to understand, but the idea itself excited me. When I returned home, I continued the derivations and did some basic experiments in Excel. I developed the method further in 2010 and presented it in April 2011 at a conference on Business Informatics in Kharkiv, Ukraine (this is one of the cities that Russian army has been bombing in the war that Putin started with Ukraine on 24th February 2022). The idea was well received, and I had encouraging feedback. The first paper on CES was then published in Russian language in the proceedings of the conference (it is available in Russian here and here, p.11 – I used to call the method “Complex Exponentially Weighted Moving Average”, CEWMA back then). After that, I started thinking of preparing a paper in English and submitting it to an international peer-reviewed journal. HSE had an excellent service, where people outside your department would read your paper and provide feedback. So I used that service after preparing the first draft in English in 2012 and got a review with several comments. One of them was helpful. It said that my paper lacked proper motivation and that, in its current state, it could not be published in a peer-reviewed international journal. However, the other comment was that my research area was uninteresting, nobody did anything like that in the academic world, and thus I should find a different area of research. I disagreed with the latter point and, after minor modifications, submitted the paper to the International Journal of Forecasting (IJF). As expected, Rob Hyndman (back then, editor-in-chief of the journal) replied that the paper could not be published because it lacked motivation and because I failed to show that the approach worked. At that time, I did not know how to motivate the paper or how to modify it to make it publishable, so that was a dead end for that version of the paper. But I did not want to give up, so in 2012, I applied for a PhD in Management Science at Lancaster University, writing a proposal about my model. PhD period I was admitted as a PhD student in 2013 with a scholarship from the Lancaster University Management School, and I started my work under the supervision of Nikolaos Kourentzes and Robert Fildes on the topic “Complex Exponential Smoothing”. After preparing a proper experiment, I received good results and wrote the first version of the R function ces(). The results of this work were presented in my first International Symposium on Forecasting (ISF) in Rotterdam in 2014. Nobody noticed my presentation, and nobody seemed to care. I then focused on rewriting the paper, Nikos helped me in writing up the motivation. After collecting feedback about the paper from our colleagues, we decided to submit it to a statistical journal. That was very arrogant of us – we did not understand how to write papers for such journals, and nobody in our group ever published there. As a result, we got a desk rejection from the Journal of American Statistical Association in 2015, saying that they do not publish forecasting papers. In parallel, I started working on an extension of the CES for the seasonal time series, which I then presented at ISF2015 at Riverside, US. I then managed to discuss my research with Keith Ord, who expressed his interest in it and provided support and guidance for some parts of it. He even helped me with some derivations, which I included in the first paper. To make things even more complicated, I continued work on my PhD and wrote a second paper, extending CES for seasonal time series. At the end of 2015, I resubmitted the first paper to Operations Research journal, where it got desk-rejected, and then to EJOR (European Journal of Operational Research). After a short discussion with Nikos, we decided to submit the second paper to IJF, hoping that the first will progress fast and that the two of them can be done in parallel. That was a fatal mistake, which impacted my academic career and mental well-being for the next several years. Unfortunately, the first paper got rejected from EJOR after the second round of revision, with a second reviewer saying that it could not be published because we did not use the Diebold-Mariano test (yes, that was the reason. Note: we used Nemenyi instead). As for the second one, it got stuck in IJF. In the first round, the second reviewer said that the model has a fatal flaw and cannot be used in practice (he concluded that because he misunderstood how the model worked). In the second round, when we explained the model in more detail, the reviewer looked more carefully at CES and started criticising the first paper, which by then was published as a working paper. We placed ourselves in a challenging situation: we had to defend the first paper in the revision of the second one. This process led us to the third and then to the fourth round without significant progress. We were discussing the meaning of complex variables in the model and whether the imaginary part of the model makes sense instead of discussing the seasonal extension of CES. It was apparent that the model works (it performed better than ETS and ARIMA on the M competition data), but the reviewers had questions about the interpretation of the original model. In the fourth round, an Associate Editor of IJF has written that “I still maintain view and so does reviewer 2 that there is an interesting paper lurking under this paper but we are yet to see it and evaluate it on its own merits“. It became clear that we were not moving forward and that the only way out of this dead end would be to merge the two papers and restart the submission process – by then, we were discussing a completely different paper than the one submitted initially to IJF. I was not ready for this serious step, and I decided not to continue the revision process in IJF and put the paper on hold. By then, my publishing experience had been very disappointing and demotivating, and I struggled to continue doing anything in that research direction. Whenever I would open the paper, it would spoil my mood for the rest of the day, as I would think that it was unpublishable and that nobody needed my work (as I’ve been told repeatedly by many different people starting from 2010). Nonetheless, somewhere in the middle of the IJF revision, at the end of 2016, I had my viva. I got PhD in Management Science defending the thesis on the topic “Complex Exponential Smoothing”. Post-PhD period At the end of 2017, Fotios Petropoulos suggested me to participate in the M4 competition. His idea was to submit a combination of forecasts from several models: ETS, ARIMA, Theta and CES. After trying out several options, we used median for the combination (I must confess that we weren’t the first ones that did that, this was investigated, for example, by Jose & Winkler, 2008). This approach got to 6th place in the competition. We were invited to submit a paper explaining our approach, which was then published in IJF (Petropoulos & Svetunkov, 2020). That paper is the first paper published in a peer-reviewed journal discussing CES. In 2018, during the ISF in Boulder, Nikos and I invited Keith Ord to join our paper – he supported me during my PhD and made a substantial contribution to the paper. We decided to clean the paper up, rewrite some parts, and submit it to a peer-reviewed journal as a paper from three co-authors. It took us some time to return to the original text, revive the R code and update the paper. In the middle of 2019, Nikos, Keith and I submitted the CES paper to the Journal of Time Series Analysis. It was a desk rejection with a comment that the Associate Editor “…argues that your paper is a relatively straightforward extension of smoothing via a state space model” and thus the paper “is not appropriate for publication in this journal in terms of substantive content“. We rewrote the motivation to align the paper with an OR-related journal and submitted it to Omega, to get another desk rejection saying that it is too mathematical for them and that the paper “is quite technical and would likely be best served by targeting a journal in the time series or forecasting field instead“. Finally, at the end of 2019, we submitted the paper to Naval Research Logistics (NRL). By then, I did not have any expectations about the paper and was sure that it would either be a desk rejection or a rejection from reviewers – I had seen this outcome so many times that it would be naive to expect anything else to happen. However, this time we got an Associate Editor who liked the idea and supported us from the first revision. In fact, they pointed out that CES has already been used in M4 competition and showed that it brought value. On 24th February 2021, we got our first round of revision, after which I decided to move some parts of paper 2 (seasonal CES) to the first one, merging the two. It made sense because the paper would now look complete. While one of the reviewers was sceptical about the paper, Associate Editor provided colossal support and guided us in what to change in the paper so that it could be accepted in NRL. After two rounds and some additional rewrites of the paper, on 18th June 2022, it was accepted for publication in Naval Research Logistics, and then published online on 2nd August 2022. Complex Exponential Smoothing is a complex idea, something that people are not used to. It stands out and does things differently, not the way the researchers typically do. This is what makes it interesting, and this is what made it extremely difficult to publish. Over the years, I questioned the correctness and usefulness of my idea many times. Some days I would be dancing around, singing “it works, it works” after a successful experiment; on others, I would throw it away, saying “never again” when the experiments failed. This is all part of academic life. However, the most challenging experience for me was the publication of the paper. Over the years, I have met a lot of resistance from the academic world. I have not included here comments from my former Higher School of Economics colleagues or comments from some journal reviewers. They rarely were pleasant and supportive. Some people did not understand the idea, the others did not want to understand it. But there were always several people around me who helped and guided me. I would not be able to publish the paper in the end if it was not for the support from Nikos Kourentzes, Keith Ord, Sergey Svetunkov (my father) and Anna Sroginis (my wife). They believed in the idea and supported me even when it looked that it wouldn’t work. So, I am immensely grateful for their support. It has been a long and winding road… and I’m glad that it’s finally over. As for the lessons to learn from this, I have several for you: • Do not try publishing dependent papers in parallel: if your second paper depends on the first one, do not submit it before the first one is at least accepted. • If you want to publish in a journal in which your group does not typically publish, find a person who does and work with them. That became apparent to me when I worked on a different paper with a colleague from a statistics department. Statistical journals have a completely different style than the OR ones, and we had no chance to publish CES paper there. • As a reviewer, you might not understand the paper you are reviewing. This is okay. We cannot know and understand everything instantaneously. But that does not mean that the paper is not good. It only means that you need to invest more time in understanding the paper and then help to improve it (yes, paper revision is a serious job, not a box-ticking process). I had many comments of the style “I did not understand it, so reject”. This is not how revisions should be done. Last but not least, be critical of your ideas, but if you believe in something, stick with it and be patient. It might take a lot of time for other people to start appreciating what you have been trying to show them. Comments (2): 1. Thank you for sharing your story, and I agree with two out of your three lessons. I disagree with one point: if I as a reviewer do not understand a paper, it is usually not my responsibility to invest a lot of time. After all, I am supposedly an expert in the field. (If it turns out that I am assigned a paper in which I am not an expert, then I should notify the editor and withdraw.) Thus, I am precisely the target audience of the papers I review. And therefore, the *author* needs to invest every reasonable effort to make their paper understandable – after all, if the reviewer, an expert, does not understand it, how will later readers understand it? And yes, there is frequently a tension between “I do not understand X, but the paper is good, so please explain X better” and “I do not understand X, and the paper is generally weak, so I recommend rejection”. I have heard it said that the job of the reviewer is to weed out bad papers and make good papers better. One can err in either direction. But nobody is happy if I get a paper I believe is a publishable-if-better-explained one, and we then have three review rounds before the editor and I are convinced that the author is *not* capable of explaining their idea better, and then the paper is rejected after three rounds. □ Thanks for your comment, Stephan! And I actually agree with you. Maybe my point wasn’t very clear (I feel disturbance in the force :D). Let me clarify. I’ve faced many cases, when a reviewer who was supposed to be an expert in the area (because they accepted to review the paper) did not know much about forecasting and did not even make an effort to understand the paper and judged it hastily, providing comments like “you do not cite papers on forecasting with high frequency data, so I recommend rejection” (this is one of the comments I received for CES). So, this is just a sloppy revision, and my point in the post is that the reviewers need to understand that revision is a serious job. Yes, a reviewer should look critically at the paper and they should help making it more understandable if it is not well written. But the revision is a process done by two sides, not just one. So, if the paper gets to the fourth round without progress, this means that the reviewer and the authors are speaking different languages, and both sides should make an effort to understand each other (preferably, much earlier than on the fourth round). However, there are reviewers that do not want to make that effort and prefer just to get rid of the paper, so that they do not need to do the job, but at the same time can claim that they review papers in this and that journal. And yes, there are good reviewers as well, I had several during this journey. They were responsible and helpful. And yes, the authors should write papers which are easy to understand. The points above mainly apply to those reviewers who think that doing a sloppy job is fine. You must be logged in to post a comment.
{"url":"https://openforecast.org/2022/08/02/the-long-and-winding-road-the-story-of-complex-exponential-smoothing/","timestamp":"2024-11-05T13:50:48Z","content_type":"text/html","content_length":"170533","record_id":"<urn:uuid:22d4078e-bb40-4c58-8b92-90488d0950d1>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00076.warc.gz"}
Cyclic Partition: An Up to 1.5x Faster Partitioning Algorithm | Business Blog Articles & Reviews - BB.AICyclic Partition: An Up to 1.5x Faster Partitioning Algorithm Cyclic Partition: An Up to 1.5x Faster Partitioning Algorithm A sequence partitioning algorithm that does minimal rearrangements of values 1. Introduction Sequence partitioning is a basic and frequently used algorithm in computer programming. Given a sequence of numbers “A”, and some value ‘p’ called pivot value, the purpose of a partitioning algorithm is to rearrange numbers inside “A” in such a way, so that all numbers less than ‘p’ come first, followed by the rest of the numbers. An example of a sequence before and after partitioning by pivot value “p=20”. After the algorithm, all the values which are less than 20 (light green) appear before the other values (yellow). There are different applications of partitioning, but the most popular are: QuickSort—which is generally nothing more than a partitioning algorithm, called through recursion multiple times on different sub-arrays of given array, until it becomes sorted.Finding the median value of a given sequence—which makes use of partitioning in order to efficiently cut down the search range, and to ultimately find the median in expected linear time. Sorting a sequence is an essential step to enable faster navigation over large amounts of data. Of the two common searching algorithms—linear search and binary search—the latter can only be used if the data in the array is sorted. Finding the median or k’th order statistic can be essential to understand the distribution of values in given unsorted data. Currently there are different partitioning algorithms (also called—partition schemes), but the well-known ones are “Lomuto scheme” and “Hoare scheme”. Lomuto scheme is often intuitively easier to understand, while Hoare scheme does less rearrangements inside a given array, which is why it is often preferred in practice. What I am going to suggest in this story is a new partition scheme called “cyclic partition”, which is similar to Hoare scheme, but does 1.5 times fewer rearrangements (value assignments) inside the array. Thus, as it will be shown later, the number of value assignments becomes almost equal to the number of the values which are initially “not at their place”, and should be somehow moved. That fact allows me to consider this new partition scheme as a nearly optimal one. The next chapters are organized in the following way: In chapter 2 we will recall what is in-place partitioning (a property, which makes partitioning to be not a trivial task),In chapter 3 we will recall the widely used Hoare partitioning scheme,In chapter 4 I’ll present “cycles of assignments”, and we will see why some rearrangements of a sequence might require more value assignments, than other rearrangements of the same sequence,Chapter 5 will use some properties of “cycles of assignments”, and derive the new “cyclic partition” scheme, as an optimized variant of Hoare scheme,And finally, chapter 6 will present an experimental comparison between the Hoare scheme and cyclic partition, for arrays of small and large data types. An implementation of Cyclic partition in the C++ language, as well as its benchmarking with the currently standard Hoare scheme, are present on GitHub, and are referenced at the end of this story 2. Recalling in-place sequence partitioning Partitioning a sequence will not be a difficult task, if the input and output sequences would reside in computer memory in 2 different arrays. If that would be the case, then one of methods might be to: Calculate how many values in “A” are less than ‘p’ (this will give us final length of the left part of output sequence),Scan the input array “A” from left to right, and append every current value “A[ i]” either to the left part or to the right part, depending on whether it is less than ‘p’ or not. Here are presented a few states of running such algorithm: During the first stage we calculate that there are only 7 values less than “p=20” (the light green ones), so we prepare to write the greater values into the output sequence, starting from index 7. During the second stage, after scanning 5 values of the input sequence, we append 3 of them to the left part of output sequence, and the other 2 to its right part.Continuing the second stage, we have scanned now 9 values from input sequence, placing 5 of them in the left part of output sequence, and the other 4 to its right part.The algorithm is completed. Both parts of the output sequence are now properly filled to the end. Note, the relative order of values in either left or right parts is preserved, upon how they were originally written in the input array. Other, shorter solutions also exist, such ones which have only one loop in the code. Now, the difficulty comes when we want to not use any extra memory, so the input sequence will be transformed into the partitioned output sequence just by moving values inside the only array. By the way, such kind of algorithms which don’t use extra memory are called in-place algorithms. Partitioning the same input sequence “A” in-place, by the same pivot value “p=20”. Presented order of values corresponds to the input state of the sequence, and arrows for every value show if where to that value should be moved, in order for the entire sequence to become partitioned. Before introducing my partitioning scheme, let’s review the existing and commonly used solution of in-place partitioning. 3. The currently used partitioning scheme After observing a few implementations of sorting in standard libraries of various programming languages, it looks like the most widely used partitioning algorithm is the Hoare scheme. I found out that it is used for example in: “std::sort()” implementation in STL for C++,“Arrays.sort()” implementation in JDK for Java, for primitive data types. In the partitioning based on the Hoare scheme, we scan the sequence simultaneously from both ends towards each other, searching in the left part such a value A[i] which is greater or equal to ‘p’, and searching in the right part such a value A[j] which is less than ‘p’. Once found, we know that those two values A[i] and A[j] are kind of “not at their proper places” (remember, the partitioned sequence should have the values less than ‘p’ coming first, and only then all the other values which are greater or equal to ‘p’), so we just swap A[i] and A[j]. After the swap, we continue the same way, simultaneously scanning array “A” with indexes i and j, until they become equal. Once they are, partitioning is completed. Let’s observe the Hoare scheme on another example: The input sequence “A” of length ’N’, that should be partitioned by pivot value “p=20”. Index i starts scanning from 0 upwards, and index j starts scanning from “N-1” downwards.While increasing index i we meet value “A[2]=31” which is greater than ‘p’. Then, after decreasing index j we meet another value “A[10]=16” which is less than ‘p’. Those 2 are going to be swapped.After swapping “A[2]” and “A[10]” we continue increasing i from 2 and decreasing j from 10. Index i will stop on value “A[4]=28” which is greater than ‘p’, and index j will stop on value “A[9]=5” which is less than ‘p’. Those 2 are also going to be swapped.The algorithm continues the same way, and the numbers “A[5]=48” and “A[7]=3” are also going to be swapped.After that, indexes ‘i’ and ‘j’ will become equal to each other. Partitioning is completed. If writing the pseudo-code of partitioning by the Hoare scheme, we will have the following: // Partitions sequence A[0..N) with pivot value ‘p’ // upon Hoare scheme, and returns index of the first value // of the resulting right part. function partition_hoare( A[0..N) : Array of Integers, p: Integer ) : Integer i := 0 j := N-1 while true // Move left index ‘i’, as much as needed while i < j and A[i] < p i := i+1 // Move right index ‘j’, as much as needed while i < j and A[j] >= p j := j-1 // Check for completion if i >= j if i == j and A[i] < p return i+1 // “A[i]” also refers to left part return i // “A[i]” refers to right part // Swap “A[i]” and “A[j]” tmp := A[i] A[i] := A[j] A[j] := tmp // Advance by one both ‘i’ and ‘j’ i := i+1 j := j-1Here in lines 5 and 6 we set up start indexes for the 2 scans.Lines 8–10 search from left for such a value, which should belong to the right part, after partitioning.Similarly, lines 11–13 search from right for such a value, which should belong to the left part.Lines 15–19 check for completion of the scans. Once indexes ‘i’ and ‘j’ meet, there are 2 cases: either “A[i]” belongs to the left part or to the right part. Depending on that, we return either ‘i’ or ‘i+1’, as return value of the function should be the start index of the right part.Next, if the scans are not completed yet, lines 20–23 do swap those 2 values which are not at their proper places.And finally, lines 24–26 advance the both indexes, in order to not re-check the already swapped values. The time complexity of the algorithm is O(N), regardless of where the 2 scans will meet each other, as together they always scan N values. An important note here, if the array “A” has ‘L’ values which are “not at their places”, and should be swapped, then acting by Hoare scheme we will do “3*L/2” assignments, because swapping 2 values requires 3 assignments: Swapping values of 2 variables ‘a’ and ‘b’ requires to do 3 assignments, with help of ‘tmp’ variable. Those assignments are: tmp := a a := b b := tmp Let me also emphasize here that ‘L’ is always an even number. That is because for every value “A[i]>=p” originally residing at the left area, there is another value “A[j]<p” originally residing at the right area, the ones which are being swapped. So, every swap rearranges 2 such values, and all rearrangements in Hoare scheme are being done only through swaps. That’s why the ‘L’—the total number of values to be rearranged, is always an even number. 4. Cycles of assignments This chapter might look as a deviation from the agenda of the story, but actually it isn’t, as we will need the knowledge about cycles of assignments in the next chapter, when optimizing the Hoare partitioning scheme. Assume that we want to somehow rearrange the order of values in given sequence “A”. This should not necessarily be a partitioning, but any kind of rearrangement. Let me show that some rearrangements require more assignments than some others. Case #1: Cyclic left-shift of a sequence How many assignments should be done if we want to cyclic left shift the sequence “A” by 1 position? Example of cyclic left shift of sequence “A” of length N=12. We see that the number of assignments needed is N+1=13, as we need to: 1) store “A[0]” in the temporary variable “tmp”, then 2) “N-1” times assign the right adjacent value to the current one, and finally 3) assign “tmp” to the last value of the sequence “A[N-1]”. The needed operations to do that are: tmp := A[0] A[0] := A[1] A[1] := A[2] A[9] := A[10] A[10] := A[11] A[11] := tmp … which results in 13 assignments. Case #2: Cyclic left-shift by 3 positions In the next example we still want to do a cyclic left shift of the same sequence, but now by 3 positions to the left: Example of cyclic left shift by 3 of the sequence “A”, having length N=12. We see that the values A[0], A[3], A[6] and A[9] are being exchanged between each other (blue arrows), as well as values A[1], A[4], A[7] and A[10] do (pink arrows), and as the values A[2], A[5], A[8] and A[11] do exchange only between each other (yellow arrows). The “tmp” variable is being assigned to and read from 3 times. Here we have 3 independent chains / cycles of assignments, each of length 4. In order to properly exchange values between A[0], A[3], A[6] and A[9], the needed actions are: tmp := A[0] A[0] := A[3] A[3] := A[6] A[6] := A[9] A[9] := tmp … which makes 5 assignments. Similarly, exchanging values inside groups (A[1], A[4], A[7], A[10]) and (A[2], A[5], A[8], A[11]) will require 5 assignments each. And adding all that together gives 5*3 =15 assignments required to cyclic left shift by 3 the sequence “A”, having N=12 values. Case #3: Reversing a sequence When reversing the sequence “A” of length ’N’, the actions performed are: swap its first value with the last one, thenswap the second value with the second one from right,swap the third value with the third one from right,… and so on.Example of reversing array “A”, having N=12 values. We see that the values in pairs (A[0], A[11]), (A[1], A[10]), (A[2], A[9]), etc, are being swapped, independently from each other. The variable “tmp” is being assigned to and read from 6 times. As every swap requires 3 assignments, and as for reversing entire sequence “A” we need to do ⌊N/2⌋ swaps, the total number of assignments results in: 3*⌊N/2⌋ = 3*⌊12/2⌋ = 3*6 = 18 And the exact sequence of assignments needed to do the reverse of “A” is: tmp := A[0] // Cycle 1 A[0] := A[11] A[11] := tmp tmp := A[1] // Cycle 2 A[1] := A[10] A[10] := tmp tmp := A[5] // Cycle 6 A[5] := A[6] A[6] := tmp We have seen that rearranging values of the same sequence “A” might require different number of assignments, depending on how exactly the values are being rearranged. In the presented 3 examples, the sequence always had length of N=12, but the number of required assignments was different: More precisely, the number of assignments is equal to N+C, where “C” is the number of cycles, which originate during the rearrangement. Here by saying “cycle” I mean such a subset of variables of “A ”, values of which are being rotated among each other. In our case 1 (left shift by 1) we had only C=1 cycle of assignments, and all variables of “A” did participate in that cycle. That’s why overall number of assignments was: N+C = 12+1 = 13. In the case 2 (left shift by 3) we had C=3 cycles of assignments, with: —first cycle within variables (A[0], A[3], A[6], A[9]), —second cycle applied to variables (A[1], A[4], A[7], A[10]) and —third cycle applied to variables (A[2], A[5], A[8], A[11]). That’s why the overall number of assignments was: N+C = 12+3 = 15. And in our case 3 (reversing) we had ⌊N/2⌋ = 12/2 = 6 cycles. Those all were the shortest possible cycles, and were applied to pairs (A[0], A[11]), (A[1], A[10]), … and so on. That’s why the overall number of assignments was: N+C = 12+6 = 18. Surely, in the presented examples the absolute difference in number of assignments is very small, and it will not play any role when writing high-performance code. But that is because we were considering a very short array of length “N=12”. For longer arrays, those differences in numbers of assignments will grow proportionally to N. Concluding this chapter, let’s keep in mind that the number of assignments needed to rearrange a sequence grows together with number of cycles, introduced by such rearrangement. And if we want to have a faster rearrangement, we should try to do it by such a scheme, which has the smallest possible number of cycles of assignments. 5. Optimizing the Hoare partitioning scheme Now let’s observe the Hoare partitioning scheme once again, this time paying attention to how many cycles of assignments it introduces. Let’s assume we have the same array “A” of length N, and a pivot value ‘p’ according to which the partitioning must be made. Also let’s assume that there are ‘L’ values in the array which should be somehow rearranged, in order to bring “A” into a partitioned state. It turns out that Hoare partitioning scheme rearranges those ‘L’ values in the slowest possible way, because it introduces the maximal possible number of cycles of assignments, with every cycle consisting of only 2 values. Given pivot value “p=20”, the “L=8” values which should be rearranged are the ones to which arrows are coming (or going from). Hoare partitioning scheme introduces “L/2=4” cycles of assignments, each acting on just 2 values. Moving 2 values over a cycle of length 2, which is essentially swapping them, requires 3 assignments. So the overall number of values assignments is “3*L/2” for the Hoare partitioning scheme. The idea which lies beneath the optimization that I am going to describe, comes from the fact that after partitioning a sequence, we are generally not interested in relative order of the values “A[i] <p”, which should finish at the left part of partitioned sequence, as well as we are not interested in the relative order of the ones, which should finish at the right part. The only thing that we are interested in, is for all values less than ‘p’ to come before the other ones. This fact allows us to alter the cycles of assignments in Hoare scheme, and to come up with only 1 cycle of assignments, containing all the ‘L’ values, which should somehow be rearranged. Let me first describe the altered partitioning scheme with the help of the following illustration: The altered partitioning scheme, applied to the same sequence “A”. As the pivot “p=20” is not changed, the “L=8” values which should be rearranged are also the same. All the arrows represent the only cycle of assignments in the new scheme. After moving all the ‘L’ values upon it, we will end up with an alternative partitioned sequence. So what are we doing here? As in the original Hoare scheme, at first we scan from the left and find such value “A[i]>=p” which should go to the right part. But instead of swapping it with some other value, we just remember it: “tmp := A[i]”.Next we scan from right and find such value “A[j]<p” which should go to the left part. And we just do the assignment “A[i] := A[j]”, without loosing the value of “A[i]”, as it is already stored in “tmp”.Next we continue the scan from left, and find such value “A[i]>=p” which also should go to the right part. So we do the assignment “A[j] := A[i]”, without loosing value “A[j] ”, as it is already assigned to the previous position of ‘i’.This pattern continues, and once indexes i and j meet each other, it remains to place some value greater than ‘p’ to “A[j]”, we just do “A [j] := tmp”, as initially the variable “tmp” was holding the first value from left, greater than ‘p’. The partitioning is completed. As we see, here we have only 1 cycle of assignments which goes over all the ‘L’ values, and in order to properly rearrange them it requires just “L+1” value assignments, compared to the “3*L/2” assignments of Hoare scheme. I prefer to call this new partitioning scheme a “Cyclic partition”, because all the ‘L’ values which should be somehow rearranged, now reside on a single cycle of assignments. Here is the pseudo-code of the Cyclic partition algorithm. Compared to the pseudo-code of Hoare scheme the changes are insignificant, but now we always do 1.5x fewer assignments. // Partitions sequence A[0..N) with pivot value ‘p’ // by “cyclic partition” scheme, and returns index of // the first value of the resulting right part. function partition_cyclic( A[0..N) : Array of Integers, p: Integer ) : Integer i := 0 j := N-1 // Find the first value from left, which is not on its place while i < N and A[i] < p i := i+1 if i == N return N // All N values go to the left part // The cycle of assignments starts here tmp := A[i] // The only write to ‘tmp’ variable while true // Move right index ‘j’, as much as needed while i < j and A[j] >= p j := j-1 if i == j // Check for completion of scans // The next assignment in the cycle A[i] := A[j] i := i+1 // Move left index ‘i’, as much as needed while i < j and A[i] < p i := i+1 if i == j // Check for completion of scans // The next assignment in the cycle A[j] := A[i] j := j-1 // The scans have completed A[j] := tmp // The only read from ‘tmp’ variable return jHere lines 5 and 6 set up the start indexes for both scans (‘i’—from left to right, and ‘j’—from right to left).Lines 7–9 search from left for such a value “A[i]”, which should go to the right part. If it turns out that there is no such value, and all N items belong to the left part, lines 10 and 11 report that and finish the algorithm.Otherwise, if such value was found, at line 13 we remember it in the ‘tmp’ variable, thus opening a slot at index ‘i’ for placing another value there.Lines 15–19 search from right for such a value “A[j]” which should be moved to the left part. Once found, lines 20–22 place it into the empty slot at index ‘i’, after which the slot at index ‘j’ becomes empty, and waits for another value.Similarly, lines 23–27 search from left for such a value “A[i]” which should be moved to the right part. Once found, lines 28–30 place it into the empty slot at index ‘j’, after which the slot at index ‘i’ again becomes empty, and waits for another value.This pattern is continued in the main loop of the algorithm, at lines 14–30.Once indexes ‘i’ and ‘j’ meet each other, we have an empty slot there, and lines 31 and 32 assign the originally remembered value in ‘tmp’ variable there, so the index ‘j’ becomes the first one to hold such value which belongs to the right part.The last line returns that index. This way we can write 2 assignments of the cycle together in the loop’s body, because as it was proven in chapter 3, ‘L’ is always an even number. Time complexity of this algorithm is also O(N), as we still scan the sequence from both ends. It just does 1.5x less value assignments, so the speed-up is reflected only in the constant factor. An implementation of Cyclic partition in the C++ language is present on GitHub, and is referenced at the end of this story [1]. I also want to show that the value ‘L’ figuring in the Hoare scheme can’t be lowered, regardless of what partitioning scheme we use. Assume that after partitioning, the length of the left part will be “left_n”, and length of the right part will be “right_n”. Now, if looking at the left-aligned “left_n”-long area of the original unpartitioned array, we will find some ‘t1’ values there, which are not at their final places. So those are such values which are greater or equal to ‘p’, and should be moved to the right part anyway. Illustration of the sequence before and after partitioning. Length of the left part is “left_n=7” and length of the right part is “right_n=5”. Among the first 7 values of the unpartitioned sequence there are “t1=3” of them which are greater than “p=20” (the yellow ones), and should be somehow moved to the right part. And among the last 5 values of the unpartitioned sequence there are “t2=3” of them which are less than ‘p’ (the light green ones), and should be somehow moved to the left part. Similarly, if looking at the right-aligned “right_n”-long area of the original unpartitioned array, we will find some ‘t2’ values there, which are also not at their final places. Those are such values which are less than ‘p’, and should be moved to the left part. We can’t move less than ‘t1’ values from left to right, as well as we can’t move less than ‘t2’ values from right to left. In the Hoare partitioning scheme, the ‘t1’ and ‘t2’ values are the ones which are swapped between each other. So there we have: t1 = t2 = L/2, t1 + t2 = L. Which means that ‘L’ is actually the minimal amount of values which should be somehow rearranged, in order for the sequence to become partitioned. And the Cyclic partition algorithm rearranges them doing just “L+1” assignments. That’s why I allow myself to call this new partitioning scheme as “nearly optimal”. 6. Experimental results It is already proven that the new partitioning scheme is doing fewer assignments of values, so we can expect it to run faster. However, before publishing the algorithm I wanted to collect the results also in an experimental way. I have compared the running times when partitioning by the Hoare scheme and by Cyclic partition. All the experiments were performed on randomly shuffled arrays. The parameters by which the experiments were different are: N—length of the array,“left_part_percent”—percent of length of the left part (upon N), which results after partitioning,running on array of primitive data type variables (32-bit integers) vs. on array of some kind of large objects (256-long static arrays of 16-bit integers). I want to clarify why I found it necessary to run partitioning both on arrays of primitive data types, and on arrays of large objects. Here, by saying “large object” I mean such values, which occupy much more memory, compared to primitive data types. When partitioning primitive data types, assigning one variable to another will work as fast as almost all other instructions used in both algorithms (like incrementing an index or checking condition of the loop). Meanwhile when partitioning large objects, assigning one such object to another will take significantly more time, compared to other used instructions, and that is when we are interested to reduce the overall number of value assignments as much as that is possible. I’ll explain why I decided to run different experiments with different values of “left_part_percent” a bit later in this chapter. The experiments were performed with Google Benchmark, under the following system: CPU: Intel Core i7–11800H @ 2.30GHz RAM: 16.0 GB OS: Windows 11 Home, 64-bit Compiler: MSVC 2022 ( /O2 /Ob2 /MD /GR /Gd ) Partitioning arrays of a primitive data type Here are the results of running partition algorithms on arrays of primitive data type—32 bit integer: Running times of partitioning algorithms, on array of 32-bit integers, having length N=10’000. Blue bars correspond to partitioning by Hoare scheme, while red bars correspond to the Cyclic partition algorithm. Partitioning algorithms are run for 5 different cases, based on “left_part_percent”—percent of the left part of array (upon N), that will appear after partitioning. The time is presented in We see that there is no obvious correlation between value of “left_part_percent” and relative difference in running times of the 2 algorithms. This kind of behavior is expected. Partitioning arrays of “large objects” And here are the results of running the 2 partitioning algorithms on array of so called “large objects”—each of which is an 256-long static array of 16-bit random integers. Running times of partitioning algorithms, on array of large objects (256-long static arrays of random 16-bit integers), having length N=10’000. Blue bars correspond to partitioning by Hoare scheme, while red bars correspond to the Cyclic partition algorithm. Partitioning algorithms are run for 5 different cases, based on “left_part_percent”—percent of the left part of array (upon N), that will appear after partitioning. The time is presented in Now we see an obvious correlation: Cyclic partition outperforms the Hoare scheme as more, as the “left_part_percent” is closer to 50%. In other words, Cyclic partition works relatively faster when after partitioning the left and right parts of the array appear to have closer lengths. This is also an expected behavior. Explanation of the results — Why does partitioning generally take longer, when “left_part_percent” is closer to 50%? Let’s imagine for a moment a corner case—when after partitioning almost all values appear in left (or right) part. This will mean that almost all values of the array were less (or greater) than the pivot value. And it will mean that during the scan, all those values were considered to be already at their final positions, and very few assignments of values were performed. If trying to imagine the other case—when after partitioning the left and right parts appear to have almost equal length, it will mean that a lot of value assignments were performed (as initially all the values were randomly shuffled in the array). — When looking at partitioning of “large objects”, why does the difference in running time of the 2 algorithms become greater when “left_part_percent” gets closer to 50%? The previous explanation shows that when “left_part_percent” gets closer to 50%, there arises need to do more assignments of values in the array. In previous chapters we also have shown that Cyclic partition always makes 1.5x less value assignments, compared to Hoare scheme. So that difference of 1.5 times brings more impact on overall running time when we generally need to do more rearrangements of values in the array. — Why is the absolute time (in nanoseconds) greater when partitioning “large objects”, rather than when partitioning 32-bit integers? This one is simple—because assigning one “large object” to another takes much more time, than assigning one primitive data type to another. I also run all the experiments on arrays with different lengths, but the overall picture didn’t change. 7. Conclusion In this story I introduced an altered partitioning scheme, called “Cyclic partition”. It always makes 1.5 times fewer value assignments, compared to the currently used Hoare partitioning scheme. Surely, when partitioning a sequence, value assignment is not the only type of operation performed. Besides it, partitioning algorithms check values of input sequence “A” for being less or greater than the pivot value ‘p’, as well as they do increments and decrements of indexes over “A”. The amounts of comparisons, increments and decrements are not affected by introducing “cyclic partition”, so we can’t just expect it to run 1.5x faster. However, when partitioning an array of complex data types, where value assignment is significantly more time-consuming than simply incrementing or decrementing an index, the overall algorithm can actually run up to 1.5 times faster. The partitioning procedure is the main routine of the QuickSort algorithm, as well as of the algorithm for finding the median of an unsorted array, or finding its k-th order statistic. So we can also expect for those algorithms to have a performance gain up to 1.5 times, when working on complex data types. My gratitudes to: —Roza Galstyan, for reviewing the draft of the story and suggesting useful enhancements, —David Ayrapetyan, for the spell check ( https://www.linkedin.com/in/davidayrapetyan/ ), —Asya Papyan, for careful design of all used illustrations ( https://www.behance.net/asyapapyan ).If you enjoyed this story, feel free to find and connect me on LinkedIn ( https://www.linkedin.com/ in/tigran-hayrapetyan-cs/ ).All used images, unless otherwise noted, are designed by request of the author. [1]—Implementation of Cyclic partition in C++ : https://github.com/tigranh/cyclic_partition Cyclic Partition: An Up to 1.5x Faster Partitioning Algorithm was originally published in Towards Data Science on Medium, where people are continuing the conversation by highlighting and responding to this story. Leave a Comment You must be logged in to post a comment.
{"url":"https://businessblog.ai/business/cyclic-partition-an-up-to-1-5x-faster-partitioning-algorithm/","timestamp":"2024-11-15T00:29:45Z","content_type":"text/html","content_length":"228779","record_id":"<urn:uuid:935e82a3-d287-4987-b4bc-5ee404e65eba>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00438.warc.gz"}
Solution assignment 08 Fractional functions and graphs Return to Assignments Fractional functions and graphs Assignment 8 Given the function: the vertical asymptote; the horizontal asymptote; the intersection point with the the intersection point with the Based on these results sketch the result in the figure. The function looks like a fraction but actually it is not. We can simplify the function: Actually the function is a straight line which is not defined for The graph of the original function is actually a straight line with a small 'hole' for
{"url":"https://4mules.nl/en/fractional-functions-and-graphs/assignments/solution-assignment-08-fractional-functions-and-graphs/","timestamp":"2024-11-10T14:30:56Z","content_type":"text/html","content_length":"40820","record_id":"<urn:uuid:a1534c4f-0c2a-45d4-a117-5c70727f64a5>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00499.warc.gz"}
Density - Knowunity Density: AP Physics 2 Study Guide 🎓 Welcome, aspiring physicists and inquisitive minds! Today we dive into the fascinating world of density, where we’ll uncover why some things sink like a stone, while others float like a feather. Spoiler alert: It’s all about density! 🌊 Let's get started. What is Density? 🤔 So, what exactly is density? Density is the measure of mass per unit volume of a substance. Picture density as how tightly a material’s mass is packed within a particular volume—like sardines in a can, but less fishy-smelling. In physics, we commonly use kilograms per cubic meter (kg/m³) to express density. However, grams per cubic centimeter (g/cm³) might also pop up, inviting you to flex those unit conversion muscles! The formula for density (ρ) is simple but powerful: [ \text{Density} = \frac{\text{Mass}}{\text{Volume}} ] In terms of letters and symbols, it looks like this: [ \rho = \frac{m}{V} ] Key Concepts and Properties 📏 1. Intrinsic Property: Density is intrinsic, meaning it doesn’t change regardless of the size or shape of the sample. Whether you have a glacier’s worth of ice or a single ice cube, the density remains the same! Imagine that! 🧊 2. Extensive Properties: Mass and volume are extensive properties, dependent on the quantity of the substance. If you have more of the stuff, you have more mass and volume, but the density stays sassy and unchanged. 3. Comparison: Density allows for comparing different materials. For example, gold is denser than aluminum, so a gold brick and an aluminum brick of the same volume will have vastly different weights. Ka-ching! 💰 4. Sink or Float?: Density is like the ultimate party guest—it decides who sinks and who floats in the liquid. If the object’s density is higher than the liquid’s, it sinks. If it’s lower, it floats. Make way for the life of the pool party! 🎈 Intriguing Examples & Calculations ✏️ Let's sprinkle in some practice problems to solidify our understanding: Example 1: A block of metal has a mass of 50 grams and a volume of 10 cubic centimeters. What’s the density of the metal? [ \text{Density} = \frac{50, \text{g}}{10, \text{cm}^3} = 5, \text{g/cm}^3 ] Example 2: A cylinder of wood has a mass of 200 grams and a radius of 2 centimeters. What’s the density of the wood, if its height is 10 cm? First, calculate the volume of the cylinder: [ V = \pi r^2 h = \pi (2, \text{cm})^2 \times 10, \text{cm} = 40\pi , \text{cm}^3 ] Then, plug into the density formula: [ \text{Density} = \frac{200, \text{g}}{40\pi, \text{cm}^3} \approx 1.59, \text{g/cm}^3 ] Example 3: A swimming pool has a volume of 50,000 liters and a mass of 400,000 grams of water. What’s the density of the water in the pool? [ \text{Density} = \frac{400,000, \text{g}}{50,000, \text{L}} = 8, \text{g/L} ] Ice vs. Water: A Chilling Mystery 🧊 One of the coolest facts (pun intended) is why ice floats on water. Predict the density of ice relative to water at 0°C and conduct an investigation: The density of ice is less than that of water at 0°C. Despite solids typically being denser than liquids, water is the quirky exception. Ice forms a lattice structure with open spaces, making it less dense. This is why ice floats and your Titanic recreations will always end in tragedy, not triumph. 1. Obtain containers with water and ice. 2. Measure the mass and volume using a balance and graduated cylinder. 3. Calculate the densities using the classic ( \rho = \frac{m}{V} ). 4. Compare your results and nail that science project! Frequently Asked “Blazing” Questions 🔥 Q: Which has a greater density: 100 grams of mercury or 10,000 kilograms of mercury? A: Both have the same density! Density is intrinsic and doesn’t depend on the amount. Q: If Object A and Object B have the same density, and you have more volume of Object B, which has more mass? A: Object B, because more volume at the same density means more mass. Q: Why will something with a specific gravity less than 1 float in water? A: Because specific gravity tells us about density relative to water. Less than 1 means the substance is less dense and will float, like that sponge at bath time! 🧽 Key Terms to Master 🧠 • Density: Measure of mass per unit volume. • Extensive Property: Property depending on the amount of the substance (e.g., mass, volume). • Intensive Property: Property independent of the amount of the substance (e.g., density). • Specific Gravity: Ratio of the density of a substance to the density of a reference substance, usually water. • Volume of a Cylinder: Calculated by ( V = \pi r^2 h ). Fun Physics Fact 📚 Did you know Archimedes screamed "Eureka!” while discovering the principle of buoyancy? Bet you didn’t know bath time could be that enlightening! Conclusion 🌟 By mastering density, you can predict whether objects will float or sink, determine the intrinsic properties of materials, and overall, feel like a physics wizard. So go ahead, next time you’re in the tub, amaze your friends with your knowledge of why your rubber ducky floats, and don’t forget to shout "Eureka!” while you're at it.
{"url":"https://knowunity.com/subjects/study-guide/density","timestamp":"2024-11-07T04:18:23Z","content_type":"text/html","content_length":"1051326","record_id":"<urn:uuid:1263163b-e05d-4cd1-be1c-9c2836488370>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00359.warc.gz"}
Cantor counting A month ago I had occasion to use Terence's all-embracing aphorism homo sum - humani nihil a me alienum puto , in the context of what should and should not be allowable in discourse. Because something makes you feel uncomfortable is not sufficient reason to close up shop on its depiction or discussion. I'd like to be able to lay claim to a narrower constituency which I won't affect to render in Latin, so Yoda-speak will have to do: "Scientist I am, beyond me nothing within its boundaries is". Which makes it sufficiently obscure and oracular, but I've already admitted that wide chunks of physics send my brain teetering over the abyss or at least skittering down the inclined plane I'm teaching elementary math to our Yr1 biologists, or at least I am hosting a forum whereby the youngsters can use Khanacademy to practice their skills in geometry and algebra. It reminds me that, when I was their age, I was quite nifty at mathematics and could knock off a bit a calculus, or trick about with set theory. But I know that there are limits to what I can get my brain to accept. Today is Georg Cantor's birthday. He was born on 19th February 1845 in St Petersburg Санкт-Петербург Petrograd Leningrad St Petersburg (they've changed the name a few times). The 19th February was an example of the holdout that the Russians kept from embracing the Gregorian calendar until long after the rest of Europe. 19/02/45 was called 03/03/45 by everyone to the West of the cultural capital of Russia. Cantor was smart and forced us to think on the nature of infinity in mind-melting ways so that since his time we recognise that there are some infinities that are bigger than others. He showed first that the infinity of odd numbers is the same size as the infinity of all countable numbers although reason tells us that there must be twice as many of the latter. He did this by explaining s l o w l y his idea of mapping. He said that you could pair off the two sets of numbers two at a time 1=1 2=3 3=5 4=7 5=9 6=11 . . . ad infinitum until they had all gone, so they must be equivalent in size. Qualified If you can accept that you can push yourself to accept that rational numbers (fractions+integers to us) are also countable in the same way. All you need to do is organise your data in a particular way so that you can tick the numbers off sequentially and know you're not missing any out. Cantor's insight was to set up all the fractions (which includes all the integers because 1 = 1/1; 2 = 2/1 etc.) in a grid and count them off along the blue diagonal against the counting numbers. So far, so good? Cantor then went on to apply similar reasoning to show that the points on a line were equivalent to the points in n-dimensional space. This wrecked even Cantor's head and he famously wrote to his correspondent Dedekind " Je le vois, mais je ne le crois pas !" ("I see it, but I don't believe it!"). You might think that you (or at least mathematicians) can count set of thing - but you can't. And Cantor elegantly showed that the real numbers are not countable, that there must be some real numbers that fall between the cracks of the rationals . . . and so the infinity of reals must be bigger than the infinity of the countable. Real numbers are the rationals (integers and fractions) PLUS the irrational numbers that cannot be represented by a fraction. Although 22/7 is a damned good approximation for the ratio between the circumference and diameter of a circle, any fule kno that this ratio is represented by π Pi 3.1415926... Another beautiful real is the golden ratio Phi 1.6180339... Last year, I laid out a bizarre connexion between the three most famous irrationals. As with the diagonalised fractions, Cantor challenged us to give him any (" big as you like, big as you can imagine ALL of them ") set of real numbers and he would write them in a list one above the other (for convenience just dealing with the digits after the deci-point): 71828 18284 59045 23536 ..... (e) 14159 26535 89793 23846 ..... ( 61803 39887 49894 84820 ..... (ϕ) 77777 77777 77777 77777 ..... (7/9) ..... ..... Cantor then showed that we couldn't have included ALL the real numbers in the list because he could imagine/create/write a number that was different from the first number in its first digit AND different from the second number in its second digit AND different from the third number in its third digit etc. That argument could be re-employed in the new N+1 list, with another number. Sooooo, the infinity of real numbers is bigger than the infinity of rationals. So far my head isn't piled up in a heap at the bottom of the cliffs of insanity , but beyond this much of mathematics quickly wings off towards the horizon leaving me behind in a pedestrian grounded world knowing my limits.
{"url":"http://blobthescientist.blogspot.com/2014/03/cantor-counting.html","timestamp":"2024-11-06T05:42:58Z","content_type":"text/html","content_length":"94561","record_id":"<urn:uuid:1bcff332-4b8f-4d4b-bbc3-fa44c6fab981>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00713.warc.gz"}
Class 4 Maths Jugs and Mugs Worksheet Read and download free pdf of Class 4 Maths Jugs and Mugs Worksheet. Download printable Mathematics Class 4 Worksheets in pdf format, CBSE Class 4 Mathematics Jugs and Mugs Worksheet has been prepared as per the latest syllabus and exam pattern issued by CBSE, NCERT and KVS. Also download free pdf Mathematics Class 4 Assignments and practice them daily to get better marks in tests and exams for Class 4. Free chapter wise worksheets with answers have been designed by Class 4 teachers as per latest examination pattern Jugs and Mugs Mathematics Worksheet for Class 4 Class 4 Mathematics students should refer to the following printable worksheet in Pdf in Class 4. This test paper with questions and solutions for Class 4 Mathematics will be very useful for tests and exams and help you to score better marks Class 4 Mathematics Jugs and Mugs Worksheet Pdf Class 4 Maths Jugs and Mugs Worksheet. Doing worksheets are important to check your understanding of various concepts. Download printable worksheets for class 4 Maths and after you have solved the questions refer to the answers. All worksheets have been made for all important topics in the chapter as per all topics given in NCERT book. Q.1. Which is more : 1 Litre or 1000 ml Q.2. Fill in the blanks : (i) 300 ml + _______ = 1 Litre (ii) 500 ml + _______ = 1 Litre (iii) 450 ml + _______ = 1 Litre Q.3. True or False 1.2 Litre = 1200 ml. True/False Q.4. Estimate the capacity of a table spoon (Tick the right answer) (a) 15 ml (c) 150 ml (b) 1500 ml (d) 1.5 ml Q.5 1/2 Litre is equal to (a) 200 ml (b) 500 ml (c) 100 ml (d) 700 ml Q.6. Match the following : (a) 6 ml + 4 ml + 5 ml ½ Litre (b) 350 ml + 150 ml 750 ml (c) 650 ml + 350 ml 2ml + 8 ml + 5 ml (d) 400 ml + 100 ml + 250 ml 1 Litre Q.7. Encircle the wrong combination : Click on the link below to download Class 4 Maths Jugs and Mugs Worksheet CBSE Class 4 Mathematics Jugs and Mugs Worksheet The above practice worksheet for Jugs and Mugs has been designed as per the current syllabus for Class 4 Mathematics released by CBSE. Students studying in Class 4 can easily download in Pdf format and practice the questions and answers given in the above practice worksheet for Class 4 Mathematics on a daily basis. All the latest practice worksheets with solutions have been developed for Mathematics by referring to the most important and regularly asked topics that the students should learn and practice to get better scores in their examinations. Studiestoday is the best portal for Printable Worksheets for Class 4 Mathematics students to get all the latest study material free of cost. Worksheet for Mathematics CBSE Class 4 Jugs and Mugs Teachers of studiestoday have referred to the NCERT book for Class 4 Mathematics to develop the Mathematics Class 4 worksheet. If you download the practice worksheet for the above chapter daily, you will get better scores in Class 4 exams this year as you will have stronger concepts. Daily questions practice of Mathematics printable worksheet and its study material will help students to have a stronger understanding of all concepts and also make them experts on all scoring topics. You can easily download and save all revision Worksheets for Class 4 Mathematics also from www.studiestoday.com without paying anything in Pdf format. After solving the questions given in the practice sheet which have been developed as per the latest course books also refer to the NCERT solutions for Class 4 Mathematics designed by our teachers Jugs and Mugs worksheet Mathematics CBSE Class 4 All practice paper sheet given above for Class 4 Mathematics have been made as per the latest syllabus and books issued for the current academic year. The students of Class 4 can be assured that the answers have been also provided by our teachers for all test paper of Mathematics so that you are able to solve the problems and then compare your answers with the solutions provided by us. We have also provided a lot of MCQ questions for Class 4 Mathematics in the worksheet so that you can solve questions relating to all topics given in each chapter. All study material for Class 4 Mathematics students have been given on studiestoday. Jugs and Mugs CBSE Class 4 Mathematics Worksheet Regular printable worksheet practice helps to gain more practice in solving questions to obtain a more comprehensive understanding of Jugs and Mugs concepts. Practice worksheets play an important role in developing an understanding of Jugs and Mugs in CBSE Class 4. Students can download and save or print all the printable worksheets, assignments, and practice sheets of the above chapter in Class 4 Mathematics in Pdf format from studiestoday. You can print or read them online on your computer or mobile or any other device. After solving these you should also refer to Class 4 Mathematics MCQ Test for the same chapter. Worksheet for CBSE Mathematics Class 4 Jugs and Mugs CBSE Class 4 Mathematics best textbooks have been used for writing the problems given in the above worksheet. If you have tests coming up then you should revise all concepts relating to Jugs and Mugs and then take out a print of the above practice sheet and attempt all problems. We have also provided a lot of other Worksheets for Class 4 Mathematics which you can use to further make yourself better in Mathematics Where can I download latest CBSE Practice worksheets for Class 4 Mathematics Jugs and Mugs You can download the CBSE Practice worksheets for Class 4 Mathematics Jugs and Mugs for the latest session from StudiesToday.com Can I download the Practice worksheets of Class 4 Mathematics Jugs and Mugs in Pdf Yes, you can click on the links above and download chapter-wise Practice worksheets in PDFs for Class 4 for Mathematics Jugs and Mugs Are the Class 4 Mathematics Jugs and Mugs Practice worksheets available for the latest session Yes, the Practice worksheets issued for Jugs and Mugs Class 4 Mathematics have been made available here for the latest academic session How can I download the Jugs and Mugs Class 4 Mathematics Practice worksheets You can easily access the links above and download the Class 4 Practice worksheets Mathematics for Jugs and Mugs Is there any charge for the Practice worksheets for Class 4 Mathematics Jugs and Mugs There is no charge for the Practice worksheets for Class 4 CBSE Mathematics Jugs and Mugs you can download everything free How can I improve my scores by solving questions given in Practice worksheets in Jugs and Mugs Class 4 Mathematics Regular revision of practice worksheets given on studiestoday for Class 4 subject Mathematics Jugs and Mugs can help you to score better marks in exams Are there any websites that offer free Practice test papers for Class 4 Mathematics Jugs and Mugs Yes, studiestoday.com provides all the latest Class 4 Mathematics Jugs and Mugs test practice sheets with answers based on the latest books for the current academic session Can test sheet papers for Jugs and Mugs Class 4 Mathematics be accessed on mobile devices Yes, studiestoday provides worksheets in Pdf for Jugs and Mugs Class 4 Mathematics in mobile-friendly format and can be accessed on smartphones and tablets. Are practice worksheets for Class 4 Mathematics Jugs and Mugs available in multiple languages Yes, practice worksheets for Class 4 Mathematics Jugs and Mugs are available in multiple languages, including English, Hindi
{"url":"https://www.studiestoday.com/practice-worksheets-mathematics-class-4-maths-jugs-and-mugs-worksheet-244424.html","timestamp":"2024-11-07T20:21:40Z","content_type":"text/html","content_length":"122100","record_id":"<urn:uuid:22a7adb8-2a7d-4556-a79f-a5018ba16bf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00013.warc.gz"}
135 cm to inches The Importance of Unit Conversion in Everyday Life When it comes to navigating the complexities of daily life, the importance of unit conversion cannot be overstated. From buying groceries to renovating a home, understanding and utilizing different units of measurement is crucial for accurate calculations and effective communication. Failing to convert units can lead to confusion, errors, and ultimately, inefficient decision-making. Whether it’s converting inches to centimeters or gallons to liters, having a firm grasp on unit conversion is an invaluable skill that can greatly enhance one’s proficiency in various aspects of everyday In scientific fields such as engineering and chemistry, precision and accuracy are paramount. In these domains, precise unit conversion becomes even more crucial. The tiniest mistake in converting units can have significant repercussions, potentially jeopardizing the outcome of an experiment or compromising the structural integrity of a building. Moreover, the ability to convert between different metric and imperial units is essential for seamless collaboration and communication between professionals from around the world. From academic research to industrial production, unit conversion serves as a universal language that allows for coherence and consistency in a globalized society. Understanding the Centimeter and Inch Measurements Centimeters and inches are both commonly used units of length measurement around the world. While the centimeter is the primary unit of length in the metric system, the inch is the prevalent unit in the imperial system. Understanding these measurements is essential in various aspects of our everyday lives, from measuring the length and width of objects to buying clothes and furniture. The centimeter, symbolized as cm, is a unit of length in the metric system. It is equivalent to one hundredth (1/100) of a meter. The centimeter is often preferred for precise measurements due to its smaller unit size. On the other hand, the inch, represented as in. or “, is a unit of length in the imperial system. It is defined as 1/12th of a foot and is commonly used in countries like the United States, Canada, and the United Kingdom. Understanding the relationship between these measurements can be valuable when converting from one system to another and when dealing with international A Historical Perspective on Centimeters and Inches Centimeters and inches are both widely used units of measurement around the world, but they have different origins and histories. The centimeter, abbreviated as cm, is a metric unit of length derived from the meter. It was first introduced in the late 18th century during the French Revolution when the metric system was established to provide a universal measurement standard. The centimeter is defined as one hundredth of a meter, making it a smaller unit suitable for measuring shorter distances. On the other hand, inches trace their roots back to ancient civilizations. The inch, symbolized as in., was initially based on the width of a thumb or the length of three grains of barley placed end to end. This method of measurement allowed for easy and rough estimations. Over time, the inch became standardized in different cultures, taking on various values. However, it wasn’t until the 12th century that King Edward II of England defined the inch as three grains of barley placed lengthwise instead of end to end. This created a consistent and more accurate unit of measurement. The Conversion Formula for Centimeters to Inches Centimeters and inches are two commonly used units of measurement, especially when it comes to length or height. It is often necessary to convert between these two units in various situations, ranging from carpentry to sewing to calculating body measurements. In order to accurately convert centimeters to inches, a simple formula can be employed. The conversion formula for centimeters to inches is straightforward. To convert centimeters to inches, you need to divide the value in centimeters by a conversion factor of 2.54. This conversion factor is based on the ratio between the standard inch and the metric centimeter. By dividing the centimeter value by 2.54, you obtain the equivalent length in inches. This formula allows for precise and accurate conversions between these two units, ensuring that measurements can be easily understood and utilized regardless of the unit used. Practical Examples of Converting 135 cm to Inches The process of converting centimeters to inches is a useful skill that can be applied in various everyday situations. For instance, let’s consider an example where we need to convert a length of 135 centimeters to inches. To do this, we will employ a simple conversion formula. One inch is equivalent to 2.54 centimeters, which means that to convert centimeters to inches, we need to divide the given length by 2.54. Applying this formula to our example, we find that 135 centimeters is equal to approximately 53.15 inches. Understanding how to convert centimeters to inches allows us to better comprehend and communicate measurements across different systems. This knowledge can be particularly useful when dealing with international measurements, as inches are commonly used in the United States while centimeters are prevalent in many other countries. For instance, if you come across a product online that states its dimensions in centimeters but you prefer to visualize it in inches, knowing the conversion process empowers you to make a more informed decision. Converting Inches to Centimeters: The Reverse Calculation When it comes to converting inches to centimeters, the reverse calculation can be done using a simple formula. First, let’s understand the relationship between inches and centimeters. An inch is a unit of length commonly used in the United States and some other countries, while the centimeter is a unit of length used in most parts of the world. While inches are divided into 12 equal parts called inches, centimeters are divided into 100 equal parts called centimeters. To convert inches to centimeters, the formula can be expressed as: centimeters = inches x 2.54 This formula is derived from the fact that there are 2.54 centimeters in every inch. By multiplying the number of inches by 2.54, we can easily convert inches to centimeters. For example, if we have a measurement of 10 inches, the conversion would be 10 x 2.54 = 25.4 centimeters.
{"url":"https://convertertoolz.com/conv/135-cm-to-inches/","timestamp":"2024-11-09T13:17:34Z","content_type":"text/html","content_length":"45449","record_id":"<urn:uuid:d07b36a6-9351-4f4d-a404-956357a436fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00057.warc.gz"}
Hamiltonian dynamics of degenerate quartets of deep-water waves Add to your list(s) Download to your calendar using vCal If you have a question about this talk, please contact nobody. HY2W05 - Physical applications In the weakly nonlinear theory of waves on the surface of deep water, the simplest interaction takes place between quartets of waves. This interaction was first observed using perturbation methods (e.g. by Stokes (1847), and later by Phillips and others in the 1960s), which assume the water wave problem can be expanded in terms of a small parameter. Today many model equations exist which capture the salient features of nonlinear interaction – one of these, the Zakharov equation, will be the starting point for this talk. The Zakharov equation has been used to derive the nonlinear Schrödinger equation (NLS), and many of its modifications, in a limit of narrow bandwidth. It has also been used to study the modulational (Benjamin-Feir) instability of water waves (e.g. Yuen & Lake (1982)), where it provides a refinement of the thresholds derived from the NLS . Such instability criteria have classically been derived from linearisation, and subsequent behaviour obtained through numerical solution of the underlying equations. I will describe an approach to the Benjamin-Feir instability based on the degenerate quartets of the discretised Zakharov equation which is free of any restriction on spectral bandwidth. Inspired by related work in optics (Capellini & Trillo (1991)) this problem can be recast as a planar Hamiltonian system in terms of the dynamic phase and a single modal amplitude. In this simple form, the full, nonlinear dynamics are readily apparent without recourse to numerical solutions. The dynamical system is characterised by two free parameters: the wave action and the separation between the carrier and the side-bands; the latter serves as a bifurcation parameter. Fixed points of our system correspond to non-trivial, steady-state nearly-resonant degenerate quartets, of the type recently found by Liao et al (2016). I will explain the connection between saddle-points and the instability of uniform and bichromatic wave trains, and show that heteroclinic orbits correspond to breather-like solutions of this simplified system. This work is joint with David Andrade. This talk is part of the Isaac Newton Institute Seminar Series series. This talk is included in these lists: Note that ex-directory lists are not shown.
{"url":"https://talks.cam.ac.uk/talk/index/183818","timestamp":"2024-11-12T16:35:39Z","content_type":"application/xhtml+xml","content_length":"14448","record_id":"<urn:uuid:bdae5457-4f01-4b01-a0bb-bb91430d7f20>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00432.warc.gz"}
As always I believe I should start each chapter with a warm up typing exercise so here is a short program to compute the absolute value of a number: n = float(input("Number? ")) if n < 0: print("The absolute value of", n, "is", -n) print("The absolute value of", n, "is", n) Here is the output from the two times that I ran this program: Number? -34 The absolute value of -34.0 is 34.0 Number? 1 The absolute value of 1.0 is 1.0 So what does the computer do when when it sees this piece of code? First it prompts the user for a number with the statement n = float(input("Number? ")). Next it reads the line if n < 0: . If n is less than zero Python runs the line print("The absolute value of", n, "is", -n). Otherwise python runs the line print("The absolute value of", n, "is", n). More formally Python looks at whether the expression n < 0 is true or false. A if statement is followed by a block of statements that are run when the expression is true. Optionally after the if statement is a else statement. The else statement is run if the expression is false. There are several different tests that a expression can have. Here is a table of all of them: operator function < less than <= less than or equal to > greater than >= greater than or equal to == equal != not equal Another feature of the if command is the elif statement. It stands for else if and means if the original if statement is false and then the elif part is true do that part. Here's a example: a = 0 while a < 10: a = a + 1 if a > 5: print(a, " > ", 5) elif a <= 7: print(a, " <= ", 7) print("Neither test was true") and the output: 1 <= 7 2 <= 7 3 <= 7 4 <= 7 5 <= 7 6 > 5 7 > 5 8 > 5 9 > 5 10 > 5 Notice how the elif a <= 7 is only tested when the if statement fail to be true. elif allows multiple tests to be done in a single if statement. #Plays the guessing game higher or lower # (originally written by Josh Cogliati, improved by Quique) #This should actually be something that is semi random like the # last digits of the time or something else, but that will have to # wait till a later chapter. (Extra Credit, modify it to be random # after the Modules chapter) number = 78 guess = 0 while guess != number: guess = int(input("Guess a number: ")) if guess > number: print("Too high") elif guess < number: print("Too low") print("Just right") Sample run: Guess a number:100 Too high Guess a number:50 Too low Guess a number:75 Too low Guess a number:87 Too high Guess a number:81 Too high Guess a number:78 Just right #Asks for a number. #Prints if it is even or odd number = float(input("Tell me a number: ")) if number % 2 == 0: print(number, "is even.") elif number % 2 == 1: print(number, "is odd.") print(number, "is very strange.") Sample runs. Tell me a number: 3 3.0 is odd. Tell me a number: 2 2.0 is even. Tell me a number: 3.14159 3.14159 is very strange. #keeps asking for numbers until 0 is entered. #Prints the average value. count = 0 sum = 0.0 number = 1 #set this to something that will not exit # the while loop immediatly. print("Enter 0 to exit the loop") while number != 0: number = float(input("Enter a number:")) count = count + 1 sum = sum + number count = count - 1 #take off one for the last number print("The average was:", sum/count) Sample runs Enter 0 to exit the loop Enter a number:3 Enter a number:5 Enter a number:0 The average was: 4.0 Enter 0 to exit the loop Enter a number:1 Enter a number:4 Enter a number:3 Enter a number:0 The average was: 2.66666666667 #keeps asking for numbers until count have been entered. #Prints the average value. sum = 0.0 print("This program will take several numbers then average them") count = int(input("How many numbers would you like to sum:")) current_count = 0 while current_count < count: current_count = current_count + 1 print("Number ", current_count) number = float(input("Enter a number:")) sum = sum + number print("The average was:", sum/count) Sample runs This program will take several numbers then average them How many numbers would you like to sum:2 Number 1 Enter a number:3 Number 2 Enter a number:5 The average was: 4.0 This program will take several numbers then average them How many numbers would you like to sum:3 Number 1 Enter a number:1 Number 2 Enter a number:4 Number 3 Enter a number:3 The average was: 2.66666666667 Modify the password guessing program to keep track of how many times the user has entered the password wrong. If it is more than 3 times, print “That must have been complicated.” Write a program that asks for two numbers. If the sum of the numbers is greater than 100, print “That is big number”. Write a program that asks the user their name, if they enter your name say “That is a nice name”, if they enter “John Cleese” or “Michael Palin”, tell them how you feel about them ;), otherwise tell them “You have a nice name”.
{"url":"http://jjc.freeshell.org/easytut3/easytut3/node7.html","timestamp":"2024-11-14T03:21:38Z","content_type":"text/html","content_length":"10016","record_id":"<urn:uuid:0c9deab7-0654-47d0-8bcf-b4373dafb136>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00029.warc.gz"}
ANOVA PART I: The Introductory Guide to ANOVA In this blog, we are going to be discussing a statistical technique, ANOVA, which is used for comparison. The basic principal of ANOVA is to test for differences among the mean of different samples. It examines the amount of variation within each of these samples and the amount of variation between the samples. ANOVA is important in the context of all those situations where we want to compare more than two samples as in comparing the yield of crop from several variety of seeds etc. The essence of ANOVA is that the total amount of variation in a set of data is broken in two types:- 1. The amount that can be attributed to chance. One-way ANOVA Under the one-way ANOVA we compare the samples based on a single factor. For example productivity of different variety of seeds. Stepwise process involved in calculation of one-way ANOVA is as follows:- 1. Calculate the mean of each sample X ̅ 2. Calculate the sum of squares between (SSB) samples 4. Divide the result by the degree of freedom between the samples to obtain mean square between (MSW) samples. 5. Now calculate variation within the samples i.e. sum of square within (SSW) Lets now solve a one-way ANOVA problem. A,B and C are three different variety of seeds and now we need to check if there is any variation in their productivity or not. We will be using one-way ANOVA as there is a single factor comparison involved i.e. variety of seeds. The f-ratio is 1.53 which lies within the critical value of 4.26 (calculated from the f-distribution table). Conclusion:- Since the f-ratio lies within the acceptance region we can say that there is no difference in the productivity of the seeds and the little bit of variation that we see is caused by Two-way ANOVA will be discussed in my next blog so do comeback for the update. Hopefully, you have found this blog informative, for more clarification watch the video attached down the blog. You can find more such posts on Data Science course topics, just keep on following the DexLab Analytics blog. Comments are closed here.
{"url":"https://www.dexlabanalytics.com/blog/anova-part-i-the-introductory-guide-to-anova","timestamp":"2024-11-02T11:29:55Z","content_type":"text/html","content_length":"62114","record_id":"<urn:uuid:6ac2ef1d-25a4-47f8-bc85-686897728792>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00425.warc.gz"}
Cryptovirology is a field that studies how to use cryptography to design powerful malicious software. The field was born with the observation that public-key cryptography can be used to break the symmetry between what an antivirus analyst sees regarding a virus and what the virus writer sees. The former only sees a public key whereas the latter sees a public key and corresponding private key. The first attack that was identified in the field is called "cryptoviral extortion"^[1]. In this attack a virus, worm, or trojan hybrid encrypts the victim's files and the user must pay the malware author to receive the needed session key (which is encrypted under the author's public key that is contained in the malware) if the user does not have backups and needs the files back. The field also encompasses covert attacks in which the attacker secretly steals private information such as private keys. An example of the latter type of attack are asymmetric backdoors. An asymmetric backdoor is a backdoor (e.g., in a cryptosystem) that can be used only by the attacker, even after it is found. This contrasts with the traditional backdoor that is symmetric, i.e., anyone that finds it can use it. Kleptography, a subfield of cryptovirology, is concerned with the study of asymmetric back doors in key generation algorithms, digital signature algorithms, key exchanges, and so on. General information[ ] Cryptovirology was born in academia^[1]^[2]. However, practitioners have recently expanded the scope of the field to include the analysis of cryptographic algorithms used by malware writers, attacks on these algorithms using automated methods (such as X-raying^[3]) and analysis of viruses' and packers' encryptors. Also included is the study of cryptography-based techniques (such as "delayed code"^[4]) developed by malware writers to hamper malware analysis. A "questionable encryption scheme", which was introduced by Young and Yung, is an attack tool in cryptovirology. Informally speaking, a questionable encryption scheme is a public key cryptosystem (3-tuple of algorithms) with two supplementary algorithms, forming a 5-tuple of algorithms. It includes a deliberately bogus yet carefully designed key pair generation algorithm that produces a "fake" public key. The corresponding private key (witness of non-encryption) cannot be used to decipher data "encrypted" using the fake public key. By supplying the key pair to an efficient verification predicate (the 5th algorithm in the 5-tuple) it is proven whether the public key is real or fake. When the public key is fake, it follows that no one can decipher data "enciphered" using the fake public key. A questionable encryption scheme has the property that real public keys are computationally indistinguishable from fake public keys when the private key is not available. The private key forms a poly-sized witness of decipherability or indecipherability, whichever may be the case. An application of a questionable encryption scheme is a trojan that gathers plaintext from the host, "encrypts" it using the trojan's own public key (which may be real or fake), and then exfiltrates the resulting "ciphertext". In this attack it is thoroughly intractable to prove that data theft has occurred. This holds even when all core dumps of the trojan and all the information that it broadcasts is entered into evidence. An analyst that jumps to the conclusion that the trojan "encrypts" data risks being proven wrong by the malware author (e.g., anonymously). When the public key is fake, the attacker gets no plaintext from the trojan. So what's the use? A spoofing attack is possible in which some trojans are released that use real public keys and steal data and some trojans are released that use fake public keys and do not steal data. Many months after the trojans are discovered and analyzed, the attacker anonymously posts the witnesses of non-encryption for the fake public keys. This proves that those trojans never in fact exfiltrated data. This casts doubt on the true nature of future strains of malware that contain such "public keys", since the keys could be real or fake. This attack implies a fundamental limitation on proving data theft. There are many other attacks in the field of cryptovirology that are not mentioned here. Examples of viruses with cryptography and ransom capabilities[ ] While viruses in the wild have used cryptography in the past, the only purpose of such usage of cryptography was to avoid detection by antivirus software. For example, the tremor virus^[5] used polymorphism as a defensive technique in an attempt to avoid detection by anti-virus software. Though cryptography does assist in such cases to enhance the longevity of a virus, the capabilities of cryptography are not used in the payload. The One-half virus^[6] was amongst the first viruses known to have encrypted affected files. However, the One_half virus was not ransomware, that is it did not demand any ransom for decrypting the files that it has encrypted. It also did not use public key cryptography. An example of a virus that informs the owner of the infected machine to pay a ransom is the virus nicknamed Tro_Ransom.A ^[7]. This virus asks the owner of the infected machine to send $10.99 to a given account through Western Union. Virus.Win32.Gpcode.ag is a classic cryptovirus ^[8]. This virus partially uses a version of 660-bit RSA and encrypts files with many different extensions. It instructs the owner of the machine to email a given mail ID if the owner desires the decryptor. If contacted by email, the user will be asked to pay a certain amount as ransom in return for the decryptor. Creation of cryptoviruses[ ] To successfully write a cryptovirus, a thorough knowledge of the various cryptographic primitives such as random number generators, proper recommended cipher text chaining modes etc are necessary. Wrong choices can lead to poor cryptographic strength. So, usage of preexisting routines would be ideal. Microsoft's Cryptographic API (CAPI), is a possible tool for the same. It has been demonstrated that using just 8 different calls to this API, a cryptovirus can satisfy all its encryption needs^[9]. Other uses of cryptography enabled malware[ ] Apart from cryptoviral extortion, there are other potential uses^[2] of cryptoviruses. They are used in deniable password snatching, used with cryptocounters, used with private information retrieval and used in secure communication between different instances of a distributed cryptovirus. References[ ] External links[ ] cs:Kryptovirologie es:Criptovirología pt:Criptovirologia
{"url":"https://cryptography.fandom.com/wiki/Cryptovirology","timestamp":"2024-11-13T09:50:37Z","content_type":"text/html","content_length":"169137","record_id":"<urn:uuid:8f210e69-e256-4f68-b9b9-574adb1a62f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00562.warc.gz"}
EViews Help: Hodrick-Prescott Filter Hodrick-Prescott Filter The Hodrick-Prescott Filter is a widely employed smoothing method for obtaining a smooth estimate of the long-term trend component of a series. The method was first proposed in a working paper (circulated in the early 1980’s and published in 1997) by Hodrick and Prescott to analyze postwar U.S. business cycles. EViews 14 enhances the existing routines with support for the iterated (boosted) HP filter proposed by Phillips and Shi (2020). Technically, the Hodrick-Prescott (HP) filter is a two-sided linear filter that computes the smoothed series Phillips and Shi (2020) have proposed iterating the HP filter to produce a “smarter smoothing device.” This boosted HP filter takes the cyclical series, To smooth the series using the Hodrick-Prescott filter, choose : First, provide a name for the . EViews will suggest a name, but you can always enter a name of your choosing. If you wish to save a , specify a name in the edit field. Next, specify an integer value for the smoothing parameter, radio button and entering a value in the edit field, or you may specify a value using the frequency power rule of Ravn and Uhlig (2002) (the number of periods per year divided by 4, raised to a power, and multiplied by 1600 by clicking on the and entering a value in the edit field. By default, EViews will fill the defaults using the Ravn and Uhlig method with a power rule of 2, yielding the original Hodrick and Prescott values for Ravn and Uhlig recommend using a power value of 4. EViews will round any non-integer values that you enter. The Boosting section of the dialog offers settings for iterative boosting of the HP filter. You may choose between stopping based on the maximum number of iterations or using an Information criteria. If you click on , EViews will stop based on the entry in the Max. Iterations edit field. By default, there will be no boosting as only one iteration of the filter will be performed. Selecting the Information criteria radio button instructs EViews to select the optimal number of iterations using information criteria. The Max. Iterations edit field should be used to specify the number of iterations to be considered. When you click on OK, EViews displays a graph of the filtered series together with the original series. Note that only data in the current workfile sample are filtered. Observations for the smoothed and cyclical series outside the current sample will be filled with NAs. For example, we may download housing starts data from the Federal Reserve of St. Louis database: dbopen(type=fred, server=api.stlouisfed.org/fred) wfcreate m 1959M01 2024M02 fetch(d=fred) houst smpl 2010 2024m02 and then perform HP filtering with 5 iterations on the HOUST series using values from 2010m01 through 2024m02: The newly created HPTREND series contains the smoothed values of HOUST. Note that only data in the current workfile sample are filtered. Data for the smoothed series outside the current sample are filled with NAs.
{"url":"https://help.eviews.com/content/series-Hodrick-Prescott_Filter.html","timestamp":"2024-11-05T06:36:32Z","content_type":"application/xhtml+xml","content_length":"16547","record_id":"<urn:uuid:5be99280-10f4-4421-abd7-1149be0335d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00165.warc.gz"}
How Much Can a 26Ft Box Truck Scale? [Answered 2023] | Prettymotors How Much Can a 26Ft Box Truck Scale? When you want to weigh your truck, it’s important to understand its GVWR (gross vehicle weight rating). Curb weight refers to the weight of the truck at the curb, excluding the driver, passengers and cargo. The payload, on the other hand, is the total weight of the truck, including the cargo and the payload. While most 26 foot box trucks are roughly the same size, the dimensions and the weight of a truck vary significantly. A 26-foot box truck is a large piece of machinery, which is why it needs to be weighed regularly. A truck with this much volume can weigh as much as 26,000 pounds, or 13 tons. This means that any truck can’t carry anything over this weight without having to be weighed every single time. Therefore, it is important to know how much a box truck weighs before it is loaded. How Much Weight Can a 26Ft Straight Truck Carry? A 26Ft straight truck is much larger than a 24 footer, and its payload capacity is slightly greater than its smaller counterpart. The following specifications will help you determine how much weight your truck can handle. GVWR refers to the maximum weight the vehicle can carry, and its GVW will vary depending on the model. A 2020 International MV607 with a 26 foot box is 26,000 pounds. It has a Cummins engine, a 100 gallon fuel tank, power locks, LED headlights, and a liftgate. The maximum weight a straight truck can carry is often lower than its rated value. This is because federal and state regulations limit how much weight each axle can carry. In most states, a tandem drive axle is capped at 34,000 pounds, and vehicles carrying more weight than this cannot be rebalanced. A 26Ft straight truck is therefore not recommended for hauling oversized items. How Much Can a Straight Truck Scale? One of the most important factors in determining how much a straight truck can weigh is its rated weight. Some straight trucks are rated lower than the actual weight they can carry, which means the driver could end up with a citation for overregistering the vehicle or carrying more than the rated amount of weight. If this happens, the driver will be put out of service. The manufacturer is also responsible for the sizing of the vehicle. Straight trucks vary in size, with some measuring as short as 10 feet long. Most are between eight and 10 feet high, but they can reach higher weight limits for specific jobsites. For example, a straight truck may not be able to carry more than 10,000 pounds of weight. However, vehicles over 26,000 pounds do not require a commercial driver’s license. This means that a straight truck can scale up to three times its rated weight, but you should make sure you know exactly what you’re getting before you start. How Much Weight Can a Truck Scale? The question is, how much weight can a 26Ft box truck carry? Most box trucks are about the same size and are rated at a payload capacity of 10,000 pounds. The capacity of a 26ft box truck depends on the size and number of standard pallets it can hold. The following are some of the questions you should ask your truck scale to determine its capacity. The gross weight of a 26-foot box truck is typically around 80,000 pounds. However, this number may vary depending on the model, make, and features of the truck. In New York, for example, the maximum gross weight for a standard tractor-trailer is 80,000 pounds. It is essential that you understand what your truck’s maximum weight capacity is, as exceeding it can damage the truck and lower its fuel How Much Weight Can a 26Ft-Box Truck Scale Accurately Measure? When you’re planning to purchase a truck, you’ll first want to determine the gross vehicle weight (GVWR). The GVWR is the total weight of the truck with cargo and passengers. GVWR is a useful way to estimate how much weight a truck can handle. How Much Weight Can a 26Ft Box Truck Scale? A 26-foot box truck has a GVWR (gross vehicle weight) of about 26,000 pounds and can carry up to eight rooms of furniture. As such, this type of truck is an excellent choice for perishable foods and parcel deliveries. Some adventurous cooks have even turned these vehicles into mobile restaurants. To get a good idea of how much weight a 26-foot box truck can carry, consider the following: A box truck is a class four vehicle. While the weight breakdown of trucks varies by state, the purpose of weighing trucks is to prevent them from exceeding guidelines. Some states do not require trucks to stop at weigh stations, such as Alabama, Connecticut, or Massachusetts, while others like California do. This is a good place to get a box truck scale so you can know the weight of your truck before you make a purchase. How Many Tons is a 26Ft Box Truck? You may have asked yourself, “How many tons is a 26Ft box truck?” Having a license is not required to drive one. However, if you plan to haul more than 26,000 pounds, you will need a CDC license. Most 26-foot box trucks are the same size and weight, so this is not a difficult question to answer. In this article, we’ll cover some of the basic facts about these trucks and how much they can A 26-foot box truck is a large piece of machinery, so it’s important to know how much cargo it can carry. These trucks have a gross weight of 26,000 pounds, which is about 13 tons. This amount is called the Gross Vehicle Weight Rating or GVWR, and represents the maximum weight that the truck can carry. You can also check out the Gross Vehicle Weight Rating by visiting the manufacturer’s How Do You Calculate Box Truck Loads? How Do You Calculate Box Truck LoaDs? is a simple and effective method to determine how much space your truck needs to carry your cargo. First, calculate the total number of rolls. For example, if 28 rolls are 149″ long and 21″ wide, you’ll need two blocks of 149″. Therefore, you need 298″ of space. Adding all these blocks together, you’ll have 298″ of space. There are several load boards available online where you can search for available loads for your box truck. Some of the more popular load boards are Logistic Dynamics, Direct Freight, 123 LoadBoad, and TruckStop. A good search strategy is to use the extended inquiry feature to include all the dimensions and types of equipment. This way, you’ll find loads that fit your truck’s size and weight How Many Pallets Can a 26Ft Box Truck Hold? The most important question to ask yourself when choosing a box truck is: How many pallets can a 26Ft box truck hold? The answer will depend on a number of factors. One of those factors is the size of the trailer. If you need a small trailer, it may not be big enough to haul all of your pallets. If you need a larger trailer, however, it may be difficult to find one. In terms of size, a 26ft box truck can accommodate up to 12 standard pallets. Similarly, a 20’ft container can accommodate up to eleven standard pallets or twenty-two “Europallets” in a single tier. A 26′ U-Haul truck gets ten miles per gallon, and the tank can last up to 600 miles. The truck has an inside dimensions of 92″ wide by 15′ 6″ long. A standard 26′ box truck is 102 inches long and wide, and a typical trailer can hold up to ten pallets. The truck’s towing capacity, or GVWR, can be found in the owner’s manual. This figure includes the weight of cargo and trailer. Using the truck’s towing capacity beyond the listed amount can damage the vehicle and cause an accident. Another factor that can affect the payload of a box truck is its weight rating. The truck’s front axle has only half of the weight rating of the rear axle, which means that a box truck can only carry so much weight. When the cargo inside the truck is too heavy, it may shift en route and cause the vehicle to break down. Learn More Here: 3.) Best Trucks
{"url":"https://www.prettymotors.com/how-much-can-a-26ft-box-truck-scale/","timestamp":"2024-11-05T16:49:52Z","content_type":"text/html","content_length":"84755","record_id":"<urn:uuid:75fc398d-f568-4f27-bd37-9352f181e2cf>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00673.warc.gz"}
How do you implicitly differentiate -1=(x-y)sinx+y? | HIX Tutor How do you implicitly differentiate #-1=(x-y)sinx+y#? Answer 1 $\frac{\mathrm{dy}}{\mathrm{dx}} = \frac{\sin x + x \cos x - y \cos x}{\sin x - 1}$ #0=sinx+xcosx-ycosx-sinx dy/dx+dy/dx# #sinx dy/dx-dy/dx=sinx+xcosx-ycosx# #dy/dx (sinx-1)=sinx+xcosx-ycosx# #dy/dx = (sinx+xcosx-ycosx)/ (sinx -1)# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To implicitly differentiate the equation (-1 = (x - y) \sin(x) + y), follow these steps: 1. Differentiate both sides of the equation with respect to (x). 2. Apply the product rule and chain rule where necessary. 3. Solve for (\frac{dy}{dx}) in terms of (x) and (y). The steps are as follows: 1. Differentiate both sides with respect to (x): [ \frac{d}{dx}(-1) = \frac{d}{dx}[(x - y) \sin(x) + y] ] 2. For the right side, use the sum rule and the product rule: [ 0 = (x - y) \frac{d}{dx}[\sin(x)] + \sin(x) \frac{d}{dx}(x - y) + \frac{dy}{dx} ] 3. Differentiate ( \sin(x) ) and (x - y) with respect to (x): [ 0 = (x - y) \cos(x) + \sin(x) - \frac{dy}{dx} ] 4. Solve for ( \frac{dy}{dx} ): [ \frac{dy}{dx} = \sin(x) + (x - y) \cos(x) ] Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/how-do-you-implicitly-differentiate-1-x-y-sinx-y-8f9af9e955","timestamp":"2024-11-04T11:38:39Z","content_type":"text/html","content_length":"569320","record_id":"<urn:uuid:dc21b7f5-98c3-4160-91db-f4995cf6e3de>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00865.warc.gz"}
A model train, with a mass of 12 kg, is moving on a circular track with a radius of 9 m. If the train's kinetic energy changes from 36 j to 18 j, by how much will the centripetal force applied by the tracks change by? | HIX Tutor A model train, with a mass of #12 kg#, is moving on a circular track with a radius of #9 m#. If the train's kinetic energy changes from #36 j# to #18 j#, by how much will the centripetal force applied by the tracks change by? Answer 1 The centripetal force changes by $= 4 N$ The variation of kinetic energy is #Delta KE=1/2mv^2-1/2m u^2# The radius is #=9m# The variation of centripetal force is #=(2)/r*Delta KE# Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer 2 To find the change in centripetal force, we can use the relationship between kinetic energy and centripetal force. The kinetic energy of an object moving in a circular path is given by: [ KE = \frac{1}{2} m v^2 ] The centripetal force acting on an object moving in a circular path is given by: [ F_c = \frac{m v^2}{r} ] Given: Initial kinetic energy (( KE_1 )) = 36 J Final kinetic energy (( KE_2 )) = 18 J Mass of the train (( m )) = 12 kg Radius of the circular track (( r )) = 9 m Using the equation for kinetic energy: [ KE_1 = \frac{1}{2} m v_1^2 ] [ KE_2 = \frac{1}{2} m v_2^2 ] Solving for initial and final velocities: [ v_1 = \sqrt{\frac{2 \times KE_1}{m}} ] [ v_2 = \sqrt{\frac{2 \times KE_2}{m}} ] Now, calculate initial and final velocities: [ v_1 = \sqrt{\frac{2 \times 36}{12}} = \sqrt{6} \approx 2.45 , m/s ] [ v_2 = \sqrt{\frac{2 \times 18}{12}} = \sqrt{3} \approx 1.73 , m/s ] Now, calculate the initial and final centripetal forces: [ F_{c1} = \frac{m v_1^2}{r} ] [ F_{c2} = \frac{m v_2^2}{r} ] Substitute the values: [ F_{c1} = \frac{12 \times (2.45)^2}{9} \approx 8.08 , N ] [ F_{c2} = \frac{12 \times (1.73)^2}{9} \approx 4.16 , N ] The change in centripetal force is: [ \Delta F_c = F_{c2} - F_{c1} ] [ \Delta F_c = 4.16 , N - 8.08 , N ] [ \Delta F_c \approx -3.92 , N ] Therefore, the centripetal force applied by the tracks decreases by approximately 3.92 N. Sign up to view the whole answer By signing up, you agree to our Terms of Service and Privacy Policy Answer from HIX Tutor When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some When evaluating a one-sided limit, you need to be careful when a quantity is approaching zero since its sign is different depending on which way it is approaching zero from. Let us look at some Not the question you need? HIX Tutor Solve ANY homework problem with a smart AI • 98% accuracy study help • Covers math, physics, chemistry, biology, and more • Step-by-step, in-depth guides • Readily available 24/7
{"url":"https://tutor.hix.ai/question/a-model-train-with-a-mass-of-12-kg-is-moving-on-a-circular-track-with-a-radius-o-20-8f9af8b69d","timestamp":"2024-11-11T05:04:37Z","content_type":"text/html","content_length":"586942","record_id":"<urn:uuid:327e476c-66c1-4b15-9ebf-3c80dc276ac1>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00375.warc.gz"}
In Place Uniform Shuffle Problem: Write a program that shuffles a list. Do so without using more than a constant amount of extra space and linear time in the size of the list. Solution: (in Python) import random def shuffle(myList): n = len(myList) for i in xrange(0, n): j = random.randint(i, n-1) # randint is inclusive myList[i], myList[j] = myList[j], myList[i] Discussion: Using a computer to shuffle a deck of cards is nontrivial at first glance for the following reasons. First, computers are perfect. One can’t “haphazardly” spread the cards on a table and mix them around for a while. Neither can it use a “riffle” technique until it’s satisfied the cards are random enough. These are all human characteristics which have inherent sloppy flaws caused by our limited-precision dexterity. Another difficulty is that we have to give a mathematical guarantee that the resulting distribution of shuffles is uniform. It certainly isn’t when humans shuffle cards, so this adds a new level of difficulty. We note that there are many gambling companies whose integrity is based on the validity of their shuffling algorithms (and hence, the fairness of their games). Companies who get it wrong get defeated by clever mathematicians. So we need to take a close look at the right way to solve this problem. Before we get there, we note that this problem generalizes to constructing random permutations. While it’s easier to understand a problem based on a deck of cards, generating random permutations is really the useful thing we’re getting out of this. This page gives an example of how not to shuffle cards. We will derive the correct way. If we have a list of $ n$ elements, and a good shuffling algorithm, then each element has a uniform probability of $ 1/n$ to end up in the first position in the list. Once we’ve chosen such an element, we can recursively operate with the remaining $ n-1$ elements, and randomly choose which element goes in the second spot, where each has a chance of $ 1/(n-1)$ to get there. Note that this means for the first stage, we pick a random integer uniformly between 1 and $ n$, and in the second stage we pick a integer between 2 and $ n$. Inductively, if we have already processed the first $ i$ cards, then we need to pick a random integer uniformly between $ i+1$ and $ n$ to decide which of the remaining cards goes in the $ i+1$-th spot. Note that subtracting 1 from all of these randomly chosen numbers gives us the right indices. We make one further note: the order of the unprocessed cards it totally irrelevant. That means that if, say, during the first stage we want the 5th element to go in the first spot, we can simply swap the fifth and first element in the list. Since we’re picking uniformly distributed numbers, we still have an equal chance to pick any one of the remaining cards in later stages. And of course, sometimes we will be swapping an element with itself, which is the same as not swapping at all. Taking all of this into consideration, we have the following pseudocode: on input list L: for i in range(0, n-1) inclusive: j = random(i, n-1) inclusive swap L[i], L[j] As we showed above, this pseudocode translates quite nicely to Python, and it obviously satisfies the requirements of not using a lot of extra space and running in linear time; we only visit each position in the list once, and swaps take constant time and all the swaps combined only use constant space. On the other hand, implementations in functional languages are a bit more difficult, and if the language is purely functional, it can’t be done “in place.” I’d usually be the last one to admit functional languages aren’t the best tool for every job, but there you have it.
{"url":"https://www.jeremykun.com/2012/03/18/in-place-uniform-shuffle/","timestamp":"2024-11-02T01:29:28Z","content_type":"text/html","content_length":"13289","record_id":"<urn:uuid:b7ed1654-fb5b-4640-96e0-0fee82e3876e>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00226.warc.gz"}
Fluid Dynamics Seminar - CMSA Fluid Dynamics Seminar February 25, 2020 @ 5:33 pm Beginning immediately, until at least April 30, all seminars will take place virtually, through Zoom. Links to connect can be found in the schedule below once they are created. In the Spring 2019 Semester, the Center of Mathematical Sciences and Applications will be hosting a seminar on Fluid Dynamics. The seminar will take place on Wednesdays from 3:00-4:00pm in CMSA G10. Spring 2020: Date Speaker Title/Abstract Title: Flexible spectral simulations of low-Mach-number astrophysical fluids Keaton Abstract: Fluid dynamical processes are key to understanding the formation and evolution of stars and planets. While the astrophysical community has made exceptional progress in 2/25/ Burns, simulating highly compressible flows, models of low-Mach-number stellar and planetary flows typically use simplified equations based on numerical techniques for incompressible fluids. 2020 MIT In this talk, we will discuss improved numerical models of three low-Mach-number astrophysical phenomena: tidal instabilities in binary neutron stars, waves and convection in massive stars, and ice-ocean interactions in icy moons. We will cover the basic physics of these systems and how ongoing additions to the open-source Dedalus Project are enabling their efficient simulation in spherical domains with spectral accuracy, implicit timestepping, phase-field methods, and complex equations of state. Fall 2019: Date Speaker Title/Abstract Title: Simulation of 2-D turbulent advection at extreme accuracy with machine learning and differentiable programming Abstract: The computational cost of fluid simulations grows rapidly with grid resolution. With the recent slow-down of Moore’s Law, it can take many decades for 10x higher 9/18/2019 Jiawei Zhuang resolution grids to become affordable. To break this major barrier in high-performance scientific computing, we used a data-driven approach to learn an optimal numerical (Harvard) solver that can retain high-accuracy at much coarser grids. We applied this method to 2-D turbulent advection and achieved 4x effective resolution than traditional high-order flux-limited advection solvers. The machine learning component is tightly integrated with traditional finite-volume schemes and can be trained via an end-to-end differentiable programming framework. The model can achieve near-peak FLOPs on CPUs and accelerators via convolutional filters. Title: Double diffusive convection and thermohaline staircases Abstract: Double diffusive convection (DDC), i.e. the buoyancy-driven flow with fluid density depending on two scalar components, is omnipresent in many natural and Yantao Yang engineering environments. In ocean this is especially true since the seawater density is mainly determined by temperature and salinity. In upper water of both (sub-) 9/25/2019 (Peking tropical and polar oceans, DDC causes the intriguing thermohaline staircases, which consist of alternatively stacked convection layers and sharp interfaces with high University) gradients of temperature and salinity. In this talk, we will focus on the fingering DDC usually found in (sub-)tropical ocean, where the mean temperature and salinity decrease with depth. We numerically investigate the formation and the transport properties of finger structures and thermohaline staircases. Moreover, we show that multiple states exit for the exactly same global condition, and individual finger layers and finger layers within staircases exhibit very different transport behaviors. 10/2/2019 No talk Title: Data-driven methods for discovery of partial differential equations and forecasting Abstract: A critical challenge in many modern scientific disciplines is deriving governing equations and forecasting models from data where derivation from first principals 10/9/2019 Samuel Rudy is intractable. The problem of learning dynamics from data is complicated when data is corrupted by noise, when only partial or indirect knowledge of the state is (MIT) available, when dynamics exhibit parametric dependencies, or when only small volumes of data are available. In this talk I will discuss several methods for constructing models of dynamical systems from data including sparse identification for partial differential equations with or without parametric dependencies and approximation of dynamical systems governing equations using neural networks. Limitations of each approach and future research directions will also be discussed. 10/16/2019 No talk Title: Using magnetic fields to investigate Jupiter’s fluid interior Abstract: The present-day interior structure of a planet is an important reflection of the formation and subsequent thermal evolution of that planet. However, despite decades of spacecraft missions to a variety of target bodies, the interiors of most planets in our Solar System remain poorly constrained. In this talk, I will discuss how actively generated planetary magnetic fields (dynamos) can provide important insights into the interior properties and evolution of fluid planets. Using Jupiter as a case Kimee Moore study, I will present new results from the analysis of in situ spacecraft magnetometer data from the NASA Juno Mission (currently in orbit about Jupiter). The spatial 10/23/2019 (Harvard) morphology of Jupiter’s magnetic field shows surprising hemispheric asymmetry, which may be linked to the dissolution of Jupiter’s rocky core in liquid metallic hydrogen. I also report the first definitive detection of time-variation (secular variation) in a planetary dynamo beyond Earth. This time-variation can be explained by the advection of Jupiter’s magnetic field by the zonal winds, which places a lower bound on the velocity of Jupiter’s winds at depth. These results provide an important complement to other analysis techniques, as gravitational measurements are currently unable to uniquely distinguish between deep and shallow wind scenarios, and between solid and dilute core scenarios. Future analysis will continue to resolve Jupiter’s interior, providing broader insight into the physics of giant planets, with implications for the formation of our Solar System. 10/30/2019 No Talk Title: Deep learning and reinforcement learning for turbulence Abstract: This talk tells two stories. Chapter 1: We investigate the capability of a state-of-the-art deep neural model at learning features of turbulent velocity signals. Deep neural network (DNN) models are at the center of the present machine learning revolution. The set of complex tasks in which they over perform human capabilities and best algorithmic solutions grows at an impressive rate and includes, but it is not limited to, image, video and language analysis, automated control, and even life science modeling. Besides, deep learning is receiving increasing attention in connection to a vast set of problems in physics where quantitatively accurate outcomes are expected. We consider turbulent velocity Federico Toschi signals, spanning decades in Reynolds numbers, which have been generated via shell models for the turbulent energy cascade. Given the multi-scale nature of the turbulent (Eindhoven signals, we focus on the fundamental question of whether a deep neural network (DNN) is capable of learning, after supervised training with very high statistics, feature 11/6/2019 University of extractors to address and distinguish intermittent and multi-scale signals. Can the DNN measure the Reynolds number of the signals? Which feature is the DNN learning? Chapter 2: Thermally driven turbulent flows are common in nature and in industrial applications. The presence of a (turbulent) flow can greatly enhance the heat transfer with respect to its conductive value. It is therefore extremely important -in fundamental and applied perspective- to understand if and how it is possible to control the heat transfer in thermally driven flows. In this work, we aim at maintaining a Rayleigh–Bénard convection (RBC) cell in its conductive state beyond the critical Rayleigh number for the onset of convection. We specifically consider controls based on local modifications of the boundary temperature (fluctuations). We take advantage of recent developments in Artificial Intelligence and Reinforcement Learning (RL) to find -automatically- efficient non-linear control strategies. We train RL agents via parallel, GPU-based, 2D lattice Boltzmann simulations. Trained RL agents are capable of increasing the critical Rayleigh number of a factor 3 in comparison with state-of-the-art linear control approaches. Moreover, we observe that control agents are able to significantly reduce the convective flow also when the conductive state is unobtainable. This is achieved by finding and inducing complex flow fields. Title: Predictions of relaminarisation in turbulent shear flows using deep learning 11/13/2019 Martin Lellep (Philipps Abstract: Given the increasing performance of deep learning algorithms in tasks such as classification during the last years and the vast amount of data that can be 2:10pm University of generated in turbulence research, I present one application of deep learning to fluid dynamics in this talk. We train a deep learning machine learning model to classify if Marburg, turbulent shear flow becomes laminar a certain amount of time steps ahead in the future. Prior to this, we use a 2D toy example to develop an understanding how the G02 Germany) performance of the deep learning algorithm depends on hyper parameters and how to understand the errors. The performance of both algorithms is high and therefore opens up further steps towards the interpretation of the results in future work. Title: Rayleigh vs. Marangoni Abstract: In this talk I will show several examples of an interesting and surprising competition between buoyancy and Marangoni forces. First, 11/19/2019 I will introduce the audience to the jumping oil droplet – and its sudden death – in a density stratified liquid consisting of water in the bottom and ethanol in the top : After sinking for about a minute, before reaching the equilibrium the droplet suddenly jumps up thanks to the Marangoni forces. This phenomenon repeats about 30-50 times, Tuesday before the droplet falls dead all the sudden. We explain this phenomenon and explore the phase space where it occurs. 3-4 pm Detlef Lohse Next, I will focus on the evaporation of multicomponent droplets, for which the richness of phenomena keeps surprising us. I will show and explain several of such (University of phenomena, namely evaporation-triggered segregation thanks to either weak solutal Marangoni flow or thanks to gravitational effects. The dominance of the latter implies Pierce Twente) that sessile droplets and pending droplets show very different evaporation behavior, even for Bond number << 1. I will also explain the full phase diagram in the Marangoni Hall 209, number vs Rayleigh number phase space, and show where Rayleigh convections rolls prevail, where Marangoni convection rolls prevail, and where they compete. 29 Oxford Street The research work shown in this talks combines experiments, numerical simulations, and theory. It has been done by and in collaboration with Yanshen Li, Yaxing Li, and Christian Diddens, and many others. Time: 3:00-3:35 pm Speaker: Haoran Liu Title: Applications of Phase Field method: drop impact and multiphase turbulence Abstract: Will a mosquito survive raindrop collisions? How the bubbles under a ship reduce the drag force? In nature and industry, flows with drops and bubbles exist everywhere. To understand these flows, one of the powerful tools is the direct numerical simulation (DNS). Among all the DNS methods, we choose the Phase Field (PF) method and develop some models based on it to simulate the complicated flows, such as flows with moving contact lines, fluid-structure interaction, ternary fluids and turbulence. In this talk, I will firstly introduce the advantages and disadvantages of PF method. Then, I will show its applications: drop impact on an object, compound droplet dynamics, water entry of an object and multiphase turbulence. Time: 3:35-4:10 pm Speaker: Steven Chong Title: Confined Rayleigh-Bénard, rotating Rayleigh-Bénard, double diffusive convection and quasi-static magnetoconvection: A unifying view on their scalar transport Abstract: For Rayleigh-Bénard under geometrical confinement, under rotation or the double diffusive convection with the second scalar component stabilizing the convective flow, they seem to be the three different canonical models in turbulent flow. However, previous research coincidentally reported the scalar transport enhancement in these systems. The results are counter-intuitive because the higher efficiency of scalar transport is bought about by the slower flow. In this talk, I will show you a fundamental and unified perspective on such the global transport behavior observed in the seemingly different systems. We further show that the same view can be applied to the quasi-static magnetoconvection, and indeed the regime with heat transport enhancement has been found. The beauty of physics is to understand the seemingly unrelated phenomena by a simplified concept. Here we provide a simplified and generic view, and this concept could be potentially extended to other situations where the turbulent flow is subjected to an additional stabilization. See previous seminar information here.
{"url":"https://cmsa.fas.harvard.edu/event/fluid-dynamics-seminar/","timestamp":"2024-11-09T19:28:41Z","content_type":"text/html","content_length":"96679","record_id":"<urn:uuid:9e6e2078-d6e9-446d-8853-d52d8d1d1d3d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00810.warc.gz"}
If arc PQR above is a semicircle, what is the length of diameter PR ? Question Stats: 67% 33% (02:07) based on 4531 sessions If arc PQR above is a semicircle, what is the length of diameter PR ? (1) a = 4 (2) b = 1 We can answer this question without performing any calculations. Instead, we can use some visualization Important point: For geometry DS questions, we are typically checking to see whether the statements "lock" a particular angle or length into having just one value. This concept is discussed in much greater detail in the video below. Target question: What is the length of diameter PR? We want to check whether the statements lock this side into having just 1 possible length. Given: Arc PQR above is a semicircle. This means that angle PQR is 90 degrees (an important property of circles) Statement 1: a = 4 If a = 4, then we now have the lengths of 2 sides of a right triangle. So, we apply the Pythagorean Theorem to find the length of side PQ. Since we can find the lengths of all 3 sides of that right triangle, there is only 1 triangle in the universe with those lengths. In other words, statement 1 "locks" the left-hand triangle into exactly 1 shape. This means that the angle QPR is locked into one angle In turn, angle QRP is locked into one angle So, all three angles of triangle PQR are locked. Plus we could determine the length of side PQ. All of this tells us that statement 1 locks triangle PQR into 1 and only 1 triangle, which means there must be only one possible value for the length of side PR Since we could (if we chose to perform the necessary calcations) answer the target question with certainty, statement 1 is SUFFICIENT Statement 2: b = 1 If b = 1, then we now have the lengths of 2 sides of a right triangle (the small triangle on the right-hand side). So, we apply the Pythagorean Theorem to find the length of side QR. Since we can find the lengths of all 3 sides of that right triangle, there is only 1 triangle in the universe with those lengths. In other words, statement 2 "locks" the small triangle (on the right side) into exactly 1 shape. This means that the angle PRQ is locked into one angle In turn, angle QPR is locked into one angle So, all three angles of triangle PQR are locked. All of this tells us that statement 2 locks triangle PQR into 1 and only 1 triangle, which means there must be only one possible value for the length of side PR Since we could (if we chose to perform the necessary calcations) answer the target question with certainty, statement 2 is SUFFICIENT Answer: D
{"url":"https://gmatclub.com/forum/if-arc-pqr-above-is-a-semicircle-what-is-the-length-of-diameter-pr-144057.html","timestamp":"2024-11-13T15:08:14Z","content_type":"application/xhtml+xml","content_length":"1049220","record_id":"<urn:uuid:60dc1ef1-b383-43d5-ba65-935142368dad>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00240.warc.gz"}
Function EQUALP equalp x y => generalized-boolean Arguments and Values: x---an object. y---an object. generalized-boolean---a generalized boolean. Returns true if x and y are equal, or if they have components that are of the same type as each other and if those components are equalp; specifically, equalp returns true in the following cases: equalp does not descend any objects other than the ones explicitly specified above. The next figure summarizes the information given in the previous list. In addition, the figure specifies the priority of the behavior of equalp, with upper entries taking priority over lower ones. Type Behavior number uses = character uses char-equal cons descends bit vector descends string descends pathname same as equal structure descends, as described above Other array descends hash table descends, as described above Other object uses eq Figure 5-13. Summary and priorities of behavior of equalp (equalp 'a 'b) => false (equalp 'a 'a) => true (equalp 3 3) => true (equalp 3 3.0) => true (equalp 3.0 3.0) => true (equalp #c(3 -4) #c(3 -4)) => true (equalp #c(3 -4.0) #c(3 -4)) => true (equalp (cons 'a 'b) (cons 'a 'c)) => false (equalp (cons 'a 'b) (cons 'a 'b)) => true (equalp #\A #\A) => true (equalp #\A #\a) => true (equalp "Foo" "Foo") => true (equalp "Foo" (copy-seq "Foo")) => true (equalp "FOO" "foo") => true (setq array1 (make-array 6 :element-type 'integer :initial-contents '(1 1 1 3 5 7))) => #(1 1 1 3 5 7) (setq array2 (make-array 8 :element-type 'integer :initial-contents '(1 1 1 3 5 7 2 6) :fill-pointer 6)) => #(1 1 1 3 5 7) (equalp array1 array2) => true (setq vector1 (vector 1 1 1 3 5 7)) => #(1 1 1 3 5 7) (equalp array1 vector1) => true Side Effects: None. Affected By: None. Exceptional Situations: None. See Also: eq, eql, equal, =, string=, string-equal, char=, char-equal Object equality is not a concept for which there is a uniquely determined correct algorithm. The appropriateness of an equality predicate can be judged only in the context of the needs of some particular program. Although these functions take any type of argument and their names sound very generic, equal and equalp are not appropriate for every application. The following X3J13 cleanup issue, not part of the specification, applies to this section: Copyright 1996-2005, LispWorks Ltd. All rights reserved.
{"url":"http://clsnet.nl/HyperSpec/Body/f_equalp.htm","timestamp":"2024-11-07T18:38:47Z","content_type":"text/html","content_length":"12074","record_id":"<urn:uuid:9f4fef97-731e-4bb9-b823-543f2cf145ef>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00446.warc.gz"}
Temperature C Temperature Conversion Calculator A Brief History of Temperature Scales Celsius (°C): The Celsius scale was invented by Swedish astronomer Anders Celsius in 1742. Celsius initially defined his scale with 0°C as the boiling point of water and 100°C as the freezing point. However, this was later reversed to the scale we know today, with 0°C representing the freezing point of water and 100°C representing the boiling point at standard atmospheric pressure. Fahrenheit (°F): The Fahrenheit scale was developed by German physicist Daniel Gabriel Fahrenheit in 1724. Fahrenheit set 0°F as the temperature of a mixture of ice and salt, which was the lowest temperature he could achieve with his refrigeration experiments. He fixed 32°F as the freezing point of water and 212°F as its boiling point, providing 180 intervals between these two points. The Fahrenheit scale is primarily used in the United States and a few other countries. Kelvin (K): In 1848, William Thomson, also known as Lord Kelvin, proposed the Kelvin scale. This scale is based on the absolute thermodynamic temperature scale, where 0 K (absolute zero) is the point at which molecular motion theoretically stops. Unlike Celsius and Fahrenheit, the Kelvin scale doesn’t use degrees and is commonly used in scientific disciplines. Conversion Between Kelvin, Celsius, and Fahrenheit These temperature scales are related mathematically, and it's easy to convert from one to another using the following formulas: 1. From Celsius to Kelvin: $K = °C + 273.15$ 2. From Kelvin to Celsius: $°C = K - 273.15$ 3. From Celsius to Fahrenheit: $°F = (°C \times \frac{9}{5}) + 32$ 4. From Fahrenheit to Celsius: $°C = (°F - 32) \times \frac{5}{9}$ 5. From Kelvin to Fahrenheit: $°F = (K - 273.15) \times \frac{9}{5} + 32$ 6. From Fahrenheit to Kelvin: $K = (°F - 32) \times \frac{5}{9} + 273.15$ Example Temperature Calculation Let's say you want to convert 300 Kelvin (K) to both Celsius and Fahrenheit. 1. Convert 300 K to Celsius: $°C = 300K - 273.15 = 26.85°C$ 2. Convert 300 K to Fahrenheit: $°F = (300K - 273.15) \times \frac{9}{5} + 32 = 80.33°F$ So, 300 Kelvin equals approximately 26.85°C and 80.33°F. Fig. Screen Shot from CHEMIX School - Temperature Conversion Calculator CHEMIX School Temperature Conversion Calculator - Usage Guide Your program allows users to convert between these temperature scales using editable text fields for Kelvin, Celsius, and Fahrenheit. Here’s how to ensure accurate calculations: How to Calculate: • The temperature conversion calculator consists of three editable text fields where you can input a temperature value in Kelvin, Celsius, or Fahrenheit. • One of the fields must remain empty before you press Enter. The program will calculate the missing value based on the other two inputs. • Ensure that the cursor is focused on one of the text fields where you have entered a value, and the values must be valid numbers. • When you click in a text field, its contents will be cleared, allowing you to enter a new value. • Press Enter after inserting a value into one of the fields, and the corresponding value for the empty field will be automatically calculated and displayed. This simple tool helps users navigate the world of temperature conversions with ease, from science experiments to everyday tasks. By converting between Kelvin, Celsius, and Fahrenheit, users can better understand temperature in different contexts. Related topics: Pressure Conversion Calculator Energy Conversion Calculator Force Conversion Calculator Length Conversion Calculator Mass Conversion Calculator Power Conversion Calculator Clausius-Clapeyron Boiling Point Experiment.html
{"url":"https://chemix-chemistry-software.com/school/calculate/temperature-conversion-calculator.html","timestamp":"2024-11-07T03:53:56Z","content_type":"text/html","content_length":"14611","record_id":"<urn:uuid:fe428e0f-304f-4f1f-a6bf-17c1ab4870d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00042.warc.gz"}
Unveiling the Number of Solutions: An Equation's Enigma Unveiling The Number Of Solutions: An Equation’s Enigma The number of solutions to an equation depends on its type. Linear equations in one variable have one solution. Linear equations in two variables have infinite solutions or no solution depending on whether the lines represented by the equations are parallel, perpendicular, or intersecting. Quadratic equations have two, one, or no solution depending on the discriminant. Cubic equations have three, one, or no solution depending on their properties. Types of Equations and Their Number of Solutions In the realm of mathematics, equations reign supreme, serving as tools to unravel countless mysteries and solve real-world problems. But beneath their seemingly complex exterior, equations can be classified into distinct types, each bearing its own unique fingerprint in terms of the number of solutions it holds. Join us as we delve into this captivating world of equations, exploring the factors that determine their solitude or abundance. Linear Equations: Simplicity in One or Two Variables The linear equation in one variable is the epitome of simplicity, presenting us with a straightforward equation where the variable appears only once, to the first power. As its name suggests, this equation represents a straight line on a graph. And just like a straight line, it has only one solution – the point where it intersects the y-axis. Now, let’s consider the linear equation in two variables. Here, the variable dance becomes a bit more complex, as we now have two variables interacting in a single equation. Depending on the slopes and intercepts of the lines they represent, these equations can yield infinitely many solutions, as seen in parallel lines, or no solutions at all, like two lines that never meet. Quadratic Equations: A Tale of Two (or One) Quadratic equations introduce a touch of drama with their second-degree polynomial nature. These equations involve variables squared, and their solutions depend on a crucial number known as the discriminant. Depending on its value, quadratic equations can have two distinct real solutions, one real solution, or no real solutions at all. Cubic Equations: The Enigmatic Trio Cubic equations take complexity up a notch, with variables reaching the third power. These equations can showcase three distinct real solutions, one real solution, or no real solutions, making them true enigmas of the equation world. To solve cubic equations, one must employ clever techniques like factoring, synthetic division, or the cubic formula. Linear Equations in One Variable: Unveiling the Simple When we speak of linear equations in one variable, we’re referring to equations that involve a single variable raised to the power of one. Linear means there’s a straight line when we graph the equation, and one variable means we’re solving for a single unknown value. For instance, take the equation 2x + 5 = 11. This simple equation describes a straight line on a graph. To find the solution, let’s isolate the variable on one side of the equation: 2x = 11 - 5 2x = 6 x = 6 / 2 x = 3 Voila! The solution to our equation is x = 3. Plugging this value back into the original equation, we find that the equation holds true. Linear equations in one variable are ubiquitous in everyday life. They help us solve problems in science, engineering, and even personal finance. By understanding how to manipulate and solve these equations, you can empower yourself with a valuable tool for navigating the world around you. Linear Equations in Two Variables: Unlocking the Secrets of Solutions In the realm of mathematics, equations play a pivotal role in unraveling complex problems and uncovering hidden relationships. Among the various types, linear equations in two variables stand out as a cornerstone of mathematical understanding. These equations, often represented as y = mx + b, involve the interplay of two variables, x and y, and a constant term, b. The fascination with linear equations in two variables lies in their ability to model real-world scenarios, such as the motion of an object, the growth of a population, or the relationship between two physical quantities. However, the key to harnessing their power lies in understanding the different types of linear equations and how to determine the number of solutions they possess. Types of Linear Equations The classification of linear equations in two variables depends on their slopes, which represent the steepness of the line they form on a graph. The three main types are: • Parallel lines: These lines have the same slope but different y-intercepts. They never intersect on a graph, effectively having no solutions. • Perpendicular lines: These lines have negative reciprocal slopes and form a 90-degree angle when they intersect. This intersection point represents one solution. • Intersecting lines: These lines have different slopes and intersect at a single point, resulting in one solution. Determining the Number of Solutions To determine the number of solutions for a given linear equation in two variables, simply follow these steps: 1. Solve for slope: To find the slope of the line, solve the equation for y in terms of x. The coefficient of x in the resulting equation represents the slope. 2. Compare slopes: If the slopes of two lines are the same, they are parallel lines and have no solutions. If their slopes are negative reciprocals, they are perpendicular lines with one solution. Otherwise, they are intersecting lines with one solution. Understanding linear equations in two variables is an essential building block in the study of mathematics. By mastering the concepts of slope and type, one can unravel the secrets of these equations and solve an array of problems both within and outside the realm of mathematics. Embrace the challenge, unravel the mysteries, and let linear equations be your guide to unlocking a world of Quadratic Equations: Unveiling the Secrets of Second-Degree Polynomials In the realm of algebra, quadratic equations hold a special place as the first equations that truly challenge our mathematical prowess. Step into the captivating world of quadratics, where solving for the unknown becomes an art form, and the discriminant holds the key to unraveling their hidden solutions. Embarking on the Quadratic Journey A quadratic equation, often adorned with the imposing symbol x², invites us to seek the value of x. Unlike linear equations, where x stands alone, quadratics introduce a fascinating dance between x and x². This interplay gives rise to a variety of solutions, from the commonplace two to the elusive none. The Discriminant: A Magic Wand for Solution-Counting Enter the discriminant, a mathematical oracle that whispers the number of solutions a quadratic equation conceals. This powerful tool is calculated using the coefficients of the quadratic equation and possesses the ability to foretell the equation’s future: • A positive discriminant bodes well, promising two distinct real solutions. • A zero discriminant hints at a single real solution. • A negative discriminant, alas, signals no real solutions. Unveiling the Solutions: A Tale of Two (or None) Consider the quadratic equation x² + 5x + 6 = 0. Its discriminant is 5² – 4(1)(6) = 25 – 24 = 1. Ah, a positive discriminant! This oracle whispers of two distinct real solutions. Solving the equation, we find: x = (-5 ± √1) / 2 = -2 or -3. In contrast, the equation x² + 4x + 5 = 0 has a discriminant of -7. This negative omen predicts no real solutions, leaving us to ponder the mysteries of imaginary numbers. Navigating the Depths of Cubic Equations As we venture beyond quadratics, we find the enigmatic realm of cubic equations. These enigmatic equations, defined by the presence of x³, demand a higher level of algebraic wizardry to unravel their secrets. While techniques such as factoring, synthetic division, and the cubic formula come to our aid, the path to solving cubic equations remains fraught with its own unique challenges. • [Insert sub-heading here] □ [Insert paragraph here] □ [Insert paragraph here] • [Insert sub-heading here] □ [Insert paragraph here] □ [Insert paragraph here] Cubic Equations: Unveiling the Enigmatic Triad Cubic equations, expressions of the form ax³ + bx² + cx + d = 0 (where a ≠ 0), present a unique challenge in the realm of algebra. Their intricate nature conceals a fascinating world of properties and solution techniques. Properties of Cubic Equations Cubic equations possess intriguing characteristics that distinguish them from other polynomial equations. They exhibit a maximum of three real or complex roots, a cardinal feature that sets them apart. Furthermore, the nature of these roots is dictated by a crucial concept known as the discriminant. Unraveling the Secrets of the Discriminant The discriminant, denoted by Δ, serves as a gatekeeper, determining the number and type of solutions to a cubic equation. Calculated as Δ = 18abcd – 4b³d + b²c² – 4ac³, its value unveils the path to • Δ < 0: The equation has three distinct real roots. • Δ = 0: The equation has three real roots, with two roots being equal. • Δ > 0: The equation has one real root accompanied by two complex conjugate roots. Deciphering the Equation through Factoring For certain cubic equations, factoring can be a potent weapon in the arsenal of solution techniques. By expressing the equation as a product of three linear factors, we can reveal the solution set with ease. However, this approach is only feasible when the equation lends itself to straightforward factoring. Synthetic Division and the Roots of Revelation Synthetic division emerges as an indispensable tool for uncovering the elusive roots of cubic equations. This method enables the systematic evaluation of potential roots, guiding us towards the concealed solutions that reside within the equation’s depths. The Cubic Formula: A Guiding Light When factoring and synthetic division prove inadequate, the cubic formula stands ready to illuminate the path to solutions. This versatile formula, though intricate in appearance, provides an algorithmic approach to finding the roots of any cubic equation. Example: Unveiling the Roots of a Cubic Equation Consider the cubic equation x³ – 2x² – 5x + 6 = 0. Using the cubic formula, we embark on a journey to uncover its hidden solutions: x₁ = 1 + √3i x₂ = 1 - √3i x₃ = 2 Thus, the equation yields three distinct real roots: 1 + √3i, 1 – √3i, and 2. Cubic equations, with their enigmatic nature and intricate behavior, stand as a testament to the boundless possibilities of algebra. By delving into their properties, unraveling the secrets of the discriminant, and employing powerful solution techniques, we not only conquer these mathematical challenges but also unlock a deeper appreciation for the intricacies of the mathematical realm. Leave a Reply Cancel reply
{"url":"https://www.biomedes.biz/equations-enigma-solutions/","timestamp":"2024-11-02T02:21:17Z","content_type":"text/html","content_length":"92317","record_id":"<urn:uuid:7ae13953-7228-4642-aa7e-ea7afd553ad5>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00476.warc.gz"}
Extended M1 sum rule for excited symmetric and mixed-symmetry states in nuclei A generalized M1 sum rule for orbital magnetic dipole strength from excited symmetric states to mixed-symmetry states is considered within the proton-neutron interacting boson model of even-even nuclei. Analytic expressions for the dominant terms in the B(M1) transition rates from the 2[1.2]^+ states are derived in the U(5) and SO(6) dynamic symmetry limits of the model, and the applicability of a sum rule approach is examined at and between these limits. Lastly, the sum rule is applied to the new data on mixed-symmetry states of ^94Mo and a quadrupole d-boson ratio n[d](0 [1]^+)/n[d](2[2]^+ )≈0.6 is obtained in a largely parameter-independent way. Dive into the research topics of 'Extended M1 sum rule for excited symmetric and mixed-symmetry states in nuclei'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/extended-m1-sum-rule-for-excited-symmetric-and-mixed-symmetry-sta","timestamp":"2024-11-10T14:18:23Z","content_type":"text/html","content_length":"47728","record_id":"<urn:uuid:4567a158-2da7-4a0e-8e02-b59dc4585da6>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00035.warc.gz"}
Reserved Keywords for Binary Next: Binary Table Extension Data Up: Binary Tables Previous: Required Keywords for Binary These keywords are optional in a binary table extension but may be used only with the meanings specified below. They may appear in any order between the TFIELDS and END keywords. Note that some of these keywords are the same as those for an ASCII table extension. The reason is that they are used for a binary table in a way analogous to their use for an ASCII table. However, the allowed values and their meaning may differ somewhat from those for an ASCII table, as the rows of an ASCII table are composed of characters and those of a binary table are composed of bytes. • TTYPEn (character) has a value giving the label or heading for field n. While the rules do not further prescribe the value, following the recommendations for the TTYPEn keyword of ASCII tables is a good practice: using letters, digits, and underscore but not hyphen; also, string comparisons involving the values should be case insensitive. HEASARC has made this practice one of their internal standards (section 5.6.1.1). • TUNITn (character) has a value giving the physical units of field n. The rules should follow the prescriptions given in section 3.1.1.4. • TSCALn (floating) has a value providing a scale factor for use in converting stored table values for field n to physical values. The default value is 1. • TZEROn (floating) has a value providing the offset for field n. The default value is 0. (Physical_value) = (Stored_value) &times TSCALn + TZEROn (3.12) As is the case for arrays, care should be taken to avoid overflows when scaling floating point numbers. For L, X, and A format fields, the TSCALn and TZEROn keywords have no meaning and should not be used. The meaning has not been formally defined for P format fields, but the general understanding is that the scaling should apply to the heap data pointed to by the array descriptors. • TNULLn (integer) has the value that signifies an undefined value for the integer data types B, I, and J. It should not be used if the value of the corresponding TFORMn specifies any other data type. Null values for other data types are discussed in section 3.6.3. • TDISPn (character) has a value giving the Fortran 90 format recommended for display of the contents of field n. (If Fortran 90 formats are not available to the software printing a table, FORTRAN-77 formats may be used instead.) All entries in a single field are displayed with the same format. If the field data are scaled, the physical values, derived by applying the scaling transformation, are displayed. For bit and byte arrays, each byte is considered to be an unsigned integer for purposes of display. Characters and logical values may be null (zero byte) terminated. The following formats are allowed: Binary, integers only Octal, integers only Hexadecimal, integers only, Single precision real, no exponent Single precision real, exponential notation Engineering format - single precision real, exponential notation with exponent a multiple of 3 Scientific format - single precision real, exponential notation with exponent a multiple of 3, nonzero leading digit (unless value is zero) General - appears as F format if significance will not be lost; otherwise appears as E Double precision real, exponential notation In these formats, w is the number of characters in the displayed values, m is the minimum number of digits (leading zeroes may be required), d is the number of digits following the decimal point, and e is the number of digits in the exponent of an exponential form. Usage of this keyword in some ways parallels that of the TFORMn keyword of ASCII tables, in that it provides a formatted value for the number. However, the format given by the TFORMn keyword in an ASCII table describes the format of the number in the FITS file, but the format given by the TDISPn keyword of a binary table is different from that of the number in the file. The following keywords are reserved for proposed binary table conventions: • TDIMn (character) is used by the multidimensional array convention (section 5.2.3). For that convention, it has a value giving the number of dimensions of field n in the table, when those dimensions are two or more. The value is of form '(i,j,k,...)', where i,j,k,...are the dimensions of the array stored in field n. This size must be consistent with the repeat count specified by the value of TFORMn. • THEAP (integer) is used by the variable length array convention (section 5.2.1). For that convention, its value gives the location of the start of the heap used to store variable length arrays. The value is equal to the number of bytes of extension data preceding the start of the heap, including the main table and any gap between the main table and the heap. All keywords reserved under the generalized extensions agreement (section 3.3.2) apply to binary tables. Next: Binary Table Extension Data Up: Binary Tables Previous: Required Keywords for Binary
{"url":"https://stdatu.stsci.edu/fits/users_guide/node46.html","timestamp":"2024-11-04T17:16:46Z","content_type":"text/html","content_length":"10208","record_id":"<urn:uuid:81f9d33b-d38d-4bba-8ca3-6dcd74727f92>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00404.warc.gz"}
Egyptian Numbers | Stage 3 Maths | HK Secondary S1-S3 In our society, we use the Arabic number system. That is, we use ten digits ($0-9$0−9) and we can make bigger numbers by combining these digits to indicate their place value is increasing. However, there are other number systems, like Roman numerals, which we have already learnt about. Another number system we are going to look at are Egyptian numerals. These are similar to Roman numerals in that it is a unary system. In other words, it uses symbols to represent different numbers and, like when we tally scores, the number of times a symbol is repeated indicates the number of times it should be counted. However, Egyptian numerals use $10$10 as a base just like the Arabic system as shown in the diagram below. One advantage of unary systems is that it doesn't matter what order you write the number, you can still add up the symbols and work out what it means. However, in the Arabic number system, $539$539 is different to $395$395 - the order here is VERY important. Let's run through some examples now to see how Egyptian numbers work! Worked Examples Question 1 These examples show how to convert from Egyptian to normal numerals. Want to see more like the above problems? Try these: Question 2 Want to seem more? Try these:
{"url":"https://mathspace.co/textbooks/syllabuses/Syllabus-98/topics/Topic-4519/subtopics/Subtopic-17454/?activeTab=theory","timestamp":"2024-11-12T09:36:09Z","content_type":"text/html","content_length":"431849","record_id":"<urn:uuid:3498ae6e-b4d6-47d9-a000-4e14b98a1bc4>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00561.warc.gz"}
What are graph embeddings ? What are graph embeddings ? In the modern world of big data, graphs are undoubtedly essential data representation and visualization tools. Imagine navigating a city without a map. When working with complicated networks, such as social relationships, molecular structures, or recommendation systems, data analysts frequently encounter similar difficulties. Here's where graph embeddings come into play. They allow researchers and data analysts to map nodes, edges, or complete graphs to continuous vector spaces for in-depth data What are graph embeddings and how do they work? In this guide, we examine the fundamentals of graph embeddings, including: • What are graph embeddings • How graph embeddings work • Benefits of graph embeddings • Trends in graph embeddings This guide will help you uncover the mysteries contained in graphs, whether you are a data analyst, researcher, or someone just interested in learning more about the potential of network analysis. Continue reading! Fundamentals of graphs A graph is a slightly abstract representation of objects that are related to each other in some way, and of these relationships. Typically, the objects in a graph database are drawn as dots called vertices or nodes. A line (or curve) connects any two vertices representing objects that are related or adjacent; such a line is called an edge. It is a simplified map where lines represent relationships and dots represent items. These dots, or vertices, hold information about the entities, while the lines, or edges, represent the connections between the entities. After learning about the vertices and edges that comprise a graph, let's investigate some of its unrealized possibilities: graph embeddings. What are graph embeddings? Graph embedding refers to the process of representing graph nodes as vectors which encode key information of the graph such as semantic and structural details, allowing machine learning algorithms and models to operate on them. In other words they are basically low-dimensional, compact graph representations that store relational and structural data in a vector space. Graph embeddings, as opposed to conventional graph representations, condense complicated graph structures into dense vectors while maintaining crucial graph features, potentially saving time and money in processing. Ever wondered how your social media knows to suggest perfect friends you never knew existed? Or how your phone predicts the traffic jam before you even hit the road? The answer lies in a hidden world called graphs, networks of connections, like threads linking people, places, and things. And to understand these webs, we need a translator: graph embeddings. Think of them as a magic trick that transforms intricate networks of vertices and edges into compact numerical representations. These "embeddings" capture the essence of each node (vertex) and its relationship to others, distilling the complex network into a format readily understood by machine learning algorithms. Benefits of graph embeddings Being able to represent data using graph embedding offers great benefits, including: 1. Graph embeddings allow researchers and data scientists to explore hidden patterns within large networks of data. This greatly enhances the accuracy and efficiency of machine learning algorithms 2. By identifying hidden patterns, researchers can make informed decisions and come up with better solutions for complex problems. 3. Graph embedding distills complex graph-structured data and represents them as simple numerical figures, making computation operations on them very easy and fast. This benefit allows even the most complex algorithms to be scaled to fit all sorts of datasets. Techniques for generating graph embeddings Graph embedding algorithms and node embedding techniques are the two main kinds of techniques used to construct graph embeddings. 1. Node embedding techniques These techniques focus on representing individual nodes within the graph as unique vectors in a low-dimensional space. Imagine each node as a distinct character in a complex story, and these techniques aim to capture their essence and relationships through numerical encoding. i). DeepWalk Inspired by language modeling, DeepWalk treats random walks on the graph as sentences and learns node representations based on their "context" within these walks. Think of it as understanding a word better by its surrounding words in a sentence. ii). Node2Vec Building on DeepWalk, Node2Vec allows for flexible exploration of the graph by controlling the balance between breadth-first and depth-first searches. This "adjustable lens" allows for capturing both local and global structural information for each node. iii). GraphSAGE This technique focuses on aggregating information from a node's local neighborhood to create its embedding. Imagine summarizing a person based on their close friends and associates. GraphSAGE efficiently handles large graphs by sampling fixed-size neighborhoods for each node during training. 2. Graph embedding algorithms While node embedding techniques concentrate on specific nodes, graph algorithms try to capture the interactions and general structure of the entire network. Think of them as offering a thorough summary of the network that accounts for each node individually as well as its connections. i). Graph Convolutional Networks (GCNs) GCNs function directly on the graph structure, executing convolutions on adjacent nodes to represent their interconnection. They were inspired by convolutional neural networks for images. Consider applying a filter to an image that takes into account not only a pixel but also the pixels surrounding it. ii). Graph Attention Networks (GATs) Expanding upon GCNs, GATs incorporate an attention mechanism that enables the network to concentrate on the most pertinent neighbors for every node, perhaps resulting in more precise depictions. iii). Graph Neural Networks (GNNs) Refers to a variety of graph data processing and node representation learning frameworks. Their approach blends concepts from conventional neural networks with graph-specific processes to extract structure information as well as node attributes. Keep in mind that the subject of graph embedding is continually changing, with new methods and improvements appearing on a regular basis. Applications of graph embeddings Due to their ability to turn graph data into a computationally processable format, graph embedding is useful in graph pre-processing. Before we get to the use cases, let's look at the capabilities that provide the foundation that inform the use cases for graph embeddings: Graph Analytics Graph embeddings make it easy to gain insight into the structure, patterns and relationships in graphs Machine Learning and Deep Learning Graph embeddings make it possible to represent graph data as continuous data, making it useful in natural language processing and training of various models such as recurrent neural networks Measuring the similarity between two Graph embedding makes it easy to understand how users interact with items Based on the above capabilities, graph embeddings find wide-ranging applications in several disciplines due to their capacity to capture intricate interactions inside graphs and represent them in low-dimensional vector spaces. Here are some important use cases. 1. Social Network Analysis In social network analysis, graph embeddings facilitate community detection, user behavior prediction, and identification of influential nodes. Consider Facebook as an example, where graph embeddings help uncover communities of users, predict friendship connections, and identify influential users based on their interactions and network 2. Recommendation Systems Graph embeddings power recommendation systems by modeling user-item interactions and capturing recommendation graph structures. For example, systems like Netflix and YouTube use graph embeddings to recommend movies and videos based on users' past movie ratings among other metrics 3. Knowledge Graphs In knowledge graphs, graph embeddings enable query response, entity linking, and semantic similarity computation. The accessibility and interpretability of knowledge graphs are improved by integrating entities and relations. 4. Biological Networks and Bioinformatics Graph embeddings are used in the analysis of biological networks, including gene regulatory networks and protein-protein interactions. They can be used to detect gene-disease connections, accelerate drug discovery, and predict targets and protein functions by foreseeing target-drug interactions. 5. Fraud Detection and Anomaly Detection It is critical to safeguard users and financial systems against fraud. Graph embeddings play a critical role in fraud and anomaly detection systems by enabling the identification of anomalous patterns in networks such as social networks and financial transactions. Also Read: Fraud Detection With Graph Analytics Metrics for evaluating graph embeddings It's not enough to only create strong graph embeddings; we also need instruments to evaluate them. Metrics for Evaluating Graph Embeddings measure how well graph embedding methods capture and maintain the relational and structural information in graphs. Important measurements include: • Node Classification Accuracy: Indicates how well nodes' properties and relationships can be captured by using learned embeddings to predict their labels. • Link Prediction Accuracy measures how well a graph's learned embeddings may be used to predict future or missing edges, demonstrating how well graph topology is captured. • Graph Reconstruction: Measures the degree to which graph attributes can be preserved by reconstructing the original graph structure using learned embeddings. • Downstream Task Performance: Measuring downstream task performance with learned embeddings shows how useful downstream machine learning tasks are in practical applications. • Embedding Quality: Assesses the degree of similarity preservation, dimensionality reduction, and computing efficiency that make up learned embeddings. All things considered, these measures offer thorough insights into the effectiveness and generalizability of graph embedding methods across a range of fields and applications. Challenges in generating effective graph embeddings and how to overcome them It can be difficult to generate efficient graph embeddings since analysts have to reduce the dimensionality of the graph while maintaining its structural information. As a result, creating graph embeddings often encounters these primary issues: Scalability issues Accurate and efficient embeddings of graphs can be challenging to produce due to their huge size and complexity. Scalability is an important concern, especially in the processing of real-world applications where the graphs might be of huge size and keep on changing. Heterogeneity issues You can encounter challenges in representing the structural information of a network in a low-dimensional space because nodes and edges in graphs have varying types and properties. Sparsity issues A large number of nodes and edges in some networks may lack connections, making them extremely sparse. Because of this, it could be challenging to represent the graph's structural information in a low-dimensional space. To overcome these challenges, researchers have developed several techniques and algorithms for generating effective graph embeddings including sampling strategies, skip connections, inductive learning, and adversarial training. Future trends in graph embeddings The future of graph embeddings is shaped by these emerging trends, mostly aimed at addressing evolving complexities. • Dynamic graphs require adaptive embedding techniques. • Interpretability is vital, with many techniques seen as opaque "black boxes." Demand will continue to grow for interpretable methods providing transparent insights. • Efficiency will remain crucial amid growing graph complexity. Techniques must balance accuracy with computational efficiency. • Scalable algorithms tailored to dynamic and multi-modal graphs will rise. They prioritize interpretability, offering clear explanations. In summary, the future of graph embeddings relies on adaptive, interpretable, and efficient techniques navigating dynamic, multi-modal graph data, fostering innovation across domains. Graph embeddings have made it possible to untangle complex networks and reveal hidden connections. From social media analysis to drug discovery, their applications are vast. While challenges like scalability and interpretability persist, the future shines bright with dynamic and multi-modal techniques. NebulaGraph supports graph embeddings and is available in AWS and Azure. Get started with a Free Trial and witness the power of connections truly unveiled.
{"url":"https://www.nebula-graph.io/posts/graph-embeddings","timestamp":"2024-11-15T04:40:09Z","content_type":"text/html","content_length":"131028","record_id":"<urn:uuid:557b9460-524c-4eb3-99a1-bd68dd16f818>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00265.warc.gz"}
Factorial Zeroes 2 In my last post I talked about finding the number of zeroes at the end of $n!$, and I said that there was room for improvement. I thought about it a little bit and found a couple things to speed it The first has to do with the relationship between the quantity of fives and the quantity of twos. The lower quantity in the prime factorization of $n!$ is how many zeroes it will have at the end. If I would have thought a little more about it though I would have seen that counting the twos is pointless in this situation. Even the prime factorization of $10 = 5 \cdot 2$ has the information in there: there will always be more twos than fives. Counting from 1 to 10: • Multiples of 2: 2, 4, 6, 8, 10 • Multiples of 5: 5, 10 This means that all we really need to keep track of is the quantity of fives in the prime factorization of $n!$. Which leads to the second optimization: we only need to get the prime factorization of multiples of five. Also since we’re only keeping track of the quantity of one number instead of two, we can just keep track of it with a single int instead of a list of ints containing all the prime factorizations in $n!$. That’ll save a whole lot of memory. I’ve decided to switch it up and write this one in C++, here’s a screenshot of the results: 109ms for the Linux/C++ version vs 855ms for the Windows/C# version when calculating the zeroes at the end of $1000000!$. That’s a pretty decent improvement, although surely the language and environment played a factor, but at least it was on the same computer. I’ve posted the code in a Gist. I won’t post all the code here but I’ll post some of the parts worth noting. int CountZeroes(int n) if (n < 5) return 0; int fives = 1; int i = 10; while (i <= n) //i is already a multiple of 5, skip that step fives += 1 + CountFivesInFactorization(i/5); i += 5; return fives; This is the method that iterates from 1 to $n$. Notice on line 11 it passes in i/5 as the param to CountFivesInFactorization(). Since we already know i is a multiple of 5 it would be a waste of cycles finding that fact out, so it gets skipped. int CountFivesInFactorization(int n) if (IsPrime(n)) return n == 5 ? 1 : 0; for (int i = 2; i*i <= n; i++) if (n % i == 0) int result = n / i; int fives = 0; if (IsPrime(result)) if (result == 5) fives += CountFivesInFactorization(result); if (IsPrime(i)) if (i == 5) fives += CountFivesInFactorization(i); return fives; return 0; Despite being faster at what it was made for that’s not to say the C# version is pointless. This version doesn’t actually keep the prime factorization of $n!$, and that list of numbers has many more uses than just what I’ve used them for. Maybe figuring out a new use for that list can be a future project.
{"url":"https://trlewis.net/factorial-zeroes-2/","timestamp":"2024-11-14T08:51:18Z","content_type":"text/html","content_length":"37757","record_id":"<urn:uuid:4a079438-96e4-425c-afd9-7877aabaf412>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00082.warc.gz"}
Understanding (7v)^2 In mathematics, the expression (7v)^2 represents the square of the quantity 7v. This means we multiply the quantity by itself. Expanding the Expression To simplify (7v)^2, we can use the following property of exponents: (ab)^n = a^n * b^n Applying this property to our expression: (7v)^2 = 7^2 * v^2 Now, we can calculate the squares: 7^2 = 49 v^2 = v * v Therefore, the simplified form of (7v)^2 is: (7v)^2 = 49v^2 The expression (7v)^2 simplifies to 49v^2, which represents the square of the product of 7 and v.
{"url":"https://jasonbradley.me/page/(7v)%255E2","timestamp":"2024-11-10T07:44:49Z","content_type":"text/html","content_length":"57932","record_id":"<urn:uuid:67200b8c-559b-4da1-a1fe-061b198c5a12>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00677.warc.gz"}
Python Bitwise Operators TutorialPython Bitwise Operators Tutorial Introduction to Bitwise Operators Bitwise operators in Python are used to perform operations on individual bits of binary numbers. These operators allow you to manipulate and extract specific bits, which can be useful in various scenarios such as binary number manipulation, data compression, encryption, and more. Python provides several bitwise operators, including AND, OR, XOR, NOT, left shift, right shift, ones complement, and twos complement. Each operator performs a specific operation on the binary representation of numbers. Related Article: String Comparison in Python: Best Practices and Techniques The AND Operator The AND operator, represented by the ampersand (&) symbol, performs a bitwise AND operation on two numbers. It compares the corresponding bits of the two numbers and returns a new number where each bit is set to 1 only if both bits in the same position are 1. Here’s an example of using the AND operator: a = 5 # Binary: 0101 b = 3 # Binary: 0011 result = a & b # Binary: 0001 print(result) # Output: 1 In this example, the AND operator compares the bits of a and b and returns a new number where only the rightmost bit is set to 1 because it is the only bit that is 1 in both a and b. Another example: a = 12 # Binary: 1100 b = 10 # Binary: 1010 result = a & b # Binary: 1000 print(result) # Output: 8 In this case, the AND operator compares the bits of a and b and returns a new number where only the leftmost bit is set to 1 because it is the only bit that is 1 in both a and b. The OR Operator The OR operator, represented by the pipe (|) symbol, performs a bitwise OR operation on two numbers. It compares the corresponding bits of the two numbers and returns a new number where each bit is set to 1 if either of the bits in the same position is 1. Here’s an example of using the OR operator: a = 5 # Binary: 0101 b = 3 # Binary: 0011 result = a | b # Binary: 0111 print(result) # Output: 7 In this example, the OR operator compares the bits of a and b and returns a new number where all the bits are set to 1 if either of the bits in the same position is 1. Another example: a = 12 # Binary: 1100 b = 10 # Binary: 1010 result = a | b # Binary: 1110 print(result) # Output: 14 In this case, the OR operator compares the bits of a and b and returns a new number where all the bits are set to 1 if either of the bits in the same position is 1. The XOR Operator The XOR operator, represented by the caret (^) symbol, performs a bitwise XOR operation on two numbers. It compares the corresponding bits of the two numbers and returns a new number where each bit is set to 1 if the bits in the same position are different. Here’s an example of using the XOR operator: a = 5 # Binary: 0101 b = 3 # Binary: 0011 result = a ^ b # Binary: 0110 print(result) # Output: 6 In this example, the XOR operator compares the bits of a and b and returns a new number where each bit is set to 1 if the bits in the same position are different. Another example: a = 12 # Binary: 1100 b = 10 # Binary: 1010 result = a ^ b # Binary: 0110 print(result) # Output: 6 In this case, the XOR operator compares the bits of a and b and returns a new number where each bit is set to 1 if the bits in the same position are different. Related Article: How To Limit Floats To Two Decimal Points In Python The NOT Operator The NOT operator, represented by the tilde (~) symbol, performs a bitwise NOT operation on a number. It flips all the bits of the number, setting the 0s to 1s and the 1s to 0s. Here’s an example of using the NOT operator: a = 5 # Binary: 0101 result = ~a # Binary: 1010 (signed representation) print(result) # Output: -6 In this example, the NOT operator flips all the bits of a and returns the signed representation of the result. The output is -6 because the signed representation of the binary number 1010 is -6. Another example: a = 12 # Binary: 1100 result = ~a # Binary: 0011 (signed representation) print(result) # Output: -13 In this case, the NOT operator flips all the bits of a and returns the signed representation of the result. The output is -13 because the signed representation of the binary number 0011 is -13. The Left Shift Operator The left shift operator, represented by the double less-than (<<) symbol, shifts the bits of a number to the left by a specified number of positions. It effectively multiplies the number by 2 raised to the power of the specified shift amount. Here's an example of using the left shift operator: a = 5 # Binary: 0101 result = a << 2 # Binary: 010100 print(result) # Output: 20 In this example, the left shift operator shifts the bits of a to the left by 2 positions, effectively multiplying the number by 2 raised to the power of 2. The output is 20. Another example: a = 12 # Binary: 1100 result = a << 3 # Binary: 1100000 print(result) # Output: 96 In this case, the left shift operator shifts the bits of a to the left by 3 positions, effectively multiplying the number by 2 raised to the power of 3. The output is 96. The Right Shift Operator The right shift operator, represented by the double greater-than (>>) symbol, shifts the bits of a number to the right by a specified number of positions. It effectively divides the number by 2 raised to the power of the specified shift amount, discarding any remainders. Here’s an example of using the right shift operator: a = 20 # Binary: 010100 result = a >> 2 # Binary: 0101 print(result) # Output: 5 In this example, the right shift operator shifts the bits of a to the right by 2 positions, effectively dividing the number by 2 raised to the power of 2. The output is 5. Another example: a = 96 # Binary: 1100000 result = a >> 3 # Binary: 1100 print(result) # Output: 12 In this case, the right shift operator shifts the bits of a to the right by 3 positions, effectively dividing the number by 2 raised to the power of 3. The output is 12. Related Article: How To Rename A File With Python The Binary Ones Complement Operator The binary ones complement operator, represented by the tilde (~) symbol, performs a ones complement operation on a number. It flips all the bits of the number, setting the 0s to 1s and the 1s to 0s. Here’s an example of using the ones complement operator: a = 5 # Binary: 0101 result = ~a # Binary: 1010 (unsigned representation) print(result) # Output: -6 In this example, the ones complement operator flips all the bits of a and returns the unsigned representation of the result. The output is -6 because the unsigned representation of the binary number 1010 is -6. Another example: a = 12 # Binary: 1100 result = ~a # Binary: 0011 (unsigned representation) print(result) # Output: -13 In this case, the ones complement operator flips all the bits of a and returns the unsigned representation of the result. The output is -13 because the unsigned representation of the binary number 0011 is -13. The Binary Twos Complement Operator The binary twos complement operator is used to represent negative numbers in binary form. It is obtained by taking the ones complement of a number and adding 1 to the result. Here’s an example of using the twos complement operator: a = 5 # Binary: 0101 result = -a # Binary: 1011 print(result) # Output: -5 In this example, the twos complement operator represents the negative value of a by taking the ones complement of a and adding 1 to the result. The output is -5. Another example: a = 12 # Binary: 1100 result = -a # Binary: 0100 print(result) # Output: -12 In this case, the twos complement operator represents the negative value of a by taking the ones complement of a and adding 1 to the result. The output is -12. Use Case: Binary Number Manipulation One common use case for bitwise operators is binary number manipulation. By manipulating the individual bits of a binary number, you can perform operations such as extracting specific bits, setting bits to 1 or 0, and flipping bits. Here’s an example of manipulating binary numbers using bitwise operators: # Extracting specific bits number = 53 # Binary: 110101 bit_0 = number & 1 # Extracting the rightmost bit bit_1 = (number >> 1) & 1 # Extracting the second rightmost bit bit_2 = (number >> 2) & 1 # Extracting the third rightmost bit print(bit_0, bit_1, bit_2) # Output: 1 0 1 # Setting bits to 1 number = 53 # Binary: 110101 number = number | (1 << 3) # Setting the fourth rightmost bit to 1 print(number) # Output: 61 (Binary: 111101) # Flipping bits number = 53 # Binary: 110101 flipped_number = ~number # Flipping all the bits print(flipped_number) # Output: -54 (Binary: 110110) In this example, we extract specific bits from a binary number, set a bit to 1, and flip all the bits using bitwise operators. Related Article: How To Check If List Is Empty In Python Use Case: Flags and Masks Bitwise operators are commonly used for manipulating flags and masks. Flags are binary values that represent certain conditions or settings, while masks are binary patterns used to selectively modify Here’s an example of using bitwise operators for flags and masks: # Flags READ = 1 # Binary: 0001 WRITE = 2 # Binary: 0010 EXECUTE = 4 # Binary: 0100 permissions = READ | WRITE # Setting the READ and WRITE flags if permissions & READ: print("Read permission granted.") if permissions & WRITE: print("Write permission granted.") if permissions & EXECUTE: print("Execute permission granted.") # This condition is not met # Masks number = 53 # Binary: 110101 mask = 15 # Binary: 1111 masked_number = number & mask # Applying the mask print(masked_number) # Output: 5 (Binary: 0101) In this example, we use bitwise OR to set flags for permissions and bitwise AND to check if a certain flag is set. We also use bitwise AND to apply a mask to a number, isolating specific bits. Use Case: Data Compression and Encryption Bitwise operators are also used in data compression and encryption algorithms. These algorithms often involve manipulating and transforming binary data to achieve compression or encryption. Here’s a simplified example of using bitwise operators for data compression: data = "Hello, world!" # ASCII representation: 72 101 108 108 111 44 32 119 111 114 108 100 33 # Compression compressed_data = "" for char in data: compressed_data += str(ord(char) & 15) # Take the first 4 bits of each ASCII code print(compressed_data) # Output: 881881811144211416131321 # Decompression decompressed_data = "" for i in range(0, len(compressed_data), 2): ascii_code = int(compressed_data[i:i+2]) | 64 # Add 64 to reconstruct the ASCII code decompressed_data += chr(ascii_code) print(decompressed_data) # Output: Hello, world! In this example, we compress the ASCII representation of the string “Hello, world!” by taking the first 4 bits of each ASCII code. We then decompress the compressed data by reconstructing the ASCII codes and converting them back to characters. Best Practice: Ensuring Compatibility with Different Python Versions When using bitwise operators in Python, it’s important to ensure compatibility with different Python versions. While the behavior of bitwise operators is generally consistent across versions, there are some differences to be aware of. One common difference is the handling of negative numbers. In Python 2, the right shift operator (>>) preserves the sign bit when shifting right, while in Python 3, it fills the shifted bits with 0 regardless of the sign. To ensure compatibility, it’s recommended to use the sys.maxsize constant to determine the number of bits in an integer and to use bitwise operators in a way that doesn’t rely on implementation Here’s an example of ensuring compatibility with different Python versions: import sys # Right shift with negative numbers number = -5 if sys.version_info.major == 2: result = number >> 1 # Python 2: Preserves the sign bit result = number // 2 # Python 3: Fills the shifted bits with 0 print(result) # Output: -3 in Python 2, -3 in Python 3 In this example, we check the Python version using sys.version_info.major and handle the right shift differently depending on the version. Related Article: How To Check If a File Exists In Python Best Practice: Using Parentheses for Clarity When performing complex bitwise operations, it’s often a good practice to use parentheses to clarify the intended order of operations. This helps avoid confusion and ensures that the operations are evaluated correctly. Here’s an example of using parentheses for clarity: a = 5 b = 3 result = (a ^ b) & ((a | b) << 2) print(result) # Output: 28 In this example, we use parentheses to group the XOR and OR operations separately, and then perform the AND and left shift operations on the results. Real World Example: Implementing a Simple Encryption Algorithm Bitwise operators can be used to implement simple encryption algorithms. One such algorithm is the XOR cipher, which works by XORing each character of a message with a key. This algorithm is reversible, meaning that applying the same key again will decrypt the message. Here’s an example of implementing a simple XOR encryption algorithm in Python: def xor_cipher(message, key): encrypted_message = "" for i, char in enumerate(message): encrypted_char = chr(ord(char) ^ ord(key[i % len(key)])) encrypted_message += encrypted_char return encrypted_message message = "Hello, world!" key = "secret" encrypted_message = xor_cipher(message, key) decrypted_message = xor_cipher(encrypted_message, key) print(encrypted_message) # Output: '\x05\x10\x04\x04\x1bK\x01\x1e\x0f\x08\x1a\x05\x1e\nK' print(decrypted_message) # Output: 'Hello, world!' In this example, the xor_cipher function takes a message and a key as input. It XORs each character of the message with the corresponding character of the key, repeating the key if it is shorter than the message. The result is an encrypted message. To decrypt the message, the same key is applied again. Real World Example: Building a Binary Calculator Bitwise operators can be used to build a binary calculator, which performs arithmetic operations on binary numbers. A binary calculator can add, subtract, multiply, and divide binary numbers using bitwise operators. Here’s an example of building a binary calculator in Python: def binary_addition(a, b): carry = 0 result = 0 bit_position = 1 while a != 0 or b != 0: bit_a = a & 1 bit_b = b & 1 sum_bits = bit_a ^ bit_b ^ carry carry = (bit_a & bit_b) | (bit_a & carry) | (bit_b & carry) result |= (sum_bits <>= 1 b >>= 1 bit_position += 1 result |= (carry << bit_position) return result a = 10 # Binary: 1010 b = 5 # Binary: 0101 sum_result = binary_addition(a, b) print(sum_result) # Output: 15 (Binary: 1111) In this example, the binary_addition function takes two binary numbers a and b as input and performs binary addition using bitwise operators. It iterates through the bits of the numbers, calculates the sum and carry bits, and constructs the result by setting the appropriate bits. Related Article: How to Use Inline If Statements for Print in Python Performance Consideration: Bitwise vs Arithmetic Operations When performing simple operations on individual bits, bitwise operators are generally faster than arithmetic operations. This is because bitwise operations work at the binary level, directly manipulating the bits, while arithmetic operations involve more complex calculations. Here’s an example comparing the performance of bitwise and arithmetic operations: import time # Bitwise operations start_time = time.time() result = 0 for i in range(1000000): result |= (1 << i) end_time = time.time() bitwise_time = end_time - start_time # Arithmetic operations start_time = time.time() result = 0 for i in range(1000000): result += (2 ** i) end_time = time.time() arithmetic_time = end_time - start_time print("Bitwise time:", bitwise_time) print("Arithmetic time:", arithmetic_time) In this example, we measure the time taken to set all the bits from 0 to 999,999 using bitwise operations and arithmetic operations. The bitwise operations are expected to be faster due to the lower level of complexity involved. Performance Consideration: Bitwise Operations and Memory Usage Bitwise operations can be memory-efficient compared to other operations. Since bitwise operators work at the binary level, they allow you to represent and manipulate data using fewer bits, which can lead to reduced memory usage. Here’s an example demonstrating the memory efficiency of bitwise operations: import sys a = 100 # Binary: 1100100 b = 50 # Binary: 110010 bitwise_result = a & b # Binary: 1100100 arithmetic_result = a + b # Decimal: 150 (Binary: 10010110) bitwise_size = sys.getsizeof(bitwise_result) arithmetic_size = sys.getsizeof(arithmetic_result) print("Bitwise size:", bitwise_size) print("Arithmetic size:", arithmetic_size) In this example, we compare the memory usage of a bitwise result and an arithmetic result. The bitwise result requires fewer bits to represent the same information, resulting in a smaller memory Advanced Technique: Bitwise Operations and Binary Trees Bitwise operations can be used in conjunction with binary trees to efficiently store and manipulate binary data. By using bitwise operators, you can perform operations such as finding the parent, left child, or right child of a node in a binary tree. Here’s an example of using bitwise operations with binary trees: def get_parent(node): return node >> 1 def get_left_child(node): return (node << 1) + 1 def get_right_child(node): return (node << 1) + 2 node = 5 parent = get_parent(node) left_child = get_left_child(node) right_child = get_right_child(node) print(parent) # Output: 2 print(left_child) # Output: 11 print(right_child) # Output: 12 In this example, the get_parent, get_left_child, and get_right_child functions use bitwise operators to calculate the parent, left child, and right child of a given node in a binary tree. Related Article: How to Use Stripchar on a String in Python Advanced Technique: Bitwise Operations and Hash Functions Bitwise operations can be used in hash functions to efficiently generate hash values for data. By applying bitwise operators to the binary representation of the data, you can create hash functions that distribute the hash values evenly across a hash table. Here’s an example of using bitwise operations with hash functions: def hash_function(data): hash_value = 0 for byte in data: hash_value ^= byte hash_value = (hash_value <> 31) # Rotate the hash value return hash_value data = b"Hello, world!" hash_value = hash_function(data) print(hash_value) # Output: 4098336486 In this example, the hash_function applies bitwise XOR and bitwise rotation to each byte of the data to generate a hash value. The hash value is then used to index into a hash table. Code Snippet: Using Bitwise AND to Determine Even or Odd Bitwise AND can be used to determine whether a number is even or odd. By ANDing a number with 1, the rightmost bit (the least significant bit) can be checked. If the result is 0, the number is even; otherwise, it is odd. Here’s a code snippet demonstrating the use of bitwise AND to determine even or odd: def is_even(number): return (number & 1) == 0 def is_odd(number): return (number & 1) == 1 number = 10 print(is_even(number)) # Output: True print(is_odd(number)) # Output: False In this code snippet, the is_even function checks whether a number is even by ANDing it with 1 and comparing the result to 0. The is_odd function does the same but compares the result to 1. Code Snippet: Using Bitwise XOR for Data Swapping Bitwise XOR can be used to swap the values of two variables without using a temporary variable. By XORing a variable with another variable and then XORing the result with the original variable, the values are swapped. Here’s a code snippet demonstrating the use of bitwise XOR for data swapping: a = 5 b = 10 a = a ^ b b = a ^ b a = a ^ b print(a) # Output: 10 print(b) # Output: 5 In this code snippet, the values of a and b are swapped using bitwise XOR operations. The same principle can be applied to swap the values of variables of any type. Related Article: How To Delete A File Or Folder In Python Code Snippet: Using Bitwise NOT for Binary Inversion Bitwise NOT can be used to invert the bits of a binary number, effectively changing all the 0s to 1s and vice versa. By applying the NOT operator to a number, the complement of the number is Here’s a code snippet demonstrating the use of bitwise NOT for binary inversion: number = 5 inverted_number = ~number print(inverted_number) # Output: -6 In this code snippet, the bitwise NOT operator is used to invert the bits of the number 5. The result is -6 because the signed representation of the binary number 1010 is -6. Code Snippet: Using Left Shift for Multiplication Left shift can be used to multiply a number by a power of 2. By shifting the bits of a number to the left, the number is effectively multiplied by 2 raised to the power of the shift amount. Here’s a code snippet demonstrating the use of left shift for multiplication: number = 5 multiplied_number = number << 2 print(multiplied_number) # Output: 20 In this code snippet, the left shift operator is used to multiply the number 5 by 2 raised to the power of 2. The result is 20. Code Snippet: Using Right Shift for Division Right shift can be used to divide a number by a power of 2. By shifting the bits of a number to the right, the number is effectively divided by 2 raised to the power of the shift amount. Here’s a code snippet demonstrating the use of right shift for division: number = 20 divided_number = number >> 2 print(divided_number) # Output: 5 In this code snippet, the right shift operator is used to divide the number 20 by 2 raised to the power of 2. The result is 5. Related Article: How To Move A File In Python Error Handling: Dealing with Overflow Errors When working with bitwise operators, it’s important to be aware of potential overflow errors that can occur when manipulating numbers with a fixed number of bits. An overflow occurs when the result of an operation cannot be represented using the available number of bits. To deal with overflow errors, you can use Python’s built-in support for arbitrary-precision arithmetic by using the int type instead of the built-in integer types (int, long, etc.). The int type automatically adjusts its size to accommodate the result of an operation. Here’s an example of dealing with overflow errors using the int type: a = 2 ** 1000 b = 2 ** 1000 result = int(a) & int(b) print(result) # Output: 0 In this example, a and b are large numbers that would cause an overflow error if used with the built-in integer types. By converting them to int objects, Python automatically handles the overflow and produces the correct result. Error Handling: Handling Invalid Bitwise Operation Inputs When performing bitwise operations, it’s important to handle cases where the inputs are not valid for the intended operation. This can include cases such as dividing by zero, shifting by a negative amount, or applying bitwise operators to non-integer values. To handle these cases, it’s recommended to use appropriate conditional statements and exception handling to ensure the program behaves correctly and gracefully handles invalid inputs. Here’s an example of handling invalid bitwise operation inputs: def left_shift(number, shift): if shift < 0: raise ValueError("Shift amount must be non-negative.") return number << shift def right_shift(number, shift): if shift > shift def bitwise_and(a, b): if not isinstance(a, int) or not isinstance(b, int): raise TypeError("Inputs must be integers.") return a & b number = 5 shift = -2 a = 5 b = "10" result = left_shift(number, shift) except ValueError as e: print("Error:", str(e)) result = right_shift(number, shift) except ValueError as e: print("Error:", str(e)) result = bitwise_and(a, b) except TypeError as e: print("Error:", str(e)) In this example, the functions left_shift, right_shift, and bitwise_and check for invalid inputs and raise appropriate exceptions. The program then catches these exceptions and handles them accordingly, displaying an error message.
{"url":"https://www.squash.io/python-bitwise-operators-tutorial/","timestamp":"2024-11-02T09:11:40Z","content_type":"text/html","content_length":"111227","record_id":"<urn:uuid:b067a310-03dd-4b0e-9ffe-0c0544eb97e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00139.warc.gz"}
PCA with batch effects in Stan In Principal Component Analysis (PCA), we wish to find a simple linear model that explain multidimensional data. If we have G variables: \(\require{color} y^g &= {\color{red} w_1^g} \cdot {\color{red} x_1} + {\color{red} w_2^g} \cdot {\color{red} x_2} + {\color{red} \mu^g} + \varepsilon, \\ \varepsilon &\sim \mathcal{N}(0, {\color {red} \sigma^2}), \\ g &\in \{1, \ldots, G\}, \\ (x_1, x_2) &\sim \mathcal{N}(0, I). \) The red parts of the formula indicate quantities we need to infer from the data. (In a previous version of this post I hadn't specified the multivariate normal prior on X. Mike Love pointed out that without it, the components will not be orthogonal.) Let us look at an example and a simple implementation. As an illustratory data set, let us use the classical Iris data set. This consists of 150 observations of four measurements (sepal length, sepal width, petal length, and petal width) of three species of Iris flowers (I. setosa, I. versicolor, and I. virginica). To implement PCA, we use Stan, a probabilistic programming language, where you just write out the model, and the inference is handled automatically. In the C++ based notation of Stan, the PCA model described above is written in the following way: data { int<lower = 1> N; // Number of samples int<lower = 1> G; // Number of measured features vector[G] Y[N]; // Data transformed data{ vector[2] O; matrix[2, 2] I; O[1] = 0.; O[2] = 0.; I[1, 1] = 1.; I[1, 2] = 0.; I[2, 1] = 0.; I[2, 2] = 1.; parameters { vector[2] X[N]; vector[G] mu; matrix[G, 2] W; real<lower = 0> s2_model; model { // "For every sample ..." for (n in 1:N){ X[n] ~ multi_normal(O, I); for (n in 1:N){ Y[n] ~ normal(W * X[n] + mu, s2_model); The typical way to use Stan is Bayesian analysis, where you define your model in Stan along with your priors (which by default, like here, will be uniform) and use Stan to draw samples from the posterior. We will do this, then plot the mean of the posterior X values. From this we can see that I. setosa is quite different from the other two species, which are harder to separate from each other. Now imagine that the iris data was collected by two different researchers. One of of them has a ruler which is off by a bit compared to the other. This would cause a so called batch effect. This means a global bias due to some technical variation which we are not interested in. Let us simulate this by randomly adding a 2 cm bias to some samples: batch = np.random.binomial(1, 0.5, (Y.shape[0], 1)) effect = np.random.normal(2.0, 0.5, size=Y.shape) Y_b = Y + batch * effect Now we apply PCA to this data set Y_b the same way we did for the original data Y. We see now that our PCA model identifies the differences between the batches. But this is something we don't care about. Since we know which researcher measured which plants, we can include this information in model. Formally, we can write this out in the following way: \( y^g &= {\color{red} v^g} \cdot {z} + {\color{red} w_1^g} \cdot {\color{red} x_1} + {\color{red} w_2^g} \cdot {\color{red} x_2} + {\color{red} \mu^g} + \varepsilon, \\ \varepsilon &\sim \mathcal{N} (0, {\color{red} \sigma^2}), \\ g &\in \{1, \ldots, G\}, \\ (x_1, x_2) &\sim \mathcal{N}(0, I). \) In our case, we let z be either 0 or 1 depending on which batch a sample belongs to. We can call the new model Residual Component Analysis (RCA), because in essence the residuals of the linear model of the batch is being further explained by the principal components. These concepts were explored much more in depth than here by Kalaitzis & Lawrence, 2011. Writing this out in Stan is straightforward from the PCA implementation. data { int<lower = 1> N; int<lower = 1> G; int<lower = 0> P; // Number of known covariates vector[G] Y[N]; vector[P] Z[N]; // Known covariates transformed data{ vector[2] O; matrix[2, 2] I; O[1] = 0.; O[2] = 0.; I[1, 1] = 1.; I[1, 2] = 0.; I[2, 1] = 0.; I[2, 2] = 1.; parameters { vector[2] X[N]; vector[G] mu; matrix[G, 2] W; matrix[G, P] V; real<lower = 0> s2_model; model { for (n in 1:N){ X[n] ~ multi_normal(O, I); for (n in 1:N){ Y[n] ~ normal(W * X[n] + V * Z[n] + mu, s2_model); We apply this to our data with batch effects, and plot the posterior X values again. Now we reconstitute what we found in the data that lacked batch effect, I. setosa separates more from the other two species. The residual components X1 and X2 ignores the differences due to batch. Note that the batch effect size vg here is different for each feature (variable). So this would equally well apply if e.g. the second researcher had misunderstood how to measure petal widths, causing a bias in only this feature. There is also nothing keeping us from including continuous values as known covariates. Typically when batch effects are observed, at least in my field, a regression model is first applied to the data to "remove" this effect, then further analysis is done on the residuals from that I think this kind of strategy where the known information is added to a single model is a better way to do these things. It makes sure that your model assumptions are accounted for together. A weird thing I see a lot is people trying different methods to "regress out" batch effects, and then perform a PCA of the result to confirm that their regression worked. But if your assumption is that PCA, i.e. linear models, should be able to represent the data you can include all your knowledge of the data in the model. The same goes for clustering. In a previous version of this post, I estimated the parameters with the penalized likelihood maximization available in Stan. But estimation of the principal components in this way is not very good for finding the optimial fit. There are lots of parameters (2 * 150 + 4 * 3) and it's very easy to end up in a local optimum. Principal component analysis is very powerful because it has a very well known optimal solution (eigendecomposition of covariance matrix). However, writing the models in Stan like this allows you to experiment with different variations of a model, and the next step would then be to try to find a good fast and stable way of inferring the values you want to infer. The code for producing these restults is available at https://github.com/Teichlab/RCA.
{"url":"https://www.nxn.se/p/pca-with-batch-effects-in-stan","timestamp":"2024-11-09T01:26:28Z","content_type":"text/html","content_length":"132378","record_id":"<urn:uuid:c3f29f24-02d4-48cb-80a3-c39fc0e018aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00455.warc.gz"}
NCERT Solutions for Class 12 Maths All Chapters Free PDF Download NCERT Solutions for Class 12 Maths in Hindi Medium and English Medium (गणित) | 12th Class Maths NCERT Book Solutions PDF Download NCERT Solutions for Class 12 Maths will give you a glimpse of all the topics in the Class 12 Maths Syllabus. Step by Step Explanations given for the topics of Class 12 Maths makes it easy for you better understand the concepts. In addition to the NCERT Solutions for Class 12 Maths, you can also find other preparation resources like CBSE Class 12 Maths Notes, Exemplar Problems, Previous Question Papers, etc. Download the handy study material over here and stand out on your board or any other competitive exams compared to your peers. CBSE NCERT Solutions for Class 12 Maths Updated for 2021-22 Class 12 CBSE Maths NCERT Solutions will not just confine you to your success in academics but also lays a strong foundation for other important subjects. Our 12th Class Maths NCERT Solutions PDFs include questions from all the exercises in NCERT Maths Textbooks. Use them as a quick reference and clear ambiguities if any in a fraction of seconds. NCERT Solutions for Class 12 Maths in English Medium Chapter 1 Relations and Functions Chapter 2 Inverse Trigonometric Functions Chapter 3 Matrices Chapter 4 Determinants Chapter 5 Continuity and Differentiability Chapter 6 Application of Derivatives Chapter 7 Integrals Chapter 8 Application of Integrals Chapter 9 Differential Equations Chapter 10 Vector Algebra Chapter 11 Three Dimensional Geometry Chapter 12 Linear Programming Chapter 13 Probability NCERT Solutions for Class 12 Maths in Hindi Medium (गणित) Class 12 Maths Chapter 1 संबंध एवं फलन Class 12 Maths Chapter 2 प्रतिलोम त्रिकोणमितीय फलन Class 12 Maths Chapter 3 आव्यूह Class 12 Maths Chapter 4 सारणिक Class 12 Maths Chapter 5 सांतत्य तथा अवकलनीयता Class 12 Maths Chapter 6 अवकलज के अनुप्रयोग Class 12 Maths Chapter 7 समाकलन Class 12 Maths Chapter 8 समाकलनों के अनुप्रयोग Class 12 Maths Chapter 9 अवकल समीकरण Class 12 Maths Chapter 10 सदिश बीजगणित Class 12 Maths Chapter 11 त्रि-विमीय ज्यामिति Class 12 Maths Chapter 12 रैखिक प्रोग्रामन Class 12 Maths Chapter 13 प्रायिकता Class 12 Maths NCERT Solutions PDF Download NCERT Solutions for Class 12th Maths can be extremely helpful for students as they can get an idea of the kind of questions appearing in the exams. Solving the Questions from these 12th Class Maths NCERT Solutions one can develop a strong foundation of maths basics that are essential for higher classes. Study effectively and learn how to present answers in a better way to score well. To self-evaluate your preparation standards you can simply rely on MCQ Questions for Class 12 Maths with Answers and identify the areas of need. 12th Class Maths All Chapters Brief Students who are willing to attempt the Class 12 Maths Exam should know the Syllabus and Marks Distribution beforehand. Have a glimpse of all the chapters existing here and prepare accordingly. They are as follows Chapter 1 Relations and Functions 1st Chapter in NCERT Textbook deals majorly with the topics relations and functions, types of relations such as reflexive, symmetric, transitive, and equivalence, types of functions namely One to one and onto functions, binary operations and miscellaneous examples, the composition of functions and invertible function, etc. Chapter 2 Inverse Trigonometric Functions Inverse Trigonometric Functions Chapter includes several topics such as the basic concept of inverse trigonometric functions, properties of inverse trigonometric functions, Definition, range, domain, principal value branch, miscellaneous examples explained thoroughly. Chapter 3 Matrices In this Chapter 3 of Class 12 Maths Textbook we will discuss majorly matrix definition, types of matrices, equality of matrices, operations on matrices, properties of matrix addition, properties of scalar multiplication, multiplication of matrices, symmetric and skew-symmetric matrices, properties of multiplication of matrices, transpose of a matrix, properties of the transpose of the matrix, the inverse of a matrix by elementary operations, elementary operation or transformation of a matrix, and miscellaneous examples. Chapter 4 Determinants In this Chapter students will get to know the definition & meaning of determinants, order of determinants, properties of determinants, applications of determinants, minors and cofactors of determinants, finding the area of a triangle using determinants, adjoint of a matrix, the inverse of a matrix, and matrices and miscellaneous examples. Chapter 5 Continuity and Differentiability This Chapter 5 gives students knowledge on topics like continuity definition & meaning, differentiability definition, and meaning, derivative of implicit functions, derivatives of composite functions, derivatives of inverse trigonometric functions, exponential and logarithmic functions, logarithmic differentiation, second-order derivatives, derivatives of functions in parametric forms, mean value theorem via miscellaneous examples. Chapter 6 Applications of Derivatives Chapter 6 includes derivatives definition, rate of change of quantities, approximations, tangents and normals, maxima and minima, increasing and decreasing functions, first derivative test, maximum and minimum values of a function in a closed interval, and miscellaneous examples. Chapter 7 Integrals Ch 7 Integrals include topics like definite & indefinite integral definition, integration the inverse process of differentiation, geometrical interpretation of indefinite integral, comparison between differentiation and integration, properties of indefinite integral, integration using partial fractions, integration by parts, methods of integration such as integration by substitution, integration using trigonometric identities. Along with these concepts, it also includes topics such as fundamental theorem of calculus, definite integral as the limit of a sum, evaluation of definite integrals by substitution, some properties of definite integrals and miscellaneous examples. Chapter 8 Applications of Integrals This Chapter is a continuation of the chapter we discussed earlier and begins with the definition of integrals, area under simple curves, area between two curves, area of the region bounded by a curve and a line, and miscellaneous examples. Chapter 9 Differential Equations Here you will learn about the topics differential equations definition, a concept related to differential equations, degree & order of a differential equation, general and particular solutions of a differential equation, formation of a differential equation whose general solution is given, procedure to form a differential equation that will represent a given family of curves. Apart from these know the methods of solving first order, differential equations with variable separable, first-degree differential equations, linear differential equations, homogeneous differential equations, the procedure for solving first-order linear differential equations and miscellaneous examples. Chapter 10 Vector Algebra This chapter deals with the topics basic concepts regarding vector algebra, how to find the position vector, direction cosines, types of vectors, addition of vectors, properties of vector addition, components of a vector, vector joining two points, section formula. Furthermore, it even explains the topics like multiplication of a vector by a scalar, product of two vectors, scalar or dot product of two vectors, properties of scalar product, vector or cross product of two vectors, projection of a vector on a line, and miscellaneous examples. Chapter 11 Three Dimensional Geometry It covers information like Direction cosines and direction ratios of a line joining two points, coplanar and skew lines, Cartesian equation and vector equation of a line, Cartesian and vector equation of a plane, the shortest distance between two lines, the distance of a point from a plane. Chapter 12 Linear Programming In this chapter, we will discuss the concepts like Introduction, related terminology such as constraints, objective function, optimization, different types of linear programming (L.P.) problems. In addition, you will know the concepts like the Graphical method of solution for problems in two variables, feasible and infeasible solutions, feasible and infeasible regions (bounded), optimal feasible solutions(up to three non-trivial constraints). Chapter 13 Probability This Chapter includes topics like Conditional probability, independent events, multiplication theorem on probability, total probability, Random variable, and its probability distribution, Bayes’ theorem. Along with these concepts, you will also become familiar with topics such as the mean of a random variable, the variance of a random variable, Bernoulli trials, and binomial distribution with miscellaneous examples. CBSE Class 12 Maths Unitwise Weightage for 1st Term ┃No.│Units │Marks ┃ ┃I │Relations and Functions │08 ┃ ┃II │Algebra │10 ┃ ┃III│Calculus │17 ┃ ┃V │Linear Programming │05 ┃ ┃ │Total │40 ┃ ┃ │Internal Assessment │10 ┃ ┃ │Total │50 ┃ 12th Class Maths Unitwise Weightage for Second Term ┃Unit│Unit Name │Marks┃ ┃III │Calculus │18 ┃ ┃IV │Vectors and Three-Dimensional Geometry │14 ┃ ┃VI │Probability │08 ┃ ┃ │Total │40 ┃ ┃ │Internal Assessment │10 ┃ ┃ │Total │50 ┃ Key Features of CBSE Class 12th Mathematics NCERT Book Solutions • 12th Grade Students can easily practice and revise the syllabus using these NCERT Solutions. • You can save your time as you will find the required Math Formulas and Equations all in one place. • Tips & Tricks mentioned in the CBSE Class 12 Maths NCERT Solutions can be quite helpful and you can answer extremely difficult questions too with ease. • You can use them during your last-minute revision and get to know the subjects on a deeper level. • Answering the Questions in 12th Std Maths NCERT Book Solutions on a regular basis will improve your accuracy and speed while attempting the exams. FAQs on Class 12 Maths NCERT Solutions 1. Which is the best resource to find the CBSE NCERT Solutions for Class 12 Maths? NCERTSolutions.guru is the best resource to find the NCERT Solutions for 12th Class Maths for All Chapters given by experts. 2. How many Chapters are there in the CBSE Class 12th Mathematics Textbook? There are 13 Chapters in total in the Class 12th Mathematics Textbook. 3. Are NCERT Solutions Class 12 Maths sufficient to crack Board Exams? Yes, NCERT Solutions of Class 12 Maths are more than enough to crack the Board Exams.
{"url":"https://ncertsolutions.guru/ncert-solutions-for-class-12-Maths/","timestamp":"2024-11-09T15:52:32Z","content_type":"text/html","content_length":"224187","record_id":"<urn:uuid:e2f71b8f-5a46-4157-a94c-df455b1151ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00825.warc.gz"}
Soil thermal unit aids N prediction G - Articles in popular magazines and other technical publications Hatch, D. J. 1999. Soil thermal unit aids N prediction. Farming News. (22 January). Authors Hatch, D. J. Year of Publication 1999 Journal Farming News Journal citation (22 January) Funder project or code 1 Project: 2430 5104 Open access Published as non-open access
{"url":"https://repository.rothamsted.ac.uk/item/852vw/soil-thermal-unit-aids-n-prediction","timestamp":"2024-11-13T21:18:01Z","content_type":"text/html","content_length":"144321","record_id":"<urn:uuid:61a32338-6cbe-47c2-b5ed-d188106a9cf9>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00616.warc.gz"}
Resampling Hierarchically Structured Data Recursively | R-bloggersResampling Hierarchically Structured Data Recursively Resampling Hierarchically Structured Data Recursively [This article was first published on BioStatMatt » R , and kindly contributed to ]. (You can report issue about the content on this page ) Want to share your content on R-bloggers? if you have a blog, or if you don't. That’s a mouthful! I presented this topic to a group of Vandy statisticians a few days ago. My notes (essentially reproduced in this post) are recorded at the Dept. of Biostatistics wiki: HowToBootstrapCorrelatedData. The presentation covers some bootstrap strategies for hierarchically structured (correlated) data, but focuses on the multi-stage bootstrap; an extension of that described by Davison and Hinkley (ISBN 978-0-521-57471-6). The multi-stage bootstrap mimics the data generating mechanism by resampling in a nested fashion. For example, resample first among factors at the highest level of hierarchy. Then, for each resampled factor, further resample among factors at the next lower level, and so forth. Each level may be resampled with or without replacement. Furthermore, some levels of hierarchy may be ignored completely, if considered to have little or no effect on the data correlation structure. Whether to ignore a level of hierarchy, or to sample with replacement are important bootstrap design considerations. The resample function below implements a multi-stage bootstrap recursively. That is, levels of hierarchy are traversed by nested calls to resample. The dat argument is a dataframe with factor fields for each level of hierarchy (e.g., hospital, patient, measurement), and a numeric field of measured values. The cluster argument is a character vector that identifies the hierarchy in order from top to bottom (e.g., c('hospital','patient','measurement')). The replace argument is a logical vector that indicates whether sampling should be with replacement at the corresponding level of hierarchy ( e.g., c(TRUE,FALSE,FALSE)). resample <- function(dat, cluster, replace) { # exit early for trivial data if(nrow(dat) == 1 || all(replace==FALSE)) # sample the clustering factor cls <- sample(unique(dat[[cluster[1]]]), replace=replace[1]) # subset on the sampled clustering factors sub <- lapply(cls, function(b) subset(dat, dat[[cluster[1]]]==b)) # sample lower levels of hierarchy (if any) if(length(cluster) > 1) sub <- lapply(sub, resample, cluster=cluster[-1], replace=replace[-1]) # join and return samples do.call(rbind, sub) The following block of R code simulates a dataset with 5 correlated (rho = 0.4) repeat measurements on each of 10 patients, from each of 5 hospitals. Hence, there are 250 simulated measurements and 50 patients in total. Patients are simulated independently (i.e., the hospital level of hierarchy has no affect on the correlation structure). The functions covimage and datimage generate a levelplot representations of the covariance and data matrices for the simulated data, respectively. # simulate correlated data rho <- 0.4 dat <- expand.grid( sig <- rho * tcrossprod(model.matrix(~ 0 + patient:hospital, dat)) diag(sig) <- 1 dat$value <- chol(sig) %*% rnorm(250, 0, 1) covimage <- function(x) levelplot(as.matrix(x), aspect="fill", scales=list(draw=FALSE), xlab="", ylab="", colorkey=FALSE, col.regions=rev(gray.colors(100, end=1.0)), datimage <- function(x) { mat <- as.data.frame(lapply(x, as.numeric)) levelplot(t(as.matrix(mat)), aspect="fill", scales=list(cex=1.2, y=list(draw=FALSE)), ylab="", xlab="", colorkey=FALSE, col.regions=gray.colors(100), The images below result from calls to datimage(dat) and covimage(dat) respectively. The next block of R code generates several boostrap distributions for the sample mean, and approximates the 'true' sampling distribution by Monte Carlo. The final series of boxplots (shown below) illustrate that bootstrap design greatly impacts the inferred distribution of the sample mean (and presumably for other sample statistics). Hence, it's important to think carefully about bootstrap design for hierarchically structured data, and ensure that it closely reflects the 'true' data generating mechanism. # bootstrap ignoring hospital and patient levels cluster <- c("measurement") system.time(mF <- replicate(200, mean(resample(dat, cluster, c(F))$val))) system.time(mT <- replicate(200, mean(resample(dat, cluster, c(T))$val))) #boxplot(list("F" = mF, "T" = mT)) # bootstrap ignoring hospital level cluster <- c("patient","measurement") system.time(mFF <- replicate(200, mean(resample(dat, cluster, c(F,F))$val))) system.time(mTF <- replicate(200, mean(resample(dat, cluster, c(T,F))$val))) system.time(mTT <- replicate(200, mean(resample(dat, cluster, c(T,T))$val))) #boxplot(list("FF" = mFF, "TF" = mTF, "TT" = mTT)) # bootstrap accounting for full hierarchy cluster <- c("hospital","patient","measurement") system.time(mFFF <- replicate(200, mean(resample(dat, cluster, c(F,F,F))$val))) system.time(mTFF <- replicate(200, mean(resample(dat, cluster, c(T,F,F))$val))) system.time(mTTF <- replicate(200, mean(resample(dat, cluster, c(T,T,F))$val))) system.time(mTTT <- replicate(200, mean(resample(dat, cluster, c(T,T,T))$val))) #boxplot(list("FFF" = mFFF, "TFF" = mTFF, "TTF" = mTTF, "TTT" = mTTT)) # Monte Carlo for the true sampling distribution system.time(mMC <- replicate(200, mean(chol(sig) %*% rnorm(250, 0, 1)))) #boxplot(list("MC" = mMC)) boxplot(list("MC" = mMC, "F" = mF, "T" = mT, "FF" = mFF, "TF" = mTF, "TT" = mTT, "FFF" = mFFF, "TFF" = mTFF, "TTF" = mTTF, "TTT" = mTTT)) The following figure presents boxplots for the distribution of sample means under the above sequence of bootstrap strategies. The "MC" boxplot summarizes the 'true' distribution of the sample mean (estimated using Monte Carlo). The remaining boxplots are labeled according to the bootstrap strategy used. For instance, the "TF" boxplot corresponds to a multi-stage bootstrap of patients with replacement and measurements-within-patients without replacement (this is commonly called the "cluster bootstrap"), but that ignores the hospital factor. This strategy most closely reflects the data generating mechanism. Notice that sampling all levels of hierarchy without replacement (e.g., "FFF") simply permutes the indices of the resampled data, and does not confer any variability on the sample mean.
{"url":"https://www.r-bloggers.com/2012/04/resampling-hierarchically-structured-data-recursively/","timestamp":"2024-11-06T21:17:04Z","content_type":"text/html","content_length":"102096","record_id":"<urn:uuid:2a28d75f-2130-4e92-b76f-baf6d8ebb6ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00199.warc.gz"}
How Many Years Is A Million Days? Exploring The Calculation - Selebriti.cloud How Many Years is a Million Days? Exploring the Calculation Have you ever wondered how many years is a million days? It sounds like an absurd question, but the answer may surprise you. As it turns out, a million days is roughly equivalent to 2,738 years. That’s right – over two millennia! Imagine living for that long and all the changes you would witness over time. While a million days may seem like a vast expanse of time, it goes by surprisingly quickly. Every day, we’re given 24 hours to live, love, and create memories. And before we know it, months become years, years become decades, and before we know it, our time here on earth is up. So, what can we do to make the most of our short time on this planet and leave a lasting legacy that will continue to inspire others long after we’re gone? Calculation of a million days in years Have you ever wondered how many years are in a million days? Well, the answer is not as straightforward as you might expect. Let’s dive deeper into the calculation process to fully understand the Firstly, we need to understand that a year is typically measured as 365 days. However, a solar year, which is defined as the time it takes for the Earth to complete one orbit around the sun, is actually 365.24 days long. To factor in this extra time, a leap year with an extra day is added every four years. So, if we want to calculate how many years are in one million days, we need to consider the length of an actual year. Here’s the breakdown: • 1 day = 0.00273791 years (1/365.24) • 1 million days = 2737.91 years (0.00273791 x 1,000,000) Therefore, one million days is equivalent to 2737.91 years. It’s important to note that this calculation is based on the assumption that every year has 365.24 days. However, the length of a year can vary slightly due to factors such as the gravitational pull of other planets and the wobbling of the Earth’s axis, causing errors in the calculation. Cultural references to a million days Throughout history and across various cultures, the concept of a million days has been referenced in different ways. Here are three cultural references to a million days: • Mayan Long Count Calendar: The ancient Mayan civilization used the Long Count Calendar to measure time. It is a base-20 system that uses five different units of time to count days, with each unit being 20 times the previous one. One of the units is the Kin, which is the equivalent of one day. According to the Mayan Long Count Calendar, a million days is equivalent to 2,737 years. • The Bible: In the Bible, a “thousand years in your sight are like a day that has just gone by, or like a watch in the night” (Psalm 90:4). Using this reference, a million days would be equivalent to 2,739 years and 13 days. • Chinese Zodiac: In Chinese astrology, each year is associated with an animal sign in a 12-year cycle. The Chinese believe that a person’s sign influences their personality and fortune. A million days is equivalent to 2,740 Chinese zodiac years. These cultural references show that the concept of a million days has been used in various ways throughout history. From ancient civilizations measuring time to religious texts and astrology, a million days represents a significant amount of time in different cultures. Scientific representation of a million days Measuring time is an essential aspect of our daily lives, from tracking our schedules to the way we mark significant events. But, have you ever considered how much time a million days constitute? Let us dig deeper and understand the scientific representation of this massive number. • A million days can also be expressed as 2,739.73 years. • In a scientific notation, one million days is represented as 1 × 10^6 × 8.64 × 10^4 = 8.64 × 10^10 seconds. • In terms of units, a million days equals 86,400,000,000 milliseconds. As you can see, a million days is a mind-boggling period that is challenging to conceptualize in day-to-day life. Therefore, scientists use several methods to define, measure and represent such large numbers of time intervals. For instance, they use the Julian Day, an astronomy standard, to represent dates and times in a more straightforward format. In the Julian Day system, a million days corresponds to the Julian Day number 2,451,545. In contrast, Unix timestamps and International Atomic Time (TAI) are other commonly-used units to represent time intervals. The following table shows how a million days can be translated into different units: Unit Value Seconds 86,400,000 Minutes 1,440,000 Hours 24,000 Days 1,000,000 Weeks 142,857.14 Months 30,436.85 Years 2,739.73 Therefore, by using these standards, scientists can achieve more clarity and precision when decoding large intervals of time, like a million days. Famous events that spanned a million days When you hear the number “million days,” it may be hard to grasp just how long that truly is. To put it into perspective, a million days is equivalent to roughly 2,739 years. That’s a significant amount of time, and it’s not surprising that some of the most notable events in history have spanned this time period. Here are just a few examples: • The Roman Empire: The Roman Empire lasted roughly a million days, from 27 BC to 476 AD. During this time, Rome became one of the most powerful empires in the world, ruling over much of Europe, the Middle East, and North Africa. • The Ming Dynasty: China’s Ming Dynasty lasted for almost exactly a million days, from 1368 to 1644. This era is known for its cultural and artistic achievements, such as the construction of the Forbidden City and the famous blue-and-white porcelain. • The Hundred Years’ War: Despite its name, the Hundred Years’ War was actually a series of conflicts between England and France that lasted for just over a million days, from 1337 to 1453. This war had a significant impact on European history, leading to the rise of nationalism and the development of modern warfare tactics. These examples only scratch the surface of the many events that have spanned a million days throughout history. It’s astounding to think about all the changes and developments that can occur over such a long period of time, and it’s a testament to the resilience of human societies. The Long Now Foundation The Long Now Foundation is an organization that was founded in 1996 with the goal of fostering long-term thinking and planning. The idea behind this is that by taking a longer-term view of the world, we can make better decisions and create a more sustainable future. One of the most fascinating projects of The Long Now Foundation is the Clock of the Long Now, which is designed to keep time for 10,000 years. This clock is being built in a remote mountain site in West Texas, and is intended to encourage people to think on a timeframe much longer than our current culture typically does. The idea of thinking in terms of millions of days or even thousands of years may seem overwhelming or even impossible. But by striving for this kind of long-term thinking, we can work towards creating a better world for future generations. A Timeline of a Million Days To help visualize just how long a million days is, here is a timeline of events that would span that amount of time: Event Date range (million days) Human migration out of Africa 200,000 BC – 41 AD The Roman Empire 27 BC – 476 AD The Ming Dynasty 1368 – 1644 The Hundred Years’ War 1337 – 1453 The American Revolution 1765 – 1869 The invention of the telephone 1876 – 2775 The construction of the Great Wall of China 700 BC – 1077 The Wright Brothers’ first flight 1903 – 2688 The first moon landing 1969 – 3269 The Long Now Foundation’s Clock of the Long Now 1996 – 9930 As this timeline shows, a million days is an almost unimaginably long period of time. But with the right combination of long-term thinking and planning, we can work towards creating a better future for all of humanity. Celestial Cycles That Last a Million Days When it comes to measuring time in astronomical terms, million days is just a blip on the radar of celestial cycles that span thousands, millions, billions, or even trillions of years. Here are some celestial cycles that last a million days: • Saturn’s Revolution: Saturn takes about 29 Earth years to make one revolution around the sun, which is equivalent to about 10,585 Earth days. Therefore, a million Saturnian days is about 84.3 Earth years. • Rotation of the Sun: The sun takes about 25 Earth days at its equator to complete one rotation, or 609.12 hours. One million such rotations of the sun would span about 1,766 Earth years. • Jupiter’s Revolution around the Sun: Jupiter, the largest planet in our solar system, takes about 12 Earth years to orbit the sun, or about 4,382 Earth days. A million Jupiterian days would equal about 34.8 Earth years. A Bigger Picture Compared to some of the cosmic clocks ticking away in the universe, a million days isn’t even a blink of an eye. For example: • Precession of the Equinoxes: This is the slow, cyclical wobbling of the earth’s rotational axis caused by gravitational forces from the moon, sun, and other planets. It takes about 25,800 Earth years to complete one precession cycle, or about 9,415,700 Earth days. • Cosmic Year: This is the time it takes for our solar system to complete one orbit around the center of the Milky Way galaxy. It takes about 225-250 million Earth years, or about 82-91 billion Earth days, to complete one cosmic year. A Closer Look: Planetary Hours Another way to measure time in the sky is using planetary hours, which are based on the seven traditional planets visible to the naked eye (Sun, Moon, Mercury, Venus, Mars, Jupiter, and Saturn). In this system, each planet rules one hour of the day and one hour of the night, depending on the day of the week and the time of year. Planet Duration of Planetary Hour (approx.) Sun 60 minutes (1 hour) Moon 60 minutes (1 hour) Mercury 94 minutes (1 hour, 34 minutes) Venus 112 minutes (1 hour, 52 minutes) Mars 135 minutes (2 hours, 15 minutes) Jupiter 171 minutes (2 hours, 51 minutes) Saturn 228 minutes (3 hours, 48 minutes) In conclusion, a million days may seem like a lot, but in the grand scheme of things, it is just a tiny blip in the cosmic timeline. From the slow precession of the equinoxes to the cosmic year, there are countless cycles and rhythms in the universe that put our human concept of time into perspective. Religious beliefs related to a million days Throughout history, various religions have attached significant importance to numbers and their corresponding meanings. One such number is 7, which is considered a sacred number in many religions and belief systems. When it comes to the concept of a million days, the number 7 holds various meanings and interpretations. • In Christianity, 7 is considered the perfect number and is often used to represent completeness and perfection. The Bible states that God created the world in 7 days, and there are 7 days in a week. Therefore, some Christians might interpret a million days as a symbol of divine perfection and completion. • In Judaism, the number 7 is also considered a sacred number and appears multiple times in the Old Testament. For example, the Israelites marched around the walls of Jericho 7 times before they fell. Additionally, the menorah in the temple had 7 branches. As for a million days, some Jews might view it as a representation of the long and arduous journey towards salvation and redemption. • In Hinduism, 7 is one of the most important numbers and appears in various forms. For instance, there are 7 chakras in the human body, and the goddess Kali is often depicted with 7 arms. As for a million days, some Hindus might interpret it as a symbol of the cyclical nature of life, death, and rebirth. Overall, the number 7 holds different meanings and interpretations across various religions. However, all of them view it as a sacred number with strong symbolic power. When it comes to the concept of a million days, the meaning might not be explicitly stated in religious texts, but it could be interpreted as a representation of ultimate completion, redemption, or cyclical nature, depending on one’s religious beliefs. The Psychological Impact of a Million Days A million days is equivalent to more than 2,740 years, a length of time that is difficult for most of us to comprehend. The concept of a million of anything can be overwhelming, but when it comes to time, it becomes an abstract and intangible concept that can be hard to grasp. However, understanding the magnitude of a million days can have a significant psychological impact. • Sense of Mortality – When you realize that a million days is more than 2,740 years, it can put your own mortality into perspective. It can be an eye-opening experience and make you realize the value of time. • Perceived Time – The concept of a million days can make time seem like a finite resource that you don’t want to waste. It can make you think about how you spend your time and prioritize things that are important to you. • Gratitude – Understanding the length of a million days can make you appreciate the time you have. It can create a sense of gratitude for every day and every experience you have, regardless of how small or big they are. The psychological impact of a million days can go beyond personal reflection. It can also have societal implications. For example, it can create a sense of urgency for global issues such as climate change, poverty, and human rights. A million days is a long time, but it’s not infinite, and it’s essential to make the most of the time we have to improve our lives and the world we live in. To help visualize what a million days looks like, consider the following table: Length of Time Equivalent to a Million Days 1 Year 0.003 Years 10 Years 0.027 Years 50 Years 0.137 Years 100 Years 0.274 Years As you can see from the table, a million days is a vast amount of time and can have a significant psychological impact. It can make us more aware of our mortality, make us appreciate the time we have, and create a sense of urgency for global issues. Understanding the magnitude of a million days can be transformative and make us more mindful of how we spend our time. Calculation of a billion seconds in years As we all know, a second is the smallest unit of time that we use to measure time duration. But have you ever wondered how many years it would take to count up to one billion seconds? Let’s do some We know that one minute is equal to 60 seconds. Therefore, one hour is equal to 60 minutes × 60 seconds = 3,600 seconds. And one day is equal to 24 hours × 3,600 seconds = 86,400 seconds. So, to calculate how many days are in a billion seconds, we simply divide the number of seconds by 86,400. 1,000,000,000 seconds / 86,400 seconds per day = 11,574.07 days This means that one billion seconds is equivalent to 11,574.07 days. However, this number doesn’t give us a clear understanding of how many years it would be. So, let’s dive a little deeper. • 11,574.07 days / 365.25 days per year = 31.6888 years Therefore, one billion seconds is equivalent to approximately 31.6888 years. To put this into perspective, here are some events that occur within this time frame: • The average lifespan of a dog is around 10 to 13 years. One billion seconds is more than twice the average lifespan of a dog! • The first iPhone was released in 2007. One billion seconds takes us back to the year 1990! • One billion seconds ago, the world population was around 5.3 billion. Today, it is close to 7.9 billion. If we want to calculate how many years is a million days, we can simply divide one million days by 365.25 days per year. Number of Days Number of Years 1,000,000 days 2,737.85 years So, a million days is equivalent to around 2,737.85 years. To give you some perspective, this is longer than the average lifespan of a human being! In conclusion, a billion seconds is equivalent to approximately 31.6888 years, while a million days is equivalent to around 2,737.85 years. These calculations help us understand the scale of time and how small or large numbers can impact our sense of time. Comparison of one million days to other lengths of time One million days may seem like an incredibly long time, but how does it compare to other lengths of time? Let’s take a look: • 1,000 days is approximately 2.74 years • 10,000 days is approximately 27.4 years • 100,000 days is approximately 274 years • 1 million seconds is approximately 11.57 days • 1 million minutes is approximately 1.9 years • 1 million hours is approximately 114 years • 1 million weeks is approximately 19,178 years • 1 billion seconds is approximately 31.7 years • 1 trillion seconds is approximately 31,709.8 years • 1 quadrillion seconds is approximately 31,709,790.7 years As you can see, one million days is quite a lengthy amount of time, but it’s not quite as mind-boggling as some other lengths of time, such as 1 million weeks or 1 quadrillion seconds. However, it’s important to note that when you’re dealing with such large numbers, the exact length of time can vary depending on how you measure it. For some additional context, here’s a table comparing one million days to some noteworthy events and accomplishments: Length of Time Equivalent to One Million Days Average human lifespan Approximately 27.4 lifetimes Time since the last ice age ended Approximately 10% of the time Time since the Big Bang Approximately 0.00003% of the time Age of the Earth Approximately 0.23% of the age Age of the universe Approximately 0.002% of the age Record for the longest human lifespan Approximately 142 lifetimes Age of the oldest known tree (a Great Basin Bristlecone Pine) Approximately 13 lifetimes Age of the oldest known animal (a clam) Approximately 11 lifetimes In conclusion, while one million days is certainly a lengthy amount of time, there are many other lengths of time that are even more staggering when you put them in perspective. Whether it’s the age of the universe or the lifespan of a clam, our world is full of incredible durations that can be difficult to fathom. FAQs – How many years is a million days? 1. How many years is a million days in normal terms? In normal terms, a million days is equivalent to approximately 2,739.73 years. 2. Is a million days equal to a thousand years? No, a million days is not equal to a thousand years. It is actually equivalent to around 2,739.73 years. 3. How long would it take to count to a million days? Assuming you count one number per second, it would take around 11 days, 13 hours, 46 minutes, and 40 seconds to count to a million days. 4. What would happen if we lived a million days? If we lived for a million days, we would be alive for approximately 2,739.73 years. This is much longer than the average lifespan of a human being. 5. How does a million days compare to a billion seconds? A billion seconds is equivalent to around 31.71 years, while a million days is roughly 2,739.73 years. Therefore, a billion seconds is much shorter than a million days. 6. Can you convert a million days to other units of time? Yes, you can convert a million days to other units of time. For instance, it is equal to 24,000,000 hours, 1,440,000,000 minutes, or 86,400,000,000 seconds. 7. Why is it important to know how many years is a million days? Knowing how many years is a million days can be useful in various fields, including astronomy, biology, and history. It can help us understand the duration of geological or astronomical events, the lifespan of animals or plants, or the time period of ancient civilizations. Closing Thoughts Now that you know how many years is a million days, you can impress your friends with your newfound knowledge. Remember that a million days is equivalent to about 2,739.73 years, or 24,000,000 hours, or 1,440,000,000 minutes. Whether you’re a science enthusiast, a trivia fan, or simply curious about the world around you, we hope you enjoyed reading this article. Thank you for taking the time to visit our site, and we look forward to seeing you again soon.
{"url":"https://selebriti.cloud/en/how-many-years-is-a-million-days/","timestamp":"2024-11-11T17:04:42Z","content_type":"text/html","content_length":"134924","record_id":"<urn:uuid:1b6b6bfe-4b3b-4d34-b36b-e002c7dcacdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00499.warc.gz"}
Individual Research Projects Section 14.2 - The Nature of Mathematics - 13th Edition Individual Research Projects Section 14.2 Project 14.3 If you roll a pair of dice 36 times, the expected number of times for rolling each of the numbers is given in the accompanying table. A graph of these data is shown. a. Find the mean, the variance, and the standard deviation for this model. b. Roll a pair of dice 36 times. Construct a table and a graph similar to the ones shown above. Find the mean, the variance, and the standard deviation for your experiment. c. Compare the results of parts a and b. If this is a class problem, you might wish to pool from the entire class data before making the comparison. Project 14.4 Prepare a report or exhibit showing how statistics are used in baseball. Project 14.5 Prepare a report or exhibit showing how statistics are used in educational testing. Project 14.6 Prepare a report or exhibit showing how statistics are used in psychology. Project 14.7 Prepare a report or exhibit showing how statistics are used in business. Use a daily report of transactions on the New York Stock Exchange. What inferences can you make from the information reported? Project 14.8 Investigate the work of Adolph Quetelet, Francis Galton, Karl Pearson, R. A. Fisher, and Florence Nightingale. Prepare a report or an exhibit of their work in statistics. Project 14.9 Historical Quest “We need privacy and a consistent wind,” said Wilbur. “Did you write to the Weather Bureau to find a suitable location?” “Well,” replied Orville, “I received this list of possible locations and Kitty Hawk, North Carolina, looks like just what we want. Look at this . . .” However, Orville and Wilbur spent many days waiting in frustration after they arrived in Kitty Hawk, because the winds weren’t suitable. The Weather Bureau’s information gave the averages, but the Wright brothers didn’t realize that an acceptable average can be produced by unacceptable extremes. Write a paper explaining how it is possible to have an acceptable average produced by unacceptable extremes.
{"url":"https://mathnature.com/individual-research-projects-14-2/","timestamp":"2024-11-13T14:54:31Z","content_type":"text/html","content_length":"113791","record_id":"<urn:uuid:dfc47097-0c9d-4fb4-a8c6-07e6e78d0c1e>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00715.warc.gz"}
homework question Sep 8, 2014 I am stuck on one of my homework problems and do not know which formula to use. This is the question: [TABLE="width: 739"] [TD="colspan: 4"]If all three ratings are 90 or above, then the employee receives a bonus of 25% of her base salary.[/TD] [TD="colspan: 5"]Otherwise, if all three ratings are 80 or above then the employee receives a bonus equal to 15% of her base salary.[/TD] [TD="colspan: 3"]Otherwise, the employee does not receive any bonus (zero dollars).The ranges for the ratings are H7:J162[/TD] Excel Facts Round to nearest half hour? Use =MROUND(A2,"0:30") to round to nearest half hour. Use =CEILING(A2,"0:30") to round to next half hour. Oct 27, 2005 Office Version 1. 365 2. 2019 3. 2013 4. 2007 1. Windows We don't generally assist with homework. Dec 9, 2008 Office Version 1. 365 1. Windows This is the question: [TABLE="width: 739"] [TD="colspan: 4"]If all three ratings are 90 or above, then the employee receives a bonus of 25% of her base salary. [TD="colspan: 5"]Otherwise, if all three ratings are 80 or above then the employee receives a bonus equal to 15% of her base salary. [TD="colspan: 3"]Otherwise, the employee does not receive any bonus (zero dollars).The ranges for the ratings are H7:J162 Might I suggest that while it might not be the "trickest" formula, you nearly have it solved already. Concentrate on the highest standard test first (IF all three ratings are >=90, THEN...). Jan 17, 2014 Sep 8, 2014 Oct 27, 2005 Office Version 1. 365 2. 2019 3. 2013 4. 2007 1. Windows There's a shorter way using IF and SUM !!
{"url":"https://www.mrexcel.com/board/threads/homework-question.804125/","timestamp":"2024-11-10T06:24:15Z","content_type":"text/html","content_length":"118599","record_id":"<urn:uuid:f3981ea0-e798-4809-99e4-494022004982>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00660.warc.gz"}
University of Cambridge Diameter of high-rank classical groups with random generators Algebra Seminar 10th February 2021, 2:30 pm – 3:30 pm Online, Zoom A conjecture due to Babai predicts a six-degrees-of-separation-type behaviour for finite simple groups. The conjecture is that the diameter of a Cayley graph of G is always bounded by (log |G|)^O(1). I will talk mainly about high-rank groups such as S_n and SL_n(2) with random generators. We can prove the conjecture for SL_n(q), q bounded, provided we have at least q^100 random generators. The heart of the proof consists of showing that the Schreier graph of SL_n(q) acting on F_q^n with respect to q^100 random generators is an expander graph. The proof of this uses the so-called trace method, which goes back to Wigner and his semicircle law for random matrices. I may make some noises about how the proof generalizes to other classical groups. All joint work with Urban Jezernik.
{"url":"https://www.bristolmathsresearch.org/seminar/sean-eberhard/","timestamp":"2024-11-10T12:33:26Z","content_type":"text/html","content_length":"54430","record_id":"<urn:uuid:0031ff89-932f-49d1-b9f1-5b8bc83ea9e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00717.warc.gz"}
cosx siny Formula | cosx siny Identity - iMath The function cosx siny is the product of a cosine function and a sine function. In this post, we will learn how to prove the formula/identity of cosx siny. cosx siny Formula The cosx siny formula is given as follows: cosx siny = $\dfrac{\sin(x+y)-\sin(x-y)}{2}$ Let us now prove the above formula of cos x sin y. Proof of cosx siny Formula cosx siny = 1/2[sin(x+y) – sin(x-y)] Step 1: At first, we will write down the formulas of sin(x+y) and sin(x-y). sin(x+y) = sinx cosy + cosx siny …(I) sin(x-y) = sinx cosy – cosx siny …(II) Step 2: Now, we will subtract (II) from (I). So we will get that sin(x+y) – sin(x-y) = (sinx cosy + cosx siny) – (sinx cosy – cosx siny) ⇒ sin(x+y) – sin(x-y) = sinx cosy + cosx siny – sinx cosy + cosx siny ⇒ sin(x+y) – sin(x-y) = 2 cosx siny Step 3: Lastly, we divide both sides by 2 to get the desired formula. By doing so, we deduce that cosx siny = 1/2[sin(x+y) – sin(x-y)] Thus we have obtain the formula of the product cosx siny which is cosx siny = 1/2[sin(x+y) – sin(x-y)]. From the above formula, we obtain the formula of 2cosx siny which is written below: 2cosx siny Formula: 2cosx siny = sin(x+y) – sin(x-y) Example 1: Find the value of cos45 sin15. Putting x=45 and y=15 in the above formula of cosx siny, we obtain that cos45 sin15 = $\dfrac{\sin(45+15)-\sin(45-15)}{2}$ = $\dfrac{\sin 60-\sin 30}{2}$ = $\dfrac{\frac{\sqrt{3}}{2} -\frac{1}{2}}{2}$ = $\dfrac{\frac{\sqrt{3}-1}{2}}{2}$ = $\dfrac{\sqrt{3}-1}{4}$ So the value of cos45 sin15 is equal to (√3-1)/4 and this is obtained by applying the formula of cosx siny. Q1: What is the formula of cosx siny? Answer: The formula of cosx siny is given by cosx siny = 1/2 [sin(x+y)-sin(x-y)]. Q2: What is the formula of cosa sinb? Answer: The formula of cosa sinb is equal to cosa sinb = 1/2 [sin(a+b)-sin(a-b)]. Q3: What is the formula of 2cosx siny? Answer: The formula of 2cosx siny is given by 2cosx siny = sin(x+y)-sin(x-y). Q4: What is the formula of 2cosa sinb? Answer: The formula of 2cosa sinb is given by 2cosa sinb = sin(a+b)-sin(a-b).
{"url":"https://www.imathist.com/cosx-siny-formula-identity/","timestamp":"2024-11-09T17:18:22Z","content_type":"text/html","content_length":"179504","record_id":"<urn:uuid:fb1c6cbe-aa43-4e27-a62e-c821bd38a216>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00534.warc.gz"}
JavaScript Challenge About operators JavaScript can perform mathematical operations. They are performed by logical operators and comparison operators. These are: In addition, you can also use standard mathematical operators such as: +, -, *, /. In JavaScript, there is a modulo operator (remainder of division). It is extremely useful when you want to check if the element is even. This operator is described by the percent sign (%). 9 % 2; // will return 1, because 9/2 is 4, and 1 is the remainder Excercise 1 In the JavaScript code you will find several variables containing the result of comparison of numbers, i.e. logical values (boolean) known to you from the previous lesson. For better readability, comparison operations are in brackets. Replace equality signs with the appropriate operators so that the results below return true in each case. Go to the first exercise: var exN = (5 <= 8); // returns true, because 5 is less than or equal to 8 var exN = (5 >= 8); // returns false, because 5 is not greater than nor equal to 8 See the result of the exercise (click here to see the result) Excercise 2 This task is analogous to the previous one. Replace signs of inequality in variables with appropriate operators so that the equations below return false in each case. Proceed to the second exercise: Exercise 2 See the result of the exercise click here to see the result) Excercise 3 Moving on to more complicated topics, we introduce logical operators. Let's discuss the example below: var exN = (5 <= 8) && (1 > 3); The value in the exN variable will be false. Why? Because the computer will do the following: • First, it will calculate the result of the equation (5 <= 8). The result will be true. This value will be remembered for a while. • Then it will calculate the result of the equation (1 > 3). This time, the result will be false. This value will be remembered for a while. • At the end, it will calculate the result of the equation true && false (logical values have been remembered in previous steps). The result will be false, and this value will be saved to the Let's get to the exercise: Exercise 3. In the JavaScript code replace the equality or inequality signs with appropriate operators, so that the equations return true in each case. Do not change logical operators! See the result of the exercise (click here to see the result) Excercise 4 This task is analogous to the previous one. In JavaScript code, replace the operators with appropriate ones so that the results below return false in each case. Go to the exercise: Exercise 4 See the result of the exercise (click here to see the result)
{"url":"https://coderslab.rs/rs/javascript-challenge/operators","timestamp":"2024-11-03T16:25:01Z","content_type":"text/html","content_length":"362250","record_id":"<urn:uuid:6206d52a-daab-4ad6-a3b8-ad4c3de9d76b>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00610.warc.gz"}
ACIS-S Simulations - CIAO 4.16 Sherpa Simulating Chandra ACIS-S Spectra with Sherpa Sherpa Threads (CIAO 4.16 Sherpa) This thread illustrates the use of the Sherpa fake_pha command to simulate a spectrum of a point source obtained with the ACIS-S detector aboard Chandra, with and without consideration of a background component. If you do not have experience with simulating X-ray spectral data in Sherpa, you may wish to follow the "step-by-step" example in the introductory simulation thread, before attempting the sample analysis in this thread. Last Update: 14 Dec 2023 - updated for CY 26/CIAO 4.16, no content change. Getting started—downloading calibration response files for simulations In order to simulate a Chandra ACIS-S spectrum with Sherpa, we must define an instrument response with the appropriate ARF (auxiliary response function) and RMF (redistribution matrix function) files. These files may be downloaded from the Chandra Proposal Planning page of the CalDB (Calibration Database) website, where sample RMFs and corresponding ARFs positioned at the aimpoint of the ACIS-S array (and at selected off-axis points) are available. In this thread, we use the files aciss_aimpt_cy26.arf and aciss_aimpt_cy26.rmf, positioned at the default pointing for ACIS-S. The Sherpa fake_pha command calculates a 1-D spectral data set based on a defined instrument response and source model expression. For extensive details on the fake_pha functionality, see the introductory simulation thread Simulating X-ray Spectral Data (PHA): the fake_pha command. To learn how to simulate the spectrum of a point source which includes a background component, follow the second half of this thread, "Including a background component". Defining the Instrument Response We begin by establishing the instrument response corresponding to the default pointing of the ACIS-S detector: sherpa> arf1 = unpack_arf("aciss_aimpt_cy26.arf") sherpa> print(arf1) name = aciss_aimpt_cy26.arf energ_lo = Float64[1024] energ_hi = Float64[1024] specresp = Float64[1024] bin_lo = None bin_hi = None exposure = 8037.3177424371 ethresh = 1e-10 sherpa> rmf1 = unpack_rmfunpack_rmf("aciss_aimpt_cy26.rmf") name = aciss_aimpt_cy26.rmf energ_lo = Float64[1024] energ_hi = Float64[1024] n_grp = UInt64[1024] f_chan = UInt32[1504] n_chan = UInt32[1504] matrix = Float64[387227] e_min = Float64[1024] e_max = Float64[1024] detchans = 1024 offset = 1 ethresh = 1e-10 Here, the ARF and RMF data are loaded and assigned to a set of variables with the unpack_* commands. These variables will be used to assign the instrument response to the faked data set we will create in the next section, "Defining a Source Model Expression". Defining a Source Model Expression Now that we have loaded the ARF and RMF instrument responses, we use the set_source command to establish a source model expression and name the faked data set which will be produced in our sherpa> set_source(1, xsphabs.abs1 * powlaw1d.m1) sherpa> m1.gamma = 2 sherpa> abs1.nh = 0.2 We have defined a source model expression for this simulation using an absorbed 1-D power-law model with a Galactic neutral hydrogen column density of 2×10^21 cm^-2 and a power-law photon index of 2. Running the Simulation with fake_pha Simulating the Chandra spectrum means taking the defined model expression, folding it through the Chandra ACIS-S response, and applying Poisson noise to the counts predicted by the model. The simulation is run with fake_pha, which has four required arguments: dataset ID, ARF, RMF, and exposure time. We decide to simulate an ACIS-S spectrum resulting from a 50 ks exposure of a point sherpa> fake_pha(1, arf1, rmf1, exposure=50000, grouped=False, backscal=1.0) This command associates the data ID 1 with a simulated data set based on the assumed exposure time, instrument response, and source model expression we defined earlier; Poisson noise is added to the modeled data. Note that as of Sherpa in CIAO 4.2, the 'arf' and 'rmf' arguments of the fake_pha command can accept filenames directly; e.g., we could have done the following: sherpa> fake_pha(1, arf="aciss_aimpt_cy26.arf", rmf="aciss_aimpt_cy26.rmf", exposure=50000, grouped=False, backscal=1.0) For detailed information on the available options for setting the 'arf' and 'rmf' arguments of fake_pha, refer to the fake_pha ahelp file. We may inspect some basic properties of the new data set with the show_data command: sherpa> show_data() Data Set: 1 Filter: 0.0073-14.9504 Energy (keV) Noticed Channels: 1-1024 name = faked channel = Int64[1024] counts = Float64[1024] staterror = None syserror = None bin_lo = None bin_hi = None grouping = None quality = None exposure = 50000 backscal = 1.0 areascal = None grouped = False subtracted = False units = energy rate = True plot_fac = 0 response_ids = [1] background_ids = [] RMF Data Set: 1:1 name = aciss_aimpt_cy26.rmf energ_lo = Float64[1024] energ_hi = Float64[1024] n_grp = UInt64[1024] f_chan = UInt32[1504] n_chan = UInt32[1504] matrix = Float64[387227] e_min = Float64[1024] e_max = Float64[1024] detchans = 1024 offset = 1 ethresh = 1e-10 ARF Data Set: 1:1 name = aciss_aimpt_cy26.arf energ_lo = Float64[1024] energ_hi = Float64[1024] specresp = Float64[1024] bin_lo = None bin_hi = None exposure = 8037.3177424371 ethresh = 1e-10 Note that the simulated data set currently does not have the correct normalization—the flux of the simulated data is incorrect because the default power-law normalization is arbitrarily set to 1.0, as shown with the show_model command: sherpa> show_model() Model: 1 apply_rmf(apply_arf((50000 * (xsphabs.abs1 * powlaw1d.m1)))) Param Type Value Min Max Units ----- ---- ----- --- --- ----- abs1.nh thawed 0.2 0 100000 10^22 atoms / cm^2 m1.gamma thawed 2 -10 10 m1.ref frozen 1 -3.40282e+38 3.40282e+38 m1.ampl thawed 1 0 3.40282e+38 To correct the flux we need to adjust the normalization, as demonstrated in the section "Defining the Model Normalization for the Simulated Data". Defining the Model Normalization for the Simulated Data Before we can use the simulated data set for scientific analysis, it must be re-normalized to match the flux (or total counts) required by our selected source. The 0.2–10 keV flux in the simulated spectrum is 3.89×10^-9 ergs cm^-2 s^-1: The 0.2–10 keV flux of a source in our Chandra proposal, for example, has been measured at 1.0×10^-12 ergs cm^-2 s^-1. Therefore, the correct normalization is \(\frac{1.\mathrm{e}-12} sherpa> my_flux = 1.e-12 sherpa> norm = my_flux / calc_energy_flux(0.2, 10, 1) sherpa> print(norm) sherpa> set_par(m1.ampl, norm) sherpa> show_model(1) Model: 1 apply_rmf(apply_arf((50000.0 * (xsphabs.abs1 * powlaw1d.m1)))) Param Type Value Min Max Units ----- ---- ----- --- --- ----- abs1.nH thawed 0.2 0 1e+06 10^22 atoms / cm^2 m1.gamma thawed 2 -10 10 m1.ref frozen 1 -3.40282e+38 3.40282e+38 m1.ampl thawed 0.000256776 0 3.40282e+38 sherpa> fake_pha(1, arf1 ,rmf1, exposure=50000) sherpa> prefs = get_data_plot_prefs() sherpa> prefs["yerrorbars"] = False # remove y-error bars from plot sherpa> plot_data(1) With the new normalization, the simulated flux is correctly set at the measured flux of 1×10^-12 ergs cm^-2 s^-1. A plot of the data is shown in Figure 1. Figure 1: Plot of simulated source spectrum Note that we could have chosen to re-normalize the simulated data set to match the required total counts instead of flux. For example: sherpa> my_counts = 10000 sherpa> norm_counts = my_counts / calc_data_sum(0.5, 8., 1) sherpa> print(norm_counts) Writing the Simulated Data to Output Files We may use the save_pha command to write the simulated data as a PHA file, with a header containing the exposure time value and paths to the ARF and RMF files: We also have the option to save the data to a FITS or ASCII table file with the save_arrays command: sherpa> save_arrays("my_sim_data.fits", [get_model_plot(1).xlo, get_model_plot(1).y], ascii=False) sherpa> save_arrays("my_sim_data.txt", [get_model_plot(1).xlo, get_model_plot(1).y], ascii=True) Fitting the Simulated Data The simulated data set may be filtered and fit as any other data set in Sherpa. For example, we can choose to filter the simulated data to include only the counts in a restricted energy range, such as 0.5 keV–7.0 keV: sherpa> calc_energy_flux(0.2, 10, 1) # ergs cm^-2 s^-1 sherpa> calc_energy_flux(0.5, 7, 1) sherpa> calc_data_sum(0.5, 7, 1) # counts sherpa> notice(0.5, 7) dataset 1: 0.0073:14.9504 -> 0.4964:7.008 Energy (keV) sherpa> show_filter() Data Set Filter: 1 0.4964-7.0080 Energy (keV) Then, we can fit the simulated data set with the source model expression we used to create it: sherpa> set_method("neldermead") sherpa> set_stat("cstat") sherpa> fit() Dataset = 1 Method = neldermead Statistic = cstat Initial fit statistic = 492.451 Final fit statistic = 491.728 at function evaluation 454 Data points = 446 Degrees of freedom = 443 Probability [Q-value] = 0.0544883 Reduced statistic = 1.11 Change in statistic = 0.722452 abs1.nH 0.175512 m1.gamma 1.99289 m1.ampl 0.0002537 sherpa> plot_fit(1) WARNING: unable to calculate errors using current statistic: cstat The resulting plot is shown in Figure 2. Figure 2: Plot of fit to simulated source spectrum Next, we examine the quality of the fit with the confidence command (conf), and print the fit and confidence results with show_fit and get_conf_results, respectively. sherpa> conf() abs1.nH lower bound: -0.0514072 abs1.nH upper bound: 0.0526572 m1.gamma lower bound: -0.0713268 m1.ampl lower bound: -2.11601e-05 m1.gamma upper bound: 0.0725769 m1.ampl upper bound: 2.32414e-05 Dataset = 1 Confidence Method = confidence Iterative Fit Method = None Fitting Method = neldermead Statistic = cstat confidence 1-sigma (68.2689%) bounds: Param Best-Fit Lower Bound Upper Bound ----- -------- ----------- ----------- abs1.nH 0.175512 -0.0514072 0.0526572 m1.gamma 1.99289 -0.0713268 0.0725769 m1.ampl 0.0002537 -2.11601e-05 2.32414e-05 sherpa> show_fit() Optimization Method: NelderMead name = simplex ftol = 1.1920928955078125e-07 maxfev = None initsimplex = 0 finalsimplex = 9 step = None iquad = 1 verbose = 0 reflect = True Statistic: CStat Maximum likelihood function (XSPEC style). This is equivalent to the XSpec implementation of the Cash statistic [1]_ except that it requires a model to be fit to the background. To handle the background in the same manner as XSpec, use the WStat statistic. Counts are sampled from the Poisson distribution, and so the best way to assess the quality of model fits is to use the product of individual Poisson probabilities computed in each bin i, or the likelihood L: L = (product)_i [ M(i)^D(i)/D(i)! ] * exp[-M(i)] where M(i) = S(i) + B(i) is the sum of source and background model amplitudes, and D(i) is the number of observed counts, in bin i. The cstat statistic is derived by (1) taking the logarithm of the likelihood function, (2) changing its sign, (3) dropping the factorial term (which remains constant during fits to the same dataset), (4) adding an extra data-dependent term (this is what makes it different to `Cash`, and (5) multiplying by two: C = 2 * (sum)_i [ M(i) - D(i) + D(i)*[log D(i) - log M(i)] ] The factor of two exists so that the change in the cstat statistic from one model fit to the next, (Delta)C, is distributed approximately as (Delta)chi-square when the number of counts in each bin is high. One can then in principle use (Delta)C instead of (Delta)chi-square in certain model comparison tests. However, unlike chi-square, the cstat statistic may be used regardless of the number of counts in each bin. The inclusion of the data term in the expression means that, unlike the Cash statistic, one can assign an approximate goodness-of-fit measure to a given value of the cstat statistic, i.e. the observed statistic, divided by the number of degrees of freedom, should be of order 1 for good fits. The background should not be subtracted from the data when this statistic is used. It should be modeled simultaneously with the The cstat statistic function evaluates the logarithm of each data point. If the number of counts is zero or negative, it's not possible to take the log of that number. The behavior in this case is controlled by the `truncate` and `trunc_value` settings in the .sherpa.rc file: - if `truncate` is `True` (the default value), then `log(trunc_value)` is used whenever the data value is <= 0. The default is `trunc_value=1.0e-25`. - when `truncate` is `False` an error is raised. .. [1] The description of the Cash statistic (`cstat`) in Fit:Dataset = 1 Method = neldermead Statistic = cstat Initial fit statistic = 492.451 Final fit statistic = 491.728 at function evaluation 454 Data points = 446 Degrees of freedom = 443 Probability [Q-value] = 0.0544883 Reduced statistic = 1.11 Change in statistic = 0.722452 abs1.nH 0.175512 m1.gamma 1.99289 m1.ampl 0.0002537 sherpa> print(get_conf_results()) datasets = (1,) methodname = confidence iterfitname = none fitname = neldermead statname = cstat sigma = 1 percent = 68.26894921370858 parnames = ('abs1.nH', 'm1.gamma', 'm1.ampl') parvals = (0.17551179106542386, 1.992888391856171, 0.00025370000089507306) parmins = (-0.051407179430835614, -0.07132679323601243, -2.116007766115048e-05) parmaxes = (0.052657189892153566, 0.07257691202141747, 2.324139677536204e-05) nfits = 76 Note that the Cstat statistic is appropriate for fitting low-counts data, but it does not calculate errors for the data points. We can group the data so that each bin contains a specified minimum number of counts, and then change the fit statistic to something more suitable to calculate the errors. Finally, we can view the results of the new fit with the plot_fit_delchi command: The new fit to the grouped simulated spectrum, along with the residuals divided by the uncertainties, is shown in Figure 3. Figure 3: Plot of fit to simulated source spectrum, with residuals The plot may be saved as a postscript file with the Matplotlib savefig command: sherpa> plt.savefig('simulation_fit.ps') Including a Background Component In this section, we repeat the steps above to simulate a source PHA data set, but this time, including a background component. This involves adding new Sherpa commands along the way to define settings for the background data. Defining the Instrument Response As before, we begin by establishing the instrument response corresponding to the default pointing of the ACIS-S detector, for both a source and background component: sherpa> arf1 = unpack_arfunpack_arf("aciss_aimpt_cy26.arf") sherpa> rmf1 = unpack_rmfunpack_rmf("aciss_aimpt_cy26.rmf") sherpa> bkg1_arf = arf1 sherpa> bkg1_rmf = rmf1 The source ARF and RMF data are loaded and assigned to a set of variables with the unpack_* commands. These variables will be used to assign the instrument response to both the source and background components of the faked data set we will create in the next section, "Defining Source Model Expressions → with a background component". If the background response is different than the source response, we load the appropriate background ARF and RMF files accordingly: sherpa> bkg1_rmf = unpack_rmf("background.rmf") # separate background response sherpa> bkg1_arf = unpack_arf("background.arf") Defining Source and Background Model Expressions We define both the source and background model expressions for our simulation with set_source command, as follows: sherpa> clean() # clear models sherpa> set_source(1, xsphabs.abs1 * powlaw1d.m1) sherpa> m1.gamma = 2 sherpa> abs1.nh = 0.2 sherpa> set_source("faked_bkg", polynom1d.bkgA) sherpa> bkgA.c0 = 1. For the source simulation, we use an absorbed 1-D power-law model with a Galactic neutral hydrogen column density of 2×10^21 cm^-2 and a photon index of 2. For the background simulation, we assume a flat profile with a 1-D polynomial function. Running the Simulation with fake_pha Here we run an additional fake_pha simulation for the background data set: sherpa> fake_pha(1, arf1, rmf1, exposure=50000, grouped=False, backscal=1.0) sherpa> fake_pha("faked_bkg", bkg1_arf, bkg1_rmf, exposure=50000, grouped=False, backscal=1.0) These commands associate the data IDs 1 and "faked_bkg" with simulated source and background data sets, respectively, based on the assumed exposure times, instrument responses, and model expressions defined earlier; Poisson noise is added to the modeled data. Now, we assign the "faked_bkg" data set as the background component of the faked source data set 1, using the set_bkg command. sherpa> set_bkg(1, get_data("faked_bkg")) We may inspect some basic properties of the new simulated data sets with the show_data command: sherpa> show_data() Data Set: 1 Filter: 0.0073-14.9504 Energy (keV) Bkg Scale: 1 Noticed Channels: 1-1024 name = faked channel = Int64[1024] counts = Float64[1024] staterror = None syserror = None bin_lo = None bin_hi = None grouping = None quality = None exposure = 50000 backscal = 1.0 areascal = None grouped = False subtracted = False units = energy rate = True plot_fac = 0 response_ids = [1] background_ids = [1] RMF Data Set: 1:1 name = aciss_aimpt_cy26.rmf energ_lo = Float64[1024] energ_hi = Float64[1024] n_grp = UInt64[1024] f_chan = UInt32[1504] n_chan = UInt32[1504] matrix = Float64[387227] e_min = Float64[1024] e_max = Float64[1024] detchans = 1024 offset = 1 ethresh = 1e-10 ARF Data Set: 1:1 name = aciss_aimpt_cy26.arf energ_lo = Float64[1024] energ_hi = Float64[1024] specresp = Float64[1024] bin_lo = None bin_hi = None exposure = 8037.3177424371 ethresh = 1e-10 Background Data Set: 1:1 Filter: 0.0073-14.9504 Energy (keV) Noticed Channels: 1-1024 name = faked channel = Int64[1024] counts = Float64[1024] staterror = None syserror = None bin_lo = None bin_hi = None grouping = None quality = None exposure = 50000 backscal = 1.0 areascal = None grouped = False subtracted = False units = energy rate = True plot_fac = 0 response_ids = [1] background_ids = [] Background RMF Data Set: 1:1 name = aciss_aimpt_cy26.rmf energ_lo = Float64[1024] energ_hi = Float64[1024] n_grp = UInt64[1024] f_chan = UInt32[1504] n_chan = UInt32[1504] matrix = Float64[387227] e_min = Float64[1024] e_max = Float64[1024] detchans = 1024 offset = 1 ethresh = 1e-10 Background ARF Data Set: 1:1 name = aciss_aimpt_cy26.arf energ_lo = Float64[1024] energ_hi = Float64[1024] specresp = Float64[1024] bin_lo = None bin_hi = None exposure = 8037.3177424371 ethresh = 1e-10 Data Set: faked_bkg Filter: 0.0073-14.9504 Energy (keV) Noticed Channels: 1-1024 name = faked channel = Int64[1024] counts = Float64[1024] staterror = None syserror = None bin_lo = None bin_hi = None grouping = None quality = None exposure = 50000 backscal = 1.0 areascal = None grouped = False subtracted = False units = energy rate = True plot_fac = 0 response_ids = [1] background_ids = [] RMF Data Set: faked_bkg:1 name = aciss_aimpt_cy26.rmf energ_lo = Float64[1024] energ_hi = Float64[1024] n_grp = UInt64[1024] f_chan = UInt32[1504] n_chan = UInt32[1504] matrix = Float64[387227] e_min = Float64[1024] e_max = Float64[1024] detchans = 1024 offset = 1 ethresh = 1e-10 ARF Data Set: faked_bkg:1 name = aciss_aimpt_cy26.arf energ_lo = Float64[1024] energ_hi = Float64[1024] specresp = Float64[1024] bin_lo = None bin_hi = None exposure = 8037.3177424371 ethresh = 1e-10 In the next section, we will correct the normalization of the simulated source and background data sets. Defining the Model Normalization for the Simulation We determine the normalization for the background data set in the same way as with the source data set, except we use a measure of total counts instead of flux to specify that we want 200 counts in the background simulation: sherpa> my_flux = 1.e-12 sherpa> norm = my_flux / calc_energy_flux(0.2, 10, id=1) sherpa> print(norm) sherpa> bkg_counts = 200 sherpa> bkg_norm = bkg_counts / calc_data_sum(0.2, 10., id="faked_bkg") sherpa> print(bkg_norm) Now we apply the calculated values to the amplitude parameters of each model, and re-evaluate the simulated data sets with the desired normalization using fake_pha: sherpa> set_par(m1.ampl, norm) sherpa> set_par(bkgA.c0, bkg_norm) sherpa> show_model() Model: 1 apply_rmf(apply_arf((50000.0 * (xsphabs.abs1 * powlaw1d.m1)))) Param Type Value Min Max Units ----- ---- ----- --- --- ----- abs1.nH thawed 0.2 0 1e+06 10^22 atoms / cm^2 m1.gamma thawed 2 -10 10 m1.ref frozen 1 -3.40282e+38 3.40282e+38 m1.ampl thawed 0.000256776 0 3.40282e+38 Model: faked_bkg apply_rmf(apply_arf((50000.0 * polynom1d.bkgA))) Param Type Value Min Max Units ----- ---- ----- --- --- ----- bkgA.c0 thawed 2.09494e-06 -3.40282e+38 3.40282e+38 bkgA.c1 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c2 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c3 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c4 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c5 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c6 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c7 frozen 0 -3.40282e+38 3.40282e+38 bkgA.c8 frozen 0 -3.40282e+38 3.40282e+38 bkgA.offset frozen 0 -3.40282e+38 3.40282e+38 sherpa> fake_pha(1,arf1,rmf1,exposure=50000,backscal=1.0) sherpa> fake_pha("faked_bkg",bkg1_arf,bkg1_rmf,50000,backscal=1.0) Finally, we re-assign background data set "faked_bkg" as the background component of source data set 1 with set_bkg, to produce a single source-plus-background simulated data set: sherpa> set_bkg(1, get_data("faked_bkg")) sherpa> prefs = get_data_plot_prefs() sherpa> prefs["yerrorbars"] = False # remove y-error bars from plot sherpa> plot_data(1) The resulting plot is shown in Figure 4. Figure 4: Plot of simulated source-plus-background spectrum Fitting the Simulated Data The simulated source-plus-background data set (1) is filtered to include only the counts in the energy range 0.5 keV–7.0 keV (recalling that data set 1 now contains the background information stored in data set "faked_bkg"; "faked_bkg" is no longer needed in the context of this thread). sherpa> notice(0.5, 7) dataset 1: 0.0073:14.9504 -> 0.4964:7.008 Energy (keV) dataset faked_bkg: 0.4964:7.008 Energy (keV) (unchanged) sherpa> show_filter(1) Data Set Filter: 1 0.4964-7.0080 Energy (keV) Data Set Filter: faked_bkg 0.4964-7.0080 Energy (keV) Next, we fit the simulated source data with the source model expression we used to create it, and use the set_bkg_model command to incorporate the background model into the fit: sherpa> set_bkg_model(1, bkgA, 1) # set model for bkg_id=1 of data set id=1 sherpa> set_method("neldermead") sherpa> set_stat("cstat") sherpa> fit(1) Dataset = 1 Method = neldermead Statistic = cstat Initial fit statistic = 970.474 Final fit statistic = 936.078 at function evaluation 4086 Data points = 892 Degrees of freedom = 888 Probability [Q-value] = 0.127845 Reduced statistic = 1.05414 Change in statistic = 34.3952 abs1.nH 0.292181 m1.gamma 2.22134 m1.ampl 0.000294912 bkgA.c0 1.89788e-06 sherpa> plot_fit(1) WARNING: unable to calculate errors using current statistic: cstat The resulting plot is shown in Figure 5. Figure 5: Plot of fit to simulated source-plus-background spectrum Now we can examine the quality of the fit with the confidence command (conf), and return the fit and confidence results with show_fit and get_conf_results, respectively. sherpa> conf(1) abs1.nH lower bound: -0.0610468 abs1.nH upper bound: 0.0622968 m1.gamma lower bound: -0.0865563 bkgA.c0 lower bound: -1.40684e-07 m1.ampl lower bound: -2.85371e-05 m1.gamma upper bound: 0.0884315 m1.ampl upper bound: 3.20748e-05 bkgA.c0 upper bound: 1.47437e-07 Dataset = 1 Confidence Method = confidence Iterative Fit Method = None Fitting Method = neldermead Statistic = cstat confidence 1-sigma (68.2689%) bounds: Param Best-Fit Lower Bound Upper Bound ----- -------- ----------- ----------- abs1.nH 0.292181 -0.0610468 0.0622968 m1.gamma 2.22134 -0.0865563 0.0884315 m1.ampl 0.000294912 -2.85371e-05 3.20748e-05 bkgA.c0 1.89788e-06 -1.40684e-07 1.47437e-07 sherpa> print(show_fit()) Optimization Method: NelderMead name = simplex ftol = 1.1920928955078125e-07 maxfev = None initsimplex = 0 finalsimplex = 9 step = None iquad = 1 verbose = 0 reflect = True Statistic: CStat Maximum likelihood function (XSPEC style). This is equivalent to the XSpec implementation of the Cash statistic [1]_ except that it requires a model to be fit to the background. To handle the background in the same manner as XSpec, use the WStat statistic. Counts are sampled from the Poisson distribution, and so the best way to assess the quality of model fits is to use the product of individual Poisson probabilities computed in each bin i, or the likelihood L: L = (product)_i [ M(i)^D(i)/D(i)! ] * exp[-M(i)] where M(i) = S(i) + B(i) is the sum of source and background model amplitudes, and D(i) is the number of observed counts, in bin i. The cstat statistic is derived by (1) taking the logarithm of the likelihood function, (2) changing its sign, (3) dropping the factorial term (which remains constant during fits to the same dataset), (4) adding an extra data-dependent term (this is what makes it different to `Cash`, and (5) multiplying by two: C = 2 * (sum)_i [ M(i) - D(i) + D(i)*[log D(i) - log M(i)] ] The factor of two exists so that the change in the cstat statistic from one model fit to the next, (Delta)C, is distributed approximately as (Delta)chi-square when the number of counts in each bin is high. One can then in principle use (Delta)C instead of (Delta)chi-square in certain model comparison tests. However, unlike chi-square, the cstat statistic may be used regardless of the number of counts in each bin. The inclusion of the data term in the expression means that, unlike the Cash statistic, one can assign an approximate goodness-of-fit measure to a given value of the cstat statistic, i.e. the observed statistic, divided by the number of degrees of freedom, should be of order 1 for good fits. The background should not be subtracted from the data when this statistic is used. It should be modeled simultaneously with the The cstat statistic function evaluates the logarithm of each data point. If the number of counts is zero or negative, it's not possible to take the log of that number. The behavior in this case is controlled by the `truncate` and `trunc_value` settings in the .sherpa.rc file: - if `truncate` is `True` (the default value), then `log(trunc_value)` is used whenever the data value is <= 0. The default is `trunc_value=1.0e-25`. - when `truncate` is `False` an error is raised. .. [1] The description of the Cash statistic (`cstat`) in Fit:Dataset = 1 Method = neldermead Statistic = cstat Initial fit statistic = 970.474 Final fit statistic = 936.078 at function evaluation 4086 Data points = 892 Degrees of freedom = 888 Probability [Q-value] = 0.127845 Reduced statistic = 1.05414 Change in statistic = 34.3952 abs1.nH 0.292181 m1.gamma 2.22134 m1.ampl 0.000294912 bkgA.c0 1.89788e-06 sherpa> print(get_conf_results()) datasets = (1,) methodname = confidence iterfitname = none fitname = neldermead statname = cstat sigma = 1 percent = 68.26894921370858 parnames = ('abs1.nH', 'm1.gamma', 'm1.ampl', 'bkgA.c0') parvals = (0.2921809518122581, 2.221339368720717, 0.0002949123058199108, 1.8978807319578246e-06) parmins = (-0.061046777329363644, -0.08655625785136545, -2.8537149726914055e-05, -1.4068389358120997e-07) parmaxes = (0.06229679474470551, 0.08843145905278726, 3.2074812916200925e-05, 1.4743672047310842e-07) nfits = 114 Since the Cstat fit statistic does not calculate errors for the data points, we group the data and change the fit statistic to chi2xspecvar to do so. Finally, we view the results of the new fit with the plot_fit_delchi command: The new fit to the grouped simulated source-plus-background spectrum, along with the residuals divided by the uncertainties, is shown in Figure 6. Figure 6: Plot of fit to simulated source-plus-background spectrum, with residuals The plot may now be saved as a PostScript file: sherpa> plt.savefig('simulation_fit_w_bkg.ps') Scripting It The file fit.py is a Python script which performs the primary commands used above; it can be executed by typing %run -i fit.py on the Sherpa command line. The Sherpa script command may be used to save everything typed on the command line in a Sherpa session: sherpa> script(filename="sherpa.log", clobber=False) (Note that restoring a Sherpa session from such a file could be problematic since it may include syntax errors, unwanted fitting trials, et cetera.) 02 Feb 2009 created for CIAO/Sherpa 4.1 29 Apr 2009 new script command is available with CIAO 4.1.2 12 Jan 2010 updated for CIAO 4.2 13 Jul 2010 updated for CIAO 4.2 Sherpa v2: removal of S-Lang version of thread. 15 Dec 2011 reviewed for CIAO 4.4: a work-around for a save_pha bug was added; response files used in examples updated for Chandra proposal cycle 14 13 Dec 2012 updated for CIAO 4.5: group commands no longer clear the existing data filter 04 Dec 2013 reviewed for CIAO 4.6: no changes 30 Jan 2015 updated for CY 17/CIAO 4.7 15 Dec 2015 updated for CY 18/CIAO 4.8 06 Dec 2016 updated for CY 19/CIAO 4.9/Python 3 23 Apr 2018 updated for CY 20/CIAO 4.10 13 Dec 2018 updated for CY 21/CIAO 4.11 11 Dec 2019 Updated for CIAO 4.12 by replacing print_window calls with the equivalent Matplotlib command plt.savefig. There have been no updates for the cycle 22 responses. 09 Mar 2020 changed reference of dataset ID faked to 1 for clarity. 17 Mar 2022 updated for CY 24/CIAO 4.14, updated figures with Matplotlib plots. 12 Dec 2022 updated for CY 25/CIAO 4.15, no content change. 14 Dec 2023 updated for CY 26/CIAO 4.16, no content change.
{"url":"https://cxc.cfa.harvard.edu/sherpa/threads/aciss_sim/index.html","timestamp":"2024-11-10T09:17:47Z","content_type":"text/html","content_length":"65741","record_id":"<urn:uuid:37737a93-7144-4c9f-9091-d7a68a442a75>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00624.warc.gz"}
virtual reality Comments on: Bibeau-Delisle, A., & Brassard FRS, G. (2021). Probability and consequences of living inside a computer simulation. Proceedings of the Royal Society A, 477(2247), 20200658. 1. What is Computation? it is the manipulation of arbitrarily shaped formal symbols in accordance with symbol-manipulation rules, algorithms, that operate only on the (arbitrary) shape of the symbols, not their meaning. 2. Interpretatabililty. The only computations of interest, though, are the ones that can be given a coherent interpretation. 3. Hardware-Independence. The hardware that executes the computation is irrelevant. The symbol manipulations have to be executed physically, so there does have to be hardware that executes it, but the physics of the hardware is irrelevant to the interpretability of the software it is executing. Itâ s just symbol-manipulations. It could have been done with pencil and paper. 4. What is the Weak Church/Turing Thesis? That what mathematicians are doing is computation: formal symbol manipulation, executable by a Turing machine â finite-state hardware that can read, write, advance tape, change state or halt. 5. What is Simulation? It is computation that is interpretable as modelling properties of the real world: size, shape, movement, temperature, dynamics, etc. But itâ s still only computation: coherently interpretable manipulation of symbols 6. What is the Strong Church/Turing Thesis? That computation can simulate (i.e., model) just about anything in the world to as close an approximation as desired (if you can find the right algorithm). It is possible to simulate a real rocket as well as the physical environment of a real rocket. If the simulation is a close enough approximation to the properties of a real rocket and its environment, it can be manipulated computationally to design and test new, improved rocket designs. If the improved design works in the simulation, then it can be used as the blueprint for designing a real rocket that applies the new design in the real world, with real material, and it works. 7. What is Reality? It is the real world of objects we can see and measure. 8. What is Virtual Reality (VR)? Devices that can stimulate (fool) the human senses by transmitting the output of simulations of real objects to virtual-reality gloves and goggles. For example, VR can transmit the output of the simulation of an ice cube, melting, to gloves and goggles that make you feel you are seeing and feeling an ice cube. melting. But there is no ice-cube and no melting; just symbol manipulations interpretable as an ice-cube, melting. 9. What is Certainly Truee (rather than just highly probably true on all available evidence)? only what is provably true in formal mathematics. Provable means necessarily true, on pain of contradiction with formal premises (axioms). Everything else that is true is not provably true (hence not necessarily true), just probably true. 10. What is illusion? Whatever fools the senses. There is no way to be certain that what our senses and measuring instruments tell us is true (because it cannot be proved formally to be necessarily true, on pain of contradiction). But almost-certain on all the evidence is good enough, for both ordinary life and science. 11. Being a Figment? To understand the difference between a sensory illusion and reality is perhaps the most basic insight that anyone can have: the difference between what I see and what is really there. â What I am seeing could be a figment of my imagination.â But to imagine that what is really there could be a computer simulation of which I myself am a part (i.e., symbols manipulated by computer hardware, symbols that are interpretable as the reality I am seeing, as if I were in a VR) is to imagine that the figment could be the reality â which is simply incoherent, circular, self-referential nonsense. 12. Hermeneutics. Those who think this way have become lost in the â hermeneutic hall of mirrors,â mistaking symbols that are interpretable (by their real minds and real senses) as reflections of themselves — as being their real selves; mistaking the simulated ice-cube, for a â realâ ice-cube. Appearance and Reality Re: https://www.nytimes.com/interactive/2021/12/13/magazine/david-j-chalmers-interview.html 1. Computation is just the manipulation of arbitrary formal symbols, according to rules (algorithms) applied to the symbolsâ shapes, not their interpretations (if any). 2. The symbol-manipulations have to be done by some sort of physical hardware, but the physical composition of the hardware is irrelevant, as long as it executes the right symbol manipulation rules. 3. Although the symbols need not be interpretable as meaning anything â there can be a Turing Machine that executes a program that is absolutely meaningless, like Hesseâ s â Glass Bead Gameâ â but computationalists are mostly interested in interpretable algorithms that do can be given a coherent systematic interpretation by the user. 4. The Weak Church/Turing Thesis is that computation (symbol manipulation, like a Turing Machine) is what mathematicians do: symbol manipulations that are systematically interpretable as the truths and proofs of mathematics. 5. The Strong Church/Turing Thesis (SCTT) is that almost everything in the universe can be simulated (modelled) computationally. 6. A computational simulation is the execution of symbol-manipulations by hardware in which the symbols and manipulations are systematically interpretable by users as the properties of a real object in the real world (e.g., the simulation of a pendulum or an atom or a neuron or our solar system). 7. Computation can simulate only â almostâ everything in the world, because — symbols and computations being digital — computer simulations of real-world objects can only be approximate. Computation is merely discrete and finite, hence it cannot encode every possible property of the real-world object. But the approximation can be tightened as closely as we wish, given enough hardware capacity and an accurate enough computational model. 8. One of the pieces of evidence for the truth of the SCTT is the fact that it is possible to connect the hardware that is doing the simulation of an object to another kind of hardware (not digital but â analogâ ), namely, Virtual Reality (VR) peripherals (e.g., real goggles and gloves) which are worn by real, biological human beings. 9. Hence the accuracy of a computational simulation of a coconut can be tested in two ways: (1) by systematically interpreting the symbols as the properties of a coconut and testing whether they correctly correspond to and predict the properties of a real coconut or (2) by connecting the computer simulation to a VR simulator in a pair of goggles and gloves, so that a real human being wearing them can manipulate the simulated coconut. 10. One could, of course, again on the basis of the SCTT, computationally simulate not only the coconut, but the goggles, the gloves, and the human user wearing them — but that would be just computer simulation and not VR! 11. And there we have arrived at the fundamental conflation (between computational simulation and VR) that is made by sci-fi enthusiasts (like the makers and viewers of Matrix and the like, and, apparently, David Chalmers). 12. Those who fall into this conflation have misunderstood the nature of computation (and the SCTT). 13. Nor have they understood the distinction between appearance and reality â the one thatâ s missed by those who, instead of just worrying that someone else might be a figment of their imagination, worry that they themselves might be a figment of someone elseâ s imagination. 14. Neither a computationally simulated coconut nor a VR coconot is a coconut, let alone a pumpkin in another world. 15. Computation is just semantically-interpretable symbol-manipulation (Searleâ s â squiggles and squigglesâ ); a symbolic oracle. The symbol manipulation can be done by a computer, and the interpretation can be done in a personâ s head — or it can be transmitted (causally linked) to dedicated (non-computational) hardware, such as a desk-calculator or a computer screen or to VR peripherals, allowing users’ brains to perceive them through their senses rather than just through their thoughts and language. 16. In the context of the Symbol Grounding Problem and Searleâ s Chinese-Room Argument against â Strong AI,â to conflate interpretable symbols with reality is to get lost in a hermeneutic hall of mirrors. (Thatâ s the locus of Chalmersâ s â Reality.â ) Exercise for the reader: Does Turing make the same conflation in implying that everything is a Turing Machine (rather than just that everything can be simulated symbolically by a Turing Machine)?
{"url":"https://generic.wordpress.soton.ac.uk/skywritings/tag/virtual-reality/","timestamp":"2024-11-14T15:31:18Z","content_type":"text/html","content_length":"126840","record_id":"<urn:uuid:a8b0f735-d937-42e7-9fbe-e1ed451d8fb0>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00121.warc.gz"}
Projection Mappings of Finite Topological Products Projection Mappings of Finite Topological Products Recall from the Finite Topological Products of Topological Spaces page that if $\{ X_1, X_2, ..., X_n \}$ is a finite collection of topological spaces then the resulting topological product $\ displaystyle{\prod_{i=1}^{n} X_i}$ has the topology whose basis is: \quad \mathcal B = \left \{ \prod_{i=1}^{n} U_i : U_i \: \mathrm{is \: open \: in \:} X_i, \: \forall i \in \{ 1, 2, ..., n \} \right \} We claimed that the product topology is the initial topology induced by the projection maps $\{ p_1, p_2, ..., p_n \}$. We prove that statement below. Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of topological spaces and let $\displaystyle{\prod_{i=1}^{n} X_i}$ denote the corresponding topological product. For each $j \in \{1, 2, ..., n \}$ and for all $\displaystyle{\mathbf{x} = (x_1, x_2, ..., x_n) \in \prod_{i=1}^{n} X_i}$ define the map $\displaystyle{p_j : \prod_{i=1}^{n} X_i \to X_j}$ by $p_j(\mathbf{x}) = x_j$. a) $p_j$ is surjective, open, and continuous for all $i \in \{1, 2, ..., n \}$. b) The product topology on $\displaystyle{\prod_{i=1}^{n} X_i}$ is the coarsest topology which makes all of the maps $\{ p_1, p_2, ..., p_n \}$ continuous. • Proof of a): We begin by showing that each $p_j$ is surjective. Let $j \in \{ 1, 2, ..., n \}$ and consider the map $\displaystyle{p_j : \prod_{i=1}^{n} \to X_j}$. Let $b \in X_j$. Then if $\ mathbf{a} = (x_1, x_2, ..., x_{j-1}, b, x_{j+1}, ..., x_n)$ then: \quad p_j(\mathbf{a}) = b • So each $p_j$ is surjective. • We now show that each $p_j$ is surjective by showing that the images of basis sets are open. Let $\displaystyle{U = \prod_{i=1}^{n} U_i}$ be any basis set in $\displaystyle{\prod_{i=1}^{n} X_i}$. Then $U_i$ is open in $X_i$ for all $i \in \{1, 2, ..., n \}$, and: \quad p_j(U) = U_j • So $p_j(U)$ is open in $X_j$. So each $p_j$ is open. • Lastly, we show that each $p_j$ is continuous. Let $U$ be any open set in $X_j$. Then we have that: \quad p_j^{-1}(U_j) = X_1 \times X_2 \times ... \times X_{j-1} \times U_j \times X_{j+1} \times ... \times X_n • But each $X_i$ is open in itself, and $U_j$ is open in $X_j$, so $p^{-1}(U_j)$ is open in $\displaystyle{\prod_{i=1}^{n} X_i}$ which shows that $p_j$ is continuous. $\blacksquare$ • Proof of b) Let $\tau$ denote the product topology on the finite product $\displaystyle{\prod_{i=1}^{n} X_i}$ and let $\tau'$ denote any other topology which makes all of the projection maps $\{ p_1, p_2, ..., p_n \}$ continuous. To show that the product topology $\tau$ is the one which makes all of these maps continuous, we must show that $\tau \subseteq \tau'$. • Let $U_j$ be an open set in $X_j$ for all $j \in \{1, 2, ..., n \}$. Then $\displaystyle{\prod_{i=1}^{n} U_i}$ is contained in $\tau$. Now the inverse image of $U_j$ under $p_j$ is: \quad p_j^{-1}(U_j) = X_1 \times X_2 \times ... \times X_{j-1} \times U_j \times X_{j+1} \times ... \times X_n • If $\tau'$ is to make each of the projection maps continuous, then $p_j^{-1}(U_j)$ must be open in $\displaystyle{\prod_{i=1}^{n} X_i}$. Moreover, arbitrary (in this case, finite) intersections of sets must be open in $\displaystyle{\prod_{i=1}^{n} X_i}$, and so the following set must be contained in $\tau'$. \quad \bigcap_{i=1}^{n} p_i^{-1}(U_i) = \prod_{i=1}^{n} U_i • Therefore $\tau \subseteq \tau'$ which shows that the product topology on $\displaystyle{\prod_{i=1}^{n} X_i}$ is the coarsest topology which makes the projection maps $\{ p_1, p_2, ..., p_n \}$ continuous. $\blacksquare$
{"url":"http://mathonline.wikidot.com/projection-mappings-of-finite-topological-products","timestamp":"2024-11-06T08:48:50Z","content_type":"application/xhtml+xml","content_length":"19545","record_id":"<urn:uuid:983392a4-2d13-4f87-b3c9-ac32aaf4fc50>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00433.warc.gz"}
The Dependence on the Monodromy Data of the Isomonodromic Tau Function Bertola, Marco (2010) The Dependence on the Monodromy Data of the Isomonodromic Tau Function. Communications in Mathematical Physics, 294 (2). pp. 539-579. ISSN 0010-3616 Text (application/pdf) bertola2010.pdf - Accepted Version Preview 421kB Official URL: http://dx.doi.org/10.1007/s00220-009-0961-7 The isomonodromic tau function defined by Jimbo-Miwa-Ueno vanishes on the Malgrange’s divisor of generalized monodromy data for which a vector bundle is nontrivial, or, which is the same, a certain Riemann–Hilbert problem has no solution. In their original work, Jimbo, Miwa, Ueno provided an algebraic construction of its derivatives with respect to isomonodromic times. However the dependence on the (generalized) monodromy data (i.e. monodromy representation and Stokes’ parameters) was not derived. We fill the gap by providing a (simpler and more general) description in which all the parameters of the problem (monodromy-changing and monodromy-preserving) are dealt with at the same level. We thus provide variational formulæ for the isomonodromic tau function with respect to the (generalized) monodromy data. The construction applies more generally: given any (sufficiently well-behaved) family of Riemann–Hilbert problems (RHP) where the jump matrices depend arbitrarily on deformation parameters, we can construct a one-form Ω (not necessarily closed) on the deformation space (Malgrange’s differential), defined off Malgrange’s divisor. We then introduce the notion of discrete Schlesinger transformation: it means that we allow the solution of the RHP to have poles (or zeros) at prescribed point(s). Even if Ω is not closed, its difference evaluated along the original solution and the transformed one, is shown to be the logarithmic differential (on the deformation space) of a function. As a function of the position of the points of the Schlesinger transformation, it yields a natural generalization of the Sato formula for the Baker–Akhiezer vector even in the absence of a tau function, and it realizes the solution of the RHP as such BA vector. Some exemplifications in the setting of the Painlevé II equation and finite Töplitz/Hankel determinants are provided. 1.Bertola M.: Moment determinants as isomonodromic tau functions. Nonlinearity 22(1), 29–50 (2009)» 2.Bertola M., Eynard B., Harnad J.: Semiclassical orthogonal polynomials, matrix models and isomonodromic tau functions. Commun. Math. Phys. 263(2), 401–437 (2006)» 3.Bertola M., Gekhtman M.: Biorthogonal Laurent polynomials, Töplitz determinants, minimal Toda orbits and isomonodromic tau functions. Constr. Approx. 26(3), 383–430 (2007)» 4.Bertola M.: Bilinear semiclassical moment functionals and their integral representation. J. Approx. Theory 121(1), 71–99 (2003)» 5.Boalch P.: Symplectic manifolds and isomonodromic deformations. Adv. Math. 163(2), 137–205 (2001)» 6.Its, A., Kapaev, A.: The Nonlinear Steepest Descent Approach to the Asymptotics of the Second Painlevé Transcendent in the Complex Domain. Volume 23 of Progr. Math. Phys. Boston, MA: Birkhäuser Boston, 2002 7.Jimbo M., Miwa T.: Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. II. Phys. D 2(3), 407–448 (1981)» 8.Jimbo M., Miwa T.: Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. III. Phys. D 4(1), 26–46 (1981)» 9.Jimbo M., Miwa T., Ueno K.: Monodromy preserving deformation of linear ordinary differential equations with rational coefficients. I. General theory and τ-function. Phys. D 2(2), 306–352 (1981)» 10.Magnus, A.P.: Painlevé-type differential equations for the recurrence coefficients of semi-classical orthogonal polynomials. In: Proceedings of the Fourth International Symposium on Orthogonal Polynomials and their Applications (Evian-Les-Bains, 1992), Volume 57, London: Elsevier, 1995, pp. 215–237 11.Malgrange B. (1983). Sur les déformations isomonodromiques. I. Singularités régulières. In: Mathematics and Physics (Paris, 1979/1982), Volume 37 of Progr. Math. Boston MA: Birkhäuser Boston, pp. 12.Marcellán, F., Rocha, I.A.: On semiclassical linear functionals: integral representations. In: Proceedings of the Fourth International Symposium on Orthogonal Polynomials and their Applications (Evian-Les-Bains, 1992), Volume 57, London: Elsevier, 1995, pp. 239–249 13.Marcellán F., Rocha I.A.: Complex path integral representation for semiclassical linear functionals. J. Approx. Theory 94(1), 107–127 (1998)» 14.Palmer J.: Zeros of the Jimbo, Miwa, Ueno tau function. J. Math. Phys. 40(12), 6638–6681 (1999)» 15.Schlesinger L.J.: Über die Lösungen gewisser Differentialgleichungen als Funktionen der singulären Punkte. J. Reine Angew. Math. 141, 96–145 (1912) 16.Sibuya, Y.: Linear Differential Equations in the Complex Domain: Problems of Analytic Continuation, Volume 82 of Translations of Mathematical Monographs. Providence, RI: Amer. Math. Soc., 1990, Translated from the Japanese by the author 17.Wasow, W.: Asymptotic Expansions for Ordinary Differential Equations. New York: Dover Publications Inc., 1987, reprint of the 1976 edition All items in Spectrum are protected by copyright, with all rights reserved. The use of items is governed by Spectrum's terms of access Repository Staff Only: item control page
{"url":"https://spectrum.library.concordia.ca/id/eprint/976933/","timestamp":"2024-11-08T18:03:56Z","content_type":"application/xhtml+xml","content_length":"64397","record_id":"<urn:uuid:f170765d-1696-47db-900e-329f89c84a56>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00304.warc.gz"}
Previous Mileage Rates - Health Sciences Association of Saskatchewan Previous Mileage Rates Effective October 1, 2023, mileage rates are as follows: For travel south of the 54th parallel, the new per kilometer Transportation Rate is $0.6210. For travel north of the 54th parallel, the new per kilometer Transportation Rate is $0.6710. Effective April 1, 2023, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.6110. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6610. Effective October 1, 2022, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.6010. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6510. Effective April 1, 2022, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5910. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6410. Effective October 1, 2021, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5810. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6310. Effective April 1, 2021, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5710. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6210. Effective October 1, 2020, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5610. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6110. Effective April 1, 2020, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5610. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6110. Effective October 1, 2019, mileage rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5510. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.6010. Effective April 1, 2019, mileage rates are as follows: Effective April 1, 2019, mileage rates remain unchanged from the October 1, 2018 rate and will be reviewed again in October of 2019. The rates are as follows: For travel south of the 54th parallel, the per kilometer Transportation Rate is $0.5410. For travel north of the 54th parallel, the per kilometer Transportation Rate is $0.5910. Effective October 1, 2018, mileage rates are as follows: For travel south of the 54th parallel, the new per kilometer Transportation Rate is $0.5410. For travel north of the 54th parallel, the new per kilometer Transportation Rate is $0.5910. Effective April 1, 2018, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.5310. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5810. Effective October 1, 2017, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.5210. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5710. Effective April 1, 2017, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.5210. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5710. Effective October 1, 2016, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.5110. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5610. Effective April 1, 2016, mileage rates are as follows: For travel South of the 54th parallel, the per kilometer Transportation Rate is $0.5010. For travel North of the 54th parallel, the per kilometer Transportation Rate is $0.5510. Effective October 1, 2015, mileage rates are as follows: For travel South of the 54th parallel, the per kilometer Transportation Rate is $0.5010. For travel North of the 54th parallel, the per kilometer Transportation Rate is $0.5510. Effective April 1, 2015, mileage rates are as follows: For travel South of the 54th parallel, the per kilometer Transportation Rate is $0.4910. For travel North of the 54th parallel, the per kilometer Transportation Rate is $0.5410. Effective October 1, 2014, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.4910. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5410. Effective April 1, 2014, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.4810. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5310. *Please note this represents no change since the last review, effective October 1, 2013. Effective October 1, 2013, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.4810. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5310. Effective April 1, 2013, mileage rates are as follows: For travel South of the 54th parallel, the new per kilometer Transportation Rate is $0.4710. For travel North of the 54th parallel, the new per kilometer Transportation Rate is $0.5210. *Please note this represents no change since the last review, effective October 1, 2012.
{"url":"https://www.hsas.ca/information-for-members/mileage-rates/previous-mileage-rates/","timestamp":"2024-11-09T00:21:40Z","content_type":"text/html","content_length":"49312","record_id":"<urn:uuid:f39b8f66-9de8-4b9e-aaee-9b0a9e0c87ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00312.warc.gz"}
Future value annuity examples Calculates a table of the future value and interest of periodic payments. Trying to solve for interest rate (to debate yay or nay on an annuity) if I need to pay The present value of $1 received t years from now is: PV = 1. (1+r)t . Example. (A) $10 M in 5 Example. An insurance company sells an annuity of $10,000 per. period, then the future value after years, or periods, will be. Payment Formula for a Sinking Fund. Suppose that an account has an annual rate of compounded Formula and Definition; FV of Annuity Illustrated; Solving for Other Variables in the FV Equation; Compounding 13 Nov 2014 The basic annuity formula in Excel for present value is =PV(RATE,NPER Example: if you were trying to figure out the present value of a future An example of the future value of an annuity formula would be an individual who decides to save by depositing $1000 into an account per year for 5 years. The first deposit would occur at the end of the first year. If a deposit was made immediately, then the future value of annuity due formula would be used. Future value of an annuity is a tool to help evaluate the cash value of an investment over time. Future value of an annuity is primarily used to measure how much that series of annuity payments would be worth at a specific date in the future when paired with a particular interest rate. The future value of an annuity is the total value of annuity payments at a specific point in the future. This can help you figure out how much your future payments will be worth, assuming that the rate of return and the periodic payment does not change. Definition and Explanation: An annuity is a series of periodic payments. Examples of annuities include regular deposits to a saving account, monthly car, mortgage, or insurance payments, and periodic payments to a person from a retirement fund. Although an annuity may vary in dollar amount, The future value of an annuity due is another expression of the time value of money, the money received today can be invested now that will grow over the period of time. One of the striking applications of the future value of an annuity due is in the calculation of the premium payments for a life insurance policy. The future value of an annuity due is higher than the future value of an ordinary annuity by the factor of one plus the periodic interest rate. Let us say you want to invest $1,000 each month for 5 years to accumulate enough money for an MBA program. There are sixty total payments in your annuity. Worked example 3: Future value annuities. At the end of each year for \(\text{4}\) years, Kobus deposits \(\text{R}\,\text{500}\) into an investment account. Studying this formula can help you understand how the present value of annuity works. For example, you'll find that the higher the interest rate, the lower the Example 2.2: Calculate the present value of an annuity-immediate of amount. $100 paid annually for 5 years at the rate of interest of 9% per annum using formula. An annuity is a series of equal cash flows, equally distributed over time. Examples of annuities abound: Mortgage payments, car loan payments, leases, rent HP 10b Calculator - Calculating the Present and Future Values of an Annuity that Increases at a Constant Rate at Example of calculating the present value. Becky looks up a formula for that. It's called the future value of an annuity, which is how much a stream of A dollars invested each year at r interest rate will be Future value of annuity is compounding of constant cash flow at a interest rate and particular time period. Annuity means constant cash flows. The future value of annuity due formula calculates the value at a future date. The use of the future value of annuity due formula in real situations is different than that of the present value for an annuity due. For example, suppose that an individual or company wants to buy an annuity from someone and the first payment is received today. An annuity is a series of equal cash flows, equally distributed over time. Examples of annuities abound: Mortgage payments, car loan payments, leases, rent HP 10b Calculator - Calculating the Present and Future Values of an Annuity that Increases at a Constant Rate at Example of calculating the present value. Becky looks up a formula for that. It's called the future value of an annuity, which is how much a stream of A dollars invested each year at r interest rate will be R = Rate per Period; N = Number of Periods. Examples of Future Value of Annuity Due Formula (With Excel Template). Let's take an example to understand the R is the fixed periodic payment. Examples. Example 1: Mr A deposited $700 at the end of each month of calendar year 20X1 31 Dec 2019 Therefore, the formula for the future value of an annuity due refers to the value on a specific future date of a series of periodic payments, where Future Value Annuity Example. Prepared by Pamela Peterson. Problem. Suppose you want to deposit an equal amount each year, starting in one year, in an Example. Auto loan requires payments of $300 per month for 3 years at a nominal annual rate of 9% compounded monthly. What is the present value of this loan For example, a car loan may be an annuity: In order to get the car, you are given a loan to buy the car. Calculate the future value of different types of annuities Free calculator to find the future value and display a growth chart of a present rate (I/Y), starting amount, and periodic deposit/annuity payment per period (PMT ). A good example for this kind of calculation is a savings account because the Accounting Applications. Accountants use present value calculations of an ordinary annuity in a number of applications. For example: Your company provides a Online Future Value Annuity calculator. You enter regular deposits, number of deposits, number of years and nominal interest rate Calculates a table of the future value and interest of periodic payments. Trying to solve for interest rate (to debate yay or nay on an annuity) if I need to pay Understanding the calculation of present value can help you set your retirement saving goals and compare different An Annuity Investment Example. Assume
{"url":"https://bestoptionsrfcrtx.netlify.app/souliere23901ziny/future-value-annuity-examples-nafa.html","timestamp":"2024-11-08T18:54:17Z","content_type":"text/html","content_length":"31794","record_id":"<urn:uuid:aeddeee7-73eb-42bd-8297-006236bef281>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00010.warc.gz"}
Matrix Models for Multi-Digit Multiplication | Math Guide A matrix model represents a multiplication equation as a grid, with the place-values of the factors along the rows and columns. Each cell in the grid contains a partial product, which is the result of multiplying the corresponding elements from the rows and columns. These partial products are then added together to find the overall product of the multiplication equation. Matrix models are a more advanced visual aid than area models, which also utilizes grids for multiplication equations; however in area models, each cell represents a single unit, while in matrix models, each cell represents multiple units. To use a matrix model for multiplication: 1. Draw a rectangle 2. Divide the rectangle into rows with the number of rows equaling the number of digits in the first number. For each row, write the value of the digit with the corresponding place value. 3. Further divide the rectangle into columns with the number of columns equaling the number of digits in the second number. For each column, write the value of the digit with the corresponding place 4. For each cell, multiply the value of the row with the value of the column. The result of that multiplication is a partial product. 5. Sum up the partial products to calculate the total product of the matrices. For example, let's consider the multiplication equation 24 x 36: 1. The first number is 24, which has 2 digits, so divide the rectangle into 2 rows. The tens digit in 24 is 2, so the value of the first row is 20. The ones digit is 4, so the value of the second row is 4. 2. The second number is 36, which has 2 digits, so divide the rectangle into 2 columns. The tens digit in 36 is 3, so the value of the first column is 30. The ones digit is 6, so the value of the second column is 6. 3. Determine the partial products by multiplying the values of the rows and columns. □ 20 x 30 = 600 □ 20 x 6 = 120 □ 4 x 30 = 120 □ 4 x 6 = 24 4. Add together each of the partial products to determine the total products □ 600 + 120 + 120 + 24 = 864
{"url":"https://www.forourschool.org/math-guides/matrix-model","timestamp":"2024-11-08T01:15:15Z","content_type":"text/html","content_length":"26295","record_id":"<urn:uuid:a16803b4-7120-4ed9-96c1-71d436df4b19>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00863.warc.gz"}
DiVeRSe (Diverse Verifier on Reasoning Step) 🟦 DiVeRSe (Diverse Verifier on Reasoning Step) 🟦 This article is rated medium Reading Time: 4 minutes Last updated on October 3, 2024 Overview of DiVeRSe (Diverse Verifier on Reasoning Step)^ What is DiVeRSe? DiVeRSe (Diverse Verifier on Reasoning Steps)^ is a method designed to enhance the reasoning abilities of Large Language Models (LLMs) by improving the way they handle multi-step problems. LLMs still struggle with complex tasks like arithmetic word problems. DiVeRSe tackles this by adding three major components: 1. Diverse Prompts: It generates varied prompts to encourage different reasoning paths for the same question. 2. Verifier: A model that checks the accuracy of reasoning paths and uses a weighted voting scheme to filter out incorrect answers. 3. Step-Aware Verification: This verifies each reasoning step independently, identifying where mistakes occur and improving the model's reasoning process step by step. How DiVeRSe Works 1. Diverse Prompts: DiVeRSe generates multiple reasoning paths by sampling from different prompts. DiVeRSe randomly selects $M_1$ different prompts for each question, and then sample $M_2$ reasoning paths for each prompt using sampling decoding. This way, you obtain $M = M1 × M2$ diverse reasoning paths for each question. 2. Voting Verifier: Once the model has generated several reasoning paths, the voting verifier comes into play. It evaluates each reasoning path, scoring how likely it is to be correct. This is done using a pre-trained model which takes into account both the question and the reasoning steps. The verifier guides a voting mechanism, weighting paths based on their probability of being correct rather than simply counting how many paths lead to a specific answer. 3. Step-Aware Verification: A major innovation of DiVeRSe is its step-aware verifier, which checks the correctness of each individual step in the reasoning chain. Often, some steps may be correct while others are wrong, leading to an incorrect final answer. DiVeRSe identifies these mistakes by labeling each step and comparing it to known correct reasoning patterns. This helps improve the overall reasoning process by pinpointing where the error occurs and correcting it. How to Use DiVeRSe DiVeRSe can be applied to a range of reasoning tasks, especially those that require step-by-step logic. Here’s how to use on a math problem. 1. Generate Diverse Reasoning Paths Sample multiple reasoning paths for a given question by generating different prompts. Q: Janet’s ducks lay 16 eggs per day. She eats 3 for breakfast every morning and uses 4 eggs for baking muffins. She sells the remaining eggs for $2 each. How much money does she make per day? Generated Reasoning Paths: [Sample 1] 16 - 3 = 13 eggs left, 13 - 4 = 9 eggs left. She sells 9 eggs for $2 each, so 9 * 2 = $18. [Sample 2] 16 - 3 = 13 eggs, 13 - 4 = 9 eggs, 9 eggs sold for $2 each, so $18. 2. Score Reasoning Paths Use the verifier to score each path based on its likelihood of being correct. - Path 1: 91.2% correct. - Path 2: 88.5% correct. 3. Step-Aware Verification Apply step-aware verification to check the correctness of individual reasoning steps. - Step 1: Correct subtraction (16 - 3 = 13). - Step 2: Correct subtraction (13 - 4 = 9). - Step 3: Correct multiplication (9 * 2 = 18). 4. Final Answer Use weighted voting to arrive at the final answer, selecting the most likely correct answer based on the verified reasoning paths. Final Answer: $18. Results of DiVeRSe DiVeRSe was evaluated on several reasoning tasks, including arithmetic reasoning (e.g., GSM8K, MultiArith), commonsense reasoning (e.g., CommonsenseQA), and inductive reasoning (e.g., CLUTRR). The method achieved state-of-the-art results on many of these benchmarks, outperforming previous approaches like self-consistency and greedy decoding. Task Previous SOTA Self-Consistency DiVeRSe GSM8K 74.4% 76.7% 82.3% AsDiv 81.9% 86.2% 88.7% MultiArith 99.3% 98.6% 99.8% SVAMP 86.6% 85.8% 87.0% SingleEq 79.5% 93.7% 94.9% CLUTRR 67.0% 35.6% 95.9% DiVeRSe offers a powerful method to enhance the reasoning abilities of large language models by leveraging diverse prompts, verifier-based scoring, and step-aware verification. This approach not only improves overall accuracy but also provides finer control over the reasoning process, allowing for more reliable and interpretable results. As LLMs continue to evolve, DiVeRSe represents a step forward in making these models more capable and trustworthy in complex reasoning tasks. Get AI Certified by Learn Prompting
{"url":"https://learnprompting.org/docs/advanced/ensembling/diverse_verifier_on_reasoning_step","timestamp":"2024-11-02T09:06:57Z","content_type":"text/html","content_length":"1053425","record_id":"<urn:uuid:bd576993-55fa-4244-beb1-dc7c6fc59639>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00048.warc.gz"}
Revision history If a is a list (or a tuple, or any iterable) of numbers or symbolic expressions, sum(a) gets you the sum of its elements. Here is an example with numbers. sage: sum([1, 2, 3, 4, 5]) Here is an example with symbolic expressions. sage: x, y, z = SR.var("x y z") sage: sum([x, y, z]) x + y + z Of course you could mix the two. sage: sum([x, y, z, 1, 2, 3]) x + y + z + 6 [DEL:If :DEL]a is a [DEL:list :DEL](or a tuple, or any iterable) of numbers or symbolic [DEL:expressions, :DEL]sum(a)[DEL: :DEL]gets you the sum of its [DEL:elements.:DEL] Here is an example with numbers. sage: sum([1, 2, 3, 4, 5]) Here is an example with symbolic expressions. sage: x, y, z = SR.var("x y z") sage: sum([x, y, z]) x + y + z Of course you could mix the two. sage: sum([x, y, z, 1, 2, 3]) x + y + z + 6
{"url":"https://ask.sagemath.org/answers/39164/revisions/","timestamp":"2024-11-08T02:41:52Z","content_type":"application/xhtml+xml","content_length":"17681","record_id":"<urn:uuid:79a696c8-3dc3-433b-9d09-903a8688d366>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00151.warc.gz"}
Excel Tutorials | Create your Automated Business Tracker Profit and Loss Spreadsheet | Learn To Excel - Josh | Skillshare Playback Speed • 0.5x • 1x (Normal) • 1.25x • 1.5x • 2x Excel Tutorials | Create your Automated Business Tracker Profit and Loss Spreadsheet Watch this class and thousands more Get unlimited access to every class Taught by industry leaders & working professionals Topics include illustration, design, photography, and more Watch this class and thousands more Get unlimited access to every class Taught by industry leaders & working professionals Topics include illustration, design, photography, and more Lesson 1: Categories Lesson 2: Targets Lesson 3: Transactions Lesson 4: Products Lesson 5: Main Dashboard • -- • Beginner level • Intermediate level • Advanced level • All levels Community Generated The level is determined by a majority opinion of students who have reviewed this class. The teacher's recommendation is shown until at least 5 student responses are collected. About This Class Hi, my name is Josh, and I’m here to teach you the skills to become much more proficient in using Excel. In this course I will be showing you how to create your very own Business Tracker Profit & Loss spreadsheet to elevate your business and track your growth. • See your income, expenses and profit and loss easily • Track your product inventory and have it update automatically • And finally customise to fit your brand and business But what’s more important are the skills learnt here can be transferred to so many different tasks within Excel. The course is split into 5 sections and you will have a workbook provided to allow you to easily follow along. I will be showing you how to: • Create dashboards that are dynamic and change automatically depending on the dates you enter! • You will improve some fundamental excel skills and learn functions such as using named ranges, conditional formatting, and tons of different formulas What I wanted to showcase here was how to use many of the excel functions that you have available into a truly useful product. After this course not only will you have an easy-to use, simple tool to track your business, but you will have also learnt skills that many advanced excel users are unaware of! Level: All Levels Hands-on Class Project The project for this course is to use the Business Tracker workbook provided for each section and follow alongside the videos. The course is split into 5 different sections which corresponds to the 5 different tabs in the workbook. I have also included some example transactions in a separate "Transaction.xlsx" spreadsheet that you will need for the chapter on transactions. Class Ratings Expectations Met? 0% Yes 0% Somewhat 0% Not really Why Join Skillshare? Take award-winning Skillshare Original Classes Each class has short lessons, hands-on projects Your membership supports Skillshare teachers 1. Introduction: Hi, my name is George, and I'm here to teach you the skills to become much more proficient in using Excel. In this course, I'll be showing you how to create your very own business tracker, Profit and Loss spreadsheet to elevate your business and track your growth. C or income, expenses and profit and loss. Easily. Track your product inventory and have it update automatically. Finally, customized to fit your brand and business. But what's more important are the skills learned hair can be transferred to so many different tasks within Excel. The course is split into five sections and you will have a workbook provided to allow you to easily follow along. I'll be showing you how to create dashboards that are dynamic and change automatically depending on the days you enter. You will improve some fundamental Excel skills and learn functions such as using named ranges, Conditional Formatting, and tons of different formulas. What have wanted to showcase hair was hard to use many of the Excel functions that you have available into a truly useful product. After this course, not only will you have an easy-to-use, simple tool to track your business? But you will have also learned skills that many advanced XR uses are unaware of. And now let's begin. 2. Lesson 1: Categories: In this first video, we'll be creating our categories to use within the rest of the workbook. We will build it so that you can add up to 20 different revenue types, expense types, transaction types, and products. The really cool thing you'll learn today is how to create dynamic named ranges for all your categories. The reason we will be using these categories as drop-down lists in other tabs. And no one wants to see loads of blanks in the drop-down list. The skills learned heck, be transferable to many different scenarios. So first, open your business tracker workbook and go to the categories tab. So let's begin. First. Let's create our revenue types. To make things easier. I have highlighted cells in orange where you should enter data for revenue types will start him before. Will add to revenue types, and we'll call them product sales and affiliate marketing. Next, let's add some expense types. Starting in S4 will enter transaction fees, advertising phase, product costs, and shipping fees. Moving onto transaction types will enter some common ones, such as income, expense, balance and transfer. Now moving onto the products section, let's add some dummy product names and categories. As a startup will enter, not applicable in both the product name and product category. Has not every transaction will be related to a product and we want that flexibility. Next, we'll enter sofa, table, bed, wardrobes, and desk as the product names in column F. And finally, we'll create the following categories in column G. Now, as mentioned before, at the end of this course, you'll be able to enter more or I meant the categories we have entered to make this useful for you. However, for the course insure you keep to the same ones I have entered. Now comes the interesting bit, how to create dynamic named ranges will begin by creating a normal named range. Let's highlight so before to B203 and naming this range of cells income by going to the address box in the top left corner and entering income. Now the reason we are creating these named ranges is too easy. Refer to them in other worksheets in drop-down lists. I'll show you an example of a dropdown list and the benefits of a dynamic named range. For those of you who want an introduction into named ranges, please check out my Beginners Course. I have a section on it, the cell K4, and I'll highlight it in a different color quickly just to make it more visible. Will then go to data and data validation under Allow, select list. And then the source type. And ten equals income. You'll notice that the drop-down box has a lot of blank cells, which isn't great and can cause problems. So now let's create a dynamic named range so that only the non-blank cells will sharp using the offset function. Let me show you how this function works first, the offset function returns a cell or range of cells that is a specified number of rows and columns from a cell or range of cells. Start typing equal offset. Open parenthesis. The first argument is the starting cell we want to reference let Center before as an example. Then the next argument is the number of rows up or down from the first cell. We will enter 0 as we want to start from very, for. The next argument is the number of columns. Again, we will enter 0 as we want to reference column b. We will then enter the height, which we will make dynamic by using the count a formula in column B. Count a will count all the cells that contain data. And hence, as we add new categories, this will change. So enter count a, open parenthesis, and highlight Column B, and then close parenthesis. And since we have a title that will be counted, so we will enter minus1 after this. And finally, for the width, you'll enter one as we only want one column to be returned. Then close parenthesis. As you can see, the only categories we enter are displayed. As we add or remove a category, you'll see what is displayed gets automatically updated. Now let's update our named range with this formula. First, go to formulas and then name manager. Select the income named range, and go to the formula box at the bottom. Here, you will use the Offset command to create a dynamic named range. So type equal offset, open parenthesis, categories, exclamation mark. Before, as we want to reference the worksheet. And we will put an absolute reference around this into 0 for the row and column arguments. And to count a, open parenthesis, categories exclamation mark, column B with an absolute reference, then close parenthesis and a minus one for the height argument. And finally, one for the width argument. As a shortcut, you could have copied the formula we created an m3 and just paste it that as well. And finally click the green tick. Now looking back at the list, we can see it only contains the categories we defined. And if we add more, the list updates automatically. Let's now create the remaining named ranges starting with expense. Go back to formula's named manager. And let's copy the formula we used for the income range to speed this up. Then select me. Type expense in the income formula. Change the reference to C4. And they count a range to see and hit OK. We'll do the same for transaction types. Will name this T types. And we'll change the reference to D4 and the count a range to D. The same for product name, will name this proud name. And we'll change the reference to F4. And they count a range to F. And finally, the product category will name this proud cat and will change the reference to G4. And they count a range to G. And there you have it. We've created our categories to be used throughout the workbook using dynamic named ranges and the offset function in Excel. I've also introduce you to other functions such as data validation lists. These are just a few that you'll be using throughout this course. Keep moving forward. And by the end of this, you'll have a really great business, profit and loss tracking spreadsheet. 3. Lesson 2: Targets: In this next video, we will be creating the targets that will link through to our categories that we have defined in the previous video and allow us to set a specific revenue or expense target per each category. We will be using if statements quite extensively here. So it introduces those who have not used it to a new function. First, can you open the business, track a workbook, and go to the targets tab? Before we start, I've prepopulated some formulas as shown here. These will simply add up any data we enter to sum up the income and expense targets for each period. So let's begin by adding our categories. We want this to be dynamic and update from the categories tab. But we've done the hard work already and can refer to one of our named ranges we've created. So let's begin by going to sell BY 13 and type equal income. You can see that the income categories we've defined are now being displayed. Like in the previous videos. I have highlighted any cells in orange as a cell where the user enters free text or numbers. Let us say for product sales we have a target of 1000 per month. So we'll write a 1000 in d 13. And for affiliate marketing, we have a target of a 100 per month. So enter a 100 in d 14 will then use this to calculate the weekly, quarterly, yearly, and full history targets. First, let's use an if statement to calculate the weekly target, which is the monthly figure, divided by 4.3. As there are 4.3 weeks in a month. Go to c 13 and enter equals, if open parentheses. If d 13 equals blank, defined by these quotation marks, then blank to find by the quotation marks again. Otherwise d 13 divided by 4.3. And drag that down. Next, the quarterly. Go to E 13 and enter equal if open parenthesis, the 13 equals blank. Then blank. Otherwise d 13 times three. And the yearly go to F 13 and enter equal f, open parenthesis, d 13 equals blank, then blank. Otherwise d 13 times 12. And finally, the full history. Now this will be a more complicated formula. As your full history is not a constant. It could be three months or three years. So we need to be a bit clever hair to calculate the target for the same time period. We will need to find the number of days you have been trading. Divide that by 365, and then multiply that by your yearly income target. I'll walk you through that now. Start the same as before. Go to g 13 and enter equal f, open parenthesis, d 13 equals blank, then blank. Now in order to find the number of days you've been trading, we can use the min and max function on your transaction history. Type, max, open parenthesis, and then click on the transactions we're actually and reference column B. And then hit F4 once to put an absolute reference around this, so that when we copy this formula, it will still reference column B. Next, type subtract, and then MIN open parenthesis. And again reference column B and press F4 once to put an absolute reference. Put these formulas in brackets. Then divide by 365, and put these formulas in brackets again. And finally multiply by F 13 on the targets worksheet and close parenthesis. And just drag down there you have it. Once you populate the transactions data, your full history targets will update automatically. Let us now do the same for the expense categories. Type equal expense in IEEE 13 to reference the expense named range we have created in the previous video. Next, enter the monthly expense targets. We'll put some dummy numbers and make sure they are negative. So put minus 50 per month for transactions phase, minus 200 per month for advertising, phase, minus 500 per month for product, phase minus 100 per month for shipping fees. Now to enter the formulas, we can just copy the formulas we did for income. Select C 13 TC 30 to copy the cells and paste them in J 13. Now do the same for the quarterly, yearly and full history formulas. You can see for the full history, we're still referencing column B. If we didn't put an absolute reference and copy this across the columns would have changed. My beginner's course also covers absolute and relative references. So please check that out if you need an introduction. And finally, let's populate the annual revenue target and annual profit target. For annual revenue target go to F4 and type equals F 12. For the annual profit target, go to L4, type equal F2 plus m2. And that was the targets actually completed. In this video, you've seen how effective if statements can be to complete various tasks. Have also used some relatively complex formulas, especially when trying to calculate the full history targets. In the next session, we'll be going through the transactions were actually which we briefly touched on today. 4. Lesson 3: Transactions: Enter. If open parenthesis E5 equals blank with the quotation marks, then plank. Otherwise, some open parenthesis. Select K4, then I5, then minus J5. And close parenthesis. Copy this down to row a 100. This will sum up all your incoming and outgoing transactions to provide a total balance amount. Now instead of you having to enter transactions manually, I've created some dummy transactions that you can copy and paste values into this worksheet. For the transactions files saved in the project, and copy and paste values. All the transactions like psi. Just remember, do not paste over the product category or total balance formulas. And there you have it. In this video, we covered some more complex uses of data validation. If statements and V lookups. You also now have a working transaction C, that will be the main driver to the upcoming worksheets. In the next video, we will create our product inventory dashboard. 5. Lesson 4: Products: In this video, we will create your product inventory dashboard, will build it so that the worksheet will automatically populate as you add transactions. You don't need to worry about manually keeping track of your product list. It will be driven directly from your transactions. First opened the business track a workbook, and go to the product tab. In this worksheet will be using named ranges, SUMIFS, and is error functions to ensure the formulas cater for any scenario. And you may have noticed that the cells are highlighted in grey. This is following the same format as previous videos. Gray cells indicate that you'll enter a formula here. So let's begin with product name. Like in previous videos, we can pull through all the unique product names by referencing the named range. Go to B5 and enter equal proud name. For the product categories, we will enter if VLookup formula combination k2, C5, and enter equal f, open parenthesis. B5 is blank. Then blank. Otherwise VLookup, open parenthesis B5 for the lookup value, which is the product name, and categories, FTG for the table array. And pull three column two, and then go for an exact match. And then to close parenthesis. Now, drag this formula to the bottom. For the following columns, we will initially create a simpler version which will calculate figures based on the full transaction history. But in the next video, we'll make sure it is even more dynamic and get these figures to update based on the time period we select. For. Now, let's first calculate the total purchased. Go to D5 and enter equal f, open parenthesis. B5 is blank. Then blank. Otherwise we uses some formula which allows us to sum up data based on specific criteria. So let's enter the sum range as the Quantity column on the transactions worksheet. And the first criteria range is the product name column. And the criteria will be the product name on the product. We're actually the next criteria is the transaction type column on the transactions worksheet. And the criteria we will hard-code in the formula as expense. Remember to put in the double quotation marks. Now let's put some absolute and mix references on the cells to ensure they are locked in the correct place when we track or copy the formulas. Please refer to my lesson on cell references. If this is going too fast. And let's drag this down to the bottom. For the total products sold will copy the formula across As we only need to make one small change. Let's change expense to income as any cells will come in as income. And let's drag that down. For inventory held is simply the difference between your turtle purchased and your turtle sold. So CO2, F5, and enter equal f, open parenthesis. B5 is blank. Then blank. Otherwise, D5, total purchases minus E5, your turtle sold. And finally, for the products sold rank, we can use the rank function in Excel here. But one issue with the rank function is it doesn't cater for scenarios with the numbers are the same. So there are duplicates. Let me first show you how the rank function works. We'll go to cell G5 and into equal rank, open parenthesis. The first argument is the number we are trying to rank, which is the total sold for the first product. So select E5. The second argument is arranged. So select E5 to E3, E4, and put in absolute reference by hitting F4. The third argument is the order. So we want the largest number have rank one. So enter 0 here and drag this down to G ten. Can you see that the sofa and wardrobe had the same number of cells and the same rank. And we're also missing a rank five from the list. This highlights the issue where we are trying to rank duplicate numbers. We can get around this by making a small addition to the formula. Enter plus count if open parenthesis for your range into E5 to E5, but put in absolute reference around the first E5. For the criteria, select E5 again, close parenthesis. And then minus one close parenthesis. Drag that down to g ten. And as you can see, the sofa and wardrobe have a different rank even though they have the same number of sales. This is a very handy way to rank data with duplicates. We need to amend this formula just a little more to account for any blanks. So go to the start and create an if statement that states when the product name is blank, display a blank. Otherwise calculate the rank like say, and drag that down. And that is how you create a product inventory dashboard is driven from your transactions. You can go further and add calculations for average product cost, sales, profits, inventory values. To make this even more useful. In the next video, we will be creating the main dashboard. This is the final video in the course and will tie all the worksheets you have created so far together. This dashboard will contain charts for your revenue, expenses, and products. It will also be fully dynamic, allowing you to choose your time periods. And all the charts and figures will update accordingly. 6. Lesson 5: Main Dashboard: This is the final video that will bring everything you have done in the previous videos together. All the charts and data will be dynamic. And you'll have the option to change the time periods and have all the figures and charts update automatically. So let's begin. First, can you open the business tracker workbook and go to the main dashboard tab? This sheet is pre-populated with some simple data to get you started. But we'll do the bulk of the work together to get this dashboard up and running. First, let's start with the top-left. Like always, orange cells are cells. We enter text or numbers. Blue cells contain drop-down boxes where you will select data. And gray cells will be where you enter formulas. Let's go to cell D7 and enter the date of the first of January 2019. Then let's create the time period drop-down list. Select D8, K2, Data Validation, select list. And in the source we will manually enter our list options. For the source, we will enter weekly, comma, monthly, comma, yearly comma and full history, and select OK. As you can see, these are now available in the drop-down list. For the start date, we will need to enter an if statement to cater for the full history option. So if the time period selected equals full history, we want the start date to be the earliest transaction Day on the transactions page. Otherwise, we want the start date to be the one entered in D7. This can be done as follows. Enter equals, if open parentheses, D equals full history within the quotation marks. Then enter men. Open parenthesis. Go to the transactions worksheet and select column B. Otherwise, select D7, which is the start day. Let's also name this cell as date underscore begin, as it's much easier to reference a named range. For the end date, we will need multiple if statements to cater for every time period. So let's begin. If time period equals weekly. And the end date is the date begin plus six. Time period equals monthly. And we will want to use the date function. And the year will be the same as the date begin. The month would be the date begin plus one. And the day will be the date begin minus1. If the time period. We can copy the same formula we used for monthly, but just change the one month to 12 months. And finally, for full history, we will take the max state on the transaction worksheet. And there you have it. It was quite a long formula, but allows you to cater for all time periods. And as a final step, let's name this cell as date underscore. And I like to do one more small update that will introduce you to conditional formatting. Since the full history date range will not require you to enter a start day, it would be better to blank out what's feasible in cell D7 to avoid any confusion, this can be easily done with conditional formatting. Just highlight the cell D7. Go to the conditional formatting on the Home tab. Select new role. Go to use formula to determine which cells to format. Where it states format values where this formula is true. Enter D8 equals for history in the quotation marks. Then select Format and entered the fill and font color has the same color. We'll go with dark blue and then hit OK. Now you can see that when full history selected, the reports start date cell changes color and it is no longer visible. Next, let's populate some of the data at the bottom, starting with revenue. Go to cell C42 T1. And let's use our income named range to populate the income streams. For targets, we are going to need to pull through the targets for the correct revenue type based on the time periods selected. So we will use a combination of the index function and the match function to do that. Select the 4.2.1 type equal index, open parenthesis. Go to the targets worksheet. And the array will be b 11 to G, 32 on the target Sheet. And let's put in absolute reference around this. For the row number, we will need to look up the revenue type. So to do that, type, match open parenthesis for the lookup value back to the main dashboard and select the revenue type in C41. And we are looking this value up in column B on the target xi. So select b 11 to B32 as the lookup array and put an absolute reference around that. For the match type, we want an exact match. So put 0 and close parenthesis. And for the column number, we went to reference the column that matches the time period selected in the main dashboard. So enter match, open parenthesis. As the Lookup Value, select, deviate from the main dashboard and put in absolute reference around that. And for the lookup array, select B11 to G11 and put an absolute reference around that. For the match, we want an exact one. So put 0 and close parenthesis twice, and then drag that formula down. Now you'll notice we have a bunch of NAs that we want to get rid of. This can simply be done with an if statement. So let's update the formula. Select cell D 41, and at the start, right, if open parenthesis, select C 41, the revenue type equals blank, then blank and puts another close parenthesis at the end. Now if you drag this formula down, we get rid of the NA's. For the actual revenue we'll need to use the sum is formula to look up the transactions work she and sum up all the transactions based on the revenue type for the specific time period. This is a little more complicated, but I'll walk you through it. Select y4, z1, and type equals sumifs, open parenthesis. For the summer range, go to the transactions worksheet and select the Income column, column i, and put an absolute reference around this. For the criteria one range, select the transactions subtype on the transactions worksheet, column D, and then put an absolute reference. Four criteria. One, go back to the main dashboard and select the revenue type in C 414 criteria range to go to the transactions worksheet and select the date column, column B and F For, for an absolute reference. For our date criterias, we want to only sum the transactions that are between a start and end dates. So we need to split this into two separate criterias. For the criteria, we want to only some ANY days that are more than or equal to a start date. So we will need to enter quotation marks. The greater than sign. Equals quotation marks, the n symbol. And then we want to reference R star a. So type date underscore begin as this was the name of the start day we defined in the main dashboard. For criteria range three, select the date column, column b. Again, we want to only some, ANY days that are less than or equal to date. We will need to enter quotation marks. The less than symbol equals quotation marks and the n symbol. And we want to reference our end day. So type date underscore end, as this was the name of the end date we defined on the main dashboard. Now I know that was a lot, but we're almost there. To cater for scenarios where you may have negative transaction types for income, IE, returns, or refunds. We want to ensure these are also accounted for. So enter a minus sign and then copy the whole formula and paste it off to the minus sign. And all you need to change is the sum range from the incoming column, column i to the outgoings column, column j. Just like we did on the targets formula, we'll make a quick update so that no data is calculated when we do not have an income type. So select cell E4, t1, and at the star, right, if open parenthesis, select the revenue type in C41 equals blank, blank, and put another close parenthesis at the end. Now, drag this formula down. The differences formula is much simpler. It's just your actual income minus your target income. So select F4, t1 type equals, if open parentheses. Your income type equals blank. Blank. Otherwise your actual income minus your target. So E4, t1 minus t0 41. And drag that down. We'll do the same for expenses now. Go so H 41 and type equal expense to populate all your expense streams. For the expense targets, it's similar to the formula we did for income targets. Go to cell I4, T1, and enter equal f, open parenthesis, which 41 is blank, then blank. Otherwise we'll want to pull in the expense targets. So enter index, open parenthesis for the array. Go to the targets worksheet and select IL-1 to N32. Hit F4 for an absolute reference. For the row num into Match. Open parenthesis. For the Lookup Value, select age 41 on the main dashboard. The lookup array is IL-1 to either 32 on the targets worksheet. Hit F4 again for an absolute reference and a 0 for the match type. For the column NUM. Enter match, open parenthesis. For the Lookup Value, select D8 on the main dashboard. The lookup array is 1111 on the targets worksheet. And enter zeros for the match type. Drag this formula down. For the actual expense. We can simply copy the formula from the income table like so. And drag that down. And the difference can be copied and drag Dan as well. Lets now move onto the graphs. Will begin in the top right with a simple one. Right-click the box, have click Select Data, then select hey, H6 to j nine, and the chart will update. And click OK. Next, we'll create the graph for the income stream. And I'll show you how to make these charts dynamic so that it will ignore any blank cells. So I'll first show the problem. If we don't make the graph dynamic, right-click the box above the income table and click Select Data was holding Control. Select cells C4, T1 to E6, and then select C 39, t 39. And showed the legend entries state Target and actual. And the horizontal axis state product, sales, affiliate marketing. If they are reversed, just hit the switch row and columns button. Then HIT okay. You'll notice that the graph is not that useful because of all the blanks. And you don't want to have to update the ranges each time you add categories. So I'll show you how to make it dynamic. We need to use the offset function to define the series alongside named ranges. So let me show you the formula for the income target values. Pick any free cell. So let's pick D6, D2 and type equal offset open parenthesis. For the reference, select D 41 and hit F4 for an absolute reference. For the rows into 00 for the columns, for the height into account a. And select default t1, t2, d 60, and hit F4 for an absolute reference. And close parenthesis. Then minus count blank, open parenthesis, and select the 41 to D6 again and hit F4 and close parenthesis. So we are counting the number of cells that have data in it, including formulas minus the number of blanks, which result in the number of nonblank cells. For the width N21. Then finally close parenthesis. And as you can see, only the non-blank values for income targets are shown. And this is the formula we will use for the ink target named range. Let us quickly do the formula for the income actual named range. Copy the formula we just did for income targets and change the references from D to E psi. Now that you have the formulas, let's define a named ranges. Copy the income target formula, go to formulas, and then name manager. Select new for the name type ink. Target four refers to paste the formula. Heat. Okay. Now copy the income actual formula we did and create the ink actual name range, like say, the name ranges are created, we can update our graphs. Right-click and go select data, select the target series and click edit. And we need to do is replace the cell references at the end with ink target just after the exclamation mark. Do the same for the actual series and click Edit, replaced the cell references at the end with ink actual just after the exclamation mark. Once you hit okay, the charge now update. We'll need to do the same for expense target and expense actual. Copy the ink actual formula and change the reference from E to I like so to calculate a dynamic range for the expense target and repeat for the actual expense and change the reference from i to j. Next, we will create the name ranges for both the expense target and expense actual. Copy the formula for expense target. Then go to Name Manager, select new type exp target, and paste the formula. And do the same for actual like psi. We will name it. And finally, we will do the charts. First, right-click the box and select data. Let's select some data to get some information populated and make it much easier whilst holding Control. Select cells H 41 to j 60, and then select H 39 to J 39 ensured the Legend Entry state Target and actual and the horizontal axis state the expense types. If they are reversed. Just hit the switch row and columns button. Select the target series and click Edit. And all you need to do is replace the cell references at the end with XP target just after the exclamation mark. Do the same for the actual series and click Edit. Replace the cell references at the end with x Actual just after the exclamation mark. And now both your revenue and expense graphs are dynamic. So I have less the products section until last as we need to make an update to our formulas on the product C. So that updates based on the time period set. Go to the product worksheet and we will need to update the formulas for total purchase and total sold. Let's first go to cell D5 in the total purchase column and add conditions for the time period. For the third criteria range. Go to the transactions we're actually and select column B and then hit F4 for an absolute reference. The third criteria, type quotation marks greater than equals quotation mark. And then add the n symbol. And then reference the start date, which is the date begin. For the fourth criteria range, select column B again and then hit F4 for an absolute reference. And for the fourth criteria, type quotation marks, less than equals, quotation mark. And then the n symbol. And then reference the end date, which is called date. And the end. This will then only some the transactions within the time period selected on the main dashboard page. Let's drag that formula down. Now to speed up, updating the total sold, we can copy this part of the formula and paste the formula at the end, like so. And then drag down. Okay, great. Now that the products heat is updated, we can now complete the product data on the main dashboard. Go to the main dashboard worksheet and select cell and 41. And here we want to pull through the top-selling products. We can use the index and match functions to do this. So type equals index, open parenthesis for the array go to the product sheet and slept column B, which is the product name column, and hit F4 for an absolute reference. Therefore, the row number, we will look for one in the product, so rank column using the match function. So enter match, open parenthesis for the Lookup Value, select m for T1 on the main dashboard. Will also put a mixed reference where we want to look the columns so it's easy to copy by pressing F4 three times. For the lookup array, goto the product sheet and select column G and hit F4 once for an absolute reference. For the match type into 0 and then close parenthesis. And for the column NUM, you can enter 0 and close parenthesis. And then you can drag this formula down. You'll notice we get NAs as we have less than ten products. So we can fix this by using a combination of the if statement and is error function. Go back to n 41 and go to the beginning of the formula. And type f, open parenthesis is error, open parenthesis. And then go to the end and put a close parenthesis as a logical argument for the value if true, enter quotation marks like say and four, the value is false, copy the index formula and paste it here. What this means is if the index formula generates an error because the lookup value doesn't exist, then display a blank, otherwise, display the index result. Less. Drag that down. You can see the NAs disappear. Now, for the total purchased column, we can copy the formula for the product names. And we just need to change the array from column B and the product sheet to column D. Remember to change it twice. We change the array to column D as this was the total purchased column on the product sheet. Next, drag down. And then just quickly central line it. For the total soil column. We can copy the formula and change the array to column E. And then drag down. For the inventory held. We can copy the formula and change the array to column F, and then drag down. And finally, we will update the chart to show the top products sold will make these dynamic the same way we did for the income and expense grass. To speed this up, we will copy the offset formulas we did previously. And we will change the ranges like psi to reference the products sold data in column P. We'll then copy the formula, goto formulas, then name manager, and add a new name range for the name, call it product sold, and copy the formula in the refers to box. Then close the Name Manager books. We will then create the graphs. So right-click the box, click Select Data. Let's first select some data, self-control. Select n 41 to 50, and then select P 4150. Make sure the product names are in the horizontal box on the right hand side. For the series, click edit for the series name, right. Products sold. For the series values. Delete the cell references and replace with product sold, which is the name range. Just after the exclamation mark. We will just delete all those offset formulas we created as they're no longer needed. And that is, you have now built your business track. A spreadsheet, data is fully dynamic. You can change the time periods and you can change the start date and the graphs and data will update. Like say. I really do hope that you found this course useful. And the skills learned here will allow you to create all sorts of different spreadsheets. And so various problems within Excel. If you have any questions you want me to do, any other courses, please let me know.
{"url":"https://www.skillshare.com/en/classes/excel-tutorials-create-your-automated-business-tracker-profit-and-loss-spreadsheet/1080103938?via=similar-classes","timestamp":"2024-11-04T08:11:30Z","content_type":"application/xhtml+xml","content_length":"333485","record_id":"<urn:uuid:0008d942-5552-4b64-9883-db2922e505c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00623.warc.gz"}
and M Simulation and Modeling in the Social and Policy Sciences Mills College, Fall 2013 Lecture Tu-Th 2:30-3:45PM AHR 2001 Lab : Th 4-6:00PM STR 14 Instructor: Dan Ryan | danryan at mills dot edu | @djjr | 510-430-3242 | Vera Long 105 Office Hours M, T, W by Appointment http://officehours.danryan.us *Quick Jumps:** Course Intro | Flowcharts | Decision Models | Difference Equations | Stock and Flow Models (system dynamics) | Markov Models | Sorting and Peer Effects | Aggregation | Cellular Automata | Tipping Models | Diffusion | Coordination and Cooperation | Biases | Path Dependence | Mechanism Design | The Wisdom of Crowds | Extended Description Academic Integrity NOTE. The schedule below indicates when I expect we will be reading/discussing the readings in the course text book. Other recommended readings (and even the occasional required reading) may be added to the online syllabus from time to time (with plenty of notice and instructions on how to obtain the readings). We may fall behind this schedule from time to time. I will try to always mention what we are reading next at the end of each session. In any case, though, it is the student's responsibility to (1) read/view things on time, and (2) keep abreast of where we are in the course schedule. Week 0 Thursday August 29 Class: Course Introduction — What are models and why do we model? 1. Introduction to PPOL225 2. Introduction to Coursera & Scott E. Page (SEP) 3. Lecture 1.1: Why Model? (8:52) Lab Excel Skills 1. Deliverable: TBA After Thursday's classes: 1. Get a Coursera account and sign up for Model Thinking. 2. View the remaining lectures in Section 1 of Model Thinking, (about 1 hour) 3. (optional) Read pp. 11-43 of Schelling's Micromotives and Macrobehaviors. DUE before 11 p.m. Sunday September 1 Flow Charts CLASS 1 BEFORE Class 1. Ryan. Ryan on Flowcharts I 2. HCI Consulting. An overview (optional) 1. Univ Plymouth, UK. Flow Charts for Simple Tasks: Tutorial with exercises 2. Univ Plymouth, UK. Flow Charts for Classification: Tutorial with exercises 3. problems 71, 72, 73, 74, and 75 (in problem notebook) IN Class 1. Review of rules; Concept and utility; Deterministic models; 2. Problems 76 and 83 CLASS 2 and LAB BEFORE Class Before Lab IN Class We will work on problems 76, 83, 85, 86, 87, and 91 during this class session. IN Lab DUE on module deadline 1. Section Quiz Flowcharts Week 2 Tuesday September 10 BEFORE class Class: Introduction to Decisions and Review of Probability We will introduce decision trees, choice and chance nodes, and review probability and expected value. Note: class will begin with a very short probability diagnostic quiz. After Class Preliminary Decisions Quiz Week 2 Thursday September 12 Before Class 1. Watch lecture 4.6, "Value of Information" (8:41) 2. Re-read pp. 206-21, especially 219-221 in S&Z Class: The Value of Information Decisions under uncertainty with testing Lab 4 Decision Trees Excel skills: formatting, string formulas, conditional formatting, spinners, joining cells, borders Week 3 Tuesday September 17 BEFORE Class 1. Re-read all of the S&Z material on decision trees 2. Re-read SEP material on decision trees as necessary. 3. Be prepared to work on problems at end of SEP Decision Theory in class. Class Decision Trees and Decision Strategies Problems from Decision Theory Quiz: Decision Trees So Far Week 3 Thursday September 19 BEFORE Class 1. Re-read S&Z pp. 216-219; 2. Problems TBA (Look ahead at problems for class below) 3. Re-read S&Z pp. 221-229 4. Problems TBA (Look ahead at problems for class below) Class: Risk Aversion, Tree Flipping, and Imperfect Tests **Lab 3: Decision Trees II - Information and Tree Flipping Week 4 Tuesday September 24 BEFORE Class 1. If you are at all hesitant about using subscript notation, review it here. 2. Read Stokey and Zeckhauser, at least pp. 47-58 ch. 4 "Difference Equations" 3. Watch D Woodlock Introduction to Stock and Flow Diagrams 4. Ryan pages: 1 Introduction and 2 Rates and Amounts Class Introduction to Difference Equations Thursday September 26 BEFORE Lab Read Stokey & Zeckhauser pp. 66-73 and attempt problem 136 Class: Difference Equations II Lab 5: Difference Equations Week 5 Tuesday October 1 BEFORE Class 1. Ryan Lecturettes: 3 Equilibria 2. Read S&Z pp. 66-73 3. Look over the table of contents in Kirkwood, System Dynamics Methods: A Quick Introduction] (DL) 4. Read Kirkwood, "System Behavior and Causal Loop Diagrams" (14pp) (DL) 5. Read Kirkwood, "A Modeling Approach" (6 pp. in DL) Class: Stock and Flow Models I Week 5 Thursday October 3 BEFORE Class BEFORE Lab **Class: Stock and Flow Models II]] LAB 5: Stock and Flow Models Linear Programing and Optimization Week 6 Tuesday October 8 BEFORE Class Week 6 Thursday October 10 BEFORE Class BEFORE Lab LAB 6 Part II Families of Models Week 7 Tuesday October 15 Week 7 Thursday October 17 BEFORE Class Week 8 Tuesday October 22 Class Introduction to Markov Models BEFORE Class Week 8 Thursday October 24 LAB 6: Programing Markov Models Section Quiz Markov Process Models Actors, Others, and the Aggregation of Individual Decisions "Measuring Segregation" BEFORE Class Class: Schelling Segregation Model BEFORE Class and Lab LAB 8 : Title Tipping Points, Diffusion, and Contagion Week 9 Tuesday October 29 BEFORE Class Read Tipping Points and Lamberson and Page: Tipping Points (READ INTRO ONLY) 1. View lecture 7.1, "Tipping Points" (5:58) Class Tipping Points Week 9 Thursday October 31 BEFORE Class BEFORE Lab 1. View lecture 7.4, "Classifying Tipping Points" (8:26), and lecture 7.5, "Measuring Tips for Measuring Tips" (13:39) Class: Diffusion and Contagion Lab 9 Tipping, Diffusion, and Contagion Section Quiz Tipping Point, Diffusion, and Contagion Actors, Coordination, and Cooperation Agent Models: Neighbors, Peers, Diffusion, Contagion Week 10 November 5 Day 1 BEFORE Class we will assume you have viewed lectures on Schelling, Granovetter, and standing ovation model and read the associated materials. IN CLASS we will develop a simple agent model in "pseudocode" and talk about the components of agent models in code. HOMEWORK download NetLogo onto your work machine and work through tutorials 1, 2, and 3 Read Tipping Points and Lamberson and Page: Tipping Points (READ INTRO ONLY) 1. Read Diffusion and SIS Day 2 AT SOME POINT view lecture 7.1, "Tipping Points" (5:58), 7.2: "Percolation Models" (11.48), 7.3A: "Contagion Model 1-Diffusion" (7:24), 7.3B: Contagion Model 1-SIS (9:12), 7.4, "Classifying Tipping Points" (8:26), and 7.5, "Measuring Tips for Measuring Tips" (13:39) IN CLASS we will do some on-paper coding exercises on these models. IN LAB we will play with a few NetLogo models and build one. We'll use either that or a pre-built one to collect some simulation data. Write up will be a short assessment of that data. Week 10 Tuesday November 5 BEFORE Class **Class: Rationality, Rules, and Behavior Week 10 Thursday November 7 **BEFORE Class and Lab Collective Action, Prisoners Dilemma, and the Commons BEFORE Class Start this section with the short introduction video (17.1) and then read the brief entry in the Stanford Encyclopedia and view lecture 17.2. Complete this worksheet on prisoners' dilemma. Read Nowak and Sigmund on cooperation and then view lecture 17.3. Read Nowak and Sigmund: "Five Ways to Cooperate" 1. SEP Lecture 17.4: Collective Action and Common Pool Resource Problems (7:23) 2. SEP Lecture 17.5: No Panacea (6:03) LAB 10 Section Quiz Coordination and Cooperation Models Big Data and Visualization Week 11 Tuesday November 12 BEFORE Class Week 11 Thursday November 14 BEFORE Class and Lab LAB 11 Week 12 Tuesday November 19 BEFORE Class Week 12 Thursday November 21 BEFORE Class and Lab LAB 12 Path Dependence and Mechanism Design Week 13 Tuesday November 26 To Do. 1. Start by viewing lectures 13.1 and 13.2 (about 25 minutes). 2. Next, work through a few practice problems which we will review in class. 3. Next read through SEP paper and view lectures 13.3 and 13.4 I(about 25 minutes). 4. Follow this with lectures 13.5 and 13.6 (about 20 minutes). Week 14 Tuesday December 3 To Do. Week 14 Thursday December 5 Models and the Wisdom of Crowds The final section of the Model Thinking course looks at how diversity contributes to "wise crowds" when making predictions. It starts by reviewing category models and linear models and how they can be used to make predictions and then introduces the Diversity Prediction Theorem. The course concludes with a lecture on the value of having lots of models. Section 20: The Many Model Thinker: Diversity and Prediction]] To Do. Thursday 5 December Class: Review Lab: Take-Away Skill Show FINAL EXAM Mon Dec 16 2–5pm page revision: 214, last edited: 25 Apr 2014 04:28
{"url":"https://djjr-courses.wikidot.com/ppol225:syllabus","timestamp":"2024-11-08T00:57:22Z","content_type":"application/xhtml+xml","content_length":"91168","record_id":"<urn:uuid:05ffa60a-248d-4b8b-9d3e-b60673450f1a>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00319.warc.gz"}
1. 2. Three sided figure 2. 7. To bring two or more numbers together 3. 9. An expression of two or more algebraic terms 4. 10. A quantity representing the power to which a given number or expression is to be raised 5. 12. relative between sets and values in math 6. 14. A statement that the values of two mathematical expressions are equal 7. 17. Take away ( a number or amount) from another to calculate the difference 8. 18. Resembling without being identical 9. 20. line A line which runs up and down a page 1. 1. A branch of mathematics dealing with the relations of sides and angles of triangles and with the relevant of any angles 2. 3. Belonging to two or more quantities 3. 4. A quality expressed as a sum 4. 5. A number when multiplied by another provides a given number or expression 5. 6. A diagram representing a system of connections or interrelations among two or more things 6. 8. Find how many times a number contains another 7. 11. Plane figure with four equal straight sides 8. 13. Able to be represented by a straight line on a graph 9. 15. involving the second and no higher power of an unknown quantity or variable 10. 16. Result of subtracting one number from another 11. 19. Obtain from ( a number) another that contains the first number a specified number of times
{"url":"https://crosswordlabs.com/view/2020-04-24-705","timestamp":"2024-11-09T19:53:49Z","content_type":"text/html","content_length":"54688","record_id":"<urn:uuid:40e590c1-6ee5-4112-92c7-cfae62e06baa>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00187.warc.gz"}
Finding the First Derivative of Polynomial Functions Using Product Rule at a Point Question Video: Finding the First Derivative of Polynomial Functions Using Product Rule at a Point Mathematics • Second Year of Secondary School Find the first derivative of π (π ₯) = (9π ₯Β² β π ₯ β 7)(7π ₯Β² β 8π ₯ β 7) at π ₯ = β 1. Video Transcript Find the first derivative of π of π ₯ equals nine π ₯ squared minus π ₯ minus seven multiplied by seven π ₯ squared minus eight π ₯ minus seven at π ₯ equals negative one. Weβ ve been asked to find the first derivative of this function at a given value of π ₯. To do this, we need to differentiate the function and then substitute π ₯ equals negative one. The function weβ ve been given is the product of two polynomials. We could distribute the parentheses to give a single polynomial function and then differentiate. But we can also approach this problem by applying the product rule of differentiation. This states that for two differentiable functions π ’ and π £, the derivative of their product π ’π £ is equal to π ’ times dπ £ by dπ ₯ plus π £ times dπ ’ by dπ ₯. In other words, we multiply each function by the derivative of the other and add these expressions together. For the given function then, we can let π ’ equal the first polynomial and π £ equal the second. We need to find the derivative of each function separately. And as theyβ re both polynomials, we need to recall the power rule of differentiation. This states that for real constants π and π , the derivative with respect to π ₯ of π multiplied by π ₯ to the π th power is π π π ₯ to the π minus first power. In other words, we multiply by the exponent and then reduce the exponent by one. Applying this rule to function π ’ gives dπ ’ by dπ ₯ equals 18π ₯ minus one. Remember, the derivative of a constant with respect to π ₯ is simply zero. Applying the same rule to π £ gives dπ £ by dπ ₯ equals 14π ₯ minus eight. Next, we substitute each of these expressions into the product rule, giving π prime of π ₯ equals nine π ₯ squared minus π ₯ minus seven multiplied by 14π ₯ minus eight plus seven π ₯ squared minus eight π ₯ minus seven multiplied by 18π ₯ minus one. Distributing the first set of parentheses gives 126π ₯ cubed minus 72π ₯ squared minus 14π ₯ squared plus eight π ₯ minus 98π ₯ plus 56. And then distributing the second set of parentheses gives 126π ₯ cubed minus seven π ₯ squared minus 144π ₯ squared plus eight π ₯ minus 126π ₯ plus seven. Next, we need to simplify by grouping the like terms in this expression. This gives π prime of π ₯ equals 252π ₯ cubed minus 237π ₯ squared minus 208π ₯ plus 63. So, weβ ve found an expression for the first derivative of the function π of π ₯. We now need to evaluate this derivative when π ₯ equals negative one. Substituting π ₯ equals negative one gives 252 multiplied by negative one cubed minus 237 multiplied by negative one squared minus 208 multiplied by negative one plus 63. Thatβ s negative 252 minus 237 plus 208 plus 63, which is negative 218. So, by using the product rule to differentiate the given function, weβ ve found that the first derivative of π of π ₯ at π ₯ equals negative one is negative 218. Note that it would also have been possible to substitute π ₯ equals negative one into the unsimplified expression for the derivative and evaluate at this point.
{"url":"https://www.nagwa.com/en/videos/737138163239/","timestamp":"2024-11-13T21:21:53Z","content_type":"text/html","content_length":"254320","record_id":"<urn:uuid:16447d05-db1d-4532-8cbd-35d0ff2dadaa>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00832.warc.gz"}
Blank Multiplication Chart 1-12 Pdf 2024 - Multiplication Chart Printable Blank Multiplication Chart 1-12 Pdf Blank Multiplication Chart 1-12 Pdf – If you are looking for a fun way to teach your child the multiplication facts, you can get a blank Multiplication Chart. This may allow your little one to fill the information by themselves. You can get empty multiplication graphs for different product or service varies, which includes 1-9, 10-12, and 15 merchandise. You can add a Game to it if you want to make your chart more exciting. Here are a few tips to get your little one began: Blank Multiplication Chart 1-12 Pdf. Multiplication Maps You may use multiplication graphs in your child’s college student binder to assist them to memorize mathematics facts. Although many youngsters can remember their arithmetic facts in a natural way, it will require lots of others time to achieve this. Multiplication charts are an excellent way to reinforce their learning and boost their self-confidence. In addition to being educational, these maps may be laminated for added durability. Listed below are some beneficial approaches to use multiplication maps. You may also take a look at these web sites for valuable multiplication truth This course handles the basics of the multiplication kitchen table. As well as learning the guidelines for multiplying, individuals will comprehend the concept of aspects and patterning. Students will be able to recall basic facts like five times four, by understanding how the factors work. They may also be able to use the property of one and zero to fix more difficult products. By the end of the lesson, students should be able to recognize patterns in multiplication chart 1. As well as the common multiplication graph or chart, individuals should create a graph or chart with additional elements or less elements. To make a multiplication graph or chart with increased elements, pupils should generate 12 desks, each and every with 12 series and 3 posts. All 12 desks have to fit using one sheet of papers. Outlines needs to be driven having a ruler. Graph pieces of paper is best for this project. Students can use spreadsheet programs to make their own tables if graph paper is not an option. Online game concepts Whether you are teaching a newcomer multiplication lesson or taking care of the competence of the multiplication table, you are able to put together exciting and interesting activity tips for Multiplication Graph 1. A number of enjoyable tips are listed below. This game requires the college students to remain work and pairs on a single issue. Then, they may all hold up their charge cards and talk about the best solution for the minute. They win if they get it right! When you’re instructing kids about multiplication, one of the better resources you can give them is really a computer multiplication chart. These printable bedding appear in a variety of models and might be imprinted on one web page or several. Little ones can learn their multiplication information by copying them through the memorizing and chart them. A multiplication graph or chart will be helpful for many reasons, from supporting them discover their arithmetic information to instructing them using a calculator. Gallery of Blank Multiplication Chart 1-12 Pdf Free Printable Blank Multiplication Chart 1 12 Times Tables Worksheets Printable Blank Multiplication Chart Black White 1 12 Free Memozor Blank 12×12 Multiplication Chart Download Printable Pdf A Blank Leave a Comment
{"url":"https://www.multiplicationchartprintable.com/blank-multiplication-chart-1-12-pdf-2/","timestamp":"2024-11-12T07:40:49Z","content_type":"text/html","content_length":"53277","record_id":"<urn:uuid:38615dd5-1f9c-45c5-ba90-c72b2b683d89>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00373.warc.gz"}
Decimeter to Meter Converter Last updated: Decimeter to Meter Converter Welcome to the decimeter to meter converter, a fast and easy tool that performs the conversion of meter to decimeter and vice versa. In this article, we will go over the following topics: • What does decimeter mean? • How to use the decimeter to meter converter. • How to convert 1 decimeter to meters and vice versa. • How many decimeters are in a meter? 🙋 If you are interested in length conversions, take a look at our other tools: Or check out the . What does decimeter mean? TL;DR → 1 meter is equal to 10 decimeters. But we wouldn't be called Omni Calculator if we didn't provide you with some interesting yet useless knowledge! The prefix "deci-" comes from Latin, decimus, which means "tenth". Therefore, 1 decimeter is equal to one-tenth of a meter! The abbreviation of decimeter is dm, just like a meter is abbreviated in m. This unit is part of the International System of Units (SI, from French, Système international d'unités), used by over 195 countries in the world, alongside the system's base unit, the meter. The Imperial System is another widely recognized measurement system currently used in the United States, Liberia, and Myanmar. 1 decimeter is equal to: • 100 millimeters (mm); • 10 centimeters (cm); • 0.1 meters (m); • 0.0001 kilometers; • 31.5 eighths of an inch (^1/[8] in); • 3.94 inches (in); • 0.33 feet (ft); and • 0.000062 miles (mi). How to use the decimeter to meter converter Our decimeter to meter converter is very simple to use. All you have to do is input a desired length in one of the two units, and the conversion will be computed automatically. How easy is that? As a bonus, we have included a third field in the More conversion options section that allows you to convert your result into another unit of your choice, for example, in inches, feet, or kilometers. If you want to convert 1 decimeter to meters manually, you have to divide by 10: meters = decimeters / 10 For example: 34.98 dm / 10 = 3.498 m On the other hand, if you want to convert 1 meter to decimeters, you have to multiply by 10: decimeters = meters × 10 For instance: 76.4 m × 10 = 764 dm Remember that dm is the abbreviation of decimeters. How many decimeters are in a meter? There are 10 decimeters in a meter. To do a conversion of decimeters to meters, you need to divide the length by 10, for example: 43 dm / 10 = 4.3 m Going from meters to decimeters, you will need to multiply instead, for instance: 56.4 m × 10 = 564 dm What is a decimeter? A decimeter is a unit of measurement of length of the metric system, otherwise known as the International System of Units (SI). It is equal to one-tenth of a meter, which is the base unit of the SI. A decimeter is approximately equal to 3.94 inches. How do I convert m to dm? To convert meters (m) to decimeters (dm), follow these steps: 1. Note down the length in meters. Let's say you want to convert 12 m. 2. Multiply by the conversion factor 10: meters = decimeters × 10 = 12 × 10 = 120. 3. Now you know there are 120 decimeters in 12 meters. How do I convert 77.8 dm to m? To convert 77.8 dm to meters (m), you need to divide the length in decimeters by 10. 77.8 dm / 10 = 7.78 m You can do this with any length in decimeters. If you want to convert meters into decimeters, you need to multiply by 10 instead.
{"url":"https://www.omnicalculator.com/conversion/decimeter-to-meter","timestamp":"2024-11-04T04:36:57Z","content_type":"text/html","content_length":"420599","record_id":"<urn:uuid:62e83257-2d8b-48b9-89b0-ad729da42601>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00153.warc.gz"}
Using two measures theory to approach bags and confinement We consider the question of bags and confinement in the framework of a theory which uses two volume elements √-gd^4x and φd ^4x, where φ is a metric independent density. For scale invariance a dilaton field φ is considered. Using the first order formalism, curvature ( φR and √-gR^2 ) terms , gauge field term( φ √-F^a[μν]F^a [αβ]g^μαg^νβ and √-gF^a[μν]F^a [αβ]g^μαg^νβ ) and dilaton kinetic terms are introduced in a conformally invariant way. Exponential potentials for the dilaton break down (softly) the conformal invariance down to global scale invariance, which also suffers s.s.b. after integrating the equations of motion. The model has a well defined flat space limit. As a result of the s.s.b. of scale invariance phases with different vacuum energy density appear. Inside the bags the gauge dynamics is normal, that is non confining, while for the outside, the gauge field dynamics is confining. ASJC Scopus subject areas • Nuclear and High Energy Physics • Atomic and Molecular Physics, and Optics Dive into the research topics of 'Using two measures theory to approach bags and confinement'. Together they form a unique fingerprint.
{"url":"https://cris.bgu.ac.il/en/publications/using-two-measures-theory-to-approach-bags-and-confinement","timestamp":"2024-11-03T16:16:53Z","content_type":"text/html","content_length":"53834","record_id":"<urn:uuid:1114947b-22eb-4ddb-b917-d0c9e8c11498>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00828.warc.gz"}
Fractions 82848 - math word problem (82848) Fractions 82848 Calculate one-seventh of the quotient of the fractions three-quarters and two-thirds. Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! Tips for related online calculators Need help calculating sum, simplifying, or multiplying fractions? Try our fraction calculator You need to know the following knowledge to solve this word math problem: Grade of the word problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/82848","timestamp":"2024-11-03T23:17:09Z","content_type":"text/html","content_length":"61543","record_id":"<urn:uuid:78145313-848d-46ac-a7af-e77a0c2f2829>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00263.warc.gz"}
Markov Chain Monte Carlo (MCMC) Markov Chain Monte Carlo (MCMC)¶ We provide a high-level overview of the MCMC algorithms in NumPyro: • NUTS, which is an adaptive variant of HMC, is probably the most commonly used MCMC algorithm in NumPyro. Note that NUTS and HMC are not directly applicable to models with discrete latent variables, but in cases where the discrete variables have finite support and summing them out (i.e. enumeration) is tractable, NumPyro will automatically sum out discrete latent variables and perform NUTS/HMC on the remaining continuous latent variables. As discussed above, model reparameterization may be important in some cases to get good performance. Note that, generally speaking, we expect inference to be harder as the dimension of the latent space increases. See the bad geometry tutorial for additional tips and tricks. • MixedHMC can be an effective inference strategy for models that contain both continuous and discrete latent variables. • HMCECS can be an effective inference strategy for models with a large number of data points. It is applicable to models with continuous latent variables. See this example for detailed usage. • BarkerMH is a gradient-based MCMC method that may be competitive with HMC and NUTS for some models. It is applicable to models with continuous latent variables. • HMCGibbs combines HMC/NUTS steps with custom Gibbs updates. Gibbs updates must be specified by the user. • DiscreteHMCGibbs combines HMC/NUTS steps with Gibbs updates for discrete latent variables. The corresponding Gibbs updates are computed automatically. • SA is a gradient-free MCMC method. It is only applicable to models with continuous latent variables. It is expected to perform best for models whose latent dimension is low to moderate. It may be a good choice for models with non-differentiable log densities. Note that SA generally requires a very large number of samples, as mixing tends to be slow. On the plus side individual steps can be fast. • AIES is a gradient-free ensemble MCMC method that informs Metropolis-Hastings proposals by sharing information between chains. It is only applicable to models with continuous latent variables. It is expected to perform best for models whose latent dimension is low to moderate. It may be a good choice for models with non-differentiable log densities, and can be robust to likelihood-free models. AIES generally requires the number of chains to be twice as large as the number of latent parameters, (and ideally larger). • ESS is a gradient-free ensemble MCMC method that shares information between chains to find good slice sampling directions. It tends to be more sample efficient than AIES. It is only applicable to models with continuous latent variables. It is expected to perform best for models whose latent dimension is low to moderate and may be a good choice for models with non-differentiable log densities. ESS generally requires the number of chains to be twice as large as the number of latent parameters, (and ideally larger). Like HMC/NUTS, all remaining MCMC algorithms support enumeration over discrete latent variables if possible (see restrictions). Enumerated sites need to be marked with infer={‘enumerate’: ‘parallel’} like in the annotation example. class MCMC(sampler, *, num_warmup, num_samples, num_chains=1, thinning=1, postprocess_fn=None, chain_method='parallel', progress_bar=True, jit_model_args=False)[source]¶ Bases: object Provides access to Markov Chain Monte Carlo inference algorithms in NumPyro. chain_method is an experimental arg, which might be removed in a future version. Setting progress_bar=False will improve the speed for many cases. But it might require more memory than the other option. If setting num_chains greater than 1 in a Jupyter Notebook, then you will need to have installed ipywidgets in the environment from which you launced Jupyter in order for the progress bars to render correctly. If you are using Jupyter Notebook or Jupyter Lab, please also install the corresponding extension package like widgetsnbextension or jupyterlab_widgets. If your dataset is large and you have access to multiple acceleration devices, you can distribute the computation across multiple devices. Make sure that your jax version is v0.4.4 or newer. For import jax from jax.experimental import mesh_utils from jax.sharding import PositionalSharding import numpy as np import numpyro import numpyro.distributions as dist from numpyro.infer import MCMC, NUTS X = np.random.randn(128, 3) y = np.random.randn(128) def model(X, y): beta = numpyro.sample("beta", dist.Normal(0, 1).expand([3])) numpyro.sample("obs", dist.Normal(X @ beta, 1), obs=y) mcmc = MCMC(NUTS(model), num_warmup=10, num_samples=10) # See https://jax.readthedocs.io/en/latest/notebooks/Distributed_arrays_and_automatic_parallelization.html sharding = PositionalSharding(mesh_utils.create_device_mesh((8,))) X_shard = jax.device_put(X, sharding.reshape(8, 1)) y_shard = jax.device_put(y, sharding.reshape(8)) mcmc.run(jax.random.PRNGKey(0), X_shard, y_shard) ☆ sampler (MCMCKernel) – an instance of MCMCKernel that determines the sampler for running MCMC. Currently, only HMC and NUTS are available. ☆ num_warmup (int) – Number of warmup steps. ☆ num_samples (int) – Number of samples to generate from the Markov chain. ☆ thinning (int) – Positive integer that controls the fraction of post-warmup samples that are retained. For example if thinning is 2 then every other sample is retained. Defaults to 1, i.e. no thinning. ☆ num_chains (int) – Number of MCMC chains to run. By default, chains will be run in parallel using jax.pmap(). If there are not enough devices available, chains will be run in sequence. ☆ postprocess_fn – Post-processing callable - used to convert a collection of unconstrained sample values returned from the sampler to constrained values that lie within the support of the sample sites. Additionally, this is used to return values at deterministic sites in the model. ☆ chain_method (str) – A callable jax transform like jax.vmap or one of ‘parallel’ (default), ‘sequential’, ‘vectorized’. The method ‘parallel’ is used to execute the drawing process in parallel on XLA devices (CPUs/GPUs/TPUs), If there are not enough devices for ‘parallel’, we fall back to ‘sequential’ method to draw chains sequentially. ‘vectorized’ method is an experimental feature which vectorizes the drawing method, hence allowing us to collect samples in parallel on a single device. ☆ progress_bar (bool) – Whether to enable progress bar updates. Defaults to True. ☆ jit_model_args (bool) – If set to True, this will compile the potential energy computation as a function of model arguments. As such, calling MCMC.run again on a same sized but different dataset will not result in additional compilation cost. Note that currently, this does not take effect for the case num_chains > 1 and chain_method == 'parallel'. It is possible to mix parallel and vectorized sampling, i.e., run vectorized chains on multiple devices using explicit pmap. Currently, doing so requires disabling the progress bar. For example, def do_mcmc(rng_key, n_vectorized=8): nuts_kernel = NUTS(model) mcmc = MCMC( return {**mcmc.get_samples(), **mcmc.get_extra_fields()} # Number of devices to pmap over n_parallel = jax.local_device_count() rng_keys = jax.random.split(PRNGKey(rng_seed), n_parallel) traces = pmap(do_mcmc)(rng_keys) # concatenate traces along pmap'ed axis trace = {k: np.concatenate(v) for k, v in traces.items()} property post_warmup_state¶ The state before the sampling phase. If this attribute is not None, run() will skip the warmup phase and start with the state specified in this attribute. This attribute can be used to sequentially draw MCMC samples. For example, mcmc = MCMC(NUTS(model), num_warmup=100, num_samples=100) first_100_samples = mcmc.get_samples() mcmc.post_warmup_state = mcmc.last_state mcmc.run(mcmc.post_warmup_state.rng_key) # or mcmc.run(random.PRNGKey(1)) second_100_samples = mcmc.get_samples() property last_state¶ The final MCMC state at the end of the sampling phase. warmup(rng_key, *args, extra_fields=(), collect_warmup=False, init_params=None, **kwargs)[source]¶ Run the MCMC warmup adaptation phase. After this call, self.post_warmup_state will be set and the run() method will skip the warmup adaptation phase. To run warmup again for the new data, it is required to run warmup() again. ○ rng_key (random.PRNGKey) – Random number generator key to be used for the sampling. ○ args – Arguments to be provided to the numpyro.infer.mcmc.MCMCKernel.init() method. These are typically the arguments needed by the model. ○ extra_fields (tuple or list) – Extra fields (aside from default_fields()) from the state object (e.g. numpyro.infer.hmc.HMCState for HMC) to collect during the MCMC run. Exclude sample sites from collection with “~`sampler.sample_field`.`sample_site`”. e.g. “~z.a” will prevent site “a” from being collected if you’re using the NUTS sampler. ○ collect_warmup (bool) – Whether to collect samples from the warmup phase. Defaults to False. ○ init_params – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn provided to the kernel. If the kernel is instantiated by a numpyro model, the initial parameters here correspond to latent values in unconstrained space. ○ kwargs – Keyword arguments to be provided to the numpyro.infer.mcmc.MCMCKernel.init() method. These are typically the keyword arguments needed by the model. run(rng_key, *args, extra_fields=(), init_params=None, **kwargs)[source]¶ Run the MCMC samplers and collect samples. ○ rng_key (random.PRNGKey) – Random number generator key to be used for the sampling. For multi-chains, a batch of num_chains keys can be supplied. If rng_key does not have batch_size, it will be split in to a batch of num_chains keys. ○ args – Arguments to be provided to the numpyro.infer.mcmc.MCMCKernel.init() method. These are typically the arguments needed by the model. ○ extra_fields (tuple or list of str) – Extra fields (aside from “z”, “diverging”) from the state object (e.g. numpyro.infer.hmc.HMCState for HMC) to be collected during the MCMC run. Note that subfields can be accessed using dots, e.g. “adapt_state.step_size” can be used to collect step sizes at each step. Exclude sample sites from collection with “~ `sampler.sample_field`.`sample_site`”. e.g. “~z.a” will prevent site “a” from being collected if you’re using the NUTS sampler. ○ init_params – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn provided to the kernel. If the kernel is instantiated by a numpyro model, the initial parameters here correspond to latent values in unconstrained space. ○ kwargs – Keyword arguments to be provided to the numpyro.infer.mcmc.MCMCKernel.init() method. These are typically the keyword arguments needed by the model. Get samples from the MCMC run. group_by_chain (bool) – Whether to preserve the chain dimension. If True, all samples will have num_chains as the size of their leading dimension. Samples having the same data type as init_params. The data type is a dict keyed on site names if a model containing Pyro primitives is used, but can be any jaxlib.pytree(), more generally (e.g. when defining a potential_fn for HMC that takes list args). You can then pass those samples to Predictive: posterior_samples = mcmc.get_samples() predictive = Predictive(model, posterior_samples=posterior_samples) samples = predictive(rng_key1, *model_args, **model_kwargs) Get extra fields from the MCMC run. group_by_chain (bool) – Whether to preserve the chain dimension. If True, all samples will have num_chains as the size of their leading dimension. Extra fields keyed by field names which are specified in the extra_fields keyword of run(). print_summary(prob=0.9, exclude_deterministic=True)[source]¶ Print the statistics of posterior samples collected during running this MCMC instance. ○ prob (float) – the probability mass of samples within the credible interval. ○ exclude_deterministic (bool) – whether or not print out the statistics at deterministic sites. Reduce the memory footprint of collected samples by transferring them to the host device. MCMC Kernels¶ class MCMCKernel[source]¶ Bases: ABC Defines the interface for the Markov transition kernel that is used for MCMC inference. >>> from collections import namedtuple >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import MCMC >>> MHState = namedtuple("MHState", ["u", "rng_key"]) >>> class MetropolisHastings(numpyro.infer.mcmc.MCMCKernel): ... sample_field = "u" ... def __init__(self, potential_fn, step_size=0.1): ... self.potential_fn = potential_fn ... self.step_size = step_size ... def init(self, rng_key, num_warmup, init_params, model_args, model_kwargs): ... return MHState(init_params, rng_key) ... def sample(self, state, model_args, model_kwargs): ... u, rng_key = state ... rng_key, key_proposal, key_accept = random.split(rng_key, 3) ... u_proposal = dist.Normal(u, self.step_size).sample(key_proposal) ... accept_prob = jnp.exp(self.potential_fn(u) - self.potential_fn(u_proposal)) ... u_new = jnp.where(dist.Uniform().sample(key_accept) < accept_prob, u_proposal, u) ... return MHState(u_new, rng_key) >>> def f(x): ... return ((x - 2) ** 2).sum() >>> kernel = MetropolisHastings(f) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=1000) >>> mcmc.run(random.PRNGKey(0), init_params=jnp.array([1., 2.])) >>> posterior_samples = mcmc.get_samples() >>> mcmc.print_summary() postprocess_fn(model_args, model_kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. abstract init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. abstract sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. property sample_field¶ The attribute of the state object passed to sample() that denotes the MCMC sample. This is used by postprocess_fn() and for reporting results in MCMC.print_summary(). property default_fields¶ The attributes of the state object to be collected by default during the MCMC run (when MCMC.run() is called). property is_ensemble_kernel¶ Denotes whether the kernel is an ensemble kernel. If True, diagnostics_str will be displayed during the MCMC run (when MCMC.run() is called) if chain_method = “vectorized”. Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. class BarkerMH(model=None, potential_fn=None, step_size=1.0, adapt_step_size=True, adapt_mass_matrix=True, dense_mass=False, target_accept_prob=0.4, init_strategy=<function init_to_uniform>)[source]¶ Bases: MCMCKernel This is a gradient-based MCMC algorithm of Metropolis-Hastings type that uses a skew-symmetric proposal distribution that depends on the gradient of the potential (the Barker proposal; see reference [1]). In particular the proposal distribution is skewed in the direction of the gradient at the current sample. We expect this algorithm to be particularly effective for low to moderate dimensional models, where it may be competitive with HMC and NUTS. We recommend to use this kernel with progress_bar=False in MCMC to reduce JAX’s dispatch overhead. 1. The Barker proposal: combining robustness and efficiency in gradient-based MCMC. Samuel Livingstone, Giacomo Zanella. >>> import jax >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import MCMC, BarkerMH >>> def model(): ... x = numpyro.sample("x", dist.Normal().expand([10])) ... numpyro.sample("obs", dist.Normal(x, 1.0), obs=jnp.ones(10)) >>> kernel = BarkerMH(model) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=1000, progress_bar=True) >>> mcmc.run(jax.random.PRNGKey(0)) >>> mcmc.print_summary() property model¶ property sample_field¶ The attribute of the state object passed to sample() that denotes the MCMC sample. This is used by postprocess_fn() and for reporting results in MCMC.print_summary(). Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. class HMC(model=None, potential_fn=None, kinetic_fn=None, step_size=1.0, inverse_mass_matrix=None, adapt_step_size=True, adapt_mass_matrix=True, dense_mass=False, target_accept_prob=0.8, num_steps= None, trajectory_length=6.283185307179586, init_strategy=<function init_to_uniform>, find_heuristic_step_size=False, forward_mode_differentiation=False, regularize_mass_matrix=True)[source]¶ Bases: MCMCKernel Hamiltonian Monte Carlo inference, using fixed trajectory length, with provision for step size and mass matrix adaptation. Until the kernel is used in an MCMC run, postprocess_fn will return the identity function. The default init strategy init_to_uniform might not be a good strategy for some models. You might want to try other init strategies like init_to_median. 1. MCMC Using Hamiltonian Dynamics, Radford M. Neal ☆ model – Python callable containing Pyro primitives. If model is provided, potential_fn will be inferred using the model. ☆ potential_fn – Python callable that computes the potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init() has the same type. ☆ kinetic_fn – Python callable that returns the kinetic energy given inverse mass matrix and momentum. If not provided, the default is euclidean kinetic energy. ☆ step_size (float) – Determines the size of a single step taken by the verlet integrator while computing the trajectory using Hamiltonian dynamics. If not specified, it will be set to 1. ☆ inverse_mass_matrix (numpy.ndarray or dict) – Initial value for inverse mass matrix. This may be adapted during warmup if adapt_mass_matrix = True. If no value is specified, then it is initialized to the identity matrix. For a potential_fn with general JAX pytree parameters, the order of entries of the mass matrix is the order of the flattened version of pytree parameters obtained with jax.tree_flatten, which is a bit ambiguous (see more at https://jax.readthedocs.io/en/latest/pytrees.html). If model is not None, here we can specify a structured block mass matrix as a dictionary, where keys are tuple of site names and values are the corresponding block of the mass matrix. For more information about structured mass matrix, see dense_mass argument. ☆ adapt_step_size (bool) – A flag to decide if we want to adapt step_size during warm-up phase using Dual Averaging scheme. ☆ adapt_mass_matrix (bool) – A flag to decide if we want to adapt mass matrix during warm-up phase using Welford scheme. ☆ dense_mass (bool or list) – This flag controls whether mass matrix is dense (i.e. full-rank) or diagonal (defaults to dense_mass=False). To specify a structured mass matrix, users can provide a list of tuples of site names. Each tuple represents a block in the joint mass matrix. For example, assuming that the model has latent variables “x”, “y”, “z” (where each variable can be multi-dimensional), possible specifications and corresponding mass matrix structures are as follows: ■ dense_mass=[(“x”, “y”)]: use a dense mass matrix for the joint (x, y) and a diagonal mass matrix for z ■ dense_mass=[] (equivalent to dense_mass=False): use a diagonal mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”, “y”, “z”)] (equivalent to full_mass=True): use a dense mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”,), (“y”,), (“z”)]: use dense mass matrices for each of x, y, and z (i.e. block-diagonal with 3 blocks) ☆ target_accept_prob (float) – Target acceptance probability for step size adaptation using Dual Averaging. Increasing this value will lead to a smaller step size, hence the sampling will be slower but more robust. Defaults to 0.8. ☆ num_steps (int) – if different than None, fix the number of steps allowed for each iteration. ☆ trajectory_length (float) – Length of a MCMC trajectory for HMC. Default value is \(2\pi\). ☆ init_strategy (callable) – a per-site initialization function. See Initialization Strategies section for available functions. ☆ find_heuristic_step_size (bool) – whether or not to use a heuristic function to adjust the step size at the beginning of each adaptation window. Defaults to False. ☆ forward_mode_differentiation (bool) – whether to use forward-mode differentiation or reverse-mode differentiation. By default, we use reverse mode but the forward mode can be useful in some cases to improve the performance. In addition, some control flow utility on JAX such as jax.lax.while_loop or jax.lax.fori_loop only supports forward-mode differentiation. See JAX’s The Autodiff Cookbook for more information. ☆ regularize_mass_matrix (bool) – whether or not to regularize the estimated mass matrix for numerical stability during warmup phase. Defaults to True. This flag does not take effect if adapt_mass_matrix == False. property model¶ property sample_field¶ The attribute of the state object passed to sample() that denotes the MCMC sample. This is used by postprocess_fn() and for reporting results in MCMC.print_summary(). property default_fields¶ The attributes of the state object to be collected by default during the MCMC run (when MCMC.run() is called). Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. init(rng_key, num_warmup, init_params=None, model_args=(), model_kwargs={})[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. sample(state, model_args, model_kwargs)[source]¶ Run HMC from the given HMCState and return the resulting HMCState. ○ state (HMCState) – Represents the current state. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state after running HMC. class NUTS(model=None, potential_fn=None, kinetic_fn=None, step_size=1.0, inverse_mass_matrix=None, adapt_step_size=True, adapt_mass_matrix=True, dense_mass=False, target_accept_prob=0.8, trajectory_length=None, max_tree_depth=10, init_strategy=<function init_to_uniform>, find_heuristic_step_size=False, forward_mode_differentiation=False, regularize_mass_matrix=True)[source]¶ Bases: HMC Hamiltonian Monte Carlo inference, using the No U-Turn Sampler (NUTS) with adaptive path length and mass matrix adaptation. Until the kernel is used in an MCMC run, postprocess_fn will return the identity function. The default init strategy init_to_uniform might not be a good strategy for some models. You might want to try other init strategies like init_to_median. 1. MCMC Using Hamiltonian Dynamics, Radford M. Neal 2. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo, Matthew D. Hoffman, and Andrew Gelman. 3. A Conceptual Introduction to Hamiltonian Monte Carlo`, Michael Betancourt ☆ model – Python callable containing Pyro primitives. If model is provided, potential_fn will be inferred using the model. ☆ potential_fn – Python callable that computes the potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init_kernel has the same type. ☆ kinetic_fn – Python callable that returns the kinetic energy given inverse mass matrix and momentum. If not provided, the default is euclidean kinetic energy. ☆ step_size (float) – Determines the size of a single step taken by the verlet integrator while computing the trajectory using Hamiltonian dynamics. If not specified, it will be set to 1. ☆ inverse_mass_matrix (numpy.ndarray or dict) – Initial value for inverse mass matrix. This may be adapted during warmup if adapt_mass_matrix = True. If no value is specified, then it is initialized to the identity matrix. For a potential_fn with general JAX pytree parameters, the order of entries of the mass matrix is the order of the flattened version of pytree parameters obtained with jax.tree_flatten, which is a bit ambiguous (see more at https://jax.readthedocs.io/en/latest/pytrees.html). If model is not None, here we can specify a structured block mass matrix as a dictionary, where keys are tuple of site names and values are the corresponding block of the mass matrix. For more information about structured mass matrix, see dense_mass argument. ☆ adapt_step_size (bool) – A flag to decide if we want to adapt step_size during warm-up phase using Dual Averaging scheme. ☆ adapt_mass_matrix (bool) – A flag to decide if we want to adapt mass matrix during warm-up phase using Welford scheme. ☆ dense_mass (bool or list) – This flag controls whether mass matrix is dense (i.e. full-rank) or diagonal (defaults to dense_mass=False). To specify a structured mass matrix, users can provide a list of tuples of site names. Each tuple represents a block in the joint mass matrix. For example, assuming that the model has latent variables “x”, “y”, “z” (where each variable can be multi-dimensional), possible specifications and corresponding mass matrix structures are as follows: ■ dense_mass=[(“x”, “y”)]: use a dense mass matrix for the joint (x, y) and a diagonal mass matrix for z ■ dense_mass=[] (equivalent to dense_mass=False): use a diagonal mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”, “y”, “z”)] (equivalent to full_mass=True): use a dense mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”,), (“y”,), (“z”)]: use dense mass matrices for each of x, y, and z (i.e. block-diagonal with 3 blocks) ☆ target_accept_prob (float) – Target acceptance probability for step size adaptation using Dual Averaging. Increasing this value will lead to a smaller step size, hence the sampling will be slower but more robust. Defaults to 0.8. ☆ trajectory_length (float) – Length of a MCMC trajectory for HMC. This arg has no effect in NUTS sampler. ☆ max_tree_depth (int) – Max depth of the binary tree created during the doubling scheme of NUTS sampler. Defaults to 10. This argument also accepts a tuple of integers (d1, d2), where d1 is the max tree depth during warmup phase and d2 is the max tree depth during post warmup phase. ☆ init_strategy (callable) – a per-site initialization function. See Initialization Strategies section for available functions. ☆ find_heuristic_step_size (bool) – whether or not to use a heuristic function to adjust the step size at the beginning of each adaptation window. Defaults to False. ☆ forward_mode_differentiation (bool) – whether to use forward-mode differentiation or reverse-mode differentiation. By default, we use reverse mode but the forward mode can be useful in some cases to improve the performance. In addition, some control flow utility on JAX such as jax.lax.while_loop or jax.lax.fori_loop only supports forward-mode differentiation. See JAX’s The Autodiff Cookbook for more class HMCGibbs(inner_kernel, gibbs_fn, gibbs_sites)[source]¶ Bases: MCMCKernel [EXPERIMENTAL INTERFACE] HMC-within-Gibbs. This inference algorithm allows the user to combine general purpose gradient-based inference (HMC or NUTS) with custom Gibbs samplers. Note that it is the user’s responsibility to provide a correct implementation of gibbs_fn that samples from the corresponding posterior conditional. ☆ inner_kernel – One of HMC or NUTS. ☆ gibbs_fn – A Python callable that returns a dictionary of Gibbs samples conditioned on the HMC sites. Must include an argument rng_key that should be used for all sampling. Must also include arguments hmc_sites and gibbs_sites, each of which is a dictionary with keys that are site names and values that are sample values. Note that a given gibbs_fn may not need make use of all these sample values. ☆ gibbs_sites (list) – a list of site names for the latent variables that are covered by the Gibbs sampler. >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import MCMC, NUTS, HMCGibbs >>> def model(): ... x = numpyro.sample("x", dist.Normal(0.0, 2.0)) ... y = numpyro.sample("y", dist.Normal(0.0, 2.0)) ... numpyro.sample("obs", dist.Normal(x + y, 1.0), obs=jnp.array([1.0])) >>> def gibbs_fn(rng_key, gibbs_sites, hmc_sites): ... y = hmc_sites['y'] ... new_x = dist.Normal(0.8 * (1-y), jnp.sqrt(0.8)).sample(rng_key) ... return {'x': new_x} >>> hmc_kernel = NUTS(model) >>> kernel = HMCGibbs(hmc_kernel, gibbs_fn=gibbs_fn, gibbs_sites=['x']) >>> mcmc = MCMC(kernel, num_warmup=100, num_samples=100, progress_bar=False) >>> mcmc.run(random.PRNGKey(0)) >>> mcmc.print_summary() sample_field = 'z'¶ property model¶ Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. class DiscreteHMCGibbs(inner_kernel, *, random_walk=False, modified=False)[source]¶ Bases: HMCGibbs [EXPERIMENTAL INTERFACE] A subclass of HMCGibbs which performs Metropolis updates for discrete latent sites. The site update order is randomly permuted at each step. This class supports enumeration of discrete latent variables. To marginalize out a discrete latent site, we can specify infer={‘enumerate’: ‘parallel’} keyword in its corresponding sample() ☆ inner_kernel – One of HMC or NUTS. ☆ random_walk (bool) – If False, Gibbs sampling will be used to draw a sample from the conditional p(gibbs_site | remaining sites). Otherwise, a sample will be drawn uniformly from the domain of gibbs_site. Defaults to False. ☆ modified (bool) – whether to use a modified proposal, as suggested in reference [1], which always proposes a new state for the current Gibbs site. Defaults to False. The modified scheme appears in the literature under the name “modified Gibbs sampler” or “Metropolised Gibbs sampler”. 1. Peskun’s theorem and a modified discrete-state Gibbs sampler, Liu, J. S. (1996) >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import DiscreteHMCGibbs, MCMC, NUTS >>> def model(probs, locs): ... c = numpyro.sample("c", dist.Categorical(probs)) ... numpyro.sample("x", dist.Normal(locs[c], 0.5)) >>> probs = jnp.array([0.15, 0.3, 0.3, 0.25]) >>> locs = jnp.array([-2, 0, 2, 4]) >>> kernel = DiscreteHMCGibbs(NUTS(model), modified=True) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=100000, progress_bar=False) >>> mcmc.run(random.PRNGKey(0), probs, locs) >>> mcmc.print_summary() >>> samples = mcmc.get_samples()["x"] >>> assert abs(jnp.mean(samples) - 1.3) < 0.1 >>> assert abs(jnp.var(samples) - 4.36) < 0.5 init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. class MixedHMC(inner_kernel, *, num_discrete_updates=None, random_walk=False, modified=False)[source]¶ Bases: DiscreteHMCGibbs Implementation of Mixed Hamiltonian Monte Carlo (reference [1]). The number of discrete sites to update at each MCMC iteration (n_D in reference [1]) is fixed at value 1. 1. Mixed Hamiltonian Monte Carlo for Mixed Discrete and Continuous Variables, Guangyao Zhou (2020) 2. Peskun’s theorem and a modified discrete-state Gibbs sampler, Liu, J. S. (1996) >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import HMC, MCMC, MixedHMC >>> def model(probs, locs): ... c = numpyro.sample("c", dist.Categorical(probs)) ... numpyro.sample("x", dist.Normal(locs[c], 0.5)) >>> probs = jnp.array([0.15, 0.3, 0.3, 0.25]) >>> locs = jnp.array([-2, 0, 2, 4]) >>> kernel = MixedHMC(HMC(model, trajectory_length=1.2), num_discrete_updates=20) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=100000, progress_bar=False) >>> mcmc.run(random.PRNGKey(0), probs, locs) >>> mcmc.print_summary() >>> samples = mcmc.get_samples() >>> assert "x" in samples and "c" in samples >>> assert abs(jnp.mean(samples["x"]) - 1.3) < 0.1 >>> assert abs(jnp.var(samples["x"]) - 4.36) < 0.5 init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. class HMCECS(inner_kernel, *, num_blocks=1, proxy=None)[source]¶ Bases: HMCGibbs [EXPERIMENTAL INTERFACE] HMC with Energy Conserving Subsampling. A subclass of HMCGibbs for performing HMC-within-Gibbs for models with subsample statements using the plate primitive. This implements Algorithm 1 of reference [1] but uses a naive estimation (without control variates) of log likelihood, hence might incur a high variance. The function can divide subsample indices into blocks and update only one block at each MCMC step to improve the acceptance rate of proposed subsamples as detailed in [3]. New subsample indices are proposed randomly with replacement at each MCMC step. 1. Hamiltonian Monte Carlo with energy conserving subsampling, Dang, K. D., Quiroz, M., Kohn, R., Minh-Ngoc, T., & Villani, M. (2019) 2. Speeding Up MCMC by Efficient Data Subsampling, Quiroz, M., Kohn, R., Villani, M., & Tran, M. N. (2018) 3. The Block Pseudo-Margional Sampler, Tran, M.-N., Kohn, R., Quiroz, M. Villani, M. (2017) 4. The Fundamental Incompatibility of Scalable Hamiltonian Monte Carlo and Naive Data Subsampling Betancourt, M. (2015) ☆ inner_kernel – One of HMC or NUTS. ☆ num_blocks (int) – Number of blocks to partition subsample into. ☆ proxy – Either taylor_proxy() for likelihood estimation, or, None for naive (in-between trajectory) subsampling as outlined in [4]. >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import HMCECS, MCMC, NUTS >>> def model(data): ... x = numpyro.sample("x", dist.Normal(0, 1)) ... with numpyro.plate("N", data.shape[0], subsample_size=100): ... batch = numpyro.subsample(data, event_dim=0) ... numpyro.sample("obs", dist.Normal(x, 1), obs=batch) >>> data = random.normal(random.PRNGKey(0), (10000,)) + 1 >>> kernel = HMCECS(NUTS(model), num_blocks=10) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=1000) >>> mcmc.run(random.PRNGKey(0), data) >>> samples = mcmc.get_samples()["x"] >>> assert abs(jnp.mean(samples) - 1.) < 0.1 postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. init(rng_key, num_warmup, init_params, model_args, model_kwargs)[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. static taylor_proxy(reference_params, degree=2)[source]¶ This is just a convenient static method which calls taylor_proxy(). class SA(model=None, potential_fn=None, adapt_state_size=None, dense_mass=True, init_strategy=<function init_to_uniform>)[source]¶ Bases: MCMCKernel Sample Adaptive MCMC, a gradient-free sampler. This is a very fast (in term of n_eff / s) sampler but requires many warmup (burn-in) steps. In each MCMC step, we only need to evaluate potential function at one point. Note that unlike in reference [1], we return a randomly selected (i.e. thinned) subset of approximate posterior samples of size num_chains x num_samples instead of num_chains x num_samples x We recommend to use this kernel with progress_bar=False in MCMC to reduce JAX’s dispatch overhead. 1. Sample Adaptive MCMC (https://papers.nips.cc/paper/9107-sample-adaptive-mcmc), Michael Zhu init(rng_key, num_warmup, init_params=None, model_args=(), model_kwargs={})[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. property model¶ property sample_field¶ The attribute of the state object passed to sample() that denotes the MCMC sample. This is used by postprocess_fn() and for reporting results in MCMC.print_summary(). property default_fields¶ The attributes of the state object to be collected by default during the MCMC run (when MCMC.run() is called). Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. sample(state, model_args, model_kwargs)[source]¶ Run SA from the given SAState and return the resulting SAState. ○ state (SAState) – Represents the current state. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state after running SA. class EnsembleSampler(model=None, potential_fn=None, *, randomize_split, init_strategy)[source]¶ Bases: MCMCKernel, ABC Abstract class for ensemble samplers. Each MCMC sample is divided into two sub-iterations in which half of the ensemble is updated. property model¶ property sample_field¶ The attribute of the state object passed to sample() that denotes the MCMC sample. This is used by postprocess_fn() and for reporting results in MCMC.print_summary(). property is_ensemble_kernel¶ Denotes whether the kernel is an ensemble kernel. If True, diagnostics_str will be displayed during the MCMC run (when MCMC.run() is called) if chain_method = “vectorized”. abstract init_inner_state(rng_key)[source]¶ return inner_state abstract update_active_chains(active, inactive, inner_state)[source]¶ return (updated active set of chains, updated inner state) init(rng_key, num_warmup, init_params=None, model_args=(), model_kwargs={})[source]¶ Initialize the MCMCKernel and return an initial state to begin sampling from. ○ rng_key (random.PRNGKey) – Random number generator key to initialize the kernel. ○ num_warmup (int) – Number of warmup steps. This can be useful when doing adaptation during warmup. ○ init_params (tuple) – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. The initial state representing the state of the kernel. This can be any class that is registered as a pytree. postprocess_fn(args, kwargs)[source]¶ Get a function that transforms unconstrained values at sample sites to values constrained to the site’s support, in addition to returning deterministic sites in the model. ○ model_args – Arguments to the model. ○ model_kwargs – Keyword arguments to the model. sample(state, model_args, model_kwargs)[source]¶ Given the current state, return the next state using the given transition kernel. ○ state – A pytree class representing the state for the kernel. For HMC, this is given by HMCState. In general, this could be any class that supports getattr. ○ model_args – Arguments provided to the model. ○ model_kwargs – Keyword arguments provided to the model. Next state. class AIES(model=None, potential_fn=None, randomize_split=False, moves=None, init_strategy=<function init_to_uniform>)[source]¶ Bases: EnsembleSampler Affine-Invariant Ensemble Sampling: a gradient free method that informs Metropolis-Hastings proposals by sharing information between chains. Suitable for low to moderate dimensional models. Generally, num_chains should be at least twice the dimensionality of the model. This kernel must be used with num_chains > 1 and chain_method=”vectorized in MCMC. The number of chains must be divisible by 2. emcee: The MCMC Hammer (https://iopscience.iop.org/article/10.1086/670067), Daniel Foreman-Mackey, David W. Hogg, Dustin Lang, and Jonathan Goodman. ☆ model – Python callable containing Pyro primitives. If model is provided, potential_fn will be inferred using the model. ☆ potential_fn – Python callable that computes the potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init() has the same type. ☆ randomize_split (bool) – whether or not to permute the chain order at each iteration. Defaults to False. ☆ moves – a dictionary mapping moves to their respective probabilities of being selected. Valid keys are AIES.DEMove() and AIES.StretchMove(). Both tend to work well in practice. If the sum of probabilities exceeds 1, the probabilities will be normalized. Defaults to {AIES.DEMove(): 1.0}. ☆ init_strategy (callable) – a per-site initialization function. See Initialization Strategies section for available functions. >>> import jax >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import MCMC, AIES >>> def model(): ... x = numpyro.sample("x", dist.Normal().expand([10])) ... numpyro.sample("obs", dist.Normal(x, 1.0), obs=jnp.ones(10)) >>> kernel = AIES(model, moves={AIES.DEMove() : 0.5, ... AIES.StretchMove() : 0.5}) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=2000, num_chains=20, chain_method='vectorized') >>> mcmc.run(jax.random.PRNGKey(0)) Given the current state, returns the diagnostics string to be added to progress bar for diagnostics purpose. return inner_state update_active_chains(active, inactive, inner_state)[source]¶ return (updated active set of chains, updated inner state) static DEMove(sigma=1e-05, g0=None)[source]¶ A proposal using differential evolution. This Differential evolution proposal is implemented following Nelson et al. (2013). ○ sigma – (optional) The standard deviation of the Gaussian used to stretch the proposal vector. Defaults to 1.0.e-5. ○ (optional) (g0) – The mean stretch factor for the proposal vector. By default, it is 2.38 / sqrt(2*ndim) as recommended by the two references. static StretchMove(a=2.0)[source]¶ A Goodman & Weare (2010) “stretch move” with parallelization as described in Foreman-Mackey et al. (2013). a – (optional) The stretch scale parameter. (default: 2.0) class ESS(model=None, potential_fn=None, randomize_split=True, moves=None, max_steps=10000, max_iter=10000, init_mu=1.0, tune_mu=True, init_strategy=<function init_to_uniform>)[source]¶ Bases: EnsembleSampler Ensemble Slice Sampling: a gradient free method that finds better slice sampling directions by sharing information between chains. Suitable for low to moderate dimensional models. Generally, num_chains should be at least twice the dimensionality of the model. This kernel must be used with num_chains > 1 and chain_method=”vectorized in MCMC. The number of chains must be divisible by 2. zeus: a PYTHON implementation of ensemble slice sampling for efficient Bayesian parameter inference (https://academic.oup.com/mnras/article/508/3/3589/6381726), Minas Karamanis, Florian Beutler, and John A. Peacock. Ensemble slice sampling (https://link.springer.com/article/10.1007/s11222-021-10038-2), Minas Karamanis, Florian Beutler. ☆ model – Python callable containing Pyro primitives. If model is provided, potential_fn will be inferred using the model. ☆ potential_fn – Python callable that computes the potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init() has the same type. ☆ randomize_split (bool) – whether or not to permute the chain order at each iteration. Defaults to True. ☆ moves – a dictionary mapping moves to their respective probabilities of being selected. If the sum of probabilities exceeds 1, the probabilities will be normalized. Valid keys include: ESS.DifferentialMove() -> default proposal, works well along a wide range of target distributions, ESS.GaussianMove() -> for approximately normally distributed targets, ESS.KDEMove() -> for multimodal posteriors - requires large num_chains, and they must be well initialized ESS.RandomMove() -> no chain interaction, useful for debugging. Defaults to {ESS.DifferentialMove (): 1.0}. ☆ max_steps (int) – number of maximum stepping-out steps per sample. Defaults to 10,000. ☆ max_iter (int) – number of maximum expansions/contractions per sample. Defaults to 10,000. ☆ init_mu (float) – initial scale factor. Defaults to 1.0. ☆ tune_mu (bool) – whether or not to tune the initial scale factor. Defaults to True. ☆ init_strategy (callable) – a per-site initialization function. See Initialization Strategies section for available functions. >>> import jax >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer import MCMC, ESS >>> def model(): ... x = numpyro.sample("x", dist.Normal().expand([10])) ... numpyro.sample("obs", dist.Normal(x, 1.0), obs=jnp.ones(10)) >>> kernel = ESS(model, moves={ESS.DifferentialMove() : 0.8, ... ESS.RandomMove() : 0.2}) >>> mcmc = MCMC(kernel, num_warmup=1000, num_samples=2000, num_chains=20, chain_method='vectorized') >>> mcmc.run(jax.random.PRNGKey(0)) return inner_state update_active_chains(active, inactive, inner_state)[source]¶ return (updated active set of chains, updated inner state) static RandomMove()[source]¶ The Karamanis & Beutler (2020) “Random Move” with parallelization. When this move is used the walkers move along random directions. There is no communication between the walkers and this Move corresponds to the vanilla Slice Sampling method. This Move should be used for debugging purposes only. static KDEMove(bw_method=None)[source]¶ The Karamanis & Beutler (2020) “KDE Move” with parallelization. When this Move is used the distribution of the walkers of the complementary ensemble is traced using a Gaussian Kernel Density Estimation methods. The walkers then move along random direction vectos sampled from this distribution. static GaussianMove()[source]¶ The Karamanis & Beutler (2020) “Gaussian Move” with parallelization. When this Move is used the walkers move along directions defined by random vectors sampled from the Gaussian approximation of the walkers of the complementary ensemble. static DifferentialMove()[source]¶ The Karamanis & Beutler (2020) “Differential Move” with parallelization. When this Move is used the walkers move along directions defined by random pairs of walkers sampled (with no replacement) from the complementary ensemble. This is the default choice and performs well along a wide range of target distributions. hmc(potential_fn=None, potential_fn_gen=None, kinetic_fn=None, algo='NUTS')[source]¶ Hamiltonian Monte Carlo inference, using either fixed number of steps or the No U-Turn Sampler (NUTS) with adaptive path length. 1. MCMC Using Hamiltonian Dynamics, Radford M. Neal 2. The No-U-turn sampler: adaptively setting path lengths in Hamiltonian Monte Carlo, Matthew D. Hoffman, and Andrew Gelman. 3. A Conceptual Introduction to Hamiltonian Monte Carlo`, Michael Betancourt ☆ potential_fn – Python callable that computes the potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init_kernel has the same type. ☆ potential_fn_gen – Python callable that when provided with model arguments / keyword arguments returns potential_fn. This may be provided to do inference on the same model with changing data. If the data shape remains the same, we can compile sample_kernel once, and use the same for multiple inference runs. ☆ kinetic_fn – Python callable that returns the kinetic energy given inverse mass matrix and momentum. If not provided, the default is euclidean kinetic energy. ☆ algo (str) – Whether to run HMC with fixed number of steps or NUTS with adaptive path length. Default is NUTS. a tuple of callables (init_kernel, sample_kernel), the first one to initialize the sampler, and the second one to generate samples given an existing one. Instead of using this interface directly, we would highly recommend you to use the higher level MCMC API instead. >>> import jax >>> from jax import random >>> import jax.numpy as jnp >>> import numpyro >>> import numpyro.distributions as dist >>> from numpyro.infer.hmc import hmc >>> from numpyro.infer.util import initialize_model >>> from numpyro.util import fori_collect >>> true_coefs = jnp.array([1., 2., 3.]) >>> data = random.normal(random.PRNGKey(2), (2000, 3)) >>> labels = dist.Bernoulli(logits=(true_coefs * data).sum(-1)).sample(random.PRNGKey(3)) >>> def model(data, labels): ... coefs = numpyro.sample('coefs', dist.Normal(jnp.zeros(3), jnp.ones(3))) ... intercept = numpyro.sample('intercept', dist.Normal(0., 10.)) ... return numpyro.sample('y', dist.Bernoulli(logits=(coefs * data + intercept).sum(-1)), obs=labels) >>> model_info = initialize_model(random.PRNGKey(0), model, model_args=(data, labels,)) >>> init_kernel, sample_kernel = hmc(model_info.potential_fn, algo='NUTS') >>> hmc_state = init_kernel(model_info.param_info, ... trajectory_length=10, ... num_warmup=300) >>> samples = fori_collect(0, 500, sample_kernel, hmc_state, ... transform=lambda state: model_info.postprocess_fn(state.z)) >>> print(jnp.mean(samples['coefs'], axis=0)) [0.9153987 2.0754058 2.9621222] init_kernel(init_params, num_warmup, step_size=1.0, inverse_mass_matrix=None, adapt_step_size=True, adapt_mass_matrix=True, dense_mass=False, target_accept_prob=0.8, *, num_steps=None, trajectory_length=6.283185307179586, max_tree_depth=10, find_heuristic_step_size=False, forward_mode_differentiation=False, regularize_mass_matrix=True, model_args=(), model_kwargs=None, rng_key= Initializes the HMC sampler. ☆ init_params – Initial parameters to begin sampling. The type must be consistent with the input type to potential_fn. ☆ num_warmup (int) – Number of warmup steps; samples generated during warmup are discarded. ☆ step_size (float) – Determines the size of a single step taken by the verlet integrator while computing the trajectory using Hamiltonian dynamics. If not specified, it will be set to 1. ☆ inverse_mass_matrix (numpy.ndarray or dict) – Initial value for inverse mass matrix. This may be adapted during warmup if adapt_mass_matrix = True. If no value is specified, then it is initialized to the identity matrix. For a potential_fn with general JAX pytree parameters, the order of entries of the mass matrix is the order of the flattened version of pytree parameters obtained with jax.tree_flatten, which is a bit ambiguous (see more at https://jax.readthedocs.io/en/latest/pytrees.html). If model is not None, here we can specify a structured block mass matrix as a dictionary, where keys are tuple of site names and values are the corresponding block of the mass matrix. For more information about structured mass matrix, see dense_mass argument. ☆ adapt_step_size (bool) – A flag to decide if we want to adapt step_size during warm-up phase using Dual Averaging scheme. ☆ adapt_mass_matrix (bool) – A flag to decide if we want to adapt mass matrix during warm-up phase using Welford scheme. ☆ dense_mass (bool or list) – This flag controls whether mass matrix is dense (i.e. full-rank) or diagonal (defaults to dense_mass=False). To specify a structured mass matrix, users can provide a list of tuples of site names. Each tuple represents a block in the joint mass matrix. For example, assuming that the model has latent variables “x”, “y”, “z” (where each variable can be multi-dimensional), possible specifications and corresponding mass matrix structures are as follows: ■ dense_mass=[(“x”, “y”)]: use a dense mass matrix for the joint (x, y) and a diagonal mass matrix for z ■ dense_mass=[] (equivalent to dense_mass=False): use a diagonal mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”, “y”, “z”)] (equivalent to full_mass=True): use a dense mass matrix for the joint (x, y, z) ■ dense_mass=[(“x”,), (“y”,), (“z”)]: use dense mass matrices for each of x, y, and z (i.e. block-diagonal with 3 blocks) ☆ target_accept_prob (float) – Target acceptance probability for step size adaptation using Dual Averaging. Increasing this value will lead to a smaller step size, hence the sampling will be slower but more robust. Defaults to 0.8. ☆ num_steps (int) – if different than None, fix the number of steps allowed for each iteration. ☆ trajectory_length (float) – Length of a MCMC trajectory for HMC. Default value is \(2\pi\). ☆ max_tree_depth (int) – Max depth of the binary tree created during the doubling scheme of NUTS sampler. Defaults to 10. This argument also accepts a tuple of integers (d1, d2), where d1 is the max tree depth during warmup phase and d2 is the max tree depth during post warmup phase. ☆ find_heuristic_step_size (bool) – whether to a heuristic function to adjust the step size at the beginning of each adaptation window. Defaults to False. ☆ forward_mode_differentiation (bool) – whether to use forward-mode differentiation or reverse-mode differentiation. By default, we use reverse mode but the forward mode can be useful in some cases to improve the performance. In addition, some control flow utility on JAX such as jax.lax.while_loop or jax.lax.fori_loop only supports forward-mode differentiation. See JAX’s The Autodiff Cookbook for more ☆ regularize_mass_matrix (bool) – whether or not to regularize the estimated mass matrix for numerical stability during warmup phase. Defaults to True. This flag does not take effect if adapt_mass_matrix == False. ☆ model_args (tuple) – Model arguments if potential_fn_gen is specified. ☆ model_kwargs (dict) – Model keyword arguments if potential_fn_gen is specified. ☆ rng_key (jax.random.PRNGKey) – random key to be used as the source of randomness. sample_kernel(hmc_state, model_args=(), model_kwargs=None)¶ Given an existing HMCState, run HMC with fixed (possibly adapted) step size and return a new HMCState. ☆ hmc_state – Current sample (and associated state). ☆ model_args (tuple) – Model arguments if potential_fn_gen is specified. ☆ model_kwargs (dict) – Model keyword arguments if potential_fn_gen is specified. new proposed HMCState from simulating Hamiltonian dynamics given existing state. taylor_proxy(reference_params, degree)[source]¶ Control variate for unbiased log likelihood estimation using a Taylor expansion around a reference parameter. Suggested for subsampling in [1]. ☆ reference_params (dict) – Model parameterization at MLE or MAP-estimate. ☆ degree – number of terms in the Taylor expansion, either one or two. [1] On Markov chain Monte Carlo Methods For Tall Data Bardenet., R., Doucet, A., Holmes, C. (2017) BarkerMHState = <class 'numpyro.infer.barker.BarkerMHState'>¶ A namedtuple() consisting of the following fields: ☆ i - iteration. This is reset to 0 after warmup. ☆ z - Python collection representing values (unconstrained samples from the posterior) at latent sites. ☆ potential_energy - Potential energy computed at the given value of z. ☆ z_grad - Gradient of potential energy w.r.t. latent sample sites. ☆ accept_prob - Acceptance probability of the proposal. Note that z does not correspond to the proposal if it is rejected. ☆ mean_accept_prob - Mean acceptance probability until current iteration during warmup adaptation or sampling (for diagnostics). ☆ adapt_state - A HMCAdaptState namedtuple which contains adaptation information during warmup: ○ step_size - Step size to be used by the integrator in the next iteration. ○ inverse_mass_matrix - The inverse mass matrix to be used for the next iteration. ○ mass_matrix_sqrt - The square root of mass matrix to be used for the next iteration. In case of dense mass, this is the Cholesky factorization of the mass matrix. ☆ rng_key - random number generator seed used for generating proposals, etc. HMCState = <class 'numpyro.infer.hmc.HMCState'>¶ A namedtuple() consisting of the following fields: ☆ i - iteration. This is reset to 0 after warmup. ☆ z - Python collection representing values (unconstrained samples from the posterior) at latent sites. ☆ z_grad - Gradient of potential energy w.r.t. latent sample sites. ☆ potential_energy - Potential energy computed at the given value of z. ☆ energy - Sum of potential energy and kinetic energy of the current state. ☆ r - The current momentum variable. If this is None, a new momentum variable will be drawn at the beginning of each sampling step. ☆ trajectory_length - The amount of time to run HMC dynamics in each sampling step. This field is not used in NUTS. ☆ num_steps - Number of steps in the Hamiltonian trajectory (for diagnostics). In HMC sampler, trajectory_length should be None for step_size to be adapted. In NUTS sampler, the tree depth of a trajectory can be computed from this field with tree_depth = np.log2(num_steps).astype(int) + 1. ☆ accept_prob - Acceptance probability of the proposal. Note that z does not correspond to the proposal if it is rejected. ☆ mean_accept_prob - Mean acceptance probability until current iteration during warmup adaptation or sampling (for diagnostics). ☆ diverging - A boolean value to indicate whether the current trajectory is diverging. ☆ adapt_state - A HMCAdaptState namedtuple which contains adaptation information during warmup: ○ step_size - Step size to be used by the integrator in the next iteration. ○ inverse_mass_matrix - The inverse mass matrix to be used for the next iteration. ○ mass_matrix_sqrt - The square root of mass matrix to be used for the next iteration. In case of dense mass, this is the Cholesky factorization of the mass matrix. ☆ rng_key - random number generator seed used for the iteration. HMCGibbsState = <class 'numpyro.infer.hmc_gibbs.HMCGibbsState'>¶ □ z - a dict of the current latent values (both HMC and Gibbs sites) □ hmc_state - current HMCState □ rng_key - random key for the current step SAState = <class 'numpyro.infer.sa.SAState'>¶ A namedtuple() used in Sample Adaptive MCMC. This consists of the following fields: ☆ i - iteration. This is reset to 0 after warmup. ☆ z - Python collection representing values (unconstrained samples from the posterior) at latent sites. ☆ potential_energy - Potential energy computed at the given value of z. ☆ accept_prob - Acceptance probability of the proposal. Note that z does not correspond to the proposal if it is rejected. ☆ mean_accept_prob - Mean acceptance probability until current iteration during warmup or sampling (for diagnostics). ☆ diverging - A boolean value to indicate whether the new sample potential energy is diverging from the current one. ☆ adapt_state - A SAAdaptState namedtuple which contains adaptation information: ○ zs - Step size to be used by the integrator in the next iteration. ○ pes - Potential energies of zs. ○ loc - Mean of those zs. ○ inv_mass_matrix_sqrt - If using dense mass matrix, this is Cholesky of the covariance of zs. Otherwise, this is standard deviation of those zs. ☆ rng_key - random number generator seed used for the iteration. EnsembleSamplerState = <class 'numpyro.infer.ensemble.EnsembleSamplerState'>¶ A namedtuple() consisting of the following fields: ☆ z - Python collection representing values (unconstrained samples from the posterior) at latent sites. ☆ inner_state - A namedtuple containing information needed to update half the ensemble. ☆ rng_key - random number generator seed used for generating proposals, etc. AIESState = <class 'numpyro.infer.ensemble.AIESState'>¶ A namedtuple() consisting of the following fields. ☆ i - iteration. ☆ accept_prob - Acceptance probability of the proposal. Note that z does not correspond to the proposal if it is rejected. ☆ mean_accept_prob - Mean acceptance probability until current iteration during warmup adaptation or sampling (for diagnostics). ☆ rng_key - random number generator seed used for generating proposals, etc. ESSState = <class 'numpyro.infer.ensemble.ESSState'>¶ A namedtuple() used as an inner state for Ensemble Sampler. This consists of the following fields: ☆ i - iteration. ☆ n_expansions - number of expansions in the current batch. Used for tuning mu. ☆ n_contractions - number of contractions in the current batch. Used for tuning mu. ☆ mu - Scale factor. This is tuned if tune_mu=True. ☆ rng_key - random number generator seed used for generating proposals, etc. TensorFlow Kernels¶ Thin wrappers around TensorFlow Probability (TFP) MCMC kernels. For details on the TFP MCMC kernel interface, see its TransitionKernel docs. class TFPKernel(model=None, potential_fn=None, init_strategy=<function init_to_uniform>, **kernel_kwargs)[source]¶ A thin wrapper for TensorFlow Probability (TFP) MCMC transition kernels. The argument target_log_prob_fn in TFP is replaced by either model or potential_fn (which is the negative of This class can be used to convert a TFP kernel to a NumPyro-compatible one as follows: from numpyro.contrib.tfp.mcmc import TFPKernel kernel = TFPKernel[tfp.mcmc.NoUTurnSampler](model, step_size=1.) By default, uncalibrated kernels will be inner kernels of the MetropolisHastings kernel. For ReplicaExchangeMC, TFP requires that the shape of step_size of the inner kernel must be [len(inverse_temperatures), 1] or [len(inverse_temperatures), latent_size]. ☆ model – Python callable containing Pyro primitives. If model is provided, potential_fn will be inferred using the model. ☆ potential_fn – Python callable that computes the target potential energy given input parameters. The input parameters to potential_fn can be any python collection type, provided that init_params argument to init() has the same type. ☆ init_strategy (callable) – a per-site initialization function. See Initialization Strategies section for available functions. ☆ kernel_kwargs – other arguments to be passed to TFP kernel constructor.
{"url":"https://num.pyro.ai/en/stable/mcmc.html","timestamp":"2024-11-01T22:04:45Z","content_type":"text/html","content_length":"323957","record_id":"<urn:uuid:10ad952b-c1af-4df7-9f57-9c4ffd4de0e4>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00828.warc.gz"}
Modern Portfolio Theory-Searching For the Optimal Portfolio-Portfolio Management in Python In the previous installment, we presented a description of the Model Portfolio Theory and provided a concrete example in Python. We also explained the concept of an Efficient Frontier and provided a visual presentation of it. Recall that, … the efficient frontier (or portfolio frontier) is an investment portfolio which occupies the “efficient” parts of the risk–return spectrum. Formally, it is the set of portfolios which satisfy the condition that no other portfolio exists with a higher expected return but with the same standard deviation of return (i.e., the risk). The efficient frontier was first formulated by Harry Markowitz in 1952. A combination of assets, i.e. a portfolio, is referred to as “efficient” if it has the best possible expected level of return for its level of risk (which is represented by the standard deviation of the portfolio’s return). Here, every possible combination of risky assets can be plotted in risk–expected return space, and the collection of all such possible portfolios defines a region in this space. In the absence of the opportunity to hold a risk-free asset, this region is the opportunity set (the feasible set). The positively sloped (upward-sloped) top boundary of this region is a portion of a hyperbola and is called the “efficient frontier”. Read more In this follow-up post, we are going to search for the optimal portfolio, i.e. one that has the highest risk-adjusted return. To do so, we will maximize the portfolio’s Sharpe ratio. The Sharpe Ratio is a financial metric that helps investors determine the return of an investment compared to its risk. It presents the average return that investors earn above the risk-free rate per unit of volatility or risk. The higher the Sharpe Ratio of a portfolio, the better it is in terms of risk-adjusted return. Our hypothetical portfolio consists of 3 Exchange Traded Funds: SPY, TLT, and GLD which track the S&P500, long-term Treasury bond, and gold respectively. We downloaded 10 years of data from Yahoo Finance and utilized a Python program to search for the optimal portfolio. The figure below shows the Efficient Frontier along with the optimal portfolio (depicted by the red dot). The figure below shows the optimal portfolio’s composition, return, volatility, and the Sharpe ratio. Click on the link below to download the Python program. Post Source Here: Modern Portfolio Theory-Searching For the Optimal Portfolio-Portfolio Management in Python
{"url":"https://derivvaluation.medium.com/modern-portfolio-theory-searching-for-the-optimal-portfolio-portfolio-management-in-python-4c5624753bab?source=user_profile_page---------6-------------168fa6d5352f---------------","timestamp":"2024-11-08T23:47:05Z","content_type":"text/html","content_length":"97745","record_id":"<urn:uuid:b11510fe-7b43-457d-854c-7d5663ec958a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00002.warc.gz"}
st: RE: RE: RE: Find all subsets of variables [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: RE: RE: Find all subsets of variables From "Lachenbruch, Peter" <[email protected]> To <[email protected]> Subject st: RE: RE: RE: Find all subsets of variables Date Wed, 24 Sep 2008 09:27:15 -0700 With a large list of variables and fairly large subset size (e.g., number of variables, p, is 50 and subset size is 15) this could be fairly time-consuming (my guess) - it is 2.25x10^12 sets. Alan - do you have any ideas on the time required for this? Am I nuts? This says nothing about the amount of output... Peter A. Lachenbruch Department of Public Health Oregon State University Corvallis, OR 97330 Phone: 541-737-3832 FAX: 541-737-4001 -----Original Message----- From: [email protected] [mailto:[email protected]] On Behalf Of Feiveson, Alan H. (JSC-SK311) Sent: Wednesday, September 24, 2008 7:52 AM To: [email protected] Subject: st: RE: RE: Find all subsets of variables also see tryem ssc des tryem This will run most estimation commands for a given subset size (as opposed to all subsets), but also allows for user-defined criteria to select the "best" subset. Al Feiveson -----Original Message----- From: [email protected] [mailto:[email protected]] On Behalf Of Martin Weiss Sent: Wednesday, September 24, 2008 9:25 AM To: [email protected] Subject: st: RE: Find all subsets of variables Also note -stepwise- and -nestreg- as similar commands. A related question would be how one could capture model selection criteria (adjusted R square) for the regressions run on the covariate combinations thrown up by cap ssc inst selectvars sysuse auto, clear selectvars headroom trunk length foreach v in `r(varlist)'{ regress mpg `v' The -postfile- suite of commands is an obvious solution, but I imagine there are more convenient ways to do it... -----Original Message----- From: [email protected] [mailto:[email protected]] On Behalf Of junin Sent: Wednesday, September 24, 2008 4:01 PM To: [email protected] Subject: st: Find all subsets of variables Dear all, i want to find out all subsets of a given set of variables for model testing. As an example: A set of variables var1 var2 var3 var4 should give me: var1 var2 var3 var4 var1 var2 var3 var1 var2 var4 var1 var3 var4 var1 var4 var2 var3 var4 and so forth. I would like to test all possible model configurations. Is there a command in Stata, which could be convenient to use? Thank you for any help, * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"https://www.stata.com/statalist/archive/2008-09/msg01083.html","timestamp":"2024-11-08T02:49:17Z","content_type":"text/html","content_length":"13590","record_id":"<urn:uuid:ee891c0a-b079-4d70-ba70-a2746d654753>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00078.warc.gz"}
Energy Use of Home Appliances How is the energy use of home appliances calculated? Let’s start by looking at how you calculate the usage of power. Power Usage Let’s start with the example of two cyclists who pedaled 10 miles and used the same amount of energy (218 calories). In this example, one has a faster time than the other. This cyclist demonstrated the most power. • Power is the rate at which we do work. • Energy is the capacity to do work. • Work is the amount done. Measuring Power Units of power are not the same as units of energy (i.e., Btus, calories). Units of power are measured in terms of units of energy used per some unit of time. Examples of Units of Power include: • Watt (W) = 1 joule of energy per second or 1 J/S • BTU per hour (BTUs/h) = 1,055J • Horsepower (hp) = 550 foot-pounds per second or 550 ft lb/S • Calories per second (cal/sec) • Kilowatt (kW) = 1000 watts Calculating Power Power can be determined by the following formula: Power = Energy (or work) / Time Energy = Power x Duration of Usage (Time) On a winter day, a home needs 1 x 106 or 1,000,000 BTUs of fuel energy every 24 hours to maintain the interior at 65° F. At what rate is the energy being consumed in Watts? If 1 J/s = 1 Watt, and 1000 Watt = 1kW, then 12,200 J/s = 12,200 Watts = 12.2 kW To solve this problem, you must realize the following: You know the Power (1,000,000 BTUs/24 hours) and the time (24 hours), so you need to solve for Energy. The measurements must be consistent, so the BTUs should be converted to a consistent measure, such as Joules: 1 Watt = 1 J/s and 1 BTU = 1,055 J If using Joules per second instead of watts, you must convert 24 hours into seconds or divide it by the number of seconds in an hour (3600). Power & Cost of Energy We can also use a version of the Power formula to determine Cost of Energy: Energy Use = Power × Time of Power Use Cost of Energy = Energy Used × Cost of the Unit of Energy If a 100 W light bulb is accidentally left on overnight (8 hours), how much energy does it consume? Energy Use = Power × Time of Power Use Energy Use = 100 W × 8h = 800 Wh or 0.8kWh How much energy does this cost, if electricity costs 10 cents per Kilowatt? Cost of Energy = Energy Used × Cost of the Unit of Energy Cost of Energy = .8kWh × 10 cents = $0.08 Energy Consumption We know that power is calculated by power = energy/time or energy = power x duration of usage (time). By modifying this formula slightly, we can determine the energy consumption per day of appliances by applying the following formula. Energy Consumption / Day = Power Consumption × Hours Used / Day • Energy Consumption will be measured in Kilowatt hours (kWh) – like on your utility bills. • Power Consumption will be measured in Watts • Hours used per Day will be the actual time you use the appliance. Since we want to measure energy consumption in Kilowatt hours, we must change the way power consumption is measured from watts to kilowatts (kWh). We know that 1 kilowatt hour (kWh) = 1,000 Watts hours, so we can adjust the formula above to: Energy Consumption / Day ( KWh ) = Power Consumption ( Watts / 1000 ) × Hours Used / Day Example 1: Calculating Energy Use of a Ceiling Fan If you use a ceiling fan (200 watts) for four hours per day, and for 120 days per year, what would be the annual energy consumption? Use this formula: Energy Consumption / Day ( KWh ) = Power Consumption ( Watts / 1000 ) × Hours Used / Day Energy Consumption per Day ( kWh ) = ( 200 / 1000 ) × 4 ( hours used per day ) Energy Consumption per Day ( kWh ) = ( 1/5 ) × 4 Energy Consumption per Day ( kWh ) = 4/5 or 0.8 So, the Energy Consumption per Day is 0.8 kWh To find out energy for 120 days, do simple multiplication: 0.8 x 120 = 96 kWh Example 2: Calculating the Annual Cost of a Ceiling Fan If the price per kWh for electricity is $0.0845, what is the annual cost to operate the ceiling fan? Annual Cost = Annual Energy Consumption (KWh) × price per KWh Annual Cost = 96kWh × $0.0845/kWh = $8.12 Want Another Example? If you use a personal computer (120 Watts) and monitor (150 Watts) for four hours per day, and for 365 days per year, what would be the annual energy consumption? Energy Consumption/Day (kWh) = (270/1000) × 4 (hours used / day) Energy Consumption per Day (kWh) = 1.08 So the Energy Consumption per Day is 1.08 kWh. To find out energy for 365 days, do simple multiplication: 1.08 kWh × 365 days = 394.2 kWh If electricity is $0.0845 per kWh, the annual cost would be: Cost = 394.2 kWh × $0.0845/kWh = $33.30 Energy Usage of a Standard Refrigerator What is the energy consumption of a refrigerator with a wattage rating of 700 Watts when it is operated for 24 hours a day? Step 1 To solve, use the following formula: Energy Consumption = Power Consumption × Number of Hours Operated • Energy Consumption = Watt Hours (Wh) or KiloWatt Hours (kWh) • Power Consumption = Watts (W) or kW (KiloWatts) • Number of Hours Operated = Hours (h) For the example above: Energy Consumption = 700 W x 24 h Energy Consumption = 16800 Wh Step 2 To convert from Wh to kWh, remember that 1kWh = 1000 Wh To solve, set up as a ratio and use linear algebra to solve for ? 1 kWh/1000 Wh = ? kWh / 16800 Wh = 16,800 Wh (1 kWh) / 1000 Wh = 16.8 kWh Locating Wattage You can usually find the wattage of most appliances stamped on the bottom or back of the appliance or on its “nameplate.” The wattage listed is the maximum power drawn by the appliance. Since many appliances have a range of settings (for example, the volume on a radio), the actual amount of power consumed depends on the setting used at any one time. Photo 1. You can find the wattage information on the bottom or back of many appliances. Credit: thefamily8(link is external) from flickr is licensed under BY CC 2.0 A refrigerator, although turned “on” all the time, actually cycles on and off at a rate that depends on a number of factors. These factors include how well it is insulated, room temperature, freezer temperature, how often the door is opened, if the coils are clean if it is defrosted regularly, and the condition of the door seals. To get an approximate figure for the number of hours that a refrigerator actually operates at its maximum wattage, divide the total time the refrigerator is plugged in by three. Table 1 shows the wattage of some typical household appliances. Power consumption (Wattage) Appliance Wattage (range) Aquarium 50 – 1210 Clock Radio 10 Coffee Maker 900 – 1200 Clothes Washer 350 – 500 Clothes Dryer 1800-5000 Dishwasher 1200-2400 Dehumidifier 785 Electric Blanket – Single/Double 60 / 100 Fan – ceiling 65 – 175 Fan – window 55 – 250 Fan – furnace 750 Fan – whole house 240 – 750 Hair Dryer 1200-1875 Heater (portable) 750 – 1500 Laptop 50 Microwave Oven 750-1100 Personal Computer – CPU – awake / asleep 120 / 30 or less Personal Computer – Monitor – awake / asleep 150 / 30 or less Refrigerator 725 36“ Television 133 Toaster 800-1400 Water Heater 4500-5500 Amperes and Voltage If the wattage is not listed on the appliance, you can still estimate it by finding the current draw (in amperes) and multiplying that by the voltage used by the appliance. Most appliances in the United States use 120 volts. Larger appliances, such as clothes dryers and electric cooktops, use 240 volts. The amperes might be stamped on the unit in place of the wattage. If not, find an ammeter to measure the current flowing through it. You can obtain this type of ammeter in stores that sell electrical and electronic equipment. Take a reading while the device is running; this is the actual amount of current being used at that instant. Phantom Loads Also, note that many appliances continue to draw a small amount of power when they are switched “off.” These “phantom loads” occur in most appliances that use electricity, such as VCRs, televisions, stereos, computers, and kitchen appliances. Most phantom loads will increase the appliance’s energy consumption by a few watts per hour. These loads can be avoided by unplugging the appliance or using a power strip and using the switch on the power strip to cut all power to the appliance. Dr. Sarma Pisupati is a professor with the Department of Energy and Mineral Engineering, College of Earth and Mineral Sciences with Pennsylvania State University. Article extracted from https:// Dr. Sarma Pisupati Dr. Sarma Pisupati is a professor with the Department of Energy and Mineral Engineering, College of Earth and Mineral Sciences with Pennsylvania State University. Article extracted from https://
{"url":"https://iaeimagazine.org/electrical-fundamentals/energy-use-of-home-appliances/","timestamp":"2024-11-10T06:09:49Z","content_type":"text/html","content_length":"137482","record_id":"<urn:uuid:c0a25400-d2b6-4c47-841c-0c63c33551cd>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00583.warc.gz"}
The Rich Club of the C. elegans Neuronal Connectome There is increasing interest in topological analysis of brain networks as complex systems, with researchers often using neuroimaging to represent the large-scale organization of nervous systems without precise cellular resolution. Here we used graph theory to investigate the neuronal connectome of the nematode worm Caenorhabditis elegans, which is defined anatomically at a cellular scale as 2287 synaptic connections between 279 neurons. We identified a small number of highly connected neurons as a rich club (N = 11) interconnected with high efficiency and high connection distance. Rich club neurons comprise almost exclusively the interneurons of the locomotor circuits, with known functional importance for coordinated movement. The rich club neurons are connector hubs, with high betweenness centrality, and many intermodular connections to nodes in different modules. On identifying the shortest topological paths (motifs) between pairs of peripheral neurons, the motifs that are found most frequently traverse the rich club. The rich club neurons are born early in development, before visible movement of the animal and before the main phase of developmental elongation of its body. We conclude that the high wiring cost of the globally integrative rich club of neurons in the C. elegans connectome is justified by the adaptive value of coordinated movement of the animal. The economical trade-off between physical cost and behavioral value of rich club organization in a cellular connectome confirms theoretical expectations and recapitulates comparable results from human neuroimaging on much larger scale networks, suggesting that this may be a general and scale-invariant principle of brain network organization. The nematode worm, Caenorhabditis elegans, currently provides the only example of a nervous system that has been mapped quite completely and exactly at a cellular level. Detailed knowledge has accumulated about many aspects of this system (White et al., 1986; Hall and Altun, 2008), including the anatomical location, developmental history, and functional role (inferred from behavioral consequences of laser ablation) of each neuron (Sulston, 1976; Chalfie, 1985; Wicks et al., 1996). There is growing interest in the network properties or connectome of the C. elegans nervous system. It has been shown that the total wiring cost of the network, typically approximated by the physical connection distance between neurons, is nearly minimized by the anatomical layout of neurons and synapses (Chen et al., 2006). The topological layout of the connectome has also been quantified by representing the nervous system as a graph in which each node denotes a neuron and each (directed or undirected) edge denotes a synaptic connection between neurons. This simple graphical model of the C. elegans connectome has small-world network properties: a combination of high local clustering of connections between topologically neighboring neurons and short topological path lengths between any pair of neurons (Watts and Strogatz, 1998). Short path length is equivalent to high topological efficiency of information transfer and the high efficiency of the C. elegans connectome (47% of maximum efficiency) is achieved for relatively low connection density (4% of maximum synaptic connectivity between neurons; Latora and Marchiori, 2001). The wiring cost of the C. elegans connectome is strongly but not strictly minimized (Bassett et al., 2010). Most connections are short distance and the wiring cost of the system can be further reduced by computational rewiring algorithms, albeit at the expense of an increase in path length between neurons (Kaiser and Hilgetag, 2006; Kaiser and Varier, 2011). In the present study, we have further explored the C. elegans nervous system with a special focus on its “rich club.” Rich clubs are elite cliques of high-degree network hubs that are connected to each other topologically with high efficiency (i.e., there is a short path length between any pair of rich club nodes). Many complex systems can be partitioned into a small rich club and a large poor periphery (Colizza et al., 2006), and the rich club is usually valuable to the overall function of the network. For example, it was shown recently that brain anatomical networks derived from human neuroimaging data included a rich club of association cortical hubs that were considered likely to be valuable for adaptive (cognitive) function. The human brain rich club nodes were connected to each other efficiently by white matter tracts traversing greater anatomical distances, on average, than the tracts connecting more peripheral nodes (van den Heuvel and Sporns, 2011). Therefore, the human brain rich club putatively confers high value for high physical connection cost. We aimed to test the hypothesis that rich club organization of the cellular connectome of C. elegans conforms to similar economical constraints—a trade-off between adaptive value and physical cost—as the rich club of human brain anatomical networks. The motivating idea was that general principles of brain network organization may emerge invariantly across scales of anatomical space and across different animal species. Materials and Methods C. elegans nervous system. The dataset used to describe the hermaphrodite C. elegans neuronal network (Varshney et al., 2011) details N = 279 neurons (the 282 nonpharyngeal neurons excluding VC6 and CANL/R, which are missing connectivity data) and M = 2287 synaptic connections, with the relative physical locations of the neurons described by 2D coordinates. An undirected binary form of the network was used to characterize rich club topology. For motif analysis, we used a directed binary graph, as detailed below. In addition, neuronal birth times (Varier and Kaiser, 2011) were compared with key points in the life cycle of C. elegans allowed to develop normally at 22°C (Hall and Altun, 2008). Rich club coefficient. To quantify the rich club effect, the degree of each node in the network (i.e., the number of other nodes it is connected to) must first be calculated and all nodes with degree ≤ k removed. The rich club coefficient for the remaining subgraph, Φ(k), is then the ratio of the number of existing connections to the number that would be expected if the subgraph was fully connected and formally is given by the following equation (Zhou and Mondragon, 2004; Colizza et al., 2006): where N[>][k] is the number of nodes with degree > k and M[>][k] is the number of edges between them. The computation of Φ(k) for all values of k in the network of interest yields a rich club curve (Fig. 1a). However, the higher-degree nodes in a network have a higher probability of sharing connections with each other simply by chance, so even random networks generate increasing rich club coefficients as a function of increasing degree threshold, k. To control for this effect, the rich club curve for C. elegans was normalized relative to the rich club curves of 1000 comparable random networks. The random networks were generated by performing multiple (100 × M) double edge swaps or permutations on the original graph representing the C. elegans neuronal network. A double edge swap removes two randomly selected edges a-b and c-d and replaces them with the edges a-c and b-d (assuming they do not already exist, in which case a new edge pair must be selected). This permutation procedure ensures that the number of nodes and edges, and the degree distribution, of the nematode network are all conserved in the random networks. The normalized rich club coefficient is then given by the following equation: where Φ[random](k) is the average value of Φ(k) across the random networks. The existence of rich club organization is defined by Φ[norm](k) > 1 over some range of values of threshold degree k. We used a probabilistic approach to define the threshold criteria for a rich club more precisely. At every different threshold degree, we estimated Φ[random](k) for 1000 realizations of the random networks and estimated the SD of Φ[random](k), denoted σ. The threshold range of the rich club regime was then specified by those values of k for which Φ(k) ≥ Φ[random](k) + 1σ. Therefore, a rich club could be said to exist in the subgroup of network nodes defined by an arbitrary degree threshold if Φ[norm](k) = 1 + 1σ; but we also defined rich clubs by the more stringent criterion that Φ[norm](k) ≥ 1 + 2σ and by the even more conservative criterion that Φ[norm](k) ≥ 1 + 3σ. Connection distance and path length. To describe the nematode network fully, both physical and topological metrics are required. The only physical metric we used was the connection distance, which is the Euclidean distance between somata of synaptically connected neurons in the adult animal. Connection distance, a physical metric (in units of millimeters), provides a reasonable approximation to the axonal connection distance, or wiring cost, which is an anatomical property of the system. We also used a number of topological metrics to quantify the connectome (see the following subsections Efficiencies, Betweenness Centrality, Modularity and Related Topological Roles, and Motifs). It is important to note that we will use path length strictly to refer to a topological distance in the network and connection distance to describe a physical distance in the organism. Shorter path lengths between neurons indicate fewer synaptic connections mediating between them; if the minimum path length between two neurons is 1, they are directly, synaptically connected or nearest neighbors; if the path length is 2, they are indirectly connected by a chain of two synaptic connections, and so on. A measure of the global efficiency of a network, E[Global], is given by the mean of the sum of the inverse shortest path lengths, L[ij], between all existing node pairs i and j (Achard and Bullmore, 2007): where N is the number of nodes in the graph G. Networks for which the average path length from one node to another is small can thus be said to have high global efficiency (Achard and Bullmore, 2007). The same measure of efficiency can be estimated for a single node in the network. The nodal efficiency of an individual neuron i is defined as the inverse of the harmonic mean of the minimum path length between it and all other nodes in the network (Achard and Bullmore, 2007): If we average the nodal efficiencies for all nodes in the network, this is equivalent to estimating the global efficiency of the network. We can likewise average the nodal efficiencies of all neurons in the rich club to estimate the efficiency of the rich club E[Rich] and average the nodal efficiencies of all neurons not in the rich club to estimate the efficiency of the poor periphery E[Poor]. We also estimated the clustering of each node using the so-called local efficiency of the subgraph g(i) of n nearest neighbors of the index node (Latora and Marchiori, 2001): Betweenness centrality. Betweenness centrality characterizes the importance of a node or edge in the network by measuring the fraction of shortest paths between any two nodes in the network that pass through this particular node or edge (Freeman, 1977; Newman and Girvan, 2004). Formally, the betweenness centrality B[i] of a node i is given by the following: where and l[jk] is the number of shortest paths between j and k. B[i] is then normalized by: Modularity and related topological roles. Because there is no agreed-upon method by which to optimize a modular decomposition, we used both the Newman and Louvain algorithms (Rubinov and Sporns, 2010) to identify modules and explore the mesoscopic community structure of the system. Further, we considered the results of a prior study examining modular structure in the C. elegans network (Pan et al., 2010) in which six modules were identified by a greedy partitioning of the network. One should bear in mind that these results need to be treated with caution, for not only is there no absolute partitioning, but also the geometry of a network is known to have significant effects on its topological properties including modularity (Henderson and Robinson, 2011). Having defined the modules of a network, each of the network nodes can then be classified according to their roles in intramodular and intermodular connectivity (Sales-Pardo et al., 2007). Letting k[s[i]] be the number of connections between a node i and other nodes within its module s[i], the mean and SD of k[s[i]] over all the nodes in s[i] can be written as k̅[s[i]] and σ[k[s[i]]] respectively. The Z-score is then given by the following: This normalized intramodular degree of a node i is a measure of its connectivity to other nodes in the same module. The participation coefficient is a measure of the intermodular connectivity of a node: where k[s[i]] again denotes the intramodular degree of node i and k[i] is its total degree (Guimerá and Amaral, 2005). The participation coefficient of a node is therefore close to 1 if its links are uniformly distributed among all the modules and 0 if all of its links are within its own module. Adopting criteria from a prior study (Guimerá and Amaral, 2005), we can define the “hubs” of the network as those nodes that have high normalized intramodular degree, z[i] ≥ 0.7. A hub may be further categorized as “provincial” (most links within its own module; p ≤ 0.3), “connector” (a significant proportion of links to nodes in different modules; 0.3 < p ≤ 0.75), or “global” (with links homogeneously distributed across all modules; p > 0.75). Within the rich club organization of the nematode brain, there are three different topological categories of connection between any two neurons: the club links (C), which connect two rich club nodes; the local links (L), which connect two poor periphery nodes; and the feeder links (F), which connect a rich club node (R) to a poor periphery node (P). This categorization of edges in relation to the rich club of the network is equivalent to that described by van den Heuvel et al. (2012, except we have assigned the label of club (C), rather than rich (R), to the direct edges between two rich club nodes; the designation “rich” is reserved for nodes. On this basis, we analyzed the frequency of motifs or chains of club, feeder, or local connections between nodes in the network. Motifs are defined as shortest paths comprising a series of edges between a pair of nodes. This definition, in contrast to some other widely used definitions of network motifs (Milo et al., 2002), necessarily excludes closed loops or triangles. Some motifs can occur with greater-than-random frequency in complex networks. By considering the shortest paths between each pair of neurons within the nematode brain, we identified all motifs that linked any two nodes (Fig. 2c). For example, the motif L-L-F-C-C-C describes a path made up of two local edges, followed by one feeder edge, followed by three club edges. To focus the analysis of motifs on their topological roles in relation to the rich club, we condensed any consecutive occurrences of the same type of edge, following the example of van den Heuvel et al. (2012). So, for example, both L-L-F-C-C and L-F-C-C-C motifs were categorized as belonging to the class of L-F-C motifs (van den Heuvel et al., 2012; Fig. 2c). Metric calculations and network manipulations were carried out using the Python NetworkX library (Hagberg et al., 2008) and MATLAB. The nematode's rich club has high efficiency and high cost We used publically available data (Varshney et al., 2011) on the identity, location, and connectivity of each neuron in the C. elegans nervous system for all graph theoretical analyses. We defined binary graphs representing each neuron (N = 279) as a node and each synaptic connection (M = 2287) as an edge. As described previously, this model of the nematode connectome is an economically wired, small-world, modular network (Watts and Strogatz, 1998; Chen et al., 2006; Pan et al., 2010). Its global efficiency (E[Global] = 0.45) is intermediate between the lower efficiency of a regular lattice (E[Global] = 0.20) and the higher efficiency of a random graph (E[Global] = 0.47). It has higher clustering (0.34) than a random graph (0.14) but less than a regular lattice (0.70) of the same size. Most nodes have a small number of connections but a few hub nodes have high total degree k[i] (Fig. 3a) and the degree distribution is somewhat fat tailed. The nematode network is sparsely connected and the distribution of physical distances between connected neurons is skewed toward shorter connection distances, with relatively few outlying long-distance connections. We identified the rich club as a subset of high-degree neurons that have a significantly greater density of connections between them than would be expected in a subset of equally high-degree nodes in a random graph, defined mathematically by Φ[norm](k) ≥ 1 + 1σ. This criterion is satisfied for the C. elegans connectome when the threshold value for degree k, used to define the subset of hub neurons between which connectivity would be calculated, is in the range 35 < k < 73. We also defined rich clubs satisfying the more stringent criteria Φ[norm](k) ≥ 1 + 2σ and Φ[norm](k) ≥ 1 + 3σ. The rich club identified at the most lenient statistical threshold (1σ) includes 14 neurons; the rich club identified at the most conservative threshold (3σ) comprises a subset of 11 of these neurons ( Table 1). Below, we will focus on more detailed analysis of the rich club defined by degree threshold k = 44. This is the lowest degree threshold in the range 44 ≤ k < 53 that satisfies the most conservative statistical 3σ criterion for significance of the normalized rich club coefficient. There are 11 neurons in this rich club: eight are located anteriorly in the lateral ganglia of the head (AVAR/L, AVBR/L, AVDR/L, AVER/L) and three are located posteriorly in the lumbar (PVCR/L) and dorsorectal (DVA) ganglia (Fig. 1, Table 1). The 2σ rich club (defined by Φ[norm](k) > 1 + 2σ), and the 1σ rich club (defined by Φ[norm](k) > 1 + 1σ), are both very similar to the 3σ club. The 2σ club includes one additional neuron (AIBR) and the 1σ club includes 3 additional neurons (AIBR, RIBL, and RIAR). There is a very high efficiency of connectivity between rich club neurons: E[Rich] = 0.92. By way of comparison, the efficiency of connections between the 268 poor periphery neurons that are not in the rich club is much lower: E[Poor] = 0.38 (Fig. 3d, nodal efficiencies). The rich club is also distinguished by high betweenness centrality, indicating that rich club neurons are often on the shortest paths between all pairs of neurons in the system; nine of the 11 rich club neurons (AVAR/L, AVBR/L, AVER/L, DVA, PVCR/L) are ranked in the top 10 of all neurons in terms of their betweenness centrality (with values ranging from 0.0277 to 0.103; Fig. 3b). Rich club neurons also have high participation coefficients (with values ranging from 0.46 to 0.76), indicating that they often mediate intermodular connections between neurons in different modules of the system (Fig. 3e, Fig. 4, bottom). The rich club neurons are located close to the anterior and posterior extremes of the nervous system, polarizing the distribution of long- and short-range connections between rich club members (Fig. 3f). The average connection distance of a club (C) edge between rich club neurons is 0.51 mm, whereas the average connection distance of a feeder (F) edge between a rich club neuron and a peripheral neuron is 0.40 mm and the average connection distance of a peripheral (P) between peripheral neurons is 0.18 mm (Fig. 3c). The total distance of all connections to rich club neurons accounts for 48% of the total connection distance or wiring cost of the network; however, rich club neurons only account for 4% of the total number of neurons in the nervous system. The nematode's rich club is central to integrative communication Rich club neurons also play distinct and important topological roles in a modular decomposition of the C. elegans connectome. In a modular system, the sparse connections between modules are typically mediated by a small number of nodes, so-called connector hubs (Pan et al., 2010) that are defined by high intramodular degree and high participation coefficient (a measure of the proportion of intermodular edges connecting to each node). There is no single agreed-upon method with which to detect such modular structure optimally, so we used three alternatives: we implemented the Newman-Girvan and Louvain algorithms (Rubinov and Sporns, 2010) directly and we used prior results from a spectral decomposition (Pan et al., 2010). In all cases, all of the rich club neurons could be classified as connector hubs, indicating that the rich club plays an important role in communication between modules. Focusing on the results of the Louvain decomposition, we found that 52% of the connections to or from rich club neurons are intermodular and 48% are intramodular, whereas only 30% of connections to or from poor periphery neurons are intermodular and the great majority (70%) are intramodular (Fig. 3, Fig. 4). Similarly, for the Newman-Girvan method of modular decomposition, connections to rich club nodes are 44.6% intermodular and 55.4% intramodular, whereas connections to poor periphery nodes are 25.1% intermodular and 74.9% intramodular. All motifs were computed and classified in the following way for the directed graph of the C. elegans connectome. The frequency of each motif class was compared with its frequency in comparable random networks. To do this, we generated 1000 random graphs by the same edge-swapping permutation procedure already described for normalization of the rich club coefficient. In each random graph, we defined the rich club as the 11 nodes with the highest degree, so that rich club statistics in the random graph were based on the same number of nodes as there were rich club neurons in the C. elegans network. We assigned L, F, or C labels to all edges of the network on this basis and then counted the number of motifs of each class. The frequency of any motif class in the nematode network could be compared with the permutation distribution of its frequency in the random networks; if the observed motif frequency was greater (or smaller) than the maximum (or minimum) motif frequency in the random distribution, then it was assigned a probability p < 0.001 under the null hypothesis that the motif distribution in the nematode nervous system is random. To measure the location and dispersion of the permutation distributions of the motif frequency (Fig. 2b), we used nonparametric measures that do not assume normality. The median motif frequency was the measure of central location and the quartile deviance (simply half of the interquartile range of the motif frequency in random networks) was the measure of dispersion. Values were then assigned to the observed motif frequencies in terms of the difference between observed and random median frequency divided by the quartile deviance in the random distribution. In the analysis of topological motifs, we focused on chains of one of three classes of connections between neurons: club connections (C) between two rich club neurons, feeder connections (F) between a rich club neuron and a peripheral neuron, and local connections (L) between two peripheral neurons. We found that motifs that passed from peripheral nodes via feeder connections through the rich club and then returned via feeder connections to the periphery were much more frequent in the C. elegans connectome. The motif L-F-C-F-L exhibited the most significant enrichment in the network compared with random graphs (p < 0.001; quartile deviances from median = 54.1; Fig. 2a). It was also notable that the next most significantly occurring motifs in the nematode network, with quartile deviances from the median ranging from 20.9 to 14.2, were C-F-L, F-C-F-L, L-F-C, L-F-L, and L-F-C-F. Four of these motifs are subsets of the single most significant motif, L-F-C-F-L, and the fifth motif (L-F-L) describes a path that passes from one peripheral neuron to another via a single neuron in the rich club. Development of the nematode brain rich club The first two rich club neurons (DVA and AVDL) are born about 300 min after fertilization (Fig. 5, Table 1). The remaining seven anterior rich club neurons are born within approximately 30 min of AVDL, coinciding approximately with the birth of a series of juvenile motor neurons. The remaining two posterior rich club neurons are born approximately 450 min after fertilization. Twitching movements are first observed approximately 20 min later and coordinated movements are visible from approximately 760 min after fertilization, shortly before hatching at 800 min (Hall and Altun, 2008 ). Motor neurons controlling ventral muscle groups are born later, up to ∼1890 min after fertilization (Sulston, 1976, 1983). The adult is fully developed at ∼3450 min after fertilization. Therefore, the rich club neurons are born early and all neuronal components of the rich club have formed before the first visible signs of motor activity (twitching). To assess the probability of this observation under the null hypothesis that the birth times of the rich club neurons are drawn randomly from the distribution of all neuronal birth times, we repeatedly and randomly sampled 11 neurons from the network and counted the number of times that all 11 randomly sampled neurons were born before the onset of twitching. We found that the probability of this occurrence by chance was only 0.02, suggesting that the observed concentration of early birth times in the rich club is not likely under the null hypothesis. Moreover, the additional neurons included in the less stringently defined rich clubs (1σ and 2σ) also had early birth times (299 min after fertilization; Table 1). It is also notable that most rich club neurons are born before the embryo becomes elongated, in the period 400–640 min after fertilization when the animal's body becomes approximately three times thinner and approximately four times longer. It seems that rich club connectivity could be established between neurons when they are initially close to each other and that some of these connections could then be extended by elongation of the animal's body. Rich club: high value for high cost Although this topological analysis of the cellular connectome of C. elegans was uninformed by any prior data, other than the synaptic connectivity of each of the 279 neurons in the system, there was a remarkable degree of functional relatedness among the rich club neurons we identified. As detailed in Table 1, 10 of the neurons in the most conservatively defined (3σ) or “richest” club were the so-called command interneurons of the locomotor circuit with a functional role in forward or backward locomotion (Hall and Altun, 2008). The remaining neuron in this club, DVA, has been classified as a proprioceptive interneuron that modulates the locomotor circuit (Li et al., 2006). When the rich club was defined more liberally, up to three additional neurons were added (AIBR, RIBL, and RIAR), all of which are interneurons in the head of the animal (Table 1). The behavioral roles of each of the rich club neurons make it likely that the club as a whole is important functionally for coordinated and adaptive movement of the organism. Ten of the 11 neurons of the richest club of the nematode are neurons that have already been classified functionally as command interneurons. Six of these (AVAL/R, AVEL/R, and AVDL/R) are active during and required for backward movement (Chalfie, 1985; Chronis et al., 2007; Ben Arous et al., 2010; Piggott et al., 2011), whereas four of them (AVBL/R and PVCL/R) are active during and required for forward movement. Although there is evidence for some functional heterogeneity within these groups (Kawano et al., 2011), in general, the command neurons are thought to play a specialized role in potentiating or triggering the motor programs for forward or reverse locomotion (Tsalik and Hobert, 2003; Gray et al., 2005). The integrative topology of the rich club suggests that these neurons may not be limited to this instructive role, but might also facilitate communication or exchange of information with other parts of the nervous system. The highly efficient connectivity between rich club neurons will mediate information transfer with short synaptic delays and low noise. The functional importance of this integrative capacity is highlighted by the fact that the organism does not visibly move until all of the rich club neurons have been born. Given that coordinated movement is a fundamental component of many adaptive behaviors of the organism (e.g., feeding, egg laying, and escaping) the rich club is likely to have high value. The cost of the rich club is quantified by the Euclidean distance between synaptically connected neurons. This is a simple metric that depends on the justifiable assumptions that most axonal projections are approximately linear and that the metabolic costs of a neuronal connection increase with distance (Bullmore and Sporns, 2012). By this measure, the rich club is disproportionately costly: connectivity between and to this elite group of 11 neurons (4% of total neurons) accounts for 48% of the total connection distance or wiring cost of the network. Moreover, in previous studies measuring the mismatch between neuronal placement in the C. elegans nervous system versus neuronal placement dictated by computational rewiring algorithms designed to minimize connection distance, five of the 11 rich club neurons (DVA, AVA, and PVC classes) have been identified as outliers (Chen et al., 2006). Rich club modules and motifs Rich club neurons transact a disproportionate number of intermodular connections between neurons in different topological modules of the C. elegans network. Nodes comprising the same topological module are often anatomically colocalized so that the more numerous intramodular connections are short distance compared with the sparser and longer distance intermodular connections (Meunier et al., 2010; Alexander-Bloch et al., 2013). Therefore, the importance of the rich club for intermodular communication is consistent with its high wiring cost. Most rich club neurons also have exceptionally high centrality, meaning that they are on the shortest paths between many pairs of neurons in the system. The topological shortcuts between arbitrary pairs of neurons in different modules will have to traverse the same, relatively few intermodular connections at some point along the minimum path, conferring high centrality on the connector hubs of the rich club. The importance of the rich club for integrative processing is further emphasized by the motif analysis. Many types of real-world networks, including the C. elegans neuronal network, have been classified according to their motif frequency profiles (Milo et al., 2002, 2004; Sporns and Kötter, 2004). In the present study, we were particularly interested in the relationship between frequently occurring motifs and the rich club. It has been shown previously that motifs linking pairs of peripheral nodes in large-scale human brain structural networks are more likely to be mediated by feeder and club connections than would be expected in a random network (van den Heuvel et al., 2012). We replicated these results at the cellular scale of the C. elegans connectome. Therefore, in both macro-scale and micro-scale brain networks, the motif occurring with the greatest significance linked peripheral nodes via local, feeder, and club connections (L-F-C-F-L), confirming that a large number of shortest paths between any pair of neurons in the periphery are mediated by the rich club. Economy and scale invariance of rich clubs There is evidence to suggest that brain networks are organized to negotiate an economical trade-off between topological value and physical connection cost (Bullmore and Sporns, 2012). The rich club in C. elegans is an example of this general principle in operation. Its neurons have high efficiency, high centrality, and high importance for communication between different modules. These related topological properties are very likely to be valuable for adaptive and coordinated movement of the organism. However, this high value architecture depends on a disproportionate number of long-distance connections, amounting to a greater than average wiring cost of the rich club. It is striking that an analogous trade-off between topology and wiring cost was recently described for the rich club organization of the human brain (van den Heuvel et al., 2012). Using diffusion tensor imaging data from 40 healthy volunteers, a rich club was identified in the large-scale or macroscopic organization of the human brain, comprising a high density of tractographic connections between cortical regions. The cortical components of the human brain rich club, including areas of precuneus, anterior and posterior cingulate cortex, superior frontal cortex, and insula, were distributed spatially and connections between them accounted for a majority of the longest distance connections (58% of connections >9 cm) in the human brain. This high cost circuit demonstrated high centrality, mediating 69% of the shortest paths between all pairs of the 1170 cortical nodes in the network. A motif analysis of the diffusion tensor imaging network identified a greater-than-random frequency of motifs connecting pairs of peripheral regions via feeder and rich club regions. Together with these prior data on human brain networks (van den Heuvel and Sporns, 2011; van den Heuvel et al., 2012), the nematode data provide new evidence in support of scale invariance of brain networks. Similar rich club organization is evidently conserved over multiple scales of space. Scale invariance has already been demonstrated for network topological properties such as small-worldness (Watts and Strogatz, 1998; Achard et al., 2006) and (hierarchical) modularity (Meunier et al., 2009). The comparable rich club results in such differently sized nervous systems (<1 mm vs ∼10 cm) indicates that economical trade-offs between topological value and physical cost may also be a scale-invariant aspect of nervous systems. This is consistent with the universality hypothesis that competitive criteria of cost minimization and topological complexity drive selection of diverse information processing and communication networks embedded in physical space (Bassett et al., 2010). Experimental nematode connectomics This study is descriptive and there has been no experimental perturbation of the system. The data we have used on neurons and synapses of C. elegans are highly detailed and complete compared with the current state of data available for cellular connectomics in any other species and they are publically available in a format that has supported several prior studies of the same data (Hall and Altun, 2008). They are the results of painstaking reconstruction of serial electron micrographs, by skilled scientists literally tracing the identity of neurons from one electron micrograph slice to the next, and visually discriminating synaptic connectivity from mere proximity of two neurons (White et al., 1986). However, partly because of the time- and labor-intensive way these “gold standard” data have been generated, the nervous systems of only three animals have been at least partially mapped. It might be useful for more experimentally focused studies in the future if the connectome of C. elegans could somehow be reconstructed much more quickly and automatically (Jarrell et al., 2012), perhaps by adopting some of the techniques currently in development for computational reconstruction of the much larger cellular connectomes of the fly or the mouse. Such a high-throughput technology for nematode connectomics could allow, for example, experimental measurement of the effects of controlled perturbations on the development and function of the rich club and other features of the C. elegans nervous system. • The Behavioural and Clinical Neuroscience Institute is supported by the Medical Research Council (United Kingdom) and the Wellcome Trust. E.K.T. is supported by an Engineering and Physical Sciences Research Council (United Kingdom) doctoral studentship. S.E.A. is supported by the Royal Society (United Kingdom). • E.T.B. is employed half-time by the University of Cambridge and half-time by GlaxoSmithKline and holds stock in GlaxoSmithKline. The remaining authors declare no competing financial interests. • Correspondence should be addressed to Emma Towlson, TCM Group, Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE, United Kingdom. ekt33{at}cam.ac.uk
{"url":"https://www.jneurosci.org/content/33/15/6380","timestamp":"2024-11-05T09:05:37Z","content_type":"application/xhtml+xml","content_length":"368898","record_id":"<urn:uuid:dac84310-65f5-42b0-88d2-df8b5a29460f>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00023.warc.gz"}
[Update Links] SANS SEC595: Applied Data Science And Machine Learning For Cybersecurity Professionals (VM+VID+PDF) Free Download [Update Links] SANS SEC595: Applied Data Science and Machine Learning for Cybersecurity Professionals (VM+VID+PDF) SANS SEC595: Applied Data Science and Machine Learning for Cybersecurity Professionals (VM+VID+PDF) Genre: eLearning | Language: English | Size: 38.05 GB + 4.47 GB + 58.45 MB SEC595 provides students with a crash-course introduction to practical data science, statistics, probability, and machine learning. The course is structured as a series of short discussions with extensive hands-on labs that help students to develop useful intuitive understandings of how these concepts relate and can be used to solve real-world problems. If you’ve never done anything with data science or machine learning but want to use these techniques, this is definitely the course for you! 30 Hands-on Labs What You Will Learn Data Science, Artificial Intelligence, and Machine Learning aren’t just the current buzzwords, they are fast becoming one of the primary tools in our information security arsenal. The problem is that, unless you have a degree in mathematics or data science, you’re likely at the mercy of the vendors. This course completely demystifies machine learning and data science. More than 70% of the time in class is spent solving machine learning and data science problems hands-on rather than just talking about them. Unlike other courses in this space, this course is squarely centered on solving information security problems. Where other courses tend to be at the extremes, teaching almost all theory or solving trivial problems that don’t translate into the real world, this course strikes a balance. We cover only the theory and math fundamentals that you absolutely must know, and only in so far as they apply to the techniques that we then put into practice. The course progressively introduces and applies various statistic, probabilistic, or mathematic tools (in their applied form), allowing you to leave with the ability to use those tools. The hands-on projects covered were selected to provide you a broad base from which to build your own machine learning solutions. Major topics covered include: Data acquisition from SQL, NoSQL document stores, web scraping, and other common sources Data exploration and visualization Descriptive statistics Inferential statistics and probability Bayesian inference Unsupervised learning and clustering Deep learning neural networks Loss functions Convolutional networks Embedding layers Thise course will help your organization: Generate useful visualization dashboards Solve problems with Neural networks Improve the effectiveness, efficiency, and success of cybersecurity initiatives Build custom machine learning solutions for your organization’s specific needs You Will Be Able To: Apply statistical models to real world problems in meaningful ways Generate visualizations of your data Perform mathematics-based threat hunting on your network Understand and apply unsupervised learning/clustering methods Build Deep Learning Neural Networks Build and understand Convolutional Neural Networks Understand and build Genetic Search Algorithms You Will Receive With This Course: A supporting virtual machine Jupyter notebooks of all of the labs and complete solutions This Course Will Prepare You To: Build AI anomaly detection tools Model information security problems in useful ways Build useful visualization dashboards Solve problems with Neural networks Buy Premium Account From My Download Links & Get Fastest Speed. Happy Learning!! If any links die or problem unrar, send request to 2 thoughts on “[Update Links] SANS SEC595: Applied Data Science and Machine Learning for Cybersecurity Professionals (VM+VID+PDF)” 1. Link die 2. please share the PDF only for free download Leave a Comment
{"url":"https://tut4sec.com/sans-sec595-applied-data-science-and-machine-learning-for-cybersecurity-professionals/","timestamp":"2024-11-07T04:44:40Z","content_type":"text/html","content_length":"70240","record_id":"<urn:uuid:8b996748-11c1-4a3c-98d7-364458ce258c>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00105.warc.gz"}
1 Chen, J., Chen, H., Pan, E. and Heyliger, P.R. (2007), "Modal analysis of magneto-electro-elastic plates using the state-vector approach", J. Sound Vibr., 304, 722-734. DOI 2 Ebrahimi, F. and Barati, M.R. (2016a), "A nonlocal higher-order magneto electro visco-elastic beam model for dynamic analysis of smart nanostructures", J. Eng. Sci., 107, 183-196. DOI 3 Ebrahimi, F. and Barati, M.R. (2016b), "Dynamic modeling of a thermos-piezo-electrically actuated nanosize beam subjected to a magnetic field", Appl. Phys. A, 122(4), 451. DOI 4 Ebrahimi, F. and Barati, M.R. (2016c), "Magnetic field effects on dynamic behavior of inhomogeneous thermo-piezo-electrically actuated nanoplates", J. Brazil. Soc. Mech. Sci. Eng., 39(6), 5 Ebrahimi, F., Jafari, A. and Barati, M.R. (2017), "Vibration analysis of magneto-electro-elastic heterogeneous porous material plates resting on elastic foundations", Thin-Wall. Struct., 119, 33-46. DOI 6 Fan, X. and Wu, Z. (2016), "$\small{C_0}$-type Reddy's theory for composite beams using FEM under thermal loads", Struct. Eng. Mech., 57(3), 457-471. DOI 7 Huang, D.J., Ding, H.J. and Chen, W.Q. (2007), "Analytical solution for functionally graded magneto-electro-elastic plane beams", J. Eng. Sci., 45, 467-485. DOI 8 Jandaghian, A.A. and Rahmani, O. (2016), "Free vibration analysis of magneto-electro-thermo-elastic nanobeams resting on a Pasternak foundation", Smart Mater. Struct., 25(3), 035023. DOI 9 Kattimani, S.C. (2017), "Geometrically nonlinear vibration analysis of multiferroic composite plates and shells", Compos. Struct., 163, 185-194. DOI 10 Kattimani, S.C. and Ray, M.C (2014a), "Smart damping of geometrically nonlinear vibrations of magneto-electro-elastic plates", Compos. Struct., 114(1), 51-63. DOI 11 Kattimani, S.C. and Ray, M.C. (2014b), "Active control of large amplitude vibrations of smart magneto-electro-elastic doubly curved shells", J. Mech. Mater. Des., 10(4), 351-378. DOI 12 Kattimani, S.C. and Ray, M.C. (2015), "Control of geometrically nonlinear vibrations of functionally graded magneto-electro-elastic plates", J. Mech. Sci., 99, 154-167. DOI 13 Kondaiah, P., Shankar, K. and Ganesan, N. (2015), "Pyroeffects on magneto-electro-elastic sensor bonded on mild steel cylindrical shell", Smart Struct. Syst., 16(3), 537-554. DOI 14 Kondaiah, P., Shankar, K. and Ganesan, N. (2017), "Pyroeffects on magneto-electro-elastic sensor patch subjected to thermal load", Smart Struct. Syst., 19(3), 299-307. DOI 15 Kondaiah. P., Shankar, K. and Ganesan, N. (2012), "Studies on magneto-electro-elastic cantilever beam under thermal environment", Coupled Syst. Mech., 1(2), 205-217. DOI 16 Lage, R.G. and Soares, C.M.M. (2004), "Layerwise partial mixed finite element analysis of magneto-electro-elastic plates", Comput. Struct., 82, 1293-1301. DOI 17 Milazzo, A. (2013), "A one-dimensional model for dynamic analysis of generally layered magneto-electro-elastic beams", J. Sound Vibr., 332(2), 465-483. DOI 18 Milazzo, A., Orlando, C. and Alaimo, A. (2009), "An analytical solution for the magneto-electro-elastic bimorph beam forced vibrations problem", Smart Mater. Struct., 18(8), 85012. DOI 19 Pan, E. and Han, F. (2005), "Exact solution for functionally graded and layered magneto-electro-elastic plates", J. Eng. Sci., 43(3-4), 321-339. DOI 20 Simoes Moita, J.M., Mota Soares, C.M. and Mota Soares, C.A. (2009), "Analyses of magneto-electro-elastic plates using a higher order finite element model", Compos. Struct., 91(4), 421-426. DOI 21 Simsek, M. and Reddy, J.N. (2013), "Bending and vibration of functionally graded microbeams using a new higher order beam theory and the modified couple stress theory", J. Eng. Sci., 64, 37-53. 22 Sladek, J., Sladek, V., Krahulec, S. and Pan, E. (2013), "The MLPG analyses of large deflections of magnetoelectroelastic plates", Eng. Analy. Bound. Elem., 37(4), 673-682. DOI 23 Sladek, J., Sladek, V., Repka, M., Kasala, J. and Bishay, P. (2017), "Evaluation of effective material properties in magneto-electro-elastic composite materials", Compos. Struct., 174, 176-186. 24 Vaezi, M., Shirbani, M.M. and Hajnayeb, A. (2016), "Free vibration analysis of magneto-electro-elastic microbeams subjected to magneto-electric loads", Phys. E: Low-Dimens. Syst. Nanostruct., 75, 280-286. DOI 25 Vinyas, M. and Kattimani, S.C. (2017c), "Static behavior of thermally loaded multilayered magneto-electro-elastic beam", Struct. Eng. Mech., 63(4), 481-495. DOI 26 Vinyas, M. and Kattimani, S.C. (2017e), "Multiphysics response of magneto-electro-elastic beams in thermo-mechanical environment", Coupled Syst. Mech., 3(4), 351-367. 27 Vinyas, M. and Kattimani, S.C. (2017f), "Hygrothermal analysis of magneto-electro-elastic plate using 3D finite element analysis", Compos. Struct., 180, 617-637. DOI 28 Vinyas, M. and Kattimani, S.C. (2017a), "Static studies of stepped functionally graded magneto-electro-elastic beam subjected to different thermal loads", Compos. Struct., 163, 216-237. DOI 29 Vinyas, M. and Kattimani, S.C. (2017b), "Static analysis of stepped functionally graded magneto-electro-elastic plates in thermal environment: A finite element study", Compos. Struct., 178, 63-86. 30 Vinyas, M. and Kattimani, S.C. (2017d), "A Finite element based assessment of static behavior of multiphase magneto-electro-elastic beams under different thermal loading", Struct. Eng. Mech., 62 (5), 519-535. DOI 31 Balu, S., Kannan, G.R. and Rajalingam, K. (2014), "Static studies on piezoelectric/piezomagnetic composite structure under mechanical and thermal loading", IJERST, 3(2), 678-685. 32 Aktas, A. (2001), "Determination of the deflection function of a composite cantilever beam using theory of anisotropic elasticity", Math. Comput. Appl., 6(1), 67-74. 33 Annigeri, A.R., Ganesan, N. and Swarnamani, S. (2007), "Free vibration behaviour of multiphase and layered magneto-electro-elastic beam", J. Sound Vibr., 299(1-2), 44-63. DOI 34 Arefi, M. and Zenkour, A.M. (2017), "Electro-magneto-elastic analysis of a three layer curved beam", Smart Struct. Syst., 19(6), 695-703. DOI 35 Benedetti, I. and Milazzo, A. (2017), "Advanced models for smart multilayered plates based on reissner mixed variational theorem", Compos. Part B: Eng., 119, 215-229. DOI 36 Bhangale, R.K. and Ganesan, N. (2006), "Free vibration of simply supported functionally graded and layered magneto-electro-elastic plates by finite element method", J. Sound Vibr., 294(4), 1016-1038. DOI 37 Biju, B.N., Ganesan, N. and Shankar, K. (2011), "Dynamic response of multiphase magnetoelectroelastic sensors using 3D magnetic vector potential approach", IEEE Sens. J., 11(9), 2169-2176. DOI 38 Carrera, E., Brischetto, S., Fagiano, C. and Nali, P. (2009), "Mixed multilayered plate elements for coupled magneto-electro-elastic analysis", Multidiscipl. Model. Mater. Struct., 5, 251-256. DOI
{"url":"https://koreascience.kr/ksci/search/article/articleView.ksci?articleBean.atclMgntNo=TPTPNS_2017_v6n4_465","timestamp":"2024-11-09T13:51:10Z","content_type":"application/xhtml+xml","content_length":"51360","record_id":"<urn:uuid:4408ee27-6f33-4a3a-a54c-df5fe8df87b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00756.warc.gz"}
A small helium-neon laser emits red visible light with a power of 5.90 mW in a... A small helium-neon laser emits red visible light with a power of 5.90 mW in a... A small helium-neon laser emits red visible light with a power of 5.90 mW in a beam that has a diameter of 2.10 mm . a) What is the amplitude of the electric field of the light? b) What is the amplitude of the magnetic field of the light? c) What is the average energy density associated with the electric field? d) What is the average energy density associated with the magnetic field? e) What is the total energy contained in a 1.00-mm length of the beam?
{"url":"https://justaaa.com/physics/89509-a-small-helium-neon-laser-emits-red-visible-light","timestamp":"2024-11-03T06:06:48Z","content_type":"text/html","content_length":"39734","record_id":"<urn:uuid:bc17e787-9070-4233-bd02-cd7adce4ef9c>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00059.warc.gz"}
Subtract mixed fraction calculator | mixed fraction subtraction calculator | mixed fraction subtraction Formula Could you provide examples from real-life scenarios where the subtraction of mixed fractions is commonly applied? Subtraction of mixed fractions is commonly applied in various fields like cooking, construction, financial calculations, healthcare, time management, production and transportation. For example, in transportation, if delivery truck starts with 20 and 1/2 gallons of fuel. If 5 and 3/4 gallons are used during the journey, by subtracting 5 3/4 from 20 1/2 gives the remaining fuel which is 14 and 3 /4 gallons.
{"url":"https://www.visualfractioncalculator.com/en/mixed-fraction-subtraction/frac-6","timestamp":"2024-11-10T05:15:09Z","content_type":"application/xhtml+xml","content_length":"107732","record_id":"<urn:uuid:46221ded-4cf0-4ca8-b72a-881bb50bb8af>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00166.warc.gz"}
What approaches can I take as a newbie seller in Fiverr? Most of the time I do copy paste for sending by request. I am a graphic designer that I don’t have much experience or knowledge about how to to write a killer offer latter. Should I include my behance are dribbble link in the by request offer letter? I am new here in fiverr. I have created a gig about you tube thumbnail. can you guys please have a look and tell me is everything ok or not ? These are my suggestions/comments: Gig: I will create 3 catchy, modern youtube thumbnails in just few hours In the gig description: “need a thumbnail’s ?” could be “need a thumbnail?” “So this gig is perfectly for you” could be “So this gig is perfect for you.” “Have more quires ?” could be “Have more queries?” It says “Life time support” but according to the TOS gigs are not supposed to have >30 days of service duration. In the FAQ section: “more then 15” could be “more than 15” in question 4. “new new projects” could be “new projects” in answer 5. Gig: I will do professionally transparent, png any photo background by clipping path In the FAQ section: “i am fluent” could be “I am fluent” in answer 1. “You will be satisfy” could be “You will be satisfied” in answer 2. “i am always concern about timing” could be “I am always concerned about timing” in answer 4. In the profile: “stationary design” could be “stationery design” “and make my client satisfy.” could be “and make my client satisfied.” Hey, your gig video is cropped to much. Can you fix it? Hey, your gig video is cropped to much. Can you fix it? bro thank you so much. I have changed the video. please have a look again and put your opinion. These are my suggestions/comments: Gig: I will create 3 catchy, modern youtube thumbnails in just few hours In the gig description: “need a thumbnail’s ?” could be “need a thumbnail?” “So this gig is perfectly for you” could be “So this gig is perfect for you.” “Have more quires ?” could be “Have more queries?” It says “Life time support” but according to the TOS gigs are not supposed to have >30 days of service duration. In the FAQ section: “more then 15” could be “more than 15” in question 4. “new new projects” could be “new projects” in answer 5. Gig: I will do professionally transparent, png any photo background by clipping path In the FAQ section: “i am fluent” could be “I am fluent” in answer 1. “You will be satisfy” could be “You will be satisfied” in answer 2. “i am always concern about timing” could be “I am always concerned about timing” in answer 4. In the profile: “stationary design” could be “stationery design” “and make my client satisfy.” could be “and make my client satisfied.” UK1000 Please kindly checkout my gig UK1000 Please kindly checkout my gig If you create a separate thread in the “improve my gigs” section for your gig(s) I’ll make some suggestions. I think someone said the forum rules are that they’d need to be in a new thread (1 thread for 1 person’s gigs). I am so upset that i am not getting any order. i tried to do my level best to creating my gigs. u can check those out. My impression is coming lower then before. what to do ! You should change the gig title to increase the impression and daily send effective buyer request to get an order. Once you get an order then your profile will rank on fiverr and buyer automatically go on your profile. You should change the gig title to increase the impression and daily send effective buyer request to get an order. Once you get an order then your profile will rank on fiverr and buyer automatically go on your profile. if i change the gig title , means if i edit the gig , is my gig go down? • 2 months later... Do anyone notice anything new features on Fiverr? Actually I am little bit curious. • 5 months later... 2 days ago I have applied for Fiverr pro. Now I am level 1 seller in Fiverr. I am really nerves and super excited. How many days Fiverr take to review? I have provided information about myself and relevant business. Check this out: (Fiverr Pro) On 7/4/2021 at 8:13 PM, irshan_cool said: You can read the following article 37 minutes ago, raj_proservice said: This article must need to read This article link has already been posted by me above. Don’t copy forum posts The above is from the forum rules. Actually i have not copied from your link. I found it from a previous discussed topic. One thing, that is also from the same source from where you took it. Thats all For your information I found it from the below discussion: Top 7 tips for Fiverr new sellers should follow 1. Whenever you come up with a great idea about the marketplace, look at the profiles of other sellers in the same category to create gigs according to your skills. Why that person is at the top among thousands of people, what kind of image of that person has been used in the gig, how he has arranged his description, what kind of keywords he has used. 2. After reviewing the above-mentioned topics, take a good note of them and analyze the suggested keywords of the topics you want to gig. Keywords that are relatively low in competition but have a good search volume, you can create a full gig by targeting those keywords. 3. Don't go editing again and again after creating a gig unnecessarily. Here you have to test your patience. Try to stay online most of the time and do some marketing on social media that will help increase traffic to your profile. 4. When you submit a proposal to the buyer's request, you will read the matter well. Never copy and paste. If you understand that the thing is a little late then read well then give the proposal. 5. Whenever a buyer knocks on you, try to find out the details of his project and ask different kinds of questions so that the buyers are very happy and engaged. At the same time, you will be able to understand the project better. 6. You should contact Fiverr Support immediately when you make a mistake or get into trouble. I have received maximum support every time I have contacted Fiverr Support. 7. Stay active in the Fiverr Forum. By doing this you will get updates on various topics and find solutions to many questions. 9 minutes ago, coderboss said: You are welcome mate Who are you saying this to? 😅😆 1 minute ago, coderboss said: I wanna say, Who are you? Lol you should stop spamming the forum You need to remove this to My Fiverr Gigs as posting a link to advertise your gigs is not allowed in Tips for Sellers. Can you suggest me how to remove the link? 31 minutes ago, lloydsolutions said: You need to remove this to My Fiverr Gigs as posting a link to advertise your gigs is not allowed in Tips for Sellers. I want to remove the link but don't see any option here. can you help me out? 3 minutes ago, bilal4382 said: I want to remove the link If you click the 3 lines at the top of your post on the right hand side you can edit a post. This expires after a time so maybe you could report your post and ask for it to be edited. 7 hours ago, bilal4382 said: Can you suggest me how to remove the link? Did you report yourself using the 3 dots, so the mods can change it for you? • 1 month later... Is there any way to apply for fiverr seller plus? This topic is now archived and is closed to further replies.
{"url":"https://community.fiverr.com/forums/topic/255908-what-approaches-can-i-take-as-a-newbie-seller-in-fiverr/","timestamp":"2024-11-04T02:59:01Z","content_type":"text/html","content_length":"372271","record_id":"<urn:uuid:ed5388a3-ab4a-4c45-a1c6-c5a6db2372f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00730.warc.gz"}
Investigation of boundary layer-like flows with pressure gradients at high free stream Mach numbers The applicability of numerical methods for the solution of systems of partial differential equations, the mathematical difficulties and the demands on flow physics which arise due to the use of these solution methods are discussed by means of theoretical investigations of boundary layer-like flows with pressure gradients in main-stream direction and cross-flow direction. The flow fields investigated are hypersonic-slip flow boundary layers and structures of oblique shock waves and their intersections. In both cases a mixed initial value-boundary value problem has to be solved which is described by governing equations derived from the Navier-Stokes equations. Results of calculations are presented, as well as comparisons with experimental data. Ph.D. Thesis Pub Date: March 1976 □ Boundary Layer Flow; □ Hypersonic Flow; □ Pressure Gradients; □ Boundary Value Problems; □ Cross Flow; □ Flow Distribution; □ Momentum Theory; □ Navier-Stokes Equation; □ Oblique Shock Waves; □ Slip Flow; □ Fluid Mechanics and Heat Transfer
{"url":"https://ui.adsabs.harvard.edu/abs/1976PhDT........79H/abstract","timestamp":"2024-11-07T22:57:02Z","content_type":"text/html","content_length":"34991","record_id":"<urn:uuid:eaae9de0-5cdd-4ead-86f9-be4747028793>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00878.warc.gz"}
Alternate Segment Theorem If a line touches a circle and from the point of contact a chord is drawn, the angles between the tangent and the chord are respectively equal to the angles in the corresponding alternate segments. In the figure, the chord \(PQ\) divides the circle into two segments. Then, the tangent \(AB\) is drawn such that it touches the circle at \(P\). Thus, the angle in the alternate segment for \(\angle QPB\) is \(\angle QSP\) and that for \(\angle QPA\) is \(\angle PTQ\) are equal. Then, \(\angle QPB\) \(=\) \(\angle QSP\) and \(\angle QPA\) \(=\) \(\angle PTQ\). Let \(O\) be the centre of the circle. The tangent \(AB\) touches the circle at \(P\), and \(PQ\) is a chord. Let \(S\) and \(T\) be the two points on the circle on the opposite sides of chord \(PQ\). To prove: (i) \(\angle QPB\) \(=\) \(\angle QSP\) and (ii) \(\angle QPA\) \(=\) \(\angle PTQ\) Draw the diameter \(POR\) and draw \(QR\), \(QS\) and \(PS\). The diameter \(RP\) is perpendicular to the tangent \(AB\). So, \(\angle RPB\) \(=\) \(90^{\circ}\). \(\Rightarrow\) \(\angle RPQ\) \(+\) \(\angle QPB\) \(=\) \(90^{\circ}\) …… \((1)\) Consider the triangle \(RPQ\). We know that: Angle in a semicircle is \(90^{\circ}\). By the theorem, we have \(\angle RQP\) \(=\) \(90^{\circ}\). …… \((2)\) Thus, \(RPQ\) is a right-angled triangle. The sum of the two acute angles in the right-angled triangle, is \(90^{\circ}\). \(\Rightarrow\) \(\angle QRP\) \(+\) \(\angle RPQ\) \(=\) \(90^{\circ}\) …… \((3)\) From equations \((1)\) and \((3)\), we have: \(\angle RPQ\) \(+\) \(\angle QPB\) \(=\) \(\angle QRP\) \(+\) \(\angle RPQ\) \(\Rightarrow\) \(\angle QPB\) \(=\) \(\angle QRP\) …… \((4)\) We know that: Angles in the same segment are equal. By the theorem, we have \(\angle QRP\) \(=\) \(\angle PSQ\). …… \((5)\) From equations \((4)\) and \((5)\), we have: \(\angle QPB\) \(=\) \(\angle PSQ\). …… \((6)\) Therefore, statement (i) is proved. It is observed that \(\angle QPA\) and \(\angle QPB\) are linear angles. We know that: The linear pair of angle are always supplementary. \(\Rightarrow\) \(\angle QPA\) \(+\) \(\angle QPB\) \(=\) \(180^{\circ}\) …… \((7)\) Consider the cyclic quadrilateral \(PSQT\). The sum of opposite angles of a cyclic quadrilateral is \(180^{\circ}\). \(\angle PSQ\) \(+\) \(\angle PTQ\) \(=\) \(180^{\circ}\). …… \((8)\) From equations \((7)\) and \((8)\), we have: \(\angle QPA\) \(+\) \(\angle QPB\) \(=\) \(\angle PSQ\) \(+\) \(\angle PTQ\) Substitute equation \((6)\) in the above equation. \(\Rightarrow\) \(\angle QPA\) \(+\) \(\angle QPB\) \(=\) \(\angle QPB\) \(+\) \(\angle PTQ\) \(\Rightarrow\) \(\angle QPA\) \(=\) \(\angle PTQ\) Therefore, statement (ii) is proved. In the figure, \(AB\) is the tangent, and \(CF\) is the chord of the given circle. Next, find the value of \(x\). Given, \(\angle BCF\) \(=\) \(57^{\circ}\). By the Alternate Segment Theorem, \(\angle BCF\) \(=\) \(\angle CDE\). Thus, \(x\) \(=\) \(57^{\circ}\). Therefore, the value of \(x\) \(=\) \(57^{\circ}\). This theorem is also known as Tangent - chord theorem.
{"url":"https://www.yaclass.in/p/mathematics-state-board/class-10/geometry-11420/circles-and-tangents-13076/re-af154a93-9852-4208-86ca-4ab4085fbc29","timestamp":"2024-11-13T21:12:42Z","content_type":"text/html","content_length":"55736","record_id":"<urn:uuid:1f843d79-c239-4210-896b-394598b22f3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00084.warc.gz"}
Doomsday Protocol — A Review As Gregor Samsa awoke one morning from uneasy dreams he found himself stuck within a barren wasteland. Apparently, he has time travelled to the year 2637, and the day was March 14. The marvellous infrastructure that was once human civilization has been flattened and the Earth reduced to an inhospitable toxic wasteland five centuries prior when a nuclear war broke out between powerful and uncompromising nations. Being Gregor Samsa, he needs to figure out what day of the week it is so as to determine his work schedule. However, our friend is in a bit of a plight here. All electronic devices are gone, and any calendar that might have the year 2637 on it for whatever reason has also decayed over the past few hundred years. Gregor remembers that “yesterday,” September 28, 1908 was a Monday. He could count his way up to the current day, but that would definitely take too long, and he would be late for his nonexistent train should it happen to be a weekday, god forbid. Moreover, Gregor didn’t pay attention in school when his math teacher went off on a tangent about the genius behind the Gregorian calendar system. Austria had adopted this more accurate system in 1583, long before Gregor was born.^2 At least Gregor wouldn’t have to remember that the October revolution happened in November. Calendar Systems 101 Since Gregor didn’t pay attention in school when his math teacher ranted about the Gregorian calendar (despite the similarities between their names), Gregor probably doesn’t remember how leap years work in Gregorian either. Here’s a recap: Here’s a quantity that will become important: a tropical year is approximately 365.24219 days long.^3 Clearly, having 365 days in a year is a little bit too short, and 366 days a little bit too long. These errors add up over time and mess up our calendar, shifting seasons and causing unintended consequences. Therefore, a fix is needed. We could first try to use a combination of 365-day-years and 366-day-years to even out the error, which is exactly what both calendars do. • For both Julian and Gregorian calendars, every four years (at exact multiples of 4, such as 2000, 2004, 2008, etc.) a leap day is inserted as February 29 making the year a leap year. □ This results in the average year being $ 365 + 1/4 = 365.25 $ days long, which is a slight overshot, although already much better than before. However, as we know, the Gregorian calendar continues to make optimizations. • In the Gregorian calendar, every hundredth year (or every 25th leap year) the leap day is removed, making years such as 2000, 2100, 2200, and so on no longer leap years. □ With this tweak, the average year is now $ 365 + 1/4 - 1/100 = 365.24 $ days long. But now, we are on average underestimating the length of the year. So, logically, we would have to add some • In the Gregorian calendar, every four hundredth year, the leap day is once again added back, making years such as 1600, 2000, and 2400, which were previously stripped of their leap days, leap years once again after returning their leap days. □ Finally, the average year is now $ 365 + 1/4 - 1/100 + 1/400 = 365.2425 $ days long. Is That Accurate Enough? For the purposes of a functional society, this level of precision is acceptable, as we won’t have to deal with any significant shifting for the forseeable future. Moreover, at larger time scales, Earth’s speed of rotation on its own axis (i.e. the length of a day) changes as well, so we’ll just have to see in a myriad years what to change about the Gregorian system. Hopefully, by then, they would have gotten rid of weekdays and just given us weekends every day instead.^4 Something Crazy to Ponder Matt Parker’s Stand-up Maths channel has a video on YouTube that presented a leap year system deriving from the binary representation of each year and is more accurate (although less practical) than the Gregorian calendar. Gregor’s Solution: The Doomsday Protocol Since we learned that the Gregorian calendar follows a small set of rules strictly, we could try to come up with a formula that allows us to find the weekday given any date (YYYY-MM-DD). The solution we get is oftentimes known as the Doomsday Protocol, a fitting name. First, notice that between centures (e.g. between 1900 and 2000, or between 2600 and 2700), none of the years in between are multiples of 100. For convenience, let’s associate each weekday with a number in $ \mathbb{Z}/7\mathbb{Z} $ (the integers modulo 7). Weekday Number Sunday 0 Monday 1 Tuesday 2 Wednesday 3 Thursday 4 Friday 5 Saturday 6 Notice that all the weekdays are just assigned to their intuitive value except Sunday is assigned to 0. Hopefully, this also satisfies those who feel strongly about Sunday being the first day of the week. Now, we should take note that $ 365 % 7 = 1 % and $ 366 % 7 = 2 % (364 is a multiple of 7). This means that each normal year, the weekdays slide over by 1, and each leap year the weekdays slide over by 2. So, for example, if March 14th, 2008 was on a Friday (which it is), then March 14th, 2009 would be on a Saturday. Whereas if March 14th, 2003 was on a Friday, then March 14th, 2004 would be on a Sunday. Note that in the first case no leap days passed in bewteen the two dates, whereas in the second case there was the date February 29, 2004. Summarizing this idea, once we know the weekday of a particular day of a year, then we can deduce the weekday of that same day of a different year by counting the years elapsed, then counting the number of leap days elapsed, summing them and taking them to modulo 7 (the remainder when they are divided by 7). Generally, since we start from centuries and end on the next, our year code would be \[X + \left\lfloor \frac{X}{4} \right\rfloor\] Here I would like to make the disclaimer that this post isn’t really meant to teach the Doomsday protocol. Rather, I am just replicating the method I learned and bringing light to some of its methodologies or justification. As for the centuries (first two digits), you will have to add a century code to your calculation to account for initial offset. These are trickier to calculate, but they repeat every four years, so they are probably worth memorizing. These codes, also known as anchors, are summarized in the table below. Centuries Anchor Code …,1600,2000,2400,… 2 …,1700,2100,2500,… 0 …,1800,2200,2600,… 5 …,1900,2300,2700,… 3 The Easy Dates Method This was the method that I first learned many years ago (and still remember to this day). At this point, most methods are just about how much you memorize and how much you calculate. Much like designing algorithms for machines to run, human calculation methods often see a tradeoff between memorization and speed-delay from real-time calculation. This method definitely belongs on the lighter-weight side. Once we have the weekday of a specific date of the prevalent year, we could shift around to find other dates.^5 However, shifting across one or more months is highly prone to error. Hence, if we could remember a day of each month whose weekday is equivalent to the code we obtain from calculating the anchor code and years formula. This method does exactly that: giving us twelve relatively easy days to remember as they are related to one another. These are the dates circled in the above calendar. Notice that they all land under Monday (1), and the year 2022 corresponds to the code 2 + 22 + 22//4 = 2 + 22 + 5 = 29 = 1 (mod 7)–this adds up! Important “Doomsdays”: • January and February: both on the last day of the month. • March: Pi day: on the 14th. • Even months, April through December: Amazingly, 4/4, 6/6, 8/8, 10/10, and 12/12 all satisfy our requirement. • Remaining odd months: split into two pairs: 9/5 and 5/9, then 11/7 and 7/11.^6 Don’t forget: when we are computing a date in January or February of a leap year, we always have to subtract the extra leap day we added as it has not happened yet! The More Efficient Method Of course, my research would lead me to more efficient methods that hobbyists and professional memory-sport athletes have devised over the years. I have to note that there are many similar if not equivalent systems out on the internet, since the choices we make and dates we set to be our “anchors” in general are arbitrary (we only choose those that seem easy to remember). I’ve found that most efficient methods, in the interest of time, replaced the date-finding system I presented above with another series of “month codes” for the user to memorize. Once these codes have been memorized, the doomsday protocol could be reasonably computed for most dates within 5 seconds. Here is the list of month codes: Month Anchor Code January 4 February 0 March 0 April 3 May 5 June 1 July 3 August 6 September 2 October 4 November 0 December 2 Just as in the case with century anchor codes, where we needed to somehow memorize 2053, for these codes we just have to memorize 400351362402. I prefer doing this by splitting them into threes; however I don’t really have a mnemonic method. If you use one, you might have to account for that as well. As long as you can recall these values on demand, then you are all set! A Demonstration and Verbatim Now, Gregor is prepared to discover the answer his long-awaited query. He checks that the century 2600 has the anchor code 5, and that the number of leap days passed since 2600 is 37//4 = 9. Doing a bit of simplification as we add these numbers, we find that 5 + 34 + 9 = 5 - 1 + 2 = 6 (mod 7) It just so happens that Gregor’s current date is Pi day, one of our Doomsdays from the Easy Dates Method. Hence, we get the answer that March 14, 2637 is a Saturday, and poor Mr. Samsa finally gets a break, although there isn’t much to do outside. Note that the Easy Dates Method could be inefficient when the date given is far away from the Doomsday of that month. Using the month-anchored method, which is usually faster, we take the month anchor 0 and date 14 to directly obtain the answer: 5 + 34 + 9 + 0 + 14 = 5 - 1 + 2 = 6 (mod 7) And voilà, there we have it! Update: A friend of mine recently learned this method and remarked that she could not stop calculating weekdays while working on her urgent history essay. If the same is happening for you, my advice would be to just calculate all the weekdays and then write them down next to the dates so you won’t feel tempted to calculate them again. Also, always ask yourself whether you’re sure that this date is in the Gregorian calendar before applying the formula.
{"url":"https://dzhu.page/mathematics/doomsday-protocol-a-review/","timestamp":"2024-11-04T00:32:10Z","content_type":"text/html","content_length":"40445","record_id":"<urn:uuid:a0f66853-77bc-4c85-8605-a1fb52599e0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00419.warc.gz"}
Harmonic Progression (HP): Definition, Formula with Examples What is Harmonic Progression? A sequence of numbers is said to be a harmonic progression if the reciprocal of the terms is in arithmetic progression. If a,b,c,d,e,f is in arithmetic progression then harmonic progression can be written as 1/a, 1/b, 1/c, 1/d, 1/e, 1/f. What is Harmonic Progression Formula? If the arithmetic progression is written in the form: \(a,\ a+d,\ a+2d,\ a+3d,\dots a+\left(n-1\right)d\)Then the harmonic progression formula is as follows: \(\frac{1}{a},\ \frac{1}{a+d},\ \frac{1} {a+2d},\ \frac{1}{a+3d},\dots\) What is Sum of Harmonic Progression Formula? The sum of n terms in Harmonic Progression is\(\text{For }\frac{1}{a},\frac{1}{a+d},\frac{1}{a+2d},\dots.,\frac{1}{a+(n-1)d}\)\(S_n=\frac{1}{d}\ln\left(\frac{2a+\left(2n−1\right)d}{2a−d}\right)\) What is Harmonic Mean? Harmonic mean serves to find multiplicative or divisor relations among fractions without worrying about common denominators. What is Harmonic Sequence? A sequence of numbers is said to be in harmonic sequence if the reciprocals of all the elements/numbers/data of the sequence form an arithmetic sequence.Harmonic sequence: \(\frac{1}{a_1},\frac{1}
{"url":"https://testbook.com/maths/harmonic-progression","timestamp":"2024-11-08T14:00:50Z","content_type":"text/html","content_length":"864287","record_id":"<urn:uuid:c473d91e-8f3b-492a-9f2b-5142064bb03a>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00160.warc.gz"}
What is extraneous solution? + Example What is extraneous solution? 1 Answer If you are given a problem A to solve, you may convert it by a set of steps into a problem B which is easier to solve. Some of the solutions of problem B may be solutions of the original problem, but some may not. The ones which are not are known as extraneous solutions. This can happen if you multiply an equation through by an expression that may take the value $0$, or if you square both sides of an equation, etc. For example, suppose you are asked to solve: $\frac{{x}^{2} - 4}{x - 2} = 0$ Multiplying both sides of the equation by $\left(x - 2\right)$ you get: ${x}^{2} - 4 = 0$ which has solutions $x = 2$ and $x = - 2$ The value $x = - 2$ is a solution of the original equation, but $x = 2$ is not since it results in division by $0$. Impact of this question 20724 views around the world
{"url":"https://socratic.org/questions/what-is-extraneous-solution","timestamp":"2024-11-07T12:11:39Z","content_type":"text/html","content_length":"34213","record_id":"<urn:uuid:41c08234-c13f-4ca7-ab33-d576a1bf03cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00183.warc.gz"}
Sound Speed and Poisson’s Ratio Calibration of (Split) Hopkinson Bar via Iterative Dispersion Correction of Elastic Wave A process of calibrating a one-dimensional sound speed (c[o]) and Poisson’s ratio (ν) of a (split) Hopkinson bar is presented. This process consists of Fourier synthesis and iterative dispersion correction (time-domain phase shift) of the elastic pulse generated by the striker impact on a circular bar. At each iteration, a set of c[o] and ν is assumed, and the sound speed versus frequency (c [dc] versus f[dc]) relationship under the assumed set is obtained using the Pochhammer–Chree equation solver developed herein for ground state excitation. Subsequently, each constituting wave of the overall elastic pulse is phase shifted (dispersion corrected) using the c[dc]–f[dc] relationship. The c[o] and ν values of the bar are determined in the iteration process when the dispersion-corrected overall pulse profiles are reasonably consistent with the measured profiles at two travel distances in the bar. The observed consistency of the predicted (dispersion-corrected) wave profiles with the measured profiles is a mutually self-consistent verification of (i) the calibrated values of c[o] and ν, and (ii) the combined theories of Fourier and Pochhammer–Chree. The contributions of the calibrated values of c[o] and ν to contemporary bar technology are discussed, together with the physical significance of the tail part of a traveling wave according to the combined theories. A preprocessing template (in Excel^®) and calibration platform (in matlab^®) for the presented calibration process are openly available online in a public repository. Issue Section: Research Papers 1 Introduction Precise calibration of the one-dimensional (1D) sound speed ( ) and Poisson’s ratio ( ) of a circular bar is essential in using the split Hopkinson bar (SHB) [ ] and Hopkinson bar (HB) [ ]. In the case of SHB experiments, the specimen properties are generally determined using the following equations [ , and are the nominal stress, nominal strain, and nominal strain rate of the specimen, respectively; is the reflected pulse strain recorded in the incident bar; is the transmitted pulse strain recorded in the transmitted bar; denote the initial cross-sectional area and initial length of the specimen, respectively; , and denote the cross-sectional area, elastic modulus, and sound speed of the bar, respectively; is the time. These notations are explained here (instead of in Nomenclature) as the above equations are unused in the dispersion correction (dc) process. According to Eq. , an accurate value of is necessary for the precise measurement of specimen stress because is given as , where is the bar density. In Eqs. , an accurate value of is also essential for accurate measurement of the strain rate and strain. When dispersion correction (dc) is applied to , the necessity of precise calibration of both arises (which will be described below for the HB application). Therefore, for the accurate measurement of specimen properties using SHB, precise calibration of is fundamental. In the case of the HB, the wave profiles at the impact-entering end surface of the bar are obtained via dispersion correction (dc) [17,–31] of the measured wave profile at the interim axial position of the bar, which necessitates a series of sound speeds at a range of frequencies. This sound speed (c[dc]) versus frequency (f[dc]) relationship, called the dispersion relationship, can be obtained by solving the Pochhammer–Chree equation (PCE) [18,32–41]. References [37,38] expressed the PCE in terms of the normalized frequency (F), normalized sound speed (C), and Poisson’s ratio (ν). The solver in [37,38] subsequently solves the PCE first at arbitrary F values to obtain the (F,C) matrix by solely using the Poisson’s ratio (ν) information. Then, it finally obtains the PCE solutions, i.e., (F[dc,]C[dc]) matrix, at exact F values (F[dc] = af[dc]/c[o]) necessary for dispersion correction (dc), which are determined using the information of the one-dimensional sound speed (c[o]) and bar radius (a). Therefore, the precise calibration of c[o] and ν for the HB and SHB is fundamental for the accurate measurement of a wave profile using HB (and SHB) via dispersion correction (dc). Despite the importance of calibrating the properties (c[o] and ν) of SHB and HB, only a few studies pursued precise calibration of them. For instance, Ref. [39] calibrated them using a limited number of frequencies involved in wave profile. However, the calibration based on a thorough dc that utilized all involved frequencies were rarely performed. Furthermore, to the best of the author’s knowledge, the verification of predicted (dispersion-corrected) wave profiles at certain travel distances compared with measured profiles was also rare, which can occur only when the combined theories of Fourier and Pochhammer–Chree (PC) used in the dc are correct. Consequently, (i) this study presents a procedure and tool for the calibration of c[o] and ν based on the iterative dc of an elastic wave using the combined theories. A preprocessing template (prepared in Excel^®) and calibration platform (written in matlab^®) are available online in a public repository [41]. Simultaneously, (ii) this study pursues the mutually self-consistent verification of the combined theories by demonstrating the coincidence of the predicted (dispersion-corrected) and measured wave profiles at two travel distances in a circular bar. 2 Literature Survey 2.1 Dispersion Correction in Bar Technology. HB [17–31] has traditionally been used to measure a transient pulse generated by the impact of a near-field blast or bullets. Conversely, SHB, which is also called the Kolsky bar [1–16], has been used extensively to measure dynamic material properties such as the stress–strain and strain rate–strain curves of versatile materials at strain rates of approximately 10^2–10^4 s^−1. These curves, together with the accurately extracted quasi-static material properties [42–44], are generally used to calibrate a strain rate-dependent constitutive model [45,46], which is indispensable for the simulation of the dynamic deformation behavior of solids and structures [47–54]. The shape of the elastic wave in SHB and HB distends with travel; this phenomenon is called dispersion. The physical origin of dispersion from the viewpoint of medium particle motion is inertia in the lateral motion associated with the axial disturbance. From the viewpoint of the wave propagation, a high-frequency wave component that constitutes the overall elastic wave is sluggish compared with the wave component with a lower frequency. The wave profile is generally measured at the interim axial position of the bar. In the case of HB, the front surface of the bar is the location of interest where an impact pulse enters the bar, whereas the specimen location is of interest in the case of SHB. Therefore, the measured wave profile in SHB and HB needs to be corrected to obtain the wave profiles at the locations of interest, which is a process called dc [18,27–30,55–66]. 2.2 Combined Theories of Fourier and Pochhammer–Chree. Before performing dc, the measured wave profile at a certain position needs to be modeled mathematically using Fourier’s theory in terms of a series of sinusoidal wave components with a range of frequencies; this process is called Fourier synthesis. Subsequently, the phase of each wave component of the Fourier-synthesized function is shifted to predict the overall wave profile at a given travel distance. To shift the phases of the wave components that constitute the overall elastic pulse, a series of sound speeds over a range of frequencies must be known a priori. The c[dc] versus f[dc] relationship can be obtained by solving either the Rayleigh–Love equation (RLE) [23,67–69] or PCE [18,32–41]. The former (RLE) is the 1D wave equation with lateral inertia correction, whereas the latter (PCE) is the full 3D wave equation of motion that inherently accounts for lateral motion. The solutions (C versus F relationship) of both equations for a Poisson’s ratio of 0.29 were well documented by Kolsky [70] and Graff [71], which illustrated the deviation of the RLE solution from the PCE counterpart as the frequency increased. Thus, using PCE solutions predominantly for dc seems natural, as observed in previous studies [18,27–30,55–66]. Like any theory, the PC theory needs experimental verification. It was originally derived for standing waves in the bar extending from minus to plus infinity, which inherently considered up to an infinite wavelength (a/Λ = 0). Therefore, the applicability of PCE to transient waves in a bar of finite length with definite bar ends requires verification. Furthermore, the Fourier synthesis and phase shift of the elastic pulse recorded at discrete time points are performed under the premise of a finite time period with fundamental frequency, which introduces a limit in the wavelengths of the constituting waves (Λ = c/f). Therefore, the applicability of the combined theories of Fourier and PC to the dc of transient waves in SHB and HB requires verification. 2.3 Usage and Verification of the Combined Theories. As regards studies on SHB applications, idealized trapezoidal and/or rectangular pulses were often considered as the original (reference) pulse in Refs. [18,56,62,66] to predict the shape of traveled pulses at a given travel distance; the predicted wave profile could not be verified in essence because of the assumption of the idealized shapes as the original pulse. Wang and Li [65] obtained the dispersion-corrected signal at the specimen position without verification. In some studies [58,59,61], the dispersion-corrected profile after traveling a certain distance was compared with the measured profile, which demonstrated qualitative consistency. In the foregoing studies, information on the c[o] and ν values and/or their determination method were unavailable, limiting the rigorous verification of the dispersion-corrected profile. In most studies on the dc of SHB signal [55–66], less fluctuating stress–strain curves of SHB specimens were demonstrated as a result of using unverified dispersion-corrected wave profiles. In the study by Davies on the use of HB [18], the strain pulse generated by impact loading was shown to disperse with travel and develop oscillations in the plateau and tail of the traveling pulse. The periods of oscillations determined using a few oscillations were qualitatively consistent with the combined theory prediction with notable data scatter (Fig. 25 in Ref. [18]). Thenceforth, a number of studies have determined physical quantities (such as wave velocity and arrival time of waves) from a few oscillations based on visual inspection. In the case of Oliver [19], a few phase velocities and group velocities were determined from the wave oscillations and compared with PC theory-predicted velocity versus period curves; qualitative consistency was observed. Curtis et al. [20 –22] interpreted the different types of oscillations to be caused by up to sixth-mode PC vibrations based on a qualitatively drawn frequency versus arrival time diagram. The qualitative consistencies of the wave velocities and arrival times in previous studies [18–22] essentially have limitations as a verification of the PC theory because the coincidence of the predicted wave profiles themselves with the experiment was unavailable. Lee et al. [24,25] quantified the frequency versus arrival time map (an intensity map) via the Gaussian-windowed Fourier transform of the experimental wave profile in HB. Their intensity map was qualitatively consistent with the predicted curve based on the PC theory (Fig. 4 in Ref. [25]). They also compared the predicted values of the phase and group velocities with their experimental counterparts (Fig. 5 in Ref. [25]), which also showed a qualitative coincidence but with a notable discrepancy. Yew and Chen [26] presented the feasibility of the fast Fourier transform method in analyzing wave motion generated by striker impact based on the assumptions of material properties and waveform, which limited the verification of the PC theory. While Barr et al. [28] predicted the incident wave profiles to a HB under different assumptions using a rigorous treatment including magnitude correction [27,28,65], the predicted wave profiles themselves were compared without The most rigorous and clearest verification of the combined theories or their applicability may be the direct verification of the predicted (dispersion-corrected) wave profile with reference to the measured profile at a sufficiently traveled distance. As observed earlier and to the best of the author’s knowledge, no direct verification of the applicability of the combined theories existed. As mentioned, only when both theories are correct, the predicted wave profile using them coincides with that of the experiment. Two reasons may cause the unavailability of direct verification of the predicted (dispersion-corrected) wave profile. First, the sound speed and Poisson’s ratio of the bar could not be precisely calibrated because of the absence of an appropriate calibration tool. Therefore, in reality, the manufacturer-provided literature values of c[o] and ν have been more readily available than the calibrated ones, although the determination method of the former has hardly been disclosed. As regards Poisson’s ratio, its value was often assumed to be the values (e.g., 0.29 or 0.30) for which the PCE solutions were available in the existing studies [18,36], or the bar with such a manufacturer-provided value was selected. The second cause may be the difficulty in obtaining PCE solutions specific to the researcher’s bar with unique values of c[o,]ν, and a. The PCE has an infinite number of solution branches and is cumbersome to handle. No analytical solution was derived, and the numerical solutions (c versus f relationship) were not easily obtained as well owing to the complicated, especially twisted nature of the PC function surfaces [37]. Before Ref. [37], PCE solutions in the table form were available [18,36] only for limited values of Poisson’s ratio in the limited frequency ranges. Furthermore, the table solutions were provided at unnecessary frequencies for dc because the necessary frequency values can be determined only with information on the bar material (c[o]) and bar radius (a) of the user [37]. (Examples of determining the necessary frequencies for dc of the user are available in Ref. [37].) Therefore, unless otherwise specified in existing studies, the table solutions were first interpolated to the solutions for a specific Poisson’s ratio (ν) of the researcher’s bar material, followed by further interpolation of the formerly interpolated solutions to the solutions at the necessary frequencies for dc for a given bar material (c[o]) and bar radius (a). The use of the available solutions is further limited in that some of the solutions in Ref. [36] differ from the solver developed herein, which will be presented subsequently (Sec. 3.3). Only a few studies [29,30,66] independently obtained PCE solutions using in-house (closed source) schemes and presented their solutions for a few Poisson’s ratio values in graph forms rather than table values, which limited the application of the solutions. 2.4 Strategy: Bar Property Calibration Together With Verification of the Combined Theories. For the direct verification of the predicted (dispersion-corrected) wave profile with experiment at a given travel distance, two issues must be resolved. First, the bar properties (ν and c[o]) should be precisely calibrated in advance. Second, a precise c[dc] value must be available at the exact frequencies necessary for dc (f[dc]) for a given bar material (ν and c[o]) and bar radius (a). Regarding the second issue, because of the availability of the open-source solver in a recent study [37], it is now possible to obtain accurate PCE solutions for a wide range of ν values (0.02 ≤ ν ≤ 0.48) with down to three decimal places. This solver essentially satisfies most of the dc needs of the (split) Hopkinson bars because the Poisson's ratios of the bar materials are generally reported down to the second decimal place. However, to obtain the c[dc] versus f[dc] relationship, c[o] and ν information is necessary as mentioned in Introduction, which returns the second issue of obtaining the c[dc] versus f[dc] relationship to the first issue of obtaining the calibrated values of the bar properties (c[o] and ν). However, precise calibration of both ν and c[o] of the bar is never a simple task. The solution to handling the forgoing fastidious issues is to predict the traveling wave profile for a range of (ν, c[o]) sets. That is, to predict the wave profile iteratively by assuming a range of (ν, c[o]) sets. If a (ν, c[o]) set is determined, which results in a predicted wave profile that reasonably coincides with the experimentally measured wave profile at a given travel distance, the (ν, c[o]) set in such a moment in the iteration process is the precisely calibrated bar property. This result of iterative dc; that is, the coincidence of the two profiles, if obtained, is the mutually self-consistent verification of (i) calibrated values of ν and c[o] and (ii) combined theories of Fourier and PC, because such coincidence will occur only when the combined theories are correct and, simultaneously, the calibrated values (ν and c[o]) are accurate. Accordingly, an iterative dc is performed herein to investigate the existence of a (ν, c[o]) set that can result in a wave profile that reasonably coincides with the traveling wave. For this purpose, an exclusive PCE solver was first developed because the iteration process massively requires accurate PCE solutions for arbitrary ν values down to six decimal places (as will be explained later), whereas the reliable operation of the existing solver in Ref. [37] was verified for ν values down to only three decimal places. 3 Exclusive Pochhammer–Chree Equation Solver for Ground State Excitation 3.1 Necessity of an Exclusive Solver. This section presents an exclusive PCE solver that can be used in the iterative dc process later in this study to calibrate the bar properties (c[o] and ν). The two characteristic features of PCE solutions necessary for such a purpose are as follows. First, only the PCE solutions for the first excitation state (n = 1) are necessary because the striker impact on the bar usually excites the particle vibration to the first excitation state [18,56,72] (this point will be subsequently verified). Second, although only the PCE solutions for n = 1 are necessary, solutions for arbitrary Poisson’s ratio values down to six decimal places are needed because most of the general-purpose optimization algorithms, such as “fminseach” in matlab^®, generally change the value of Poisson’s ratio to six decimal places during the iterative optimization process of ν and c[o]. Therefore, for a PCE solver to be used in an iterative dc algorithm, it must be able to reliably provide massive PCE solutions for arbitrary Poisson’s ratio values to six decimal places. Considering the mentioned characteristic features of PCE solutions for the iterative dc process, it was decided to develop an exclusive PCE solver for n = 1 for the following reasons. First, the existing solver for up to n = 20 [37] was verified to function reliably only at 0.001 intervals of Poisson’s ratio (only down to three decimal places). Second, if the existing solver is employed in the bar property calibrator program (which will be explained later), a considerable portion of the existing solver for n ≥ 2 is not used for bar property calibration, whereas the unused portion is intrigued with the overall calibrator program. In a separate trial, the existing solver made the overall bar property calibrator program cumbersome and difficult to modify. Third, thus far, the need for PCE solutions for n = 1 for the dc of SHB signals has been very high compared with the solutions for n ≥ 2, necessitating a more handy but reliable solver for n = 1. In this section, we present an exclusive PCE solver for n = 1 that can reliably provide PCE solutions at arbitrary Poisson’s ratio values down to six decimal places in an iterative dc process. Compared with the existing solver (up to n = 20), the exclusive solver herein should be more robust, straightforward, and ease understanding of the overall bar property calibrator program for any modification. The proposed solver will be verified with reference to the table solutions in existing studies [18,36] for different values of Poisson’s ratio. When used in an iterative dc process, its reliability is verified in Sec. 5. 3.2 Solution Scheme. Reference [ ] addressed the benefit of the Bancroft version [ ] of the PCE in the solution process and expressed it using physics-friendly non-dimensional variables ( is the value of the PC function; are the Bessel functions of the first kind of order zero and one, respectively. The solver herein solves Eq. , as in the existing solver [ ]: the proposed solver determines the values in a range of values for a given value when = 0. As illustrated in Ref. [37], the PCE has an infinite number of function surfaces with versatile slopes, which leads to an infinite number of C–F solution curves (dispersion curves) that make the G value zero. The schematic of the dispersion curve for the first excitation mode (n = 1) is illustrated in Fig. 1. Similar to the existing solver [37], this study is based on an iterative root-finding process. However, in the current solver, the search algorithm for C solutions starts from the F value of zero in the C–F domain, as illustrated in Fig. 1, whereas the existing solver [37] sought solutions from the intermediate F value (6 ≤ F ≤ 11 depending on the Poisson’s ratio). The consequences of the mentioned differences are discussed later (Sec. 3.4). The scheme of the current solver is shown in Fig. 1. The proposed solver utilizes the trivial solution of C[t] = 1 at F[t] = 0 as the first initial solution. The second initial solution (C[1]) is determined at F[1] (= 1 × dF = 0.001). Then, the PC function value (G) is monitored at F[1] for a range of C values from 1 to 0.99 at C intervals of dC = 1 × 10^−6. Once the sign of the PC function value (G) changes, the C values before and after the sign change are taken as C[hb] and C[lb,] respectively. C[1] is subsequently determined using the bisection method. Note that the trivial solution (C[t] = 1 at F[t] = 0) is unnecessary in dc. Once two initial solutions (C[t] and C[1] at F[t] and F[1,] respectively) are available, the current solver utilizes two previous solutions C[i-2] and C[i-1] at F[i-2] and F[i-1,] respectively, to predict the approximate solution (C[a]) at current F[i] via linear extrapolation (subscript i is the index of F and C; subscript a denotes “approximate”). It finally determines C[i] at the independent variable F[i] via the bisection method using two bound values (C[hb] and C[lb]), which process is described in Ref. [37]. The determined C value (C[i]) is accurate down to the ninth decimal place, as the tolerance limit in the bisection method is set herein as 1 × 10^−10. In general, the determined (F, C) matrix differs from the (F[dc,]C[dc]) matrix which is used for dc (see Ref. [37]). The (F, C) matrix is typically obtained at finer F intervals (dF = 0.001) than the necessary interval (dF[dc]) of the (F[dc,]C[dc]) matrix. If necessary, the user can adjust the dF value of the proposed solver (the dF[dc] value is determined based on the need for dc). Once the (F, C) matrix is obtained (Fig. 1), the proposed solver determines the (F[dc,]C[dc]) matrix by linearly interpolating the (F, C) matrix at each F[dc] component. The schematic of the linear interpolation process is illustrated in Fig. 2. In Fig. 2, subscript k is the index of F[dc] and C[dc]. The proposed solver first obtains the approximate solution C[a] at the current F[dc,k] via the linear interpolation of the C solutions at F[i-1] and F[i,] between where the current F[dc,k] is located. Subsequently, C[dc,k] values at F[dc,k] are searched at intervals of dC[dc] = 1 × 10^−6 until C[dc_hb] and C[dc_lb] are found, where the PC function (G) values have different signs. Next, the C[dc,k] solution at the current F[dc,k] is obtained via the bisection method using C[dc_hb] and C [dc_lb] values as the two bounds for the solution. The above algorithms (illustrated in Figs. 1 and 2) consist of the main part of the solver and two subroutines, as illustrated in Fig. 3. Briefing their roles, the main part performs the overall procedures mentioned above (summarized in Fig. 3), and calls the subroutines when in need of the PC function value (subroutine “pcf”) or the C (or C[dc]) value determined using the bisection method (subroutine “bisectc”). The main part provides ν, C (or C[dc]), and F (or F[dc]) values to the subroutine “pcf”, and ν and two bound values of C (or C[dc]) at a given F (or F[dc]) to the subroutine “bisectc”. The aforementioned algorithms were implemented in matlab^® software (PCE_solver_n1.m), which is available online [41]. The current solver writes the finally obtained (F[dc,]C[dc]) matrix to “Cdc-Fdc.xlsx” file. 3.3 Solver Verification. The reliability of the proposed solver is verified by comparing the obtained solutions (accurate to the ninth decimal place) with those studied by Bancroft [36] (available to the fifth decimal place) for a range of Poisson’s ratios. Figure 4 compares the current solutions with those in Bancroft [36]. To avoid spatial crowding, Fig. 4 is divided into two parts depending on the values of Poisson’s ratio: Fig. 4(a) for ν = 0.1, 0.2, 0.3, and 0.4, and Fig. 4(b) for ν = 0.15, 0.25, and 0.35. The C values in Ref. [36] were listed in table at L (= a/Λ) intervals of 0.025 or 0.1, which were transformed herein into C–F domain solutions; the results are presented in Fig. 4 as open circles. In this figure, the intervals in Bancroft’s data are irregular in the F-axis owing to the transformation from L to F (F = CL). Figure 4 shows the current solutions obtained at constant F intervals of 0.01 (marked as “×”). C solutions at regular F intervals are necessary for dc. In Fig. 4(a), the current solutions are consistent with those in Bancroft [36] for ν = 0.1, 0.2, 0.3, and 0.4, which verifies the current solver. In Fig. 4(b), the consistency of the current solutions with those in Bancroft [36] is observed for ν = 0.25 and 0.35, which also verifies the current solver. However, a notable discrepancy is observed for ν = 0.15. In separate trials, the solver in Ref. [36] resulted in the same C values as the current solver, down to nine decimal places of C for all values of Poisson’s ratio considered in Fig. 4. The C solutions for ν = 0.15 in Bancroft [36], especially in the range of approximately 0.25 ≤ F ≤ 0.4 need to be further verified by other solvers. 3.4 Benefit of the Exclusive Solver. The C vs. F solution curves for n ≥ 2 are unavailable in the low F regime (below the cut-off F values) [37]. Under such circumstances, the existing solver [37] obtained the C–F solutions using the initial two solutions at two intermediate F values that belong to the range of 6 ≤ F ≤ 11 because all the solution curves up to the 20^th order are available and can be suitably determined in this F range. If the C solution is searched (with finite C intervals) at an overly higher F value than the ones in this F range, the C solution is evanescent, which gnaws the robustness of the PCE solver. For the first excitation state (n = 1), the dispersion curve is available down to F = 0 (no cut-off F value). Therefore, the exclusive PCE solver herein employs the trivial solution (C[t] = 1 at F[t] = 0) as an initial solution and determines another initial solution in the vicinity of the trivial solution. The characteristic features of the current algorithm are as follows. The necessity of determining only one initial solution in the vicinity of the trivial solution accelerates the overall solver as compared with the existing solver [37]. Second, because the slope of the PC function surface varies slightly in the vicinity of the trivial solution (F = 0), higher reliability is imparted in determining the initial solution than in the case where the initial solutions are determined at increased F values [37]. Finally, the succinct nature of the current solver avoids any unfavorable routine in the complicated solver for n ≤ 20 in Ref. [37]. The aforementioned characteristics of the current solver, and the flexibility in setting the dF value (as in the existing solver [37]), would contribute to the robustness of the proposed exclusive solver for n = 1. The reliability of the current solver when used in a general-purpose optimization algorithm, that requests massive C–F solutions for various ν values down to six decimal places, is verified later (Sec. 5). Once the bar properties are calibrated using our calibrator program (dispersion_correction_iteration.m [41]), the calibrated value of Poisson’s ratio of the bar can be truncated to three decimal places. Then, the existing solver in Ref. [37] can be used to obtain PC solutions up to n = 20, which are specific for the calibrated property of the researcher’s bar. A higher-order dc using the solver in Ref. [37] for up to n = 20 is worthwhile only when the (S)HB properties are well calibrated using the solver herein for n = 1. Therefore, the usage of the solver presented in Ref. [37] for up to n = 20 is supplemented by the current solver for n = 1 [41]. 4 Bar Property Calibration 4.1 Experiment. The specification of the bar material requested to a local supplier was Maraging steel C350. When the bar was supplied to the author’s laboratory, one end surface of the bar (19.1 mm in diameter) was indented to mark “C350”. The bar was cut to dimensions of 2000 and 300 mm for the bar and striker, respectively. A schematic of the experimental setup is shown in Fig. 5. The striker collided with the bar at a speed of 11.7 m/s, as measured using a high-speed camera. There was neither a specimen nor a transmission bar (bar-alone or bar-apart test). The profile of the elastic wave generated in the bar was measured using a strain gauge attached to the surface of the bar at 948.5 mm from the impact surface (1,051.5 mm from the rear end). The size of the strain gage (metallic part) was 1.1 × 1.3 mm (120 ohms; gage factor 2.1). This strain gauge formed a 1/4 bridge of the Wheatstone bridge circuit. The output signal of the Wheatstone bridge was amplified and transferred to an oscilloscope that digitized the incoming analog signal at a sampling rate of 5 MHz. The digitized data were stored on a personal computer. 4.2 Recorded Wave Profile. The measured strain signal on the bar surface is shown in Fig. 6, where the positive sign denotes tension. In Fig. 6, the vertical position of the entire profile was shifted slightly arbitrarily after the measurement for the baseline alignment. The time origin was set arbitrarily. The data for Fig. 6 are available in Ref. [41] (experiment.csv and dispersion_correction.xlsm). In Fig. 6, the pulse marked RP is the incident strain pulse from the impact surface. This pulse was used as the reference pulse (RP), which will be used later for mathematical modeling (Fourier synthesis), followed by a phase shift. The compressive incident pulse (RP) reflects at the rear end of the bar and forms a tensile wave from the end surface owing to impedance mismatch at the rear surface [13]. The first traveled pulse, denoted as TP-1, arrived from the left-end surface of the bar. The travel distance from RP to TP-1 was 2103 mm (z[1]). TP-2 arrived from the right-end surface (i.e., the former impact surface) of the bar. The travel distance from RP to TP-2 was 4000 mm (z[2]). 4.3 Target Signal Preparation. Fourier synthesis is the process of mathematically modeling a periodic signal by combining sine and/or cosine waves in certain proportions whose frequencies are multiples of the lowest, or fundamental, frequency. The signal to be used for Fourier synthesis is called herein the target signal for Fourier synthesis. For the convenience of explanation, the prepared target signal is illustrated in advance in Fig. 7(a) before explaining how it was prepared. For target signal preparation, the onset point of the RP was determined based on visual inspection. As observed in Fig. 7(a), the ordinate values before and at the onset point of the incident pulse (RP) were zero-padded in the target signal. The endpoint of the RP was selected as the point when the pulse magnitude in the releasing part of the wave changed sign for the first time. The ordinate value after and at the endpoint of the RP were also zero-padded in the target signal. The reason the endpoint of the RP was selected as such is as follows. The tail part of the RP is composed of sluggish wave components with higher frequencies; the sluggish nature explains why they are located at the tail part. Therefore, the wave components in the tail part are increasingly behind the main pulse as the RP travels in the bar. From tests using the Excel^® template (dispersion_correction.xlsm), the inclusion of the tail part of the RP into the target signal does not influence the main part of the dispersion-corrected wave profile at a given travel distance; however, it only complicates the far-tail part of the dispersion-corrected wave profile. The recorded data located between the start and end times of the RP pulse were included in the target signal data. The determination of the start and endpoints of the RP can be assisted using the Excel^® template (dispersion_correction.xlsm) in Ref. [41]. 4.4 Fourier Synthesis. The time-dependent pulse in Fig. (target signal) can be described using the framework ) = ), where is the measured quantity (usually with a dimension; mV herein), is the magnitude constant with a dimension of ) is a non-dimensional shape function, and is the time. is set to 1 mV, and thus, the ordinate of Fig. ). The elastic pulse recorded at discrete time points can be mathematically modeled (synthesized) using a Fourier series: is the index to describe time points spanning from 0 to is the number of data points in the time window with a fundamental period ( ); Δ is the time interval for sampling; is the fundamental frequency (=1/ ); k is the index to describe the terms of the Fourier series spanning from 1 to is the summation limit of the Fourier series, which is the Nyquist number ( , and are the Fourier coefficients given as: The target profile shown in Fig. 7(a) was synthesized using Eqs. (5)–(8). The conditions for Fourier synthesis were as follows: t[0] = 1420 μs, f[0] = 1/t[0] = 704.225325 Hz, df = f[0], Δt = 0.2 μs, N[t] = t[0]/Δt = 7100, f[s] = 1/Δt = 5 MHz, f[Ny] = f[s]/2 = 2.5 MHz, K = N[y] = f[Ny]/df = 3550. The Fourier synthesis of the target signal using Eqs. (5)–(8) can be assisted using an Excel^® template (dispersion_correction.xlsx) [41]. The signal synthesized using the Excel^® template is shown in Fig. 7(b). As shown in Fig. 7(b), the synthesized signal successfully describes the target signal that includes the main part of the experimentally measured RP. 4.5 Dispersion Correction (Phase Shift). As the elastic stress wave travels through the bar, the th frequency component ( ) with a speed of travels a distance Δ in time Δ , which is the time lag of the wave component with the th frequency ( ). Hence, the Fourier series expression for the stress pulse after traveling a distance Δ where Δ is positive for forward travel and negative for backward travel. Equation describes the translation of each wave component with a particular frequency by an amount of +Δ in the time axis. Note that in numerically solving the PCE was derived from the concept of , respectively, which are exactly required values, respectively, in dc. Correlating the with the variables in Eq. = k (k = 1,2,3…). 4.6 Iterative Dispersion Correction. The dc process described in the previous sections can be suitably performed using the Excel^® template (dispersion_correction.xlsm) and the PCE solver (PCE_solver_n1.m) available in Ref. [41], provided the bar sound speed (c[o]) and Poisson’s ratio (ν) were calibrated in advance. For mutually self-consistent verification of the combined theories, and the calibrated set of (ν, c[o]), iterative dc is necessary. Thus, a calibrator program (dispersion_correction_iteration.m) was prepared, which included the exclusive PCE solver developed herein (PCE_solver_n1.m). The algorithm of the calibrator program is illustrated in Fig. 8. As shown in Fig. 8, once Fourier synthesis is performed, the iterative dc starts with the initial guess values of ν and c[o]. The program then performs the first run of dc, followed by error calculation of the dispersion-corrected (predicted) profiles with reference to the measured profiles at z[1] and z[2]. From the second run of the foregoing processes, the ν and c[o] set is determined using the general-purpose optimization function “fminsearch” available in matlab.^® “fminsearch” is an unconstrained nonlinear optimization function that determines the minimum scalar function of several variables, starting from an initial estimate [73]. The iteration of dc continues until the discrepancy (error) between the predicted wave profiles and the experimentally measured profiles is diminished below a preset condition [40]. The error between the predicted and experimentally measured pulse profiles at travel distances was quantified as follows: is the index of the time data of the pulse at the travel distance of either is the magnitude of the nondimensional shape function, ); superscripts dc and exp denote dispersion corrected and experiment, respectively; is the number of data in a given pulse (TP-1 or TP-2); and is the maximum magnitude (positive) of the measured pulse in TP-1 or TP-2. is the average absolute deviation. (These notations are explained here as Eq. is limited to the error calculation instead of dc.). Equation describes the concept of average absolute deviation with reference to the maximum pulse height of the experimental pulse. To calculate the error of a dispersion-corrected pulse with reference to the traveled pulse in the experiment, the error calculation range must be defined. It started from the onset point of each traveled pulse, as shown in Fig. 9. The tail part that follows the main pulse is suggested to be included in the error calculation range because of the reason that will be explained later. Consequently, the endpoint of the error calculation range was selected arbitrarily to include the tail part. The selected error calculation ranges for the two traveled pulses are illustrated in Fig. 9 as e[1] and e[2]. Unless terminated by the count limit for the iteration loop [40], the calibrator program exits the iteration loop when both ν and c[o] values with six decimal places do not vary significantly as the iterations continue. 5 Discussion 5.1 Bar Property Calibration. The calibrator program herein [41] successfully found the (ν,c[o]) set, at which condition the dc of the reference pulse created wave profiles that reasonably coincided with the experimentally measured profiles at travel distances of z[1] and z[2]. Figure 9 compares the predicted (dispersion-corrected) profiles with the experimentally measured profiles when ν = 0.335050 and c[o] = 4588.233496 m/s. As mentioned, e[1] (560.6–900.0 μs) and e[2] (977.8–1419.8 μs) in Fig. 9 denote the error calculation ranges for the traveled pulses at z[1] and z[2], respectively, which were set arbitrarily to monitor the error in predicting the respective pulse and its tail part. In Fig. 9, the error values of the dispersion-corrected pulses with reference to the experimental signal were 1.479843 and 1.105220% (average error was 1.292531%) at travel distances of z[1] = 2103 and z[2] = 4000 mm, respectively. The origins of such errors may include (i) the change in diameter of the bar and striker along the axial direction, (ii) imperfect planes of impact and wave reflection; that is, imperfect end surfaces of the bar and striker, and (iii) imperfect alignment of the striker and bar. Despite such error sources in the experiment, the coincidence of the predicted (dispersion-corrected) wave profiles to the measured profiles is remarkable, which indicates the reliability of the calibrated values of c[o] and ν of the bar used in this study. In separate trials [40], the calibrated values were insensitive to the initial guess values. Based on a number of trials with different initial guess values for the calibrator program, Ref. [40] calibrated the values of c[o] and ν to 0.335050 and 4588.2335 m/s, respectively, for the experimental profile considered. 5.2 Verification of the Combined Theories. The remarkable consistency of the experimental and the predicted profiles in Fig. 9 is semanticizable in a number of points, as follows. First, the observed consistency is a direct experimental verification of the Fourier theory (using a physical quantity; that is, mechanical vibration), which states that any periodic event can be described using the sum of sine and/or cosine waves with multiples of the fundamental frequency (i.e., waves with varying sound speeds, i.e., wavelengths). The use of Eqs. (1)–(5) means this study mathematically treats the one-time event presented in Fig. 2 as a periodic event with a period of t[0] = 1420 μs (fundamental frequency, f[0] = 704.225325 Hz). Second, the result in Fig. 9 also experimentally verifies the PC theory [18,32–41] and its solver [37,41] because, as in the case of the Fourier theory, the consistency of the dispersion-corrected profiles with the measured profiles is never achieved unless the C[dc] versus F[dc] relationship obtained by solving the PCE is correct. Further, the result in Fig. 9 verifies that the first excitation mode in the PC theory reasonably describes the propagation of an elastic wave generated via striker impact on the bar [18,56,72]. As mentioned earlier, the result of iterative dc, that is, the coincidence of the two profiles in Fig. 9, mutually self-consistently verifies (i) the combined theories of Fourier and PC that resulted in the consistency and (ii) calibrated values of ν and c[o]. Such successful verification is attributed to the availability of the (i) calibrator program with iterative dc algorithm and (ii) exclusive PCE solver (which massively and reliably provided the PCE solutions for Poisson’s ratios down to six decimal places upon request of the iterative dc algorithm). The verification of the combined theories of Fourier and PC imparts physical meaning to each hill and valley in the plateau region of the elastic pulse after traveling some distances (Fig. 9). The hills and valleys in the plateau region are the physical corollaries resulting from the travel of a series of wave components with different sound speeds (frequencies), as predicted by the combined theories of Fourier and PC. The hills and valleys, even in the tail part of the traveling pulse, are similar to those in the plateau region of the main pulse; the tail parts of the traveled pulses are not merely meaningless noise-associated fluctuations but result from sluggish high-frequency wave components that formerly belonged to the plateau region (primary pulse) at a shorter travel distance. Because of such significance of the tail part, the ranges of the error calculation (e[1] and e[2] in Fig. 9) were selected to be wider than the main traveled pulses (TP-1 and TP-2, respectively). It means that the wave components with higher frequencies leak from the plateau part of a traveling pulse to form the tail. However, the influence of the leakage on the overall pulse magnitude is negligible (see Fig. 9) in an SHB test, where the travel distance before measuring the pulse signal is less than approximately three meters. 5.3 Contribution to the Bar Technology. Once the sound speed (c[o]) and Poisson’s ratio (ν) of the user's bar are calibrated using the proposed method and templates [41], the higher-mode vibration signals generated via the impact of explosive and/or blast waves on HB may be measured at two travel distances. Then, the blast wave profile at one of the travel distances can be predicted from another via dc using the PCE solver in Refs. [37,38]. This solver can provide PCE solutions up to an order of n = 20. The consistency of the predicted profiles with the experimentally measured wave profile with higher-mode vibrations, if observed, will further verify the combined theories of Fourier and PC, which should subsequently ensure the reliability of the predicted blast wave profiles at the front surface of the HB. Thus far, no international standard has been established for the SHB test despite its extensive exploitation in measuring the stress–strain curves of versatile materials at high strain rates [11]. Once c[o] is calibrated, the density (ρ) of a piece of bar material can be measured using a method based on the Archimedes principle [74,75] or using a pycnometer [76]. The elastic modulus (E[o]) of the bar can then be determined using c[o] and ρ information: $Eo=ρco2$ (ρ is the bar density). Because the stress, strain rate, and strain values of the specimens determined via the SHB test directly depend on c[o] and E[o] values [1–16], the precise calibration of the bar sound speed (c[o]) using the method and templates herein [41] will contribute to the reliable measurement of stress–strain and strain rate–strain curves using an SHB. The strain rate equation available in Ref. [12] enables researchers to verify the reliability of an SHB experiment via the correlation of the stress–strain curve with the strain rate–strain curve. Therefore, the calibration of the speed of sound of the bar (c[o]) and the strain rate equation [12] may contribute to the standardization of the SHB test. The reliability of the direct impact bar test [77–80] also resorts to the precisely calibrated bar properties (ν,ρ,c[o,] and E[o]). The c[o] calibration of the bar also renders the 1D equations [13] on the bar technology more useful. The calibrated values of ν, ρ, and E[o] enable an explicit finite element analysis of the events achieved in the (S)HB and the direct impact bar. The foregoing quantities can then be used for inverse engineering [81] of many bar technology events. For instance, the constitutive parameters of a specimen or small structure can be identified via simulations of (i) the plastic deformation of small structures (or specimens) sandwiched between two elastic bars and (ii) resultant SHB wave signals. The material properties of the specimen (e.g., constitutive parameters) inputted to the simulation can be varied until the simulated and measured bar wave profiles in the bar overlap, which eventually allows the determination of the specimen properties. Precise calibration of the bar properties is a prerequisite for such simulation-based inverse engineering of many events in bar technology. 5.4 Further Discussion. In this study, the surface strain waves at two travel distances were measured (Fig. 2), and the corresponding surface wave profiles were predicted (Fig. 6); this is called the standard dc process. Magnitude correction [27,28,63] refers to the process of obtaining the (i) average strain over the bar cross section from the surface strain and (ii) the dynamic elastic modulus that converts the bar strain to bar stress. This study is limited to standard dc because the calibration of the bar properties requires only the phase shift (Eq. (5)) of the wave profiles measured on the bar surface. Standard dc can be performed either in the time [18,60] or frequency domain [26–30,55–59,61–66]. If performed correctly, both approaches will result in the same dispersion-corrected pulse profiles. Therefore, there is no reason that one type of approach should be used predominantly over another (their usage depends on the choice of the user); however, the time-domain approach employed is simple, straightforward, and easy to understand. Dispersion correction via the two approaches will be compared in a review paper elsewhere. 6 Conclusion The process of calibrating the 1D sound speed (zero-frequency sound speed, c[o]) and Poisson’s ratio (ν) of an (S)HB is presented. The process comprises Fourier synthesis and iterative dc (time-domain phase shift) of the elastic pulse generated via striker impact on a circular bar. An exclusive PCE solver for ground state excitation was developed, as it was necessary for dc herein. The reliability of the developed solver was verified by comparison with table solutions in previous studies. The developed solver could reliably provide PCE solutions according to the massive request of the iteration algorithm for Poisson’s ratios with six decimal places. At each iteration in the iterative dc process, a set of c[o] and ν was assumed, and the sound speed versus frequency (c[dc] versus f[dc]) relationship under the assumed set was obtained using the PCE solver developed herein. Subsequently, each constituting wave of the overall elastic pulse was phase shifted (dispersion-corrected) using the c[dc]–f[dc] relationship. The c[o] and ν values of the bar were determined in the iteration process when the predicted (phase-shifted, dispersion-corrected) overall pulse profiles were reasonably consistent with the measured profiles at two travel distances (2103 and 4000 mm) in the bar. The observed consistency of the predicted wave profiles with the measured profiles at the calibrated (ν, c[o]) set was the mutually self-consistent verification of (i) the calibrated values of ν and c[o] and (ii) combined theories of Fourier and PC that resulted in the coincidence of the wave profiles. As revealed by this verification, the hills and valleys in the plateau part as well as the tail part of a traveling wave are the physical corollaries resulting from the travel of a series of wave components with different sound speeds (frequencies), as predicted by the combined theories. The calibrated values of ν and c[o] may contribute to the contemporary bar technology in (i) further verifying the combined theories when higher-mode vibrations are present; (ii) increasing the reliability of the stress–strain and strain rate–strain curves measured using SHB; (iii) utilizing the strain rate equation for the verification of an SHB experiment; (iv) utilizing 1D equations; and (v) inverse engineering of SHB events. The author appreciates Jae Eon Kim and Jun Moo Lee for their technical assistance. Funding Data • This study was financially supported by a National Research Foundation of Korea Grant under Contract No. 2020R1A2C2009083 funded by the Ministry of Science and Technology (Korea). Conflict of Interest There are no conflicts of interest. Data Availability Statement Data are openly available in a public repository [41]. The datasets generated and supporting the findings of this article are also obtainable from the corresponding author upon reasonable request. Nomenclature for Dispersion Correction , “ An Investigation of the Mechanical Properties of Materials at Very High Rates of Loading Proc. Phys. Soc. Sect. B ), pp. E. D. H. , and S. C. , “ The Dynamic Compression Testing of Solids by the Method of the Split Hopkinson Pressure Bar J. Mech. Phys. Solids ), pp. W. E. , “ Reexamination of the Kolsky Technique for Measuring Dynamic Material Behavior ASME J. Appl. Mech. ), pp. J. D. , and R. H. , “ On the Use of a Torsional Split Hopkinson Bar to Study Rate Effects in 1100-0 Aluminum ASME J. Appl. Mech. ), pp. , “ An Analysis of the Split Hopkinson Bar Technique for Strain-Rate-Dependent Material Behavior ASME J. Appl. Mech. ), pp. L. D. , “ Feasibility of Two-Dimensional Numerical Analysis of the Split-Hopkinson Pressure Bar System ASME J. Appl. Mech. ), pp. , and , “ A New Method for the Separation of Waves: Application to the SHPB Technique for an Unlimited Duration of Measurement J. Mech. Phys. Solids ), pp. , and Split Hopkinson (Kolsky) Bar—Design, Testing, and Applications Springer: Science+Business Media, LLC New York , and , “ Understanding the Anomalously Long Duration Time of the Transmitted Pulse From a Soft Specimen in a Kolsky Bar Experiment Int. J. Precis. Eng. Manuf. ), pp. The Kolsky-Hopkinson Bar Machine M. A. R. C. D. W. , and G. S. , “ Round-Robin Test of Split Hopkinson Pressure Bar Int. J. Impact Eng. , pp. , and J. B. , “ Evolution Specimen Strain Rate in Split Hopkinson Bar Test Proc. Inst. Mech. Eng. Part C: J. Mech. Eng. Sci. ), pp. , and , “ One-Dimensional Analyses of Striker Impact on Bar With Different General Impedance Proc. Inst. Mech. Eng. Part C: J. Mech. Eng. Sci. ), pp. M. K. , and S. A. , “ New Procedure to Evaluate Parameters of Johnson–Cook Elastic–Plastic Material Model From Varying Strain Rate Split Hopkinson Pressure Bar Tests J. Mater. Eng. Perform. , pp. , and , “ Design Guidelines for the Striker and Transfer Flange of a Split Hopkinson Tension Bar and the Origin of Spurious Waves Proc. Inst. Mech. Eng. Part C: J. Mech. Eng. Sci. ), pp. , and , “ Stress Transfer Mechanism of Flange in Split Hopkinson Tension Bar Appl. Sci. ), p. , “ X. A Method of Measuring the Pressure Produced in the Detonation of High Explosives or by the Impact of Bullets Philos. Trans. R. Soc. London Ser. A ), pp. R. M. , “ A Critical Study of the Hopkinson Pressure Bar Philos. Trans. R. Soc. London Ser. A Math. Phys. Sci. ), pp. , “ Elastic Wave Dispersion in a Cylindrical Rod by a Wide-Band Short-Duration Pulse Technique J. Acoust. Soc. Am. ), pp. C. W. , “ Second Mode Vibrations of the Pochhammer–Chree Frequency Equation J. Appl. Phys. ), p. , and C. W. , “ Elastic Strain Produced by Sudden Application of Pressure to One End of a Cylindrical Bar. II. Experimental Observations J. Acoust. Soc. Am. ), pp. , “ Propagation of an Elastic Pulse in a Semi-Infinite Bar International Symposium on Stress Wave Propagation in Materials Pennsylvania State University June 30–July 3, 1959 , pp. J. R. , and C. M. , “ Higher Modes of Longitudinal Wave Propagation in Thin Rod J. Acoust. Soc. Am. ), pp. C. K. B. , and R. C. , “ A New Method for Analysing Dispersed Bar Gauge Data Meas. Sci. Technol. ), pp. C. K. B. R. C. K. A. , and , “ Evidence of Higher Pochhammer–Chree Modes in an Unsplit Hopkinson Bar Meas. Sci. Technol. ), pp. E. H. , and C. S. , “ Experimental Study of Dispersive Waves in Beam and Rod Using FFT ASME J. Appl. Mech. ), pp. , and , “ On Backward Dispersion Correction of Hopkinson Pressure Bar Signals Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. , p. A. D. S. E. , and , “ Correction of Higher Mode Pochhammer–Chree Dispersion in Experimental Blast Loading Measurements Int. J. Impact Eng. , p. , and , “ Local Phase-Amplitude Joint Correction for Free Surface Velocity of Hopkinson Pressure Bar Appl. Sci. ), p. S. E. , and A. D. , “ A Review of Pochhammer–Chree Dispersion in the Hopkinson Bar Proc. Inst. Civil. Eng.—Eng. Comput. Mech. ), pp. , and , “ High-Frequency Low-Loss Ultrasonic Modes in Imbedded Bars ASME J. Appl. Mech. ), pp. A. E. H. A Treatise on the Mathematical Theory of Elasticity 4th ed. , Reprinted, Dover Publications New York , “ Über Fortplanzungsgeschwindigkeiten Kleiner Schwingungen in Einem Unbergrenzten Isotropen Kreiszylinder (On the Propagation Velocities of Small Oscillations in an Unlimited Isotropic Circular Zeitschrift für Reine und Angewandte Mathematik Z. Reine Angew Math. , pp. , “ Longitudinal Vibrations of a Circular Bar Q. J. Pure Appl. Math. , pp. , “ The Equations of an Isotropic Elastic Solid in Polar and Cylindrical Coordinates, Their Solutions and Applications Trans. Cambridge Philos. Soc. , pp. , “ The Velocity of Longitudinal Wave in Cylindrical Bars Phys. Rev. ), pp. , “ Pochhammer–Chree Equation Solver for Dispersion Correction of Elastic Waves in a (Split) Hopkinson Bar Proc. Inst. Mech. Eng. Part C: J. Mech. Eng. Sci. ), pp. , “ An Impact Test to Determine the Wave Speed in SHPB: Measurement and Uncertainty J. Dyn. Behav. Mater ), pp. , “ Manual for Calibrating Sound Speed and Poisson’s Ratio of (Split) Hopkinson Bar via Dispersion Correction Using Excel^® and Matlab^® Templates .” Submitted to Data. , and , “ Determination of the Flow Stress–Strain Curve of Aluminum Alloy and Tantalum Using the Compressive Load–Displacement Curve of a Hat-Type Specimen ASME J. Appl. Mech. ), p. , and , “ Numerical Verification of the Schroeder–Webster Surface Types and Friction Compensation Models for a Metallic Specimen in Axisymmetric Compression Test ASME J. Tribol. ), p. , and , “ A Design of a Phenomenological Friction-Compensation Model via Numerical Experiment for the Compressive Flow Stress–Strain Curve of Copper (in Korean) Kor. J. Comput. Des. Eng. ), pp. G. R. , and W. H. , “ A Constitutive Model and Data for Metals Subjected to Large Strains, High Strain Rates and High Temperatures Proceedings of 7th International Symposium of Ballistics The Hague, Netherlands Apr. 19–21 , pp. , and , “ A Phenomenological Constitutive Equation to Describe Various Flow Stress Behaviors of Materials in Wide Strain Rate and Temperature Regimes ASME J. Eng. Mater. Technol. ), p. , and , “ Performance of a Flying Cross Bar to Incapacitate a Long-Rod Penetrator Based on a Finite Element Model Eng. Comput. ), pp. , and , “ Effects of Impact Location and Angle of a Flying Cross Bar on the Protection of a Long-Rod Penetrator Trans. Can. Soc. Mech. Eng. ), pp. , and K. W. , “ A Numerical Study on the Influence of the Flow Stress of Copper Liner on the Penetration Performance of a Small-Caliber High Explosive (in Korean) Kor. J. Comput. Des. Eng. ), pp. , and , “ Effect of the Velocity of a Single Flying Plate on the Protection Capability Against Obliquely Impacting Long-Rod Penetrators Combust. Explos. Shock Waves ), pp. , and , “ Protection Capability of Dual Flying Plates Against Obliquely Impacting Long-Rod Penetrators Int. J. Impact Eng. ), pp. , and , “ Ricochet of a Tungsten Heavy Alloy Long-Rod Projectile From Deformable Steel Plates J. Phys. D: Appl. Phys. ), pp. , and , “ A Numerical Study on Jet Formation and Penetration Characteristics of the Shaped Charge With an Aspect Ratio of 2.73 and a High-Strength Copper Liner (in Korean) Kor. J. Comput. Des. Eng. ), pp. , and , “ A Determination Procedure for Element Elimination Criterion in Finite Element Analysis of High-Strain-Rate Impact/Penetration Phenomena JSME Int. J. Ser. A Solid Mech. Mater. Eng. ), pp. D. A. , “ A Numerical Method for the Correction of Dispersion in Pressure Bar Signals J. Phys. E Sci. Instum. ), pp. P. S. , and , “ Wave Propagation in the Split Hopkinson Pressure Bar ASME J. Eng. Mater. Technol. ), pp. C. W. , “ The Response of Soil to Impulse Loads Using the Split Hopkinson Pressure Bar Technique ,” AFWL-TR-85-92, Final Report, Air Force Weapons Lab, Kirtland Air Force Base, NM. J. C. L. E. , and D. A. , “ Dispersion Investigation in the Split Hopkinson Pressure Bar ASME J. Eng. Mater. Technol. ), pp. J. M. , and , “ Data Processing in the Split Hopkinson Pressure Bar Tests Int. J. Impact Eng. ), pp. , and , “ Determination of the Dynamic Response of Brittle Composites by the Use of the Split Hopkinson Pressure Bar Compos. Sci. Technol. ), pp. S. T. R. B. T. J. , and G. N. , “ Material Testing at High Strain Rate Using the Split Hopkinson Pressure Bar Latin Am. J. Solids Struct. ), pp. B. A. S. L. , and J. W. , “ Hopkinson Bar Experimental Technique: A Critical Review ASME Appl. Mech. Rev. ), pp. , and , “ On the Errors Associated With the Use of Large Diameter SHPB, Correction for Radially Non-Uniform Distribution of Stress and Particle Velocity in SHPB Testing Int. J. Impact Eng. ), pp. H. H. Z. H. , and , “ An Investigation on Dynamic Properties of Aluminium Alloy Foam Using Modified Large Scale SHPB Based on Dispersion Correction Comput. Mater. Continua ), pp. , and , “ Characterisation of Dynamic Behaviour of Alumina Ceramics: Evaluation of Stress Uniformity AIP Adv. ), p. A. M. A. K. D. A. , and K. Y. , “ Dispersion Correction in Split-Hopkinson Pressure Bar: Theoretical and Experimental Analysis Continuum Mech. Thermodyn. J. W. S. The Theory of Sound Vols. I and II. Dover Publications New York P. C. , and A. V. , “ The Effect of Stress Wave Dispersion on the Drivability Analysis of Large-Diameter Monopiles Procedia Eng. , pp. , and , “ Propagation of Stress Pulses in a Rayleigh-Love Elastic rod Int. J. Impact Eng. , p. . . Stress Waves in Solids Clarendon Press , p. K. F. Wave Motion in Elastic Solids Clarendon Press , p. D. Y. , and , “ An Experimental Study of Pulse Propagation in Elastic Cylinder Proc. Phys. Soc. ), pp. ASTM B962-17 Standard Test Methods for Density of Compacted or Sintered Powder Metallurgy (PM) Products Using Archimedes’ Principle ASTM International West Conshohocken, PA ASTM D792-20 Standard Test Methods for Density and Specific Gravity (Relative Density) of Plastics by Displacement ASTM International West Conshohocken, PA ASTM D854-14 Standard Test Methods for Specific Gravity of Soil Solids by Water Pycnometer ASTM International West Conshohocken, PA C. K. H. , and F. E. , “ Determination of Stress-Strain Characteristics at Very High Strain Rates Exp. Mech. ), pp. J. Z. J. R. , and Z. L. , “ Miniaturized Compression Test at Very High Strain Rates by Direct Impact Exp. Mech. ), pp. , “ The Use of the Direct Impact Hopkinson Pressure Bar Technique to Describe Thermally Activated and Viscous Regimes of Metallic Materials Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. , p. , and , “ The Direct Impact Method for Studying Dynamic Behavior of Viscoplastic Materials J. Appl. Comput. Mech. ), pp. T. J. , and , “ Plastic Constitutive Johnson–Cook Model Parameters by Optimization-Based Inverse Method J. Comput. Design Eng. ), pp. Copyright © 2022 by ASME; reuse license CC-BY 4.0
{"url":"https://appliedmechanics.asmedigitalcollection.asme.org/appliedmechanics/article/89/6/061007/1139588/Sound-Speed-and-Poisson-s-Ratio-Calibration-of","timestamp":"2024-11-06T01:14:53Z","content_type":"text/html","content_length":"459874","record_id":"<urn:uuid:3d27db5a-175a-4113-a83b-5b70f13074cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00081.warc.gz"}