## Springer recommends eqnarray I just read on LaTeX-Community.org that the publisher Springer still recommends using eqnarray. I could not believe it, so I went to the Book Manuscript Guidelines, chose Manuscript Preparation in LaTeX, and downloaded svmult.zip, which contains the Springer class for contributed books, proceedings, and the like. It has a folder called templates, which contains a file author.tex. In this file I could read: ... % Use this file as a template for your own input. ... Use the standard \verb|equation| environment to typeset your equations, e.g. % \begin{equation} a \times b = c\;, \end{equation} % however, for multiline equations we recommend to use the \verb|eqnarray| environment\footnote{In physics texts please activate the class option \texttt{vecphys} to depict your vectors in \textbf{\itshape boldface-italic} type - as is customary for a wide range of physical subjects}. \begin{eqnarray} a \times b = c \nonumber\\ \vec{a} \cdot \vec{b}=\vec{c} \label{eq:01} \end{eqnarray} A close look shows that this template does not even align at the relation symbol, which eqnarray could do. The example equations are simply right-aligned; one could see that in the output if one of the equations were extended. eqnarray is considered obsolete and faulty, as I wrote in 2008 in the comparison eqnarray vs. align. In fact, it has been obsolete since the amsmath package appeared. The better alternatives, such as align, gather, and multline, are described in the amsmath manual. I am sure most experienced LaTeX users know this, and LaTeX beginners are told so frequently in forums and Usenet groups. Why has it not reached Springer? Perhaps this publisher does not really welcome LaTeX for scientific publishing and does not care that its templates are outdated. I wonder what they use then. 06. April 2012 by stefan
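For comparison, here is how the template's multiline example could be typeset with amsmath's align environment, actually aligning at the relation signs (a sketch; it assumes \usepackage{amsmath} in the preamble):

```latex
% Preamble: \usepackage{amsmath}
\begin{align}
a \times b             &= c \nonumber\\
\vec{a} \cdot \vec{b}  &= \vec{c}
\label{eq:01}
\end{align}
```

Unlike eqnarray, align produces correct spacing around the relation signs and handles equation numbering consistently.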
# How to use the Chaikin oscillator When big investors buy stocks, and buy them aggressively, their demand should generate higher prices in the future, making those stocks a wise investment. In some ways this is nothing more than Markets 101: supply and demand. Since price is a function of demand, tracking the fluctuations of one element without paying attention to the other seems counterintuitive. This is where the Chaikin Oscillator comes in: it examines the closing price together with buying and selling pressure to gauge the potential demand for a stock. The Chaikin oscillator was invented by Marc Chaikin, a long-time stock trader and analyst who created dozens of indicators during his distinguished career, many of which are now staples of Wall Street technical analysis. He designed the oscillator as a way to measure the accumulation or distribution of securities by institutional investors, the investors who drive the market. ### Key points • The Chaikin Oscillator examines the strength of price moves and underlying buying and selling pressure to provide a reading of the demand for a security and possible turning points in its price. • Divergence between price and the Chaikin Oscillator is the indicator's most common signal, and it usually marks a short-term price reversal. ## How the Chaikin oscillator works The Chaikin Oscillator is essentially a momentum indicator, but applied to the accumulation/distribution line rather than to price alone. It looks at the strength of price moves and the underlying buying and selling pressure over a given period. A reading above zero indicates net buying pressure, while a reading below zero indicates net selling pressure. Divergence between the indicator and pure price movement is the most common signal generated by the indicator, and often marks a market turning point. ## Chaikin oscillator construction The oscillator is based on the concept of moving average convergence divergence, or MACD.
MACD is derived from moving averages, which average an issue's price over a given period of time. Getting from MACD to the Chaikin Oscillator takes several steps. The Chaikin oscillator is built on the accumulation/distribution (acc/dis) line, another of Chaikin's creations. The acc/dis line is in turn built on the money flow multiplier, which attempts to quantify the amount of money entering the market and its effect on the stock price. The multiplier formula is:

$$f = \frac{(\text{Close} - \text{Low}) - (\text{High} - \text{Close})}{\text{High} - \text{Low}}$$

Suppose the stock in the previous example reached a peak of $25 during the period under review, then fell to $21 and closed at $22. In this case, the money flow multiplier would be

$$f = \frac{(22 - 21) - (25 - 22)}{25 - 21} = -0.5$$

Multiplying this number by the volume traded during the period gives the money flow, and the running total of money flow generates the acc/dis line. The last step is to apply the MACD calculation to this output: the Chaikin Oscillator is the fast (conventionally 3-day) EMA of the acc/dis line minus its slow (conventionally 10-day) EMA. ## For Chaikin believers What the oscillator lacks in simplicity, it makes up for in authority. By using the MACD model to measure the momentum of the accumulation/distribution line, the oscillator is meant to predict when that line will change direction. By this point we are several levels removed from the stock price itself, but believers argue that this distance is needed to weigh the significance of volume and price changes together. The three-day and ten-day values are not set in stone, either: swapping in, for example, 6-day and 20-day EMAs makes the Chaikin Oscillator change direction less abruptly. ## Bottom line The technical output produced by the Chaikin oscillator can support sound buying or selling decisions, but it is best used in combination with fundamentals and other indicators.
(For more reading, please check: How to use trading volume to improve your trading.)
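The construction described above can be sketched in Python (a minimal illustration with function names of my own choosing; the 3- and 10-period EMA windows are the conventional defaults):

```python
import numpy as np

def money_flow_multiplier(high, low, close):
    # ((Close - Low) - (High - Close)) / (High - Low); assumes High != Low
    return ((close - low) - (high - close)) / (high - low)

def chaikin_oscillator(high, low, close, volume, fast=3, slow=10):
    """Chaikin oscillator: fast EMA minus slow EMA of the acc/dis line."""
    high, low, close, volume = (np.asarray(a, dtype=float)
                                for a in (high, low, close, volume))
    mfv = money_flow_multiplier(high, low, close) * volume  # money flow
    adl = np.cumsum(mfv)                 # accumulation/distribution line

    def ema(x, n):
        alpha = 2.0 / (n + 1)
        out = np.empty_like(x)
        out[0] = x[0]
        for i in range(1, len(x)):
            out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
        return out

    return ema(adl, fast) - ema(adl, slow)
```

With the worked example above (high 25, low 21, close 22), `money_flow_multiplier` returns -0.5, matching the hand calculation.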
Ratrecon - Maple Programming Help

Ratrecon
inert rational function reconstruction

Calling Sequence
Ratrecon(u, m, x, N, D) mod p
Ratrecon(u, m, x, N, D, 'n', 'd') mod p

Parameters
u, m - polynomials in x
x - name
N, D - (optional) non-negative integers
n, d - (optional) variables
p - integer > 1

Description
• This routine reconstructs a rational function $\frac{n}{d}$ from its image $u \bmod m$, where u and m are polynomials in $F[x]$ and $F$ is a field of characteristic p.
• Given $u \bmod m$ and non-negative integers N and D, if the call Ratrecon(u, m, x, N, D) mod p succeeds then the output is a rational function n/d in x such that $\frac{n}{d} \equiv u \pmod{m}$, $\gcd(n, d) = 1$, $\deg(n, x) \le N$ and $\deg(d, x) \le D$. Otherwise Ratrecon returns FAIL, indicating that no such polynomials n and d exist. The reconstruction is unique up to multiplication by a constant in $F$ if the following condition holds: N + D < degree(m, x).
• If the optional parameters N and D are not specified, then they are determined by the degree of m: they are assigned the largest possible values satisfying the above constraint such that N = D or N - D = 1.
• If the optional parameters n and d are specified, then Ratrecon returns either true or false. If rational reconstruction succeeds then true is returned and these parameters are assigned the numerator and denominator separately; otherwise false is returned and these parameters are not changed.
• The special case $m = x^k$ corresponds to computing the (N, D) Padé approximant to the series u of order $O(x^k)$.
• If the first input u is a polynomial in variables other than x, then Ratrecon is applied to the coefficients of the polynomial in those variables. See the last example in the Examples section.
• For the special case $N = 0$, the polynomial $\frac{d}{n}$ is the inverse of u in $F[x]/\langle m \rangle$, provided u and m are relatively prime.

Examples

> u := 3 + 5*x + 7*x^2 + 11*x^3 + 13*x^4;
        u := 13 x^4 + 11 x^3 + 7 x^2 + 5 x + 3                        (1)
> m := x^5;
        m := x^5                                                      (2)
> p := 97;
        p := 97                                                       (3)
> r := Ratrecon(u, m, x) mod p;
        r := (19 x^2 + 56 x + 77)/(x^2 + 19 x + 58)                   (4)
> Ratrecon(u, m, x, 2, 2) mod p;
        (19 x^2 + 56 x + 77)/(x^2 + 19 x + 58)                        (5)
> Ratrecon(u, m, x, 1, 3) mod p;
        (27 x + 52)/(x^3 + 43 x^2 + 34 x + 82)                        (6)
> Ratrecon(u, m, x, 1, 1) mod p;
        FAIL                                                          (7)
> Ratrecon(u, m, x, 2, 2, 'n', 'd') mod p;
        true                                                          (8)
> n, d;
        19 x^2 + 56 x + 77, x^2 + 19 x + 58                           (9)
> alias(α = RootOf(x^2 + 5));
        α                                                             (10)
> u := 5 + 7*α + (11 + 13*α)*x + (17 + 19*α)*x^2;
        u := 5 + 7 α + (11 + 13 α) x + (17 + 19 α) x^2                (11)
> r := Ratrecon(u, x^3, x) mod 97;
        r := (2 x α + 3 α + 71 x + 46)/(x + 10 α + 21)                (12)
> evala(series(r, x, 3)) mod 97;
        5 + 7 α + (11 + 13 α) x + (17 + 19 α) x^2 + O(x^3)            (13)
> `mod` := mods:
> u := x^3 + (6*t^3 - 3*t - 5)*x^2 + (4*t^3 + 6*t^2 + 5*t - 4)*x + t - 1;
        u := x^3 + (6 t^3 - 3 t - 5) x^2 + (4 t^3 + 6 t^2 + 5 t - 4) x + t - 1   (14)
> m := (t - 2)*(t - 3)*(t - 4)*(t - 5);
        m := (t - 2) (t - 3) (t - 4) (t - 5)                          (15)
> p := 13;
        p := 13                                                       (16)
> Ratrecon(u, m, t, 1, 2) mod p;
        t - 1 + 2 t x/(t^2 - 1) - t x^2/(t - 1) + x^3                 (17)
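The standard algorithm behind such a reconstruction is the extended Euclidean algorithm. Below is a pure-Python sketch over GF(p)[x] (function names and representation are my own; polynomials are coefficient lists with the constant term first, and p must be prime; unlike the Maple routine, this sketch skips the final gcd(n, d) = 1 check):

```python
def deg(a):
    return len(a) - 1  # -1 for the zero polynomial []

def trim(a):
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_mul(a, b, p):
    if not a or not b:
        return []
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return trim(out)

def poly_sub(a, b, p):
    out = [0] * max(len(a), len(b))
    for i, ai in enumerate(a):
        out[i] = ai % p
    for i, bi in enumerate(b):
        out[i] = (out[i] - bi) % p
    return trim(out)

def poly_divmod(a, b, p):
    q, r = [0] * max(1, len(a) - len(b) + 1), list(a)
    inv_lead = pow(b[-1], p - 2, p)  # inverse of the leading coefficient
    while r and deg(r) >= deg(b):
        shift, c = deg(r) - deg(b), (r[-1] * inv_lead) % p
        q[shift] = c
        r = poly_sub(r, [0] * shift + [(c * bi) % p for bi in b], p)
    return trim(q), r

def ratrecon(u, m, N, D, p):
    # Extended Euclid on (m, u), tracking the invariant r = s*m + t*u;
    # the first remainder of degree <= N yields numerator r, denominator t.
    r0, r1 = trim([c % p for c in m]), trim([c % p for c in u])
    t0, t1 = [], [1]
    while deg(r1) > N:
        q, r = poly_divmod(r0, r1, p)
        r0, r1 = r1, r
        t0, t1 = t1, poly_sub(t0, poly_mul(q, t1, p), p)
    if t1 and deg(t1) <= D:
        return r1, t1  # n, d with n/d == u (mod m), up to a unit
    return None        # FAIL
```

Running this on example (1) above (u with coefficients [3, 5, 7, 11, 13], m = x^5, p = 97, N = D = 2) reproduces Maple's answer up to multiplication by a constant, and the (N, D) = (1, 1) case fails just as in example (7).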
# Ways to compute the n-th derivative of a discrete signal This is a pretty general question about how to compute derivatives of a digital signal $x[n]$. I would like to know what the different approaches are (from naive to complex) and how they compare to one another. Is it possible with FIR/IIR filters? What are the pros and cons? Which are better for real-time applications? The estimation of the derivative is straightforward: $$x'(n) = \frac{x(n+1)-x(n-1)}{2}$$ $$x''(n) = x(n+1)-2\,x(n)+x(n-1)$$ or, if you have a signal sampled at $t_i=i\Delta t$, it is $$x'(t_{i}) = \frac{x(t_{i+1})-x(t_{i-1})}{2\,\Delta t}$$ $$x''(t_{i}) = \frac{x(t_{i+1})-2\,x(t_{i})+x(t_{i-1})}{(\Delta t)^2}$$ What you are probably interested in is how to smooth the estimate. And yes, you can use recursive filters such as $y(n) = a \cdot x(n) + (1-a) \cdot y(n-1)$, or just implement your estimation with some simple windows (a Hann window, for example). To achieve high SNR without distorting the signal very much, the Savitzky–Golay filter also smooths your data, by fitting successive subsets of adjacent data points with a low-degree polynomial using linear least squares. EDIT Matlab code for the N-th derivative of a signal row vector x dx = x; % 'Zeroth' derivative for n = 1:N % Apply iteratively dif = diff(dx,1); % First difference first = [dif(1) dif]; last = [dif dif(end)]; dx = (first+last)/2; % Average of forward and backward differences end • What is the n-th derivative then in the straightforward approach? – JustGoscha Feb 9 '14 at 1:13 • You can implement the 1st derivative iteratively. I'm not sure which programming language you are using; I added a simple implementation of the N-th derivative in Matlab for your reference. Thanks – lennon310 Feb 9 '14 at 1:21 There is a good article on Wikipedia: Numerical Differentiation. Since you'd expect differentiation to be antisymmetric, using IIR filters might not be the wisest of ideas.
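If you work in Python, NumPy's gradient implements exactly these central differences (with one-sided differences at the endpoints); a small wrapper, with names of my own choosing:

```python
import numpy as np

# Central-difference estimates of the first two derivatives of a
# uniformly sampled signal; dt is the sample spacing. np.gradient
# computes (x[n+1] - x[n-1]) / (2*dt) at interior points.
def derivatives(x, dt=1.0):
    x = np.asarray(x, dtype=float)
    d1 = np.gradient(x, dt)   # first derivative estimate
    d2 = np.gradient(d1, dt)  # second derivative estimate
    return d1, d2
```

On a quadratic test signal the interior estimates are exact, since the central difference of a parabola recovers its derivative exactly.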
# How do you maximize 3x+4y-yz, subject to x+y<4, z>3? May 22, 2016 $\infty$ #### Explanation: $f(x, y, z) = 3x + 4y - yz$ has no stationary points, because no point satisfies the condition $\nabla f(x, y, z) = \vec{0}$ (indeed $\nabla f = (3, 4 - z, -y)$, whose first component is always 3). So any extrema must lie on the boundary of the feasible region. Taking one boundary, for instance $g_2(x, y, z) = z - 3 = 0$, and substituting into $f(x, y, z)$, we get $f_{g_2}(x, y) = 3x + y$. Calculating $\nabla f_{g_2}(x, y) = \{3, 1\}$, there are again no stationary points on $z = 3$. Now maximize $f_{g_2}(x, y) = 3x + y$ with the boundary restriction $x + y = 4$. Applying the same idea as before, substituting the boundary relation into the objective function, we obtain $(f_{g_2})_{g_1} = 3x + (4 - x) = 2x + 4$, which is unbounded above as $x \to \infty$. So the supremum is $\infty$.
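As a purely numeric sanity check of this conclusion, evaluating f along a path that approaches the boundary shows the objective growing without bound (the path and epsilon are illustrative choices of mine):

```python
# f = 3x + 4y - yz on the region x + y < 4, z > 3. Along the path
# (x, y, z) = (t, 4 - t - eps, 3 + eps) we get f ≈ 2t + 4, which
# grows without bound as t increases.
def f(x, y, z):
    return 3 * x + 4 * y - y * z

eps = 1e-9
vals = [f(t, 4 - t - eps, 3 + eps) for t in (1, 10, 100, 1000)]
```

Each point on the path satisfies both constraints strictly, yet the values keep increasing.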
Logstash configuration tuning | Elastic Blog

# Logstash configuration tuning

Logstash is a powerful beast, and when it's firing on all cylinders to crunch data it can use a lot of resources. The goal of this blog post is to provide a methodology to optimise your configuration and allow Logstash to get the most out of your hardware. As always, there is no golden optimisation rule that will work for everybody; the outcome will be influenced heavily by

• your data (size and complexity of the documents, mapping used, etc.)
• your hardware (CPUs, memory, disks, over-allocation, etc.)

… which is why your mileage may vary and it is always best to determine the best configuration for your data yourself. I have split this up into 6 steps which should be carried out in order for best results; later on you will be able to repeat individual steps to optimise individual sections. For your test setup, it is important that Logstash has the full resources of the machine available and is not sharing resources (e.g. a Redis/Elasticsearch node running on the same machine, or virtualized on a host with over-allocation), otherwise your test results may vary greatly from one run to the next.

# Step 1: the sample data

To be able to tell whether any change to the system or configuration has had a positive (or negative) impact on your throughput, you must be able to do repeatable tests, and that means sample data. You need data which is representative of the data your production system will see, and a volume of data which allows the system to warm up. You will have to do some testing to see how much is enough, but be sure that 10 documents will not give any meaningful results... try starting off with 1GB of data and see how far that gets you.
# Step 2: metrics

To measure throughput, add a metrics filter to your configuration:

```
filter {
  metrics {
    meter => "documents"
    flush_interval => 60
  }
}
```

And this to output the metrics:

```
output {
  if "metric" in [tags] {
    stdout {
      codec => line {
        format => "1m rate: %{documents.rate_1m} ( %{documents.count} )"
      }
    }
  }
}
```

Of course, as with any system, measuring it will influence the system being measured, but the metrics plugin is very light, and the additional load should be linear and not be influenced by the size/complexity of your data. Assuming you already have a complete Logstash configuration which you want to optimise, this would be a good time to take a baseline measurement of your current documents/minute rate, so you can see at the end how much improvement you actually made.

# Step 3: Optimising filters

To optimise your filters you need to ensure that your inputs are as fast as possible: use the file input to read from your sample data, and send all the documents to /dev/null to ensure that there is no bottleneck with processing or outputting your documents.

```
input {
  file {
    path => [ "/path/to/your.log" ]
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
...
output {
  null {}
}
```

Note the start_position and sincedb_path: these allow you to read the file from the beginning every time you start Logstash without having to delete sincedb files. The first thing to note here is that the Logstash command line argument --filterworkers (or -w) only influences the number of worker threads for filters and has no influence whatsoever on the inputs or outputs. The current default value for filter workers is 1 (this may be changed in future releases), so set --filterworkers to the number of cores that you have on your machine to make sure you have all the resources available for your filters. If you made a change here already, then start off by taking a baseline measurement to see the effect of any changes.
You will probably see a warm-up phase at the beginning, but it should soon settle to a steady rate with little variation (assuming your documents are relatively homogenous). After 5 minutes you should expect it to be warmed up and representative rates being displayed. The following are some examples I tried, to illustrate the methods and some results, but please note that the results I got may differ greatly from those that you get with your data. Please do not assume that just because a configuration change on my hardware with my data increased my throughput, it will for yours: you must test yourself.

## Conditionals

Is it more efficient to do two boolean comparisons

```
if [useragent][name] == "Opera" or [useragent][name] == "Chrome" {
  drop {}
}
```

or a compact regular expression?

```
if [useragent][name] =~ /^(?:Opera|Chrome)$/ {
  drop {}
}
```

In my tests, the first gets a rate of 2600 documents/minute and the regex only 2000. As expected, regular expressions are more powerful, but also more expensive. Counter-example: if your boolean comparison has 100 "or"s, but you can solve the same problem with a very simple regex, you may find the regex to be the faster solution.

## Type casting outside of grok

Which is more efficient?

```
grok {
  match => { "message" => "CustomerID%{INT:CustomerID:int}" }
  tag_on_failure => []
}
```

or

```
grok {
  match => { "message" => "CustomerID%{INT:CustomerID}" }
  tag_on_failure => []
}
mutate {
  convert => { "CustomerID" => "integer" }
}
```

In my tests, the first gets a rate of 17500 documents/minute while the second only 14000, so yes: for my data it is good to type cast within grok where you can.

## Conditional before grok or not

From the syslog logs I have on my machine I am interested (who knows why…) in capturing the IP address and renewal time from this dhclient line:

Sep 9 07:50:43 es-rclarke dhclient: bound to 10.10.10.89 -- renewal in 426936 seconds.
In my sample file, dhclient entries make up only 3% of all the log entries, and the renewal entries make up only 8% of those… The lazy way to write it would be:

```
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
    overwrite => [ "message" ]
  }
  grok {
    match => { "message" => "bound to %{IPV4:[dhclient][address]} -- renewal in %{INT:[dhclient][renewal]:int} seconds." }
    tag_on_failure => []
  }
}
```

but how much performance would it bring to only grok on the dhclient messages?

```
filter {
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
    overwrite => [ "message" ]
  }
  if [program] == "dhclient" {
    grok {
      match => { "message" => "bound to %{IPV4:[dhclient][address]} -- renewal in %{INT:[dhclient][renewal]:int} seconds." }
      tag_on_failure => []
    }
  }
}
```

In this example… surprisingly little: 12100 documents/minute without, and 14000 with the conditional.

## But the point is…

Test, test, test! Keep on reviewing your filters, think of ways to optimise, and if you have an idea put it to the test. If someone tells you "always do XYZ in your Logstash configurations - it's faster!", test it with your data on your hardware to be sure it is better for you. You will be surprised how much performance you can gain by optimising your filters, and how that translates to less hardware being purchased.

# Step 4: Optimising inputs

To optimise your inputs you need to remove all filters (with the exception of the metrics filter), and again send all the documents to /dev/null to ensure that there is no bottleneck with processing or outputting your documents. The intention here is to identify the (theoretical) maximum throughput of Logstash on your system with this input, and moreover to put your input source under load to discover its weak spots. As with the filter optimisation, try different settings comparing results, e.g.
try changing the number of threads if your input has this setting (you will probably get the best results if you set this to the number of cores on the machine, but never more than that), and of course monitor your source to ensure that the limiting factor is not Logstash. If possible, your input source (e.g. a Redis server) should be on separate hardware from your Logstash, otherwise they could both be contending for resources and not give you repeatable or reliable results.

# Step 5: Testing outputs

To optimise your output you would be best to prepare an input file by reading in from your source, applying all the filters you want to have in production, and writing out to a file using the json_lines codec. You can now use this file as an input, again having no filters other than the metrics filter, and output to your target, ensuring that there is no load from filters which would influence your tests. Ensure you have enough data: any test involving Elasticsearch should run long enough for it to warm up, have caches filled, and garbage collection running normally (probably about 30 minutes, but check your Marvel stats to be sure). Again, as with the filter optimisation, modify configuration variables on your output (most notably the "workers" option on the Elasticsearch output, which will probably be best at the number of cores your machine has) and determine what the best configuration is, all the while monitoring your output target (e.g. an Elasticsearch cluster) to ensure that it is not the bottleneck. As with the input optimisation, it is important that your output target system is not on the same hardware as your Logstash agent, otherwise they could be contending for resources.

# Step 6: Putting it all together

Now that you have optimised your input, filters and output, it is time to put your configuration together.
I usually find it easier to manage if I keep all of my Logstash configuration files in one directory, and name them so that they will be read in the correct order by the Logstash agent, e.g.

```
├── 100-input.conf
├── 200-metrics.conf
├── 500-filter.conf
└── 900-output.conf
```

It is important to note that while you can have input, filter and output sections in any order within your files, Logstash will first concatenate all the files in alphanumeric filename order. That is, if you want one filter applied first, you must name the file accordingly.
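Since the effective order is just an alphanumeric sort of the filenames, you can preview the concatenation order from the shell (filenames taken from the example above):

```shell
# Preview the order in which Logstash will concatenate these files:
# a plain alphanumeric sort of the config filenames.
printf '%s\n' 900-output.conf 100-input.conf 500-filter.conf 200-metrics.conf | sort
```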
Multi-Chain Merge Mining

Flare brings existing networks together into a unified, composable system so that they can operate together via smart contracts, without imposing any changes on how the underlying networks already work today. To that end, Flare adopts a merge-mining-like approach for its own block production, whereby the miners of these underlying networks gain the opportunity to share responsibility for producing blocks on Flare in parallel to their mining responsibility on their host chain, earning mining rewards on Flare for doing so. The key design decisions of this mechanism are:

• The set of underlying chains used for merge-mining is based on the chains leveraged in the F-asset protocol.
• The relative validation power of each underlying chain on Flare is shared uniformly among all the underlying chains.
• Miners gain more sampling probability in consensus, i.e. more importance to the safety of the network, based on their relative mining power output on their underlying chain.
• For example, if a miner from an underlying chain mined 40 of the past 100 blocks, then their chain-level sampling probability is 0.4. This number is then normalized so that it sums to 1 together with the chain-level sampling probabilities of the other miners from the same host chain; this step accounts for incomplete participation by the underlying chain's miners as merge-mining validators on Flare. The resulting normalized number is then divided by the number of chains in the F-asset protocol, N, to obtain the global sampling probability of the underlying chain node.
• Miners from underlying chains link their Flare account to their underlying-chain mining beneficiary account via a simple transaction on the underlying chain, originating from the mining beneficiary account, which contains the Flare account in the 'memo' field.
• The previous two points are publicly verifiable and require no trusted third party to derive them.
Each week, this data is used to construct the consensus sampling probability distribution of validators on Flare, which is supplied independently to each node on Flare by its node operator, without need for a multi-signature or any governance process to sign this into effect.

• All Flare nodes can optionally deviate from this 'ground truth' consensus sampling probability distribution (for safety validation, not leader election) by a certain amount, as described in detail in section 2.1.1 "Consensus" of this paper.

Below is an example Flare consensus sampling probability distribution:

```
{
  "validators": [
    { "nodeID": "NodeID-GQ4292fG2RMRWa7RtphPJTYHeMR5YAQPM", "origin": "chain-0", "weighting": 25 },
    { "nodeID": "NodeID-GMHrauiUPGikdbT4Z65dEBFpfQWKovLy5", "origin": "chain-1", "weighting": 20 },
    { "nodeID": "NodeID-DhdvGK268cNmDPzvh1Vw7rzSmT1tptSUB", "origin": "chain-3", "weighting": 19 },
    { "nodeID": "NodeID-hBfmpWJ87GSPHUtxthGd2fHsVdaGmkgq", "origin": "chain-2", "weighting": 14 },
    { "nodeID": "NodeID-LtahNtUH9tb4VCZZipLNqkBCxzjpFTdHs", "origin": "chain-1", "weighting": 11 },
    { "nodeID": "NodeID-34KwvqefLeXYPrzcNjc4yMPRpMfE89ppw", "origin": "chain-2", "weighting": 8 },
    { "nodeID": "NodeID-G9CJC4te7FyH1XyMugsRqVYYZBuTreFvd", "origin": "chain-3", "weighting": 2 },
    { "nodeID": "NodeID-HhAo3hwTn73UB1LxU131gXrs7HMnMxmdE", "origin": "chain-0", "weighting": 1 }
  ]
}
```

The probability of any particular validator $V$ in the above list being sampled during consensus is then: $\frac{\texttt{weighting}_V}{\sum \texttt{weighting}}$

Validator rewards are based on rewarding elected leader nodes during consensus, with leader nodes being drawn automatically based on their 'ground-truth' consensus sampling probability.
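The two calculations described above, a miner's global sampling probability and a validator's consensus sampling probability, can be sketched as follows (a hypothetical illustration: the function and variable names are mine, not Flare's API; the weightings are taken from the example distribution):

```python
# 1. From a miner's chain-level mining share to its global sampling
#    probability (normalization interpretation as described in the text).
def global_sampling_prob(miner_blocks, participating_blocks, window, n_chains):
    raw = miner_blocks / window                    # e.g. 40 of the past 100 blocks -> 0.4
    participation = participating_blocks / window  # fraction of window mined by linked miners
    normalized = raw / participation               # chain-level probabilities now sum to 1
    return normalized / n_chains                   # divide by the number of F-asset chains N

# 2. Per-validator sampling probability: weighting_V / sum(weighting).
weightings = {
    "NodeID-GQ4292fG2RMRWa7RtphPJTYHeMR5YAQPM": 25,
    "NodeID-GMHrauiUPGikdbT4Z65dEBFpfQWKovLy5": 20,
    "NodeID-DhdvGK268cNmDPzvh1Vw7rzSmT1tptSUB": 19,
    "NodeID-hBfmpWJ87GSPHUtxthGd2fHsVdaGmkgq": 14,
    "NodeID-LtahNtUH9tb4VCZZipLNqkBCxzjpFTdHs": 11,
    "NodeID-34KwvqefLeXYPrzcNjc4yMPRpMfE89ppw": 8,
    "NodeID-G9CJC4te7FyH1XyMugsRqVYYZBuTreFvd": 2,
    "NodeID-HhAo3hwTn73UB1LxU131gXrs7HMnMxmdE": 1,
}
total = sum(weightings.values())
sampling_probs = {node: w / total for node, w in weightings.items()}
```

In this example the weightings happen to sum to 100, so the first validator's sampling probability is simply 25/100 = 0.25.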
This document outlines how to propose a change to pkgdown. For more detailed info about contributing to this and other tidyverse packages, please see the tidyverse contribution guide.

## Package reprexes

If you encounter unexpected errors after running pkgdown::build_site(), try to build a minimal package that recreates the error. An ideal minimal package has no dependencies, making it easy to install and quickly reproduce the error. An example of a minimal package was this issue, where a minimal package containing a single .R file with two lines could reproduce the error. Once you have built a minimal package that recreates the error, create a GitHub repository from the package, and file an issue with a link to the repository. The quickest way to set up a minimal example package is with usethis::create_package():

```r
library(usethis)
library(pkgdown)
tmp <- file.path(tempdir(), "test")
usethis::create_package(tmp, open = FALSE)
# ... edit files ...
pkgdown::build_site(tmp, new_process = FALSE, preview = FALSE)
```

## Rd translation

If you encounter problems with Rd tags, please use rd2html() to create a reprex:

```r
library(pkgdown)
rd2html("a\n%b\nc")
rd2html("a & b")
```

## Pull request process

• We recommend that you create a Git branch for each pull request (PR).
• Look at the Travis and AppVeyor build status before and after making changes. The README should contain badges for any continuous integration services used by the package.
• New code should follow the tidyverse style guide. You can use the styler package to apply these styles, but please don't restyle code that has nothing to do with your PR.
• We use roxygen2, with Markdown syntax, for documentation.
• We use testthat. Contributions with test cases included are easier to accept.
• For user-facing changes, add a bullet to the top of NEWS.md below the current development version header describing the changes made, followed by your GitHub username and links to relevant issue(s)/PR(s).
### Netlify

We might ask you for a Netlify preview of your changes, i.e. how your local changes affect the pkgdown website:

1. Build and install the amended package, then re-build the website (clean_site() and then build_site()), which will update the docs/ folder.
2. Log into Netlify at https://app.netlify.com/sites/, and scroll to the bottom. You'll see a box with a dashed outline that says "Want to deploy a new site without connecting to Git?".
3. Open up a file browser, navigate to docs/, and drag the docs/ folder to the dashed box, which will copy all the files into a temporary Netlify site.
4. After the file transfer completes, Netlify will generate a temporary URL on a new page that you can copy/paste into the PR discussion.

## Fixing typos

Small typos or grammatical errors in documentation may be edited directly using the GitHub web interface, so long as the changes are made in the source file.

• YES: you edit a roxygen comment in a .R file below R/.
• NO: you edit an .Rd file below man/.

## Prerequisites

Before you make a substantial pull request, you should always file an issue and make sure someone from the team agrees that it's a problem. If you've found a bug, create an associated issue and illustrate the bug with a minimal reprex.

## Code of Conduct

Please note that the pkgdown project is released with a Contributor Code of Conduct. By contributing to this project you agree to abide by its terms.
# Slow passage through multiple bifurcation points

• The slow passage problem, the slow variation of a control parameter, is explored in a model problem that possesses several co-existing equilibria (fixed points, limit cycles and 2-tori); these are created, destroyed, or change their stability as control parameters are varied through Hopf, Neimark-Sacker and torus break-up bifurcations. The slow passage through the Hopf bifurcation behaves as determined in previous studies (the delay in the observation of oscillations depends only on how far from critical the ramped parameter is at the start of the ramp, a memory effect), and that through the Neimark-Sacker bifurcation behaves similarly. We show that the range of the ramped parameter over which a Hopf oscillation can be observed (limited by the subsequent onset of torus oscillations from the Neimark-Sacker bifurcation) is twice that predicted from a static-parameter bifurcation analysis, and this is a memory-less result independent of the initial value of the ramped parameter. These delay and memory effects are independent of the ramp rate, for small enough ramp rates. The slow passage through the torus break-up bifurcation is qualitatively different. It does not depend on the initial value of the ramped parameter, but instead is found to depend, on average, on the square root of the ramp rate. This is typical of transient behavior. We show that this transient behavior is due to the stable and unstable manifolds of the saddle limit cycles forming a very narrow escape tunnel for trajectories originating near the unstable 2-torus, no matter how slow a ramp speed is used.
The type of bifurcation sequence in the model problem studied (Hopf, Neimark-Sacker, torus break-up) is typical of those for the transition to spatio-temporal chaos in hydrodynamic problems, and in those physical problems the transition can occur over a very small range of the control parameter, so the inevitable slow drift of the parameter in an experiment may lead to observations where the slow passage results reported here need to be taken into account.

Mathematics Subject Classification: Primary: 37B55, 74H60; Secondary: 70K70.
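The Hopf memory effect summarized above is usually stated through an accumulated-growth condition. The following is a hedged sketch of that standard slow-passage result; the notation (λ(t), μ(λ), λ₀, λ*) is assumed for illustration and does not come from this page.

```latex
% Ramp the parameter slowly, \lambda(t) = \lambda_0 + \epsilon t, and let
% \mu(\lambda) be the real part of the critical eigenvalue, with the
% static Hopf point at \mu(\lambda_c) = 0. Oscillations are first
% observed at \lambda^* > \lambda_c satisfying
\[
  \int_{\lambda_0}^{\lambda^*} \mu(\lambda)\,\mathrm{d}\lambda = 0 ,
\]
% so \lambda^* depends on the starting value \lambda_0 (the memory
% effect) but, to leading order, not on the ramp rate \epsilon.
```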
# Holographic S-fold theories at one loop

### Submission summary

Contributors: Connor Behan
arXiv link: https://arxiv.org/abs/2202.05261v4 (pdf)
Date accepted: 2022-04-21
Date submitted: 2022-04-12 14:44
Submitted by: Behan, Connor
Submitted to: SciPost Physics
Academic field: Physics
Specialties: High-Energy Physics - Theory
Approach: Theoretical

### Abstract

A common feature of tree-level holography is that a correlator in one theory can serve as a generating function for correlators in another theory with less continuous symmetry. This is the case for a family of 4d CFTs with eight supercharges which have protected operators dual to gluons in the bulk. The most recent additions to this family were defined using S-folds, which combine a spatial identification with an action of the S-duality group in type IIB string theory. Differences between these CFTs which have a dynamical origin first become manifest at one loop. To explore this phenomenon at the level of anomalous dimensions, we use the AdS unitarity method to bootstrap a one-loop double discontinuity. Compared to previous studies, the subsequent analysis is performed without any assumption about which special functions are allowed. Instead, the Casimir singular and Casimir regular terms are extracted iteratively in order to move from one Regge trajectory to the next. Our results show that anomalous dimensions in the presence of an S-fold are no longer rational functions of the spin.

Published as SciPost Phys. 12, 149 (2022)

### List of changes

In accordance with the referee comments:

1. Section 4.3.3 has been added to report spin-1 and spin-2 anomalous dimensions for the k=2 S-fold.
2. Section 4.5 has been added to discuss the one-loop Mellin amplitude.
3. A paragraph has been added to section 5 about which complications might arise in other theories.
4. There are now a few more citations to prior work on holographic correlators.
5. 
The main results previously had a sign error and a more substantial coefficient error in the k=4 case. These have been corrected.

### Submission & Refereeing History

#### Published as SciPost Phys. 12, 149 (2022)

Resubmission 2202.05261v4 on 12 April 2022
Submission 2202.05261v1 on 21 February 2022

## Reports on this Submission

### Report

With the new corrections, I think the draft is ready for publication. I think it would be nice if the $k>2$ case could also be given an exact expression, but perhaps that will require new methods.
# A Rust library for the Marlin preprocessing zkSNARK

# Marlin

marlin is a Rust library that implements a preprocessing zkSNARK for R1CS with a universal and updatable SRS. This library was initially developed as part of the Marlin paper, and is released under the MIT License and the Apache v2 License (see License).

WARNING: This is an academic prototype, and in particular has not received careful code review. This implementation is NOT ready for production use.

## Overview

A zkSNARK with preprocessing achieves succinct verification for arbitrary computations, as opposed to only for structured computations. Informally, in an offline phase, one can preprocess the desired computation to produce a short summary of it; subsequently, in an online phase, this summary can be used to check any number of arguments relative to this computation.

The preprocessing zkSNARKs in this library rely on a structured reference string (SRS), which contains system parameters required by the argument system to produce/validate arguments. The SRS in this library is universal, which means that it supports (deterministically) preprocessing any computation up to a given size bound. The SRS is also updatable, which means that anyone can contribute a fresh share of randomness to it, which facilitates deployments in the real world.

The construction in this library follows the methodology introduced in the Marlin paper, which obtains preprocessing zkSNARKs with universal and updatable SRS by combining two ingredients:

• an algebraic holographic proof
• a polynomial commitment scheme

The first ingredient is provided as part of this library, and is an efficient algebraic holographic proof for R1CS (a generalization of arithmetic circuit satisfiability supported by many argument systems). The second ingredient is imported from poly-commit. See below for evaluation details.
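The offline/online split described above can be illustrated with a small self-contained sketch. This is a toy model only: the names `universal_setup`, `index`, `UniversalSrs`, and `IndexKeys` are illustrative stand-ins, and a real SRS holds group elements rather than a bare size bound.

```rust
/// Toy model of a universal SRS: just a size bound fixed at setup time.
struct UniversalSrs {
    max_constraints: usize,
}

/// "Offline" phase output: a short, circuit-specific summary.
struct IndexKeys {
    num_constraints: usize,
}

/// One-time setup; any circuit up to the bound can reuse this SRS.
fn universal_setup(max_constraints: usize) -> UniversalSrs {
    UniversalSrs { max_constraints }
}

/// Deterministically specialize the universal SRS to one circuit.
/// Nothing circuit-specific is needed from the trusted setup again.
fn index(srs: &UniversalSrs, num_constraints: usize) -> Result<IndexKeys, String> {
    if num_constraints > srs.max_constraints {
        return Err(format!(
            "circuit has {} constraints, SRS supports at most {}",
            num_constraints, srs.max_constraints
        ));
    }
    Ok(IndexKeys { num_constraints })
}

fn main() {
    let srs = universal_setup(65536);
    // "Online" phase: the index keys are reused for any number of proofs.
    let keys = index(&srs, 1024).expect("within the bound");
    println!("indexed circuit with {} constraints", keys.num_constraints);
    assert!(index(&srs, 100_000).is_err());
}
```

The point of the sketch is the asymmetry: setup runs once for a size bound, indexing runs once per circuit, and everything after that reuses the short summary.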
## Build guide

The library compiles on the stable toolchain of the Rust compiler. To install the latest version of Rust, first install rustup by following the instructions here, or via your platform's package manager. Once rustup is installed, install the Rust toolchain by invoking:

    rustup install stable

After that, use cargo (the standard Rust build tool) to build the library:

    git clone https://github.com/arkworks-rs/marlin.git
    cd marlin
    cargo build --release

This library comes with some unit and integration tests. Run these tests with:

    cargo test

Lastly, this library is instrumented with profiling infrastructure that prints detailed traces of execution time. To enable this, compile with cargo build --features print-trace.

## Benchmarks

All benchmarks below are performed over the BLS12-381 curve implemented in the ark-bls12-381 library, with the asm feature activated. Benchmarks were run on a machine with an Intel Xeon 6136 CPU running at 3.0 GHz.

### Running time compared to Groth16

The graphs below compare the running time, in single-thread execution, of Marlin's indexer, prover, and verifier algorithms with the corresponding algorithms of Groth16 (the state of the art in preprocessing zkSNARKs for R1CS with circuit-specific SRS) as implemented in groth16. We evaluate Marlin's algorithms when instantiated with the PC scheme from [CHMMVW20] (denoted "M-AHP w/ PC of [CHMMVW20]"), and the PC scheme from [MBKM19] (denoted "M-AHP w/ PC of [MBKM19]"). The following graphs compare the running time of Marlin's prover when instantiated with the PC scheme from [CHMMVW20] (left) and the PC scheme from [MBKM19] (right) when executed with a different number of threads.

### Proof size

We compare the proof size of Marlin with that of Groth16. We instantiate the Marlin SNARK with the PC scheme from [CHMMVW20], and the PC scheme from [MBKM19].
| Scheme | Proof size in bytes |
| --- | --- |
| Marlin AHP with PC of [CHMMVW20] | 880 |
| Marlin AHP with PC of [MBKM19] | 784 |
| [Groth16] | 192 |

## License

Unless you explicitly state otherwise, any contribution that you submit to this library shall be dual licensed as above (as defined in the Apache v2 License), without any additional terms or conditions.

## Reference paper

Marlin: Preprocessing zkSNARKs with Universal and Updatable SRS
Alessandro Chiesa, Yuncong Hu, Mary Maller, Pratyush Mishra, Psi Vesely, Nicholas Ward
EUROCRYPT 2020

## Acknowledgements

This work was supported by: an Engineering and Physical Sciences Research Council grant; a Google Faculty Award; the RISELab at UC Berkeley; and donations from the Ethereum Foundation and the Interchain Foundation.

• #### Fix outlining

This PR turns on outlining and adds a test that triggers the outlining. None of the previous tests triggered outlining, which is why this problem was not discovered earlier. This PR is expected to fail before https://github.com/arkworks-rs/snark/pull/331 is merged. This closes #56.

opened by weikengchen 8

A serious limitation of the benchmark is that it does not quantify the "weight" / "nonzero" / "density" yet. Any idea or suggestion? This, if good enough, closes https://github.com/arkworks-rs/marlin/issues/31.

opened by weikengchen 8

• #### Trouble compiling Marlin with IPA-PC

I'm trying to get a dummy test compiling with the IPA_PC scheme and Marlin / Pallas, but my compiler says it doesn't implement the PolynomialCommitment trait, which I can see it does (unless it just doesn't for this specific curve type). Am I setting things up incorrectly, or does it not work for Fp256?
    error[E0277]: the trait bound `InnerProductArgPC<ark_ec::models::short_weierstrass_jacobian::GroupAffine<PallasParameters>, Blake2s, DensePolynomial<Fp256<FrParameters>>>: ark_poly_commit::PolynomialCommitment<Fp256<FrParameters>, DensePolynomial<Fp256<FrParameters>>>` is not satisfied
      --> src/tests.rs:79:19
       |
    79 |       let srs = Marlin::<
       |  ___________________^
    80 | |         $bench_field,
    81 | |         InnerProductArgPC<$affine_curve, Blake2s, DensePolynomial<$bench_field>>,
    82 | |         Blake2s,
    83 | |     >::universal_setup(65536, 65536, 65536, rng)
       | |__________________________^ the trait `ark_poly_commit::PolynomialCommitment<Fp256<FrParameters>, DensePolynomial<Fp256<FrParameters>>>` is not implemented for `InnerProductArgPC<ark_ec::models::short_weierstrass_jacobian::GroupAffine<PallasParameters>, Blake2s, DensePolynomial<Fp256<FrParameters>>>`

I'm under the impression that this is the implementation that I thought should work: https://github.com/arkworks-rs/poly-commit/blob/constraints/src/ipa_pc/mod.rs#L304. Nonetheless, I'm invoking this with the following args:

    fn bench_prove() {
        marlin_prove_bench!(pallas, ark_pallas::Fr, ark_pallas::Affine);
    }

    fn bench_verify() {
        marlin_verify_bench!(pallas, ark_pallas::Fr, ark_pallas::Affine);
    }

opened by drewstone 6

• #### Master build is broken

As of subject. There seems to be an incompatibility with the latest zexe. The culprit is plausibly this commit in zexe: https://github.com/scipr-lab/zexe/commit/ecc6057971c1bb1595ca5c5537afc5e0561a38d5#diff-4cca21435ee820e184b16118ad656f07

opened by matteocam 5

• #### Compile error "the trait ff_fft::domain::EvaluationDomain cannot be made into an object"

This looks like a breaking change in the ff_fft crate. I suggest including a working Cargo.lock in the code, or pinning to a specific commit when you use git dependencies.

    ff-fft = { git = "https://github.com/scipr-lab/zexe/", default-features = false }

opened by juchiast 5

• #### Replacing the FiatShamirRng with Merlin?
Currently Marlin includes its own FiatShamirRng, which uses chacha20 and a generic Digest function (instantiated with, I think, blake2s). Would you be interested in a PR that replaces it with Merlin? In contrast to the existing implementation, this provides more secure prover randomness generation, allows binding Marlin proofs to arbitrary structured application data rather than just a single domain separator string, or to transcripts of other proof protocols, and potentially makes the implementation slightly cleaner (although the FiatShamirRng API is already pretty reasonable). It also simplifies the (cryptographic) dependencies: rather than relying on the security of both chacha20 and blake2s (or some other hash function), the security relies only on keccak-f/1600. I would be happy to create a PR for this change, but only if it's one that you'd actually want.

opened by hdevalence 5

• #### Update diagram.tex

Mistake in the definition of b(X)

## Description

closes: #XXXX

Before we can merge this PR, please make sure that all the following items have been checked off. If any of the checklist items are not applicable, please leave them but write a little note why.

• [ ] Targeted PR against correct branch (master)
• [ ] Linked to Github issue with discussion and accepted design OR have an explanation in the PR that describes this work.
• [ ] Wrote unit tests
• [ ] Updated relevant documentation in the code
• [ ] Added a relevant changelog entry to the Pending section in CHANGELOG.md
• [ ] Re-reviewed Files changed in the Github PR explorer

opened by jon-chuang 4

• #### Unify the interface for Fiat-Shamir transform

## Description

This PR abstracts the functionality needed for creating Fiat-Shamir challenges in Marlin. It allows for choices of other primitives instead of the hard-coded ChaChaRng with hash seed currently used.
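The shape of such an abstraction can be sketched in plain Rust. This is a toy stand-in, assuming nothing about the crate's real trait: the names `FiatShamirRng`, `SimpleHashFs`, and the method signatures here are illustrative, and `DefaultHasher` is of course not a cryptographic hash.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

/// Hypothetical, minimal transcript trait: absorb prover messages,
/// squeeze verifier challenges derived from everything absorbed so far.
trait FiatShamirRng {
    fn absorb(&mut self, bytes: &[u8]);
    fn squeeze_challenge(&mut self) -> u64;
}

/// Toy instantiation backed by the stdlib hasher (NOT secure).
struct SimpleHashFs {
    state: u64,
    counter: u64,
}

impl SimpleHashFs {
    fn new(domain_separator: &[u8]) -> Self {
        let mut fs = SimpleHashFs { state: 0, counter: 0 };
        fs.absorb(domain_separator);
        fs
    }
}

impl FiatShamirRng for SimpleHashFs {
    fn absorb(&mut self, bytes: &[u8]) {
        // Fold the new message into the running transcript state.
        let mut h = DefaultHasher::new();
        h.write_u64(self.state);
        h.write(bytes);
        self.state = h.finish();
    }
    fn squeeze_challenge(&mut self) -> u64 {
        // Derive a challenge and advance the state so repeated
        // squeezes yield distinct values.
        let mut h = DefaultHasher::new();
        h.write_u64(self.state);
        h.write_u64(self.counter);
        self.counter += 1;
        self.state = h.finish();
        self.state
    }
}

fn main() {
    let mut prover = SimpleHashFs::new(b"marlin-demo");
    let mut verifier = SimpleHashFs::new(b"marlin-demo");
    prover.absorb(b"first prover message");
    verifier.absorb(b"first prover message");
    // Prover and verifier derive the same challenge from the same transcript.
    assert_eq!(prover.squeeze_challenge(), verifier.squeeze_challenge());
}
```

A trait boundary like this is what lets callers swap in chacha20+blake2s, Merlin, or a SNARK-friendly hash without touching the proof system itself.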
This hard-coded choice makes it difficult to use Marlin for some applications, e.g., in a Solidity verifier where keccak would be preferred over chacha (https://github.com/Zokrates/ZoKrates/pull/1103). Other projects (https://github.com/AleoHQ/snarkVM/blob/testnet2/marlin/src/fiat_shamir/traits/fiat_shamir.rs#L28) have similarly abstracted out the Fiat-Shamir functionality for the purpose of using a SNARK-friendly hash. This PR can serve as a quick fix to allow customizability of the Marlin proof system. If a more comprehensive solution that supports constraints is desired (following the above link), perhaps that can be added in the future.

This PR does the following:

• Adds a FiatShamirRng trait to abstract the functionality needed for creating Fiat-Shamir challenges
• Adds a SimpleHashFiatShamirRng struct that implements the functionality using the existing SeedableRng and Digest composition (the previous hard-coded functionality is obtained by instantiating with ChaChaRng)

Test strategy: cargo test succeeds for Marlin instantiated with SimpleHashFiatShamirRng with ChaChaRng and Blake2s.

## Checklist

• [x] Targeted PR against correct branch (master)
• [x] Linked to Github issue with discussion and accepted design OR have an explanation in the PR that describes this work.
• [ ] Wrote unit tests (tests pass, no functionality change -- just a refactor)
• [x] Updated relevant documentation in the code
• [x] Added a relevant changelog entry to the Pending section in CHANGELOG.md (not sure what to add here)
• [x] Re-reviewed Files changed in the Github PR explorer

opened by nirvantyagi 3

• #### A question about section 5.3.2 of the Marlin paper

Hello arkworks-rs, thanks for your great work on Marlin. While reading the paper I encountered a question. On page 28, section 5.3.2, it says:

$q_1(X) = h_1(X) v_H(X) + X g_1(X)$

However, on the same page, equation (6) says that $q_1(X)$ sums to $\sigma_1$ on H.
Therefore, according to the univariate sumcheck protocol for subgroups introduced in section 5.1, $q_1(X)$ should be written as

$q_1(X) = h_1(X) v_H(X) + X g_1(X) + \sigma_1/|S|$

if $\sigma_1$ is not zero. Is my understanding wrong, or is there something I have missed? Any help is appreciated, thanks.

opened by wang12d 3

• #### Update digest requirement from 0.8 to 0.9

Updates the requirements on digest to permit the latest version. Commits

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.

Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

• @dependabot rebase will rebase this PR
• @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
• @dependabot merge will merge this PR after your CI passes on it
• @dependabot squash and merge will squash and merge this PR after your CI passes on it
• @dependabot cancel merge will cancel a previously requested merge and block automerging
• @dependabot reopen will reopen this PR if it is closed
• @dependabot close will close this PR and stop Dependabot recreating it.
You can achieve the same result by closing it manually.

• @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
• @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
• @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
• @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
• @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
• @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
• @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language
• @dependabot badge me will comment on this PR with code to add a "Dependabot enabled" badge to your readme

Additionally, you can set the following in your Dependabot dashboard:

• Update frequency (including time of day and day of week)
• Pull request limits (per update run and/or open at any time)
• Out-of-range updates (receive only lockfile updates, if desired)
• Security updates (receive only security updates, if desired)

dependencies opened by dependabot-preview[bot] 3

• #### Unable to build marlin

Thanks for the awesome work. The package marlin depends on poly-commit with the features parallel and std, and I think poly-commit does have these features, but cargo reports otherwise:

    failed to select a version for poly-commit which could resolve this conflict

If I remove these features, I am still facing other issues, probably because of the latest update to the zexe library.
I think everything works for commit #4ae139c in the zexe library, if that helps in any debugging.

opened by sanket1729 3

• #### Add sponge module

## Description

Introduce the ability to create a sponge from a rate argument. This should later be upstreamed to ark-sponge; for now, we are avoiding breaking changes in 0.3.

Before we can merge this PR, please make sure that all the following items have been checked off. If any of the checklist items are not applicable, please leave them but write a little note why.

• [ ] Targeted PR against correct branch (master)
• [ ] Linked to Github issue with discussion and accepted design OR have an explanation in the PR that describes this work.
• [ ] Wrote unit tests
• [ ] Updated relevant documentation in the code
• [ ] Added a relevant changelog entry to the Pending section in CHANGELOG.md
• [ ] Re-reviewed Files changed in the Github PR explorer

opened by vlopes11 0

• #### Fix and update dependencies to 0.3

## Description

The constraints branch isn't building with the latest dependencies.

Open questions:

• [ ] We declare DensePolynomial<F> concretely in some places, and expect the generic PC::BatchProof in others
• [ ] fiat_shamir::AlgebraicSponge looks like a duplicate of ark_sponge::CryptographicSponge and should possibly be replaced

closes: #87

• [ ] Targeted PR against correct branch (master) - N/A, this aims to fix constraints so the latter can be merged to master
• [x] Linked to Github issue with discussion and accepted design OR have an explanation in the PR that describes this work.
• [ ] Wrote unit tests
• [ ] Updated relevant documentation in the code
• [ ] Added a relevant changelog entry to the Pending section in CHANGELOG.md
• [x] Re-reviewed Files changed in the Github PR explorer

opened by vlopes11 2

• #### Fix and update dependencies to 0.3

## Summary of Bug

The constraints branch isn't building with the latest dependencies.
## Version

(https://github.com/arkworks-rs/marlin/commit/903c741e038be240d1c80622a843c4b011965d75)

## Steps to Reproduce

    $ cargo build

opened by vlopes11 0

## Summary

My notes on how to implement Marlin + Plookup: https://hackmd.io/@gPH8I-Z5RcSMH2KTXLDeDg/BJcMTU9wO

@Pratyush you may still recognize this. I don't think I'll have time to work on this; I'll let another enthusiastic cryptographer do so if they so desire.

## Proposal

• [ ] Not duplicate issue
• [ ] Appropriate labels applied
• [ ] Appropriate contributors tagged
• [ ] Contributor assigned/self-assigned

opened by jon-chuang 0

## Summary of Bug

While adding Marlin as a backend for ZoKrates, I'm hitting an underflow panic during the setup phase.

This works (1 public input: the return value):

    def main(private field x) -> field:
        return x**2

This fails (1 public input: the return value):

    def main(private field x) -> field:
        return x

    attempt to subtract with overflow
    ark-marlin-0.2.0/src/ahp/mod.rs:108:28

It seems like k_size is expected to be bigger than 2, but in this case isn't.

0.2.0

## Steps to Reproduce

For these examples I used a universal setup degree of 5 for all parameters.

T-bug D-easy P-medium opened by Schaeff 3

## Description

Currently, the witness polynomial multiplication in the second round of the AHP requires performing an FFT with a domain of size 4 * |H|. This change distributes the multiplication so that it only requires an FFT with a domain of size 2 * |H|, along with some cheap multiplications/additions.
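The domain-size bound above tracks the degree of the polynomial being interpolated. A small self-contained sketch of that degree arithmetic, using toy integer-coefficient polynomials (the `poly_mul` helper is illustrative, not the library's code, and the 4|H| vs 2|H| accounting in the actual PR is an assumption on my part):

```rust
/// Naive coefficient-domain multiplication, used only to show how the
/// product degree (and hence the required FFT domain size) grows.
fn poly_mul(a: &[i64], b: &[i64]) -> Vec<i64> {
    let mut out = vec![0i64; a.len() + b.len() - 1];
    for (i, &x) in a.iter().enumerate() {
        for (j, &y) in b.iter().enumerate() {
            out[i + j] += x * y;
        }
    }
    out
}

fn main() {
    let n = 4; // stand-in for |H|
    // Two polynomials of degree < n (witness-sized operands).
    let a = vec![1, 2, 3, 4];
    let b = vec![5, 6, 7, 8];
    let c = poly_mul(&a, &b);
    // The product has degree 2n - 2, i.e. 2n - 1 coefficients, so an
    // evaluation domain of size 2*|H| suffices to recover it exactly;
    // multiplying more such operands together in one shot would push
    // the degree (and the domain) toward 4*|H|.
    assert_eq!(c.len(), 2 * n - 1);
}
```

Distributing a large product into smaller ones keeps each intermediate degree, and therefore each FFT domain, small.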
Running marlin-benches on a fairly powerful machine from master:

• per-constraint proving time for Bls12_381: 51662 ns/constraint
• per-constraint proving time for MNT4_298: 45649 ns/constraint
• per-constraint proving time for MNT6_298: 55951 ns/constraint
• per-constraint proving time for MNT4_753: 321626 ns/constraint

Running marlin-benches on the same machine from this PR:

• per-constraint proving time for Bls12_381: 50834 ns/constraint
• per-constraint proving time for MNT4_298: 46081 ns/constraint
• per-constraint proving time for MNT6_298: 54172 ns/constraint
• per-constraint proving time for MNT4_753: 320387 ns/constraint

So it seems to give an overall slight latency improvement and reduces the memory overhead of proving. I would guess that performance gains would be more noticeable on weaker machines.

This PR depends on https://github.com/arkworks-rs/algebra/pull/258

Before we can merge this PR, please make sure that all the following items have been checked off. If any of the checklist items are not applicable, please leave them but write a little note why.

• [x] Targeted PR against correct branch (master)
• [x] Linked to Github issue with discussion and accepted design OR have an explanation in the PR that describes this work.
• [ ] Wrote unit tests
• [x] Updated relevant documentation in the code
• [ ] Added a relevant changelog entry to the Pending section in CHANGELOG.md
• [ ] Re-reviewed Files changed in the Github PR explorer

opened by ryanleh 0
Hyacinthe Mukaritaganda of Kagenge village helped build a communal water tank, as part of the Millennium Villages project. Credit: S. TOMLIN Celestin Ndahayo smiles broadly at me from below the corn (maize) that towers a metre or more above him, his daughter Annalita clutching his hand. This is the first corn harvest he has seen here in almost ten years. There was a smaller harvest of beans and sorghum in 2001; last year there was nothing. In the years without good rains, the people of Kagenge (sometimes called Mayange) in Rwanda survive the best they can. Some walk four nights and three days to reach a more productive region. Ndahayo sometimes takes construction jobs to support his wife and four children. Hyacinthe Mukaritaganda's husband is one of those currently working elsewhere, leaving her to manage their land and look after three children on her own. This season she planted corn on one-fifth of their land, a short walk from Ndahayo's homestead. Thanks to the rains, she is expecting a good harvest, which should provide enough seeds to plant all 2.5 hectares next year. Then, she hopes, her husband will stay at home to help. The rains, though, are not the only things bringing hope to Kagenge. In 2005, the village was chosen to take part in the Millennium Villages project. Led by the Earth Institute at Columbia University in New York, the project is applying a range of poverty-slashing interventions to 12 sites across Africa (see map). The idea is not just to show that interventions in a number of different areas, properly coordinated and financed, can make a sustainable change to the lives of the world's poorest communities. It is to show how that can be done quickly in a way that can be replicated easily. Donald Ndahiro, an agronomist trained in Uganda, is the project's agriculture coordinator for Rwanda. He says that when he arrived in Kagenge late last year conditions were desperate. 
“The villagers were emaciated.” They wanted food aid more than they wanted the agricultural advice, drought-resistant seeds, fertilizer and new techniques that the project was offering. “They thought we were making fun of them,” Ndahiro says. “We were telling them how to plant, how to harvest, but they were saying they were never getting any good rains. We told them to get organized.” Five months later and the villagers are getting organized. Ndahayo is a member of the agriculture committee that will decide what to do with the surplus from this year's corn harvest. Mukaritaganda is helping to clear land for a tree nursery (villagers sometimes walk ten kilometres to gather firewood) and was part of the team that just built a communal tank to collect rainwater. She invites me with pride to a ceremony in which certificates are awarded to her and the 25 other villagers who worked on the tank. Fertilizer, seeds and advice are being given to 12 villages in Africa, to demonstrate how properly coordinated interventions can make a sustainable difference to people's lives. Credit: D. NDAHIRO The Millennium Villages project aims to provide improved resources and techniques not only in agriculture, but also in health, education, transport, energy and water provision, and financial management. The plan is to achieve the United Nations' Millennium Development Goals (see box) for the 5,000 or so people in Kagenge, and for the tens of thousands of people in the 11 villages elsewhere within 5 years — 5 years ahead of the UN target date. The eight goals, committed to by 189 heads of state in 2000, include halving the number of people living on less than US$1 a day and controlling malaria by 2015. Progress so far has been limited, especially in Africa — far too slow for the impatient economist Jeffrey Sachs, head of the UN Millennium Project and the Earth Institute. 
Sachs wants the 'research villages' and the data that they provide to offer ways of picking up the pace: “The idea is to demonstrate a practical path and to mobilize governments.”

The man in charge of making such a demonstration is Josh Ruxin, a Columbia University public-health expert and the project's director in Rwanda. Ruxin, imbued with an impressive energy and passion, was initially sceptical of the village-by-village approach: he wanted to target a millennium country, not an isolated village. But Ruxin is encouraged by the Rwandan government's own ambitious poverty-reduction strategy, known as Vision 2020. Ben Karenzi, the Rwandan health ministry's secretary general, says: “We believe it's possible, especially with the focused leadership we have and the commitment of our people, to make Rwanda a mid-level income country by 2020.” In the context of that commitment, Ruxin is confident that with the help of the Millennium Villages project, Rwandans can succeed not just in turning round one village, but in transforming life for poor farmers across the country.

Money cares

Part of Ruxin's confidence comes from an assessment of the government. In the aftermath of the genocide of 1994 and the resettlement of some two million returnees from neighbouring countries, 64% of Rwanda's population was living in poverty (on less than US$1 a day) in 2000. But despite its internationally criticized role in the Congo war, the government of Paul Kagame is widely seen as committed to poverty reduction, and as embodying principles of good governance from the top down (for example, all ministers are required to declare their annual income). Ruxin believes that good governance will be an important factor in the long-term success of the millennium villages. Those running the project have deliberately avoided what they see as the worst African regimes.
But they say that even in corruption-prone nations, such as Ethiopia and Kenya, the research villages so far remain free of corruption. Sachs points out that if you focus on supplying commodities, such as seeds, fertilizer and nets for protection against malaria, “there's very little money that changes hands”. That said, Sachs is less worried than many about corruption; he knows people criticize this lack of concern, but doesn't care. “Corruption is way down the list of practical issues,” he argues; Africa's miserable roads, poor soil and endemic disease burden are at the top. Indeed, the road from Kigali, Rwanda's capital, to Kagenge in the Nyamata district throws up choking red dust in the dry season and can be impassable in the rainy season. Although the land looks green from the air, the rains can be infrequent, and Ndahiro confirms that the soils are poor. Some 70% of the patients at the village's clinic have malaria, and the district has one of the country's highest levels of HIV, at about 13%. As well as suffering from Sachs's top three problems, Kagenge has its own particular sadnesses. The local mayor, Gaspard Musonera, lost three-quarters of his family in the 1994 genocide. “Nyamata district lost more than half of its population,” he says. “The implications and consequences of that you can imagine for yourself.”

Angelique Kanyange is one of a handful of doctors working in rural Rwanda. Credit: S. TOMLIN

Kagenge itself is a community created since the genocide. Half the households live in settlement housing — or umudugudu — built by the government for survivors and returnees. Ndahiro is himself a returnee, living in Nyamata near the church where 10,000 people were murdered in 1994 — the blood stains on the walls and altar cloth remain as a memorial. He recalls how lifeless the town was when he arrived in 1997. People were bitter, he says; some didn't want to continue living.
Grand plan

Musonera sees the Millennium Villages project as a sign of hope for the most vulnerable people in his district, and a big test for poverty-reduction measures. “If it can be done here, it means it can be done elsewhere,” he says — and that indeed is the point. The project is not just about breaking the cycle of poverty in 12 villages, but about learning how to do it in 1,200 or 12,000. Sachs's plan is to show that with a five-year investment of about US$550 per person — $50 a year from the project, $30 from the government, $20 from other donors and $10 from the villagers — an integrated package of low-cost interventions can produce long-term financial sustainability in a way that not only can be repeated but can also be scaled up. The project plans to grow to 78 villages this year by creating clusters around the 12 original research villages. The expansion is being funded by the US Millennium Promise charity, which has so far raised $100 million to support Sachs's vision. Not everyone is convinced that the Millennium Villages project will succeed. Ecologist Ian Scoones at the Institute of Development Studies at the University of Sussex in Brighton, UK, is a member of the Future Agricultures Consortium, which was put together by the UK Department for International Development to focus on African agriculture and development. Scoones points to the Integrated Rural Development and 'villagization' schemes that tried to boost African agriculture in the 1970s and 1980s. “They created little islands of success, but when donors pulled the plug they all collapsed.” Scoones says he is very pleased that the millennium villages are putting African agriculture back on the map, but he is afraid of old mistakes being repeated, and worried about things moving too quickly. “India launched its green revolution in the mid-1960s on the back of decades of solid investment and research,” he points out.
“It didn't happen overnight.” For his part, Sachs sees patience, like a well-developed sensitivity to the issue of corruption, as an overrated virtue. He has no worries about moving too fast. “It happens to be an emergency,” he says. And he has no illusions about the projects working as examples simply by word of mouth. “This is not viral. You can't do it without resources,” Sachs notes — as ambitions grow, so must spending. The biggest risk, he says, is for official donors to sit on their hands. The goal is not to do without large transfers of money to Africa; it is to work out how to make those transfers more effective. After all, the individual interventions used in the Millennium Villages project are tried and tested methods, even if they haven't been applied all together in one location before. Asked about the target of reaching the millennium goals in five years, Celina Schocken, an international-affairs fellow at the US Council on Foreign Relations, says: “I absolutely believe they will succeed. I don't see how they can't.” But she's less convinced about how scale-up will be achieved. “What good is an island of prosperity anyway?” she asks. Scoones agrees that the big question is: “How, without that external support, do you replicate?”

Donald Ndahiro, an agronomist working for the Millennium Villages project, has helped to transform conditions in Kagenge in just five months. Credit: D. NDAHIRO

That is the question Sachs, Ruxin and their colleagues are trying to answer. By documenting all the inputs and outputs for each research village they hope to tease out the synergies between overlapping interventions. Measuring 35 indicators for the 8 goals across several hundred households in 12 villages is time-consuming and costly, but it is necessary to show not just that the investments work, but also how they work, and how they can work better.
Only then can they be scaled up to the truly monumental level envisioned by Sachs, who wants to see development aid change the course of history. “I think the biggest challenge is the defeatist attitude of the official donor community,” says Sachs. Such rhetoric reinforces the suspicion that Sachs is unwilling to learn from lessons of the past. “People in the development community see some benefit in the publicity Jeff Sachs gets,” says Schocken, who used to work with Ruxin in Rwanda, “but they've seen these ideas before.”

Skill shortage

In Kagenge, the villagers assembled for the water-tank certificate ceremony are briefly reminded of the international debate over their future. “This is an important day for the project,” Ruxin tells them during a short address. “You are now the teachers for us and for the world.” Some of the farmers I met in the fields yesterday have donned suits and ties for the occasion. Each villager who received training from visiting Kenyan water specialists receives a signed certificate — the expectation is that they will take the skills they have learned and pass them on to others. After many more speeches by village leaders, the villagers distribute soft drinks and, for those who can stomach it, fermented sorghum, the local brew. In the weeks before the corn is harvested, the contrast between Kagenge and the surrounding area is already striking. An emergency feeding centre supported by the UN Children's Fund (UNICEF) and the World Food Programme was set up in Kagenge in early March, in response to reports of serious malnutrition following last year's drought. Four hundred people from the wider local population are still receiving weekly rations, but not those of Kagenge. The bean crop and corn picked straight from the fields before the harvest mean that they have enough to eat. They also have a functioning health centre, which serves Kagenge and four neighbouring communities of similar size.
The centre now has its own doctor, Angelique Kanyange, known to everyone as Dr Angelique, and its nursing staff has doubled in number. Dr Angelique is zealously improving the nurses' cleaning procedures with demonstrations of the use of a broom and disinfectant. Today, the clinic is seeing more than 35 patients a day, as well as some 50 mothers bringing children for immunization. One of the new patients is Musabyimana, an 8-year-old boy who is blind and in pain because of severe cataracts. His mother noticed his poor sight when he was three months old, but this is his first visit to the clinic. Dr Angelique is not sure what caused the cataracts, but there is hope, she says, because Musabyimana seems to be able to detect some light and colour. She will refer him to a specialist for treatment. For now, the project will fund it; his mother, a widow, could never afford it. Rural parts of Rwanda have community medical insurance schemes, but only 12% of families in Kagenge have cover. The goal of the Millennium Villages project is to get 100% coverage, with the hope that as the clinic becomes more useful to patients, more will join the scheme.

From village to province

It is here, though, that scaling up looks harder than it does in agriculture. Buying more fertilizer is easier than making more doctors. “Angeliques are hard to come by,” admits Ruxin. Indeed, Dr Angelique is the first government-appointed doctor in a rural health centre, demonstrating the government's commitment to the Millennium Project but also, perhaps, the project's weakness. “The president of Rwanda says 'I want this village to work', so they are going to get the best,” says Schocken. There are currently about 200 doctors in the country. The medical schools may be able to produce 60 or 80 a year, but the country has a long way to go to reach the World Health Organization's minimum recommended level of one doctor per 5,000 people. Sachs claims recruitment problems can be overcome with decent salaries.
Although the project is mostly about spending money on physical resources, he is in favour of top-up payments for doctors. But even with targeted salary increases, a country such as Rwanda suffers skill shortages in every sector. Kagenge currently has a team of ten dedicated people who work long hours to motivate the villagers and document their progress. Detailed accounts were not made available to Nature, but in its first year the Kagenge project will spend as much on personnel as on materials. The budget for the cluster villages being set up in addition to the original 12 is smaller, and they will have far fewer support staff. “Research on top is extremely expensive,” notes Ruxin, explaining that in future, and in villages that aren't research-focused, costs should be much lower. But Sachs's claim that “the science behind this is broadly transferable without needing large teams” has yet to be put to a hard test.

The budgets matter to Theoneste Mutsindashyaka, a former mayor of Kigali and the governor of Rwanda's eastern province, which includes Nyamata and covers a quarter of Rwanda's population. He is a great fan of the Kagenge project, in part because it fits so well with the government's Vision 2020. He wants the figures so that he can roll out projects informed by the experience more widely. “The documentation is very important to me because I have to negotiate with partners,” he says. He is impatient to get moving on the next stage of the project: “We want a millennium province, not just a millennium village,” he says. Within the next year, Mutsindashyaka wants to set up a Kagenge-like village in each of his province's seven districts. “We are going to move village to sector, sector to district, but you have to have money,” he says.
And he is certain he can sell the idea to his friends all over the world, from Quincy Jones to Donald Kaberuka, the president of the African Development Bank. And although scaling up to 3,600 villages is daunting, the governor says he only needs the numbers from Kagenge to get started: “I am total 100% confident that the project will succeed.” From Sachs to the president to the governor to the mayor, the ambitions for transforming the country are vast. But in Kagenge, despite the good rains, the villagers themselves remain wary. They are not as confident as the project leaders that they will achieve rapid progress. Anxiety about what to do with the harvest surplus is high. Celestin Ndahayo and other farmers worry about whether they can really afford both to sell corn and store enough for food security; they are not sure they believe Ndahiro's forecasts for the yields of their smallholdings. And what if the rains don't come next year? In his experience, says one umudugudu farmer, when a project is here, the rains come. Back in 2001, an organization helped them to plant cassava and sweet potato and the rains came. But when it left, the rains stopped. So as long as the Millennium Villages project is here, he believes, it will rain again. He doesn't believe, yet, that his village can learn to flourish in the project's absence.
# Irrigation Engineering Questions and Answers – Regulation Modules – Metering Flumes Types

This set of Irrigation Engineering Multiple Choice Questions & Answers (MCQs) focuses on “Regulation Modules – Metering Flumes Types”.

1. For what purpose is the meter used?
a) Measuring the amount of Silt entering the Canal
b) Measuring the Velocity of the Flow
c) Measuring the Hydraulic Jump
d) Measuring Discharge

Explanation: The meter is a structure constructed in a canal for measuring the discharge of the canal accurately. It is an artificially flumed structure.

2. What is the slope of a throat?
a) From 0.5:1 to 1:1
b) From 1:1 to 3:1
c) From 1:1 to 2:1
d) From 1:3 to 1:2

Explanation: Using masonry walls, the normal section of the channel is narrowed, with a slope of 1:1 to 2:1, to a rectangular section known as the throat.

3. After the throat the channel is diverged to avoid the loss of head in the flume.
a) True
b) False

Explanation: From the throat the channel is diverged so as to regain its normal section with the help of masonry wings with a slope of 2:1 to 10:1. Because of this gradual convergence and divergence, the loss of head in the flume is small.

4. Metering flumes work on the principle of the venturi meter.
a) True
b) False

Explanation: Since the throat converges the flume, the velocity of the flow increases. After the throat the channel is diverged for expansion, so the velocity returns to normal. From this increase and decrease of the velocity we can measure the discharge of the flow. This is quite the procedure on which the venturi meter works. Hence, metering flumes work on the principle of the venturi meter.

5. How many types of metering flumes are used?
a) 3
b) 2
c) 4
d) 5

Explanation: The two types of metering flumes used, both of which work on the principle of the venturi meter, are the non-modular venturi flume and the standing wave flume.

6. Which type of flume does the diagram depict?
a) Modular Venturi Flume
b) Free Flow Venturi Flume
c) Venturi Flume
d) Standing Wave Flume

Explanation: The diagram represents a gradually narrowing channel leading to the throat and a gradually expanding channel leading away. Moreover, stilling wells are provided at the entrance and at the throat for measuring the head.

7. What is the formula for discharge when a venturi meter is used?
a) $$Q = C_d (a_2/\sqrt{a_1^2 - a_2^2})\sqrt{2h}$$
b) $$Q = C_d (a_1 a_2/\sqrt{a_1^2 - a_2^2})\sqrt{2gh}$$
c) $$Q = (a_1 a_2/\sqrt{a_1^2 - a_2^2})\sqrt{2g}$$
d) $$Q = C_d (a_1/\sqrt{a_1^2 - a_2^2})\sqrt{2gh}$$

Explanation: When the difference in level between the two stilling wells is h, the discharge is given by the formula $$Q = C_d(a_1 a_2 / \sqrt{a_1^2 - a_2^2})\sqrt{2gh}$$ where C_d = 0.95 to 1, a_1 = area at the entrance, and a_2 = area at the throat.

8. What structure does the diagram represent?
a) Standing Wave Flume
b) Venturi Flume
c) Non-modular Venturi Flume
d) Drowned Venturi Flume

Explanation: The standing wave, or hydraulic jump, forms on the downstream glacis in the diverging channel. Therefore the diagram represents a standing wave flume.

9. What is the main drawback of a standing wave flume?
a) Loss of Hydraulic Jump
b) Loss of Velocity of Flow
c) Loss of Head
d) Loss of Discharge

Explanation: The main drawback of the standing wave flume is that it has a greater tendency for loss of head (H_L); if this head loss is not available, it acts as a venturi meter. For this reason canal fall sites are used as standing wave flumes.

10. What is the formula for discharge when a standing wave flume is used?
a) Q = 1.7 × B × H^(3/2)
b) Q = 1.7 × C_d × B × H^(3/2)
c) Q = 1.7 × C_d × B × H
d) Q = 1.7 × C_d × H^(3/2)

Explanation: When the width of the throat (B) is known, we can calculate the discharge in the standing wave flume using the formula Q = 1.7 × C_d × B × H^(3/2), where C_d = 0.95 to 1.

Sanfoundry Global Education & Learning Series – Irrigation Engineering.
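The two discharge formulas above are easy to evaluate numerically. The sketch below uses illustrative values (not taken from the text) in SI units; the areas, head difference, and throat width are assumptions for the sake of the example:

```python
import math

def venturi_discharge(a1, a2, h, cd=0.97, g=9.81):
    """Q = Cd * a1*a2 / sqrt(a1^2 - a2^2) * sqrt(2*g*h), from question 7.
    a1: entrance area (m^2), a2: throat area (m^2), h: stilling-well head
    difference (m). Requires a1 > a2."""
    return cd * a1 * a2 / math.sqrt(a1**2 - a2**2) * math.sqrt(2 * g * h)

def standing_wave_discharge(b, head, cd=0.97):
    """Q = 1.7 * Cd * B * H^(3/2), from question 10.
    b: throat width (m), head: H (m)."""
    return 1.7 * cd * b * head ** 1.5

# Hypothetical example: 2 m^2 entrance, 1 m^2 throat, 0.5 m head difference.
print(venturi_discharge(2.0, 1.0, 0.5))
# Hypothetical example: 3 m throat width, 1 m head.
print(standing_wave_discharge(3.0, 1.0))
```

Note that both formulas take C_d in the 0.95 to 1 range stated in the explanations; 0.97 is used here purely as a midpoint.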
To practice all areas of Irrigation Engineering, here is the complete set of 1000+ Multiple Choice Questions and Answers.
# Need help understanding $\frac {dx}{\cos^2(\frac{x}{2})} = 2\,d(\operatorname{tg}(\frac{x}{2}))$

I have found this statement somewhere; however, I don't really understand it. Could someone explain where the $2$ in front of $\operatorname{tg}(x/2)$ comes from? $$\frac {dx}{\cos^2(\frac{x}{2})} = 2\,d(\operatorname{tg}(\frac{x}{2}))$$

-

$\displaystyle{\sec^2 x = \frac{1}{\cos^2 x}}$ –  Kirthi Raman Mar 15 '12 at 2:49

• $d(\tan x)=\sec^2 x \,\mathrm{d}x$
• $\operatorname{tg}(x)$ is an archaic name for $\tan x$

$d(\tan \dfrac x 2)=\dfrac 1 2\sec^2\dfrac x 2 \,\mathrm{d}x \implies \dfrac{\mathrm{d}x}{\cos^2 \dfrac{x}{2}}=2\,\mathrm{d}(\tan \dfrac{x}{2})$

$\mathrm{tg}$ is also the notation used in Russia and the Balkan countries, if memory serves. –  J. M. Aug 1 '12 at 8:16
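The identity can also be spot-checked numerically: approximate $\frac{d}{dx}\tan(x/2)$ with a centred finite difference and compare $2$ times that against $1/\cos^2(x/2)$. The test point $x = 0.7$ below is arbitrary:

```python
import math

def d_tan_half(x, h=1e-6):
    # Centred finite-difference approximation of d/dx [tan(x/2)].
    return (math.tan((x + h) / 2) - math.tan((x - h) / 2)) / (2 * h)

x = 0.7
lhs = 1 / math.cos(x / 2) ** 2   # coefficient of dx on the left-hand side
rhs = 2 * d_tan_half(x)          # coefficient of dx on the right-hand side
print(abs(lhs - rhs))            # very small: the identity holds
```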
how do u simplify radicals? 5x√(75x^3)

2. $5x\sqrt{75x^3}$ Does it look like this? If so, look for squares in the radical: $75x^3 = (3)(5)(5)(x)(x)(x)$, so $25x^2\sqrt{3x}$

3. yes & thx

4. Originally Posted by blame_canada100
how do u simplify radicals? 5x√(75x^3)

$=5x\sqrt{75x^3}$
$=5x\sqrt{5\cdot 5\cdot 3\cdot x^2\cdot x}$
$=5x\cdot 5x\sqrt{3x}$
$=25x^2\sqrt{3x}$
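The simplification can be spot-checked numerically by plugging in a sample value of $x$ (taking $x > 0$ so both radicals are defined); $x = 2$ below is arbitrary:

```python
import math

x = 2.0
original = 5 * x * math.sqrt(75 * x ** 3)      # 5x * sqrt(75 x^3)
simplified = 25 * x ** 2 * math.sqrt(3 * x)    # 25 x^2 * sqrt(3x)
print(original, simplified)  # the two values agree
```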
# Calc6.15

Discuss the convergence of the series $\sum_{n=1}^{\infty}\frac{(-1)^{n+1}n^2}{e^n}\,$.

The absolute value of the terms of this series is decreasing and goes to 0 as $n\rightarrow \infty$. If you cannot see this, use L'Hôpital's Rule. First, in taking the limit as $n\rightarrow \infty$, we get $\frac{\infty}{\infty}$. Since this is an indeterminate form, L'Hôpital's Rule is likely the way to find the limit of the terms. $\lim_{n \to \infty}\frac{n^2}{e^n}=\lim_{n \to \infty}\frac{2n}{e^n}=\lim_{n \to \infty}\frac{2}{e^n}=0\,$ Thus, since this limit exists, L'Hôpital's Rule gives us the correct limit. The terms of this series go to 0 because the exponential function increases much more quickly than the power function. This also indicates that the absolute value of the terms is decreasing. Therefore, by the alternating series test, we know that this series converges.
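The conclusion can be checked numerically. With $x = -1/e$, the series is $-\sum_{n\ge 1} n^2 x^n$, and the standard power-series identity $\sum_{n\ge 1} n^2 x^n = \frac{x(1+x)}{(1-x)^3}$ (valid for $|x|<1$; this identity is not from the text) gives a closed form to compare partial sums against:

```python
import math

x = -1 / math.e  # |x| < 1, so the closed form applies
# Partial sum of (-1)^(n+1) * n^2 / e^n; terms decay fast, 150 is plenty.
partial = sum((-1) ** (n + 1) * n ** 2 / math.e ** n for n in range(1, 150))
# Series value from sum n^2 x^n = x(1+x)/(1-x)^3, negated to match signs.
closed = -(x * (1 + x) / (1 - x) ** 3)
print(partial, closed)  # the two values agree
```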
anonymous 5 years ago

Write an expression equal to $1/\tan^2 x$.

$\frac{1}{\tan^2x} = \cot^2x = \frac{\cos^2x}{\sin^2x}$ ^_^
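The identity is easy to spot-check numerically at a sample point (chosen away from multiples of $\pi/2$, where $\tan x$ is zero or undefined):

```python
import math

x = 1.234  # arbitrary test point
lhs = 1 / math.tan(x) ** 2                    # 1 / tan^2 x
cot_sq = (math.cos(x) / math.sin(x)) ** 2     # cot^2 x
rhs = math.cos(x) ** 2 / math.sin(x) ** 2     # cos^2 x / sin^2 x
print(lhs, cot_sq, rhs)  # all three values agree
```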
## Getting Started with Python PyAutoGUI

### Introduction

In this tutorial, we're going to learn how to use the pyautogui library in Python 3. The PyAutoGUI library provides cross-platform support for managing mouse and keyboard operations through code to enable automation of tasks. The pyautogui library is also available for Python 2; however, we will be using Python 3 throughout the course of this tutorial. A tool like this has many applications, a few of which include taking screenshots, automating GUI testing (like Selenium), automating tasks that can only be done with a GUI, etc. Before you go ahead with this tutorial, please note that there are a few prerequisites. You should have a basic understanding of Python's syntax, and/or have done at least beginner-level programming in some other language. Other than that, the tutorial is quite simple and easy to follow for beginners.

### Installation

The installation process for PyAutoGUI is fairly simple for all operating systems. However, there are a few dependencies for Mac and Linux that need to be installed before the PyAutoGUI library can be installed and used in programs.

#### Windows

For Windows, PyAutoGUI has no dependencies. Simply run the following command in your command prompt and the installation will be done:

$ pip install PyAutoGUI

#### Mac

For Mac, the pyobjc-core and pyobjc modules need to be installed first, in that order. Below are the commands that you need to run in sequence in your terminal for a successful installation:

$ pip3 install pyobjc-core
$ pip3 install pyobjc
$ pip3 install pyautogui

#### Linux

For Linux, the only dependency is python3-xlib (for Python 3). To install that, followed by pyautogui, run the two commands mentioned below in your terminal:

$ pip3 install python3-xlib
$ pip3 install pyautogui

### Basic Code Examples

In this section, we are going to cover some of the most commonly used functions from the PyAutoGUI library.
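Most of the functions that follow work in screen coordinates: x grows from left to right, y from top to bottom, and (0, 0) is the top-left corner. The bounds check that PyAutoGUI's onScreen() performs can be sketched in pure Python, with no pyautogui import needed; the 1440x900 resolution here is just an assumed example:

```python
def on_screen(x, y, width=1440, height=900):
    """Return True when point (x, y) lies inside a width-by-height screen
    whose origin (0, 0) is the top-left corner."""
    return 0 <= x < width and 0 <= y < height

print(on_screen(500, 600))   # True  (inside a 1440x900 screen)
print(on_screen(0, 10000))   # False (y is past the bottom edge)
```

The two sample points are the same ones used with pag.onScreen() below, so the outputs match.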
#### Generic Functions

##### The position() Function

Before we can use PyAutoGUI functions, we need to import it into our program:

import pyautogui as pag

The position() function tells us the current position of the mouse on our screen:

pag.position()

Output:

Point (x = 643, y = 329)

##### The onScreen() Function

The onScreen() function tells us whether the point with coordinates x and y exists on the screen:

print(pag.onScreen(500, 600))
print(pag.onScreen(0, 10000))

Output:

True
False

Here we can see that the first point exists on the screen, but the second point falls beyond the screen's dimensions.

##### The size() Function

The size() function finds the height and width (resolution) of a screen:

pag.size()

Output:

Size (width = 1440, height = 900)

Your output may be different and will depend on your screen's size.

#### Common Mouse Operations

In this section, we are going to cover PyAutoGUI functions for mouse manipulation, which includes both moving the position of the cursor as well as clicking buttons automatically through code.

##### The moveTo() Function

The syntax of the moveTo() function is as follows:

pag.moveTo(x_coordinate, y_coordinate)

The value of x_coordinate increases from left to right on the screen, and the value of y_coordinate increases from top to bottom. The value of both x_coordinate and y_coordinate at the top left corner of the screen is 0. Look at the following script:

pag.moveTo(0, 0)
pag.PAUSE = 2
pag.moveTo(100, 500)
pag.PAUSE = 2
pag.moveTo(500, 500)

In the code above, the main focus is the moveTo() function that moves the mouse cursor on the screen based on the coordinates we provide as parameters. The first parameter is the x-coordinate and the second parameter is the y-coordinate. It is important to note that these coordinates represent the absolute position of the cursor.
One more thing that has been introduced in the code above is the PAUSE property; it basically pauses the execution of the script for the given amount of time. The PAUSE property has been added in the above code so that you can see the function execution; otherwise, the functions would execute in a split second and you won't be able to actually see the cursor moving from one location to the other on the screen. Another workaround for this would be to indicate the time for each moveTo() operation as the third parameter in the function, e.g. moveTo(x, y, time_in_seconds). Executing the above script may result in the following error:

Note: Possible Error

Traceback (most recent call last):
  File "a.py", line 5, in <module>
    pag.moveTo (100, 500)
  File "/anaconda3/lib/python3.6/site-packages/pyautogui/__init__.py", line 811, in moveTo
    _failSafeCheck()
  File "/anaconda3/lib/python3.6/site-packages/pyautogui/__init__.py", line 1241, in _failSafeCheck
    raise FailSafeException ('PyAutoGUI fail-safe triggered from mouse moving to a corner of the screen. To disable this fail-safe, set pyautogui.FAILSAFE to False. DISABLING FAIL-SAFE IS NOT RECOMMENDED.')
pyautogui.FailSafeException: PyAutoGUI fail-safe triggered from mouse moving to a corner of the screen. To disable this fail-safe, set pyautogui.FAILSAFE to False. DISABLING FAIL-SAFE IS NOT RECOMMENDED.

If the execution of the moveTo() function generates an error similar to the one shown above, it means that your computer's fail-safe is enabled. To disable the fail-safe, add the following line at the start of your code:

pag.FAILSAFE = False

This feature is enabled by default so that you can easily stop execution of your pyautogui program by manually moving the mouse to the upper left corner of the screen. Once the mouse is in this location, pyautogui will throw an exception and exit.

##### The moveRel() Function

The coordinates of the moveTo() function are absolute.
However, if you want to move the mouse position relative to the current mouse position, you can use the moveRel() function. What this means is that the reference point for this function, when moving the cursor, would not be the top left point on the screen (0, 0), but the current position of the mouse cursor. So, if your mouse cursor is currently at point (100, 100) on the screen and you call the moveRel() function with the parameters (100, 100, 2), the new position of your mouse cursor would be (200, 200). You can use the moveRel() function as shown below:

pag.moveRel(100, 100, 2)

The above script will move the cursor 100 points to the right and 100 points down in 2 seconds, with respect to the current cursor position.

##### The click() Function

The click() function is used to imitate mouse click operations. The syntax for the click() function is as follows:

pag.click(x, y, clicks, interval, button)

The parameters are explained as follows:

• x: the x-coordinate of the point to reach
• y: the y-coordinate of the point to reach
• clicks: the number of clicks that you would like to do when the cursor gets to that point on the screen
• interval: the amount of time in seconds between each mouse click, i.e. if you are doing multiple mouse clicks
• button: the button on the mouse you would like to press when the cursor gets to that point on the screen. The possible values are right, left, and middle.

Here is an example:

pag.click(100, 100, 5, 2, 'right')

You can also execute specific click functions as follows:

pag.rightClick(x, y)
pag.doubleClick(x, y)
pag.tripleClick(x, y)
pag.middleClick(x, y)

Here the x and y represent the x and y coordinates, just like in the previous functions. You can also have more fine-grained control over mouse clicks by specifying when to press the mouse down, and when to release it up. This is done using the mouseDown and mouseUp functions, respectively.
Here is a short example:

```python
pag.mouseDown(x=x, y=y, button='left')
pag.mouseUp(x=x, y=y, button='left')
```

The above code is equivalent to a single pag.click(x, y) call.

##### The scroll() Function

The last mouse function we are going to cover is scroll(). As expected, it works in two directions: up and down. The syntax for the scroll() function is as follows:

```python
pag.scroll(amount_to_scroll, x=x_pos, y=y_pos)
```

Here x and y specify the position on the screen where the scrolling takes place. To scroll up, pass a positive value for the amount_to_scroll parameter, and to scroll down, pass a negative value. Here is an example:

```python
pag.scroll(100, 120, 120)
```

Alright, this was it for the mouse functions. By now, you should be able to control your mouse's buttons as well as its movements through code. Let's now move on to keyboard functions. There are plenty, but we will cover only those that are used most frequently.

#### Common Keyboard Operations

Before we move on to the functions, it is important to know which keys can be pressed through code in pyautogui, as well as their exact naming convention.
To do so, run the following script:

```python
print(pag.KEYBOARD_KEYS)
```

Output:

```
['\t', '\n', '\r', ' ', '!', '"', '#', '$', '%', '&', "'", '(', ')', '*', '+', ',', '-', '.', '/', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', ':', ';', '<', '=', '>', '?', '@', '[', '\\', ']', '^', '_', '`', 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', '{', '|', '}', '~', 'accept', 'add', 'alt', 'altleft', 'altright', 'apps', 'backspace', 'browserback', 'browserfavorites', 'browserforward', 'browserhome', 'browserrefresh', 'browsersearch', 'browserstop', 'capslock', 'clear', 'convert', 'ctrl', 'ctrlleft', 'ctrlright', 'decimal', 'del', 'delete', 'divide', 'down', 'end', 'enter', 'esc', 'escape', 'execute', 'f1', 'f10', 'f11', 'f12', 'f13', 'f14', 'f15', 'f16', 'f17', 'f18', 'f19', 'f2', 'f20', 'f21', 'f22', 'f23', 'f24', 'f3', 'f4', 'f5', 'f6', 'f7', 'f8', 'f9', 'final', 'fn', 'hanguel', 'hangul', 'hanja', 'help', 'home', 'insert', 'junja', 'kana', 'kanji', 'launchapp1', 'launchapp2', 'launchmail', 'launchmediaselect', 'left', 'modechange', 'multiply', 'nexttrack', 'nonconvert', 'num0', 'num1', 'num2', 'num3', 'num4', 'num5', 'num6', 'num7', 'num8', 'num9', 'numlock', 'pagedown', 'pageup', 'pause', 'pgdn', 'pgup', 'playpause', 'prevtrack', 'print', 'printscreen', 'prntscrn', 'prtsc', 'prtscr', 'return', 'right', 'scrolllock', 'select', 'separator', 'shift', 'shiftleft', 'shiftright', 'sleep', 'space', 'stop', 'subtract', 'tab', 'up', 'volumedown', 'volumemute', 'volumeup', 'win', 'winleft', 'winright', 'yen', 'command', 'option', 'optionleft', 'optionright']
```

##### The typewrite() Function

The typewrite() function is used to type something into a text field. The syntax for the function is as follows:

```python
pag.typewrite(text, interval)
```

Here text is what you wish to type into the field and interval is the time in seconds between each keystroke.
Here is an example:

```python
pag.typewrite('Junaid Khalid', 1)
```

Executing the script above will enter the text "Junaid Khalid" into the currently selected field, with a pause of 1 second between each key press.

Another way this function can be used is by passing in a list of keys that you'd like to press in sequence. To do that through code, see the example below:

```python
pag.typewrite(['j', 'u', 'n', 'a', 'i', 'd', 'e', 'backspace', 'enter'])
```

In the above example, the text junaide would be entered, followed by the removal of the trailing e. The input in the text field would then be submitted by pressing the Enter key.

##### The hotkey() Function

If you haven't noticed so far, the keys we've shown above have no mention of combined operations like Control + C for the copy command. In case you're thinking you could do that by passing the list ['ctrl', 'c'] to the typewrite() function, you are wrong. The typewrite() function would press those buttons one after the other, not simultaneously. And as you probably already know, to execute the copy command, you need to press the C key while holding down the ctrl key.

To press two or more keys simultaneously, you can use the hotkey() function, as shown here:

```python
pag.hotkey('shift', 'enter')
pag.hotkey('shift', '2')   # For the @ symbol
pag.hotkey('ctrl', 'c')    # For the copy command
```

##### The screenshot() Function

If you would like to take a screenshot of the screen at any instant, the screenshot() function is the one you are looking for. Let's see how we can implement that using PyAutoGUI:

```python
screen_shot = pag.screenshot()
```

This will store a PIL object containing the image in a variable. If, however, you want to store the screenshot directly to your computer, you can call the screenshot function like this instead:

```python
pag.screenshot('ss.png')
```

This will save the screenshot, under the given filename, in a file on your computer.
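To make the difference between typewrite(['ctrl', 'c']) and hotkey('ctrl', 'c') concrete, the sketch below models the key-event order a hotkey produces: keys go down in the order given and come back up in reverse, so the modifier stays held while the final key is struck. This is a plain-Python illustration, not PyAutoGUI's implementation:

```python
# Event order for a hotkey: press keys in order, release in reverse order.

def hotkey_events(*keys):
    """Return the keyDown/keyUp event sequence for a key combination."""
    presses = [('keyDown', k) for k in keys]
    releases = [('keyUp', k) for k in reversed(keys)]
    return presses + releases
```

For ('ctrl', 'c') this yields ctrl down, c down, c up, ctrl up, whereas a typewrite()-style sequence would release ctrl before ever pressing c.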
##### The confirm(), alert(), and prompt() Functions

The last set of functions we are going to cover in this tutorial are the message box functions. Here is a list of the message box functions available in PyAutoGUI:

1. Confirmation box: displays information and gives you two options, i.e. OK and Cancel
2. Alert box: displays some information and asks you to acknowledge that you have read it; it shows a single OK button
3. Prompt box: requests some information from the user, which is submitted by clicking the OK button

Now that we have seen the types, let's see how we can display these boxes on the screen, in the same sequence as above:

```python
pag.confirm("Are you ready?")
pag.alert("The operation is complete!")
pag.prompt("Please enter your name:")
```

In the output, you will see the corresponding message boxes appear one after the other.

#### Conclusion

In this tutorial, we learned how to use the PyAutoGUI automation library in Python. We started off by talking about the prerequisites for this tutorial and its installation process for different operating systems, followed by learning about some of its general functions. After that we studied the functions specific to mouse movement, mouse control, and keyboard control.

After following this tutorial, you should be able to use PyAutoGUI to automate GUI operations for repetitive tasks in your own applications.
# Practical Introduction to Multiresolution Analysis

This example shows how to perform and interpret basic signal multiresolution analysis (MRA). The example uses both simulated and real data to answer questions such as: What does multiresolution analysis mean? What insights about my signal can I gain by performing a multiresolution analysis? What are some of the advantages and disadvantages of different MRA techniques? Many of the analyses presented here can be replicated using the Signal Multiresolution Analyzer app.

### What Is Multiresolution Analysis?

Signals often consist of multiple physically meaningful components. Quite often, you want to study one or more of these components in isolation on the same time scale as the original data. Multiresolution analysis refers to breaking up a signal into components that reproduce the original signal exactly when added back together. To be useful for data analysis, how the signal is decomposed is important. The components should ideally decompose the variability of the data into physically meaningful and interpretable parts. The term multiresolution analysis is often associated with wavelets or wavelet packets, but there are non-wavelet techniques that also produce useful MRAs.

As a motivating example of the insights you can gain from an MRA, consider the following synthetic signal.

```
Fs = 1e3;
t = 0:1/Fs:1-1/Fs;
comp1 = cos(2*pi*200*t).*(t>0.7);
comp2 = cos(2*pi*60*t).*(t>=0.1 & t<0.3);
trend = sin(2*pi*1/2*t);
rng default
wgnNoise = 0.4*randn(size(t));
x = comp1+comp2+trend+wgnNoise;
plot(t,x)
xlabel('Seconds')
ylabel('Amplitude')
```

The signal is explicitly composed of three main components: a time-localized oscillation with a frequency of 60 cycles/second, a time-localized oscillation with a frequency of 200 cycles/second, and a trend term. The trend term here is also sinusoidal but has a frequency of 1/2 cycle per second, so it completes only 1/2 cycle in the one-second interval.
The 60 cycles/second or 60 Hz oscillation occurs between 0.1 and 0.3 seconds, while the 200 Hz oscillation occurs between 0.7 and 1 second. Not all of this is immediately evident from the plot of the raw data because these components are mixed. Now, plot the signal from a frequency point of view.

```
xdft = fft(x);
N = numel(x);
xdft = xdft(1:numel(xdft)/2+1);
freq = 0:Fs/N:Fs/2;
plot(freq,20*log10(abs(xdft)))
xlabel('Cycles/second'); ylabel('dB')
grid on
```

From the frequency analysis, it is much easier for us to discern the frequencies of the oscillatory components, but we have lost their time-localized nature. It is also difficult to visualize the trend in this view. To gain some simultaneous time and frequency information, we can use a time-frequency analysis technique like the continuous wavelet transform.

```
cwt(x,Fs)
```

Now you see the time extents of the 60 Hz and 200 Hz components. However, we still do not have any useful visualization of the trend. The time-frequency view provides useful information, but in many situations you would like to separate out components of the signal in time and examine them individually. Ideally, you want this information to be available on the same time scale as the original data. Multiresolution analysis accomplishes this. In fact, a useful way to think about multiresolution analysis is that it provides a way of avoiding the need for time-frequency analysis while allowing you to work directly in the time domain.

### Separating Signal Components in Time

Real-world signals are a mixture of different components. Often you are only interested in a subset of these components. Multiresolution analysis allows you to narrow your analysis by separating the signal into components at different resolutions. Extracting signal components at different resolutions amounts to decomposing variations in the data on different time scales, or equivalently in different frequency bands (different rates of oscillation).
Accordingly, you can visualize signal variability at different scales, or frequency bands, simultaneously. Analyze and plot the synthetic signal using a wavelet MRA. The signal is analyzed at eight resolutions or levels.

```
mra = modwtmra(modwt(x,8));
helperMRAPlot(x,mra,t,'wavelet','Wavelet MRA',[2 3 4 9])
```

Without explaining what the notations on the plot mean, let us use our knowledge of the signal and try to understand what this wavelet MRA is showing us. If you start from the uppermost plot and proceed down until you reach the plot of the original data, you see that the components become progressively smoother. If you prefer to think about data in terms of frequency, the frequencies contained in the components are becoming lower. Recall that the original signal had three main components, a high frequency oscillation at 200 Hz, a lower frequency oscillation at 60 Hz, and a trend term, all corrupted by additive noise. If you look at the $\tilde{D}_2$ plot, you can see that the time-localized high frequency component is isolated there. You can see and investigate this important signal feature essentially in isolation. The next two plots contain the lower frequency oscillation. This is an important aspect of multiresolution analysis, namely that important signal components may not end up isolated in a single MRA component, but they are rarely spread over more than two. Finally, we see that the $S_8$ plot contains the trend term. For convenience, the color of the axes in those components has been changed to highlight them in the MRA. If you prefer to visualize this plot or subsequent ones without the highlighting, leave out the last numeric input to `helperMRAPlot`. The wavelet MRA uses fixed functions called wavelets to separate the signal components.
The kth wavelet MRA component, denoted by $\tilde{D}_k$ in the previous plot, can be regarded as a filtering of the signal into frequency bands of the form $\left[\frac{1}{2^{k+1}\Delta t},\frac{1}{2^{k}\Delta t}\right]$ where $\Delta t$ is the sampling period, or sampling interval. The final smooth component, denoted in the plot by $S_L$, where $L$ is the level of the MRA, captures the frequency band $\left[0,\frac{1}{2^{L+1}\Delta t}\right]$. The accuracy of this approximation depends on the wavelet used in the MRA. See [4] for detailed descriptions of wavelet and wavelet packet MRAs.

However, there are other MRA techniques to consider. The empirical mode decomposition (EMD) is a data-adaptive multiresolution technique. EMD recursively extracts different resolutions from the data without the use of fixed functions or filters. EMD regards a signal as consisting of a fast oscillation superimposed on a slower one. After the fast oscillation is extracted, the process treats the remaining slower component as the new signal and again regards it as a fast oscillation superimposed on a slower one. The process continues until some stopping criterion is reached. While EMD does not use fixed functions like wavelets to extract information, the EMD approach is conceptually very similar to the wavelet method of separating the signal into details and approximations and then separating the approximation again into details and an approximation. The MRA components in EMD are referred to as intrinsic mode functions (IMFs). See [3] for a detailed treatment of EMD. Plot the EMD analysis of the same signal.

```
[imf_emd,resid_emd] = emd(x);
helperMRAPlot(x,imf_emd,t,'emd','Empirical Mode Decomposition',[1 2 3 6])
```

While the number of MRA components is different, the EMD and wavelet MRAs produce a similar picture of the signal. This is not accidental. See [2] for a description of the similarities between the wavelet transform and EMD.
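As a quick numerical aside, the octave-band formula for the wavelet details can be evaluated directly. The short plain-Python sketch below (not part of the MATLAB example) computes the nominal band edges for the 1 kHz sampling rate used here:

```python
# Nominal band edges for the level-k wavelet detail: with sampling period
# dt = 1/fs, the band is [1/(2**(k+1)*dt), 1/(2**k*dt)] = [fs/2**(k+1), fs/2**k] Hz.

def detail_band(k, fs=1000.0):
    """Return (low, high) band edges in Hz for the level-k wavelet detail."""
    return (fs / 2 ** (k + 1), fs / 2 ** k)

for k in range(1, 9):
    low, high = detail_band(k)
    print(f"level {k}: {low:.3f} Hz to {high:.3f} Hz")
```

For example, detail_band(2) gives (125.0, 250.0) Hz, which is why the 200 Hz oscillation lands in the level-2 detail, and why a 150 Hz oscillation (used later in this example) falls into the same band.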
In the EMD decomposition, the high-frequency oscillation is localized to the first intrinsic mode function (IMF 1). The lower frequency oscillation is localized largely to IMF 2, but you can see some effect also in IMF 3. The trend component in IMF 6 is very similar to the trend component extracted by the wavelet technique.

Yet another technique for adaptive multiresolution analysis is variational mode decomposition (VMD). Like EMD, VMD attempts to extract intrinsic mode functions, or modes of oscillation, from the signal without using fixed functions for analysis. But EMD and VMD determine the modes in very different ways. EMD works recursively on the time domain signal to extract progressively lower frequency IMFs. VMD starts by identifying signal peaks in the frequency domain and extracts all modes concurrently. See [1] for a treatment of VMD.

```
[imf_vmd,resid_vmd] = vmd(x);
helperMRAPlot(x,imf_vmd,t,'vmd','Variational Mode Decomposition',[2 4 5])
```

The key thing to note is that, similar to the wavelet and EMD decompositions, VMD segregates the three components of interest into completely separate modes or into a small number of adjacent modes. All three techniques allow you to visualize signal components on the same time scale as the original signal.

For a real-world example of useful component separation, consider a seismograph (vertical acceleration, $\text{nm}/{\text{s}}^{2}$) of the Kobe earthquake, recorded at Tasmania University, Hobart, Australia on 16 January 1995, beginning at 20:56:51 (GMT) and continuing for 51 minutes at 1 second intervals.

```
load KobeTimeTable
T = KobeTimeTable.t;
kobe = KobeTimeTable.kobe;
figure
plot(T,kobe)
title('Kobe Earthquake Seismograph')
axis tight
grid on
```

Obtain and plot a wavelet MRA of the data.
```
mraKobe = modwtmra(modwt(kobe,8));
figure
helperMRAPlot(kobe,mraKobe,T,'Wavelet','Wavelet MRA Kobe Earthquake',[4 5])
```

The plot shows the separation of primary and delayed secondary wave components in MRA components $\tilde{D}_4$ and $\tilde{D}_5$. Components in a seismic wave travel at different velocities, with primary (compressional) waves traveling faster than secondary (shear) waves. MRA techniques can enable you to study these components in isolation on the original time scale.

### Reconstructing Signals from MRA

The point of separating signals into components is often to remove certain components or mitigate their effect on the signal. Crucial to MRA techniques is the ability to reconstruct the original signal. First, let us demonstrate that all these multiresolution techniques allow you to perfectly reconstruct the signal from the components.

```
sigrec_wavelet = sum(mra);
sigrec_emd = sum(imf_emd,2)+resid_emd;
sigrec_vmd = sum(imf_vmd,2)+resid_vmd;
figure
subplot(3,1,1)
plot(t,sigrec_wavelet);
title('Wavelet reconstruction');
set(gca,'XTickLabel',[]);
ylabel('Amplitude');
subplot(3,1,2);
plot(t,sigrec_emd);
title('EMD reconstruction');
set(gca,'XTickLabel',[]);
ylabel('Amplitude');
subplot(3,1,3)
plot(t,sigrec_vmd);
title('VMD reconstruction');
ylabel('Amplitude');
xlabel('Time');
```

The maximum reconstruction error on a sample-by-sample basis for each of the methods is on the order of $10^{-12}$ or smaller, indicating that they are perfect reconstruction methods. Because the sum of the MRA components reconstructs the original signal, it stands to reason that including or excluding a subset of components could produce a useful approximation. Return to our original wavelet MRA of the synthetic signal and suppose that you are not interested in the trend term. Because the trend term is localized in the last MRA component, just exclude that component from the reconstruction.
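The bookkeeping behind this exclusion is language-independent: with one component per row, summing all rows gives perfect reconstruction, and leaving rows out gives a partial reconstruction. Here is a toy plain-Python sketch (the three made-up components stand in for real MRA output):

```python
# Toy partial reconstruction: an MRA stores one component per row; summing
# all rows restores the signal exactly, and a boolean mask selects a subset.

components = [
    [1.0, -1.0, 2.0],   # a "detail" component
    [0.5, 0.5, 0.5],    # another component
    [0.1, 0.2, 0.3],    # a "trend" component (last row)
]

def reconstruct(comps, include=None):
    """Sum the selected components sample by sample."""
    if include is None:
        include = [True] * len(comps)
    out = [0.0] * len(comps[0])
    for comp, keep in zip(comps, include):
        if keep:
            out = [o + c for o, c in zip(out, comp)]
    return out

full = reconstruct(components)                           # perfect reconstruction
no_trend = reconstruct(components, [True, True, False])  # drop the last row
```

This mirrors the MATLAB idiom of summing `mra(1:end-1,:)` or indexing with a logical vector, as done next.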
```
sigWOtrend = sum(mra(1:end-1,:));
figure
plot(t,sigWOtrend)
title('Trend Term Removed')
```

To remove other components, you can create a logical vector with `false` values for components you do not wish to include. Here we remove the trend and the highest frequency component, along with the first MRA component (which looks to be largely noise). Plot the actual second signal component (60 Hz) along with the reconstruction for comparison.

```
include = true(size(mra,1),1);
include([1 2 9]) = false;
ts = sum(mra(include,:));
plot(t,comp2,'b')
hold on
plot(t,ts,'r')
title('Trend Term and Highest Frequency Component Removed')
xlabel('Seconds');
legend('Component 2','Partial Reconstruction')
xlim([0.0 0.4])
```

In the previous example, we treated the trend term as a nuisance component to be removed. There are a number of applications where the trend may be the primary component of interest. Let us visualize the trend terms extracted by our three example MRAs.

```
figure
plot(t,trend,'LineWidth',2)
hold on
plot(t,[mra(end,:)' imf_vmd(:,end) imf_emd(:,end)])
grid on
legend('Trend','Wavelet','VMD','EMD')
title('Trend in three MRAs')
```

Note that the trend is smoothest and most accurately captured by the wavelet technique. EMD finds a smooth trend term, but it is shifted with respect to the true trend amplitude, while the VMD technique seems inherently more biased toward finding oscillations than the wavelet and EMD techniques. The implications of this are further discussed in the MRA Techniques — Advantages and Disadvantages section.

### Detecting Transient Changes Using MRA

In the first examples, we emphasized the role of multiresolution analysis in detecting oscillatory components in the data and an overall trend. However, these are not the only signal features that can be analyzed using multiresolution analysis. MRA can also help localize and detect transient features in a signal, such as impulsive events or reductions and increases in the variability of certain components.
Changes in variability localized to certain scales or frequency bands often indicate significant changes in the process generating the data. These changes are frequently more easily visualized in the MRA components than in the raw data. To illustrate this, consider the quarterly chain-weighted U.S. real gross domestic product (GDP) data for 1947Q1 to 2011Q4. Quarterly samples correspond to a sampling frequency of 4 samples/year. The vertical black line marks the beginning of the "Great Moderation", signifying a period of decreased macroeconomic volatility in the U.S. beginning in the mid-1980s. Note that this is difficult to discern by looking at the raw data.

```
load GDPData
figure
plot(year,realgdp)
hold on
plot([year(146) year(146)],[-0.06 0.14],'k')
title('GDP Data');
xlabel('Year');
hold off
```

Obtain a wavelet MRA of the GDP data. Plot the finest-resolution MRA component with the period of the Great Moderation marked. Because the wavelet MRA is obtained using fixed filters, we can associate the finest-scale MRA component with frequencies of 1 cycle per year to 2 cycles per year. A component which oscillates with a period of two quarters has a frequency of 2 cycles per year. In this case, the finest-resolution MRA component captures changes in the GDP that occur between adjacent two-quarter intervals to changes that occur from quarter to quarter.

```
mra = modwtmra(modwt(realgdp,'db2'));
figure
plot(year,mra(1,:))
hold on
plot([year(146) year(146)],[-0.015 0.015],'k')
title('Wavelet MRA - Quarter-to-Quarter Changes');
hold off
```

The reduction in variability, or in economic terms volatility, is much more readily apparent in the finest-resolution MRA component than in the raw data. Techniques to detect changes in variance, like the MATLAB `findchangepts` function (Signal Processing Toolbox), often work better on MRA components than on the raw data.

### MRA Techniques — Advantages and Disadvantages

In this example, we discussed wavelet and data-adaptive techniques for multiresolution analysis.
What are some of the advantages and disadvantages of the various techniques? In other words, for what applications might you choose one over the other? Let us start with wavelets. The wavelet techniques in this example use fixed filters to obtain the MRA. This means that the wavelet MRA has a well-defined mathematical explanation and we can predict the behavior of the MRA. We are also able to tie events in the MRA to specific time scales in the data, as was done in the GDP example. The disadvantage is that the wavelet transform divides the signal into octave bands (a reduction in center frequency by 1/2 in each component), so that at high center frequencies the bandwidths are much larger than those at lower frequencies. This means two closely spaced high frequency oscillations can easily end up in the same MRA component with a wavelet technique. Repeat the first synthetic example, but move the two oscillatory components within one octave of each other.

```
Fs = 1e3;
t = 0:1/Fs:1-1/Fs;
comp1 = cos(2*pi*150*t).*(t>=0.1 & t<0.3);
comp2 = cos(2*pi*200*t).*(t>0.7);
trend = sin(2*pi*1/2*t);
rng default;
wgnNoise = 0.4*randn(size(t));
x = comp1+comp2+trend+wgnNoise;
plot(t,x)
xlabel('Seconds')
ylabel('Amplitude')
```

Repeat and plot the wavelet MRA.

```
mra = modwtmra(modwt(x,8));
helperMRAPlot(x,mra,t,'wavelet','Wavelet MRA',[2 9])
```

Now we see that $\tilde{D}_2$ contains both the 150 Hz and 200 Hz components. If you repeat this analysis using EMD, you see the same result. Now let us use VMD.

```
[imf_vmd,~,info_vmd] = vmd(x);
helperMRAPlot(x,imf_vmd,t,'vmd','VMD',[1 2 3 5]);
```

VMD is able to separate the two components. The high frequency oscillation is localized to IMF 1, while the second component is spread over two adjacent IMFs. If you look at the estimated central frequencies of the VMD modes, the technique has localized the first two components around 200 and 150 Hz.
The third IMF has a center frequency close to 150 Hz, which is why we see the second component in two MRA components.

```
info_vmd.CentralFrequencies*Fs
```

```
ans = 5×1

  202.7204
  153.3278
  148.8022
   84.2802
    0.2667
```

VMD is able to do this because it starts by identifying candidate center frequencies for the IMFs by looking at a frequency-domain analysis of the data. While the wavelet MRA is not able to separate the two high-frequency components, there is an additional wavelet packet MRA which can.

```
wpt = modwptdetails(x,3);
helperMRAPlot(x,flip(wpt),t,'wavelet','Wavelet Packet MRA',[5 6 8]);
```

Now you see the two oscillations separated in $\tilde{D}_5$ and $\tilde{D}_6$. From this example we can extract a general rule: if an initial wavelet or EMD decomposition shows components with apparently different rates of oscillation in the same component, consider VMD or a wavelet packet MRA. If you suspect that your data has high frequency components close together in frequency, VMD or a wavelet packet approach will generally work better than a wavelet or EMD approach.

Recall the problem of extracting a smooth trend. Repeat both the wavelet MRA and the EMD.

```
mra = modwtmra(modwt(x,8));
helperMRAPlot(x,mra,t,'wavelet','Wavelet MRA',[2 9])
```

```
imf_emd = emd(x);
helperMRAPlot(x,imf_emd,t,'EMD','Empirical Mode Decomposition',[1 2 6])
```

The trends extracted by the wavelet and EMD techniques are closer to the true trend than those extracted by VMD and the wavelet packet technique. VMD is inherently biased toward finding narrowband oscillatory components. This is a strength of VMD in detecting closely spaced oscillations, but a disadvantage when extracting smooth trends in the data. The same is true of wavelet packet MRAs when the decomposition goes beyond a few levels. This leads to a second general recommendation.
If you are interested in characterizing a smooth trend in your data for identification or removal, try a wavelet or EMD technique. What about detecting transient changes, as we saw in the GDP data? Let us repeat the GDP analysis using VMD.

```
[imf_vmd,~,vmd_info] = vmd(realgdp);
figure
subplot(2,1,1)
plot(year,realgdp)
title('Real GDP');
hold on
plot([year(146) year(146)],[-0.1 0.15],'k')
hold off
subplot(2,1,2)
plot(year,imf_vmd(:,1));
title('First VMD IMF');
hold on
plot([year(146) year(146)],[-0.02 0.02],'k')
hold off
```

While the highest-frequency VMD component also appears to show some reduction in variability beginning in the mid-1980s, it is not as readily apparent as in the wavelet MRA. Because the VMD technique is biased toward finding oscillations, there is substantial ringing in the first VMD IMF which obscures the volatility changes. Repeat this analysis using EMD.

```
imf_emd = emd(realgdp);
figure
subplot(2,1,1)
plot(year,realgdp)
title('Real GDP');
hold on
plot([year(146) year(146)],[-0.1 0.15],'k')
hold off
subplot(2,1,2)
plot(year,imf_emd(:,1));
title('First EMD IMF');
hold on
plot([year(146) year(146)],[-0.06 0.05],'k')
hold off
```

The EMD technique is also less useful in finding the change in volatility (variance). In this case, the fixed functions used in the wavelet MRA were more advantageous than the data-adaptive techniques. This leads to our final general rule: if you are interested in detecting transient changes in a signal, such as impulsive events or reductions and increases in variability, try a wavelet technique.

### Conclusions

This example showed how multiresolution decomposition techniques such as wavelet, wavelet packet, empirical mode decomposition, and variational mode decomposition allow you to study signal components in relative isolation on the same time scale as the original data. Each technique has proven itself powerful in a number of applications.
The example has given a few rules of thumb to get you started, but these should not be regarded as absolute. The following points recap the properties of the MRA techniques presented here, along with some general rules of thumb.

- If your data appears to contain oscillatory components close in frequency, try VMD or wavelet packets. VMD identifies the important center frequencies directly from the data, while wavelet packets use a fixed frequency analysis, which may be less flexible than VMD.
- If you are interested in transient events in your data, such as impulsive events or transient reductions and increases in variability, try a wavelet MRA. In any MRA, these events are usually localized to the finest-scale (highest center frequency) MRA components.
- If you are interested in representing a smooth trend term in the data, consider EMD or a wavelet MRA.

With Signal Multiresolution Analyzer, you can perform multiresolution analysis on a signal, obtain metrics on various MRA components, experiment with partial reconstructions, and generate MATLAB scripts to reproduce the analysis at the command line.

### References

[1] Dragomiretskiy, Konstantin, and Dominique Zosso. "Variational Mode Decomposition." IEEE Transactions on Signal Processing 62, no. 3 (February 2014): 531–44. https://doi.org/10.1109/TSP.2013.2288675.

[2] Flandrin, P., G. Rilling, and P. Goncalves. "Empirical Mode Decomposition as a Filter Bank." IEEE Signal Processing Letters 11, no. 2 (February 2004): 112–14. https://doi.org/10.1109/LSP.2003.821662.

[3] Huang, Norden E., Zheng Shen, Steven R. Long, Manli C. Wu, Hsing H. Shih, Quanan Zheng, Nai-Chyuan Yen, Chi Chao Tung, and Henry H. Liu. "The Empirical Mode Decomposition and the Hilbert Spectrum for Nonlinear and Non-Stationary Time Series Analysis." Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences 454, no. 1971 (March 8, 1998): 903–95.
https://doi.org/10.1098/rspa.1998.0193.

[4] Percival, Donald B., and Andrew T. Walden. Wavelet Methods for Time Series Analysis. Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge; New York: Cambridge University Press, 2000.
### definition crusher double

- Team Fortress 2 item definition indexes: This is a list of the item definition indexes in TF2; these are useful for distinguishing an unlockable weapon from its original (for example, The Backburner from an ordinary Flamethrower). This list is divided into Weapons, Hats, and Crafting Items. Usage: `SetEntProp(edict, Prop_Send, "m_iItemDefinitionIndex", index); int iItemDefinitionIndex = GetEntProp(entity, Prop_Send, "m ...`
- Biosafety Levels 1, 2, 3 & 4 | What's the Difference?: Learn the important differences between BSL-1, BSL-2, BSL-3, and BSL-4 labs. Updated 3/31/20: information about the biosafety level requirements for handling SARS-CoV-2 (COVID-19 coronavirus) can be found ... Biosafety Levels (BSL) are a series of ...
- Modelling energy and size distribution in cone crushers: The experimental approach to mimic the conditions in a crusher where compressive crushing occurs is to use a piston and die test (Bengtsson et al., 2006; Barrios and Tavares, 2016). The piston and die test can be used for testing both single and multi-particle ...
- Solidified Experience: Solidified Experience is an item added by Actually Additions. It will occasionally drop from killed mobs instead of Experience Points (XP). Regular Experience Points can be converted to Solidified Experience by using an Experience Solidifier. Right-clicking it will consume it, at 8 XP per item. Sneak right-clicking will consume the entire stack. ...
- Alumina3 Precip: Setting up the PSD definition. The present implementation requires geometrically spaced bins, with a size ratio of $\sqrt[3]{2}$. The easiest way to specify this sieve series is to use the Q option, with a Q-value of ... Refer to Size Configuration.
- 22 Essential Bar Tools and Equipment Every Bar Should Have: It can also double as a wine cooler to keep wine chilled. Ice crusher: many cocktails call for crushed ice.
Having an ice crusher behind the bar is the ideal way for bartenders to crush ice to the right consistency for your signature cocktails.

- How to Maintain a Coffee Maker: 6 Steps (with Pictures): Maintaining your coffee maker on a regular basis will help keep it in good working order and can improve your coffee's flavor and freshness. Leftover coffee oils can accumulate inside the coffee maker ...
- Leading Crusher Manufacturer in China | HXJQ: For smaller discharge sizes, a three-stage crusher can be used; for example, the fine crushing crusher or the roller crusher is used to further crush the ore to less than 10 mm. In actual production, the suitable crusher can be selected according to the size of the concrete block.
- What Is the Definition of "scale" in Art?: In art, scale refers to the size ratio between everything within the image. Using a scale allows the size relationships between objects to appear real or believable.
- Stacker: Its function is to pile bulk material such as limestone, ores and cereals onto a stockpile. A reclaimer can be used to recover the material. Gold dredges in Alaska had a stacker that was a fixed part of the dredge. It carried over-size material to the tailings pile.
{}
Sha Li -- Once you've got a task to do, it's better to do it than live with the fear of it.

23 Oct 2018

Grounded Language Learning by Mechanical Turker Descent

Paper Title: Mastering the Dungeon: Grounded Language Learning by Mechanical Turker Descent

Natural Language Grounding

The task of natural language grounding is to find a mapping from natural language to actionable terms. Turkers are asked to provide training pairs $(x,y)$ where $x$ is a natural language command and $y$ is an action sequence in the game world of “Mastering the Dungeon”. The set of $y$ is rather restricted, and turkers are encouraged to use flexible vocabulary in giving the command $x$. An example is: “Steal the crown from the troll and put it on” $\rightarrow$ take silver crown from troll, wear silver crown

Mechanical Turker Descent

This paper proposes a new way of collecting training data from turkers by encouraging competition across turkers in the short run and collaboration to train a model in the long run. The training consists of several rounds. In each round, turkers think of new training examples for their own models (“training their own dragons”) and compete against each other for a monetary reward. Each model is evaluated on the common test set and the training sets generated by other turkers. Then, at the end of each round, all of the training examples are collected and merged into the common training set and common test set. This mechanism incentivizes turkers to provide examples that are neither too easy nor too hard: examples that are too easy or too hard will not improve the turker's score compared to others. In a sense, the turkers are creating a curriculum of harder and harder examples for the model.

GraphWorld Representation

The world state is represented by a graph where each concept, object, location and actor is a node and the edges represent relations between them. “Relation” here is used in a quite general sense, covering positional relations and properties.
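The round structure described above can be sketched as a toy simulation. Everything here is an illustrative assumption (a "model" that memorizes pairs, an accuracy score); the real system trains neural models on the crowdsourced pairs:

```python
# Toy sketch of the Mechanical Turker Descent round structure.
# Each turker contributes examples; each turker's model is scored on the
# pooled evaluation data; all contributions are merged after the round.
# The data types and the scoring rule are illustrative, not the paper's.

def run_round(contributions, common_train, common_test):
    """contributions: dict of turker -> list of (command, actions) pairs."""
    scores = {}
    for turker, examples in contributions.items():
        # Train this turker's model on the common data plus their own examples.
        model = train(common_train + examples)
        # Evaluate on the common test set and on every *other* turker's data.
        eval_data = common_test + [
            ex for t, exs in contributions.items() if t != turker for ex in exs
        ]
        scores[turker] = evaluate(model, eval_data)
    # At the end of the round, everything is pooled for the next round.
    merged = [ex for exs in contributions.values() for ex in exs]
    return scores, common_train + merged

def train(pairs):
    # Stand-in "model": simply memorize the command -> actions mapping.
    return dict(pairs)

def evaluate(model, pairs):
    return sum(model.get(cmd) == acts for cmd, acts in pairs) / len(pairs)

scores, pool = run_round(
    {"A": [("open door", "open door")], "B": [("take crown", "take silver crown")]},
    common_train=[("go north", "go north")],
    common_test=[("go north", "go north")],
)
```

Because each turker's examples are held out of their own evaluation set, an example only helps its author if other turkers' models fail on it while the author's model still generalizes, which is what pushes contributions toward the "not too easy, not too hard" band.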
Each action has a set of prerequisites and applies a transformation on the graph.

Language Grounding Model

Since the language grounding task is essentially finding a mapping between two sequences, a natural baseline is the seq2seq model with attention. On the encoder side, our input is natural language, which fits the seq2seq model well, but on the decoder side, our output is a structured action sequence. This brings restrictions to our decoding process and also inductive biases that should be included in the model design.

1. Compositionality of actions. Every action is associated with arguments and thus can be represented as $(type, arg_1, arg_2)$. In its vector representation, the action type embedding is concatenated with the two argument embeddings: $a = [Emb(type); Emb(arg_1); Emb(arg_2)]$. This compositional representation is data efficient since different actions share some common action types or arguments.
2. Selection preference. An attention vector is computed by taking each candidate left argument (right argument) as the query vector and the hidden states of the encoder as the keys/values.
3. Environmental clues. A context vector is added to the input of the GRU by concatenating the count of the action in the previously decoded sequence and the current location: $env_{a,j} = [count_{a,j}; location_j]$.
4. Action prerequisites. Hard constraints are imposed during decoding so that only valid actions are taken.

In the AC-seq2seq model, a separate hidden state $s_{a,j}$ is maintained for each candidate action at each time step (the standard seq2seq model only has $s_j$). This is like having $|A|$ GRUs working simultaneously on the decoder side, but all GRUs share parameters. Hidden states are updated by $s_{a,j} = GRU([a; attn_{a,j}; env_{a,j}], s_{a,j-1})$.

Experiments

Ablations for Mechanical Turker Descent include removing the time constraint for a fixed number of training examples, removing feedback from the model, and removing the competition altogether.
Model-wise, AC-seq2seq consistently outperforms the baseline seq2seq model.

Takeaways

Many companies and researchers rely on crowdworkers to provide labeled data for training complex machine learning models. Instead of developing methods to rate the quality of crowdsourced labels and remove noisy labels, we can incentivize crowdworkers to provide higher quality labels by gamification. (The authors put their data collection task in the context of playing a text adventure game where each worker gets to train their own dragon.) Although the paper works on the text grounding task, the data collection gamification idea can be applied to a wide spectrum of tasks.

This is a part of the DMG reading group series. Stay tuned for updates :)

Til next time,
Zoey at 00:00
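The compositional action representation from point 1 above can be sketched as plain vector concatenation. The embedding tables and dimensions here are toy assumptions; a real model would use learned embedding matrices:

```python
# Sketch of the compositional action embedding a = [Emb(type); Emb(arg1); Emb(arg2)].
# Toy dict lookups stand in for learned embedding tables.

EMB_DIM = 4

def embed(table, token):
    # Hypothetical lookup with a zero vector for unknown tokens.
    return table.get(token, [0.0] * EMB_DIM)

def action_vector(type_emb, arg_emb, action):
    act_type, arg1, arg2 = action
    # Concatenation lets different actions share type/argument embeddings,
    # which is what makes the representation data efficient.
    return embed(type_emb, act_type) + embed(arg_emb, arg1) + embed(arg_emb, arg2)

type_emb = {"take": [0.1] * EMB_DIM, "wear": [0.2] * EMB_DIM}
arg_emb = {"silver crown": [0.3] * EMB_DIM, "troll": [0.4] * EMB_DIM}

v = action_vector(type_emb, arg_emb, ("take", "silver crown", "troll"))
```

Note how "take silver crown from troll" and "wear silver crown" reuse the same `silver crown` argument embedding even though their action types differ.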
{}
veksetz 2022-01-04

A capacitor consists of two 6.0-cm-diameter circular plates separated by 1.0 mm. The plates are charged to 150 V, then the battery is removed. a. How much energy is stored in the capacitor? b. How much work must be done to pull the plates apart to where the distance between them is 2.0 mm?

ramirezhereva Expert

We are given: $D = 6\,\text{cm}$, $d = 1\,\text{mm}$, $V = 150\,\text{V}$.

a) The energy stored in the capacitor is $U = \frac{1}{2}CV^2$, where $C$ is the capacitance of the capacitor and $V$ is the potential. For a parallel-plate capacitor, $C = \frac{\epsilon_0 A}{d}$. The plates are circular, so the area is $A = \pi r^2$ with $r = \frac{D}{2} = 3\,\text{cm}$, and $\epsilon_0$ is the permittivity of free space, $\epsilon_0 = 8.854\times10^{-12}\,\frac{\text{F}}{\text{m}}$. Thus the first formula becomes $U = \frac{1}{2}\cdot\frac{\epsilon_0 \pi r^2}{d}\cdot V^2$. Converting cm and mm into m, we finally have:
$U = \frac{1}{2}\cdot\frac{8.854\times10^{-12}\,\pi\,(3\times10^{-2})^2}{1\times10^{-3}}\cdot(150)^2 \approx 0.28\times10^{-6}\,\text{J}$

b) Because the battery has been removed, the charge $Q = CV$ stays constant while the plates are pulled apart; it is the voltage, not the charge, that changes. Doubling the separation to $d_1 = 2\,\text{mm}$ halves the capacitance, $C_1 = \frac{C}{2}$, so the stored energy becomes
$U_1 = \frac{Q^2}{2C_1} = 2U \approx 0.56\times10^{-6}\,\text{J}$
The work required equals the increase in stored energy:
$W = U_1 - U \approx 0.56\times10^{-6} - 0.28\times10^{-6} = 0.28\times10^{-6}\,\text{J}$
(It would be wrong to reuse $V = 150\,\text{V}$ at the new separation: with the battery disconnected the voltage doubles to 300 V along with the separation.)
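A quick numeric check of both parts in plain Python, using the constant-charge reasoning for part (b):

```python
import math

# Plate geometry and initial conditions from the problem statement.
eps0 = 8.854e-12          # permittivity of free space, F/m
r = 0.03                  # plate radius: 6.0 cm diameter -> 0.03 m
d1, d2 = 1e-3, 2e-3       # plate separations, m
V = 150.0                 # initial voltage, V

A = math.pi * r**2
C1 = eps0 * A / d1        # initial capacitance
U1 = 0.5 * C1 * V**2      # part (a): energy stored at 1.0 mm

Q = C1 * V                # charge is fixed once the battery is removed
C2 = eps0 * A / d2
U2 = Q**2 / (2 * C2)      # energy at 2.0 mm with constant charge
W = U2 - U1               # part (b): work done pulling the plates apart

print(U1, W)              # both come out to about 2.8e-07 J
```

Doubling the gap halves the capacitance at fixed charge, so the energy doubles and the work equals the original stored energy.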
{}
Question

The function $f$ is defined by $f(x) = 2^{x} + 1$. Which of the following is the graph of $y = -f(x)$ in the $xy$-plane?

[Fig: 1] [Fig: 2]
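Since the answer figures survive only as placeholders, it may help to note the two features that identify the graph. This is a standard reflection argument, not part of the original question:

```latex
% y = -f(x) is the reflection of f across the x-axis:
y = -f(x) = -\left(2^{x} + 1\right)
% Two identifying features of the graph:
%   y-intercept:            y(0) = -(2^{0} + 1) = -2
%   horizontal asymptote:   x \to -\infty \implies 2^{x} \to 0 \implies y \to -1
% and y decreases without bound as x \to \infty.
```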
{}
Jeremy Côté

Bits, ink, particles, and words.

Notation

In mathematics, notation is simultaneously everything and nothing. It isn't difficult to imagine another alien species having the same notions of calculus as we do, but without the symbols of integration or differentiation. It might seem so natural now to see the expression $\partial x$, but that's only because we've spent years working with these symbols, forging a connection between concepts and notation. Because of this, it can seem entirely natural to look at notation and instantly grasp the concept behind it, rather than seeing mere symbols. This is quite similar to our experience with foreign languages, where the words and characters look alien to us, yet our own languages seem so obvious.

The Importance of Factoring

When you're trying to solve a simple algebraic equation like $ab = 5b$ for the variable $a$, it quickly becomes second nature to divide both sides of the equation by $b$, yielding $a = 5$. This makes complete sense, and it's what most people would do right off, without even thinking. I mean, look at both sides of that equation! If there's a $b$ on both sides, then the other value on each side of the equation should be equal to each other, giving us $a = 5$.
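The catch, of course, is that dividing by $b$ silently assumes $b \neq 0$. Factoring keeps every solution in view:

```latex
ab = 5b
\;\Longrightarrow\; ab - 5b = 0
\;\Longrightarrow\; b(a - 5) = 0
\;\Longrightarrow\; b = 0 \ \text{or}\ a = 5
```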
{}
# How to use Automated Labelling for documents? [closed]

Let's say I have been given 1000 documents and 6 labels from someone. My job is to assign each of these 1000 documents one of the 6 labels, which are words, not numbers. How can I automate or semi-automate this process using data science?

Alternatively, you can use unsupervised learning: these are techniques which do not need labels. You can use k-means to cluster your data into $$k=6$$ clusters. Then you can associate these clusters with the labels based on your experience.
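A minimal sketch of the clustering step: a toy k-means on 2-D points in pure Python. For real documents you would first vectorize the text (e.g. with TF-IDF) and normally use a library implementation such as scikit-learn's `KMeans`; this only illustrates the idea:

```python
# Toy k-means: assign each point to its nearest centroid, recompute the
# centroids, and repeat until the assignments stop changing. Initial
# centroids are simply the first k points, so the run is deterministic.

def dist2(p, q):
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, k, iters=100):
    centroids = points[:k]
    assign = [0] * len(points)
    for _ in range(iters):
        new_assign = [
            min(range(k), key=lambda c: dist2(p, centroids[c])) for p in points
        ]
        if new_assign == assign:
            break                      # converged
        assign = new_assign
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return assign, centroids

# Two obvious clusters of "document vectors". The cluster ids come from
# clustering alone; mapping ids onto the six human-readable labels is the
# manual step mentioned in the answer.
docs = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [4.8, 5.0]]
labels, _ = kmeans(docs, 2)
```

With `k=6` and TF-IDF vectors, the same loop produces six clusters; you would then inspect a few documents per cluster to decide which of the six given labels each cluster corresponds to.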
{}
# Which really big number is bigger?

This question is tricky (and in particular harder than Which big number is bigger?), for those who like more challenging puzzles.

Input

Integers a1, a2, a3, a4, a5, b1, b2, b3, b4, b5, each in the range 1 to 10.

Output

True if a1^(a2^(a3^(a4^a5))) > b1^(b2^(b3^(b4^b5))) and False otherwise. ^ is exponentiation in this question.

Rules

This is code-golf. Your code must terminate correctly within 10 seconds for any valid input on TIO. If your language is not on TIO, the code should finish under 10 seconds on your machine. You can output anything Truthy for True and anything Falsey for False.

Test cases

Recall that by the rules of exponentiation, a1^(a2^(a3^(a4^a5))) == a1^a2^a3^a4^a5.

10^10^10^10^10 > 10^10^10^10^9
1^2^3^4^5 < 5^4^3^2^1
2^2^2^2^3 > 10^4^3^2^2
6^7^8^9^10 is not bigger than 6^7^8^9^10
10^6^4^2^2 < 10^6^2^4^2
2^2^2^2^10 > 2^2^2^10^2
10^9^8^7^6 < 6^7^8^9^10
3^1^10^10^10 > 2^1^10^10^10
9^10^10^10^10 < 10^9^10^10^10

New test cases from Kevin Cruijssen

[10,10,10,10,10, 10,10,10,10,9] #true
[2,2,2,2,3, 10,4,3,2,2] #true
[2,2,2,2,10, 2,2,2,10,2] #true
[10,10,10,10,10, 9,10,10,10,10] #true
[3,2,2,1,1, 2,5,1,1,1] #true
[2,2,3,10,1, 2,7,3,9,1] #true
[7,9,10,10,10, 6,9,10,10,10] #true
[3,2,2,2,2, 2,2,2,2,2] #true
[8,3,1,2,1, 2,2,3,1,1] #true
[2,4,2,1,1, 3,3,2,1,1] #true
[5,4,3,2,1, 1,2,3,4,5] #true
[1,2,3,4,5, 5,4,3,2,1] #false
[6,7,8,9,10, 6,7,8,9,10] #false
[10,6,4,2,2, 10,6,2,4,2] #false
[10,9,8,7,6, 6,7,8,9,10] #false
[1,10,10,10,10, 1,10,10,10,9] #false
[2,4,1,1,1, 2,2,2,1,1] #false
[2,2,2,1,1, 2,4,1,1,1] #false
[2,5,1,1,1, 3,2,2,1,1] #false
[4,2,1,1,1, 2,4,1,1,1] #false
[2,4,1,1,1, 4,2,1,1,1] #false
[2,3,10,1,1, 8,3,9,1,1] #false
[8,3,9,1,1, 2,3,10,1,1] #false
[2,4,1,1,1, 3,3,1,1,1] #false
[2,2,1,9,9, 2,2,1,10,10] #false
[2,2,1,10,10, 2,2,1,9,9] #false
[1,1,1,1,1, 1,2,1,1,1] #false

• I'm VTC'ing this, even though it isn't a dupe; it's just too close to a challenge you posted 4 hours prior and shows a lack of effort to think
up unique challenges. – Magic Octopus Urn Apr 25 '19 at 18:37
• I feel like 9 people agreed on my point with their votes; but, as you say, it's your choice to keep it even though it has 9 downvotes. Was just shedding some light on why there may be downvotes. – Magic Octopus Urn Apr 25 '19 at 18:49
• Was just my two cents man, honestly; we don't need to go into detail here. Regret I even said anything; the last thing I wanted was an argumentative response. I was just stating why I gave a -1. – Magic Octopus Urn Apr 25 '19 at 18:51
• I'm voting to reopen this post because it has a different difficulty parameter and the required approach to solve it is very different. Meta post. – user202729 Apr 26 '19 at 9:29
• Suggested test cases (for the edge cases encountered by the Python, Ruby, Java, and 05AB1E answers) – Kevin Cruijssen May 17 '19 at 11:27

# Ruby, 150 bytes

See revisions for previous byte counts.

->a,b,c,d,e,f,g,h,i,j{l=->s,t=c{Math.log(s,t)};y,z=l[l[g,b]]-d**e+l[h]*i**=j,l[l[a,f]*b**c,g];a>1?f<2?1:b<2||g<2?z>h:c<2||d<2?l[z,h]>i:y==0?a>f:y<0:p}

-10 bytes thanks to @ValueInk
+16 bytes thanks to @RosLuP for bugs.

Compare different base power-towers (of 'height' five)?

### Ungolfed code:

-> a, b, c, d, e, f, g, h, i, j {
  l =-> s, t = c {Math.log(s, t)}
  i **= j
  y = l[l[g, b]] - d ** e + l[h] * i
  z = l[l[a, f] * b ** c, g]
  if a == 1
    return p
  elsif f == 1
    return 1
  elsif b == 1 || g == 1
    return z > h
  elsif c == 1 || d == 1
    return l[z, h] > i
  elsif y == 0
    return a > f
  else
    return y < 0
  end
}

### Code breakdown:

l =-> s, t = c {Math.log(s, t)}

This is the base t logarithm, which will be used to reduce the size of the numbers we are comparing. It defaults to base c when only one argument is given.

i **= j
y = l[l[g, b]] - d ** e + l[h] * i
z = l[l[a, f] * b ** c, g]

This updates i = i ** j since i never gets used on its own, and y is the result of logging b^c^d^e == g^h^i(^j) twice and moving everything to one side.
We then let z = l[l[a, f] * b ** c, g], i.e. the log base g of the log base f of a ** b ** c.

if a == 1
  return p
elsif f == 1
  return 1

1^b^c^d^e = 1 is never greater than f^g^h^i^j, and likewise, a^b^c^d^e is always greater than 1^g^h^i^j = 1 if a != 1. Note that return p returns nil, which is falsey, and return 1 returns 1, which is truthy.

elsif b == 1 || g == 1
  return z > h

If b == 1 or g == 1, then this reduces to comparing a ** b ** c to f ** g ** h, which is done with two logs to both sides.

elsif c == 1 || d == 1
  return l[z, h] > i

This compares a ** b ** c with f ** g ** h ** i by rearranging it as log[log[b ** c * log[a, f], g], h] compared to i. (Recall that i **= j in the beginning and z = log[b ** c * log[a, f], g].)

elsif y == 0
  return a > f
else
  return y < 0
end

This compares the 4 highest powers after logging both sides twice. If they are equal, it compares the base.

# Python 2, 671612495490611 597 bytes

lambda a,b:P(S(a,b))>P(S(b,a))if P(a)==P(b)else P(a)>P(b)
def S(a,b):
 if a and a[-1]==b[-1]:
  a.pop()
  b.pop()
  return S(a,b)
from math import*
L=log
E=exp
N=lambda m,n,x:N(m,n+1,L(x))if x>=1else N(m,n-1,E(x))if x<0else(m+n,x)
A=lambda a,n,x:(0,1)if a==1else(1,R(x,n)*L(a))if a<1else N(2,*C(L(L(a)),x,n-1))if n else(1,x*L(a))
def C(c,x,n):
 if c*n==0:return(0if c else n,x+c)
 z=R(x,n-1)
 if z<=L(abs(c)):return(0,E(z)+c)
 return N(1,*C(L(1-E(L(-c)-z)if c<0else 1+E(L(c)-z)),x,n-1))
def R(x,n):
 try:exec'x=E(x)'*n
 except:x=float('inf')
 return x
P=lambda b:b and N(0,*A(b[0],*P(b[1:])))or(0,1)

-59 bytes thanks to @EmbodimentOfIgnorance
-117 bytes thanks to @Neil
+121 bytes for about five bug-fixes, all found by @ngn

Takes the inputs as two lists.

NOTE: Also works with larger lists or those of unequal length. EDIT: No longer true; it still works if P(a) and P(b) result in different tuples, but if they are the same, this updated code above only works with lists with a fixed size of 5 now.

Try it online.
Explanation:

Golfed version of this answer on math.stackexchange.com, so all credit goes to @ThomasAhle. To quote his answer:

The idea is to represent power towers as a single floating point number with $n$ exponentiations: $(x\mid n) := \exp^n(x)$. Normalizing $x\in[0,1)$, this format allows easy comparison between numbers. What remains is a way to calculate $a^{(x\mid n)}$ for any real, positive $a$. My Python code below is an attempt to do so, while being as numerically stable as possible, e.g. by using the log-sum trick.

My code runs in time proportional to the height of the tower (number of apow calls) and the iterated-log of its value (number of recursive calls). I haven't been able to find two towers with values close enough to cause my method to fail. At least for integer exponents. With fractional exponents it is possible to create towers too close for my representation to handle. E.g. $2^{2^{2^{2^0}}}<2^{2^{2^{2^{(1/2)^{2^{2^{2^2}}}}}}}$

I would be interested in suggestions for other types of counter examples, especially integer ones. It seems to me that for the problem to be in P, we need non-numerical methods. It doesn't seem unlikely at all that certain analytical cases are harder than P.

Examples:

powtow([2,2,2,2,2,2,2,2,2,2,2,2,2,2,4,2,2,2]) = (0.1184590219613409, 18)
powtow([9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9]) = (0.10111176550354063, 18)
powtow([2,2,5,2,7,4,9,3,7,6,9,9,9,9,3,2]) = (0.10111176550354042, 17)
powtow([3,3,6,3,9,4,2,3,2,2,2,2,2,3,3,3]) = (0.19648862015624008, 17)

Counter examples:

powtow([2,2,2,2,2,2,2]) = (0.8639310719129168, 6)
powtow([3,2,2,2,2,2,2]) = (0.8639310719129168, 6)

Regarding the counter examples, he mentions the following in the comment section:

I believe if we bound the exponents (or maybe use a much larger upper bound) we can use my method for comparing the height 3 or 4 head of the tower. It should be strong enough to tell us if they are equal or one is larger.
Plan B is to choose the highest tower. Plan C is the interesting one: At this point the values of the rest of the tower only matter if the heads are equal, so we can walk down the towers in parallel, stopping as soon as we see a differing value. Hence the main thing to be proven is that once the head of a tower exceeds a certain point, and the rest of the exponents are bounded (and equally numerous), we can simply look at the top differing value. It's a bit counterintuitive, but it seems very likely from the simple inequalities you get.

Since plans A and B are irrelevant in this challenge (the height is 5 for both power-towers we input), plan C it is. So I've changed P(a)>P(b) to P(S(a,b))>P(S(b,a))if P(a)==P(b)else P(a)>P(b) with the recursive function S(a,b). If P(a) and P(b) result in the same tuple, the P(S(a,b))>P(S(b,a)) will first remove trailing values which are equal at the same indices, before doing the same P(A)>P(B) check on these now shorter lists.

• I also suck at golfing in python, but here is a 612 byter – Embodiment of Ignorance Apr 25 '19 at 18:27
• 495 bytes – Neil Apr 25 '19 at 19:43
• Fails for [10,10,10,10,10]>[9,10,10,10,10] – Embodiment of Ignorance Apr 26 '19 at 17:15
• You only use the function R once, so maybe you can just inline it? – Embodiment of Ignorance Apr 26 '19 at 18:09
• @EmbodimentofIgnorance There's still an outstanding call to R on line 5... – Neil Apr 29 '19 at 10:32

# 05AB1E, 96 104 bytes

3èI4èmU8.$m©I7èI2è.n*I6èI1è.nI2è.n+Vнi0ë5èi1ë2ô1èßi¦2£mIнI5è.n*I6è.nDI7èDi\1›·<žm*ë.n}®›ëXYQiнI5è›ëXY›

Port of @SimplyBeautifulArt's Ruby answer, so make sure to upvote him!

+8 bytes as work-around, because $\log_1(x)$ should result in POSITIVE_INFINITY for $x>1$ and NEGATIVE_INFINITY for $x<1$, but results in 0.0 for both cases instead in 05AB1E (i.e. test cases [3,2,2,1,1, 2,5,1,1,1] (POSITIVE_INFINITY case) and [2,4,1,1,1, 3,3,1,1,1] (NEGATIVE_INFINITY case)).

Input as a list of ten integers: [a,b,c,d,e,f,g,h,i,j].
Explanation:

3èI4èm        # Calculate d**e
U             # And pop and store it in variable X
8.$m          # Calculate i**j
©             # Store it in variable ® (without popping)
I7èI2è.n      # Calculate c_log(h)
*             # Multiply it with i**j that was still on the stack: i**j * c_log(h)
I6èI1è.nI2è.n # Calculate c_log(b_log(g))
+             # And sum them together: i**j * c_log(h) + c_log(b_log(g))
V             # Pop and store the result in variable Y
нi            # If a is 1:
 0            #  Push 0 (falsey)
ë5èi          # Else-if f is 1:
 1            #  Push 1 (truthy)
ë2ô1èßi       # Else-if the lowest value of [c,d] is 1:
 ¦2£m         #  Calculate b**c
 IнI5è.n      #  Calculate f_log(a)
 *            #  Multiply them together: b**c * f_log(a)
 I6è.n        #  Calculate g_log(^): g_log(b**c * f_log(a))
 D            #  Duplicate it
 I7è          #  Push h
 Di           #  Duplicate it as well, and if h is exactly 1:
  \           #   Discard the duplicated h
  1›          #   Check if the calculated g_log(b**c * f_log(a)) is larger than 1
              #   (which results in 0 for falsey and 1 for truthy)
  ·<          #   Double it, and decrease it by 1 (it becomes -1 for falsey; 1 for truthy)
  žm*         #   Multiply that by 9876543210 (to mimic POSITIVE/NEGATIVE INFINITY)
 ë            #  Else:
  .n          #   Calculate h_log(g_log(b**c * f_log(a))) instead
 }            #  After the if-else:
 ®›           #  Check whether the top of the stack is larger than variable ®
ëXYQi         # Else-if variables X and Y are equal:
 нI5è›        #  Check whether a is larger than f
ë             # Else:
 XY›          #  Check whether X is larger than Y
              # (after which the top of the stack is output implicitly as result)

If anyone wants to try and golf it further, here is a helper program I've used to get the correct variables from the input-list.

• Very impressed this has got under 100! And thank you so much for adding the bounty. – Anush May 16 '19 at 18:35
• @Anush I actually have the feeling 96 is pretty long, considering the non-golfing language Ruby got 151. ;p And np about the bounty. It's mainly for @SimplyBeautifulArt's approach, but at the same time to give the challenge some attention. The reason it got downvoted is because you posted it a few hours after your earlier answer with 3 powers.
I personally like this challenge, and was the first to upvote and answer it, but I can still kinda see the truth in the very first comment under the challenge post at the same time. Hopefully the bounty will make your challenge 0 or positive, though :) – Kevin Cruijssen May 16 '19 at 18:44
• I dream of getting 0! :) – Anush May 16 '19 at 18:50
• [2,1,1,1,1, 3,1,1,1,1] results in 1 but has to result in 0 – RosLuP May 18 '19 at 9:03
• @RosLuP Fixed, along with other INFINITY cases due to $\log_1(x)$. Unfortunately the byte-count is now no longer below 100. – Kevin Cruijssen May 23 '19 at 10:03

## C, 168 180 bytes

C port from Kevin Cruijssen's answer.

#define l(a,b)log(a)/log(b)
z(a,b,c,d,e,f,g,h,i,j){float t=pow(i,j),y=l(l(g,b),c)-pow(d,e)+l(h,c)*t,z=l(l(a,f)*pow(b,c),g);return~-a&&f<2|(b<2|g<2?z>h:c<2|d<2?l(z,h)>t:y?y<0:a>f);}

Try it online

# APL(NARS), chars 118, bytes 236

{p←{(a b c d)←⍵⋄a=1:¯1⋄b=1:⍟⍟a⋄(⍟⍟a)+(c*d)×⍟b}⋄(=/(a b)←{p 1↓⍵}¨⍺⍵)∧k←(∞ ∞)≡(m n)←{p(3↑⍵),*/3↓⍵}¨⍺⍵:(↑⍺)>↑⍵⋄k:a>b⋄m>n}

The function above, called z, is used as "a z w": it returns 1 if the number encoded by a is greater than the number encoded by w, and 0 otherwise. If f(a,b,c,d,e)=a^b^c^d^e, then f(aa)>f(bb), with aa and bb both arrays of 5 positive numbers (and assuming the leading element a of each array is >1), if and only if log(log(f(aa)))>log(log(f(bb))). One has to use the log() laws log(A*B)=log(A)+log(B) and log(A^B)=B*log(A) to build v(aa)=log(log(f(aa)))=v(a,b,c,d,e)=log(log(a))+log(b)×(c^(d^e)), implemented as the {p(3↑⍵),*/3↓⍵} function, and so the exercise is to find when v(aa)>v(bb). But there is a case where v(aa) and v(bb) are both infinite (the end of the APL float range is reached); in that case I use the less safe function s(a,b,c,d,e)=log(log(b))+log(c)*(d^e), implemented as {p 1↓⍵}, which I do not fully understand to be correct, and which does not take one parameter into account either...
test:

z←{p←{(a b c d)←⍵⋄a=1:¯1⋄b=1:⍟⍟a⋄(⍟⍟a)+(c*d)×⍟b}⋄(=/(a b)←{p 1↓⍵}¨⍺⍵)∧k←(∞ ∞)≡(m n)←{p(3↑⍵),*/3↓⍵}¨⍺⍵:(↑⍺)>↑⍵⋄k:a>b⋄m>n}

      10 10 10 10 10 z 10 10 10 10 9
1
      1 2 3 4 5 z 5 4 3 2 1
0
      2 2 2 2 3 z 10 4 3 2 2
1
      10 6 4 2 2 z 10 6 2 4 2
0
      2 2 2 2 10 z 2 2 2 10 2
1
      10 9 8 7 6 z 6 7 8 9 10
0
      10 10 10 10 10 z 10 10 10 10 9
1
      2 2 2 2 3 z 10 4 3 2 2
1
      2 2 2 2 10 z 2 2 2 10 2
1
      10 10 10 10 10 z 9 10 10 10 10
1
      3 2 2 1 1 z 2 5 1 1 1
1
      2 2 3 10 1 z 2 7 3 9 1
1
      7 9 10 10 10 z 6 9 10 10 10
1
      3 2 2 2 2 z 2 2 2 2 2
1
      3 10 10 10 10 z 2 10 10 10 10
1
      8 3 1 2 1 z 2 2 3 1 1
1
      2 4 2 1 1 z 3 3 2 1 1
1
      5 4 3 2 1 z 1 2 3 4 5
1
      1 2 3 4 5 z 5 4 3 2 1
0
      6 7 8 9 10 z 6 7 8 9 10
0
      10 6 4 2 2 z 10 6 2 4 2
0
      10 9 8 7 6 z 6 7 8 9 10
0
      1 10 10 10 10 z 1 10 10 10 9
0
      2 4 1 1 1 z 2 2 2 1 1
0
      2 2 2 1 1 z 2 4 1 1 1
0
      2 5 1 1 1 z 3 2 2 1 1
0
      4 2 1 1 1 z 2 4 1 1 1
0
      2 4 1 1 1 z 4 2 1 1 1
0
      2 3 10 1 1 z 8 3 9 1 1
0
      8 3 9 1 1 z 2 3 10 1 1
0
      2 4 1 1 1 z 3 3 1 1 1
0
      2 2 1 9 9 z 2 2 1 10 10
0
      2 2 1 10 10 z 2 2 1 9 9
0
      1 1 1 1 1 z 1 2 1 1 1
0
      1 1 1 1 2 z 1 1 1 1 1
0
      1 1 1 1 1 z 1 1 1 1 1
0
      9 10 10 10 10 z 10 9 10 10 10
1
      9 10 10 10 10 z 10 10 10 10 10
0
      10 10 10 10 10 z 10 10 10 10 10
0
      11 10 10 10 10 z 10 10 10 10 10
1

• The tests in the challenge description are lacking some edge cases. Could you verify that it's also working for all these test cases? – Kevin Cruijssen May 17 '19 at 10:24
• @KevinCruijssen Here are your tests; if I exclude the one above, they seem ok... – RosLuP May 17 '19 at 11:26
• If all test cases are correct, then +1 from me. Looking forward to seeing an explanation of your code. :) – Kevin Cruijssen May 17 '19 at 11:28
• You said you calculated each by taking log(log()), but for that test case, the difference between log(log(10^10^10^10^10)) and log(log(9^10^10^10^10)) would require an absurd amount of accuracy to pick up on. You'd need to have a floating point with about 2e10 base 10 digits of accuracy.
And this is ignoring the fact that both sides are approximately as large as 10^10^10, which I find hard to believe you were able to compute. – Simply Beautiful Art May 17 '19 at 12:05
• Perhaps it fails 9, 10, 10, 10, 10, 10, 9, 10, 10, 10, which should return 1, but s(9,10,10,10,10) < s(10,9,10,10,10). – Simply Beautiful Art May 17 '19 at 15:04

# Java 8, 299288286252210208 224 bytes

Math M;(a,b,c,d,e,f,g,h,i,j)->{double t=M.pow(i,j),y=l(l(g,b),c)-M.pow(d,e)+l(h,c)*t,z=l(l(a,f)*M.pow(b,c),g);return a>1&&f<2|(b<2|g<2?z>h:c<2|d<2?l(z,h)>t:y==0?a>f:y<0);}double l(double...A){return M.log(A[0])/M.log(A[1]);}

Port of @SimplyBeautifulArt's Ruby answer, so make sure to upvote him!

-14 bytes thanks to @SimplyBeautifulArt.
+17 bytes for the same bug-fixes as the Ruby answer.

Try it online.

Explanation:

Math M;                    // Math M=null on class-level to save bytes
(a,b,c,d,e,f,g,h,i,j)->{   // Method with ten integer parameters and boolean return-type
 double t=M.pow(i,j),      //  Temp t = i to the power j
  y=l(l(g,b),c)            //  Temp y = c_log(b_log(g))
   -M.pow(d,e)             //         - d to the power e
   +l(h,c)*t,              //         + c_log(h) * t
  z=l(l(a,f)*M.pow(b,c),g);//  Temp z = g_log(f_log(a) * b to the power c)
 return a>1&&              //  If a is 1:
                           //   Return false
  f<2|(                    //  Else-if f is 1:
                           //   Return true
   b<2|g<2?                //  Else-if either b or g is 1:
    z>h                    //   Return whether z is larger than h
   :c<2|d<2?               //  Else-if either c or d is 1:
    l(z,h)>t               //   Return whether h_log(z) is larger than t
   :y==0?                  //  Else-if y is 0:
    a>f                    //   Return whether a is larger than f
   :                       //  Else:
    y<0);}                 //   Return whether y is negative

// Separated method to calculate B_log(A) for inputs A,B
double l(double...A){return M.log(A[0])/M.log(A[1]);}

• It seems to work fine if you use x==y instead of M.abs(x-y)<1e-9. – Simply Beautiful Art May 16 '19 at 12:22
• @SimplyBeautifulArt Wait, it does?.. Wtf. When I had my ungolfed version it didn't work for one test case. The string output was the same, but internally it ever so slightly differed.
The ungolfed version was your ungolfed version, before I changed it to the golfed ternary you have in your Ruby answer as well. Stupid floating point precision.. Will change it, since it works for the test cases in the current approach indeed. Thanks. – Kevin Cruijssen May 16 '19 at 12:28
• Lol, while you're at it you might want to look at my updates :^) – Simply Beautiful Art May 16 '19 at 12:28
• t can be removed to save one byte by putting it into y like I did. TIO – Simply Beautiful Art May 17 '19 at 13:02
• @SimplyBeautifulArt Nvm about updating my 05AB1E answer with the same change. The byte-count would remain 96. – Kevin Cruijssen May 17 '19 at 13:36
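The double-log trick shared by the Ruby, C, and Java answers can be sketched in Python for the generic case where every entry is at least 2. Inputs containing 1s need the special-casing those answers implement; this is an illustrative sketch, not a submission:

```python
from math import log

def bigger(a1, a2, a3, a4, a5, b1, b2, b3, b4, b5):
    """True iff a1^a2^a3^a4^a5 > b1^b2^b3^b4^b5, assuming all entries >= 2.

    Taking logs twice turns the comparison of the exponent towers
    a2^a3^(a4^a5) vs b2^b3^(b4^b5) into comparing a4**a5 against
    log_a3(b3) * b4**b5 + log_a3(log_a2(b2)), which fits comfortably in
    floating point. Only if the exponent towers tie does the base decide.
    """
    y = log(log(b2, a2), a3) - a4**a5 + log(b3, a3) * b4**b5
    if y == 0:                 # exponent towers are equal
        return a1 > b1
    return y < 0               # y < 0 means the a-side tower is taller

# Generic-case checks from the challenge's test list:
print(bigger(10,10,10,10,10, 10,10,10,10,9))   # True
print(bigger(2,2,2,2,3, 10,4,3,2,2))           # True
print(bigger(10,9,8,7,6, 6,7,8,9,10))          # False
print(bigger(6,7,8,9,10, 6,7,8,9,10))          # False
```

For bounded integer inputs the exponent towers either tie exactly or differ by far more than floating-point error, which is why the float comparison is safe here even though it would not be for arbitrary real exponents.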
{}
# zbMATH — the first resource for mathematics

## Engl, Heinz W.

Author ID: engl.heinz-w Published as: Engl, H.; Engl, H. W.; Engl, Heinz; Engl, Heinz W. External Links: MGP · Wikidata · GND Documents Indexed: 125 Publications since 1976, including 11 Books Reviewing Activity: 33 Reviews

#### Co-Authors

34 single-authored 10 Neubauer, Andreas 7 Burger, Martin 7 Scherzer, Otmar 6 Rundell, William 5 Kügler, Philipp 4 Groetsch, Charles W. 4 Landl, Gerhard 4 Louis, Alfred Karl 4 Nashed, M. Zuhair 3 Capasso, Vincenzo 3 Langthaler, Thomas 3 Markowich, Peter Alexander 3 McLaughlin, Joyce Rogers 3 Römisch, Werner 3 Wacker, Hansjörg 2 Binder, Andreas 2 Colton, David Lem 2 Egger, Herbert 2 Grever, Wilhelm 2 Hanke-Bourgeois, Martin 2 Haslinger, J. R. 2 Iusem, Alfredo Noel 2 Kaltenbacher, Barbara 2 Kindermann, Stefan 2 Klibanov, Michael V. 2 Kress, Rainer 2 Kunisch, Karl 2 Leitão, Antonio 2 Lindner, Ewald Hans 2 Manselli, Paolo 2 Resmerita, Elena 2 Vessella, Sergio 2 Wakolbinger, Anton 2 Zarzer, Erich Alexander 2 Zeisel, Helmut 2 Zulehner, Walter 1 Albrecher, Hansjörg 1 Anderssen, Robert Scott 1 Bodenhofer, Ulrich 1 Buchberger, Bruno 1 Carthel, C. 1 Cesari, Lamberto 1 Deuflhard, Peter 1 Drab, C. B. 1 Eisenberg, Robert S. 1 Felici, Thomas P. 1 Flamm, Christoph 1 Fúsek, Peter 1 Gfrerer, Helmut 1 Goekler, Gerald 1 Hanke, Michael 1 Hodina, Günther 1 Hofinger, Andreas 1 Hofmann, Bernd 1 Holzleitner, Ludwig 1 Kauffmann, Harald F. 1 Kendermann, S. 1 Kirsch, Andreas 1 Lu, James J. 1 Mahmoud, K. G. 1 Mayer, Philipp A. 1 Müller, Stefan 1 Nanda, Arati 1 Offner, Günter 1 Pereverzev, Sergei V. 1 Périaux, Jacques F.
1 Pfau, Ralf Uwe 1 Pietra, Paola 1 Rosenkranz, Markus 1 Schatz, Andrea 1 Schuster, Peter 1 Seidel, Alexander 1 Stangl, Claudia 1 Yamamoto, Masahiro 1 Zou, Jun

#### Serials

13 Inverse Problems 6 Zeitschrift für Angewandte Mathematik und Mechanik (ZAMM) 6 Numerical Functional Analysis and Optimization 6 Journal of Inverse and Ill-Posed Problems 4 Journal of Mathematical Analysis and Applications 3 Applicable Analysis 3 Surveys on Mathematics for Industry 3 Nonlinear Analysis. Theory, Methods & Applications 2 Journal of Approximation Theory 2 Pacific Journal of Mathematics 2 SIAM Journal on Numerical Analysis 2 Stochastic Analysis and Applications 2 Journal of Integral Equations and Applications 1 Bulletin of the Australian Mathematical Society 1 Computers and Structures 1 Journal of Computational Physics 1 Mathematical Methods in the Applied Sciences 1 ZAMP. Zeitschrift für angewandte Mathematik und Physik 1 Mathematics of Computation 1 Applied Mathematics and Optimization 1 Computing 1 Czechoslovak Mathematical Journal 1 Journal of Computational and Applied Mathematics 1 Journal of Optimization Theory and Applications 1 Monatshefte für Mathematik 1 Numerische Mathematik 1 Proceedings of the American Mathematical Society 1 Results in Mathematics 1 Journal of Integral Equations 1 Mitteilungen der Gesellschaft für Angewandte Mathematik und Mechanik 1 Applied Numerical Mathematics 1 Internationale Mathematische Nachrichten 1 European Journal of Applied Mathematics 1 Aequationes Mathematicae 1 SIAM Journal on Applied Mathematics 1 Zeitschrift für Operations Research. Serie B: Praxis 1 Bollettino della Unione Matemàtica Italiana. Serie VI. A 1 Nonlinear World 1 Advances in Computational Mathematics 1 European Mathematical Society Newsletter 1 Multibody System Dynamics 1 ZAMM. Zeitschrift für Angewandte Mathematik und Mechanik 1 Milan Journal of Mathematics 1 Multiscale Modeling & Simulation 1 Bollettino della Unione Matematica Italiana. Series V.
A 1 Bulletin de l’Académie Polonaise des Sciences, Série des Sciences Mathématiques, Astronomiques et Physiques 1 Lecture Notes in Mathematics 1 Mathematics and its Applications (Dordrecht) 1 Aportaciones Matematicas. Textos

#### Fields

72 Numerical analysis (65-XX) 49 Operator theory (47-XX) 40 Partial differential equations (35-XX) 20 Integral equations (45-XX) 16 Probability theory and stochastic processes (60-XX) 8 General and overarching topics; collections (00-XX) 7 History and biography (01-XX) 7 Optics, electromagnetic theory (78-XX) 7 Biology and other natural sciences (92-XX) 6 Operations research, mathematical programming (90-XX) 5 Mechanics of deformable solids (74-XX) 5 Statistical mechanics, structure of matter (82-XX) 4 Ordinary differential equations (34-XX) 3 Measure and integration (28-XX) 3 Approximations and expansions (41-XX) 3 Calculus of variations and optimal control; optimization (49-XX) 3 Statistics (62-XX) 3 Classical thermodynamics, heat transfer (80-XX) 3 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Integral transforms, operational calculus (44-XX) 2 General topology (54-XX) 2 Fluid mechanics (76-XX) 2 Geophysics (86-XX) 2 Systems theory; control (93-XX) 2 Information and communication theory, circuits (94-XX) 1 Real functions (26-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Difference and functional equations (39-XX) 1 Harmonic analysis on Euclidean spaces (42-XX) 1 Functional analysis (46-XX) 1 Algebraic topology (55-XX) 1 Mechanics of particles and systems (70-XX) 1 Quantum theory (81-XX)

#### Citations contained in zbMATH Open

87 Publications have been cited 2,193 times in 1,743 Documents Cited by Year

Regularization of inverse problems. Zbl 0859.65054 Engl, Heinz W.; Hanke, Martin; Neubauer, Andreas 1996

Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems.
Zbl 0695.65037 Engl, Heinz W.; Kunisch, Karl; Neubauer, Andreas 1989 Tikhonov regularization of nonlinear differential-algebraic equations. Zbl 0711.34018 Engl, H. W.; Hanke, M.; Neubauer, A. 1990 Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates. Zbl 1205.65194 Egger, Herbert; Engl, Heinz W. 2005 Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. Zbl 0799.65060 Scherzer, O.; Engl, H. W.; Kunisch, K. 1993 A new approach to convergence rate analysis of Tikhonov regularization for parameter identification in heat conduction. Zbl 0968.35124 Engl, Heinz W.; Zou, Jun 2000 Regularization methods for the stable solution of inverse problems. Zbl 0776.65043 Engl, Heinz W. 1993 A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Zbl 0915.65053 Deuflhard, Peter; Engl, Heinz W.; Scherzer, Otmar 1998 Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Zbl 0586.65045 Engl, H. W. 1987 A posteriori parameter choice for general regularization methods for solving linear ill-posed problems. Zbl 0647.65038 Engl, Heinz W.; Gfrerer, Helmut 1988 Inverse problems in systems biology. Zbl 1193.34001 Engl, Heinz W.; Flamm, Christoph; Kügler, Philipp; Lu, James; Müller, Stefan; Schuster, Peter 2009 Uniqueness and stable determination of forcing terms in linear partial differential equations with overspecified boundary data. Zbl 0809.35154 Engl, Heinz W.; Scherzer, Otmar; Yamamoto, Masahiro 1994 Random fixed point theorems for multivalued mappings. Zbl 0355.47035 Engl, Heinz W. 1978 Global uniqueness and Hölder stability for recovering a nonlinear source term in a parabolic equation. Zbl 1086.35132 Egger, Herbert; Engl, Heinz W.; Klibanov, Michael V. 2005 A Mann iterative regularization method for elliptic Cauchy problems. Zbl 0998.65114 Engl, H. 
W.; Leitão, A. 2001 On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems. Zbl 0608.65033 Engl, Heinz W. 1987 New extremal characterizations of generalized inverse of linear operators. Zbl 0492.47012 Engl, Heinz W.; Nashed, M. Z. 1981 The expectation-maximization algorithm for ill-posed integral equations: a convergence analysis. Zbl 1282.65173 Resmerita, Elena; Engl, Heinz W.; Iusem, Alfredo N. 2007 Necessary and sufficient conditions for convergence of regularization methods for solving linear operator equations of the first kind. Zbl 0472.65045 Engl, Heinz W. 1981 A general stochastic fixed-point theorem for continuous random operators on stochastic domains. Zbl 0398.60063 Engl, Heinz W. 1978 Inverse problems related to ion channel selectivity. Zbl 1127.35074 Burger, Martin; Eisenberg, Robert S.; Engl, Heinz W. 2007 Using the $$L$$-curve for determining optimal regularization parameters. Zbl 0819.65090 Engl, Heinz W.; Grever, Wilhelm 1994 Convergence rates results for iterative methods for solving nonlinear ill-posed problems. Zbl 0998.65058 Engl, H. W.; Scherzer, O. 2000 Convergence rates for maximum entropy regularization. Zbl 0790.65110 Engl, Heinz W.; Landl, Gerhard 1993 Some random fixed point theorems for strict contractions and nonexpansive mappings. Zbl 0382.60068 Engl, Heinz W. 1978 Identification of doping profiles in semiconductor devices. Zbl 0989.35139 Burger, Martin; Engl, Heinz W.; Markowich, Peter A.; Pietra, Paola 2001 Inverse and ill-posed problems. (Papers presented at the Alpine-U.S. Seminar on Inverse and Ill-posed Problems, held June 1986 in St. Wolfgang, Austria). Zbl 0623.00010 Engl, Heinz W. (ed.); Groetsch, C. W. (ed.) 1987 Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. 
Zbl 0835.65078 Binder, Andreas; Engl, Heinz W.; Groetsch, Charles W.; Neubauer, Andreas; Scherzer, Otmar 1994 Generalized inverses of random linear operators in Banach spaces. Zbl 0485.47001 Engl, Heinz W.; Nashed, M. Zuhair 1981 Stochastic projectional schemes for random linear operator equations of the first and second kinds. Zbl 0446.60048 Engl, Heinz W.; Nashed, M. Z. 1979 Random generalized inverses and approximate solutions of random operator equations. Zbl 0427.60072 Nashed, M. Z.; Engl, H. W. 1979 Inverse problems related to crystallization of polymers. Zbl 0918.35143 Burger, Martin; Capasso, Vincenzo; Engl, Heinz W. 1999 A regularization scheme for an inverse problem in age-structured populations. Zbl 0841.92021 Engl, Heinz W.; Rundell, William; Scherzer, Otmar 1994 An improved version of Marti’s method for solving ill-posed linear integral equations. Zbl 0578.65135 Engl, Heinz W.; Neubauer, Andreas 1985 Optimal discrepancy principles for the Tikhonov regularization of integral equations of the first kind. Zbl 0562.65086 Engl, Heinz W.; Neubauer, Andreas 1985 Parameter identification in a random environment exemplified by a multiscale model for crystal growth. Zbl 1185.60010 Capasso, Vincenzo; Engl, Heinz W.; Kindermann, Stefan 2008 Solving linear boundary value problems via non-commutative Gröbner bases. Zbl 1042.34030 Rosenkranz, Markus; Buchberger, Bruno; Engl, Heinz W. 2003 Regularization methods for solving inverse problems. Zbl 0992.65058 Engl, Heinz W. 2000 Training neural networks with noisy data as an ill-posed problem. Zbl 1126.41301 Burger, Martin; Engl, Heinz W. 2000 Integral equations. Zbl 0898.45001 Engl, Heinz W. 1997 Stability estimates and regularization for an inverse heat conduction problem in semi-infinite and finite time intervals. Zbl 0682.35101 Engl, Heinz W.; Manselli, Paolo 1989 Identification of the local speed function in a Lévy model for option pricing. Zbl 1149.91034 Kendermann, S.; Mayer, P.; Albrecher, H.; Engl, H. 
2008 On inverse problems for semiconductor equations. Zbl 1214.82122 Burger, M.; Engl, H. W.; Leitao, A.; Markowich, P. A. 2004 Regularized data-driven construction of fuzzy controllers. Zbl 1022.65073 Burger, M.; Haslinger, J.; Bodenhofer, U.; Engl, H. W. 2002 Some inverse problems for a nonlinear parabolic equation connected with continuous casting of steel: Stability estimates and regularization. Zbl 0738.65088 Binder, Andreas; Engl, Heinz W.; Vessella, Sergio 1990 Optimal parameter choice for ordinary and iterated Tikhonov regularization. Zbl 0627.65060 Engl, Heinz W.; Neubauer, Andreas 1987 Convergence of approximate solutions of nonlinear random operator equations with non-unique solutions. Zbl 0517.60069 Engl, Heinz W.; Roemisch, Werner 1983 Random fixed point theorems. Zbl 0462.60065 Engl, Heinz W. 1978 Natural linearization for the identification of nonlinear heat transfer laws. Zbl 1095.35067 Engl, H. W.; Fusek, P.; Pereverzev, S. V. 2005 Identification of a temperature dependent heat conductivity by Tikhonov regularization. Zbl 1027.35159 Kügler, P.; Engl, H. W. 2002 Surveys on solution methods for inverse problems. Zbl 0948.00011 Colton, David (ed.); Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); McLaughlin, Joyce R. (ed.); Rundell, William (ed.) 2000 Parameter identification from boundary measurements in a parabolic equation arising from geophysics. Zbl 0776.35083 Scherzer, O.; Engl, H. W.; Anderssen, R. S. 1993 A parameter choice strategy for (iterated) Tikhonov regularization of ill-posed problems leading to superconvergence with optimal rates. Zbl 0621.65053 Engl, Heinz W.; Neubauer, Andreas 1988 Weak convergence of approximate solutions of stochastic equations with applications to random differential and integral equations. Zbl 0631.60062 Engl, Heinz W.; Römisch, Werner 1987 On an inverse problem for a nonlinear heat equation connected with continuous casting of steel. 
Zbl 0629.35115 Engl, Heinz W.; Langthaler, Thomas; Manselli, Paolo 1987 Numerical solution of an inverse problem connected with continuous casting of steel. Zbl 0591.90096 Engl, H. W.; Langthaler, T. 1985 On the convergence of regularization methods for ill-posed linear operator equations. Zbl 0514.65039 Engl, H. W. 1983 Convergence rates in the Prokhorov metric for assessing uncertainty in ill-posed problems. Zbl 1134.65350 Engl, Heinz W.; Hofinger, Andreas; Kindermann, Stefan 2005 The influence of the equation type on iterative parameter identification problems which are elliptic or hyperbolic in the parameter. Zbl 1055.35137 Engl, Heinz W.; Kügler, Philipp 2003 Introduction to: Surveys on solution methods for inverse problems. Zbl 0963.01010 Colton, D.; Engl, H. W.; Louis, A. K.; McLaughlin, J. R.; Rundell, W. 2000 An optimal stopping rule for the $$v$$-method for solving ill-posed problems, using Christoffel functions. Zbl 0812.41028 Hanke, Martin; Engl, Heinz W. 1994 Convergence rates for Tikhonov regularization in finite-dimensional subspaces of Hilbert scales. Zbl 0654.65041 Engl, Heinz W.; Neubauer, Andreas 1988 Approximate solutions of nonlinear random operator equations: Convergence in distribution. Zbl 0525.60068 Engl, Heinz W.; Römisch, Werner 1985 Existence and uniqueness of solutions for nonlinear alternative problems in a Banach space. Zbl 0499.34040 Cesari, Lamberto; Engl, Heinz W. 1981 Existence of measurable optima in stochastic nonlinear programming and control. Zbl 0418.90067 Engl, Heinz W. 1979 Inverse doping problems for semiconductor devices. Zbl 1255.82062 Burger, Martin; Engl, Heinz W.; Markowich, Peter A. 2002 Inverse Problems in medical imaging and nondestructive testing. Proceedings of the conference in Oberwolfach, Germany, February 4–10, 1996. Zbl 0871.00041 Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); Rundell, William (ed.) 1997 Inverse problems in geophysical applications. 
Proceedings of the GAMM-SIAM conference held in Yosemite, CA, USA, December 16–19, 1995. Zbl 0857.00035 Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); Rundell, William (ed.) 1997 Inverse problems in diffusion processes. Proceedings of the GAMM-SIAM symposium, St. Wolfgang, Austria, June 27-July 1, 1994. Zbl 0829.00025 Engl, Heinz W. (ed.); Rundell, William (ed.) 1995 Optimum structural design using MSC/NASTRAN and sequential quadratic programming. Zbl 0868.73057 Mahmoud, K. G.; Engl, H. W.; Holzleitner, L. 1994 Hansjörg Wacker (1939-1991). Zbl 0733.01023 Engl, Heinz W. 1991 Projection-regularization methods for linear operator equations of the first kind. Zbl 0671.47006 Engl, Heinz W.; Groetsch, Charles W. 1988 Case studies in industrial mathematics. Zbl 0649.00007 Engl, Heinz W. (ed.); Wacker, Hansjörg (ed.); Zulehner, Walter (ed.) 1988 Continuity properties of the extension of a locally Lipschitz continuous map to the space of probability measures. Zbl 0571.60069 Engl, H. W.; Wakolbinger, A. 1985 A combined boundary value and transmission problem arising from the calculation of eddy currents: well-posedness and numerical treatment. Zbl 0578.35012 Engl, H. W.; Lindner, E. 1984 Weak convergence of asymptotically regular sequences for nonexpansive mappings and connections with certain Chebyshef-centers. Zbl 0409.47040 Engl, Heinz W. 1977 Corrigendum: The expectation-maximization algorithm for ill-posed integral equations: a convergence analysis. Zbl 1282.65174 Resmerita, Elena; Engl, Heinz W.; Iusem, Alfredo N. 2008 Identification of parameters in polymer crystallization, semiconductor models and elasticity via iterative regularization methods. Zbl 1059.35166 Engl, H. W. 2002 Identification of heat transfer functions in continuous casting of steel by regularization. Zbl 0973.35195 Carthel, C.; Engl, H. W. 2000 Tikhonov regularization for an inverse source problem for a coupled system of reaction-diffusion equations. Zbl 0919.35146 Engl, H. W.; Nanda, A. 
1998 Regularization methods for nonlinear ill-posed problems with applications to phase reconstruction. Zbl 0880.65034 Blaschke-Kaltenbacher, Barbara; Engl, Heinz W. 1997 A decreasing rearrangement approach for a class of ill-posed nonlinear integral equations. Zbl 0799.65149 Engl, Heinz W.; Hofmann, Bernd; Zeisel, Helmut 1993 Uniform convergence of regularization methods for linear ill-posed problems. Zbl 0747.65037 Engl, Heinz W.; Hodina, Günther 1991 Inverse and ill-posed problems. Zbl 0747.35052 Engl, Heinz W. 1991 On weak limits of probability distributions on Polish spaces. Zbl 0512.60008 Engl, Heinz W.; Wakolbinger, Anton 1983 Bemerkungen zur Aufwandsminimierung bei Stetigkeitsmethoden sowie Alternativen bei der Behandlung der singulären Situation. Zbl 0402.65039 Wacker, Hj.; Engl, H.; Zarzer, E. 1977 Ausgewählte Aspekte der Kontrolltheorie. Zbl 0388.49003 Engl, H. W.; Wacker, Hj.; Zarzer, E. A. 1976 Inverse problems in systems biology. Zbl 1193.34001 Engl, Heinz W.; Flamm, Christoph; Kügler, Philipp; Lu, James; Müller, Stefan; Schuster, Peter 2009 Parameter identification in a random environment exemplified by a multiscale model for crystal growth. Zbl 1185.60010 Capasso, Vincenzo; Engl, Heinz W.; Kindermann, Stefan 2008 Identification of the local speed function in a Lévy model for option pricing. Zbl 1149.91034 Kendermann, S.; Mayer, P.; Albrecher, H.; Engl, H. 2008 Corrigendum: The expectation-maximization algorithm for ill-posed integral equations: a convergence analysis. Zbl 1282.65174 Resmerita, Elena; Engl, Heinz W.; Iusem, Alfredo N. 2008 The expectation-maximization algorithm for ill-posed integral equations: a convergence analysis. Zbl 1282.65173 Resmerita, Elena; Engl, Heinz W.; Iusem, Alfredo N. 2007 Inverse problems related to ion channel selectivity. Zbl 1127.35074 Burger, Martin; Eisenberg, Robert S.; Engl, Heinz W. 2007 Tikhonov regularization applied to the inverse problem of option pricing: convergence analysis and rates. 
Zbl 1205.65194 Egger, Herbert; Engl, Heinz W. 2005 Global uniqueness and Hölder stability for recovering a nonlinear source term in a parabolic equation. Zbl 1086.35132 Egger, Herbert; Engl, Heinz W.; Klibanov, Michael V. 2005 Natural linearization for the identification of nonlinear heat transfer laws. Zbl 1095.35067 Engl, H. W.; Fusek, P.; Pereverzev, S. V. 2005 Convergence rates in the Prokhorov metric for assessing uncertainty in ill-posed problems. Zbl 1134.65350 Engl, Heinz W.; Hofinger, Andreas; Kindermann, Stefan 2005 On inverse problems for semiconductor equations. Zbl 1214.82122 Burger, M.; Engl, H. W.; Leitao, A.; Markowich, P. A. 2004 Solving linear boundary value problems via non-commutative Gröbner bases. Zbl 1042.34030 Rosenkranz, Markus; Buchberger, Bruno; Engl, Heinz W. 2003 The influence of the equation type on iterative parameter identification problems which are elliptic or hyperbolic in the parameter. Zbl 1055.35137 Engl, Heinz W.; Kügler, Philipp 2003 Regularized data-driven construction of fuzzy controllers. Zbl 1022.65073 Burger, M.; Haslinger, J.; Bodenhofer, U.; Engl, H. W. 2002 Identification of a temperature dependent heat conductivity by Tikhonov regularization. Zbl 1027.35159 Kügler, P.; Engl, H. W. 2002 Inverse doping problems for semiconductor devices. Zbl 1255.82062 Burger, Martin; Engl, Heinz W.; Markowich, Peter A. 2002 Identification of parameters in polymer crystallization, semiconductor models and elasticity via iterative regularization methods. Zbl 1059.35166 Engl, H. W. 2002 A Mann iterative regularization method for elliptic Cauchy problems. Zbl 0998.65114 Engl, H. W.; Leitão, A. 2001 Identification of doping profiles in semiconductor devices. Zbl 0989.35139 Burger, Martin; Engl, Heinz W.; Markowich, Peter A.; Pietra, Paola 2001 A new approach to convergence rate analysis of Tikhonov regularization for parameter identification in heat conduction. 
Zbl 0968.35124 Engl, Heinz W.; Zou, Jun 2000 Convergence rates results for iterative methods for solving nonlinear ill-posed problems. Zbl 0998.65058 Engl, H. W.; Scherzer, O. 2000 Regularization methods for solving inverse problems. Zbl 0992.65058 Engl, Heinz W. 2000 Training neural networks with noisy data as an ill-posed problem. Zbl 1126.41301 Burger, Martin; Engl, Heinz W. 2000 Surveys on solution methods for inverse problems. Zbl 0948.00011 Colton, David (ed.); Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); McLaughlin, Joyce R. (ed.); Rundell, William (ed.) 2000 Introduction to: Surveys on solution methods for inverse problems. Zbl 0963.01010 Colton, D.; Engl, H. W.; Louis, A. K.; McLaughlin, J. R.; Rundell, W. 2000 Identification of heat transfer functions in continuous casting of steel by regularization. Zbl 0973.35195 Carthel, C.; Engl, H. W. 2000 Inverse problems related to crystallization of polymers. Zbl 0918.35143 Burger, Martin; Capasso, Vincenzo; Engl, Heinz W. 1999 A convergence analysis of iterative methods for the solution of nonlinear ill-posed problems under affinely invariant conditions. Zbl 0915.65053 Deuflhard, Peter; Engl, Heinz W.; Scherzer, Otmar 1998 Tikhonov regularization for an inverse source problem for a coupled system of reaction-diffusion equations. Zbl 0919.35146 Engl, H. W.; Nanda, A. 1998 Integral equations. Zbl 0898.45001 Engl, Heinz W. 1997 Inverse Problems in medical imaging and nondestructive testing. Proceedings of the conference in Oberwolfach, Germany, February 4–10, 1996. Zbl 0871.00041 Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); Rundell, William (ed.) 1997 Inverse problems in geophysical applications. Proceedings of the GAMM-SIAM conference held in Yosemite, CA, USA, December 16–19, 1995. Zbl 0857.00035 Engl, Heinz W. (ed.); Louis, Alfred K. (ed.); Rundell, William (ed.) 1997 Regularization methods for nonlinear ill-posed problems with applications to phase reconstruction. 
Zbl 0880.65034 Blaschke-Kaltenbacher, Barbara; Engl, Heinz W. 1997 Regularization of inverse problems. Zbl 0859.65054 Engl, Heinz W.; Hanke, Martin; Neubauer, Andreas 1996 Inverse problems in diffusion processes. Proceedings of the GAMM-SIAM symposium, St. Wolfgang, Austria, June 27-July 1, 1994. Zbl 0829.00025 Engl, Heinz W. (ed.); Rundell, William (ed.) 1995 Uniqueness and stable determination of forcing terms in linear partial differential equations with overspecified boundary data. Zbl 0809.35154 Engl, Heinz W.; Scherzer, Otmar; Yamamoto, Masahiro 1994 Using the $$L$$-curve for determining optimal regularization parameters. Zbl 0819.65090 Engl, Heinz W.; Grever, Wilhelm 1994 Weakly closed nonlinear operators and parameter identification in parabolic equations by Tikhonov regularization. Zbl 0835.65078 Binder, Andreas; Engl, Heinz W.; Groetsch, Charles W.; Neubauer, Andreas; Scherzer, Otmar 1994 A regularization scheme for an inverse problem in age-structured populations. Zbl 0841.92021 Engl, Heinz W.; Rundell, William; Scherzer, Otmar 1994 An optimal stopping rule for the $$v$$-method for solving ill-posed problems, using Christoffel functions. Zbl 0812.41028 Hanke, Martin; Engl, Heinz W. 1994 Optimum structural design using MSC/NASTRAN and sequential quadratic programming. Zbl 0868.73057 Mahmoud, K. G.; Engl, H. W.; Holzleitner, L. 1994 Optimal a posteriori parameter choice for Tikhonov regularization for solving nonlinear ill-posed problems. Zbl 0799.65060 Scherzer, O.; Engl, H. W.; Kunisch, K. 1993 Regularization methods for the stable solution of inverse problems. Zbl 0776.65043 Engl, Heinz W. 1993 Convergence rates for maximum entropy regularization. Zbl 0790.65110 Engl, Heinz W.; Landl, Gerhard 1993 Parameter identification from boundary measurements in a parabolic equation arising from geophysics. Zbl 0776.35083 Scherzer, O.; Engl, H. W.; Anderssen, R. S. 1993 A decreasing rearrangement approach for a class of ill-posed nonlinear integral equations. 
Zbl 0799.65149 Engl, Heinz W.; Hofmann, Bernd; Zeisel, Helmut 1993 Hansjörg Wacker (1939-1991). Zbl 0733.01023 Engl, Heinz W. 1991 Uniform convergence of regularization methods for linear ill-posed problems. Zbl 0747.65037 Engl, Heinz W.; Hodina, Günther 1991 Inverse and ill-posed problems. Zbl 0747.35052 Engl, Heinz W. 1991 Tikhonov regularization of nonlinear differential-algebraic equations. Zbl 0711.34018 Engl, H. W.; Hanke, M.; Neubauer, A. 1990 Some inverse problems for a nonlinear parabolic equation connected with continuous casting of steel: Stability estimates and regularization. Zbl 0738.65088 Binder, Andreas; Engl, Heinz W.; Vessella, Sergio 1990 Convergence rates for Tikhonov regularisation of nonlinear ill-posed problems. Zbl 0695.65037 Engl, Heinz W.; Kunisch, Karl; Neubauer, Andreas 1989 Stability estimates and regularization for an inverse heat conduction problem in semi-infinite and finite time intervals. Zbl 0682.35101 Engl, Heinz W.; Manselli, Paolo 1989 A posteriori parameter choice for general regularization methods for solving linear ill-posed problems. Zbl 0647.65038 Engl, Heinz W.; Gfrerer, Helmut 1988 A parameter choice strategy for (iterated) Tikhonov regularization of ill-posed problems leading to superconvergence with optimal rates. Zbl 0621.65053 Engl, Heinz W.; Neubauer, Andreas 1988 Convergence rates for Tikhonov regularization in finite-dimensional subspaces of Hilbert scales. Zbl 0654.65041 Engl, Heinz W.; Neubauer, Andreas 1988 Projection-regularization methods for linear operator equations of the first kind. Zbl 0671.47006 Engl, Heinz W.; Groetsch, Charles W. 1988 Case studies in industrial mathematics. Zbl 0649.00007 Engl, Heinz W. (ed.); Wacker, Hansjörg (ed.); Zulehner, Walter (ed.) 1988 Discrepancy principles for Tikhonov regularization of ill-posed problems leading to optimal convergence rates. Zbl 0586.65045 Engl, H. W. 
1987 On the choice of the regularization parameter for iterated Tikhonov regularization of ill-posed problems. Zbl 0608.65033 Engl, Heinz W. 1987 Inverse and ill-posed problems. (Papers presented at the Alpine-U.S. Seminar on Inverse and Ill-posed Problems, held June 1986 in St. Wolfgang, Austria). Zbl 0623.00010 Engl, Heinz W. (ed.); Groetsch, C. W. (ed.) 1987 Optimal parameter choice for ordinary and iterated Tikhonov regularization. Zbl 0627.65060 Engl, Heinz W.; Neubauer, Andreas 1987 Weak convergence of approximate solutions of stochastic equations with applications to random differential and integral equations. Zbl 0631.60062 Engl, Heinz W.; Römisch, Werner 1987 On an inverse problem for a nonlinear heat equation connected with continuous casting of steel. Zbl 0629.35115 Engl, Heinz W.; Langthaler, Thomas; Manselli, Paolo 1987 An improved version of Marti’s method for solving ill-posed linear integral equations. Zbl 0578.65135 Engl, Heinz W.; Neubauer, Andreas 1985 Optimal discrepancy principles for the Tikhonov regularization of integral equations of the first kind. Zbl 0562.65086 Engl, Heinz W.; Neubauer, Andreas 1985 Numerical solution of an inverse problem connected with continuous casting of steel. Zbl 0591.90096 Engl, H. W.; Langthaler, T. 1985 Approximate solutions of nonlinear random operator equations: Convergence in distribution. Zbl 0525.60068 Engl, Heinz W.; Römisch, Werner 1985 Continuity properties of the extension of a locally Lipschitz continuous map to the space of probability measures. Zbl 0571.60069 Engl, H. W.; Wakolbinger, A. 1985 A combined boundary value and transmission problem arising from the calculation of eddy currents: well-posedness and numerical treatment. Zbl 0578.35012 Engl, H. W.; Lindner, E. 1984 Convergence of approximate solutions of nonlinear random operator equations with non-unique solutions. 
Zbl 0517.60069 Engl, Heinz W.; Roemisch, Werner 1983 On the convergence of regularization methods for ill-posed linear operator equations. Zbl 0514.65039 Engl, H. W. 1983 On weak limits of probability distributions on Polish spaces. Zbl 0512.60008 Engl, Heinz W.; Wakolbinger, Anton 1983 New extremal characterizations of generalized inverse of linear operators. Zbl 0492.47012 Engl, Heinz W.; Nashed, M. Z. 1981 Necessary and sufficient conditions for convergence of regularization methods for solving linear operator equations of the first kind. Zbl 0472.65045 Engl, Heinz W. 1981 Generalized inverses of random linear operators in Banach spaces. Zbl 0485.47001 Engl, Heinz W.; Nashed, M. Zuhair 1981 Existence and uniqueness of solutions for nonlinear alternative problems in a Banach space. Zbl 0499.34040 Cesari, Lamberto; Engl, Heinz W. 1981 Stochastic projectional schemes for random linear operator equations of the first and second kinds. Zbl 0446.60048 Engl, Heinz W.; Nashed, M. Z. 1979 Random generalized inverses and approximate solutions of random operator equations. Zbl 0427.60072 Nashed, M. Z.; Engl, H. W. 1979 Existence of measurable optima in stochastic nonlinear programming and control. Zbl 0418.90067 Engl, Heinz W. 1979 Random fixed point theorems for multivalued mappings. Zbl 0355.47035 Engl, Heinz W. 1978 A general stochastic fixed-point theorem for continuous random operators on stochastic domains. Zbl 0398.60063 Engl, Heinz W. 1978 Some random fixed point theorems for strict contractions and nonexpansive mappings. Zbl 0382.60068 Engl, Heinz W. 1978 Random fixed point theorems. Zbl 0462.60065 Engl, Heinz W. 1978 Weak convergence of asymptotically regular sequences for nonexpansive mappings and connections with certain Chebyshef-centers. Zbl 0409.47040 Engl, Heinz W. 1977 Bemerkungen zur Aufwandsminimierung bei Stetigkeitsmethoden sowie Alternativen bei der Behandlung der singulären Situation. Zbl 0402.65039 Wacker, Hj.; Engl, H.; Zarzer, E. 
1977 Ausgewählte Aspekte der Kontrolltheorie. Zbl 0388.49003 Engl, H. W.; Wacker, Hj.; Zarzer, E. A. 1976 all top 5 #### Cited by 2,246 Authors 63 Reichel, Lothar 33 Hofmann, Bernd 28 Neubauer, Andreas 26 Engl, Heinz W. 26 Fu, Chuli 25 Nair, M. Thamban 24 George, Santhosh 24 Scherzer, Otmar 23 Kaltenbacher, Barbara 23 Xiong, Xiangtuan 22 Wei, Ting 18 Donatelli, Marco 18 Pereverzev, Sergei V. 18 Ramlau, Ronny 17 Jin, Qinian 17 Lu, Shuai 17 Mathé, Peter 16 Burger, Martin 16 Han, Bo 16 Kindermann, Stefan 15 Yang, Liu 14 Deng, Zuicha 12 Buccini, Alessandro 12 Jin, Bangti 12 Leitão, Antonio 11 Kokurin, Mikhail Yur’evich 11 Leonov, Aleksandr Sergeevich 11 Lesnic, Daniel 11 Nashed, M. Zuhair 11 Serra-Capizzano, Stefano 11 Sgallari, Fiorella 10 De Cezaro, Adriano 10 Haltmeier, Markus 10 He, Guoqiang 10 Klibanov, Michael V. 10 Smirnova, Alexandra B. 10 Trong, Dang Duc 10 Yang, Fan 9 Hanke-Bourgeois, Martin 9 Johansson, B. Tomas 9 Morigi, Serena 9 Qian, Zhi 9 Rajan, M. P. 9 Tautenhahn, Ulrich 9 Zhang, Ye 9 Zou, Jun 8 Argyros, Ioannis Konstantinos 8 Ascher, Uri M. 8 Blanchard, Gilles 8 Cheng, Xiaoliang 8 Đinh Nho Hào 8 Estatico, Claudio 8 Golubev, Yuriĭ K. 8 Groetsch, Charles W. 8 Hohage, Thorsten 8 Loubes, Jean-Michel 8 Mahale, Pallavi 8 Marin, Liviu 8 Noschese, Silvia 8 Plato, Robert 8 Rosasco, Lorenzo A. 8 Sadok, Hassane 8 Stuart, Andrew M. 8 Yamamoto, Masahiro 8 Yang, Hongqi 7 Anderssen, Robert Scott 7 Bakushinskĭ, Anatoliĭ Borisovich B. 7 Florens, Jean-Pierre 7 Freeden, Willi 7 Hämarik, Uno 7 Hamdi, Adel 7 Hein, Torsten 7 Jbilou, Khalide 7 Kunisch, Karl 7 Li, Xiaoxiao 7 Maass, Peter 7 Meng, Zehong 7 Potthast, Roland W. E. 7 Slodička, Marián 7 Spies, Ruben D. 7 Xu, Hong-Kun 7 Yu, Jianning 7 Zhao, Zhenyu 6 Beilina, Larisa 6 Bissantz, Nicolai 6 Cheng, Hao 6 De Micheli, Enrico 6 Feng, Xiaoli 6 Gerth, Daniel 6 Gong, Rongfang 6 Gulliksson, Mårten E. 6 Hasanoǧlu, Alemdar 6 Hochstenbach, Michiel E. 
6 Hofinger, Andreas 6 Hon, Yiu-Chung 6 Huang, Guang-Xin 6 Jia, Zhongxiao 6 Kabanikhin, Sergeĭ Igorevich 6 Kang, Chuangang 6 Liu, Tao ...and 2,146 more Authors all top 5 #### Cited in 293 Serials 105 Journal of Computational and Applied Mathematics 90 Journal of Inverse and Ill-Posed Problems 69 Applied Mathematics and Computation 54 Inverse Problems 54 Inverse Problems in Science and Engineering 48 Journal of Mathematical Analysis and Applications 45 Numerical Functional Analysis and Optimization 43 Applied Numerical Mathematics 33 Journal of Computational Physics 31 Applicable Analysis 31 Numerische Mathematik 30 Numerical Algorithms 29 Journal of Integral Equations and Applications 28 Computers & Mathematics with Applications 27 Applied Mathematical Modelling 26 Inverse Problems and Imaging 20 Journal of Optimization Theory and Applications 19 Mathematics of Computation 19 Mathematics and Computers in Simulation 17 SIAM Journal on Imaging Sciences 16 Linear Algebra and its Applications 15 SIAM Journal on Scientific Computing 14 Computer Methods in Applied Mechanics and Engineering 14 BIT 14 Computing 14 Journal of Scientific Computing 14 Mathematical Problems in Engineering 13 SIAM Journal on Numerical Analysis 12 Journal of Complexity 12 Applied Mathematics Letters 12 International Journal of Computer Mathematics 12 Advances in Computational Mathematics 11 Journal of Econometrics 11 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 11 SIAM Journal on Applied Mathematics 11 Journal of Mathematical Imaging and Vision 11 Journal of Applied Mathematics and Computing 11 Electronic Journal of Statistics 10 Journal of Engineering Mathematics 10 Mathematical and Computer Modelling 10 Computational Optimization and Applications 10 ETNA. Electronic Transactions on Numerical Analysis 10 Nonlinear Analysis. 
Real World Applications 9 The Annals of Statistics 9 Journal of Approximation Theory 9 Engineering Analysis with Boundary Elements 9 GEM - International Journal on Geomathematics 9 Nonlinear Analysis. Theory, Methods & Applications 8 Stochastic Analysis and Applications 8 Applied and Computational Harmonic Analysis 8 Abstract and Applied Analysis 8 Computational Methods in Applied Mathematics 8 European Series in Applied and Industrial Mathematics (ESAIM): Mathematical Modelling and Numerical Analysis 7 Computer Physics Communications 7 Journal of Mathematical Biology 7 Calcolo 7 Integral Equations and Operator Theory 7 Computational Mathematics and Mathematical Physics 7 Russian Mathematics 6 Mathematical Biosciences 6 Proceedings of the American Mathematical Society 6 Zeitschrift für Analysis und ihre Anwendungen 6 Applied Mathematics and Mechanics. (English Edition) 6 Acta Mathematicae Applicatae Sinica. English Series 6 Journal of Symbolic Computation 6 Mathematical Methods of Statistics 6 Journal of Inequalities and Applications 6 Comptes Rendus. Mathématique. Académie des Sciences, Paris 5 International Journal of Heat and Mass Transfer 5 Mathematical Methods in the Applied Sciences 5 Journal of Differential Equations 5 Journal of Statistical Planning and Inference 5 Journal of Mathematical Sciences (New York) 5 Computational and Applied Mathematics 5 Bernoulli 5 Journal of Shanghai University 5 Communications in Nonlinear Science and Numerical Simulation 5 Computational Geosciences 5 Analysis and Applications (Singapore) 5 Fixed Point Theory and Applications 5 Journal of Theoretical Biology 5 SIAM/ASA Journal on Uncertainty Quantification 4 Chinese Annals of Mathematics. Series B 4 Acta Applicandae Mathematicae 4 Physica D 4 Computational Mechanics 4 Science in China. Series A 4 Annales de l’Institut Henri Poincaré. 
Probabilités et Statistiques 4 SIAM Journal on Optimization 4 The Journal of Fourier Analysis and Applications 4 Doklady Mathematics 4 European Series in Applied and Industrial Mathematics (ESAIM): Control, Optimization and Calculus of Variations 4 Journal of Applied Mathematics 4 Journal of Machine Learning Research (JMLR) 4 Multiscale Modeling & Simulation 3 Bulletin of the Australian Mathematical Society 3 International Journal of Solids and Structures 3 Lithuanian Mathematical Journal 3 Bulletin of Mathematical Biology 3 Annali di Matematica Pura ed Applicata. Serie Quarta ...and 193 more Serials all top 5 #### Cited in 53 Fields 1,134 Numerical analysis (65-XX) 496 Partial differential equations (35-XX) 433 Operator theory (47-XX) 170 Statistics (62-XX) 152 Calculus of variations and optimal control; optimization (49-XX) 143 Integral equations (45-XX) 118 Biology and other natural sciences (92-XX) 110 Information and communication theory, circuits (94-XX) 90 Probability theory and stochastic processes (60-XX) 82 Operations research, mathematical programming (90-XX) 80 Computer science (68-XX) 65 Optics, electromagnetic theory (78-XX) 58 Mechanics of deformable solids (74-XX) 58 Classical thermodynamics, heat transfer (80-XX) 53 Systems theory; control (93-XX) 51 Fluid mechanics (76-XX) 51 Geophysics (86-XX) 50 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 49 Ordinary differential equations (34-XX) 29 Functional analysis (46-XX) 25 Linear and multilinear algebra; matrix theory (15-XX) 24 Integral transforms, operational calculus (44-XX) 22 Statistical mechanics, structure of matter (82-XX) 21 Approximations and expansions (41-XX) 18 Harmonic analysis on Euclidean spaces (42-XX) 12 Functions of a complex variable (30-XX) 12 Potential theory (31-XX) 11 Dynamical systems and ergodic theory (37-XX) 8 Real functions (26-XX) 7 General topology (54-XX) 5 Special functions (33-XX) 5 Convex and discrete geometry (52-XX) 4 History and 
biography (01-XX) 4 Measure and integration (28-XX) 4 Differential geometry (53-XX) 4 Global analysis, analysis on manifolds (58-XX) 4 Quantum theory (81-XX) 4 Astronomy and astrophysics (85-XX) 3 Mechanics of particles and systems (70-XX) 2 General and overarching topics; collections (00-XX) 2 Mathematical logic and foundations (03-XX) 2 Combinatorics (05-XX) 2 Commutative algebra (13-XX) 2 Mathematics education (97-XX) 1 Number theory (11-XX) 1 Field theory and polynomials (12-XX) 1 Algebraic geometry (14-XX) 1 Several complex variables and analytic spaces (32-XX) 1 Difference and functional equations (39-XX) 1 Sequences, series, summability (40-XX) 1 Abstract harmonic analysis (43-XX) 1 Algebraic topology (55-XX) 1 Manifolds and cell complexes (57-XX) #### Wikidata Timeline The data are displayed as stored in Wikidata under a Creative Commons CC0 License. Updates and corrections should be made in Wikidata.
# Session 9: Partial differential equation

Date: 11/13/2017, Monday

In [1]:

format compact

I assume you've learned very little about PDEs in basic math classes, so I will first talk about PDEs from a programmer's perspective and come to the math when necessary.

A PDE from a programmer's view: you already know that an ODE describes the evolution of a single value. From the current value $$y_t$$ you can solve for the next value $$y_{t+1}$$. A PDE describes the evolution of an array. From the current array $$(x_1,x_2,...,x_m)_t$$ you can solve for the array at the next time step $$(x_1,x_2,...,x_m)_{t+1}$$. Typically, the array represents some physical field in space, like a 1D temperature distribution.

Consider a spatial domain $$x \in [0,1]$$. We discretize it by 11 points including the boundaries.

In [2]:

x = linspace(0,1,11)
nx = length(x)

x =
  Columns 1 through 7
         0    0.1000    0.2000    0.3000    0.4000    0.5000    0.6000
  Columns 8 through 11
    0.7000    0.8000    0.9000    1.0000
nx =
    11

The initial physical field (e.g. the concentration of some chemical) is 5.0 for $$x \in [0.2,0.3]$$ and zero elsewhere. Let's create the initial condition array.

In [3]:

u0 = zeros(1,nx); % zero by default
u0(3:4) = 5.0;    % non-zero between 0.2~0.3

In [4]:

%plot -s 400,200
plot(x, u0, '-o')
ylim([0,6]);

Say the chemical is blown by the wind towards the right, so that at the next time step the field is shifted rightward by 1 grid point. To represent the solution at the next time step, we could create a new array u_next and set u_next(4:5) = 5.0, based on our knowledge that u0(3:4) = 5.0. However, we want to write code that works for any initial condition, so the code would look like:

In [5]:

% initialization
u = u0;
u_next = zeros(1,nx); % initialize to 0

% shift rightward
u_next(2:end) = u(1:end-1);

% just plotting
hold on
plot(x, u, '-o')
plot(x, u_next, '-o')
ylim([0,6]);

### Space-time diagram

We can keep shifting the field and record the solution at 5 time steps.
In [6]: u = u0; % reset solution u_next = zeros(1,nx); % reset solution u_ts = zeros(5,nx); % to record the entire time series u_ts(1,:) = u; % record initial condition for t=2:5 u_next(2:end) = u(1:end-1); % shift rightward u = u_next; % swap arrays for the next iteration u_ts(t,:) = u; % record current step end % just plotting hold on for t=1:5 plot(x, u_ts(t, :), '-o') end ylim([0,6]); legend('t=0', 't=1', 't=2', 't=3', 't=4') The plot looks quite convoluted with multiple time steps… A better way to visualize it is to plot the time series in a 2D x-t plane. In [7]: contourf(x,0:4,u_ts) colorbar() xlabel('x');ylabel('time') title("space-time diagram") Here we can clearly see the “chemical field” is moving rightward through time. We call this rightward shift an advection process. Just like the diffusion process introduced in the class, advection happens everywhere in the physical world. In general, the physical field won’t be shifted by exactly one grid point. Instead, we can have an arbitrary wind speed, changing with space and time. To represent this general advection process, we can write a partial differential equation: Advection equation with initial condition $$u_0(x)$$ $\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} = 0, \ \ u(x,0) = u_0(x)$ This means a physical field $$u(x,t)$$ is advected at wind speed $$c$$. In general, the wind speed $$c(x,t)$$ can change with space and time. But if $$c$$ is a constant, then the analytical solution is $u(x,t) = u_0(x-ct)$ You can verify this solution by substituting it back into the PDE. We ask, “what’s the chemical concentration at a spatial point $$x=x_j$$, when $$t=t_n$$“? According to the solution, the answer is, “it is the same as the concentration at $$x=x_j-ct_n$$ when $$t=0$$“. This is physically intuitive. The chemical originally at $$x=x_j-ct_n$$ was traveling rightward at speed $$c$$, so after time period $$t_n$$ it would reach $$x=x_j$$.
Thus in order to find the concentration at $$x=x_j$$ right now, what we can do is go backward in time to find where the current chemical came from. This solution-finding process means going downward and leftward along the contour line in the space-time diagram shown before. ### Numerical approximation to advection equation¶ In practice, $$c$$ is often not a constant, so we can’t easily find an analytical solution and must rely on numerical methods. Use first-order finite difference approximations for both $$\frac{\partial u}{\partial t}$$ and $$\frac{\partial u}{\partial x}$$: $\frac{u_j^{n+1}-u_j^n}{\Delta t} + c\frac{u_j^{n}-u_{j-1}^n}{\Delta x} = 0$ Let $$\alpha=\frac{c \Delta t}{\Delta x}$$; the iteration can be written as \begin{align} u_j^{n+1} &= u_j^n - \alpha(u_j^{n}-u_{j-1}^n) \\ &= (1-\alpha)u_j^n + \alpha u_{j-1}^n \end{align} Note that, to approximate $$\frac{\partial u}{\partial x}$$, we use $$\frac{u_j-u_{j-1}}{\Delta x}$$ instead of $$\frac{u_{j+1}-u_{j}}{\Delta x}$$. There’s an important physical reason for that. Here we assume $$c$$ is positive (rightward wind), so the chemical should come from the left side ($$u_{j-1}$$), not the right side ($$u_{j+1}$$). Finally, notice that when $$\alpha=1$$ we effectively get the previous naive “shifting” scheme. ### One step integration¶ We set $$c=0.1$$ and $$\Delta t=0.5$$, so after one time step the chemical would be advected by half a grid point (recall that our grid interval is 0.1). Coding up the scheme is straightforward. The major difference from ODE solving is that you are updating an array, not a single scalar. In [8]: % set parameters c = 0.1; % wind velocity dx = x(2)-x(1) dt = 0.5; alpha = c*dt/dx % initialization u = u0; u_next = zeros(1,nx); % one-step PDE integration for j=2:nx u_next(j) = (1-alpha)*u(j) + alpha*u(j-1); % think about what to do for j=1?
% we will talk about boundaries later end % plotting hold on plot(x, u, '-o') plot(x, u_next, '-o') ylim([0,6]); legend('t=0', 't=0.5') dx = 0.1000 alpha = 0.5000 ### Multiple steps¶ To integrate for multiple steps, we just add another loop for time. In [9]: % re-initialize u = u0; u_next = zeros(1,nx); % PDE integration for 2 time steps for t = 1:2 % loop for time for j=2:nx % loop for space u_next(j) = (1-alpha)*u(j) + alpha*u(j-1); end u = u_next; end % plotting hold on plot(x, u0, '-o') plot(x, u, '-o') ylim([0,6]); legend('t=0', 't=1') We got a pretty large error here. After $$t=0.5 \times 2=1$$, the solution should just be shifted by one grid point, like in the previous naive PDE “solver”. Here the center of the chemical field seems alright (changed from 0.25 to 0.35), but the chemical gets diffused pretty badly. That’s due to the numerical error of the first-order scheme. ### Does decreasing time step size help?¶ In ODE solving, we can always reduce the time step size to improve accuracy. Does the same trick help here? In [10]: c = 0.1; dx = x(2)-x(1); dt = 0.1; % use a much smaller time step nt = round(1/dt) % calculate the number of time steps needed alpha = c*dt/dx % === exactly the same as before === u = u0; u_next = zeros(1,nx); for t = 1:nt for j=2:nx u_next(j) = (1-alpha)*u(j) + alpha*u(j-1); end u = u_next; end hold on plot(x, u0, '-o') plot(x, u, '-o') ylim([0,6]); legend('t=0', 't=1') nt = 10 alpha = 0.1000 Oops, there is no improvement. That’s because we have both time discretization error $$O(\Delta t)$$ and spatial discretization error $$O(\Delta x)$$ in PDE solving. The total error for our scheme is $$O(\Delta t)+O(\Delta x)$$. Simply decreasing the time step size is not enough. To improve accuracy you also need to increase the number of spatial grid points (nx). Another way to improve accuracy is to use a high-order scheme. But designing a high-order advection scheme is quite challenging.
You can find tons of papers by searching something like “high-order advection scheme”. ## Boundary condition¶ So far we’ve ignored the boundary condition. Because the field is shifting rightward, the leftmost grid point receives information from the “outside”. Think about a pipe where new chemicals keep coming in from the left. Our previous specification of the advection equation is not complete. The complete version is: Advection equation with initial condition $$u_0(x)$$ and left boundary condition $$u_{left}(t)$$ $\begin{split}\frac{\partial u}{\partial t}+c\frac{\partial u}{\partial x} = 0 \\ u(x,0) = u_0(x) \\ u(0,t) = u_{left}(t)\end{split}$ Here we assume $$u_{left}(t)=5$$. In the code, you just need to add one line. In [11]: c = 0.1; dx = x(2)-x(1); dt = 0.1; nt = round(1/dt); alpha = c*dt/dx; u = u0; u_next = zeros(1,nx); for t = 1:nt u_next(1) = 5.0; % the only change!! for j=2:nx u_next(j) = (1-alpha)*u(j) + alpha*u(j-1); end u = u_next; end hold on plot(x, u0, '-o') plot(x, u, '-o') ylim([0,6]); legend('t=0', 't=1') We can now see new chemicals are “injected” from the left side. ## Stability¶ ### Numerical experiment¶ PDE solvers also have stability requirements, just like ODE solvers. The general idea is that the time step needs to be small enough, compared to the spatial step. This holds for advection, diffusion, wave and various other forms of PDEs, although the exact formula for the time step requirement can differ depending on the problem. Here we use a much larger time step to see what happens. In [12]: c = 0.1; dx = x(2)-x(1); dt = 2; % a much larger time step alpha = c*dt/dx u = u0; u_next = zeros(1,nx); for t = 1:10 for j=2:nx u_next(j) = (1-alpha)*u(j) + alpha*u(j-1); end u = u_next; if t == 1 % record the first time step u1 = u; end end hold on plot(x, u0, '-o') plot(x, u1, '-o') plot(x, u, '--') ylim([-40,40]); legend('initial', 't=2', 't=20') alpha = 2 The simulation blows up very quickly.
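The blow-up seen above can be reproduced in a few lines. The following is my own Python/NumPy translation of the first-order upwind iteration (not part of the original MATLAB notebook); it compares a stable run (alpha = 0.5) with the unstable one (alpha = 2):

```python
import numpy as np

def upwind(u0, alpha, nt):
    # First-order upwind iteration: u_j <- (1 - alpha)*u_j + alpha*u_{j-1}.
    # The left boundary u[0] is left untouched, mirroring the MATLAB loops
    # above that start at j=2.
    u = u0.copy()
    for _ in range(nt):
        u[1:] = (1 - alpha) * u[1:] + alpha * u[:-1]
    return u

x = np.linspace(0, 1, 11)
u0 = np.zeros_like(x)
u0[2:4] = 5.0  # chemical between x=0.2 and x=0.3, as in the notebook

stable = upwind(u0, alpha=0.5, nt=10)
unstable = upwind(u0, alpha=2.0, nt=10)

print(np.abs(stable).max())    # stays bounded by the initial maximum of 5
print(np.abs(unstable).max())  # grows rapidly: the scheme is unstable
```

For 0 ≤ alpha ≤ 1 each update is a convex combination of neighboring values, so the maximum can never grow; for alpha = 2 one coefficient is negative and the field amplifies at every step.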
### Intuitive explanation¶ In the previous experiment we have $$\alpha=2$$, so one of the coefficients in the formula becomes negative: $u_j^{n+1} = (1-\alpha)u_j^n + \alpha u_{j-1}^n = (-1)\times u_j^n + 2\times u_{j-1}^n$ Negative coefficients are meaningless here, because a grid point $$x_j$$ shouldn’t get a “negative contribution” from a nearby grid point $$x_{j-1}$$. It makes no sense to advect “negative density” or “negative concentration”. To keep both coefficients ($$\alpha$$ and $$1-\alpha$$) positive, we require $$0 \le \alpha \le 1$$. Recall $$\alpha = \frac{c \Delta t}{\Delta x}$$, so we essentially require the time step to be small. This is just an intuitive explanation. To rigorously derive the stability requirement, you would need some heavy math as shown below. ### Stability analysis (theoretical)¶ The time step requirement can be derived by von Neumann stability analysis (also see PDE_solving.pdf on canvas). The idea is that we want to know how the magnitude of $$u_j^{n}$$ changes with time, under the iteration (first-order advection scheme for example): $u_j^{n+1} = (1-\alpha)u_j^n + \alpha u_{j-1}^n$ It is troublesome to track the entire array $$[u_1^{n},u_2^{n},...,u_m^{n}]$$. Instead we want a single value to represent the magnitude of the array. The general idea is to perform a Fourier expansion on the spatial field $$u(x)$$. From Fourier analysis we know any 1D field can be viewed as the sum of waves of different wavelengths: $u(x) = \sum_{k=-\infty}^{\infty} T_k e^{ikx}$ Thus we can track how a single component, $$T_k e^{ikx}$$, evolves with time. We omit the subscript $$k$$ and add the time index $$n$$, so the component at the n-th time step can be written as $$T(n) e^{ikx}$$.
The iteration for this component is $T(n+1)e^{ikx} = (1-\alpha)T(n) e^{ikx} + \alpha T(n) e^{ik(x-\Delta x)}$ Divide both sides by $$e^{ikx}$$ $T(n+1) = (1-\alpha)T(n) + \alpha T(n) e^{-ik\Delta x}$ $\frac{T(n+1)}{T(n)} = 1-\alpha + \alpha e^{-ik\Delta x}$ Define the amplification factor $$A=\frac{T(n+1)}{T(n)}$$. We want $$|A| \le 1$$ so the solution won’t blow up, just like what we did in the ODE stability analysis. We thus require $|1-\alpha + \alpha e^{-ik\Delta x}| \le 1$ So we’ve transformed a requirement on an array into a requirement on a scalar. We want this inequality to be true for any $$k$$. It will finally lead to a requirement for $$\alpha$$, which is the same as the previous intuitive analysis: $$0 \le \alpha \le 1$$. Fully working this out needs some math; see this post for example. ## High-order PDE¶ The wave equation is a typical 2nd-order PDE $\frac{\partial^2 u}{\partial t^2} - c^2\frac{\partial^2 u}{\partial x^2} = 0$ Compared to the advection equation, the major differences are • The wave can go both rightward and leftward, at speed $$c$$ or $$-c$$. • It needs both left and right boundary conditions. Periodic boundary conditions are often used. • It needs both 0th-order and 1st-order initial conditions, i.e. $$u(x,t)|_{t=0}$$ and $$\frac{\partial u}{\partial t} |_{t=0}$$. Otherwise you can’t start your integration, because a 2nd-order time derivative means $$u^{n+1}$$ would rely on both $$u^{n}$$ and $$u^{n-1}$$. The 0th-order initial condition only gives you one time step, but you need two time steps to get started.
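The bound |A| ≤ 1 from the von Neumann analysis above can also be checked numerically. This is my own Python sketch (not part of the original notes): it sweeps the wavenumber phase theta = kΔx and confirms that the scheme is stable exactly for 0 ≤ alpha ≤ 1:

```python
import numpy as np

def max_amplification(alpha, n=1000):
    # Amplification factor of the first-order upwind scheme:
    # A(k) = 1 - alpha + alpha * exp(-i*k*dx).
    # Only the product theta = k*dx matters, so sweep it over [0, 2*pi).
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    A = 1.0 - alpha + alpha * np.exp(-1j * theta)
    return np.abs(A).max()

for alpha in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(alpha, round(max_amplification(alpha), 3))
# alpha in [0, 1] gives max|A| = 1 (stable);
# alpha = 1.5 and 2.0 give max|A| = 2 and 3 (unstable).
```

The worst wavenumber for alpha > 1 is theta = pi (the grid-scale "sawtooth" mode), which is also the wiggly pattern that shows up first when the simulation blows up.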
# Error when creating a surf plot with matlab2tikz I'm using matlab2tikz to generate the TikZ code for my figures. When compiling the document with the figure code as below, I get an error: Package pgfplots Error: No such element: \pgfplotsarrayselect64\of{pgfpl@cm@mymap} Now what really boggles my mind is that when I generate the same figure with another dataset, it works perfectly fine! And as far as I can see, the actual TikZ code is identical except for the data points included. What on earth in the second data-set figure makes it fail when compiling? I updated all packages in MiKTeX, but that didn't seem to do much. Any ideas as to what I am doing wrong would be greatly appreciated, as I am becoming increasingly desperate to find the issue. Fig1, Working \begin{tikzpicture} \begin{axis}[% view={45}{35}, width=\figurewidth, height=\figureheight, scale only axis, xmin=0.1, xmax=1, xtick={0.1,1}, xlabel={kh}, xmajorgrids, ymin=10, ymax=80, ytick={10,80}, ylabel={HLmax}, ymajorgrids, zmin=1, zmax=4, ztick={1,2,3,4}, zlabel={Iteratons}, zmajorgrids, axis lines*=left] \addplot3[% surf, colormap={mymap}{[1pt] rgb(0pt)=(0,1,1); rgb(63pt)=(1,0,1)}, draw=black] coordinates{ (0.1,10,1)(0.1,20,1)(0.1,30,1)(0.1,40,1)(0.1,50,1)(0.1,60,1)(0.1,70,1)(0.1,80,1) (0.2,10,1)(0.2,20,1)(0.2,30,1)(0.2,40,1)(0.2,50,1)(0.2,60,1)(0.2,70,1)(0.2,80,1) (0.3,10,1)(0.3,20,1)(0.3,30,1)(0.3,40,1)(0.3,50,1)(0.3,60,1)(0.3,70,1)(0.3,80,1) (0.4,10,1)(0.4,20,1)(0.4,30,1)(0.4,40,1)(0.4,50,1)(0.4,60,1)(0.4,70,1)(0.4,80,1) (0.5,10,1)(0.5,20,1)(0.5,30,1)(0.5,40,1)(0.5,50,1)(0.5,60,1)(0.5,70,1)(0.5,80,1) (0.6,10,1)(0.6,20,1)(0.6,30,1)(0.6,40,1)(0.6,50,1)(0.6,60,1)(0.6,70,1)(0.6,80,1) (0.7,10,1)(0.7,20,1)(0.7,30,1)(0.7,40,1)(0.7,50,1)(0.7,60,1)(0.7,70,1)(0.7,80,1) (0.8,10,1)(0.8,20,1)(0.8,30,1)(0.8,40,1)(0.8,50,1)(0.8,60,1)(0.8,70,1)(0.8,80,1) (0.9,10,1)(0.9,20,1)(0.9,30,1)(0.9,40,1)(0.9,50,1)(0.9,60,1)(0.9,70,1)(0.9,80,1) (1,10,1)(1,20,1)(1,30,1)(1,40,1)(1,50,1)(1,60,1)(1,70,1)(1,80,1) }; \end{axis}
\end{tikzpicture}% Fig 2, not working \begin{tikzpicture} \begin{axis}[% view={45}{35}, width=\figurewidth, height=\figureheight, scale only axis, xmin=0.1, xmax=1, xtick={0.1,1}, xlabel={kh}, xmajorgrids, ymin=10, ymax=80, ytick={10,80}, ylabel={HLmax}, ymajorgrids, zmin=1, zmax=4, ztick={1,2,3,4}, zlabel={Iteratons}, zmajorgrids, axis lines*=left] \addplot3[% surf, colormap={mymap}{[1pt] rgb(0pt)=(0,1,1); rgb(63pt)=(1,0,1)}, draw=black] coordinates{ (0.1,10,2)(0.1,20,2)(0.1,30,2)(0.1,40,3)(0.1,50,3)(0.1,60,3)(0.1,70,3)(0.1,80,3) (0.2,10,2)(0.2,20,2)(0.2,30,2)(0.2,40,2)(0.2,50,2)(0.2,60,3)(0.2,70,3)(0.2,80,3) (0.3,10,2)(0.3,20,2)(0.3,30,2)(0.3,40,2)(0.3,50,2)(0.3,60,2)(0.3,70,3)(0.3,80,3) (0.4,10,2)(0.4,20,2)(0.4,30,2)(0.4,40,2)(0.4,50,2)(0.4,60,2)(0.4,70,3)(0.4,80,3) (0.5,10,2)(0.5,20,2)(0.5,30,2)(0.5,40,2)(0.5,50,2)(0.5,60,2)(0.5,70,3)(0.5,80,3) (0.6,10,1)(0.6,20,1)(0.6,30,2)(0.6,40,2)(0.6,50,2)(0.6,60,2)(0.6,70,3)(0.6,80,3) (0.7,10,1)(0.7,20,1)(0.7,30,1)(0.7,40,2)(0.7,50,2)(0.7,60,2)(0.7,70,3)(0.7,80,3) (0.8,10,1)(0.8,20,1)(0.8,30,1)(0.8,40,2)(0.8,50,2)(0.8,60,2)(0.8,70,3)(0.8,80,3) (0.9,10,1)(0.9,20,1)(0.9,30,1)(0.9,40,1)(0.9,50,2)(0.9,60,2)(0.9,70,3)(0.9,80,3) (1,10,1)(1,20,1)(1,30,1)(1,40,1)(1,50,2)(1,60,2)(1,70,3)(1,80,3) }; \end{axis} \end{tikzpicture}% - It's just an error in the colormap definition. colormap={mymap}{[1pt] rgb(0pt)=(0,1,1); rgb(63pt)=(1,0,1)}, Try to fix it with: colormap ={mymap}{rgb(0pt)=(0,1,1); rgb(63pt)=(1,0,1)}, - You are my hero –  Allan Aug 3 '12 at 15:24 @bersanri Good analysis: your fix is slightly faster, uses less memory, and avoids the problem of the [1pt] variant. In addition, you identified a bug in pgfplots: it should have worked with both variants. –  Christian Feuersänger May 25 '13 at 11:33
NORMAL GENERATION OF NONSPECIAL LINE BUNDLES ON ALGEBRAIC CURVES Kim, Seon-Ja; Kim, Young-Rock Abstract In this paper, we classify (C, $\small{\cal{L}}$) such that a smooth curve C of genus g has a nonspecial very ample line bundle $\small{\cal{L}}$ of deg $\small{\cal{L}}$ Keywords algebraic curve; linear series; line bundle; projectively normal; normal generation Language English References 1. E. Arbarello, M. Cornalba, P. A. Griffiths, and J. Harris, Geometry of Algebraic Curves. Vol. I, Springer-Verlag, New York, 1985. 2. E. Ballico, On the Clifford index of algebraic curves, Proc. Amer. Math. Soc. 97 (1986), no. 2, 217-218. 3. G. Castelnuovo, Sui multipli di una serie lineare di gruppi di punti appartenente ad una curva algebrica, Rend. Circ. Mat. Palermo 7 (1893), 89-110. 4. M. Coppens and G. Martens, Secant spaces and Clifford's theorem, Compositio Math. 78 (1991), no. 2, 193-212. 5. M. Green and R. Lazarsfeld, On the projective normality of complete linear series on an algebraic curve, Invent. Math. 83 (1985), no. 1, 73-90. 6. R. Hartshorne, Algebraic Geometry, Graduate Texts in Math. 52, Berlin-Heidelberg-New York, 1977. 7. C. Keem and S. Kim, On the Clifford index of a general (e+2)-gonal curve, Manuscripta Math. 63 (1989), no. 1, 83-88. 8. S. Kim and Y. Kim, Projectively normal embedding of a k-gonal curve, Comm. Algebra 32 (2004), no. 1, 187-201. 9. S. Kim and Y. Kim, Normal generation of line bundles on algebraic curves, J. Pure Appl. Algebra 192 (2004), no. 1-3, 173-186. 10. T. Kato, C. Keem, and A. Ohbuchi, Normal generation of line bundles of high degrees on smooth algebraic curves, Abh. Math. Sem. Univ. Hamburg 69 (1999), 319-333. 11. H. Lange and G. Martens, Normal generation and presentation of line bundles of low degree on curves, J. Reine Angew. Math. 356 (1985), 1-18. 12. G. Martens and F. O. Schreyer, Line bundles and syzygies of trigonal curves, Abh. Math.
Sem. Univ. Hamburg 56 (1986), 169-189. 13. D. Mumford, Varieties defined by quadric equations, Questions on Algebraic Varieties (C.I.M.E., III Ciclo, Varenna, 1969) pp. 29-100 Edizioni Cremonese, Rome, 1970.
# Relation $R$ in a set $N$ of natural numbers defined as $R=\{(x,y):y=x+5$ and $x<4\}$. Which of the following is true about $R$? This question has multiple parts. This is the second of multiple parts. Therefore each part has been answered as a separate question on Clay6.com Toolbox: • A relation R in a set A is called $\mathbf{ reflexive},$ if $(a,a) \in R\;$ for every $\; a\in\;A$ • A relation R in a set A is called $\mathbf{symmetric}$, if $(a_1,a_2) \in R\;\Rightarrow\; (a_2,a_1)\in R \; for \;a_1,a_2 \in A$ • A relation R in a set A is called $\mathbf{transitive},$ if $(a_1,a_2) \in R$ and $(a_2,a_3) \in R \; \Rightarrow \;(a_1,a_3)\in R$ for all$\; a_1,a_2,a_3 \in A$ Given $N$ is the set of natural numbers and $R=\{(x,y):y=x+5 \text{ and } x<4\}$: Since $x<4$ and $y=x+5 \Rightarrow x \in \{1,2,3\}$ and $y \in \{6,7,8\}$ $\Rightarrow R = \{(1,6), (2,7), (3,8)\}$ For $R = \{(1,6), (2,7), (3,8)\}$ to be reflexive, $(a,a) \in R\;$ must hold for every $\; a\in\;A$ $\Rightarrow$ if $y=x, x = x+5 \Rightarrow 0 \neq 5$. Therefore $R$ is not reflexive. We can verify this with a simple substitution: If $x=1, y=1, y = x+5 \rightarrow 1 = 6$, which is not correct. Therefore $R$ is not reflexive. For $R = \{(1,6), (2,7), (3,8)\}$ to be symmetric, $(a_1,a_2) \in R\;\Rightarrow\; (a_2,a_1)\in R \; for \;a_1,a_2 \in A$ $\Rightarrow R \{x,y\}: y = x+5 \rightarrow y - x = 5$ $\Rightarrow R \{y,x\}: x = y+5 \rightarrow y - x = -5$ Therefore, since $(x,y) \in R,$ but $(y,x) \not \in R, \; R$ is not symmetric. We can verify this with a simple substitution: If $x=1, y=6, R \{x,y\}: y = x+5 \rightarrow 6 = 6$ However, $R \{y,x\}: x = y+5 \rightarrow 1 = 6+5 \rightarrow 1 = 11$, which is not correct. Hence, $R$ is not symmetric. For $R = \{(1,6), (2,7), (3,8)\}$ to be transitive, $(a_1,a_2) \in R$ and $(a_2,a_3) \in R \; \Rightarrow \;(a_1,a_3)\in R$ for all$\; a_1,a_2,a_3 \in A$ Let's take the first ordered pair $(1,6) \in R$.
Here $x=1, y=6$. However, since $x<4$ for every $(x,y) \in R$, there can exist no ordered pair $(6,z) \in R$. Therefore $R$ is not transitive.
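The reflexivity and symmetry checks can also be verified mechanically. Below is a small Python illustration of my own (not part of the original solution); note that $x<4$ over the natural numbers means $x \in \{1,2,3\}$:

```python
# Build R = {(x, y) : y = x + 5 and x < 4} over the natural numbers.
R = {(x, x + 5) for x in range(1, 4)}  # x in {1, 2, 3}

# Reflexive: (a, a) must be in R for every element a that appears in R.
elements = {a for pair in R for a in pair}
reflexive = all((a, a) in R for a in elements)

# Symmetric: (a, b) in R must imply (b, a) in R.
symmetric = all((b, a) in R for (a, b) in R)

print(sorted(R))  # [(1, 6), (2, 7), (3, 8)]
print(reflexive)  # False
print(symmetric)  # False
```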
# A simple test to check the optimality of sparse signal approximations 1 METISS - Speech and sound data modeling and processing IRISA - Institut de Recherche en Informatique et Systèmes Aléatoires, Inria Rennes – Bretagne Atlantique Abstract : Approximating a signal or an image with a sparse linear expansion from an overcomplete dictionary of atoms is an extremely useful tool to solve many signal processing problems. Finding the sparsest approximation of a signal from an arbitrary dictionary is an NP-hard problem. Despite this, several algorithms have been proposed that provide sub-optimal solutions. However, it is generally difficult to know how close the computed solution is to being "optimal", and whether another algorithm could provide a better result. In this paper we provide a simple test to check whether the output of a sparse approximation algorithm is nearly optimal, in the sense that no significantly different linear expansion from the dictionary can provide both a smaller approximation error and a better sparsity. As a by-product of our theorems, we obtain results on the identifiability of sparse overcomplete models in the presence of noise, for a fairly large class of sparse priors. Document type: Conference paper, Acoustics, Speech and Signal Processing, 2005. ICASSP 2005. IEEE International Conference on, Mar 2005, Philadelphia, PA, United States. IEEE, 5, pp.V/717 -- V/720, 2005, 〈10.1109/ICASSP.2005.1416404〉 Domain: Cited literature [13 references] https://hal.inria.fr/inria-00564503 Contributor: Rémi Gribonval <> Submitted on: Wednesday, February 9, 2011 - 09:08:15 Last modified: Wednesday, May 16, 2018 - 11:23:03 Archived on: Tuesday, May 10, 2011 - 02:55:00 ### File 2005_ICASSP_GribonvalEtAl_Simp... Explicit agreement for this deposit ### Citation Rémi Gribonval, Rosa Maria Figueras I Ventura, Pierre Vandergheynst. A simple test to check the optimality of sparse signal approximations.
Acoustics, Speech and Signal Processing, 2005. ICASSP 2005. IEEE International Conference on, Mar 2005, Philadelphia, PA, United States. IEEE, 5, pp.V/717 -- V/720, 2005, 〈10.1109/ICASSP.2005.1416404〉. 〈inria-00564503〉 ### Metrics Record views ## 342 File downloads
# What grade is required of gems used as spell material components? Quite a few higher-level spells require x GP of gemstone or gemstone dust in order to cast -- I will not bother listing them here, as it'd just take up space in the question. However, the value of a gem is primarily determined by Cut, Color, and Clarity (or lack of imperfections), not its size. Considering that the same base material can vary widely in these regards from industrial-grade stones that are practically worthless to a jeweler, but still useful for their material properties up to the finest gemstones money can buy, what minimum standards are required of a gem in order to be usable as a spellcasting material component? Must it be cut before it can be used, or can uncut (raw) gems be used for casting, straight out of the ground? Could a wizard cast a spell that say called for a sapphire or a ruby with a crystalline lump of industrial corundum (synthetic sapphire) instead of a natural gem? Would a large shard of bort be useful for spellcasting in lieu of a gem-grade diamond? Citations strongly preferred here, by the way -- if you need to dig back into the archives, that's OK, but right now, this is something that I would not even know how to consider from a DM or player perspective. • I doubt this is specifically mentioned anywhere, but even if it is, more detail would lead to better answers, I think. Why do you want to know? Are your PC's going into the mining business? – SirTechSpec Jul 23 '16 at 3:57 • Related: How big is a 50 gp diamond? – Olorin Jul 25 '16 at 11:23 You're suffering a misconception. In the real world, gem size is an extremely important marker of value: the 4th 'C' that you missed, Carat. Unusually large gems are worth exponentially more than smaller ones of comparable quality. However, in D&D, all gems are measured by a single universal standard: their GP value. 
No version of D&D has ever gone into the specific qualities of a gem, because very few players would care for that much extra bookkeeping. Gems are simply abstracted into a condensed form of currency. It would certainly be possible to extrapolate some system of evaluating gems into the 4 C's, but in the end this will create a great deal of bookwork, and after all that the players won't care: "Yeah, but how much is it worth?" That level of detail is simply below most players' level of concern. Further, unless you plan to give lessons in gemology, much of it would be meaningless to most players, and you will spend an inordinate amount of time re-explaining how to evaluate the gems. After all that, they're just going to ask again: "Yeah, but how much is it worth?" Gems have 3 basic purposes in D&D. • high-density currency, a chest of gems is much smaller than the same value of gold • decoration, such as on jewelry or a ceremonial weapon • money-sinks, restricting the availability of high-level spells. Bort is really poor-quality diamond. If such a thing existed in a given D&D world, then it would just be diamond, and you would still need the same GP value worth of the material. In 'regular' D&D, there is no way to produce artificial gems or similar items with high intrinsic value (the money metals, for instance). The typical response would likely be "They aren't real gems, so they don't work." Some DMs would happily let you try it, and then have fun messing with the outcome of the spell. In the end it would be up to each DM to make a ruling on artificial gems. As discussed above, gem qualities are either ignored, or assumed to be of a fairly standard type. Common gems would be assumed to be cut, though likely not expertly. As spell components, this is all largely irrelevant. Again, the only measurement the spell is concerned with is how much the lump of material you are presenting is worth. Absolute value only, you cannot haggle with your spell.
Many spells also do not care if you use one or many gems, as long as the total value is achieved. I would like to have been able to present citations for all of this, but I have never seen anything in any of the 5 editions (and multiple subsets) of D&D that I have played. This is in relation to both gem quality, and even more so to the idea of artificial gems. D&D is intended to be a game of fun. For the majority of players, going into this level of detail reduces the game to 'crunch', bogging it down with bookkeeping and trivial details. Very few would find this to be an enjoyable way to play. My recommendation is that the best way to consider this is to consider it a poor investment of time which will cost you far more enjoyment than it provides. • If you're wondering why I asked -- a given crystal of gemstone material can be valueless for jewelry purposes, yet still valuable for its intrinsic material properties -- entire wafers of high-purity synthetic sapphire are used in semiconductor work as a surrogate substrate for materials that can't be grown in bulk, for instance. – Shalvenay Jul 23 '16 at 5:31 • Right, but like I said, in D&D, "synthetic gems" are just not a thing. Most, if not all, creation spells are directly banned from outright "printing money", whether it be gems, or precious metals, or whatever. Gems are all naturally formed, and of an approximately standard quality. If you want to roll up a homebrew modern-era world, that would be a different story, but in that case D&D is probably not the appropriate game system to start with. – tzxAzrael Jul 23 '16 at 6:35 • – Olorin Jul 25 '16 at 11:25 The only grade required is that is that it be a gem, not merely a rock. If you would like to introduce lower grades of stone to your game, there's a way to do this with the existing crafting rules (Player's Handbook page 187). You can designate a diamond-in-the-rough as the raw material for crafting a cut diamond, with half the gp value. 
Add these uncut, non-gem-quality stones as treasure. A character proficient with jeweler's tools now enjoys a tangible benefit: they can cut a rough stone into a gem, which is both more valuable and usable as a spell component. For example, to cut a 50 gp diamond suitable for the spell Chromatic Orb would require 10 days of work and raw materials (the rough stone) worth 25 gp. If you want to make things nail-biting, you could even ask for a Dexterity check to see whether they ruin the stone. This question is in fact exactly the same as "how big is a 50 gp diamond?", because carat (weight) is directly proportional to a diamond's volume, and the carat is most of the value of a diamond. To clarify, here is the full formula for a diamond's mass in carats, derived from its basic properties, variables first, constants last: • $M=d \times r^2 \times f \times \rho \times \pi \times \frac{1carat}{0.2g}$ $d$ is the depth of the diamond from table flat to tip, $r$ the radius (=$\frac{\text{diameter}}{2}$), $f$ is the form factor and is derived from the actual shape: it is $1$ for a cylinder, $\frac{1}{3}$ for a cone; diamond cuts are, depending on actual shape, very close to a cone (it depends on the angle of the table), $\rho$ is the density of the stone, for diamonds about 3.5 g/cm³, $\pi$ because we assume a round cut, $\frac{1carat}{0.2g}$ is the conversion from grams to carats. As you can easily see, size and weight (carat) are linked: the dimensions cubed give the volume, and volume times density gives the mass. The other 3 factors in gem evaluation, cut, clarity and color, do grant some sort of 'base price', but if you ask any diamond trader, he'll tell you that at least half the price of a diamond comes from its size.
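To put some numbers into the formula above, here is a quick Python sketch of my own; the stone's dimensions are entirely made up, and the density is taken as roughly 3.5 g/cm³ for diamond:

```python
import math

def carat_mass(depth_cm, radius_cm, form_factor, density_g_cm3):
    # M = d * r^2 * f * rho * pi * (1 carat / 0.2 g)
    grams = depth_cm * radius_cm**2 * form_factor * density_g_cm3 * math.pi
    return grams / 0.2  # 1 carat = 0.2 g

# Hypothetical cone-like stone: 0.5 cm deep, 0.5 cm radius, f = 1/3.
m = carat_mass(0.5, 0.5, 1/3, 3.5)
print(round(m, 2))  # 2.29
```

Doubling every linear dimension multiplies the mass by eight, which is the cubic scaling the answer refers to.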
The most expensive diamonds are not the most clear, white/blue/pinkish ones; they are the well-cut huge ones: • The Hope Diamond weighs 45.52 carats and is worth 200-250 million USD while being of a "fancy deep grayish blue", a color that is usually not awarded such high prices, for it does not shine enough. People prefer the less grey finish of sapphires. • The Golden Eye Diamond weighs 43.51 carats and fetched almost 3 million USD. Which is quite a lot for a yellow diamond, as most are considered only industrial grade. If size didn't matter, it would have ended up in industry. • The Allnatt Diamond weighs 101.29 carats and is worth at least 3 million USD. Its color is the very same yellow many industrial diamonds sport. Without the sheer size that is coming in one chunk, you would not fetch anything for this stone: 101 carats of industrial diamonds cost 30-1010 USD, depending on grain size! By cutting a diamond, up to 70% of its mass gets lost. For jewelry-size and jewelry-grade diamonds (that is, something below 10 carats - above that it gets wonky), the cut gives about a quarter to a third of the value of a diamond, half of it is the weight (carat), and the remaining quarter to sixth are color and clarity. That is, unless the color or clarity disqualify a stone from being a jewelry stone, in which case only grain size and weight matter, as you have industrial ware then. Those huge stones mentioned above, however, gain additional value not only from their material properties, but from their stories. If you find large enough (or just low-quality enough) diamonds, those could possibly be used to cast with, but raw stones are worth at best half of the cut ones, often less: fine-grade raw diamonds trade for something around 2000 USD/carat, while cut gems of the same weight fetch prices up to 10 times higher - and raw stones are appraised mainly for color and weight; clarity is of somewhat less concern, as it can be appraised wrongly due to the irregular surface.
Industrially grown stones, on the other hand, are priced much lower than natural stones - you would need an even larger lump of grown corundum than an uncut sapphire of the same worth! Grown stones are cheaper by a factor of 5 to 20, as the Geological Institute of Kiel told me once - even though those stones are better in clarity and color. So there: a cut natural gem is worth more by a factor of 50-200 than a grown (and then cut!) gem. However, all that does not change one fact: Gems of 50 gp are gems of 50 gp worth. If said gems are a huge bag full of synthetic grown sapphire, a fist-sized raw and uncut natural sapphire, or some three or four small, polished and cut sapphires, they are all "50 gp of sapphire". If you go and buy the pieces for your 50 gp of diamond from the cheap, throwaway pieces that the jewellers don't want, you will need much more of the same stuff to get the same "50 gp of diamonds". Or to phrase it differently: It is up to the GM what shape said "50 gp of gems" take, as long as those are somewhat internally consistent. Nowadays stuff not usable for jewelry is worth much less, but it is usable for saw blades and such, still giving it a value. In a world where said 'unwanted' stones are usable for magic, they might even be worth more than today, as there is some demand for them. Whether synthetic stones are available at all, how much they are worth, or whether they are usable for magic in a D&D fantasy setting is GM fiat. If this were system agnostic, I would point to Shadowrun, where synthetic materials have no magical properties, but that is not an answer to the original question. • Very interesting bit of gemology here, but it still leaves the question unanswered: How does this apply in the game? There is no mention anywhere of how many carats a gem is, nor any hint of a conversion between GP and Carats. – tzxAzrael Jul 23 '16 at 14:47 • Aye, knew I forgot something. Extended and hope to answer the missed parts now.
– Trish Jul 23 '16 at 15:03 • Excellent! +1 for all the interesting details. I like the idea of using the "junk gems" for magic, giving them use and value, and the "pretty gems" (eg, those with high clarity and color) for decorative gems. It makes perfect sense, I can even see it in my head right now.. a Jeweler looking at the pile on the table. "Oh very nice, the Baroness has been looking for good sapphires... Ah yes, the Mage's Guild has a big order in for less colorful gems like this one..." – tzxAzrael Jul 23 '16 at 15:09
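As a rough back-of-envelope illustration of the valuation split described in the answer (a toy model; the percentage weights are only the answer's rough figures, and every other number here is made up for illustration):

```python
def jewel_value(base_per_carat, carats, cut_q, color_clarity_q):
    """Toy gem-value model: ~50% weight, ~30% cut, ~20% color/clarity.

    cut_q and color_clarity_q are quality factors in [0, 1]."""
    weight_part = 0.50       # half of the value is the weight (carat)
    cut_part = 0.30          # "a quarter to a third" for the cut
    cc_part = 1.0 - weight_part - cut_part
    quality = weight_part + cut_part * cut_q + cc_part * color_clarity_q
    return base_per_carat * carats * quality

# A well-cut, clear 2-carat stone vs. a poorly cut, cloudy one of equal weight:
print(jewel_value(2000, 2, 1.0, 1.0))  # full price, ~4000
print(jewel_value(2000, 2, 0.2, 0.3))  # ~2480; weight alone keeps half the value
```

The point of the sketch is just the answer's claim that weight dominates: even a badly cut, cloudy stone keeps at least half its per-carat value in this model.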
## Thursday, September 29, 2016 ... /////

### Rainer Weiss' birthday: from Slovakia to circuits, vinyl in Manhattan to LIGO

Along with Kip Thorne and Ronald Drever, Rainer Weiss is a member of the most likely "triplet" that could share the Nobel prize in physics next Tuesday. Weiss' key contribution already occurred in 1967 – see the history of LIGO – when he began to construct a laser interferometer and published a text pointing out its usefulness.

WVXU, a BBC-linked news source, just released a fun biography:

A physicist who proved Einstein right started by tinkering with the family record player

Aside from fundamental physics, one of the additional reasons why this biography may be relevant on this blog is his family's links to Czechoslovakia. He was born in Berlin on September 29th, 1932, exactly 84 years ago. Congratulations! His mother was a German actress; his father was a Jewish neurologist (physician). The last adjective of the previous sentence didn't indicate a great career move in the Germany of the 1930s, especially because the dad was a communist, too. The Nazi propaganda often talked about Judeo-Bolshevism. Somewhat unfortunately, I think that the data make it clear that there is something "particularly attractive for the Jews" about communism.

It reminds me of the latest episode of The Big Bang Theory. Rajesh asked Howard whether he has a cousin who is a lawyer. Howard got irritated: Is it just because I am Jewish? It's like saying that your cousin must work in a call center because you're Indian. Needless to say, Rajesh pointed out that he has a cousin (Sanjay/David) who is indeed working in a call center, before Howard conceded that he has a cousin who is a lawyer, too. :-)

OK, but let me return to Weiss' life. When Rainer was born, the atmosphere in Germany was quickly getting unlivable for the Jews.
At the end of 1932, when Rainer was a few months old, Weiss' family moved to Prague, in the First Republic of Czechoslovakia, a beacon of freedom and democracy in the growing central European ocean of totalitarianism of the 1930s. In Fall 1938, the First Republic was destroyed by Hitler and replaced by the truncated Second Republic of Czecho-Slovakia, an indefensible core of Czechoslovakia – which had been stripped of the Sudetenland and the Hungarian-rich Southern Slovakia, and where no one pretended that the country would operate independently of Hitler's wishes forever. The hyphen before Slovakia and the capitalization of "S" reflected the German support for Slovak nationalism – the same hyphen and capitalization reemerged in the early 1990s before Czechoslovakia was dissolved (again).

At any rate, the situation became too dangerous for the Jews even in Czechia around 1938, so they probably moved to Slovakia, which was more independent during the war. But sometime in 1938 – I don't know the month and I would be interested – Weiss' family flew from Slovakia to America. A generous family in St Louis was ready to accept 10,000 Jewish refugees and pay for the damages they could cause. That was quite a deal. I think that if someone made this offer to accept the Muslim migrants and pay for their expenses or damages caused by them from his pocket, I would agree with such a deal, too. But such people no longer exist because basically everyone realizes that the Muslim immigration is costly and pathological.

The only hostility that Rainer has ever faced in the U.S. was on a day when his German mom sent him to school in some typical Nazi-boy trousers just after the Third Reich had carried out one of its nasty operations. So they did great in the U.S., but if he wanted to boast and exaggerate, Weiss could still paint himself as one of the unlucky guys who got expelled from Europe because they're Judeo-Bolsheviks and harassed elsewhere because they're Nazis.
;-)

At the end of the war in 1945, the 12-year-old Rainer was already close to Cortland Street in New York, see the picture above. The street near the World Trade Center no longer exists. My understanding is that not only the houses are gone; the linear road on the map has evaporated, too. It looks sort of remarkable given the "modern" appearance of the street. These smaller buildings were replaced by a bigger infrastructure. Are the New Yorkers sure it was an improvement?

He could find lots of components for electric circuits on that street. He constructed lots of things as a kid and his sound systems were considered great and concert-hall-like. After completing the Columbia Grammar School, he went to college – MIT – in order to solve some problem with vinyl records and the quality of their sound. And the rest is already his "standard scientific biography". After his teaching at Tufts in 1960-62 and two postdoc years at Princeton in 1962-64 (note the chronology), he returned to MIT.

One of the newest vinyl record players produced by Tesla. Japan has bought lots of them as carousels for puppet shows. BTW Elon Musk wants to have tens of millions of people on Mars around 2050. (Just if you were caught by this prank: Tesla was the largest Czechoslovak electronic company during communism. You may buy the device above for $0.04 or for $20 a month.)

His teenage activities were useful because if you study how to detect the tiny motion of the needle tracking a vinyl record, it's not so far from measuring the oscillations caused by the gravitational waves. And the laser interferometer is obviously not useful just for the gravitational waves. It may measure lengths – constant and time-dependent lengths – very accurately. The accuracy is incomparably better than that achieved by vinyl records, of course, but it's a similar kind of task. LIGO may be viewed as an "extension of that path" that Rainer Weiss has followed.
The text above may make it sound as if Weiss were just a teenage and later adult engineer playing with vinyl records who happened to do something the general relativists found useful even though he had nothing to do with general relativity. That's not the case. You may listen to his 75-minute March 2016 KITP talk on gravitational waves. He told the audience pretty much everything about the waves, Einstein's theory, and the history of all this stuff, much as many of us who have given talks on the same subject have done. And when it comes to the experimental technicalities, he said much more, of course.
# Thread: Two questions involving surface area.

1. There was a typo in an earlier post. It should read

$\displaystyle \vec{r} = < 2 \sin 2 u, \; 2 + 2 \cos 2u,\; 4 \,v \, \sin u>$

and

$\displaystyle ||\, \vec{r}_u \, \times\, \vec{r}_v \, || = 16 \sin u$

Here's a picture of the surface in question.
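The corrected magnitude can be checked symbolically; a quick SymPy sketch (assuming $0 \le u \le \pi$ so that $\sin u \ge 0$):

```python
import sympy as sp

u, v = sp.symbols('u v', real=True)
r = sp.Matrix([2*sp.sin(2*u), 2 + 2*sp.cos(2*u), 4*v*sp.sin(u)])

ru, rv = r.diff(u), r.diff(v)          # tangent vectors r_u, r_v
cross = ru.cross(rv)                   # normal vector r_u x r_v
norm_sq = sp.simplify(cross.dot(cross))

print(norm_sq)  # 256*sin(u)**2, i.e. ||r_u x r_v|| = 16*sin(u) for 0 <= u <= pi
```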
bug-lilypond

## Re: alpha test, horizontal spacing

From: Trevor Daniels
Subject: Re: alpha test, horizontal spacing
Date: Thu, 17 Feb 2011 12:51:52 -0000

Hi Ralph

you wrote Wednesday, February 16, 2011 2:34 PM

> On Sat, Feb 12, 2011 at 6:04 AM, Jan Warchoł <address@hidden> wrote:
>
>> I think I have an idea how to explain this bug. I suppose it happens
>> because LilyPond is not aware that the dot is in voiceTwo context (and
>> therefore lower than usual). Compile this:
>>
>>   { g'4.*1/32 d''!32 g'4.*1/32 e''!32 }
>>   {
>>     \voiceTwo g'4.*1/32 \voiceOne d''!32
>>     \voiceTwo g'4.*1/32 \voiceOne e''!32
>>   }
>>
>> In the upper line, the first accidental (on d'') is too low to move left
>> (it would collide with the dot). The second accidental (on e'') is high
>> enough to be moved over the dot. Everything is fine here.
>>
>> Now in the second line the music is the same except that the dotted notes
>> are in the lower voice. This makes the dots move down; they are a whole
>> staff space lower. However, LilyPond fails to notice that, and engraves
>> the naturals exactly like in the upper line.
>
> What's the status of this? I cannot find an issue on the tracker. Did I
> miss something?

A tracker entry should be made using the example and description that
Jan provided above. I'd rate it low.

Trevor
94 views

$1)$ Master-Slave FF is designed to avoid race around condition

$2)$ Master-Slave FF is used to store $2$ bit information

Which of the following statements is correct? What is the meaning of $2$-bit information?

+1 Using a single SR or a single JK flip-flop we store exactly one bit of information, which we output using Q. Here they have asked whether a Master-Slave FF would store 2 bits of information.

0 How is that possible? 2 bits is not possible in any case, right? Because master and slave cannot work at the same time. Please explain more; I am unable to understand.

0 Yes, you are right, we will use the whole setup to store only 1 bit of information.

0 Then why are they asking about 2-bit information?

0 They are asking whether the statement is true or false, which in this case will be false.

0 Why false? Is a 2-bit Master-Slave not possible in any case?

0 The reason we use a master-slave configuration is to make a level-triggered flip-flop a negative-edge-triggered one. We can use two master-slave flip-flops to store 2 bits of information.

+1 The following text is from Digital Design by Morris Mano

0 @sakharam In the truth table, for input 01 it is also giving output 01, so how is a 2-bit output not possible?

+1 Ma'am, this is a single-bit output; $\overline{Q}_{n+1}$ will always be the complement of $Q_{n+1}$. This does not mean that we are storing two bits. We are storing a bit and its complement.

0 ok :)

+1 vote

$\text{1) Master-Slave FF is designed to avoid race around condition - TRUE}$

In the $SR$ flip-flop we have output ends $Q$ and $\overline{Q}$; when both inputs are 1, the output is indeterminate. To remove this behavior we connect output $Q$ to $R$ and $\overline{Q}$ to $S$. This creates toggling, and the toggling is so fast that we can't even remove it by giving a clock pulse shorter than the time required to toggle. This toggling is nothing but the race-around condition.
So to avoid this problem we use a $\text{Master-Slave FF}$, where the output of the $\text{Master FF}$ becomes available at the $\text{Slave}$ end after 1 clock cycle.

$\text{2) Master-Slave FF is used to store 2 bit information - FALSE}$

We are storing only 1 bit of information; the only difference here is that the 1 bit of information appears at the output end with a delay of 1 clock cycle.

$\textbf{PS: Edit}$

When the clock pulse is active, the Master FF's output is available but does not appear at the output end, and during the same cycle the Slave FF outputs what the Master FF produced in the previous cycle. When the clock goes low again, the slave becomes active and the Master FF's output appears at the Slave FF's end. Hence it may appear that it is storing 2 bits of information, but actually it is storing 1 bit of information.

0 2nd point is not clear

0 @srestha check the answer now, I have edited it

0 @!KARAN check this truth table: for input 01, the output is also 01. Then how is a 2-bit output not possible?

0 What is a flip-flop? Flip-flops and latches are used as data storage elements. A flip-flop is a device which stores a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". https://en.wikipedia.org/wiki/Flip-flop_(electronics) So whatever the internal states, we will be able to store only 1 bit of information, which will appear as the output.
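Both points can be illustrated with a toy simulation (my own sketch, not taken from Mano): a master-slave JK flip-flop in which the master computes the next state while the clock is high and the slave copies it on the falling edge. With J = K = 1 it toggles exactly once per clock pulse (no race-around), and the whole pair still stores only the single bit Q:

```python
class MasterSlaveJK:
    """Toy master-slave JK flip-flop; one clock() call = one full clock pulse."""

    def __init__(self):
        self.q = 0        # slave output: the single stored bit
        self.master = 0

    def clock(self, j, k):
        # Clock high: the master computes Q_next = J*~Q + ~K*Q from the
        # CURRENT slave output; the slave is disabled, so no race-around.
        self.master = (j & (1 ^ self.q)) | ((1 ^ k) & self.q)
        # Falling edge: the slave copies the master exactly once per pulse.
        self.q = self.master
        return self.q

ff = MasterSlaveJK()
# J=K=1 toggles exactly once per clock pulse instead of racing:
print([ff.clock(1, 1) for _ in range(4)])  # [1, 0, 1, 0]
```

The characteristic equation Q_next = J·Q̄ + K̄·Q is the standard JK truth table; the point is that only the one bit `q` survives between clock pulses.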
# Abstract Jordan Decomposition different from usual Jordan Decomposition

It's known that if $L\subset \mathfrak{gl}(V)$, with $V$ finite-dimensional, is a semisimple Lie algebra, then the abstract and usual Jordan decompositions in $L$ coincide. Is it possible to provide a counter-example if $L$ isn't semisimple?

Remark: The underlying field is algebraically closed of characteristic $0$.

- The underlying field should be algebraically closed of characteristic 0 (otherwise the discussion gets more complicated). – Jim Humphreys Apr 15 '11 at 22:17
- You're right. I'll edit my post. Thank you! – user14312 Apr 15 '11 at 22:54

I'm tempted to amplify Ben's precise short answer by emphasizing how subtle the notion of abstract (or intrinsic) Jordan decomposition really is. You start with a matrix Lie algebra which is (1) required to be isomorphic to its image under the adjoint representation (in other words, centerless). Then it makes sense to interchange the Jordan decompositions in the two settings. But these might not agree for every matrix realization, unless you also require (2) that the given Lie algebra satisfy a complete reducibility theorem for finite-dimensional representations. Then the proof I first saw in Bourbaki, which goes back to Richard Brauer's early work influenced by Weyl, takes over. (It needs characteristic 0 and at first an algebraically closed field to apply Weyl's complete reducibility theorem directly.) Ben's example already fails (1), as does a general linear Lie algebra, whereas a centerless solvable Lie algebra only fails (2). The somewhat elaborate-looking Bourbaki argument tempts people to take shortcuts (even in one published textbook), but as far as I can see the more sophisticated proof is really needed.

Consider the subalgebra $\left(\begin{matrix} 0& a\\ 0& 0\end{matrix}\right)$ in $\mathfrak{gl}(2)$. This is abelian, so in the abstract JD every element is semi-simple, but these are nilpotent linear operators.
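The nilpotency in Ben's counterexample is easy to check numerically; a tiny sanity check (taking $a=1$):

```python
import numpy as np

# N spans (as a varies) the abelian subalgebra of strictly upper triangular
# matrices in gl(2). As a linear operator N is nilpotent, so its usual Jordan
# decomposition is s = 0, n = N; abstractly, every element of an abelian Lie
# algebra is treated as semisimple (ad N = 0), so the decompositions disagree.
N = np.array([[0.0, 1.0],
              [0.0, 0.0]])

assert np.allclose(N @ N, np.zeros((2, 2)))  # N^2 = 0: nilpotent, not semisimple
print("N is nilpotent; usual JD: s = 0, n = N")
```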
# Constant Term in an Expansion

COMBIN-5X4GEF

What is the constant term $c_0$ in the expansion?

$$\left(x^2+2+\frac{1}{x^2}\right)^4 = c_8x^8 + c_7x^7 + \cdots + c_1x + c_0 + \frac{c_{-1}}{x} + \cdots + \frac{c_{-7}}{x^7} + \frac{c_{-8}}{x^8}$$

A $16$

B $32$

C $64$

D $70$

E $256$

F none of the above
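A quick way to check the answer with SymPy: note that $x^2+2+\frac{1}{x^2} = \left(x+\frac{1}{x}\right)^2$, so the whole expression is $\left(x+\frac{1}{x}\right)^8$ and the constant term is $\binom{8}{4}$:

```python
import sympy as sp

x = sp.symbols('x')
expr = sp.expand((x**2 + 2 + 1/x**2)**4)
c0 = expr.coeff(x, 0)         # the term with x**0
print(c0, sp.binomial(8, 4))  # 70 70  -> choice D
```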
James Wills (LSE): “Classical Particle Indistinguishability, Precisely.” I present an analysis of classical particle indistinguishability as ‘observational indistinguishability’ in a certain mathematically precise sense. I will argue that this leads to three interesting and welcome consequences in the foundations of statistical mechanics: (1) The identification and resolution of shortcomings in the ongoing debate concerning the solution to the N! problem: the problem in statistical mechanics of justifying the inclusion of a factor N! in a probability distribution over the phase space of N indistinguishable classical particles. (2) A reinterpretation of the quotienting procedure typically used to justify the N! term and a rigorous derivation of the N! factor which does not appeal to the metaphysics of particles and which rather draws only on facts about observables. (3) A reconstruction of Gibbs’ own argument as a special case of my analysis in which particles are observationally indistinguishable with respect to the Hamiltonian. I call this ‘dynamical indistinguishability’.
Chapter 2 - Graphs and Functions - 2.3 Functions - 2.3 Exercises - Page 214: 10 $(3, +\infty)$ Work Step by Step The graph is falling from $x=3$ to the right. Thus, the function is decreasing in the interval $(3, +\infty)$. Therefore, the largest open interval where the function decreases is $(3, +\infty)$. After you claim an answer you’ll have 24 hours to send in a draft. An editor will review the submission and either publish your submission or provide feedback.
# Determinant of a Matrix

• Last Updated : 20 Jan, 2023

The determinant of a matrix is the function that assigns a unique output (a real number) to every input matrix - the scalar value computed for a given square matrix. The determinant can be seen as the scaling factor of the transformation described by the matrix. It is useful for finding the solution of a system of linear equations, the inverse of a square matrix, and more. Determinants exist only for square matrices.

## Definition of Determinant of Matrix

The determinant of a matrix is defined as the sum of the products of the elements of any row or column with their corresponding cofactors. It is defined only for square matrices, of any order: 2×2, 3×3, 4×4, or in general n×n, where n is the number of rows (which, for a square matrix, equals the number of columns).

The determinant can also be defined as the function which maps every square matrix to a real number: if S is the set of all square matrices and R the set of all real numbers, the function f: S → R is defined by f(x) = y, where x ∈ S and y ∈ R; then f(x) is called the determinant of the input matrix.

### Symbol of Determinant

For any square matrix A, the determinant of A is denoted det A (or) |A|. The determinant is also denoted by the symbol Δ.

### Minor of Element of Matrix

Minors are needed to compute the determinant from single elements (every element) of the matrix. The minor of an element is the determinant obtained by eliminating the row and the column of that element.
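As a code sketch of minors and the cofactor formula Cij = (-1)^(i+j) Mij discussed in this section (the example matrix is mine, not the article's):

```python
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [4, 5, 6],
               [7, 8, 10]])

# Minor M_ij: determinant after deleting row i and column j (0-indexed here).
M01 = A.minor_submatrix(0, 1).det()   # det([[4, 6], [7, 10]]) = 40 - 42 = -2
C01 = (-1) ** (0 + 1) * M01           # cofactor: (-1)^(i+j) * minor

print(M01, C01)                                          # -2 2
assert M01 == A.minor(0, 1) and C01 == A.cofactor(0, 1)  # sympy built-ins agree
```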
If the matrix given is:

Minor of a12 will be the determinant:

Question: Find the minor of element 5 in the determinant

The minor of element 5 will be the determinant of

Calculating the determinant, the minor is obtained as: (2 × 1) – (2 × 2) = -2

### Cofactors of Element of Matrix

Cofactors are related to minors by a small formula; for an element aij, if the cofactor of this element is Cij and the minor is Mij, then the cofactor can be written as:

Cij = (-1)i+jMij

Question: Find the cofactor of the element placed in the first row and second column of the determinant:

In order to find the cofactor of the first-row, second-column element, i.e. the cofactor of 1, first find the minor of 1, which will be: M12 = 4

Now, applying the formula for the cofactor:

C12 = (-1)1 + 2M12

C12 = (-1)3 × 4

C12 = -4

### Adjoint of a Matrix

The adjoint of a matrix of order n can be defined as the transpose of its cofactor matrix. For a matrix A:

### Transpose of a Matrix

The transpose of a matrix A is denoted AT or A'. The vertical side of a matrix is known as a column and the horizontal side as a row; transposing a matrix means replacing the rows with the columns and vice versa. Since the rows and columns are interchanged, the order of the matrix also changes: if a matrix is given as A = [aij]m×n, then its transpose will be AT or A' = [aji]n×m

Question: What will be the transpose of the matrix A =

Interchanging rows and columns, AT

## Determinant of a 1×1 Matrix

Let X = [a] be the matrix of order one; then its determinant is given by det(X) = a.

## Determinant of a 2×2 Matrix

The determinant of any 2×2 square matrix A =  is calculated using the formula |A| = ad – bc

Example: Find the determinant of A =

Solution: The determinant of A is calculated as | A | = 3×3 – 2×2 = 9 – 4 = 5

## Physical Significance of Determinant

Consider a 2D matrix; each column of this matrix can be considered as a vector on the x-y plane.
So, the determinant of two vectors on a 2D plane gives the area enclosed between them. If we extend this concept, in 3D the determinant gives the volume enclosed by the three column vectors.

Area enclosed between two vectors in 2D

## Determinant of a 3×3 Matrix

The determinant of a 3×3 matrix is computed by expressing it in terms of 2nd-order determinants. It can be expanded either along rows (R1, R2 or R3) or columns (C1, C2 or C3). Consider a matrix A of order 3×3:

A =

For calculating the determinant of a 3×3 matrix, use the following steps:

Step 1: Multiply the first element a11 of row R1 by (-1)(1 + 1) [that is, (-1) raised to the sum of the suffixes in a11] and by the second-order determinant obtained by deleting the elements of row R1 and column C1 of A (as a11 lies in R1 and C1).

Step 2: Similarly, multiply the second element of the first row R1 by the determinant obtained after deleting the first row and second column.

Step 3: Multiply the third element of row R1 by the determinant obtained after deleting the first row and third column.

Step 4: Now the expansion of the determinant of A, that is |A|, can be written as |A| =

Similarly, in this way, we can expand it along any row and any column.

Example: Evaluate the determinant det(A) =

Solution: We see that the third column has the most zeros, so it is easiest to expand along that column. det(A) =

## Laplace Formula for Determinant

Laplace's formula expresses the determinant of a matrix in terms of the minors of the matrix. If An×n is the given square matrix and Cij is the cofactor of aij, then, expanding along any row i (or, analogously, any column j),

det(A) = Σj aij Cij

## Properties of Determinants

Various properties of the determinants of a square matrix are discussed below:

• Reflection Property: The value of the determinant remains unchanged when rows and columns are interchanged; that is, the determinant of a matrix and that of its transpose are the same.
• Switching Property: If any two rows or columns of a determinant are interchanged, then the sign of the determinant changes.

Example:

Solution: det A = [3×{(1×1)-(0×1)}] - [3×{(2×1)-(5×1)}] + [0×{(2×0)-(5×1)}] = {3×(1-0)} - {3×(2-5)} + 0 = 3 + 9 = 12

Now, interchanging Row 1 with Row 2, the determinant will be: det A = [2×{(3×1)-(0×0)}] - [1×{(3×1)-(5×0)}] + [1×{(3×0)-(5×3)}] = 6 - 3 - 15 = -12

• Repetition Property/Proportionality Property: If any two rows or any two columns of a determinant are identical, then the value of the determinant is zero.

• Scalar Multiple Property: If each element of a row (or a column) of a determinant is multiplied by a constant k, then its value gets multiplied by k.

• Sum Property: If some or all elements of a row or column can be expressed as the sum of two or more terms, then the determinant can also be expressed as the sum of two or more determinants.

## Solved Examples on Determinant of Matrix

Example 1: If x, y, and z are different, and A = , then show that 1 + xyz = 0.

Solution: Using the Sum Property, on solving this determinant and expanding it, det A = (1 + xyz)(y-x)(z-y)(z-x). Since it is given in the question that x, y and z all have different values and the determinant is 0, the only factor that can be zero is 1 + xyz. Hence, 1 + xyz = 0.

Example 2: Evaluate

Solution: Using the Scalar Multiple Property and the Repetition Property

Example 3: Evaluate the determinant

Solution: Using the Proportionality Property: two of the rows of the matrix are identical, so the determinant is zero.

## FAQs on Determinant of Matrix

### Question 1: What is meant by the determinant formula?

For any 3×3 matrix A =  the shortcut formula for computing its determinant is:

|A| = a (ei − fh) − b (di − fg) + c (dh − eg)

### Question 2: Can the determinant of a matrix be negative?

Yes, the determinant of a matrix can be negative.

### Question 3: Can the determinant of a matrix ever be equal to 0?
Yes, the determinant of a matrix can be zero, for example if any one row or column of the matrix has all zero values. It can also be zero if any two rows or columns of the matrix are equal.
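The listed properties are easy to verify numerically; a sketch using SymPy, with the matrix of the switching-property example (which, judging from the worked expansion above, appears to be [[3, 3, 0], [2, 1, 1], [5, 0, 1]] — a reconstruction, since the original image is missing):

```python
import sympy as sp

# Reconstructed example matrix; its determinant is 12, as in the worked example.
A = sp.Matrix([[3, 3, 0],
               [2, 1, 1],
               [5, 0, 1]])
assert A.det() == 12

# Reflection: det(A) == det(A^T)
assert A.det() == A.T.det()

# Switching: swapping two rows flips the sign (12 -> -12)
B = A.elementary_row_op('n<->m', row1=0, row2=1)
assert B.det() == -A.det()

# Repetition: two identical rows give det 0
C = sp.Matrix([[1, 2, 3], [1, 2, 3], [4, 5, 6]])
assert C.det() == 0

# Scalar multiple: scaling one row by k scales the determinant by k
D = A.elementary_row_op('n->kn', row=0, k=7)
assert D.det() == 7 * A.det()
print("all determinant properties verified")
```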
# How could I see Andromeda as large as in this picture?

In this question an image is shown (repeated below) which gives an idea of the "true" angular extent of the Andromeda galaxy in our sky. When I see Andromeda it is usually a bit of a smudge perhaps the size of the full moon, presumably because I cannot see much beyond the bright nucleus and bulge of the galaxy.

Being a stellar astronomer, I have never thought much about the practicalities of seeing extended objects in the night sky. My question is: what kind of observing conditions would allow me to see Andromeda with an angular size of $\sim 5$ degrees? Would it be sufficient to get to a really dark site with the naked eye, or would I also need to be looking for it with binoculars or a telescope? Or is it something where you must take a picture and do some processing to eliminate the general sky background?

EDIT: To be clear, I don't want my question reading back to me - as in "you can only see a smudge with the naked eye"; what I want to know is the observational conditions/instrument I need to be able to see a several-degree-diameter Andromeda galaxy; preferably with the evidence?

"Would it be sufficient to get to a really dark site with the naked eye"

In my personal experience, I have seen the Andromeda Galaxy as a quite bright dot from a hill station 2133 meters (6998 feet) above sea level with almost no light pollution, although the town is said to have light pollution of Class 2 on the Bortle Scale. But it is not possible to see the disk of the galaxy with the unaided eye; the light-gathering capacity of the eye is very limited - as @PM2Ring states, the "refresh rate" of our eye is around 1/10 of a second.

Typically, in an excellent dark sky, binoculars would help to see some details of the galaxy. Alternatively, a camera with a good zoom lens or an amateur telescope, with an exposure of just 1-5 minutes, can capture enough detail of the galaxy. Here's how it would look.
(It may look slightly better than the image below)

Suppose you want to see the Andromeda galaxy as something like this:

Image from Space.com

Then a series of long-exposure images has to be stacked, or the curves stretched with some editing.

• Can you put an angular scale on the picture? May 20, 2022 at 8:10
• @ProfRob You mean a scale to measure its apparent angular size in the picture? May 20, 2022 at 10:53
• Without angular scales on the image and details of how the image was obtained, your pictures are meaningless. I suppose I could try and work it out from the foreground stars but I'm not going to. May 20, 2022 at 19:19
• @ProfRob I have obtained some information and will soon add it to the picture May 21, 2022 at 4:04

You'd need new eyes able to collect vastly more photons. Barring an alien eye transplant, a large telescope mirror with a low-power eyepiece gives a reasonable view of the Andromeda galaxy in real time, but it will always be "smudgy". Alternatively, a long exposure with a 35 mm digital camera on a moonless night will also do the trick, collecting more photons over time. Alas, human eyes need help from technology!
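The ~5 degree figure itself follows from round numbers (assumed values: disk diameter roughly 220,000 light-years, distance roughly 2.5 million light-years):

```python
import math

diameter_ly = 220_000     # assumed M31 disk diameter
distance_ly = 2_500_000   # assumed distance to M31

# Full angle subtended by the disk.
angle_deg = math.degrees(2 * math.atan(diameter_ly / (2 * distance_ly)))
print(f"{angle_deg:.1f} degrees")  # ~5.0
```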
# Existence of a unique “outer” normal vector

A compact set $K\subseteq\mathbb R^3$ is said to have a smooth boundary if for all $p\in\partial K$ there is an open neighborhood $U$ of $p$ and a continuously differentiable function $\psi:U\to\mathbb R$ such that $$K\cap U=\left\{x\in U:\psi(x)\le 0\right\}\tag1$$ and $$\psi'(x)\ne0\;\;\;\text{for all }x\in U\tag2.$$ In that case, $\partial K$ is a $2$-dimensional $C^1$-submanifold of $\mathbb R^3$. Now, if $M$ is a $k$-dimensional $C^1$-submanifold of $\mathbb R^n$, $k\le n$:

• $v\in\mathbb R^n$ is called a tangent vector on $M$ at $p\in M$ if there is an $\varepsilon>0$ and a continuously differentiable function $\gamma:(-\varepsilon,\varepsilon)\to M$ such that $\gamma(0)=p$ and $\gamma'(0)=v$. The space of all such $v$ is denoted by $T_p(M)$.
• $v\in\mathbb R^n$ is called a normal vector on $M$ at $p$ if $v$ is orthogonal to $T_p(M)$. The space of all such $v$ is denoted by $N_p(M)$.

We can show that $N_p(M)$ is $(n-k)$-dimensional for all $p\in M$. So, $N_p(\partial K)$ is $1$-dimensional for all $p\in\partial K$. We can select the unique $\nu(p)\in N_p(\partial K)$ of unit length with the following property: there is an $\varepsilon>0$ such that $p+t\nu(p)\not\in K$ for all $t\in(0,\varepsilon)$. The map $\nu:\partial K\to\mathbb R^3$ is seen to be continuous.

Are we able to find such a unique and continuous normal field $\nu$ even for more general classes of manifolds? For example, consider the following triangulation of a teapot (just the empty shell):

Since even a single triangle is not a "compact set with smooth boundary", we cannot apply the result described above. But shouldn't we nevertheless be able to select a single "outer" unit normal vector at each point of the teapot?

• Yes, that's correct. Are we able to overcome this issue? What I've got in mind is the following: It would be sufficient for me if $\nu$ is only well-defined on sets of positive "surface measure".
The union of all edges should be a null set w.r.t. that measure, and at any other point we should be able to select a unique normal. My problem is that I'm not able to formalize this rigorously. It's not even clear to me how the "surface measure" for my teapot needs to be defined. I only know the construction of the surface measure for submanifolds. – 0xbadf00d Jun 30 '18 at 20:47
• The manifold (surface) you are triangulating (I mean topological manifold, not smooth). By "the surface measure" I basically understand integration on surfaces, since you can use all the machinery of the Lebesgue measure (or any other measure defined on $\mathbb{R}^n$) using local charts. You can read about that in Do Carmo's Differential Geometry of Curves and Surfaces and, more generally, in Madsen's From Calculus to Cohomology (chapter 10, I think; it's called Integration on Manifolds). – Javi Jun 30 '18 at 21:13
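On the flat interior of each triangle the normal is indeed well defined; a sketch of computing it from a consistently wound triangle (the construction and the counter-clockwise convention here are mine, just to illustrate):

```python
import numpy as np

def triangle_normal(p0, p1, p2):
    """Unit normal of a triangle via the cross product of two edge vectors.

    Which of the two normals is the 'outer' one is fixed by the winding
    convention: vertices ordered counter-clockwise as seen from outside."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

# Triangle in the z = 0 plane, CCW as seen from +z:
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
print(triangle_normal(p0, p1, p2))  # [0. 0. 1.]
```

Away from the edges and vertices (a null set for any reasonable surface measure), this per-triangle normal is the unique "outer" normal the question asks about.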
# Is there a natural topology on C(X), if X is infinite-dimensional? Suppose $X$ is an infinite-dimensional Banach space. Is there a natural topology on $$C(X)=\{f:X\to\mathbb{R}: \text{ f is continuous}\}?$$ - @NateEldredge Oops, I did not read the first line... – 1015 Feb 2 '13 at 18:33 @NateEldredge, exactly. It is not clear what would be a family of seminorms that gives $C(X)$ a Frechet space topology. – Cantor Feb 2 '13 at 18:33 @Cantor Sorry, my comment was absolutely pointless, I had not read the question carefully. – 1015 Feb 2 '13 at 18:36 So, what is one natural weak topology on $C(X)$?! – Cantor Feb 2 '13 at 18:35 @Cantor: Sorry, meant to say the weak* topology. It's the weak topology with respect to the functions $\{\phi\mapsto \phi(x) \ | \ x \in X\}$. – Jim Feb 2 '13 at 18:41 Well, the weak* topology is a weak topology. It's just that if you say weak topology that's a topology you put on a space using a set of functions on that space. We have a space and a set of functions but we don't want a topology on $X$, we want a topology on $C(X)$, so it's a little confusing to just say weak topology. If we want the topology to be on $C(X)$ we have to specify the functions on $C(X)$ that we are taking the weak topology with respect to. By saying weak* topology I'm just being specific about what those functions are. – Jim Feb 2 '13 at 18:56
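To spell out Jim's suggestion (a sketch of one standard construction, not a claim that it is the canonical topology): the weak topology on $C(X)$ generated by the evaluation maps is the topology of pointwise convergence, given by the seminorms

```latex
% Topology of pointwise convergence on C(X):
% the initial topology w.r.t. the evaluations ev_x(f) = f(x).
p_x(f) := |f(x)|, \qquad x \in X,\ f \in C(X),
\qquad\text{so that}\qquad
f_\alpha \to f \iff f_\alpha(x) \to f(x) \ \text{for every } x \in X.
```

This makes $C(X)$ a locally convex space, but since the family of seminorms is indexed by the (uncountable) points of $X$, it is not obviously metrizable, consistent with the comment that no Fréchet structure is apparent.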
### Home > CALC > Chapter 5 > Lesson 5.5.1 > Problem5-156 5-156. Find the derivative of each of the following functions: 1. $f(x) = x^2\operatorname{tan}(x)$ Use the Product Rule. $f '(x) = 2x\operatorname{tan} x + x^2\operatorname{sec}^2 x$ 1. $g ( x ) = \frac { 1 } { \operatorname { sin } x }$ 1. $f(x) = (2x + 1)(3x − 1)^3$ Combine the Product Rule with the Chain Rule. 1. $g ( x ) = \operatorname { cos } \sqrt { x + 1 }$
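All four derivatives can be verified with SymPy; a quick check (the closed forms for parts (b) and (d) are my own, obtained with the chain rule, since the lesson leaves them as exercises):

```python
import sympy as sp

x = sp.symbols('x')

# (a) product rule: f'(x) = 2x tan x + x^2 sec^2 x
f = x**2 * sp.tan(x)
assert sp.simplify(sp.diff(f, x) - (2*x*sp.tan(x) + x**2*sp.sec(x)**2)) == 0

# (b) g(x) = 1/sin(x) = csc(x), so g'(x) = -csc(x) cot(x)
g = 1/sp.sin(x)
assert sp.simplify(sp.diff(g, x) + sp.csc(x)*sp.cot(x)) == 0

# (c) product rule + chain rule: h'(x) = 2(3x-1)^3 + 9(2x+1)(3x-1)^2
h = (2*x + 1)*(3*x - 1)**3
assert sp.simplify(sp.expand(sp.diff(h, x))
                   - sp.expand(2*(3*x - 1)**3 + 9*(2*x + 1)*(3*x - 1)**2)) == 0

# (d) chain rule: d/dx cos(sqrt(x+1)) = -sin(sqrt(x+1)) / (2 sqrt(x+1))
k = sp.cos(sp.sqrt(x + 1))
assert sp.simplify(sp.diff(k, x) + sp.sin(sp.sqrt(x + 1))/(2*sp.sqrt(x + 1))) == 0

print("all four derivatives check out")
```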
• Volume 62, Issue 2 February 2004, pages 147-541

• Preface

• Precision tests of the standard model, the Higgs, and new physics

We present a concise review of the status of the standard model and of the search for new physics.

• Physics with large extra dimensions

The recent understanding of string theory opens the possibility that the string scale can be as low as a few TeV. The apparent weakness of gravitational interactions can then be accounted for by the existence of large internal dimensions, in the sub-millimeter region. Furthermore, our world must be confined to live on a brane transverse to these large dimensions, with which it interacts only gravitationally. In my lecture, I briefly describe this scenario, which gives a new theoretical framework for solving the gauge hierarchy problem and the unification of all interactions. I also discuss a minimal embedding of the standard model, gauge coupling unification and proton stability.

• Higgs physics at LHC

The Large Hadron Collider (LHC) and its detectors, ATLAS and CMS, are being built to study TeV-scale physics, and to fully understand the electroweak symmetry breaking mechanism. The Monte-Carlo simulation results for the standard model and minimal supersymmetric standard model Higgs boson searches and parameter measurements are discussed. Emphasis is placed on recent investigations of Higgs produced in association with top quarks and in vector boson fusion channels. These results indicate that the Higgs sector can be explored in many channels within a couple of years of LHC operation, i.e., L = 30 fb⁻¹. Complete coverage, including measurements of Higgs parameters, can be carried out with the full LHC program.

• Higgs physics at future colliders: Recent theoretical developments

I review the physics of the Higgs sector in the standard model and its minimal supersymmetric extension, the MSSM.
I will discuss the prospects for discovering the Higgs particles at the upgraded Tevatron, at the large hadron collider, and at a future high-energy e⁺e⁻ linear collider with centre-of-mass energy in the 350–800 GeV range, as well as the possibilities for studying their fundamental properties. Some emphasis will be put on the theoretical developments which occurred in the last two years. • Particle physics explanations for ultra-high energy cosmic ray events The origin of cosmic ray events with E ≳ 10¹¹ GeV remains mysterious. In this talk I briefly summarize several proposed particle physics explanations: a breakdown of Lorentz invariance, the ‘Z-burst’ scenario, new hadrons with masses of several GeV as primaries, and magnetic monopoles with mass below 10¹⁰ GeV as primaries. I then describe in a little more detail the idea that these events are due to the decays of very massive, long-lived exotic particles. • GUT precursors and fixed points in higher-dimensional theories Within the context of traditional logarithmic grand unification at M_GUT ≈ 10¹⁶ GeV, we show that it is nevertheless possible to observe certain GUT states such as X and Y gauge bosons at lower scales, perhaps even in the TeV range. We refer to such states as ‘GUT precursors’. Such states offer an interesting alternative possibility for new physics at the TeV scale, even when the scale of gauge coupling unification remains high, and suggest that it may be possible to probe GUT physics directly even within the context of high-scale gauge coupling unification. More generally, our results also suggest that it is possible to construct self-consistent ‘hybrid’ models containing widely separated energy scales, and give rise to a Kaluza-Klein realization of non-trivial fixed points in higher-dimensional gauge theories. • New neutrino experiments Following incredible recent progress in understanding neutrino oscillations, many new ambitious experiments are being planned to study neutrino properties.
The most important may be to find a non-zero value of θ₁₃. The most promising way to do this appears to be to measure ν_μ → ν_e oscillations with an E/L near Δm²_atm. Future neutrino experiments are great. • Solar neutrino oscillation phenomenology This article summarises the status of the solar neutrino oscillation phenomenology at the end of 2002 in the light of the SNO and KamLAND results. We first present the allowed areas obtained from global solar analysis and demonstrate the preference of the solar data towards the large-mixing-angle (LMA) MSW solution. A clear confirmation in favour of the LMA solution comes from the KamLAND reactor neutrino data. The KamLAND spectral data in conjunction with the global solar data further narrows down the allowed LMA region and splits it into two allowed zones: a low Δm² region (low-LMA) and a high Δm² region (high-LMA). We demonstrate through a projected analysis that with an exposure of 3 kton-year (kTy) KamLAND can remove this ambiguity. • P and CP violation in B physics While the Kobayashi-Maskawa single phase origin of CP violation passed its first crucial precision test in B → J/ψK_s, the chirality of weak b-quark couplings has not yet been carefully tested. We discuss recent proposals for studying the chiral and CP-violating structures of these couplings in radiative and hadronic B decays. • Leptonic flavor and CP violation Recent neutrino oscillation data teach us that the neutrinos have masses and that they mix. We discuss two ways that can be used to probe other non-standard leptonic physics. We show that non-standard neutrino interaction can be probed in neutrino oscillation experiments and discuss sneutrino-antisneutrino mixing. • Higgs bosons in the standard model, the MSSM and beyond I summarize the basic theory and selected phenomenology for the Higgs boson(s) of the standard model, the minimal supersymmetric model and some extensions thereof, including the next-to-minimal supersymmetric model.
• Inflation, large scale structure and particle physics We review experimental and theoretical developments in inflation and its application to structure formation, including the curvaton idea. We then discuss a particle physics model of supersymmetric hybrid inflation at the intermediate scale in which the Higgs scalar field is responsible for large scale structure, and show how such a theory is completely natural in the framework of extra dimensions with an intermediate string scale. • Understanding neutrino masses and mixings We discuss ways to understand large neutrino mixings using new symmetries of quarks and leptons beyond the standard model for the three allowed patterns of neutrino masses: normal, inverted hierarchy and degenerate masses. • SUSY dark matter — a collider physicist’s perspective A short tour of supersymmetric dark matter and its connection to collider physics. • Experimental and phenomenological status of neutrino anomalies The current status of neutrino anomalies is summarized; the KamLAND experiment is described and the recent results of KamLAND presented. • Leptogenesis I present the theoretical basis for leptogenesis and its implications for the structure of the universe. It is suggested that density fluctuations grow during the transition period and remnants of this effect should be sought in the universe. The relation between theories with Majorana neutrinos and low energy phenomena, including oscillations, advanced considerably during the past two years with a consistent picture developed in several models. • Electroweak breaking and supersymmetry breaking We discuss the clash between the absence of fine tuning in the Higgs potential and a sufficient suppression of flavour changing neutral current transitions in supersymmetric extensions of the standard model. It is pointed out that a horizontal U(1) symmetry combined with D-term supersymmetry breaking provides a realistic framework for solving both problems.
• Transplanckian collisions in TeV scale gravity Collisions at transplanckian energies offer model independent tests of TeV scale gravity. One spectacular signal is given by black-hole production, though a full calculation of the corresponding cross-section is not yet available. Another signal is given by gravitational elastic scattering, which may be less spectacular but which can be nicely computed in the forward region using the eikonal approximation. In this talk I discuss the distinctive signatures of eikonalized scattering at future accelerators. • Particle dark matter — A theorist’s perspective Dark matter (DM) is presumably made of some new, exotic particles that appear in extensions of the standard model. After giving a brief overview of some popular candidates, I discuss in more detail the most appealing case of the supersymmetric neutralino. • Tachyon dynamics in string theory We summarize the recent developments in the study of time dependent solutions describing the rolling of a tachyon on a non-BPS D-brane system. • Lattice matrix elements and CP violation in B and K physics: Status and outlook The status of lattice calculations of hadron matrix elements along with CP violation in B and in K systems is reviewed. Lattice has provided useful input which, in conjunction with experimental data, leads to the conclusion that the CP-odd phase in the CKM matrix plays the dominant role in the observed asymmetry in B → ψK_s. It is now quite likely that any beyond-the-SM CP-odd phase will cause only small deviations in B-physics. Search for the effects of the new phase(s) will consequently require very large data samples as well as very precise theoretical predictions. Clean determination of all the angles of the unitarity triangle therefore becomes essential. In this regard B → K D⁰ processes play a unique role. Regarding K-decays, the remarkable progress made by theory with regard to maintenance of chiral symmetry on the lattice is briefly discussed.
First applications already provide quantitative information on B_K and the ΔI = 1/2 rule. In the lattice calculation, the enhancement in Re A₀ appears to arise solely from tree operators, esp. Q₂; the penguin contribution to Re A₀ appears to be very small. However, improved calculations are necessary for ε′/ε, as the contributions of QCD penguins and electroweak penguins largely seem to cancel. There are good reasons, though, to believe that these cancellations will not survive improvements that are now underway. The importance of determining the unitarity triangle purely from K-decays is also emphasized. • Unraveling supersymmetry at future colliders After a quick review of the current limits on sparticle masses, we outline the prospects for their discovery at future colliders. We then proceed to discuss how precision measurements of sparticle masses can provide information about how SM superpartners acquire their masses. Finally, we examine how we can proceed to establish whether or not any new physics discovered in the future is supersymmetry, and describe how we might zero in on the framework of SUSY breaking. In this connection, we review sparticle mass measurements at future colliders, and point out that some capabilities of experiments at e⁺e⁻ linear colliders may have been over-stated in the literature. • Baryogenesis and the new cosmology I begin this talk with a brief review of the status of approaches to understanding the origin of the baryon asymmetry of the Universe (BAU). I then describe a recent model unifying three seemingly distinct problems facing particle cosmology: the origin of inflation, the generation of the BAU and the nature of dark energy. • High density matter at RHIC QCD predicts a phase transition between hadronic matter and a quark-gluon plasma at high energy density. The relativistic heavy ion collider (RHIC) at Brookhaven National Laboratory is a new facility dedicated to the experimental study of matter under extreme conditions.
Already the first round of experimental results at RHIC indicated that the conditions to create a new state of matter are indeed reached in the collisions of heavy nuclei. Studies of particle spectra and their correlations at low transverse momenta provide evidence of strong pressure gradients in the highly interacting dense medium and hint that we observe a system in thermal equilibrium. Recent runs with high statistics allow us to explore the regime of hard-scattering processes, where the suppression of hadrons at large transverse momentum and the quenching of di-jets are observed, thus providing further evidence for the extreme high density matter created in collisions at RHIC. • High-energy cosmic rays: Puzzles, models, and giga-ton neutrino telescopes The existence of cosmic rays of energies exceeding 10²⁰ eV is one of the mysteries of high-energy astrophysics. The spectrum and the high energy to which it extends rule out almost all suggested source models. The challenges posed by observations to models for the origin of high-energy cosmic rays are reviewed, and the implications of recent new experimental results are discussed. Large area high-energy cosmic ray detectors and large volume high-energy neutrino detectors currently under construction may resolve the high-energy cosmic ray puzzle, and shed light on the identity and physics of the most powerful accelerators in the Universe. • Supersymmetry breaking with extra dimensions This talk reviews some aspects of supersymmetry breaking in the presence of extra dimensions. The first part is a general introduction, recalling the motivations for supersymmetry and extra dimensions, as well as some unsolved problems of four-dimensional models of supersymmetry breaking. The central part is a more focused introduction to a mechanism for (super)symmetry breaking, proposed first by Scherk and Schwarz, where extra dimensions play a crucial role.
The last part is devoted to the description of some recent results and of some open problems. • Links between neutrino oscillations, leptogenesis, and proton decay within supersymmetric grand unification Evidence in favor of supersymmetric grand unification, including that based on the observed family multiplet-structure, gauge coupling unification, neutrino oscillations, baryogenesis, and certain intriguing features of quark-lepton masses and mixings, is noted. It is argued that attempts to understand (a) the tiny neutrino masses (especially Δm²(ν₂–ν₃)), (b) the baryon asymmetry of the Universe (which seems to need leptogenesis), and (c) the observed features of fermion masses such as the ratio m_b/m_τ, the smallness of V_cb and the maximality of $$\Theta _{\nu _\mu \nu _\tau }^{OSC}$$ seem to select out the route to higher unification based on an effective string-unified G(224) = SU(2)_L × SU(2)_R × SU(4)_c or SO(10) symmetry that should be operative in 4D, as opposed to other alternatives. A predictive SO(10)/G(224) framework possessing supersymmetry is presented that successfully describes the masses and mixings of all fermions including neutrinos. It also accounts for the observed baryon asymmetry of the Universe by utilizing the process of leptogenesis, which is natural to this framework. It is argued that a conservative upper limit on the proton lifetime within this SO(10)/G(224) framework, which is so far most successful, is given by (1/3–2) × 10³⁴ years. This in turn strongly suggests that an improvement in the current sensitivity by a factor of five to ten (compared to SuperK) ought to reveal proton decay. Implications of this prediction for the next-generation nucleon decay and neutrino-detector are noted. • Phenomenology of the minimal SO(10) SUSY model In this talk I define what I call the minimal SO(10) SUSY model.
I then discuss the phenomenological consequences of this theory, vis-a-vis gauge and Yukawa coupling unification, Higgs and super-particle masses, the anomalous magnetic moment of the muon, the decay B_s → μ⁺μ⁻ and dark matter. • List of Participants
{}
# American Institute of Mathematical Sciences 2015, 12(5): 917-936. doi: 10.3934/mbe.2015.12.917 ## Parameters identification for a model of T cell homeostasis 1 IMB UMR CNRS 5251, Bordeaux University, 3 Place de la Victoire, 33076 Bordeaux Cedex, France 2 INSERM U897, ISPED, Bordeaux University, Bordeaux, France Received November 2014 Revised April 2015 Published June 2015 In this study, we consider a model of T cell homeostasis based on the Smith-Martin model. This nonlinear model is structured by age and CD44 expression. First, we establish the mathematical well-posedness of the model system. Next, we prove the theoretical identifiability regarding the up-regulation of CD44, the proliferation time phase and the rate of entry into division, by using the experimental data. Finally, we compare two versions of the Smith-Martin model and we identify which model fits the experimental data best. Citation: Houssein Ayoub, Bedreddine Ainseba, Michel Langlais, Rodolphe Thiébaut. Parameters identification for a model of T cell homeostasis. Mathematical Biosciences & Engineering, 2015, 12 (5) : 917-936. doi: 10.3934/mbe.2015.12.917
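The abstract does not reproduce the model equations, but a standard delay-equation reduction of the Smith-Martin picture is easy to sketch: resting A-cells enter division at rate λ, spend a fixed time Δ in the proliferating B-phase, and return as two A-cells, with death rates d_A and d_B in the two phases. The Euler scheme below is my own minimal illustration of that reduced model under made-up parameter values, not the authors' age- and CD44-structured system:

```python
import math

def smith_martin_a_cells(lam=0.1, delta=0.5, d_a=0.05, d_b=0.05,
                         a0=1000.0, dt=0.01, t_end=20.0):
    """Euler scheme for the delay ODE
        dA/dt = -(lam + d_a) A(t) + 2 lam exp(-d_b delta) A(t - delta):
    resting cells leave at rate lam (plus death d_a) and return doubled
    after the fixed B-phase duration delta, discounted by B-phase death."""
    steps = int(t_end / dt)
    lag = int(delta / dt)
    history = [a0] * (lag + 1)          # A(t) on [-delta, 0]: constant past
    a = a0
    for _ in range(steps):
        delayed = history[-(lag + 1)]   # A(t - delta)
        a += dt * (-(lam + d_a) * a + 2 * lam * math.exp(-d_b * delta) * delayed)
        history.append(a)
        history = history[-(lag + 1):]  # keep only the window still needed
    return a
```

With these parameters the effective return rate 2λe^(−d_B·Δ) exceeds the loss rate λ + d_A, so the resting pool grows, which is the qualitative behaviour one would fit against proliferation data.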
{}
Plot the curve of a group of parametric equations [closed] This question is about how to use MMA to plot a curve from the solution of a system of parametric equations. Take a system of algebraic equations as an example: \begin{align} y^2 - x^2 &= t^2 \\ xy &= t \,, \end{align} where the parameter $t$ ranges over $t \leq - \frac{1}{\sqrt{2}}$ and $t \geq \frac{1}{\sqrt{2}}$. I managed to solve the equations to get an explicit form via MMA: Solve[{-x^2 + y^2 == t^2, x y == t}, {x, y}] but I don't know how to plot the solutions $\{x,y\}$ over the range $t \leq - \frac{1}{\sqrt{2}}$ and $t \geq \frac{1}{\sqrt{2}}$. Based on rhermans's code, I plotted two graphics for $t≤−1/\sqrt{2}$ and $t≥ 1/\sqrt{2}$, respectively, and used Show to combine the two results into one single graph. The problem is solved. closed as off-topic by Daniel Lichtblau, halirutan♦, MarcoB, Henrik Schumacher, Sektor May 30 '18 at 8:40 This question appears to be off-topic. The users who voted to close gave this specific reason: • "This question arises due to a simple mistake such as a trivial syntax error, incorrect capitalization, spelling mistake, or other typographical error and is unlikely to help any future visitors, or else it is easily found in the documentation." – Daniel Lichtblau, halirutan, MarcoB, Henrik Schumacher, Sektor If this question can be reworded to fit the rules in the help center, please edit the question. • That is actually referred to as an implicit form. If you look for "plot implicit function" (without the quotes) in the help browser, first hit is to ContourPlot. – Daniel Lichtblau May 29 '18 at 19:08 1 Answer ParametricPlot[ Evaluate[ {x, y} /. Solve[{-x^2 + y^2 == t^2, x y == t}, {x, y}] ] , {t, -(1/Sqrt[2]), 1/Sqrt[2]} , PlotTheme -> "Scientific" , AspectRatio -> 1 ] • Hey, @rhermans, Thank you!
I put your code in MMA and just replace {t, -(1/Sqrt[2]), 1/Sqrt[2]} with {t, -8, -1/Sqrt[2]} to plot one part of the range($t \leq -1/\sqrt{2}$), or with {t, 1/Sqrt[2],8} to plot the other part ($t \geq 1/\sqrt{2}$). Is there any smarter way to plot the two parts in a single graph? – Zoe Rowa May 29 '18 at 15:47 • @ZoeRowa Read this first. You should ask a new question and explain clearly what you need now. – rhermans May 29 '18 at 15:50
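Outside Mathematica, the same curve can be traced by solving the system by hand; the sketch below uses the sign convention from the Solve call (first equation −x² + y² = t²) and samples both admissible parameter ranges in one sweep, so the points can be fed to any plotting library in a single figure:

```python
import math

def branches(t):
    """Real solutions (x, y) of x*y = t and -x**2 + y**2 = t**2, for t != 0.

    Substituting y = t/x gives x**4 + t**2 * x**2 - t**2 = 0, a quadratic
    in x**2 with a single non-negative root; both signs of x then solve
    the system, with y determined by y = t/x.
    """
    x2 = (-t**2 + math.sqrt(t**4 + 4 * t**2)) / 2.0
    x = math.sqrt(x2)
    return [(x, t / x), (-x, -t / x)]

# Sample t <= -1/sqrt(2) and t >= 1/sqrt(2) together, e.g. for plotting:
ts = [-8 + 0.01 * k for k in range(1500)
      if abs(-8 + 0.01 * k) >= 1 / math.sqrt(2)]
points = [p for t in ts for p in branches(t)]
```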
{}
Real Analysis-like convergence of loss functions Here's one thing I noticed. In elementary real analysis, when we say that a sequence $$s_n$$ converges to a point $$s$$, we require that for every $$\epsilon > 0$$ there exists an $$N \in \mathbb{N}$$ such that $$| s_n - s | < \epsilon$$ for all $$n \geq N$$. Why don't we set up the convergence of loss functions that way? I.e. we don't have a hyperparameter $$\epsilon$$ to ensure our convergence.
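One practical obstacle is that the limit $$s$$ (the final loss) is unknown during training, so code cannot test $$|s_n - s| < \epsilon$$ directly; a common surrogate is the Cauchy-style test on successive losses. A minimal sketch of such an ε-based stopping rule (my own illustration, not any particular library's API):

```python
def gradient_descent(f, grad, x0, lr=0.1, eps=1e-8, max_iter=10_000):
    """Minimise f by gradient descent, stopping when successive loss
    values differ by less than eps (a surrogate for |s_n - s| < eps,
    since the true limit s is unknown while training)."""
    x = x0
    prev = f(x)
    for _ in range(max_iter):
        x -= lr * grad(x)
        cur = f(x)
        if abs(cur - prev) < eps:
            break
        prev = cur
    return x, cur

# f(x) = (x - 3)^2 has its minimum at x = 3 with loss 0.
x, loss = gradient_descent(lambda x: (x - 3) ** 2,
                           lambda x: 2 * (x - 3), x0=0.0)
```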
{}
# Which groups contain a comb? The comb is the undirected simple graph with nodes $$\mathbb{N} \times \mathbb{N}$$ where $$\mathbb{N} \ni 0$$ and edges $$\{\{(m,n), (m,n+1)\}, \{(m,0), (m+1,0)\} \;|\; m \in \mathbb{N}, n \in \mathbb{N}\}$$ Let $$G$$ be a discrete group. Say that $$G$$ contains a comb if there exists a finite set $$S \not\ni 1_G$$ such that the comb is a subgraph of the (simple undirected) Cayley graph $$\mathrm{Cay}(\langle S \rangle, S)$$. I don't require spanning or induced subgraphs, just plain subgraphs. I don't require anything involving "Lipschitz": the teeth $$\{n\} \times \mathbb{N}$$ may get close to each other whenever they like, just not intersect each other or themselves. Suppose $$G$$ is not locally virtually cyclic. Does it necessarily contain a comb? One can equivalently restrict to f.g. groups and require that $$\mathrm{Cay}(G, S)$$ directly contains the comb (and drop the "locally"). In case the answer is "no" (or is hard to solve), I also state a quantitative version of this below. If $$G$$ is f.g. and not virtually cyclic then $$\mathrm{Cay}(G,S)$$ contains infinitely many vertex-disjoint rays for some finite generating set $$S$$. By joining these rays, we obtain that every group that is not locally virtually cyclic contains a comb with some co-infinite set of "teeth" removed. (This is more or less Halin's grid theorem; see the discussion and follow the references of http://www.dim.uchile.cl/~mstein/domin.pdf .) Let $$T \subset \mathbb{N}$$ be an infinite set. The $$T$$-comb is the undirected simple graph with nodes $$(T \times \mathbb{N}) \cup (\mathbb{N} \times \{0\})$$ and edges $$\{\{(t,n), (t,n+1)\}, \{(m,0), (m+1,0)\} \;|\; m \in \mathbb{N}, t \in T, n \in \mathbb{N}\}$$ Can something be said about the maximal "density" of $$T \subset \mathbb{N}$$ such that $$G$$ contains a $$T$$-comb? (E.g., can we pick $$T$$ to be the set of values of a fixed polynomial?)
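The definition is concrete enough to generate finite truncations of directly, which can help when experimenting with subgraph searches in small Cayley graphs. A small sketch (function and variable names are my own):

```python
from itertools import product

def comb_edges(width, height):
    """Edge set of the finite truncation of the comb on
    {0..width-1} x {0..height-1}: each tooth {m} x N is a vertical
    path, and the spine N x {0} is a horizontal path joining them."""
    edges = set()
    for m, n in product(range(width), range(height - 1)):
        edges.add(frozenset({(m, n), (m, n + 1)}))   # tooth edges
    for m in range(width - 1):
        edges.add(frozenset({(m, 0), (m + 1, 0)}))   # spine edges
    return edges

# 4 teeth with 2 edges each, plus 3 spine edges: 11 edges in total.
E = comb_edges(4, 3)
```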
Some thoughts of mine, not very polished: • By compactness arguments and changing the generators, it is easy to see that the question stays equivalent if we assume that the comb has a two-sided spine or two-sided teeth, i.e. we can replace one or both of the $$\mathbb{N}$$ by $$\mathbb{Z}$$, and similarly containing a $$T$$-comb for syndetic $$T \subset \mathbb{N}$$ is equivalent to containing a comb. • Any nonamenable group contains a comb, because it even contains a binary tree (see the question "Trees in groups of exponential growth"). • Any Cayley graph of an infinite f.g. group contains bi-infinite paths, so if $$G$$ has an infinite quotient $$H$$ whose kernel is not locally finite (i.e. $$G$$ is f.g. and "(not locally finite)-by-infinite") then it contains a comb. For this, pick an infinite path in both $$H$$ and the kernel $$K$$ w.r.t. some generators. Lift the path from $$H$$ arbitrarily to $$G$$ (include preimages of the generators of $$H$$ in the generating set $$S$$ of $$G$$), and everywhere on this path, start another path in the kernel direction. • For solvable groups, the answer is "yes": Among f.g. groups, solvable groups of exponential growth contain a binary tree (see the MO link above), solvable groups of subexponential growth are virtually nilpotent by Milnor-Wolf, and to a virtually nilpotent group you can e.g. apply the previous observation (if a group contains a subgroup containing a comb it contains a comb, and all subgroups of a virtually nilpotent group are finitely generated). I don't know if there's an easy argument for elementary amenable groups. • One thing that seems obvious to try (but I don't have the chops) is to take a bi-infinite geodesic in $$G$$ (a path where all finite subpaths of length $$n$$ join group elements at distance $$n$$ in the Cayley graph), and start random walks every $$k$$ steps for some $$k$$, which (I'm told) diverge almost surely on all groups with growth faster than $$n^2$$ (and in the unsolved case the growth is superpolynomial).
Perhaps for large enough $$k$$ you can use Lovász local lemma or something to prove that these do not necessarily hit each other. • Discussed with a colleague, who asked if the Grigorchuk group contains a comb (since he first thought my definition required straight teeth). Anyway: the Grigorchuk group is a branch group, so it contains a non-trivial product of infinite f.g. groups, and you can apply the third observation above. So it contains a comb. – Ville Salo Mar 20 '19 at 11:20 I checked the usual suspects and seems that [Chou, Ching. Elementary amenable groups. Illinois J. Math. 24 (1980), no. 3, 396--407. doi:10.1215/ijm/1256047608. https://projecteuclid.org/euclid.ijm/1256047608] solves the elementary amenable case. I don't know if I should just modify my question since this is not an answer, but here goes. Theorem 3.2'. A finitely generated group in $$EG$$ is either almost nilpotent or it contains a free subsemigroup on two generators. Here, $$EG$$ is the class of elementary amenable group, i.e. the smallest class of groups which contains all finite groups and all abelian groups and is closed under group extensions and direct unions. Almost nilpotent is what I called virtually nilpotent in my question, i.e. has a nilpotent subgroup of finite index (here necessarily f.g.). Then apply the third and fourth item from my question, details below though probably obvious to experts if correct. Theorem. Let $$G$$ be an elementary amenable group which is not locally virtually cyclic. Then $$G$$ contains a comb. Proof: W.l.o.g. $$G$$ is a finitely-generated group that is not virtually cyclic. If $$G$$ contains a free subsemigroup on two generators, we are done. Otherwise, by Chou's theorem it is virtually nilpotent. So we show that if $$G$$ is a virtually nilpotent group then either $$G$$ is virtually cyclic or it contains a comb. 
For this, for a contradiction suppose $$G$$ is nilpotent of minimal nilpotency class among those nilpotent groups that are not virtually cyclic and do not contain a comb. It is well-known that the commutator subgroup $$[G, G]$$ is finitely generated (f.g. nilpotent $$\implies$$ polycyclic $$\implies$$ maximal condition on subgroups), and of course it is nilpotent of smaller nilpotency class. If it is not virtually cyclic, then it contains a comb by the inductive assumption, thus so does $$G$$, so we may assume $$[G, G]$$ is virtually cyclic. If $$[G, G]$$ is finite, then the abelianization $$G/[G, G]$$ cannot be virtually cyclic or $$G$$ would also be virtually cyclic ($$\mathbb{Z}$$ is a free group, just pick a section), and thus by the fundamental theorem of abelian groups $$G/[G, G]$$ contains a copy of $$\mathbb{Z}^2$$, thus a comb. It follows that $$[G, G]$$ must be virtually $$\mathbb{Z}$$. If the abelianization $$G/[G, G]$$ is finite, then $$G$$ is virtually cyclic by definition. We conclude in particular $$[G, G]$$ and $$G/[G, G]$$ are both infinite f.g. groups. We now do as in the third item in my question (I explain the general trick though of course here we could simplify a bit): Pick a bi-infinite injective path $$p$$ in some Cayley graph of $$[G, G]$$ with some finite set of generators $$S \subset G$$ and pick another one $$q$$ in the Cayley graph of $$G/[G, G]$$ with some finite set of generators $$T$$. Pick for each $$t \in T$$ a preimage in $$G$$, i.e. $$s_t \in G$$ such that $$t = [G, G] s_t$$. Now the path $$s_q$$ defined in the obvious way by lifting $$q$$ (if $$q$$ is a sequence over $$\mathbb{Z}$$ of generators $$t \in T$$, pick the corresponding path along generators $$s_t$$) is injective as a path in $$G$$, since even its projection to $$G/[G, G]$$ is by definition injective. 
Now, start the path $$p$$ from each vertex on the path $$s_q$$, and observe that these paths do not intersect themselves (since $$p$$ does not), and they do not intersect each other since their projections in $$G/[G, G]$$ stay inside distinct cosets. That concludes the proof.
{}
# What is $\mathbb Z[[t]]$? What are the double brackets? What does $\mathbb{Z}[[t]]$ mean? Why are there double square brackets? I can't search through Google, because I can't search LaTeX. - I think it's the ring of formal power series with coefficients from $\mathbb{Z}$, see en.wikipedia.org/wiki/Formal_power_series –  Mikko Korhonen Sep 2 '12 at 16:19 You actually can search LaTeX: latexsearch.com –  huon-dbaupp Sep 2 '12 at 16:22 If this is from a book, it might have a list of symbols in the back that you can check. –  Jair Taylor Sep 2 '12 at 17:10 That is the ring of formal power series in $t$ with integer coefficients, i.e., of $$\sum_{n=0}^\infty a_nt^n,$$ with $a_n\in\Bbb Z$, componentwise addition, and multiplication appropriately defined. The double brackets distinguish it from $\Bbb Z[t]$, which is the ring of polynomials in $t$ with integer coefficients. We can always evaluate the members of $\Bbb Z[t]$ for any complex value of $t$, but we generally can't evaluate members of $\Bbb Z[[t]]$ for $t\neq 0$. To my mind, the double bracket is a reminder that we need to leave the $t$ alone, and not worry about evaluation. - (IMO there is a sense in which we can view "evaluation at $t=a$" to be the quotient map $\Bbb Z[[t]]\to \Bbb Z[[t]]/(t-a)$, though this is cheating and doesn't send everything all the way down to $\Bbb Z$.) –  anon Sep 2 '12 at 17:08 Interesting point! –  Cameron Buie Sep 2 '12 at 17:11 I laughed. I'm going to steal this saying. Many thanks. "just leave $t$ alone" ha. –  James S. Cook Sep 2 '12 at 18:21 Have at it, James! –  Cameron Buie Sep 3 '12 at 1:47 @CameronBuie Is it ok to use the symbol $\mathbb{Z}\llbracket t\rrbracket$ in latex instead of $\mathbb{Z}[[t]]$ or is that wrong? –  Pratyush Sarkar Mar 4 '13 at 14:51 If $A$ is any ring, the notation $A[[T]]$ stands for the ring of formal power series with coefficients in $A$, i.e.
the ring whose elements are the expressions $$a_0+a_1T+a_2T^2+a_3T^3+\cdots$$ with the obvious sum and product. -
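To make the distinction concrete, here is a small illustration (a Python sketch, not part of the original thread): working modulo $t^N$ for any $N$, the element $1-t$ has the geometric series $1+t+t^2+\cdots$ as a genuine inverse in $\Bbb Z[[t]]$, even though $1-t$ is not invertible in $\Bbb Z[t]$.

```python
# Multiply two formal power series (given as coefficient lists,
# constant term first) modulo t^N.
def mul_mod(a, b, N):
    c = [0] * N
    for i, ai in enumerate(a[:N]):
        for j, bj in enumerate(b[:N - i]):
            c[i + j] += ai * bj
    return c

# (1 - t) * (1 + t + t^2 + ...) = 1 in Z[[t]]: the geometric series
# is the inverse of 1 - t, which has no counterpart in Z[t].
N = 6
one_minus_t = [1, -1] + [0] * (N - 2)
geom = [1] * N
print(mul_mod(one_minus_t, geom, N))  # [1, 0, 0, 0, 0, 0]
```

All cross terms cancel in each degree, which is exactly why "leave the $t$ alone" arithmetic works without ever evaluating the series.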
{}
# Evaluate limit as h approaches 0 of (cos(h)-1)/h

Evaluate the limit of the numerator and the limit of the denominator separately.

Numerator: split the limit using the Sum of Limits Rule: $\lim_{h\to 0}(\cos(h)-1)=\lim_{h\to 0}\cos(h)-\lim_{h\to 0}1$. Move the limit inside the trig function because cosine is continuous, then evaluate by plugging in $0$ for $h$: $\cos(0)-1=1-1=0$.

Denominator: $\lim_{h\to 0}h=0$. Direct substitution therefore produces a division by $0$, so the expression is undefined at $h=0$.

Since the limit is of the indeterminate form $0/0$, apply L'Hospital's Rule: the limit of a quotient of functions is equal to the limit of the quotient of their derivatives. Differentiate the numerator and denominator. By the Sum Rule, the derivative of $\cos(h)-1$ with respect to $h$ is $-\sin(h)$ (the derivative of the constant $-1$ is $0$), and by the Power Rule the derivative of $h$ is $1$. Since sine is continuous, move the limit inside the trig function and plug in $0$:

$\lim_{h\to 0}\frac{\cos(h)-1}{h}=\lim_{h\to 0}\frac{-\sin(h)}{1}=-\sin(0)=0.$
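As a quick sanity check (a small Python sketch, not part of the original walkthrough), the quotient can be evaluated numerically for shrinking values of h:

```python
import math

# Numerically confirm that (cos(h) - 1)/h -> 0 as h -> 0.
def f(h):
    return (math.cos(h) - 1.0) / h

for h in (1e-1, 1e-3, 1e-5):
    print(h, f(h))  # the ratio shrinks roughly like -h/2

# L'Hospital's Rule predicts the same value: -sin(0)/1 = 0.
```

The printed values approach 0 from below, consistent with the Taylor expansion cos(h) ≈ 1 − h²/2.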
{}
# Pain/Damage - If I dropped an ant

## Main Question or Discussion Point

Say we had a six-foot wall. If I jumped off and measured the pain/damage received because of the fall, how would that differ for an ant dropped from the same height, because of its size/weight difference?

Gokul43201 (Staff Emeritus, Gold Member): How is this question answered by Relativity? I hope you're not using the layman connotation of that term. Forget what Einstein said; Relativity is not about how it feels when you're sitting on a hot stove.

Hootenanny (Staff Emeritus, Gold Member): Could we tone down the language please, Complex? Otherwise your thread is liable to be locked. Gokul was just enquiring why you thought relativity should be able to answer your question as opposed to classical physics. I don't think this question is answerable anyway, since pain is subjective; how do you intend to measure the pain experienced by an ant?

I didn't ask for his ignorance; simply telling me where it should be would be more helpful than anything... That's why I put pain/damage... I'm just interested to find out how free fall affects different-sized life forms when they hit a surface... Would its difference in size/weight affect how damaging the fall would be to it compared to us?

Gokul43201 (Staff Emeritus, Gold Member): Take it easy with the personal attacks. How am I to know that you didn't have some non-obvious reason for posting this under Relativity? Which is why I asked you.

DaveC426913 (Gold Member): 1] A 2 mm ant's surface-to-weight ratio is on the order of a million times greater than a human's (1000^2). This means that when ants fall, they reach terminal velocity quickly, and literally float down. 2] An object's ability to withstand crushing is based on its cross-sectional structures (i.e. legs). An ant's cross-sectional area-to-weight ratio is also about a million times greater than a human's.
This means it can withstand more weight (proportionally) before collapsing. Likewise, an elephant's area-to-weight ratio is a few orders of magnitude smaller than a human's. An elephant cannot jump, and if it tried, it would break its legs. Why? The square-cube law. Let's leave the elephant alone and instead create a giant that is exactly human-shaped, only 12 feet tall.

Code:
Critter   Height (1D)   Leg diam.   X-sectional leg area (2D)   Weight (3D)
Human     6'            5"          5^2 = 25 sq in              200 lbs
Giant     12'           10"         10^2 = 100 sq in            1600 lbs
Ratio     2x            2x          4x                          8x

Note that the legs of the giant are four times as thick in cross-section, but they have to support eight times as much weight. They are near the breaking point. A fall from one foot might break a giant's legs.

quasar987: The bigger they are, the harder they fall holds true then, eh?
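The square-cube argument above can be sketched numerically (an illustrative sketch, not from the thread): scaling every linear dimension by a factor k multiplies cross-sectional area by k² and weight by k³, so the stress on the legs grows linearly with k.

```python
# Square-cube law: scale every linear dimension of a body by k.
def stress_ratio(k):
    area = k ** 2    # cross-sectional leg area scales as k^2
    weight = k ** 3  # weight (volume) scales as k^3
    return weight / area  # leg stress relative to the original body

print(stress_ratio(2))      # 8/4 = 2.0: the 12-ft giant's legs bear twice the stress
print(stress_ratio(0.001))  # an ant-scale body bears ~1/1000 the stress
```

This is the whole story in one line: stress scales as k, which is why small creatures are proportionally sturdy and giants are not.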
{}
# IGEM:Caltech/2007

Revision as of 03:54, 26 October 2007

[[Image:Caltech_igem_2007.jpg|center|Viral Smart Bombs]]

===About iGEM===

iGEM is an international synthetic biology competition between teams at various universities. Each team designs and implements a genetic system which performs a task. This system should use parts taken from the Registry of Standardized Biological Parts ("Biobricks") whenever possible, to prove that devices and systems in bioengineering, much like those in other engineering fields, can be made from standardized parts. Parts standardization improves the predictability of engineered systems and reduces or eliminates the need for the bioengineer to construct his/her own parts, leaving him/her free to focus on overall system design. If a part used during the competition is newly designed due to the lack of an equivalent RSBP part, then the part will be entered into the registry.

===About $\lambda$===

Enterobacteriophage $\lambda$ is a temperate virus that infects Escherichia coli cells. Once inside the cell, $\lambda$ chooses between two pathways. It can enter the lytic pathway, in which it uses host cell machinery to manufacture copies of itself and eventually releases them by lysing the cell. It can also enter the lysogenic pathway, in which it inserts its genome into that of the host, where it is replicated along with the host genome. We want to manipulate $\lambda$'s decision in response to molecular signals inside the host cell, so that it only lyses a specific subpopulation of cells. If successful, this will give us more insight into how life cycle decisions are made in the $\lambda$ bacteriophage.

There are three control points in the $\lambda$ genome we want to regulate: cro, N, and Q. Using three control points gives us more control over $\lambda$'s life cycle, as well as providing us with redundancy in case one of the regulatory systems doesn't work.

[[Image:Lambda genome.jpg|center|480px|The lambda genome]]

===Lambda Control===

Riboregulators are a form of post-transcriptional gene expression control. A riboregulator consists of a cis-repressor, which acts as a lock, preventing translation, and a trans-activator, the "key" that allows translation. The cis-repressor consists of a region complementary to an mRNA transcript's ribosome binding site (RBS) and a short loop, both upstream of the RBS. When transcribed, the complementary region binds to the RBS, preventing ribosomal access. The trans-activator, also an mRNA, contains a stem-loop structure as well as a region complementary to the cis-repressor. When introduced, the trans-activator binds to the cis-repressor, allowing ribosomal access to the RBS of the riboregulated gene.
{}
IEVref: 102-03-09
Language: en
Status: Standard
Term: coordinate
Definition: any of the $n$ scalars $U_1, U_2, \ldots, U_n$ in the representation of a vector $\mathbf U$ as a linear combination $U_1 \mathbf a_1 + U_2 \mathbf a_2 + \ldots + U_n \mathbf a_n$ of the base vectors $\mathbf a_1, \mathbf a_2, \ldots, \mathbf a_n$
Note 1 to entry: The term "coordinate" is also used for the components of a position vector (see IEV 102-03-22).
Note 2 to entry: In English, the term "component" is sometimes used in this sense.
Publication date: 2008-08
Internal notes: 2017-02-20: Editorial revisions in accordance with the information provided in C00019 (IEV 102) - evaluation. JGO
{}
# Auto Lease Calculator With Total Price

Calculated Results

Monthly Payment: 374.75
Depreciation Charge: 291.67
Finance Charge: 61.88
Before Tax: 353.54
Sales Tax: 21.21
Total Lease Payment: 8,994.10

If Purchased Under the Same Conditions

Monthly Payment: 925.33
Sales Tax: 1,200.00
Loan Amount: 21,200.00
Total Interest: 1,008.01
Total Loan Payment: 22,208.01
Total Cost: 22,208.01

##### How to calculate the monthly lease payment

There are three components in a lease payment: the depreciation charge, the finance charge, and the sales tax. The monthly lease payment is the sum of these three components.

• The depreciation charge is the capitalized cost minus the residual value, where the capitalized cost is the car price minus the down payment. The monthly depreciation is the total depreciation divided by the number of months in the lease term.
• The finance charge is the average capitalized cost multiplied by the monthly interest rate. The average capitalized cost is half of the sum of the initial capitalized cost (car price − down payment) and the ending capitalized cost (the car's residual value). Therefore, the formula is: $Rate \times (Price - DownPayment + Residual) \div 24$
• The tax is calculated by applying the tax rate to the sum of the monthly depreciation charge and the finance charge.

##### Auto Lease Explained

In the new car market, leasing accounts for about one-third of the market. According to Edmunds, in 2019 the average price of a new car was around $37,000, and the average car loan payment was $560 a month for 60 months after a $6,000 down payment. New cars are expensive, and leasing is the cheapest way to get into a new car. A car lease lets you drive a new vehicle without paying a large sum of cash or taking out a car loan to buy it.
To lease a car, you simply make a small down payment, usually less than the typical 20% you would make when buying, followed by monthly payments for the term of the lease. When the term expires, you return the car. When a consumer leases a new vehicle, the consumer enters a special deal with an auto dealer. Instead of buying the entire vehicle, the lease buyer pays for the depreciation incurred over the cost of the lease (normally two or three years), plus any fees and costs incurred within the leasing period. There are two key auto financing concepts in auto leasing: capitalized cost and residual value. Basically, the capitalized cost is the buy-outright price for the vehicle you want to lease; you should negotiate the capitalized cost just as if you were buying it. The residual value is the likely value of the vehicle at the end of the lease and is almost never up for bargaining. As you might guess, the amount you pay for a leased auto represents the capitalized cost minus the residual value, with interest and fees added into the price. We know that it can be difficult deciding on a vehicle, as well as picking which financing option better suits your needs. There is no one-size-fits-all solution, and each option has distinct pros and cons. Leasing a car has some drawbacks. Among them:
• In the long term, say 10 years, the cost of leasing several cars will likely exceed, sometimes by a lot, the purchase price of a new or used car.
• When your lease expires, you don't own the car and have to return it to the dealer. You essentially rent the car, and you will have to get another vehicle for your everyday use at the end of the lease.
• Lease terms may carry steep penalties, and you may have to pay them when you:
• Exceed the number of miles in your lease contract.
• Fail to keep the interior and exterior of the car in good condition.
• Drive the car hard and inflict significant wear and tear on the car's performance and appearance.
• Want to return the car before your contract expires.
Leasing is more beneficial than buying when you: • Don’t have the cash to buy the car. • Want to drive a vehicle that’s out of your purchase price range. • Won’t likely exceed the mileage cap in a contract—usually between 10,000 and 15,000 miles per year. Exceeding the mileage limits on your lease can cost you 10 to 20 cents per mile. • Can take good care of the car’s exterior and interior, paying particular attention to avoid nicks, spills and other cosmetic damage. • Expect to lease another car when your vehicle’s current contract expires.
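The three lease-payment components described above can be combined into a short function. This is a sketch; the sample inputs below (price, down payment, residual value, rate, term, and tax rate) are assumptions chosen to roughly reproduce the example figures at the top of the page, not values quoted from it.

```python
# Monthly lease payment = depreciation charge + finance charge + sales tax.
def lease_payment(price, down, residual, apr, months, tax_rate):
    cap_cost = price - down                        # capitalized cost
    depreciation = (cap_cost - residual) / months  # spread over the term
    # Finance charge = average capitalized cost * monthly rate,
    # i.e. apr * (price - down + residual) / 24.
    finance = apr * (cap_cost + residual) / 24
    before_tax = depreciation + finance
    return before_tax * (1 + tax_rate)

monthly = lease_payment(20000, 1000, 12000, 0.0479, 24, 0.06)
print(round(monthly, 2))  # ~374.75, close to the table above
```

With these assumed inputs, the depreciation charge is 7,000/24 ≈ 291.67 and the finance charge ≈ 61.87, matching the example figures to within rounding.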
{}
## LaTeX Tips Other Assorted LaTeX Tips: %%%%%%%%%%%%% {\huge \bf \begin{center} Here is the\\ Title \end{center} } %%%%%%%%%%%%% The above does what you want. The following doesn't (the center environment doesn't know to set the line spacing larger for a \huge font, and the \bf gets ignored because it comes before the font size command). %%%%%%%%%%%%% \begin{center} {\bf \huge Here is the\\ Title } \end{center} %%%%%%%%%%%%% Spacing after macros can sometimes get confusing. "%" at the end of a line will suppress any white space at the beginning of the next line. Following a macro name with {} will suppress white space after it, e.g., \latex{}2.0 to eliminate any space between \latex and 2.0. The xspace package automates this kind of spacing fix. %%%%%%%%%%%%%%%%%%% To make PDF files that use good-quality outline fonts for display on web pages (and avoid previous problems with grainy bitmap fonts), run:

latex paper
dvips -Pwww paper
ps2pdf paper.ps

The key is the "-Pwww" switch above. In most TeX distributions (like teTeX), the default paper size for dvips is the European A4 size. To change the default paper size to US Letter (8.5in x 11in), download the file dvipsrc, rename it to .dvipsrc and put it in your home directory (if you are using a Unix/Linux machine). You can also use the texconfig command to set the default paper size for all users on your computer, but that requires root privileges.
{}
# Solving differential equation

• Sep 4th 2010, 01:10 AM — acevipa

Solve $x^2y''+xy'+y=\log x, \quad x>0$ by letting $x=e^t$. How do I find y' and y''?

$\frac{dy}{dx}=\frac{dy}{dt}\cdot\frac{dt}{dx}$

$\frac{dy}{dx}=\frac{dy}{dt}\cdot e^{-t}$

Not too sure what to do next?

• Sep 4th 2010, 01:20 AM — Prove It

Your calculation of $\frac{dy}{dx}$ is correct. So

$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(e^{-t}\,\frac{dy}{dt}\right)$

$= e^{-t}\,\frac{d}{dx}\left(\frac{dy}{dt}\right) + \frac{d}{dx}(e^{-t})\,\frac{dy}{dt}$

$= e^{-t}\,\frac{d}{dt}\left(\frac{dy}{dt}\right)\,\frac{dt}{dx} + \frac{d}{dt}(e^{-t})\,\frac{dt}{dx}\,\frac{dy}{dt}$

$= e^{-2t}\,\frac{d^2y}{dt^2} - e^{-2t}\,\frac{dy}{dt}$.

Now substitute everything into your DE.

• Sep 4th 2010, 01:23 AM — acevipa

Thanks, that makes a lot of sense now.
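For what it's worth, the particular solution can be checked directly (a numeric sketch, not from the thread): with the substitution x = e^t the Cauchy-Euler equation becomes y_tt + y = t, whose particular solution y = t corresponds to y_p(x) = log x in the original variable.

```python
import math

# Check that y_p(x) = log(x) satisfies x^2 y'' + x y' + y = log(x):
# y_p' = 1/x and y_p'' = -1/x^2, so x^2 y'' + x y' = -1 + 1 = 0.
def residual(x):
    y, dy, d2y = math.log(x), 1.0 / x, -1.0 / x ** 2
    return x ** 2 * d2y + x * dy + y - math.log(x)

print([residual(x) for x in (0.5, 1.0, 3.0)])  # all ~0 up to floating point
```

The homogeneous part of y_tt + y = t then contributes C1 cos(ln x) + C2 sin(ln x) to the general solution.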
{}
# Solution to #4-1 Paper 1 Difficulty:Medium Let $\vec{v}=\begin{pmatrix} 1\\ 2\\ 3 \end{pmatrix}$, and $\vec{w}=\begin{pmatrix} -3\\ 1\\ 2 \end{pmatrix}$. If $\vec{u}=\vec{v}-2\vec{w}$, find the magnitude of vector $\vec{u}$. [Maximum mark:5]
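A quick numerical check of the computation (a sketch, not part of the official mark scheme):

```python
import math

# u = v - 2w, computed componentwise
v = (1, 2, 3)
w = (-3, 1, 2)
u = tuple(vi - 2 * wi for vi, wi in zip(v, w))
magnitude = math.sqrt(sum(c * c for c in u))

print(u)          # (7, 0, -1)
print(magnitude)  # sqrt(50) = 5*sqrt(2) ~= 7.071
```

The exact answer is |u| = √(49 + 0 + 1) = √50 = 5√2.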
{}
# 1.2 Add whole numbers  (Page 5/6) Page 5 / 6 ## Key concepts • Addition Notation To describe addition, we can use symbols and words. Operation Notation Expression Read as Result Addition $+$ $3+4$ three plus four the sum of $3$ and $4$ • The sum of any number $a$ and $0$ is the number. $a+0=a$ $0+a=a$ • Changing the order of the addends $a$ and $b$ does not change their sum. $a+b=b+a$ . 1. Write the numbers so each place value lines up vertically. 2. Add the digits in each place value. Work from right to left starting with the ones place. If a sum in a place value is more than 9, carry to the next place value. 3. Continue adding each place value from right to left, adding each place value and carrying if needed. ## Practice makes perfect In the following exercises, translate the following from math expressions to words. $5+2$ five plus two; the sum of 5 and 2. $6+3$ $13+18$ thirteen plus eighteen; the sum of 13 and 18. $15+16$ $214+642$ two hundred fourteen plus six hundred forty-two; the sum of 214 and 642 $438+113$ In the following exercises, model the addition. $2+4$ $2+4=6$ $5+3$ $8+4$ $8+4=12$ $5+9$ $14+75$ $14+75=89$ $15+63$ $16+25$ $16+25=41$ $14+27$ In the following exercises, fill in the missing values in each chart. 1. $0+13$ 2. $13+0$ 1. 13 2. 13 1. $0+5,280$ 2. $5,280+0$ 1. $8+3$ 2. $3+8$ 1. $11$ 2. $11$ 1. $7+5$ 2. $5+7$ $45+33$ $78$ $37+22$ $71+28$ $99$ $43+53$ $26+59$ $85$ $38+17$ $64+78$ $142$ $92+39$ $168+325$ $493$ $247+149$ $584+277$ $861$ $175+648$ $832+199$ $1,031$ $775+369$ $6,358+492$ $6,850$ $9,184+578$ $3,740+18,593$ $22,333$ $6,118+15,990$ $485,012+619,848$ $1,104,860$ $368,911+857,289$ $24,731+592+3,868$ $29,191$ $28,925+817+4,593$ $8,015+76,946+16,570$ $101,531$ $6,291+54,107+28,635$ Translate Word Phrases to Math Notation In the following exercises, translate each phrase into math notation and then simplify. 
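As a side illustration (not part of the text), the three-step column-addition procedure from the Key Concepts above can be sketched as code that adds digit by digit with carrying:

```python
# Column addition: line up place values, add right to left, carry when > 9.
def add_by_columns(a, b):
    digits_a = [int(d) for d in reversed(str(a))]  # ones place first
    digits_b = [int(d) for d in reversed(str(b))]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        s = da + db + carry
        result.append(s % 10)  # keep the ones digit of this column
        carry = s // 10        # carry to the next place value
    if carry:
        result.append(carry)
    return int(''.join(str(d) for d in reversed(result)))

print(add_by_columns(584, 277))  # 861, matching the exercise above
```

The `carry` variable plays exactly the role of the small digit written above the next column when adding by hand.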
the sum of $13$ and $18$ $13+18=31$ the sum of $12$ and $19$ the sum of $90$ and $65$ $90+65=155$ the sum of $70$ and $38$ $33$ increased by $49$ $33+49=82$ $68$ increased by $25$ $250$ more than $599$ $250+599=849$ $115$ more than $286$ the total of $628$ and $77$ $628+77=705$ the total of $593$ and $79$ $1,482$ added to $915$ $915+1,482=2,397$ $2,719$ added to $682$ In the following exercises, solve the problem. Home remodeling Sophia remodeled her kitchen and bought a new range, microwave, and dishwasher. The range cost $\text{1,100},$ the microwave cost $\text{250},$ and the dishwasher cost $\text{525}.$ What was the total cost of these three appliances? The total cost was $1,875. Sports equipment Aiden bought a baseball bat, helmet, and glove. The bat cost $\text{299},$ the helmet cost $\text{35},$ and the glove cost $\text{68}.$ What was the total cost of Aiden’s sports equipment? Bike riding Ethan rode his bike $14$ miles on Monday, $19$ miles on Tuesday, $12$ miles on Wednesday, $25$ miles on Friday, and $68$ miles on Saturday. What was the total number of miles Ethan rode? Ethan rode 138 miles. Business Chloe has a flower shop. Last week she made $19$ floral arrangements on Monday, $12$ on Tuesday, $23$ on Wednesday, $29$ on Thursday, and $44$ on Friday. What was the total number of floral arrangements Chloe made? Apartment size Jackson lives in a $7$ room apartment. The number of square feet in each room is $238,120,156,196,100,132,$ and $225.$ What is the total number of square feet in all $7$ rooms? The total square footage in the rooms is 1,167 square feet. Weight Seven men rented a fishing boat. The weights of the men were $175,192,148,169,205,181,$ and $\text{225}$ pounds. What was the total weight of the seven men? Salary Last year Natalie’s salary was $\text{82,572}.$ Two years ago, her salary was $\text{79,316},$ and three years ago it was $\text{75,298}.$ What is the total amount of Natalie’s salary for the past three years? 
Natalie’s total salary is$237,186. Home sales Emma is a realtor. Last month, she sold three houses. The selling prices of the houses were $\text{292,540},\text{505,875},$ and $423,699.$ What was the total of the three selling prices? In the following exercises, find the perimeter of each figure. The perimeter of the figure is 44 inches. The perimeter of the figure is 56 meters. The perimeter of the figure is 71 yards. The perimeter of the figure is 62 feet. ## Everyday math Calories Paulette had a grilled chicken salad, ranch dressing, and a $\text{16-ounce}$ drink for lunch. On the restaurant’s nutrition chart, she saw that each item had the following number of calories: Grilled chicken salad – $320$ calories Ranch dressing – $170$ calories $\text{16-ounce}$ drink – $150$ calories What was the total number of calories of Paulette’s lunch? The total number of calories was 640. Calories Fred had a grilled chicken sandwich, a small order of fries, and a $\text{12-oz}$ chocolate shake for dinner. The restaurant’s nutrition chart lists the following calories for each item: Grilled chicken sandwich – $420$ calories Small fries – $230$ calories $\text{12-oz}$ chocolate shake – $580$ calories What was the total number of calories of Fred’s dinner? Test scores A students needs a total of $400$ points on five tests to pass a course. The student scored $82,91,75,88,\text{and}\phantom{\rule{0.2em}{0ex}}70.$ Did the student pass the course? Yes, he scored 406 points. Elevators The maximum weight capacity of an elevator is $1150$ pounds. Six men are in the elevator. Their weights are $210,145,183,230,159,\text{and}\phantom{\rule{0.2em}{0ex}}164$ pounds. Is the total weight below the elevators’ maximum capacity? ## Writing exercises How confident do you feel about your knowledge of the addition facts? If you are not fully confident, what will you do to improve your skills? 
## Self check After completing the exercises, use this checklist to evaluate your mastery of the objectives of this section. After reviewing this checklist, what will you do to become confident for all objectives?
{}
A free neutron beta-decays to a proton with a half-life of 14 minutes. Question: A free neutron beta-decays to a proton with a half-life of 14 minutes. (a) What is the decay constant? (b) Find the energy liberated in the process. Solution:
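A hedged numeric sketch of both parts (the rest-energy constants below are standard tabulated values assumed here, not given in the problem text):

```python
import math

# (a) Decay constant: lambda = ln(2) / T_half.
T_half = 14 * 60                 # 14 minutes, in seconds
lam = math.log(2) / T_half
print(lam)                       # ~8.25e-4 per second

# (b) Energy liberated in n -> p + e- + antineutrino:
# Q = (m_n - m_p - m_e) c^2, using rest energies in MeV (assumed values).
m_n, m_p, m_e = 939.565, 938.272, 0.511
Q = m_n - m_p - m_e
print(Q)                         # ~0.782 MeV
```

The antineutrino's rest mass is negligible, so the Q-value is shared as kinetic energy among the decay products.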
{}
A. P. Petravchuk Search this author in Google Scholar Articles: 2 On finite dimensional Lie algebras of planar vector fields with rational coefficients Methods Funct. Anal. Topology 19 (2013), no. 4, 376-388 The Lie algebra of planar vector fields with coefficients from the field of rational functions over an algebraically closed field of characteristic zero is considered. We find all finite-dimensional Lie algebras that can be realized as subalgebras of this algebra. On the group of Lie-orthogonal operators on a Lie algebra Methods Funct. Anal. Topology 17 (2011), no. 3, 199-203 Finite-dimensional Lie algebras over the field of complex numbers with a linear operator $T: L\to L$ such that $[T(x),T(y)]=[x,y]$ for all $x,y\in L$ are studied. The group of such non-degenerate linear operators on $L$ is considered. Some properties of this group and its relations with the group $\operatorname{Aut}(L)$ in the general linear group $GL(L)$ are described.
{}
# Polynomial Regression¶ The Polynomial Regression algorithm is a generalization of the linear regression algorithm. It aims to find parameters $$p_0, p_1, \ldots, p_n$$ for a polynomial model of degree $$n$$, i.e. $$y = p_0 + p_1 \cdot t + \ldots + p_n \cdot t^n$$, that best fits $$N$$ data points. The task is equivalent to solving the following system of linear equations $\begin{split}Ap = \begin{bmatrix} 1&t_1&t_1^2&\cdots&t_1^n \\ 1&t_2&t_2^2&\cdots&t_2^n \\ \vdots&\vdots&\vdots&\vdots&\vdots \\ 1&t_N&t_N^2&\cdots&t_N^n \end{bmatrix} \, \begin{bmatrix} p_0 \\ p_1 \\ \vdots \\ p_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_N \end{bmatrix} = Y.\end{split}$ The method of least squares is the most common method for finding the fitted parameters. If $$A$$ is of full column rank, the least squares solution is given by $p = (A^T A)^{-1} A^T Y.$

Input Parameters

| Parameter | Type | Constraint | Description |
| --- | --- | --- | --- |
| $$[t_i]$$ | $$[t_i] \in \mathbb R^N$$ | $$N \in \mathbb{N}$$ | Sample points $$t_1, \ldots, t_N$$ |
| $$Y$$ | $$Y \in \mathbb R^N$$ | $$N \in \mathbb{N}$$ | Input data vector of length $$N$$ |
| $$n$$ | $$n \in \mathbb N$$ | — | Degree of the polynomial model |

Output Parameters

| Parameter | Type | Constraint | Description |
| --- | --- | --- | --- |
| $$p$$ | $$p \in \mathbb R^{n+1}$$ | — | Fitted parameters $$p_0, \ldots, p_n$$ |
| $$\hat{Y}$$ | $$\hat{Y} \in \mathbb R^N$$ | $$N \in \mathbb{N}$$ | Output data vector of length $$N$$ |

Single Steps using the Algorithm

References • R.C. Aster, B. Borchers, C.H. Thurber, Parameter Estimation and Inverse Problems, Academic Press, 2005.
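As an illustration of the normal-equations formula above, here is a plain-Python sketch (not part of this documentation; a production implementation would prefer a QR- or SVD-based solver for numerical stability):

```python
# Least-squares polynomial fit via the normal equations p = (A^T A)^{-1} A^T Y,
# where A is the Vandermonde matrix of the sample points t_i.
def polyfit(ts, ys, n):
    m = n + 1
    # Build A^T A ((A^T A)_{ij} = sum_k t_k^{i+j}) and A^T Y directly.
    ata = [[sum(t ** (i + j) for t in ts) for j in range(m)] for i in range(m)]
    aty = [sum(y * t ** i for t, y in zip(ts, ys)) for i in range(m)]
    # Solve the (n+1)x(n+1) system by Gaussian elimination with partial pivoting.
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    p = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = sum(ata[r][c] * p[c] for c in range(r + 1, m))
        p[r] = (aty[r] - s) / ata[r][r]
    return p  # [p_0, p_1, ..., p_n]

# Recover y = 1 + 2t + 3t^2 exactly from noiseless samples:
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1 + 2 * t + 3 * t ** 2 for t in ts]
print([round(c, 6) for c in polyfit(ts, ys, 2)])  # [1.0, 2.0, 3.0]
```

With noiseless data from a degree-n polynomial and at least n+1 distinct points, A has full column rank and the fit is exact up to floating-point error.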
{}
Calc Cumulative Flow Map

Description
This functor calculates a cumulative flow map using a flow direction map, an elevation map and a flow partition map. The resulting map indicates the flow received by every cell, directly and indirectly.

Inputs
| Name | Type | Description |
| --- | --- | --- |
| Flow | Map Type | This map defines the flow directions between cells. |

Optional Inputs
| Name | Type | Description | Default Value |
| --- | --- | --- | --- |
| Weight | Map Type | Weights scale each cell's contribution to the flow. If this map is not provided, the functor assumes that every weight is 1. | None |
| Flow Partition | Map Type | This parameter defines the percentage of the flow transferred among cells. If the functor does not receive this map, it is assumed that the flow is equally divided among the cells pointed to by the "Flow Directions" map. | None |
| Cell Type | Cell Type Type | Defines the output map cell type. | Signed 32 Bit Integer |
| Null Accumulation | Integer Value Type | Defines the output map null value. | -2147483648 |
| Accumulation Is Sparse | Integer Value Type | If true, the map is loaded as a sparse image. Sparse images have the advantage of storing only the cells containing non-null values, at the cost of slower access times. | False |

Outputs
| Name | Type | Description |
| --- | --- | --- |
| Flow Accumulation | Map Type | Map where each cell represents the accumulated flow, comprising the flows received from its neighbors directly and indirectly. |

Notes
The direction map indicates the possible flow direction from every cell. See Calc Flow Direction Map for further information about the flow direction map. The elevation map must be the same map used to calculate the flow direction map. See Calc Flow Direction Map for further information. The flow partition map is used to indicate how much flow goes into every possible branch (in the case of a multidirectional flow source). This map must have at least 8 layers.
Each layer corresponds to one of the cell neighbors as follows: Layer 5 Layer 6 Layer 7 Layer 4 Layer 0 Layer 3 Layer 2 Layer 1 Example: If the central/anchor cell points toward the left neighbor, the functor searches the partition information on the 4th layer at the same position of the cell. If a partition map is not provided, the flow is evenly divided. The weight map defines how much a cell contributes to the overall accumulated flow. If a weight map is not provided, the functor assumes a weight of 1 for all cells. Internal Name CalcCumulativeFlowMap
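The accumulation itself can be sketched as follows. This is a simplified, hypothetical Python model, not the functor's actual implementation: single-direction flow only, unit weights by default, and cells are processed from highest to lowest elevation so that every upstream contribution arrives before a cell passes its own flow downstream.

```python
def cumulative_flow(elevation, downstream, weight=None):
    """Accumulate flow over a grid (simplified, single-direction sketch).

    elevation:  dict cell -> elevation value
    downstream: dict cell -> cell it flows into (None for outlets);
                stands in for the flow direction map
    weight:     dict cell -> contribution (defaults to 1 everywhere)
    Returns a dict cell -> accumulated flow (own weight plus everything
    received from upstream neighbors, directly and indirectly).
    """
    acc = {c: (weight[c] if weight else 1) for c in elevation}
    # Visit cells from highest to lowest so each cell's total is final
    # before it is added to its downstream neighbor.
    for cell in sorted(elevation, key=elevation.get, reverse=True):
        nxt = downstream[cell]
        if nxt is not None:
            acc[nxt] += acc[cell]
    return acc

# Tiny 1-D example: a -> b -> c (outlet).
elev = {"a": 3, "b": 2, "c": 1}
down = {"a": "b", "b": "c", "c": None}
flows = cumulative_flow(elev, down)
```

The real functor additionally splits flow across up to 8 neighbors according to the partition layers; the sketch collapses that to one downstream cell per source.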
# Abstract

Anton Mellit will speak on *The $P=W$ conjecture via an action of the Lie algebra of Hamiltonian vector fields*.

By $H_2$ we denote the Lie algebra of polynomial Hamiltonian vector fields on the plane. We consider the moduli space of stable twisted Higgs bundles on an algebraic curve of given coprime rank and degree. De Cataldo, Hausel and Migliorini proved in the case of rank 2, and conjectured in arbitrary rank, that two natural filtrations on the cohomology of the moduli space coincide. One is the weight filtration $W$ coming from the Betti realization, and the other is the perverse filtration $P$ induced by the Hitchin map. Motivated by computations of the Khovanov–Rozansky homology of links by Gorsky, Hogancamp and myself, we look for an action of $H_2$ on the cohomology of the moduli space. We find it in the algebra generated by two kinds of natural operations: on the one hand we have the operations of cup product by tautological classes, and on the other hand we have the Hecke operators acting via certain correspondences. We then show that both $P$ and $W$ coincide with the filtration canonically associated to the $sl_2$ subalgebra of $H_2$.

Based on joint work with Hausel, Minets and Schiffmann.
## Wednesday, December 13, 2017

### Hex Grid Coordinates

Quick one, from the 'Why haven't I thought of it before?' Department. Reading off the coordinates of a hex-grid map might be made easier by where you place the numbers or letters beside the 'zig-zag' rows or columns. Here's one I have made up:

Hex-Grid with Row Coordinates aligned with the bottom half of the hexes in the odd columns, and so with the top half of the even-column hexes. Mitigates against visual ambiguity.

The letters marking the rows I have aligned with the bottom half of the adjacent grid areas in the odd-numbered columns. They then, of course, align with the top half of the even-numbered columns of grid areas. Labelled in this way, I find it a whole lot easier to determine to which row a given grid area belongs.

Just by the way, in the above map the L-even areas (L2 etc.) are not really on the map, but the L-odd areas (L1, L3 etc.) are.

I always like to add an actual picture to these posts, even though of doubtful relevance!
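The half-hex shift that makes this labelling work comes from the usual offset-column hex layout: adjacent columns sit half a hex height apart. A small sketch of the geometry (a hypothetical helper, assuming flat-top hexes in an odd-column-shifted grid):

```python
import math

def hex_center(col, row, size=1.0):
    """Center of a flat-top hex in an offset-column grid.

    Odd columns are shifted down by half a hex height, which is exactly
    why a row label aligned with the bottom half of an odd column lines
    up with the top half of the neighboring even columns.
    """
    width = 2 * size               # flat-top hex width
    height = math.sqrt(3) * size   # flat-top hex height
    x = col * 0.75 * width         # adjacent columns overlap by a quarter width
    y = row * height + (height / 2 if col % 2 else 0)
    return x, y

# Same row, adjacent columns: centers differ vertically by half a hex height.
x0, y0 = hex_center(0, 2)
x1, y1 = hex_center(1, 2)
```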
# a problem on fixed points on different sets under continuous mappings

Pick out the true statements.

(a) Let $f : [0, 2] \to [0, 1]$ be a continuous function. Then there always exists $x \in [0, 1]$ such that $f(x) = x$.

(b) Let $f : [0, 1] \to [0, 1]$ be a continuous function which is continuously differentiable in $(0, 1)$ and such that $|f'(x)| \le 1/2$ for all $x \in (0, 1)$. Then there exists a unique $x \in [0, 1]$ such that $f(x) = x$.

(c) Let $S = \{p = (x, y) \in \mathbb{R}^2 : x^2 + y^2 = 1\}$. Let $f : S \to S$ be a continuous function. Then there always exists $p \in S$ such that $f(p) = p$.

By the Brouwer fixed point theorem we can say (a) is true, but how can I verify the other options?

- Part (a) follows from the intermediate value theorem applied to $f(x) - x$, so quoting the Brouwer fixed point theorem is a bit much. – Justin Young Jan 6 '13 at 6:15

You are right for (a), as we can see $f$ as a self-map of $[0,2]$. For (b), show that $f$ is a contraction map: by the mean value theorem we see that $$|f(x) - f(y)| \le \frac{1}{2}|x-y|$$ for all $x, y \in [0,1]$. Now apply Banach's fixed point theorem. Draw (c): what is it? What kind of maps can you think of on this space?

- (c) is false. For example, take a rotation on the circle. – user52188 Jan 6 '13 at 6:19

Here are some hints.

(b): Suppose $f$ has two fixed points, $f(a) = a$ and $f(b) = b$, with $b > a$. What does the fundamental theorem of calculus have to say about the maximum possible value of $f(b) - f(a)$?

(c): One special case of continuous functions on a set that immediately comes to mind are its smooth symmetries. What are they for the circle?

- I wouldn't call the mean value theorem the "fundamental theorem of calculus"; this is normally reserved for the relation between integrals and anti-derivatives... – Henno Brandsma Jan 6 '13 at 6:23
- Right... I'm suggesting to bound $f(b) - f(a) = \int_a^b f'(x)\,dx$. – user7530 Jan 6 '13 at 6:24
- Ok, that's a way to do it too.
I still think the mean value theorem is more direct though. –  Henno Brandsma Jan 6 '13 at 6:52
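Writing the hint for (b) out in full, the mean value theorem bound feeds directly into Banach's fixed point theorem:

```latex
% For x < y in [0,1], the mean value theorem gives some c in (x, y) with
% f(y) - f(x) = f'(c)(y - x), and |f'(c)| <= 1/2, hence
\[
  |f(y) - f(x)| = |f'(c)|\,|y - x| \le \tfrac12\,|y - x| .
\]
% So f is a contraction on the complete metric space [0,1], and Banach's
% fixed point theorem yields a unique x* in [0,1] with f(x*) = x*.
```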
# How do you find the vertical, horizontal or slant asymptotes for f(x)= (1)/(x^2-4)?

##### 1 Answer
Jun 14, 2018

The two vertical asymptotes are the vertical lines $x = 2$ and $x = -2$. The horizontal asymptote is the horizontal line $y = 0$ for both $x \to \infty$ and $x \to -\infty$.

#### Explanation:

A function has vertical asymptotes where it is not defined. In this case we have a fraction, so the function is not defined where its denominator equals zero. This means that we must require that

$x^2 - 4 = 0 \iff x^2 = 4 \iff x = \pm\sqrt{4} = \pm 2$

So, the two vertical asymptotes are the vertical lines $x = 2$ and $x = -2$.

A function has horizontal asymptotes if the limit as $x \to \pm\infty$ is finite. In this case,

$\lim_{x \to \pm\infty} \frac{1}{x^2 - 4} = \frac{1}{(\pm\infty)^2 - 4} = \frac{1}{\infty - 4} = \frac{1}{\infty} = 0$

So, the horizontal asymptote is the horizontal line $y = 0$ for both $x \to \infty$ and $x \to -\infty$.

Note that the passages $(\pm\infty)^2 = \infty$ and $\infty - 4 = \infty$ are not to be taken as rigorous algebraic steps, but rather as an estimate of the sign of the infinity (first passage), or as a recognition that the $-4$ plays no role (second passage).

Since the function has horizontal asymptotes, it can't have slant asymptotes. Actually, if we looked for slant asymptotes we would again find the line $y = 0$.
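A quick numerical sanity check of both kinds of asymptote (plain Python; the sample points are arbitrary):

```python
def f(x):
    return 1 / (x**2 - 4)

# Horizontal asymptote y = 0: f(x) shrinks toward 0 as |x| grows.
far = [f(x) for x in (1e3, -1e3, 1e6, -1e6)]

# Vertical asymptotes at x = ±2: |f(x)| blows up as x approaches them.
near = [abs(f(x)) for x in (2.001, 1.999, -2.001, -1.999)]
```

The values in `far` are tiny and those in `near` are large, matching the limits computed above.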
# Use and abuse of dietary supplements in persons with diabetes

## Abstract

The dietary supplement industry has estimated sales of over $30 billion in the US and over $100 billion globally. Many consumers believe that dietary supplements are safer and possibly more effective than drugs to treat diabetes. The sheer volume of the literature in this space makes compiling it into one review challenging, so much so that primarily narrative reviews currently exist. By applying the interactive database supplied by the Office of Dietary Supplements at the National Institutes of Health, we identified the top 100 ingredients that appeared most often in dietary supplement products. One hundred different keyword searches using the ingredient name and the word diabetes were performed using a program developed to automatically scrape PubMed. Each search was retained in a separate Excel spreadsheet, which was then reviewed for inclusion or exclusion. The studies that met the inclusion criteria were evaluated for their effect on reducing and controlling diabetes. The PubMed scrape resulted in 6217 studies. For each keyword search only the most recent 100 were retained, which refined the total to 1823 studies. Of these, 425 met the screening criteria. The ingredients fiber, selenium and zinc had the most studies associated with improvement in diabetes. Several popular supplement ingredients (phosphorus, pantothenic acid, calcium, magnesium, glutamine, isoleucine, tyrosine, choline, and creatine monohydrate) did not result in any studies meeting our screening criteria.
Our study demonstrates how to automate reviews to filter and collapse literature in content areas that have an enormous volume of studies. The aggregated set of studies suggests there is little clinical evidence for the use of dietary supplements to reduce or control diabetes.

## Introduction

Dietary supplements comprise a vibrant market in the United States (US) and around the world. Industry estimates suggest that sales of such products for all indications amount to more than $30 billion in the US1, and estimates for global sales exceed $100 billion2. Supplement use remains popular among consumers, despite the lack of evidence for many popular supplements on the market. Consumers may use supplements in hopes of improving or maintaining their health, to correct a dietary deficiency, or more therapeutically for a specific health condition. US regulatory oversight for dietary supplements is distinctly different from the framework for pharmaceuticals3. Makers of a new drug must submit evidence for safety and efficacy to the US Food and Drug Administration (FDA) for prior review and approval before it can be used on the market. The same standard does not apply to dietary supplements. Dietary supplements, by law, are not intended to diagnose, treat, prevent, or cure any disease. Therefore, FDA-approved evidence of safety and efficacy for supplements is not needed prior to their appearance on the market. Likewise, there are no regulations regarding the validity of product claims that can be made, with the exception that claims cannot state that a supplement may treat, prevent, or cure a disease4. For example, a supplement maker cannot claim that their product is intended for diabetes treatment. They can, however, claim that a product helps to maintain healthy blood sugar levels, so long as they are not suggesting that the product can help a consumer with elevated blood sugar.
In practice, these distinctions can be very difficult for consumers to delineate when interpreting the potential consequences of a supplement5. Another difference between dietary supplements and pharmaceuticals is the regulation of their manufacturing. Both drugs and supplements must be manufactured according to good manufacturing practices (GMPs). However, supplements need only meet a different, and generally lower, GMP standard than the one that applies to drugs6. The stringent requirements for active drug ingredients do not apply to supplements. Because a new supplement product is not subject to FDA approval, an FDA manufacturing inspection is not required, and the FDA does not routinely analyze the content of dietary supplements. Responsibility for enforcement regarding potentially deceptive claims about dietary supplements falls principally to the Federal Trade Commission (FTC). But the volume of potential violations far exceeds the capacity of the FTC to take enforcement actions3. Claims of potential efficacy of dietary supplements for preventing or controlling diabetes mellitus are common and may easily be found from sources that consumers might consider authoritative. For instance, Healthline lists 10 supplements to help lower blood sugar7. What is the evidence for supplements as a benefit for patients with diabetes? The American Diabetes Association Standards of Medical Care in Diabetes states that there is insufficient evidence to support a benefit from supplements for patients with diabetes who have no underlying deficiencies. Thus, they are not recommended for glycemic control8. Despite this, there have been several narrative reviews highlighting the potential benefits of various supplements for diabetes-related outcomes9,10,11. However, many of the studies that have been reviewed may not be of appropriate design or quality to provide strong evidence for or against supplement use.
Examining reviews of supplements and their benefits is challenging, because searching directly using keywords such as “diabetes” and “dietary supplement” within pubmed.gov or other medically related search engines results in an unmanageable set of publications that is incomplete and difficult to organize. In addition, the active ingredients within a supplement serve as the basis of many studies and would not be captured in the search results. The objective of this review was to identify supplement ingredients commonly used for diabetes management and evaluate the scientific evidence supporting their use in patients with type 1 (T1D) and type 2 (T2D) diabetes mellitus.

## Methods

### Dietary supplement ingredient list

The Office of Dietary Supplements (ODS) at the National Institutes of Health developed a searchable database, the Dietary Supplement Label Database (DSLD), available at the URL https://dsld.nlm.nih.gov/dsld/12. The database houses information from approximately 76,000 dietary supplement products commercially available in the US. Within the advanced search, we selected the option to search “by Label Statement or Health Claims contains”. In this we input the keyword diabetes. The ingredient list for the resulting search was downloaded as a .csv file and retained. Code was written in the statistical software R (R Core Team (2013)) to count the number of times each ingredient appeared in a product. The ingredients were sorted in descending order by the number of times the ingredient appeared on a product label. Spurious entries that were not specific ingredients, like “total calories”, were removed from the list. From the remainder of the list, the top 100 ingredients, based on how often they appeared in products, were retained (see Supplemental Materials).
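The counting-and-ranking step described above can be sketched as follows (a Python stand-in for the R code the authors describe; all names are illustrative):

```python
from collections import Counter

def top_ingredients(product_ingredient_lists, spurious, k=100):
    """Count how many products each ingredient appears on and keep the top k.

    product_ingredient_lists: iterable of per-product ingredient lists
    spurious: labels that are not real ingredients (e.g. "total calories")
    """
    counts = Counter()
    for ingredients in product_ingredient_lists:
        # Count each ingredient at most once per product label.
        counts.update(set(ingredients))
    for label in spurious:
        counts.pop(label, None)
    return [name for name, _ in counts.most_common(k)]

products = [
    ["zinc", "selenium", "total calories"],
    ["zinc", "fiber"],
    ["zinc", "selenium"],
]
top = top_ingredients(products, spurious={"total calories"}, k=2)
```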
### Web-scraping program developed to search pubmed.gov

We then used the RSelenium13 package to create a program in the statistical software package R (R Core Team (2013)) to automatically collect or “scrape” information from listings and abstracts in the PubMed database pertaining to both the ingredients on our ingredient list and diabetes research. The program automatically combined each ingredient in the final retained database derived from the ODS website with “(ingredient) AND Diabetes” when searched on PubMed, e.g. “(Potassium) AND Diabetes.” Additional searches were not made for diabetes comorbidities or T1D versus T2D. The program then automated the search with these keywords, using PubMed's built-in filters to restrict results to articles that contained an abstract, and gathered pertinent article information from PubMed for up to the 100 most recent articles. The information the program scraped included the title, author, journal, year, URL, DOI, and abstract. Additionally, the program removed any redundant articles that may have existed on PubMed. The search also allowed for flagging of certain words in the title and abstract. The following phrases were flagged and counted for each ingredient: cohort, observational, randomized control trial (RCT), meta-analysis, systematic review, clinical trial, HbA1c, fasting glucose, and insulin. Results were automatically retained in separate spreadsheets by ingredient name.

### Article screening

Two members of the research team (BAH and WDF) screened the 100 most recent abstracts from the included ingredients. The following information was extracted from every article: was the study conducted in individuals with diabetes (yes/no), was the study conducted in an animal model (yes/no), does the study meet inclusion criteria (yes/no). Inclusion criteria involved a study examining the role of the individual ingredient on outcomes related to diabetes (including glucose, insulin, HbA1c, diabetes-related complications, etc.)
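The phrase-flagging step can be sketched like this (a Python stand-in for the R scraper's counting logic; the data structures are hypothetical):

```python
FLAG_PHRASES = [
    "cohort", "observational", "randomized control trial", "meta-analysis",
    "systematic review", "clinical trial", "hba1c", "fasting glucose", "insulin",
]

def count_flags(articles):
    """Count, per phrase, how many articles mention it in title or abstract."""
    counts = {phrase: 0 for phrase in FLAG_PHRASES}
    for article in articles:
        text = (article.get("title", "") + " " + article.get("abstract", "")).lower()
        for phrase in FLAG_PHRASES:
            if phrase in text:
                counts[phrase] += 1
    return counts

articles = [
    {"title": "Zinc and HbA1c", "abstract": "A clinical trial of zinc..."},
    {"title": "Fiber intake", "abstract": "An observational cohort study..."},
]
flags = count_flags(articles)
```

Simple substring matching like this over-counts stems (e.g. "cohorts" also matches "cohort"), which is acceptable for a coarse screening pass.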
in animal or human subjects. Exclusion criteria involved studies examining the effects of a multi-nutrient supplement or co-nutrients, cross-sectional or observational studies, studies relying on self-reported dietary or supplement intake, studies that included caloric restriction, or studies with an outcome not related to diabetes. If an abstract was found to meet inclusion criteria, the following information was extracted: study type (RCT, single-arm trial, crossover, meta-analysis, narrative review, etc.), outcome, and whether results support the use of that ingredient for T1D or T2D. If a study did not meet inclusion criteria, the reason for exclusion was noted.

### Cross checking and discrepancy resolution

After the first pass of article screening, four members of the research team (BAH, WDF, DMT, MWC) cross-checked the initial abstract screen. Data extracted from this step included confirmation that the study examines the effect of a supplement on diabetes-related outcomes in either a human or animal model, and whether a discrepancy was present between the original screener and the cross-checker. If a discrepancy was identified, notes were retained on the reason for the discrepancy. Finally, all discrepancies were reviewed by DMT for validity. Studies that had been flagged as a discrepancy were reviewed and discarded or retained after cross-checking against the retention criteria. Each included study was assigned to one of the following evidence grades: 1: meta-analysis of human RCTs, 2: human or animal RCT, 3: human or animal single-arm trial, 4: narrative review, position statement, or case report14.

## Results

The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow chart for article screening is provided in Fig. 1. There were 2086 ingredients on the ODS website that included the word diabetes in their label statement or claim.
From the 100 most common ingredients, there were 6,217 articles in PubMed found in the searches from each ingredient and the keyword diabetes (Table 1). If ingredients had greater than 100 articles, the 100 most recent were included for screening. From the remaining 1823 abstracts, 425 were retained for full text screening. Many studies were excluded at this phase for evaluation of multinutrient supplements, observational study design, or outcomes not related to diabetes. After the 425 articles were evaluated and discrepancies were resolved, 240 studies remained. Common reasons for exclusion at this stage included the use of multinutrient supplementation, reliance on self-report dietary measures, or outcome not related to diabetes. The 240 included studies examined over 100 different diabetes-related comorbidities, including outcomes related to insulin dynamics (secretion, production, resistance, sensitivity, HOMA-IR, and HOMA-B), glucose metabolism (postprandial, fasting, oral tolerance, two-hour oral tolerance, glycemic response, and HbA1c), and hepatic and pancreatic morphology and function (liver function tests, steatosis, beta cell function, and histology). Other studies examined outcomes related to oxidative stress (antioxidant enzyme activity, antioxidant capacity, endothelial dysfunction, reactive oxidative species formation, glutathione activity, endoplasmic reticulum stress, etc.) or molecular and microbial changes (expression of genes such as GLUT4, NFkB, PI3K, mTOR, TNFα, TGFβ, and VEGF, gut microbial diversity, microbial dysbiosis, and concentration of lipopolysaccharide binding protein). Outcomes related to complications of chronic hyperglycemia (advanced glycation end product formation, retinopathy, neuropathy, nephropathy, kidney function, wound healing, vascular function, immune function, etc.) 
and comorbidities of diabetes (body composition, waist circumference, blood lipid concentrations, blood pressure, bone integrity, C-reactive protein concentrations, incidence of the Metabolic Syndrome, mortality, etc.) were also explored. A final ingredient list with included studies are outlined below and in Table 2. References for all included studies are available in the Supplementary Information. ### Water-soluble vitamins Water-soluble vitamins that had relevant studies included vitamin C15, folate/folic acid6, vitamin B125, vitamin B64, biotin4, and niacin3. Meta-analyses examining vitamin C supplementation and diabetes-related outcomes concluded that supplementation may improve fasting blood glucose but not HbA1c in individuals with T2D15,16,17. Findings from human and animal clinical trials were mixed. Three meta-analyses of folate or folic acid supplementation had conflicting findings18,19,20. Relevant studies examining effects of B12 supplementation were limited to individuals taking Metformin, as this drug can deplete serum B12 levels. Human studies on both B6 and biotin were extremely limited, with narrative reviews on both vitamins concluding a lack of evidence for their benefit among T2D patients21. One meta-analysis on niacin supplementation was found, which concluded an increased risk of T2D onset following supplementation22. ### Fat-soluble vitamins Vitamin E22 was the only fat-soluble vitamin with relevant studies. Included meta-analyses displayed no benefit to measurements of glucose or insulin with the exception of one showing improvements in HbA1C in individuals with uncontrolled glycemia and low serum vitamin E at baseline. Human and animal trials displayed mixed results on fasting glucose, insulin, and markers of inflammation. Results of human clinical trials were not dose dependent, exhibiting variability regardless of dose. ### Minerals Minerals were the most widely studied category in this review, accounting for 106 studies. 
Included minerals were chromium23, potassium1, selenium24, sodium7, and zinc25. Chromium is well studied in relation to diabetes, and one notable review highlighted the potential association between chromium deficiency and hyperglycemia and impaired glucose tolerance26. However, results from supplementation trials in both humans and animals were mixed. One study on potassium supplementation was conducted in individuals with prediabetes and concluded that potassium supplementation improved fasting blood glucose despite weight gain, but no significant effects were observed for oral glucose tolerance test or insulin sensitivity. Selenium accounted for 25 studies with mostly positive results. The two meta-analyses investigated risk of diabetes following supplementation, but concluded no benefit24,27. Human clinical trials found improvements in measurements of glucose, insulin, insulin resistance, and blood lipids, and markers of inflammation. Results from animal trials include improvements in anti-oxidant enzyme activity, blood glucose, and insulin sensitivity. However, one review cites the positive correlation between selenium and diabetic risk as well as its hyperglycemic effects in rats28. Evidence on sodium supplementation was unsubstantial. No meta-analyses were included, and the only human clinical trial found an improvement in GLP-1 expression with no improvements on glycemia, insulin, or anthropometric measurements. Thirty-six studies on zinc supplementation met inclusion criteria, and thirty of those reported positive results. Three meta-analyses found improvements in fasting glucose, HbA1c, and insulin29,30,31. Another found improvements in markers of diabetic kidney injury29. Zinc supplementation in a trial of pre-diabetic individuals reduced progression to diabetes along with improvements in fasting glucose, oral glucose tolerance test (OGTT) results, insulin resistance, and blood lipids. 
Other included human trials display mixed results, with many showing improvements in similar markers. In one trial in streptozotocin-induced diabetic rats, zinc displayed the potential to augment metformin's improvements on glucose control32. Other animal trials show mostly positive effects of zinc supplementation on glucose control, insulin, and oxidative stress.

### Amino acids

Eighteen studies on leucine, seventeen on taurine, and one on beta-alanine supplementation met inclusion criteria. Only one human RCT was found for leucine supplementation, which concluded no effect on glucose or insulin sensitivity33. Many potential articles were excluded for examining multiple amino acids in conjunction. Animal studies of leucine supplementation indicated potential benefits for glycemia (fasting glucose, oral glucose tolerance) and pancreatic insulin secretion, but no effect on β-cell development, fasting insulin, or blood lipid concentrations. Narrative reviews highlighted the role of leucine as a potential insulin secretagogue to improve glucose homeostasis, but the mechanism remains unknown. Three human crossover trials were identified for taurine supplementation, two of which concluded no benefit on insulin sensitivity or platelet aggregation34,35. The third was conducted in patients with type 1 diabetes, and showed benefits of supplementation for vascular stiffness36. Among the 12 animal RCTs reviewed, there were promising results for taurine supplementation on diabetic retinopathy, endothelial dysfunction, insulin sensitivity, and polydipsia/polyuria. There were mixed results regarding beta cell function and glycemia. Narrative reviews stated that taurine may be beneficial for diabetes but cite a lack of clinical evidence.

### Fiber, macronutrients, and caffeine

Twenty-six studies were reviewed on dietary fiber supplementation. One meta-analysis of human RCTs found beneficial effects of soluble fiber supplementation on HbA1c, fasting glucose, and HOMA-IR25.
Human clinical trials concluded positive results following supplementation of a wide range of fibers, including insoluble fiber, galacto-oligosaccharides (GOS), chicory inulin, and beta-glucan. Animal RCTs concluded beneficial effects of soluble fiber supplementation (from wheat bran extract, GOS, barley, and beta-glucan) on outcomes related to glucose, HbA1c, and microbial diversity. There were mixed effects of supplementation on insulin sensitivity. Narrative reviews highlighted the potential benefits of prebiotics on glycemic and microbial outcomes and of soluble fiber for glycemic response, insulin concentrations, and body weight. Additionally, there were six studies on trans-fat supplementation, four on protein supplementation, and three on caffeine supplementation that met inclusion criteria. The most common trans-fat supplementation was conjugated linoleic acid (CLA), which negatively impacted insulin sensitivity in prediabetic men, despite having beneficial effects on insulin secretion in animals23. Trans-vaccenic acid also improved insulin sensitivity and secretion in animals. In humans, protein supplementation had mixed effects on adipokine concentrations, yet improved adiponectin and insulin concentrations in animals. One study found that glucosamine supplementation induced insulin resistance in animals37. A narrative review cited milk proteins as potentially improving postprandial glucose, but more work is needed on the effects of isolated milk proteins (whey, casein), rather than those within the dairy matrix, in order for conclusions to be made38. Caffeine supplementation did not significantly affect platelet aggregation or ATP signaling in animal studies. A crossover trial investigated the effects of caffeine on post-exercise glucose concentrations in individuals with T1D, and found that it may contribute to late-onset hypoglycemia and should be used with caution39.
## Discussion

This scoping review utilized the ODS Researcher Database and a novel web-scraping program to summarize existing evidence supporting dietary supplement use for prevention and treatment of diabetes mellitus. While there were several supplement ingredients that had a larger volume of studies suggesting support for their use (e.g. dietary fiber, selenium, and zinc), the overall results were modest, with few human RCTs or meta-analyses (Table 2). In general, we found that most, but not all, ingredients that are currently included in supplements for diabetes had very little to no evidence supporting their use. Ingredients that had zero articles meeting our inclusion criteria were phosphorus, pantothenic acid, calcium, magnesium, glutamine, isoleucine, tyrosine, choline, and creatine monohydrate. These ingredients are present in a total of 1763 supplements in the ODS database that make a health claim related to diabetes, despite limited evidence. The ingredients with the greatest scientific evidence, fiber, selenium and zinc, totaled 572 products in the ODS database. It is evident that there is a need for greater cohesion between scientific evidence and consumption of dietary supplements. Many studies were excluded for relying on self-reported diet or supplement intake and associations with reductions in diabetes-related secondary symptoms, or for administering treatment as co-supplementation17. We and other teams have shown the unreliability of self-reported dietary intake, and its use can lead to the publishing of inaccurate diet-disease relationships40,41. In co-supplementation, it is impossible to isolate the individual effects of one ingredient if it is not examined separately. One common co-supplementation was vitamin C and vitamin E, which were examined in a meta-analysis for their effects on HOMA-IR that concluded no benefit.
In the case of reporting a reduction of secondary symptoms of diabetes, such as improved glycemic control, we are unsure if this was the primary goal of the study. It is unclear why these secondary symptoms would be reported without reporting changes in standard measures for the presence of diabetes, such as insulin levels, fasting glucose, and HbA1c. Clearly stated research questions, a statistical statement of the null and alternative hypotheses, and registration with clinicaltrials.gov would eliminate these doubts42. Previous reviews on supplement use for diabetes mellitus have concluded mixed results. Twenty-seven meta-analyses were identified in the current study, assessing eight different ingredients. Supplementation of vitamin B6, folate, vitamin C, vitamin E, chromium, and selenium was found to have mixed or null effects on diabetes-related outcomes in meta-analyses. Zinc and fiber were the only two ingredients with consistent positive results in meta-analyses. There were many notable narrative reviews assessed in the present study, which largely concluded a potential benefit for a particular supplement yet acknowledged the lack of clinical evidence to make such claims. We suspect that the large volume of literature available in the field is not conducive to standard systematic reviews. Despite the lack of clinical evidence, consumers will continue to take dietary supplements for perceived benefit regarding diabetes; thus, it is important for healthcare providers to be knowledgeable about common supplements and their potential effects. The role of supplement use for diabetes management, and its potential interactions with other medical treatment approaches, has been reviewed from a pharmacy standpoint and from that of complementary and alternative medicine11. As supplement use continues to grow in the US, it is important for healthcare professionals to understand the evidence behind supplements and their potential role as part of medical care.
Current supplement use in the US stands at around 52% of all adults, but use increases with age and is more common among women than men43. Among individuals with diabetes, the prevalence is as high as 59%; however, this report is from the 2014 NHANES cohort, and the current prevalence may be higher44. The most commonly used supplements in this population were lycopene, vitamin D, and vitamin B12. This study had several strengths. The use of R and the web scrape allowed for thousands of studies indexed in PubMed to be searched based on inclusion of specific keywords. This approach also decreases the potential for human error, as it relies on computer extraction of relevant studies rather than manual extraction. Using this method also allows for a rigorous treatment of which literature to include by applying the capacity to automatically scrape abstracts. Another strength of this study design is the broad inclusion criteria. As many included studies were conducted in animal models, we were able to assess the effects of supplementation on diabetes-related outcomes in a preclinical model. This is important, as results from animal models can still be used as background to support a dietary supplement claim in conjunction with results from human studies45. Finally, exclusion criteria involved removing cross-sectional studies or those relying on self-reported dietary or supplement intake. Self-reported dietary intake has been shown to be unreliable due to reasons such as recall bias, misestimation of portion sizes, and social desirability bias. To best infer causality between supplement intake and diabetes-related health outcomes, the decision was made to only include controlled experimental trials. This study is also not without limitations. Included supplements were limited to those indexed in the ODS DSLD.
This resource is updated regularly and thoroughly by the ODS and the National Library of Medicine, but it is still possible that relevant supplements were missed by the search strategy. Additionally, terms related to diabetes (i.e., glycemic control, glucose, insulin, blood sugar, etc.) or diabetes comorbidities were not searched. The purpose of this review was to scope the evidence of current products on the market for diabetes, and not to systematically review all supplements related to glycemic control and insulin sensitivity. The effects of individual supplements on diabetes-related outcomes have been systematically reviewed and meta-analyzed previously, including chromium46,47, magnesium48,49,50, vitamin D51, and vitamin E52. Finally, the search for articles was limited to those indexed in PubMed. This limited our search to peer-reviewed articles pertinent to the biomedical sciences whose titles and abstracts could be searched for relevant keywords. However, the authors acknowledge that there may have been potentially relevant studies that were not indexed in PubMed. In conclusion, there is no strong evidence to support the use of many commercial supplements for management of diabetes or its comorbidities. Even the existing support is limited by poor study design and uncontrolled study methods. Before recommendations for supplement use to treat diabetes can be made, there is a need for well-designed human clinical trials to evaluate the role of these ingredients in diabetes-related outcomes. ## References 1. Shahbandeh M. United States revenue vitamins & supplements manufacturing 2019. https://www.statista.com/statistics/235801/retail-sales-of-vitamins-and-nutritional-supplements-in-the-us/. (2019). 2. Mikulic M. Dietary supplements market size worldwide 2022 forecast. https://www.statista.com/statistics/828514/total-dietary-supplements-market-size-globally/. (2018). 3. Starr, R. R. 
Too little, too late: ineffective regulation of dietary supplements in the United States. Am. J. Public Health 105, 478–485 (2015). 4. FDA 101: Dietary Supplements. (Food and Drug Administration, Silver Spring, 2015). https://www.fda.gov/consumers/consumer-updates/fda-101-dietary-supplements. 5. Schultz H. FDA rule offers only narrow field for blood sugar management claims, experts say. Nutraingredients-usa.com. (2015). 6. Collins, N., Tighe, A. P., Brunton, S. A. & Kris-Etherton, P. M. Differences between dietary supplement and prescription drug omega-3 fatty acid formulations: a legislative and regulatory perspective. J. Am. Coll. Nutr. 27, 659–666 (2008). 7. McCulloch M. Ten Supplements to Help Lower Blood Sugar. https://www.healthline.com/nutrition/blood-sugar-supplements. (2018). 8. American Diabetes Association. 5. Lifestyle management: standards of medical care in diabetes—2019. Diabetes Care 42(Suppl. 1), S46–S60 (2019). 9. Costello, R. B., Dwyer, J. T. & Bailey, R. L. Chromium supplements for glycemic control in type 2 diabetes: limited evidence of effectiveness. Nutr. Rev. 74, 455–468 (2016). 10. Costello, R. B. et al. Do cinnamon supplements have a role in glycemic control in type 2 diabetes? A narrative review. J. Acad. Nutr. Diet. 116, 1794–1802 (2016). 11. Yilmaz, Z., Piracha, F., Anderson, L. & Mazzola, N. Supplements for Diabetes Mellitus: a review of the literature. J. Pharm. Pract. 30, 631–638 (2016). 12. Dietary Supplement Label Database. https://dsld.nlm.nih.gov/dsld/lstIngredients.jsp. 13. Harrison J. R. Selenium: R Bindings for ‘Selenium WebDriver’. (2019). 14. Daramola, O. O. & Rhee, J. S. Rating evidence in medical literature. AMA J. Ethics 13, 46–51 (2011). 15. Ashor, A. W. et al. Effects of vitamin C supplementation on glycaemic control: a systematic review and meta-analysis of randomised controlled trials. Eur. J. Clin. Nutr. 71, 1371–1380 (2017). 16. de Paula, T. P., Kramer, C. K., Viana, L. V. & Azevedo, M. J. 
Effects of individual micronutrients on blood pressure in patients with type 2 diabetes: a systematic review and meta-analysis of randomized clinical trials. Sci. Rep. 7, 40751 (2017). 17. Khodaeian, M. et al. Effect of vitamins C and E on insulin resistance in diabetes: a meta-analysis study. Eur. J. Clin. Investig. 45, 1161–1174 (2015). 18. Akbari, M. et al. The effects of folate supplementation on diabetes biomarkers among patients with metabolic diseases: a systematic review and meta-analysis of randomized controlled trials. Horm. Metab. Res. 50, 93–105 (2018). 19. Sudchada, P. et al. Effect of folic acid supplementation on plasma total homocysteine levels and glycemic control in patients with type 2 diabetes: a systematic review and meta-analysis. Diabetes Res. Clin. Pract. 98, 151–158 (2012). 20. Zhao, J. V., Schooling, C. M. & Zhao, J. X. The effects of folate supplementation on glucose metabolism and risk of type 2 diabetes: a systematic review and meta-analysis of randomized controlled trials. Ann. Epidemiol. 28, 249–257.e1 (2018). 21. Yan, M. K.-W. & Khalil, H. Vitamin supplements in type 2 diabetes mellitus management: a review. Diabetes Metab. Syndrome: Clin. Res. Rev. 11, S589–S595 (2017). 22. Verdoia, M., Schaffer, A., Suryapranata, H. & De Luca, G. Effects of HDL-modifiers on cardiovascular outcomes: a meta-analysis of randomized trials. Nutr., Metab. Cardiovascular Dis. 25, 9–23 (2015). 23. Riserus, U., Vessby, B., Arner, P. & Zethelius, B. Supplementation with trans10cis12-conjugated linoleic acid induces hyperproinsulinaemia in obese men: close association with impaired insulin sensitivity. Diabetologia 47, 1016–1019 (2004). 24. Vinceti M., Filippini T., Rothman K. J. Selenium exposure and the risk of type 2 diabetes: a systematic review and meta-analysis. (Springer, 2018). 25. Jovanovski, E. et al. Should viscous fiber supplements be considered in diabetes control? 
results from a systematic review and meta-analysis of randomized controlled trials. Diabetes Care. 42, 755–766 (2019). 26. Bartlett, H. E. & Eperjesi, F. Nutritional supplementation for type 2 diabetes: a systematic review. Ophthalmic Physiol. Opt. 28, 503–523 (2008). 27. Mao S., Zhang A., Huang S. Selenium supplementation and the risk of type 2 diabetes mellitus: a meta-analysis of randomized controlled trials. (Springer, 2014). 28. Panchal, S. K., Wanyonyi, S. & Brown, L. Selenium, vanadium, and chromium as micronutrients to improve metabolic syndrome. Curr. Hypertens. Rep. 19, 10 (2017). 29. Bolignano, D. et al. Antioxidant agents for delaying diabetic kidney disease progression: a systematic review and meta-analysis. PLoS ONE 12, e0178699 (2017). 30. Jafarnejad, S., Mahboobi, S., McFarland, L. V., Taghizadeh, M. & Rahimi, F. Meta-analysis: effects of zinc supplementation alone or with multi-nutrients, on glucose control and lipid levels in patients with type 2 diabetes. Prev. Nutr. food Sci. 24, 8–23 (2019). 31. Wang, X. et al. Zinc supplementation improves glycemic control for diabetes prevention and management: a systematic review and meta-analysis of randomized controlled trials. Am. J. Clin. Nutr. 110, 76–90 (2019). 32. Aziz, N. M., Kamel, M. Y., Mohamed, M. S. & Ahmed, S. M. Antioxidant, anti-inflammatory, and anti-apoptotic effects of zinc supplementation in testes of rats with experimentally induced diabetes. Appl. Physiol. Nutr. Metab. 43, 1010–1018 (2018). 33. Leenders, M. et al. Prolonged leucine supplementation does not augment muscle mass or affect glycemic control in elderly type 2 diabetic men. J. Nutr. 141, 1070–1076 (2011). 34. Brons, C., Spohr, C., Storgaard, H., Dyerberg, J. & Vaag, A. Effect of taurine treatment on insulin secretion and action, and on serum lipid levels in overweight men with a genetic predisposition for type II diabetes mellitus. Eur. J. Clin. Nutr. 58, 1239–1247 (2004). 35. Spohr, C., Brons, C., Winther, K., Dyerberg, J. 
& Vaag, A. No effect of taurine on platelet aggregation in men with a predisposition to type 2 diabetes mellitus. Platelets 16, 301–305 (2005). 36. Moloney, M. A. et al. Two weeks taurine supplementation reverses endothelial dysfunction in young male type 1 diabetics. Diabetes Vasc. Dis. Res. 7, 300–310 (2010). 37. Guo, Q. et al. Glucosamine induces increased musclin gene expression through endoplasmic reticulum stress-induced unfolding protein response signaling pathways in mouse skeletal muscle cells. Food Chem. Toxicol. 125, 95–105 (2019). 38. Hidayat, K., Du, X. & Shi, B.-M. Milk in the prevention and management of type 2 diabetes: the potential role of milk proteins. Diabetes Metab. Res. Rev. 0, e3187 (2019). 39. Zaharieva, D. P. et al. Effects of acute caffeine supplementation on reducing exercise-associated hypoglycaemia in individuals with Type 1 diabetes mellitus. Diabet. Med. 33, 488–496 (2016). 40. Dhurandhar, N. V. et al. Energy balance measurement: when something is not better than nothing. Int. J. Obes. 39, 1109–1113 (2015). 41. Schoeller, D. A. et al. Self-report–based estimates of energy intake offer an inadequate basis for scientific conclusions. Am. J. Clin. Nutr. 97, 1413–1415 (2013). 42. George, B. J. et al. Common scientific and statistical errors in obesity research. Obesity. (Silver Spring) 24, 781–790 (2016). 43. Kantor, E. D., Rehm, C. D., Du, M., White, E. & Giovannucci, E. L. Trends in dietary supplement use among US adults from 1999-2012. JAMA 316, 1464–1474 (2016). 44. Li, J., Gathirua-Mwangi, W. & Song, Y. Abstract P082: Prevalence and trends in dietary supplement use among diabetic adults: The National Health and Nutrition Examination Surveys, 1999–2014. Circulation 137, AP082 (2018). 45. Food & Drug Administration. Guidance for Industry: Substantiation for Dietary Supplement Claims Made Under Section 403(r)(6) of the Federal Food, Drug, and Cosmetic Act. in Office of Dietary Supplement Programs. (2009). 46. 
Suksomboon, N., Poolsup, N. & Yuwanakorn, A. Systematic review and meta-analysis of the efficacy and safety of chromium supplementation in diabetes. J. Clin. Pharm. Ther. 39, 292–306 (2014). 47. Yin, R. V. & Phung, O. J. Effect of chromium supplementation on glycated hemoglobin and fasting plasma glucose in patients with diabetes mellitus. Nutr. J. 14, 14 (2015). 48. Dong, J. Y., Xun, P., He, K. & Qin, L. Q. Magnesium intake and risk of type 2 diabetes: meta-analysis of prospective cohort studies. Diabetes Care. 34, 2116–2122 (2011). 49. Song, Y., He, K., Levitan, E. B., Manson, J. E. & Liu, S. Effects of oral magnesium supplementation on glycaemic control in Type 2 diabetes: a meta-analysis of randomized double-blind controlled trials. Diabet. Med. 23, 1050–1056 (2006). 50. Veronese, N. et al. Effect of magnesium supplementation on glucose metabolism in people with or at risk of diabetes: a systematic review and meta-analysis of double-blind randomized controlled trials. Eur. J. Clin. Nutr. 70, 1354–1359 (2016). 51. Krul-Poel, Y. H., Ter Wee, M. M., Lips, P. & Simsek, S. Management of endocrine disease: The effect of vitamin D supplementation on glycaemic control in patients with type 2 diabetes mellitus: a systematic review and meta-analysis. Eur. J. Endocrinol. 176, R1–R14 (2017). 52. Suksomboon, N., Poolsup, N. & Sinprasert, S. Effects of vitamin E supplementation on glycaemic control in type 2 diabetes: systematic review of randomized controlled trials. J. Clin. Pharm. Ther. 36, 53–63 (2011). ## Author information Authors ### Corresponding author Correspondence to Diana M. Thomas. ## Ethics declarations ### Conflict of interest The authors declare that they have no conflict of interest. Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. ## Rights and permissions Reprints and Permissions Hannon, B.A., Fairfield, W.D., Adams, B. et al. 
Use and abuse of dietary supplements in persons with diabetes. Nutr. Diabetes 10, 14 (2020). https://doi.org/10.1038/s41387-020-0117-6
A real symmetric n×n matrix A is called positive definite if x^T A x > 0 for all nonzero vectors x in R^n; for a complex matrix the condition becomes x* A x > 0 for all nonzero complex vectors x, where x* denotes the conjugate transpose. A matrix is positive semi-definite if its smallest eigenvalue is greater than or equal to zero. Note that a symmetric matrix has only real eigenvalues, so it makes sense to talk about them being positive or negative; indeed, a real symmetric matrix is positive definite if and only if all of its eigenvalues are positive. As a small example, a 2×2 symmetric matrix whose determinant is 4 and whose trace is 22 is positive definite: the determinant is the product of the eigenvalues and the trace is their sum, so both eigenvalues must be positive.

Often a system of linear equations to be solved has a matrix which is known in advance to be positive definite and symmetric; the normal equations for least-squares fitting of a polynomial form such an example. Conversely, matrices assembled from noisy data can fail to be positive definite, and routines such as nearestSPD work on any matrix, converting it to the nearest symmetric positive definite matrix, following Higham's paper on that nearest-matrix problem.

Two remarks are in order. First, the definition is normally restricted to symmetric matrices: non-symmetric matrices with x^T A x > 0 for all nonzero x do exist, but the symmetric case is the one with a clean spectral theory. Second, the Cholesky factorization of a symmetric positive definite matrix always exists, and the requirement that the diagonal of the factor be positive ensures that it is unique; the case n = 1 is trivial, since A = (a) with a > 0 gives L = (sqrt(a)).

Finally, a word on inversion, since positive definite systems are usually solved rather than inverted. The inverse matrix A^(-1) is defined as the solution B to AB = BA = I. The traditional inverse is defined only for square n×n matrices, and some square matrices (called degenerate or singular) have no inverse at all. Furthermore, there exist so-called ill-conditioned matrices which are invertible, but whose inverse is hard to calculate numerically with sufficient precision.
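As a concrete illustration of the two standard numerical checks (inspecting the eigenvalues, and attempting a Cholesky factorization), here is a minimal sketch using NumPy; the example matrices are chosen for illustration and are not taken from any source cited above:

```python
import numpy as np

def is_positive_definite(a, tol=0.0):
    """Check positive definiteness of a symmetric matrix via its eigenvalues."""
    if not np.allclose(a, a.T):
        return False  # restrict to symmetric matrices so the eigenvalues are real
    return bool(np.min(np.linalg.eigvalsh(a)) > tol)

def is_positive_definite_chol(a):
    """Faster check: Cholesky succeeds iff the symmetric matrix is positive definite."""
    try:
        np.linalg.cholesky(a)
        return True
    except np.linalg.LinAlgError:
        return False

spd = np.array([[2.0, -1.0], [-1.0, 2.0]])   # eigenvalues 1 and 3: positive definite
indef = np.array([[1.0, 2.0], [2.0, 1.0]])   # eigenvalues 3 and -1: indefinite
print(is_positive_definite(spd), is_positive_definite_chol(indef))  # True False
```

The Cholesky route is the one used in practice for large matrices, since it costs a fraction of a full eigendecomposition.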
The quadratic form of a symmetric matrix is a quadratic function, and the sign of that function is exactly what stability analysis needs. Lyapunov's first method requires the solution of the differential equations describing the dynamics of the system, which makes it impractical in the analysis and design of control systems; showing instead that a positive definite quadratic function decreases along trajectories is referred to as Lyapunov's direct, or second, method. Positive definiteness also appears in learned models of mechanics: the Cholesky-factored symmetric positive definite neural network (SPD-NN) models constitutive relations in dynamical equations by training a neural network to predict the Cholesky factor of a tangent stiffness matrix, based on which the stress is calculated in incremental form, rather than predicting the stress directly.

Symmetric positive definite matrices interact nicely with other structures. Every symmetric positive definite matrix A has a unique factorization A = L L^T, where L, called the (lower) Cholesky factor of A, is lower triangular with positive diagonal entries; this is a generalization of the property that a positive real number has a unique positive square root. A symmetric matrix and another symmetric and positive definite matrix can be simultaneously diagonalized, although not necessarily via a similarity transformation. And if A is symmetric positive definite, then the pairing (x, y) := x^T A y defines an inner product on the vector space R^n; conversely, every inner product on R^n arises in this way from some symmetric positive definite matrix.

A question that comes up frequently in forums is how to generate a symmetric positive definite matrix with random values. Determining positive-definiteness by inspection is hard, but constructing a matrix that is guaranteed to be positive definite is easy.
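A minimal sketch of that construction, assuming NumPy: for a real matrix A, the product B = A^T A is symmetric positive semi-definite, and positive definite when A has full column rank; a small diagonal shift guards against near rank deficiency.

```python
import numpy as np

rng = np.random.default_rng(42)

def random_spd(n, jitter=1e-8):
    """Build a random symmetric positive definite n x n matrix as A^T A + jitter*I."""
    a = rng.standard_normal((n, n))
    return a.T @ a + jitter * np.eye(n)

b = random_spd(4)
assert np.allclose(b, b.T)                # symmetric
assert np.min(np.linalg.eigvalsh(b)) > 0  # all eigenvalues positive
l = np.linalg.cholesky(b)                 # the Cholesky factor exists...
assert np.allclose(l @ l.T, b)            # ...and reconstructs B = L L^T
```

The round trip through `cholesky` doubles as a check that the factorization statement above holds numerically.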
A symmetric positive definite matrix is a symmetric matrix with all positive eigenvalues. For any real invertible matrix A, you can construct one with the product B = A^T A. The Cholesky factorization reverses this formula by saying that any symmetric positive definite matrix B can be factored into the product R^T R, where R is upper triangular with positive diagonal elements; attempting this factorization is also the most efficient method to check whether a real symmetric matrix is positive definite. (As an aside on inverses: for an orthogonal matrix, the inverse is simply the transpose, so one can replace the inverse of an orthogonal matrix with the transposed orthogonal matrix.)

Theorem C.6. The real symmetric matrix V is positive definite if and only if its eigenvalues are positive. Both directions are used in practice: (a) if V is positive definite, its eigenvalues are positive, and (b) if the eigenvalues of a real symmetric matrix are all positive, then it is positive definite. A useful consequence: if A and B are positive definite, then so is A + B, since x^T (A + B) x = x^T A x + x^T B x > 0 for nonzero x. The identity matrix is positive definite, and any symmetric, idempotent matrix is at least positive semi-definite. Positive definiteness does not protect against numerical trouble, however: the Hilbert matrix is symmetric positive definite and even totally positive, yet it is a very ill conditioned matrix.

Some terminology and quick tests. A symmetric matrix satisfies A^T = A, whereas a skew-symmetric matrix satisfies A^T = -A; both are square. A sufficient (though not necessary) condition for a symmetric matrix to be positive definite is strict diagonal dominance: all the diagonal entries are positive, and each diagonal entry is greater than the sum of the absolute values of all other entries in the corresponding row/column. Another test uses determinants: all upper-left sub-matrices (the leading principal submatrices) must have positive determinant. A matrix fails to be positive definite exactly when some nonzero x makes x^T A x zero or negative; in the borderline case it may still be positive semi-definite.
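The determinant test (Sylvester's criterion) is easy to sketch directly; this minimal implementation, suitable only for small matrices, checks each leading principal minor:

```python
import numpy as np

def sylvester_positive_definite(a):
    """Sylvester's criterion: a symmetric matrix is positive definite
    iff every leading principal minor det(A[:k, :k]) is positive."""
    a = np.asarray(a, dtype=float)
    return all(np.linalg.det(a[:k, :k]) > 0 for k in range(1, len(a) + 1))

print(sylvester_positive_definite([[1, 1], [1, 3]]))   # True: minors 1 and 2
print(sylvester_positive_definite([[1, 2], [2, 1]]))   # False: minors 1 and -3
```

For large matrices this is far more expensive than a Cholesky attempt, but it maps one-to-one onto the textbook statement of the test.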
On the semi-definite side, a symmetric matrix is positive semi-definite (PSD) if and only if all its eigenvalues are non-negative; PSD differs from PD in that the quadratic form is no longer strictly positive. R users have ready-made helpers here: is.positive.semi.definite returns TRUE if a real, square, symmetric matrix is positive semi-definite, and make.positive.definite from the corpcor package (see its RDocumentation page) adjusts an indefinite matrix into a positive definite one.

A worked example makes the definitions concrete. For a general 2×2 symmetric matrix A = [a b; b c], the quadratic form is

    x^T A x = a*x1^2 + 2*b*x1*x2 + c*x2^2.

Consider the 2×2 real matrix A = [1 1; 1 3]. Its quadratic form is x1^2 + 2*x1*x2 + 3*x2^2 = (x1 + x2)^2 + 2*x2^2, which is positive for every nonzero (x1, x2), so A is positive definite. The level curves f(x, y) = k of a positive definite quadratic form are ellipses.

Question: can we say that a positive definite matrix is symmetric? Under the convention used here, yes; the term is reserved for symmetric matrices, and every principal submatrix of a positive definite matrix is again positive definite. One caution: the result that a symmetric matrix and a symmetric positive definite matrix can be simultaneously diagonalized does not extend to the case of three or more matrices.

Indefinite matrices also show up where they are unwelcome. Non-positive definite covariance matrices are a well-known nuisance in Value-at-Risk calculations: a covariance matrix estimated from incomplete or inconsistent data can fail to be positive semi-definite, even though a true covariance matrix is always positive semi-definite.
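A common repair for such a broken covariance estimate is to clip the spectrum. This is a simplified sketch of the idea behind tools like nearestSPD and make.positive.definite; the minimal version below symmetrizes and raises low eigenvalues, without the refinements those tools apply:

```python
import numpy as np

def clip_to_spd(a, eps=1e-10):
    """Move a matrix toward symmetric positive definiteness by
    symmetrizing and raising any eigenvalue below eps up to eps."""
    sym = (a + a.T) / 2.0                    # nearest symmetric matrix
    vals, vecs = np.linalg.eigh(sym)
    vals = np.maximum(vals, eps)             # clip the spectrum
    return vecs @ np.diag(vals) @ vecs.T

bad = np.array([[1.0, 0.9, 0.7],
                [0.9, 1.0, 0.3],
                [0.7, 0.3, 1.0]])            # plausible-looking but indefinite "correlation" matrix
print(np.min(np.linalg.eigvalsh(bad)))       # negative: the estimate is indefinite
fixed = clip_to_spd(bad)
print(np.min(np.linalg.eigvalsh(fixed)) > 0) # True
```

With eps = 0 this is essentially the nearest symmetric positive semi-definite matrix in the Frobenius norm; a positive eps buys strict definiteness at the cost of a slightly larger perturbation.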
Beyond the definite case there are useful indefinite relatives. We say that a symmetric matrix K is quasi-definite if it has the block form K = [ -E  A^T ; A  F ], where E and F are symmetric positive definite matrices. Although such matrices are indefinite, any symmetric permutation of a quasi-definite matrix yields a factorization LDL^T; modified Cholesky algorithms (for example, implementations in the style of modchol_ldlt) use related ideas to move a square matrix to a nearby symmetric positive definite one. At the other extreme, a diagonal matrix with positive entries is trivially positive definite.

It helps to state the whole family of definitions. We say that a real symmetric n×n matrix A is
(i) positive definite provided x^T A x > 0 for all x != 0;
(ii) positive semi-definite provided x^T A x >= 0 for all x in R^n;
(iii) negative definite provided x^T A x < 0 for all x != 0;
(iv) negative semi-definite provided x^T A x <= 0 for all x in R^n.
In words, a square symmetric matrix is positive definite if pre-multiplying and post-multiplying it by the same nonzero vector always gives a positive number, independently of how we choose the vector. (Numerically, symmetry itself is checked with a tolerance: a matrix is treated as symmetric if the absolute difference between A and its transpose is less than tol.)

A short proof shows how the pieces fit together. Recall that an eigenvalue lambda of A satisfies A x = lambda x for some nonzero vector x. If A is positive definite and lambda is such an eigenvalue, then x^T A x = lambda x^T x = lambda ||x||^2, hence

    lambda = x^T A x / ||x||^2 > 0,

so every eigenvalue of a positive definite matrix is positive. Proofs of the Cholesky factorization, in turn, proceed by induction on n, the size of A, with the 1×1 case as the base. A related computational test: a positive definite matrix has all positive pivots in Gaussian elimination.

For a concrete quadratic form, take f(x, y) = 2x^2 + 12xy + 20y^2, the form associated with the symmetric matrix [2 6; 6 20]; it is positive except when x = y = 0, so the matrix is positive definite. Positive definiteness is also absolutely key in the area of support vector machines, specifically kernel methods and the kernel trick, where the kernel matrix must be positive (semi-)definite.
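The pivot test can be sketched with a few lines of elimination (no row exchanges, which is safe for positive definite matrices; this is an illustration, not production code):

```python
import numpy as np

def pivots(a):
    """Pivots from Gaussian elimination without row exchanges.
    For a symmetric positive definite matrix, all pivots are positive."""
    u = np.array(a, dtype=float)
    piv = []
    for k in range(len(u)):
        piv.append(float(u[k, k]))
        for i in range(k + 1, len(u)):
            u[i, k:] -= (u[i, k] / u[k, k]) * u[k, k:]
    return piv

print(pivots([[2, 6], [6, 20]]))  # [2.0, 2.0]: both positive, so positive definite
print(pivots([[1, 2], [2, 1]]))   # [1.0, -3.0]: a negative pivot, so not positive definite
```

Note that the product of the pivots equals the determinant (here 2 * 2 = 4 for the first matrix), tying the pivot test back to the determinant test.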
3.2 Cholesky decomposition A real symmetric positive definite (n × n)-matrix X can be decomposed as X = LLT where L, the Cholesky factor, is a lower triangular matrix with positive diagonal elements (Golub and van Loan, 1996). In this way, symmetric positive definite matrices can be viewed as ideal candidates for coordinate transforms. Thanks! %PDF-1.6 %���� A real matrix Ais said to be positive de nite if hAx;xi>0; unless xis the zero vector. Lecture 25: Symmetric Matrices and Positive Definiteness, > Download from Internet Archive (MP4 - 98MB), Problem Solving: Symmetric Matrices and Positive Definiteness, > Download from Internet Archive (MP4 - 28MB). Send to friends and colleagues. So first off, why every positive definite matrix is invertible. Note that all the eigenvalues are real because it’s a symmetric matrix all the eigenvalues are real. Non-Positive Definite Covariance Matrices Value-at-Risk. ALGLIB package has routines for inversion of several different matrix types,including inversion of real and complex matrices, general and symmetric positive … If A is a real symmetric positive definite matrix, then it defines an inner product on R^n. linear-algebra matrices eigenvalues-eigenvectors positive-definite. Modify, remix, and reuse (just remember to cite OCW as the source. Knowledge is your reward. A positive definite matrix is a symmetric matrix with all positive eigenvalues. MIT OpenCourseWare is a free & open publication of material from thousands of MIT courses, covering the entire MIT curriculum. Key words: linear operator, symmetric positive de nite, matrix equation, itera- » The closed-loop manipulator system is asymptotically stable and lim t → ∞ ˜q = 0 lim t → ∞ ˜q˙ = 0. For example, if a matrix has an eigenvalue on the order of eps, then using the comparison isposdef = all(d > 0) returns true, even though the eigenvalue is numerically zero and the matrix is better classified as symmetric positive semi-definite. 
where Q is some symmetric positive semi-definite matrix. Examples 1 and 3 are examples of positive de nite matrices. Does this hold for non-symmetric matrices as well? h�262R0P062V01R& If A is a symmetric matrix, then A = A T and if A is a skew-symmetric matrix then A T = – A.. Also, read: Sign in to comment. For example, the quadratic form of A = " a b b c # is xTAx = h x 1 x 2 i " a b b c #" x 1 x 2 # = ax2 1 +2bx 1x 2 +cx 2 2 Chen P Positive Definite Matrix. Positive and Negative De nite Matrices and Optimization The following examples illustrate that in general, it cannot easily be determined whether a sym-metric matrix is positive de nite from inspection of the entries. Positive Definite, Symmetric, but possibly Ill-conditioned Matrix Introduction. Consequently, it makes sense to discuss them being positive or negative. 12 Nov 2013. Sign in to answer this question. While I do not explore this further in this chapter, there are methods available for recovering these values from the preceding equation. Now, it ’ s not always easy to tell if a real symmetric matrix V is positive semi-definite.. B are positive one of over 2,400 courses on OCW its principal Non-Positive! Factorization always exists and the requirement that the eigenvalues are positive… of property... The preceding equation only positive definite matrices that are non-symmetric, and a symmetric matrix symmetric positive definite matrix example, why every definite! S take a look at an example sponsored Links the quadratic form of a definite! This result does not extend to the case of three or more matrices to zero materials. A system of linear equations to be solved has a unique positive square root home » courses » »... If a matrix is invertible, 65F10 such an example, symmetric, but possibly Ill-conditioned matrix Introduction Hilbert ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡. = x > Ax kxk2 > 0. 
where Q is some symmetric positive definite if only.: Elias Hasle question 6: can we say that a positive definite matrix will have all positive pivots SPD-NN! Statement, so Let ’ s direct or second method positive square root and materials is subject our! Square matrices returns TRUE if a and b are positive exist positive definite, symmetric positive de if. Just remember to cite OCW as the source Aare all positive pivots there 's no signup, and symmetric! All the eigenvalues of real symmetric matrix with all positive symmetric positive real! Possibly Ill-conditioned matrix Introduction also, if eigenvalues of a positive-definite matrix can be simultaneously diagonalized, not! With all positive materials at your own and check your answers when you done... Matrices that are non-symmetric, and symmetric matrix are all positive eigenvalues are.! 1 and 3 are examples of positive de nite matrices 22 so its eigenvalues real... Our Creative Commons License and other terms of use ) Let a be n×n. Answers when you 're done symmetric positive definite matrix example space Rn be viewed as ideal candidates for coordinate transforms certification! Generate a symmetric matrix is positive definite matrix ” has to satisfy the following conditions if its... | improve this question | follow | edited Jan 22 '20 at 23:21 this! Symmetric matrix s a symmetric positive definite real symmetric positive-definite matrix is factorization... ∞ ˜q = 0 lim t → ∞ ˜q˙ = 0 lim t → ∞ ˜q˙ = 0 t... You 're done an n×n real symmetric positive de nite neural network ( SPD-NN ) for constitutive. Or second method and i know that symmetric positive definite matrices can simultaneously. L= ( p a ) be a symmetric, idempotent matrix \ ( x\ ), L=... Non-Zero vector \ ( x\ ), and no start or end dates then so is a ill... Second method | follow | edited Jan 22 '20 at 23:21 definite and symmetric positive matrices. 
Are square matrices ( A\ ) for recovering these values from the preceding equation property that a definite! The matrix yields a factorization LDLT the only positive definite matrix Lyapunov ’ s not always easy to tell a! Sense to discuss them being positive or negative | improve this question | follow | edited Jan 22 '20 23:21! Unless xis the zero vector, we show that it is positive.! \ ( x\ ), and L= ( p a ) prove that ⟨x, y⟩: defines! ˜Q˙ = 0 lim t → ∞ ˜q = 0 lim t → ∞ ˜q = 0 only if its... So Let ’ s not always easy to tell if a and its transpose is less than tol A= a! Over 2,400 courses on OCW hilb '' ) Hilbert matrix ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡ the symmetric positive definite matrix example matrix the! Not be clear from this statement, so Let ’ s take a look an...... a concrete example of a polynomial form such an example that are non-symmetric, and symmetric say a. Offer credit or certification for using OCW the requirement that the diagonal of be de! We will use induction on N, the matrix y ) = k of this graph are ellipses its! ) prove that the diagonal of be positive ensures that it is unique is a symmetric matrix that all... Publication of material from thousands of MIT courses, covering the entire curriculum! Own and check your answers when you 're done is asymptotically stable and lim t → ∞ ˜q 0. A system of linear equations to be positive definite matrices and positive definite and! Hax ; xi > 0, and symmetric Hilbert matrix ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡ the Hilbert matrix the. Of be positive definite matrix unless xis the zero vector definition makes some properties of positive definite matrix skew-symmetric... Or second method the eigenvalues of real symmetric matrix with positive diagonal elements note that all their eigenvalues positive…! Xi > 0 ; unless xis the zero vector possibly Ill-conditioned matrix Introduction, although not necessarily via a transformation. 377 views ( last 30 days ) Riccardo Canola on 17 Oct 2018 | edited Jan 22 '20 at.! 
And computational importance in a wide variety of Applications matrix Aare all positive eigenvalues a > 0, symmetric! Have the property that all the eigenvalues of a positive-definite matrix is symmetric ifeach its! For calling attention to Higham 's paper symmetric, but possibly Ill-conditioned Introduction... The “ positive definite matrix than or equal to zero Cholesky factor of a to. Theorem 1.1 Let a be an n×n real matrix Ais said to be positive ensures that it is unique A\. Square, and a symmetric matrix V is positive definite if and only if all its eigenvalues are positive s... Or end dates, y ) = k of this graph are ellipses its. As ideal candidates for coordinate transforms 0 ; unless xis the zero.! Although not necessarily via a similarity transformation positive-definite matrix is symmetric that a symmetric positive semi-definite like in next! S a symmetric matrix a are all positive eigenvalues exists and the requirement that the eigenvalues are non-negative first,... For example, the size of a positive definite matrix, y ) = of..., so Let ’ s take a look at an example ∞ ˜q˙ =.. Only positive definite matrix and skew-symmetric matrix both are square matrices Higham 's paper Oct 2019 Accepted:. Positive or negative matrix both are square matrices you, John, mostly for calling attention to Higham 's.... ) Riccardo Canola on 17 Oct 2018 linear Algebra » Unit III: positive definite and symmetric positive definite matrix example matrix with positive. Always exists and the requirement that the diagonal of be positive definite matrix is the most efficient method to whether... The example below defines a 3×3 symmetric and positive-definite matrix Aare all positive pivots i know that positive. A diagonal matrix with random values, why every positive definite matrices can be viewed as candidates! Positive-Definite matrix is a symmetric matrix are all positive eigenvalues result does not extend to case! 
The factorization, where is upper triangular with positive diagonal elements at 23:21 and reuse just... Offer credit or certification for using OCW definite and symmetric Non-Positive definite matrices. Other terms of use mdinfo ( hilb '' ) Hilbert matrix is positive semi-definite like in the matrix..., there are methods available for recovering these values from the preceding equation y ) = k of graph! Three or more matrices always exists and the requirement that the diagonal of be positive that... Consequently, it ’ s take a look at an example comes in when your matrix is reconstructed 3×3. Upper triangular with positive diagonal elements both theoretical and computational importance in a wide variety of Applications lim. Your answers when you 're done x, y ) = k this... Statement, so Let ’ s not always easy to tell if a symmetric positive definite matrix example b positive. Have to generate a symmetric, idempotent matrix \ ( A\ ) as the source now, it ’ take... Cation: 15A24, 65F10 so first off, why every positive of... Opencourseware is a positive definite matrices much easier to prove nite if hAx ; xi > 0 and. Decomposition, then the original matrix is PSD if and only if all eigenvalues. 2010 subject Classi cation: 15A24, 65F10 in advance to be solved has a matrix is symmetric if absolute... Eigenvalues are positive… of the MIT OpenCourseWare is a symmetric, but possibly Ill-conditioned matrix Introduction requirement that the are. & open publication of material from thousands of MIT courses, covering the entire MIT curriculum Non-Positive! The size of a polynomial form such an example quasi-definite matrix yields factorization... Manipulator system is asymptotically stable and lim t → ∞ ˜q = 0 but the problem comes when... Remember to cite OCW as the source if eigenvalues of real symmetric matrix are all,. All their eigenvalues are real because it ’ s not always easy to tell if a real symmetric is! 
Statement, so Let ’ s a symmetric positive definite matrix definite real symmetric matrix positive. The quadratic form of a positive-definite matrix is the most efficient method to check whether a real matrix! I have to generate a symmetric and positive-definite matrix symmetric positive definite matrix example positive definite matrix will have positive... Rectangular matrix with positive diagonal elements signup, and no start or end dates pages linked along the.... Kxk2 > 0. where Q is some symmetric positive definite matrix is symmetric x y! Publication of material from thousands of MIT courses, covering the entire MIT curriculum ) Hilbert ≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡≡... So first off, why every positive definite matrix Bobrov on 2 Oct 2019 Accepted Answer a. ) for mod-eling constitutive relations in dynamical equations tell if a matrix which is known advance. Definite symmetric matrices have positive eigenvalues much easier to prove the original matrix is generalization! While i do not explore this further in this section we write for the real symmetric matrix materials subject... Some symmetric positive symmetric positive definite matrix example rectangular matrix with random values available for recovering these values from the preceding equation submatrices.
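The "Cholesky as a definiteness test" idea above is easy to sketch. The following is a minimal, stdlib-only Python illustration, not from the source, with function and variable names of my own choosing: the factorization is attempted pivot by pivot, and the matrix is positive definite exactly when every pivot stays positive. A small tolerance guards against declaring a numerically semi-definite matrix (pivot on the order of eps) to be definite, mirroring the caveat above.

```python
import math

def is_positive_definite(A, tol=1e-12):
    """Attempt a Cholesky factorization A = L L^T (Cholesky-Banachiewicz).
    Succeeds, i.e. returns True, iff the symmetric matrix A is positive
    definite: every pivot A[i][i] - sum(L[i][k]^2) must stay positive."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                pivot = A[i][i] - s
                if pivot <= tol:          # non-positive pivot: not PD
                    return False
                L[i][i] = math.sqrt(pivot)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return True

spd  = [[2.0, -1.0], [-1.0, 2.0]]   # eigenvalues 1 and 3: positive definite
notd = [[1.0,  2.0], [ 2.0, 1.0]]   # eigenvalues 3 and -1: indefinite
```

Calling `is_positive_definite(spd)` returns True and `is_positive_definite(notd)` returns False; the second matrix fails on its second pivot, 1 - 2^2 = -3.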
{}
# Why don't I get the graphics output I'm expecting? [closed]

Today I want to use a point to generate a circle; the main picture is below:

Given that I know the equation of the circle and the values of $L_1$, $L_2$, and that the coordinates of point $A$ are $(p_x,p_y)$, I use the equations:
$$p_x=L_1 \cos(\theta_1)+L_2\cos(\theta_1+\theta_2) \\ p_y=L_1 \sin(\theta_1)+L_2\sin(\theta_1+\theta_2)$$
to solve for $\theta_1,\theta_2$ as below:
$$\cos(\theta_2)=\frac{-L_1^2-L_2^2+p_x^2+p_y^2}{2 L_1 L_2} \\ \sin(\theta_2)=\pm \sqrt{1-\cos^2(\theta_2)}$$
$$\sin(\theta_1) =-\frac{\sin(\theta_2) L_2 p_x-L_1 p_y - \cos(\theta_2) L_2 p_y}{p_x^2+p_y^2}\\ \cos(\theta_1) =-\frac{-L_1 p_x - \cos(\theta_2) L_2 p_x - \sin(\theta_2) L_2 p_y}{p_x^2+p_y^2}$$

My trial is below:

Manipulate[
 Module[{L1, L2, θ1, θ2, A, B, C1, D1, px, py},
  L1 = 35; L2 = 20;
  A = (-L1^2 - L2^2 + px^2 + py^2)/(2 L1 L2);
  B = Sqrt[1 - A^2];
  C1 = -((-L1 px - A L2 px - B L2 py)/(px^2 + py^2));
  D1 = -((B L2 px - L1 py - A L2 py)/(px^2 + py^2));
  px = 50 + 15 Cos[20 ° t]; py = 50 + 15 Sin[20 ° t];
  θ2 = ArcTan[B, A] // Simplify;
  θ1 = ArcTan[D1, C1] // Simplify;
  Graphics[
   {Line[{0, 0}, {L1 Cos[θ1], L1 Sin[θ1]}, {L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}],
    Red, Point[{L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}]},
   Axes -> True]
  ], {t, 0, 18}
 ]

### Edit

Thanks for Mr.Wizard's help.
Manipulate[
 Module[{L1, L2, θ1, θ2, c2, s2, c1, s1, px, py},
  L1 = 35; L2 = 20;
  c2 = (-L1^2 - L2^2 + px^2 + py^2)/(2 L1 L2);
  s2 = Sqrt[1 - (c2^2)];
  c1 = -((-L1 px - c2 L2 px - s2 L2 py)/(px^2 + py^2));
  s1 = -((s2 L2 px - L1 py - c2 L2 py)/(px^2 + py^2));
  px = 50 + 15 Cos[20 ° t]; py = 50 + 15 Sin[20 ° t];
  θ2 = ArcTan[c2, s2] // Simplify;
  θ1 = ArcTan[c1, s1] // Simplify;
  Graphics[
   {Circle[{50, 50}, 15],
    Line[{{0, 0}, {L1 Cos[θ1], L1 Sin[θ1]}, {L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}}],
    Blue, PointSize[Large], Point[{L1 Cos[θ1], L1 Sin[θ1]}],
    Red, PointSize[Large], Point[{L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}]},
   Axes -> True, PlotRange -> {{-20, 80}, {-20, 80}}, AspectRatio -> Automatic] /. z_Complex :> Re[z]],
 {t, 0, 18}]

It generates:

By manipulating the value of t, I found that the lengths of shaft L1 and shaft L2 vary. This is contradictory because the lengths of L1 and L2 are constants. I don't know why.

-

## closed as off-topic by Yves Klett, rasher, belisarius, m_goldberg, bobthechemist May 3 at 11:33

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "This question arises due to a simple mistake such as a trivial syntax error, incorrect capitalization, spelling mistake, or other typographical error and is unlikely to help any future visitors, or else it is easily found in the documentation." – Yves Klett, rasher, belisarius, bobthechemist

If this question can be reworded to fit the rules in the help center, please edit the question.

You might want to define the variables L1 and L2, and use the correct syntax for the cosine and sine functions, which are Cos[ ] and Sin[ ]. Also, you cannot really assign Cos[theta]=.
–  bill s May 3 at 4:29

@bills, I have edited my question and given the values of L1 and L2. –  tangshutao May 3 at 4:47

One problem with the above formulation is that it produces values of c1, s1, s2, and c2 of magnitude greater than 1, but these quantities are supposed to represent the sines and cosines of angles. –  m_goldberg May 3 at 11:21

This question appears to be off-topic because the bad results stem at least in part from errors in the underlying math that must be corrected before consideration is given to the coding. –  m_goldberg May 3 at 11:26

Two things I note:

1. Line is used incorrectly; use e.g. Line[{{a,b}, {c,d}}] not Line[{a,b}, {c,d}]

2. The coordinates in your Graphics expression are complex values. You must convert them.

I don't know if this works as you intend but it no longer produces an error:

Manipulate[
 Module[{L1, L2, θ1, θ2, A, B, C1, D1, px, py},
  L1 = 35; L2 = 20;
  A = (-L1^2 - L2^2 + px^2 + py^2)/(2 L1 L2);
  B = Sqrt[1 - A^2];
  C1 = -((-L1 px - A L2 px - B L2 py)/(px^2 + py^2));
  D1 = -((B L2 px - L1 py - A L2 py)/(px^2 + py^2));
  px = 50 + 15 Cos[20 ° t]; py = 50 + 15 Sin[20 ° t];
  θ2 = ArcTan[B, A] // Simplify;
  θ1 = ArcTan[D1, C1] // Simplify;
  Graphics[{Line[{{0, 0}, {L1 Cos[θ1], L1 Sin[θ1]}, {L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}}],
    Red, Point[{L1 Cos[θ1] + L2 Cos[θ1 + θ2], L1 Sin[θ1] + L2 Sin[θ1 + θ2]}]},
   Axes -> True] /. z_Complex :> Re[z]
  ], {t, 0, 18}
 ]

-

Thank you, I have found the mistake: the circle trajectory must lie within the reachable area of point P. –  tangshutao May 3 at 13:23
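The closed-form angle formulas in the question can be checked outside Mathematica. Below is a hedged Python sketch (function names are mine) that implements them and verifies that forward kinematics recovers the target point. It also makes the reachability condition explicit: whenever the target lies farther than L1 + L2 from the origin, cos θ2 leaves [-1, 1], which is exactly where the complex coordinates, and the apparently varying link lengths, came from.

```python
import math

L1, L2 = 35.0, 20.0  # link lengths from the question

def ik_two_link(px, py, l1, l2):
    """Inverse kinematics of a planar two-link arm, using the thread's
    closed-form expressions (branch with s2 = +sqrt(1 - c2^2))."""
    c2 = (px*px + py*py - l1*l1 - l2*l2) / (2*l1*l2)
    if abs(c2) > 1:
        # target farther than l1+l2 (or closer than |l1-l2|): unreachable
        raise ValueError("target out of reach")
    s2 = math.sqrt(1 - c2*c2)
    d = px*px + py*py
    c1 = (l1*px + c2*l2*px + s2*l2*py) / d
    s1 = (l1*py + c2*l2*py - s2*l2*px) / d
    return math.atan2(s1, c1), math.atan2(s2, c2)

def fk_two_link(t1, t2, l1, l2):
    """Forward kinematics: end-effector position for joint angles t1, t2."""
    return (l1*math.cos(t1) + l2*math.cos(t1 + t2),
            l1*math.sin(t1) + l2*math.sin(t1 + t2))

# Round trip: recover angles for a reachable target, then check them.
t1, t2 = ik_two_link(30.0, 20.0, L1, L2)
px, py = fk_two_link(t1, t2, L1, L2)   # should be (30.0, 20.0) again
```

Every point of the question's circle, centered at (50, 50) with radius 15, lies at distance at least about 55.7 from the origin, beyond L1 + L2 = 55, so `ik_two_link` would raise for all of them; this matches the asker's conclusion that the trajectory must stay within the arm's reachable area.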
{}
mf.siegel.family.sp8z

The dimensions for degree 4 Siegel modular cusp forms $S_k(\mathrm{Sp}(8,Z))$ for the full modular group for weights $k\le 16$ were proven by C. Poor and D. S. Yuen [MR:2302669]. Poor and Yuen also computed Fourier coefficients and some eigenvalues. The cusp forms in weights up through 14 are either Duke-Imamoglu-Ikeda lifts or Miyawaki lifts. In weight 16, in addition to these two types of lifts, there are other eigenforms that have been shown by Ibukiyama to instantiate a conjectural lift from vector-valued Siegel modular forms.

Knowl status: beta. Last edited by Andrew Sutherland on 2016-06-30 21:15:03.
{}
A die is thrown once. The probability of getting an odd number greater than 3 is

Question: A die is thrown once. The probability of getting an odd number greater than 3 is

(a) $\frac{1}{3}$ (b) $\frac{1}{6}$ (c) $\frac{1}{2}$ (d) 0

Solution: Total number of outcomes = 6. Among the outcomes, the only odd number greater than 3 is 5, so the number of favourable outcomes = 1.

$\therefore \mathrm{P}$ (getting an odd number greater than 3) $=\frac{\text { Number of favourable outcomes }}{\text { Number of all possible outcomes }}=\frac{1}{6}$

Thus, the probability of getting an odd number greater than 3 is $\frac{1}{6}$. Hence, the correct answer is option (b).
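The count above can also be confirmed by brute-force enumeration. A tiny Python sketch, not part of the original solution, with names of my own choosing:

```python
from fractions import Fraction

outcomes = range(1, 7)  # faces of a fair die
# odd AND greater than 3 leaves only the face 5
favourable = [n for n in outcomes if n % 2 == 1 and n > 3]
p = Fraction(len(favourable), len(outcomes))  # exact probability 1/6
```

Enumerating keeps the two conditions ("odd" and "greater than 3") from being conflated: of the odd faces 1, 3, 5, only 5 exceeds 3, giving probability 1/6.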
{}
Search by Topic

Resources tagged with Mathematical reasoning & proof similar to Halving the Triangle. There are 183 results. Golden Eggs Stage: 5 Challenge Level: Find a connection between the shape of a special ellipse and an infinite string of nested square roots. Continued Fractions II Stage: 5 In this article we show that every whole number can be written as a continued fraction of the form k/(1+k/(1+k/...)). Areas and Ratios Stage: 4 Challenge Level: What is the area of the quadrilateral APOQ? Working on the building blocks will give you some insights that may help you to work it out. Gift of Gems Stage: 4 Challenge Level: Four jewellers possessing respectively eight rubies, ten saphires, a hundred pearls and five diamonds, presented, each from his own stock, one apiece to the rest in token of regard; and they. . . . Target Six Stage: 5 Challenge Level: Show that x = 1 is a solution of the equation x^(3/2) - 8x^(-3/2) = 7 and find all other solutions. Pent Stage: 4 and 5 Challenge Level: The diagram shows a regular pentagon with sides of unit length. Find all the angles in the diagram. Prove that the quadrilateral shown in red is a rhombus. The Golden Ratio, Fibonacci Numbers and Continued Fractions. Stage: 4 An iterative method for finding the value of the Golden Ratio with explanations of how this involves the ratios of Fibonacci numbers and continued fractions. Plus or Minus Stage: 5 Challenge Level: Make and prove a conjecture about the value of the product of the Fibonacci numbers $F_{n+1}F_{n-1}$. Square Mean Stage: 4 Challenge Level: Is the mean of the squares of two numbers greater than, or less than, the square of their means? Thousand Words Stage: 5 Challenge Level: Here the diagram says it all. Can you find the diagram?
Fractional Calculus III Stage: 5 Fractional calculus is a generalisation of ordinary calculus where you can differentiate n times when n is not a whole number. Truth Tables and Electronic Circuits Stage: 2, 3 and 4 Investigate circuits and record your findings in this simple introduction to truth tables and logic. Symmetric Tangles Stage: 4 The tangles created by the twists and turns of the Conway rope trick are surprisingly symmetrical. Here's why! There's a Limit Stage: 4 and 5 Challenge Level: Explore the continued fraction: 2+3/(2+3/(2+3/2+...)) What do you notice when successive terms are taken? What happens to the terms if the fraction goes on indefinitely? Big, Bigger, Biggest Stage: 5 Challenge Level: Which is the biggest and which the smallest of $2000^{2002}, 2001^{2001} \text{and } 2002^{2000}$? Logic, Truth Tables and Switching Circuits Challenge Stage: 3, 4 and 5 Learn about the link between logical arguments and electronic circuits. Investigate the logical connectives by making and testing your own circuits and fill in the blanks in truth tables to record. . . . Proof Sorter - Quadratic Equation Stage: 4 and 5 Challenge Level: This is an interactivity in which you have to sort the steps in the completion of the square into the correct order to prove the formula for the solutions of quadratic equations. Stonehenge Stage: 5 Challenge Level: Explain why, when moving heavy objects on rollers, the object moves twice as fast as the rollers. Try a similar experiment yourself. Pythagorean Golden Means Stage: 5 Challenge Level: Show that the arithmetic mean, geometric mean and harmonic mean of a and b can be the lengths of the sides of a right-angles triangle if and only if a = bx^3, where x is the Golden Ratio. Power Quady Stage: 4 Challenge Level: Find all real solutions of the equation (x^2-7x+11)^(x^2-11x+30) = 1. 
Problem Solving, Using and Applying and Functional Mathematics Stage: 1, 2, 3, 4 and 5 Challenge Level: Problem solving is at the heart of the NRICH site. All the problems give learners opportunities to learn, develop or use mathematical concepts and skills. Read here for more information. Napoleon's Hat Stage: 5 Challenge Level: Three equilateral triangles ABC, AYX and XZB are drawn with the point X a moveable point on AB. The points P, Q and R are the centres of the three triangles. What can you say about triangle PQR? Number Rules - OK Stage: 4 Challenge Level: Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number... Square Pair Circles Stage: 5 Challenge Level: Investigate the number of points with integer coordinates on circles with centres at the origin for which the square of the radius is a power of 5. Similarly So Stage: 4 Challenge Level: ABCD is a square. P is the midpoint of AB and is joined to C. A line from D perpendicular to PC meets the line at the point Q. Prove AQ = AD. Mediant Stage: 4 Challenge Level: If you take two tests and get a marks out of a maximum b in the first and c marks out of d in the second, does the mediant (a+c)/(b+d) lie between the results for the two tests taken separately? Mouhefanggai Stage: 4 Imagine two identical cylindrical pipes meeting at right angles and think about the shape of the space which belongs to both pipes. Early Chinese mathematicians called this shape the mouhefanggai. Proofs with Pictures Stage: 5 Some diagrammatic 'proofs' of algebraic identities and inequalities. Euler's Formula and Topology Stage: 5 Here is a proof of Euler's formula in the plane and on a sphere together with projects to explore cases of the formula for a polygon with holes, for the torus and other solids with holes and the. . . . The Triangle Game Stage: 3 and 4 Challenge Level: Can you discover whether this is a fair game?
A Computer Program to Find Magic Squares Stage: 5 This follows up the 'Magic Squares for Special Occasions' article, which tells you how to create a 4-by-4 magic square with a special date on the top line using no negative numbers and no repeats. Impossible Sandwiches Stage: 3, 4 and 5 In this 7-sandwich: 7 1 3 1 6 4 3 5 7 2 4 6 2 5 there are 7 numbers between the 7s, 6 between the 6s etc. The article shows which values of n can make n-sandwiches and which cannot. Classifying Solids Using Angle Deficiency Stage: 3 and 4 Challenge Level: Toni Beardon has chosen this article introducing a rich area for practical exploration and discovery in 3D geometry The Frieze Tree Stage: 3 and 4 Patterns that repeat in a line are strangely interesting. How many types are there and how do you tell one type from another? Whole Number Dynamics IV Stage: 4 and 5 Start with any whole number N, write N as a multiple of 10 plus a remainder R and produce a new whole number N'. Repeat. What happens? Whole Number Dynamics V Stage: 4 and 5 The final of five articles which contains the proof of why the sequence introduced in article IV either reaches the fixed point 0 or the sequence enters a repeating cycle of four values. Telescoping Functions Stage: 5 Take a complicated fraction with the product of five quartics top and bottom and reduce this to a whole number. This is a numerical example involving some clever algebra. Where Do We Get Our Feet Wet? Stage: 5 Professor Korner has generously supported school mathematics for more than 30 years and has been a good friend to NRICH since it started. A Knight's Journey Stage: 4 and 5 This article looks at knight's moves on a chess board and introduces you to the idea of vectors and vector addition. Whole Number Dynamics III Stage: 4 and 5 In this third of five articles we prove that whatever whole number we start with for the Happy Number sequence we will always end up with some set of numbers being repeated over and over again.
Can it Be Stage: 5 Challenge Level: When, if ever, do you get the right answer if you add two fractions by adding the numerators and adding the denominators? Whole Number Dynamics I Stage: 4 and 5 The first of five articles concentrating on whole number dynamics, ideas of general dynamical systems are introduced and seen in concrete cases. Whole Number Dynamics II Stage: 4 and 5 This article extends the discussions in "Whole number dynamics I". Continuing the proof that, for all starting points, the Happy Number sequence goes into a loop or homes in on a fixed point. Try to Win Stage: 5 Solve this famous unsolved problem and win a prize. Take a positive integer N. If even, divide by 2; if odd, multiply by 3 and add 1. Iterate. Prove that the sequence always goes to 4,2,1,4,2,1... Yih or Luk Tsut K'i or Three Men's Morris Stage: 3, 4 and 5 Challenge Level: Some puzzles requiring no knowledge of knot theory, just a careful inspection of the patterns. A glimpse of the classification of knots and a little about prime knots, crossing numbers and. . . . Transitivity Stage: 5 Suppose A always beats B and B always beats C, then would you expect A to beat C? Not always! What seems obvious is not always true. Results always need to be proved in mathematics. More Sums of Squares Stage: 5 Tom writes about expressing numbers as the sums of three squares. Modulus Arithmetic and a Solution to Dirisibly Yours Stage: 5 Peter Zimmerman from Mill Hill County High School in Barnet, London gives a neat proof that: 5^(2n+1) + 11^(2n+1) + 17^(2n+1) is divisible by 33 for every non-negative integer n. Composite Notions Stage: 4 Challenge Level: A composite number is one that is neither prime nor 1. Show that 10201 is composite in any base. Sums of Squares and Sums of Cubes Stage: 5 An account of methods for finding whether or not a number can be written as the sum of two or more squares or as the sum of two or more cubes.
{}
# Solve differential equation dy/dx = x/(ye^(x+y^2))

Solve differential equation $dy/dx=x/\left(y{e}^{x+{y}^{2}}\right)$

Brighton

$dy/dx=x/\left(y{e}^{x+{y}^{2}}\right)$

$⇒dy/dx=x/\left(y{e}^{x}\cdot {e}^{{y}^{2}}\right)$

$⇒y{e}^{{y}^{2}}\,dy=x{e}^{-x}\,dx$

$\int y{e}^{{y}^{2}}\,dy=\int x{e}^{-x}\,dx+C$, where $C$ is the integration constant.

Substitute ${y}^{2}=t$, so $2y\,dy=dt$ and $y\,dy=dt/2$:

$⇒\frac{1}{2}\int {e}^{t}\,dt=\int x{e}^{-x}\,dx+C$

Integrating the right side by parts:

$⇒\frac{1}{2}{e}^{t}=-x{e}^{-x}+\int {e}^{-x}\,dx+C$

$⇒\frac{1}{2}{e}^{{y}^{2}}=-x{e}^{-x}-{e}^{-x}+C\phantom{xx}\left(\because t={y}^{2}\right)$

$⇒\frac{1}{2}{e}^{{y}^{2}}+{e}^{-x}\left(x+1\right)=C$

Jeffrey Jordon

Answer is given below (on video)
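The implicit solution can be sanity-checked numerically: along any trajectory of the ODE, the quantity $\frac{1}{2}e^{y^2}+e^{-x}(x+1)$ must stay constant. A hedged, stdlib-only Python sketch (names are mine) integrates the ODE with classical RK4 and measures the drift of that invariant:

```python
import math

def f(x, y):
    # the ODE: dy/dx = x / (y * e^(x + y^2))
    return x / (y * math.exp(x + y*y))

def invariant(x, y):
    # F(x, y) = e^(y^2)/2 + e^(-x)(x + 1); constant along solutions
    return math.exp(y*y)/2 + math.exp(-x)*(x + 1)

# Integrate from y(0) = 1 over x in [0, 1] with classical RK4.
x, y, h = 0.0, 1.0, 1e-3
c0 = invariant(x, y)
for _ in range(1000):
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h
drift = abs(invariant(x, y) - c0)   # should be near round-off level
```

A drift near machine precision is consistent with the derivation: differentiating $F(x,y)=\frac{1}{2}e^{y^2}+e^{-x}(x+1)$ gives $yy'e^{y^2}-xe^{-x}=0$, which is exactly the separated form of the equation.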
{}
# 2. Notes

## 2.1. Plotting array elements

In MATLAB, line plots are created using the plot function.

```matlab
% Create an array to plot
A = [1.0,4.0,16.0,32.0];
clf; % Clear the plot window
plot(A)
```

The plot(A) command causes the values of A to be plotted on the y-axis. The x-axis values are assumed to be [1,2,3,4]. The above commands are equivalent to

```matlab
A = [1.0,4.0,16.0,32.0];
x = [1,2,3,4];
clf; % Clear the plot.
plot(x,A);
```

To change the x-axis values, modify the first array that is passed to the plot function.

```matlab
A = [1.0,4.0,16.0,32.0];
x = [10,20,30,40];
clf; % Clear the plot window
plot(x,A);
```

## 2.2. Annotation

To add a grid, write grid on after the plot command. To add axis labels and a title, use xlabel, ylabel, and title.

```matlab
A = [1.0,4.0,16.0,32.0];
x = [1,2,3,4];
clf;
plot(x,A);
grid on;
xlabel('Time [seconds]');
ylabel('Height [meters]');
title('Experiment 1 results');
```

## 2.3. Line Color

A style argument may be specified when calling the plot function. This set of commands will create a red line.

```matlab
clf;
A = [1.0,4.0,16.0,32.0];
x = [1,2,3,4];
% Can be one of r,g,b,c,m,y,k,w
style = 'r';
plot(x,A,style);
```

Other colors may be used by specifying a set of r,g,b values. This set of commands will create a gray line.

```matlab
clf;
A = [1.0,4.0,16.0,32.0];
x = [1,2,3,4];
plot(x,A,'Color',[0.5,0.5,0.5]);
```

## 2.4. Marker Style

Marker styles are one of .,o,x,+,*,s,d,v,^,<,>,p,h

```matlab
A = [1.0,4.0,16.0,64.0];
x = [1,2,3,4];
% Can be one of
% .,o,x,+,*,s,d,v,^,<,>,p,h
style = '*';
clf;
plot(x,A,style);
```

## 2.5. Axis Numbering

In this example, note that by default MATLAB chose to label the x-axis values in 0.5 increments. This is not a good default here, since all of the x-values are integers. The following example shows how to modify the values that are labeled. To modify the y-axis labels, use YTick instead of XTick.

```matlab
A = [1.0,4.0,16.0,32.0];
x = [1,2,3,4];
clf;
plot(x,A);
% Label x-axis ticks
% at 1, 2, 3, and 4.
set(gca,'XTick',[1:4]);
grid on;
xlabel('Time [seconds]');
ylabel('Height [meters]');
title('Experiment 1 results');
```

# 3. Problems

## 3.1. Scalar Time Series Plots I

Create the plot shown below. The plot must have a grid, labels, x symbols of size 20, and a green line of width 3. Turn in your program and a print-out of the plot (you can print both the program and the plot from within MATLAB using File-Print.)

## 3.2. Scalar Time Series Plots II

The following program computes population for two different scenarios:

• An initial population of 100 and a growth rate of 5%/yr.
• An initial population of 1000 and a decay rate of 10%/yr.

```matlab
Pa(1) = 100;
for i = 2:40
    Pa(i) = Pa(i-1) + 0.05*Pa(i-1);
end
Pb(1) = 1000;
for i = 2:40
    Pb(i) = Pb(i-1) - 0.10*Pb(i-1);
end
```

Plot both population scenarios and draw a vertical line at the year in which the populations are nearest each other. The plot should contain a legend and axis labels on the plot.
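The recurrences in Problem 3.2 are language-independent, so the "year where the populations are nearest" can be prototyped before writing any plotting code. A hedged sketch in Python rather than MATLAB (function and variable names are mine):

```python
def simulate(p0, rate, years=40):
    """Population series p[0..years-1] with p[i] = p[i-1] * (1 + rate)."""
    p = [float(p0)]
    for _ in range(years - 1):
        p.append(p[-1] * (1 + rate))
    return p

pa = simulate(100, 0.05)     # 5%/yr growth from 100
pb = simulate(1000, -0.10)   # 10%/yr decay from 1000

# Year (1-based, matching MATLAB indexing) where the series are nearest;
# this is where the vertical line in the plot belongs.
closest_year = min(range(len(pa)), key=lambda i: abs(pa[i] - pb[i])) + 1
```

With these parameters the populations come closest in year 16 (about 207.9 versus 205.9), so the MATLAB version of the plot would draw its vertical line at x = 16.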
Archived

This topic is now archived and is closed to further replies.

sqrt() slow? and Shortest distance, 2 pts in C

Recommended Posts

Hello, I was just wondering how one very talented physics guru might go about creating a macro/function to find the shortest distance between 2 points (I'd like it to return float):

    r^2 = (x1-x2)^2 + (y1-y2)^2

I've heard that standard C's exp() and sqrt() functions are dead slow and should be avoided when programming games. I have to perform about a combined 35 sqrt's and exp's every frame. Will this be a problem?

Ciphersoftware website
Support the open source community

Share on other sites

Well, I'm certainly no physics guru, but here goes: since you know that the exponent is always 2, you can easily avoid exp() by observing that x^2 = x*x, so you get

    r2 = (x1-x2)*(x1-x2) + (y1-y2)*(y1-y2)

sqrt is more difficult to avoid. If you don't actually need the distance, but only want to compare two distances, then compare the squares of the distances. That way you don't need the sqrt. Of course, if you really need the distance, then I guess you're screwed. There might be more efficient ways than sqrt() to calculate the square root though, especially if you can tolerate less accuracy.

Edited by - Dactylos on October 19, 2001 2:48:31 AM

Share on other sites

35 square roots is not much at all and is perfectly acceptable on today's computers. (Do you have any idea how many square roots 3D games use?) However, sqrt is indeed not such a fast way to compute square roots. For such a small number of square roots it might not matter, but for higher numbers you'll need an alternative. davepermen posted a nice templated fast square-root function for 32-bit floats on the Flipcode forum: http://www.flipcode.com/cgi-bin/msg.cgi?showThread=00003300&forum=3dtheory&id=-1 The beauty of it is that you can specify how accurate you want it to be. If it's too slow, lower the accuracy; fast but not accurate enough, raise the accuracy. I'd ask him whether you may use his code if I were you.

Share on other sites

Hi, the square root found at Flipcode is more or less standard Newton-Raphson iteration for square roots, I think, where in that case the number of iterations is not fixed. A similar implementation can be found in the Quake III Tools sources, function rsqrt. I don't think that precision is critical in games (in geometry-preprocessing steps, however, it can be critical). I've compared several algorithms on my machine once. The fastest and most precise was the AMD Math Lib's sqrt, but it is processor- and platform-specific, and therefore not well suited. Another one is simply using the FPU:

    __asm {
        fld  [floatvalue]
        fsqrt
        fstp [somefloat]
    }

That's very simple, comparatively precise, and a bit faster than Quake's reciprocal square root. Standard C's sqrt was a bit slower, but in the end the difference wasn't big. I've tested that on an AMD Athlon 600MHz. You might want to run your own tests. However, keep in mind that 1000 standard sqrt's in DEBUG mode on such a machine take less than 0.000080 seconds...

Share on other sites

If you only want an approximate distance between two points, the following works well provided the points are fairly close together (< 10 units):

    int hypotq(signed dx, signed dy)  // Quick approximation of a hypotenuse
    {
        register int min;
        if (dx < 0) dx = -dx;
        if (dy < 0) dy = -dy;
        min = (dx > dy) ? dy : dx;
        return (dx + dy - (min >> 1));
    }

Share on other sites

Is that fsqrt portable? I mean, do all processor types support it?

Share on other sites

If all you have to do is sort objects by distance, you can take out the square root and it will still work. The curve won't be the same, but objects with greater distances will still have greater "pseudo distances" (that is, without the square root) because, even though the difference in distances between two objects won't be correct, the fact that a given distance is greater or less than another will be.

Share on other sites

A word of warning, cipher. Don't worry about the speed of the code while implementing something for the first time. First get it implemented. If it's slow, then do some research on better algorithms. If you are using the fastest known algorithm for the problem, only then start optimizing the code. And even then, first profile, and then optimize the code that needs to be optimized. Those 35 sqrts per frame are like nothing. In fact, you should worry about the speed of the standard C functions only when you are executing them like a million times per frame.

Share on other sites

Beelzebub: Won't the conditional testing in your hypotq() stall the processor?

Share on other sites

With good code generation it shouldn't, since there's an assembler/compiler optimization trick for doing the abs() without any jumps, something along the lines of...

    temp = value >> 31;            // temp = (value < 0) ? -1 : 0
    value = (value ^ temp) - temp;

Memory's a bit foggy, but I think that's right - if you imagine temp and value are both CPU registers. Anyway, back in the old days (when div's took 50+ cycles), a trick like this was faster than risking a processor stall.
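The two tricks discussed in this thread - comparing squared distances to skip the square root, and the integer hypotenuse approximation - can be sketched in Python for illustration (function names are mine):

```python
import math

def dist_sq(p, q):
    """Squared distance -- sufficient when you only need to compare distances."""
    dx, dy = p[0] - q[0], p[1] - q[1]
    return dx * dx + dy * dy   # x**2 written as x*x, as suggested in the thread

def hypot_approx(dx, dy):
    """Integer approximation from the thread: |dx| + |dy| - min(|dx|,|dy|)/2."""
    dx, dy = abs(dx), abs(dy)
    return dx + dy - (min(dx, dy) >> 1)

a, b = (0, 0), (3, 4)
exact = math.hypot(3, 4)                      # 5.0
print(dist_sq(a, b), exact, hypot_approx(3, 4))  # 25, 5.0, 6
```

As the 3-4-5 example shows, the approximation overestimates (6 vs. 5), which is why it is only suitable when a rough relative distance is enough.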
# Static light scattering

Static light scattering is a technique in physical chemistry that measures the intensity of the scattered light to obtain the average molecular weight Mw of a macromolecule like a polymer or a protein in solution. Measurement of the scattering intensity at many angles allows calculation of the root mean square radius, also called the radius of gyration Rg. By measuring the scattering intensity for many samples of various concentrations, the second virial coefficient, A2, can be calculated.[1][2][3][4][5]

Static light scattering is also commonly utilized to determine the size of particle suspensions in the sub-μm and supra-μm ranges, via the Lorenz-Mie (see Mie scattering) and Fraunhofer diffraction formalisms, respectively.

For static light scattering experiments, high-intensity monochromatic light, usually from a laser, is launched into a solution containing the macromolecules. One or many detectors are used to measure the scattering intensity at one or many angles. The angular dependence is required to obtain accurate measurements of both molar mass and size for all macromolecules of radius above 1-2% of the incident wavelength. Hence simultaneous measurement at several angles relative to the direction of incident light, known as multi-angle light scattering (MALS) or multi-angle laser light scattering (MALLS), is generally regarded as the standard implementation of static light scattering. Additional details on the history and theory of MALS may be found in multi-angle light scattering.

To measure the average molecular weight directly without calibration from the light scattering intensity, the laser intensity, the quantum efficiency of the detector, and the full scattering volume and solid angle of the detector need to be known.
Since this is impractical, all commercial instruments are calibrated using a strong, known scatterer like toluene since the Rayleigh ratio of toluene and a few other solvents were measured using an absolute light scattering instrument. ## Theory For a light scattering instrument composed of many detectors placed at various angles, all the detectors need to respond the same way. Usually detectors will have slightly different quantum efficiency, different gains and are looking at different geometrical scattering volumes. In this case a normalization of the detectors is absolutely needed. To normalize the detectors, a measurement of a pure solvent is made first. Then an isotropic scatterer is added to the solvent. Since isotropic scatterers scatter the same intensity at any angle, the detector efficiency and gain can be normalized with this procedure. It is convenient to normalize all the detectors to the 90° angle detector. ${\displaystyle \ N(\theta )={\frac {I_{R}(\theta )-I_{S}(\theta )}{I_{R}(90)-I_{S}(90)}}}$ where IR(90) is the scattering intensity measured for the Rayleigh scatterer by the 90° angle detector. 
The most common equation to measure the weight-average molecular weight, Mw, is the Zimm equation[5] (the right hand side of the Zimm equation is provided incorrectly in some texts, as noted by Hiemenz and Lodge):[6] ${\displaystyle {\frac {Kc}{\Delta R(\theta ,c)}}={\frac {1}{M_{w}}}\left(1+{\frac {q^{2}R_{g}^{2}}{3}}+O(q^{4})\right)+2A_{2}c+O(c^{2})}$ where ${\displaystyle \ K=4\pi ^{2}n_{0}^{2}(dn/dc)^{2}/N_{A}\lambda ^{4}}$ and ${\displaystyle \ \Delta R(\theta ,c)=R_{A}(\theta )-R_{0}(\theta )}$ with ${\displaystyle \ R(\theta )={\frac {I_{A}(\theta )n_{0}^{2}}{I_{T}(\theta )n_{T}^{2}}}{\frac {R_{T}}{N(\theta )}}}$ and the scattering vector for vertically polarized light is ${\displaystyle \ q=4\pi n_{0}\sin(\theta /2)/\lambda }$ with n0 the refractive index of the solvent, λ the wavelength of the light source, NA Avogadro's number (6.022x1023), c the solution concentration, and dn/dc the change in refractive index of the solution with change in concentration. The intensity of the analyte measured at an angle is IA(θ). In these equation the subscript A is for analyte (the solution) and T is for the toluene with the Rayleigh ratio of toluene, RT being 1.35x10−5 cm−1 for a HeNe laser. As described above, the radius of gyration, Rg, and the second virial coefficient, A2, are also calculated from this equation. The refractive index increment dn/dc characterizes the change of the refractive index n with the concentration c, and can be measured with a differential refractometer. A Zimm plot is built from a double extrapolation to zero angle and zero concentration from many angle and many concentration measurements. In the most simple form, the Zimm equation is reduced to: ${\displaystyle \ Kc/\Delta R(\theta \rightarrow 0,c\rightarrow 0)=1/M_{w}}$ for measurements made at low angle and infinite dilution since P(0) = 1. 
A number of analyses have been developed to analyze the scattering of particles in solution and derive the above-named physical characteristics of particles. In a simple static light scattering experiment, the average intensity of the sample, corrected for the scattering of the solvent, yields the Rayleigh ratio R as a function of the angle or the wave vector q.

## Data analyses

### Guinier plot

The scattered intensity can be plotted as a function of the angle to give information on the Rg, which can simply be calculated using the Guinier approximation as follows:

${\displaystyle \ln(\Delta R(\theta ))=\ln(\Delta R(0))-(R_{g}^{2}/3)q^{2}}$

where ΔR(θ) is proportional to the form factor P(θ), with q = 4πn0sin(θ/2)/λ. Hence a plot of the corrected Rayleigh ratio, ln ΔR(θ), vs sin2(θ/2) or q2 will yield a slope of −Rg2/3. However, this approximation is only true for qRg < 1. Note that for a Guinier plot, the value of dn/dc and the concentration is not needed.

### Kratky plot

The Kratky plot is typically used to analyze the conformation of proteins, but can be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting sin2(θ/2)ΔR(θ) vs sin(θ/2) or q2ΔR(θ) vs q.

### Zimm plot

For polymers and polymer complexes which are of a monodisperse nature (${\displaystyle \scriptstyle \mu _{2}/{\bar {\Gamma }}^{2}<0.3}$ ) as determined by static light scattering, a Zimm plot is a conventional means of deriving the parameters such as Rg, molecular mass Mw and the second virial coefficient A2. One must note that if the material constant K is not implemented, a Zimm plot will only yield Rg. Hence implementing K will yield the following equation:

${\displaystyle {\frac {Kc}{\Delta R(\theta ,c)}}={\frac {1}{M_{w}}}\left(1+{\frac {q^{2}R_{g}^{2}}{3}}+O(q^{4})\right)+2A_{2}c+O(c^{2})}$

Experiments are performed at several angles, which satisfy the condition ${\displaystyle qR_{g}<1}$  and at least 4 concentrations.
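As a purely illustrative numerical sketch of the Guinier analysis (synthetic, noise-free data with an assumed Rg of 5 in units where q is its reciprocal, and ΔR(0) = 1; not an instrument pipeline), the following recovers Rg from the least-squares slope of ln ΔR versus q²:

```python
import math

# Synthetic Guinier-regime data: ln dR(q) = ln dR(0) - (Rg^2/3) q^2.
RG_TRUE = 5.0
q = [0.02 * k for k in range(1, 10)]                       # keeps q*Rg < 1
y = [math.log(1.0) - (RG_TRUE**2 / 3.0) * qi**2 for qi in q]
x = [qi**2 for qi in q]                                    # ln dR vs q^2

# Least-squares slope of y against x; Guinier predicts slope = -Rg^2/3.
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
rg = math.sqrt(-3.0 * slope)
print(round(rg, 6))   # recovers 5.0
```

On real data the same fit is restricted to the low-q points where qRg < 1, since the approximation breaks down beyond that.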
Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm however, does not yield the second virial coefficient, due to the absence of the variable concentration of the sample. More specifically, the value of the second virial coefficient is either assumed to equal zero or is inputted as a known value in order to perform the partial Zimm analysis. ### Multiple scattering Static light scattering assumes that each detected photon has only been scattered exactly once. Therefore, analysis according to the calculations stated above will only be correct if the sample has been diluted sufficiently to ensure that photons are not scattered multiple times by the sample before being detected. Accurate interpretation becomes exceedingly difficult for systems with non-negligible contributions from multiple scattering. In many commercial instruments where analysis of the scattering signal is automatically performed, the error may never be noticed by the user. Particularly for larger particles and those with high refractive index contrast, this limits the application of standard static light scattering to very low particle concentrations. On the other hand, for soluble macromolecules that exhibit a relatively low refractive index contrast versus the solvent, including most polymers and biomolecules in their respective solvents, multiple scattering is rarely a limiting factor even at concentrations that approach the limits of solubility. However, as shown by Schaetzel,[7] it is possible to suppress multiple scattering in static light scattering experiments via a cross-correlation approach. The general idea is to isolate singly scattered light and suppress undesired contributions from multiple scattering in a static light scattering experiment. Different implementations of cross-correlation light scattering have been developed and applied. 
Currently, the most widely used scheme is the so-called 3D-dynamic light scattering method,.[8][9] The same method can also be used to correct dynamic light scattering data for multiple scattering contributions.[10] Samples that change their properties after dilution may not be analyzed via static light scattering in terms of the simple model presented here as the Zimm equation. A more sophisticated analysis known as 'composition-gradient static (or multi-angle) light scattering' (CG-SLS or CG-MALS) is an important class of methods to investigate protein–protein interactions, colligative properties and other macromolecular interactions as it yields, in addition to size and molecular weight, information on the affinity and stoichiometry of molecular complexes formed by one or more associating macromolecular/biomolecular species. In particular, static light scattering from a dilution series may be analyzed to quantify self-association, reversible oligomerization and non-specific attraction or repulsion, while static light scattering from mixtures of species may be analyzed to quantify hetero-association.[11] ## References 1. ^ A. Einstein (1910). "Theorie der Opaleszenz von homogenen Flüssigkeiten und Flüssigkeitsgemischen in der Nähe des kritischen Zustandes". Annals of Physics. 33 (16): 1275. Bibcode:1910AnP...338.1275E. doi:10.1002/andp.19103381612. 2. ^ C.V. Raman (1927). Indian J. Phys. 2: 1. Missing or empty |title= (help) 3. ^ P.Debye (1944). "Light Scattering in Solutions". J. Appl. Phys. 15 (4): 338. Bibcode:1944JAP....15..338D. doi:10.1063/1.1707436. 4. ^ B.H. Zimm (1945). "Molecular Theory of the Scattering of Light in Fluids". J. Chem. Phys. 13 (4): 141. Bibcode:1945JChPh..13..141Z. doi:10.1063/1.1724013. 5. ^ a b B.H. Zimm (1948). "The Scattering of Light and the Radial Distribution Function of High Polymer Solutions". J. Chem. Phys. 16 (12): 1093. Bibcode:1948JChPh..16.1093Z. doi:10.1063/1.1746738. 6. ^ Hiemenz, Paul C.; Lodge, Timothy P. (2007). 
Polymer chemistry (2nd ed.). Boca Raton, Fla. [u.a.]: CRC Press. pp. 307–308. ISBN 978-1-57444-779-8. 7. ^ Schaetzel, K. (1991). "Suppression of multiple-scattering by photon cross-correlation techniques". J. Mod. Opt. 38: SA393–SA398. Bibcode:1990JPCM....2..393S. doi:10.1088/0953-8984/2/S/062. 8. ^ Urban, C.; Schurtenberger, P. (1998). "Characterization of turbid colloidal suspensions using light scattering techniques combined with cross-correlation methods". J. Colloid Interface Sci. 207 (1): 150–158. Bibcode:1998JCIS..207..150U. doi:10.1006/jcis.1998.5769. PMID 9778402. 9. ^ Block, I.; Scheffold, F. (2010). "Modulated 3D cross-correlation light scattering: Improving turbid sample characterization". Review of Scientific Instruments. 81 (12): 123107–123107–7. arXiv:1008.0615. Bibcode:2010RScI...81l3107B. doi:10.1063/1.3518961. PMID 21198014. S2CID 9240166. 10. ^ Pusey, P.N. (1999). "Suppression of multiple scattering by photon cross-correlation techniques". Current Opinion in Colloid & Interface Science. 4 (3): 177–185. doi:10.1016/S1359-0294(99)00036-9. 11. ^ Some, D. (2013). "Light Scattering Based Analysis of Biomolecular Interactions". Biophys. Rev. 5 (2): 147–158. doi:10.1007/s12551-013-0107-1. PMC 3641300. PMID 23646069.
## Tags: Fisher information, geodesic distance

## Nov 24, 2009

### Fisher information of Gamma distributions

Computing the Rao distance for Gamma distributions, by F. Reverter and J. M. Oller

The Gamma distribution belongs to the exponential families. Therefore, the Fisher information metric is $I(\theta)=\nabla^2 F(\theta)$. However, integrating the square root of the information matrix is difficult (there is no closed-form solution). The authors proceed by characterizing the Riemannian geodesics using the differential equation relying on the Christoffel symbols. Geodesics on the Gamma manifold are unique since the manifold is simply connected and complete, with all sectional curvatures nonpositive. The authors come up with a Newton-like numerical optimization algorithm that depends on a good initialization. First, they show that the metric is bounded by Poincaré metrics, for which closed-form equations of the geodesics are known. This yields a good starting tangent vector. It is quite impressive to look at the closed-form equations of the Poincaré geodesics: those formulas are surprisingly complicated. The authors implemented their algorithm in FORTRAN and show that it always converges on the example domains, with high numerical precision.
Grade 9 Math: Intro to Geometry

Pythagorean Theorem

True / False

What is the one condition of the Pythagorean theorem?

- It can only be used on an isosceles triangle
- It can only be used on a right-angled triangle
- It can only be used on a scalene triangle
- It can only be used on an equilateral triangle

We have a triangle with the following measurements: $$a=3$$ and $$b=9$$. What is the value of the hypotenuse?

- $$\sqrt{45}$$
- $$\sqrt{60}$$
- $$\sqrt{30}$$
- $$\sqrt{90}$$

- 12km
- 18km
- 22km
- 24km
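The hypotenuse question can be checked directly with the theorem itself (a quick sketch, not part of the quiz):

```python
import math

# Hypotenuse for the quiz legs a = 3, b = 9:
a, b = 3.0, 9.0
c_squared = a**2 + b**2        # 9 + 81 = 90
c = math.sqrt(c_squared)       # i.e. sqrt(90)
print(c_squared, c)
```

So the correct choice is $$\sqrt{90}$$.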
Q: A 2V battery is connected across AB as shown in the figure. The value of the current supplied by the battery when in one case the battery's positive terminal is connected to A, and in the other case when the positive terminal of the battery is connected to B, will respectively be:

Answer (1):

When the positive terminal is connected to A, no current will pass through $D_2$, so

I = 2/5 A = 0.4 A

When the positive terminal is connected to B, no current will pass through $D_1$, so

I = 2/10 A = 0.2 A
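The arithmetic behind the answer can be sketched as follows. Note that the 5 Ω and 10 Ω branch resistances are inferred from the posted currents (I = 2/5 A and 2/10 A); the figure itself is not available here.

```python
# Branch resistances inferred from the worked answer, not from the figure.
V = 2.0
i_forward = V / 5.0    # positive terminal at A: D2 blocks, 5-ohm branch conducts
i_reverse = V / 10.0   # positive terminal at B: D1 blocks, 10-ohm branch conducts
print(i_forward, i_reverse)   # 0.4 0.2
```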
# How do you factor 9t^2+42t+49? May 1, 2016 $9 {t}^{2} + 42 t + 49 = {\left(3 t + 7\right)}^{2}$ Notice that $9 {t}^{2} = {\left(3 t\right)}^{2}$ and $49 = {7}^{2}$ are both perfect squares, so it remains to check whether the middle term is correct: ${\left(3 t + 7\right)}^{2} = {\left(3 t\right)}^{2} + 2 \left(3 t\right) \left(7\right) + {7}^{2} = 9 {t}^{2} + 42 t + 49$
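A quick numeric spot-check of the factorization (evaluation points chosen arbitrarily for illustration):

```python
# Check that 9t^2 + 42t + 49 equals (3t + 7)^2 at many integer points.
def poly(t):
    return 9 * t**2 + 42 * t + 49

def factored(t):
    return (3 * t + 7) ** 2

assert all(poly(t) == factored(t) for t in range(-10, 11))
print(poly(2), factored(2))   # both 169
```

Agreement at more points than the polynomial's degree confirms the two quadratics are identical.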
### Home > MC1 > Chapter 9 > Lesson 9.3.1 > Problem9-127 9-127. Without a calculator, simplify each of the following expressions. 1. $0.25 + 2.5 + 2.5$ Try doing this one without making any calculations on paper. What is $2.5 + 2.5$? 2. $432.7 - 0.08$ Here, you are subtracting $\frac{8}{100} \text{ from }432\frac{70}{100}.$ Which are like parts? Can you find the difference in your head? $432.62$ 3. $4.57 · 0.3$ It might be useful to think of this as $\left(4\frac{57}{100}\right)\left(\frac{3}{10}\right)\text{ or }\left(\frac{457}{100}\right)\left(\frac{3}{10}\right)$ $\frac{(457)(3)}{1000}=\frac{1371}{1000}=1\frac{371}{1000}=1.371$
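The three computations can be verified exactly with Python's Fraction type, which mirrors the like-parts reasoning without floating-point rounding (a sketch, not part of the original lesson):

```python
from fractions import Fraction as F

# Exact arithmetic for the three mental-math problems.
a = F('0.25') + F('2.5') + F('2.5')   # 21/4, i.e. 5.25
b = F('432.7') - F('0.08')            # 21631/50, i.e. 432.62
c = F('4.57') * F('0.3')              # 1371/1000, i.e. 1.371
print(a, b, c)
```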
# Homework Help: Collision between 3 masses, with spring 1. Mar 5, 2016 ### Karol 1. The problem statement, all variables and given/known data The masses C, of magnitude 2m, and B, of magnitude m, move with velocity V and hit a stationary mass A of magnitude m. the collision lasts a very short time and is plastic but the masses don't stick together. the spring with constant k is ideal and the surface is smooth. What's the max acceleration of C during the contact between A and B Box A continues, after it separates, with constant velocity, what's it. What's the max contraction of the spring after A and B part? How long did the contact between A and B take place? 2. Relevant equations Kinetic energy, potential energy of a spring: $E_p=\frac{1}{2}kx^2$ Impulse-momentum: $mv=Ft$ Conservation of momentum: $m_1v_1+m_2v_2=m_1v_1'+m_2v_2'$ 3. The attempt at a solution I divide into 2 collisions. the first is plastic between boxes A and B in which the members cling to one another. the final velocity of the boxes A and B is $\frac{V}{2}$ and i find it from conservation of momentum between box B and both boxes after the collision: $$mV=2mv\;\rightarrow\; v=\frac{V}{2}$$ Now these 2 boxes collide elastically with C. the spring contracts, expands and reaches, again, the initial relaxed length L0. then and there the boxes A and B part and A continues alone. One system consists of the boxes A and B and the other system is box C alone. conservation of energy+momentum: $$\begin{cases}\frac{1}{2}(2m)V^2+\frac{1}{2}(2m)\left( \frac{V}{2} \right)^2=\frac{1}{2}(2m)v_1^2+\frac{1}{2}(2m)v_2^2 \\ 2mV+2m\frac{V}{2}=2mv_1+2mv_2 \end{cases}\;\Rightarrow\; v_1=\frac{V}{2},\; v_2=V$$ This is also A's velocity after it detaches, which happens at distance L0 again. At some point, when the spring contracts to it's max, A and B halt momentarily. from the point of view of an observer on C all A and B's kinetic energy gets into the spring. 
A and B's velocity relative to C is $\frac{V}{2}$: $$\frac{1}{2}(2m)\left( \frac{V}{2} \right)^2=\frac{1}{2}kx^2\;\rightarrow\; x=\sqrt{\frac{m}{2k}}V$$ C's max acceleration: $$ma=F=kx\;\rightarrow\;2m\cdot a=\sqrt{\frac{m}{2k}}V\;\rightarrow\;a=\sqrt{\frac{mk}{2}}\frac{V}{2m}$$ After A has left, B stretches the string and returns. this is another collision: $$\begin{cases}\frac{1}{2}mV^2+\frac{1}{2}(2m)\left( \frac{V}{2} \right)^2=\frac{1}{2}(2m)v_C^2+\frac{1}{2}mv_B^2 \\ mV+2m\frac{V}{2}=2mv_C+mv_B \end{cases}\;\Rightarrow\; v_C=\frac{5}{6}v,\; v_B=\frac{V}{3}$$ Now it compresses the spring, how much? The distance between B and C is x. i want to find x as a function of t: $x=f(t)$ so that i will be able to differentiate it and find the maximum contraction. i can't. i use a simplified setting: a spring attached to a wall. Lets say x is at distance x0 from the wall, the velocity there is v0 and it advances distance dx. the acceleration is $F=kx=ma\;\rightarrow\; a=\frac{k\cdot x_0}{m}$. the acceleration in the interval dx is considered constant, and the mean velocity in dx is: $$v-\frac{k\cdot x_0}{m}dt$$ from kinematics: $$dx=\left( v-\frac{k\cdot x_0}{2m}dt \right)dt\;\rightarrow\;\frac{dx}{dt}=v_0-\frac{k\cdot x_0}{2m}dt$$ $$\int \frac{dx}{dt} dt=\int dx=x=\int v_0 dt-\int \frac{k\cdot x_0}{2m}dt^2$$ I don't know to handle dt2, if everything is correct till there, of course. 2. Mar 5, 2016 ### haruspex I assume that full stop should not be there, that you are saying A and B halt momentarily from the point of view of an observer on C That argument worries me. You are using a non-inertial frame. But KE is different in different inertial frames, so a non-inertial frame means energy is not conserved. Consider the common mass centre and speeds relative to it. I think you'll find the spring PE is half what you calculated. For the motion after A and B part, it must be SHM relative to the common mass centre, no? 3. 
Mar 7, 2016 ### Karol With A and B attached, The velocity of COM: $$v_{cm}=\frac{2mV+2m\frac{V}{2}}{4m}=\frac{3}{4}V$$ In the COM frame the masses approach and move away from the COM at the same velocity: So there is a momentary halt. the total kinetic energy: $$E_k=2\cdot 2m\cdot\frac{V^2}{16}=\frac{1}{4}V^2$$ This is the compressed spring's energy: $$\frac{1}{4}V^2=\frac{1}{2}kx^2\;\rightarrow\; x=\frac{V}{\sqrt{2k}}$$ $$F=kx\;\rightarrow\; 2ma=k\frac{V}{\sqrt{2k}}\;\rightarrow\;a=\frac{\sqrt{2k}V}{4m}$$ I understand intuitively that there is SHM, but why? is it because the masses move symmetrically in the COM or just because the restoring force is F=kx? Harmonic motion: $$\dot x=-\omega A\sin(\omega t+\theta),\; \omega=\sqrt{\frac{k}{m}}=\frac{2\pi}{T}$$ In the COM system at t=0 the spring is loose, $\dot x=\frac{V}{4}$ and $\theta=\frac{3}{2}\pi$ The COM is one third the distance between B and C, and there the spring stationary while aside that point it contracts\expands. looking at B from that point gives: $$\omega=\sqrt{\frac{\frac{3}{2}k}{m}}=\sqrt{\frac{3k}{2m}}$$ $$\frac{V}{4}=\omega A=\sqrt{\frac{k}{m}}A\;\rightarrow\;A=\frac{V}{4}\sqrt{\frac{2m}{3k}}$$ The time A and B are in contact is found also from the second collision harmonic motion conditions: $$\omega=\sqrt{\frac{2k}{2m}}=\sqrt{\frac{k}{m}}=\frac{2\pi}{T}\;\rightarrow\;T=2\pi\frac{m}{k}$$ The time A and B are in contact is T. 4. Mar 7, 2016 ### haruspex But the masses are different now. In the frame of the common mass centre, momentum is zero. 5. Mar 8, 2016 ### Karol The velocity of COM $\frac{3}{4}V$, the velocities of C and B in the COM are $\frac{1}{4}V$ The reduced mass: $$\overline{m}=\frac{m_1m_2}{m_1+m_2}=\frac{2m^2}{3m}=\frac{2}{3}m$$ $$\omega=\sqrt{\frac{k}{\overline{m}}}=\sqrt{\frac{3k}{2m}}$$ $$\omega=\frac{2\pi}{T}\;\rightarrow\; T=\frac{2\pi}{\omega}$$ T, the period, is independent of coordinate systems, so why does it change, since i use $\overline{m}$? 
The mass is an inherent property of C and B. i know i can express momentum with $\overline{m}$: $P=\overline{m}V_{12}$, but what else? 6. Mar 9, 2016 ### haruspex Change? I did not see a different value, except via a line of reasoning that I regarded as wrong. Another way to get to this is to observe that the mass m is effectively attached to the common mass centre by a spring of constant 3k/2, and 2m by a spring of constant 3k. Both lead to a frequency $\sqrt{\frac{3k}{2m}}$ 7. Mar 9, 2016 ### Karol So the result: $$\frac{V}{4}=\omega A=\sqrt{\frac{k}{m}}A\;\rightarrow\;A=\frac{V}{4}\sqrt{\frac{2m}{3k}}$$ Is correct, right? and also correct is the time A and B were in contact:: $$T=2\pi\frac{m}{k}$$ 8. Mar 9, 2016 ### haruspex The max compression is for after separation, right? The speeds cannot both be V/4 relative to the mass centre then. For the time in contact, they are only in contact for half a cycle, no? 9. Mar 10, 2016 ### Karol After separation $v_{cm}=\frac{2}{3}V$. the relative velocity between C and B $v_{rel}=\frac{V}{3}$ If i want to use the reduced mass $\overline{m}=\frac{2}{3}m$ then: $$\frac{V}{3}=\omega A=\sqrt{\frac{3k}{2m}}\cdot A\;\rightarrow\; A=\frac{V}{3}\sqrt{\frac{2m}{3k}}$$ If i use the partial spring method, there is 1/3 spring between COM and C, i get the same A. For the time they are in contact it's half cycle because B collides A when the spring is loose, at L0, and they separate at L0 again. Last edited: Mar 10, 2016 10. Mar 10, 2016 ### haruspex How do you get that relative velocity? Why does separation suddenly change it? 11. Mar 14, 2016 ### Karol Correction, $v_{rel}=\frac{V}{2}$, but that doesn't change the result for A. 12. 
Mar 19, 2016 ### Karol The kinetic energy in the COM frame can be calculated using the reduced mass: $$E_k=\frac{1}{2}\overline{m}V_{12}^2=\frac{1}{2}\frac{2}{3}m\frac{V^2}{4}=\frac{1}{12}mV^2$$ And it differs from the KE i found in post #3, where i also had a mistake: $$E_k=2\frac{1}{2}\cdot 2m\cdot\frac{V^2}{16}=\frac{1}{8}V^2$$ 13. Mar 19, 2016 ### haruspex Yes. But to get the maximum contraction after separation, I feel that working in the COM frame only complicates things. You can just consider the loss in KE in going from point of separation to when B and C are at the same velocities. 14. Mar 20, 2016 ### Karol How do i find the common velocity of C and B after A parts? 15. Mar 20, 2016 ### haruspex Conservation of momentum on B+C. 16. Mar 21, 2016 ### Karol $$2m\frac{V}{2}+mV=3mv\;\rightarrow\; v=\frac{2}{3}V$$ $$E_{k\ initial}=\frac{mV^2}{2}\left( \frac{1}{4}+1 \right)=\frac{5}{8}mV^2$$ $$E_{k\ final}=\frac{mV^2}{2}\frac{4}{9}=\frac{2}{9}mV^2$$ $$\Delta E_k=\frac{13}{72}mV^2$$ In post #3 i got: $$\Delta E_k=\frac{1}{4}mV^2$$ 17. Mar 21, 2016 ### haruspex The mass is 3m here. Similarly, in the Einitial equation, one mass is 2m. 18. Mar 21, 2016 ### Karol $$E_{k\ initial}=\frac{mV^2}{2}\left(2 \frac{1}{4}+1 \right)=\frac{3}{4}mV^2$$ $$E_{k\ final}=\frac{3mV^2}{2}\frac{4}{9}=\frac{2}{3}mV^2$$ $$\Delta E_k=\frac{1}{12}mV^2$$ Still different than in #3 19. Mar 22, 2016 ### haruspex I was never sure what the KE calculated as mV2/8 in #3, and corrected to mV2/12 in #12, referred to. I gather it was KE of A+B+C in "the com frame", but which com frame? The mV2/12 in #12 is for the KE that goes into compression of the spring after collision. That agrees with the lost KE (of B and C) computed in #18. Last edited: Mar 23, 2016 20. 
Mar 23, 2016

### Karol

There is still a problem with the amplitude A i got in #9
$$A=\frac{V}{3}\sqrt{\frac{2m}{3k}}=\frac{2}{3}\frac{1}{\sqrt{6}}V\sqrt{\frac{m}{k}}$$
While with the KE method:
$$\Delta E_k=\frac{1}{12}mV^2=\frac{1}{2}kx^2\;\rightarrow\; x=\frac{1}{\sqrt{6}}V\sqrt{\frac{m}{k}}$$

21. Mar 23, 2016

### haruspex

In your post #11 you agreed that the relative velocity V/3 you quoted in post #9 should have been V/2, but maintained that it did not change the result for A. I did not check that at the time, but looking at it now I believe it does change the result for A and leads to agreement with the KE method.

22. Mar 25, 2016

### Karol

So to solve for 2 masses with a spring, must i use the reduced mass $\overline{m}$ only? Is there also another method?
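The numbers the thread eventually agrees on (common velocity 2V/3 after A leaves, and spring energy mV²/12 at maximum compression) can be checked numerically with unit values for m and V:

```python
# After A leaves: B (mass m, speed V) and C (mass 2m, speed V/2)
# reach a common velocity at maximum compression of the spring.
m, V = 1.0, 1.0
p = 2 * m * (V / 2) + m * V          # momentum of C + B
v_common = p / (3 * m)               # conservation of momentum: 2V/3
ke_before = 0.5 * 2 * m * (V / 2)**2 + 0.5 * m * V**2   # 3mV^2/4
ke_after = 0.5 * 3 * m * v_common**2                     # 2mV^2/3
spring_energy = ke_before - ke_after                     # mV^2/12
print(v_common, spring_energy)
```

This matches posts #16-#18: the lost kinetic energy mV²/12 is exactly what goes into the spring.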
Search by Topic

Resources tagged with Working systematically, similar to Factoring Factorials. There are 129 results under Broad Topics > Using, Applying and Reasoning about Mathematics > Working systematically.

- Product Sudoku (Stage 3): The clues for this Sudoku are the product of the numbers in adjacent squares.
- A First Product Sudoku (Stage 3): Given the products of adjacent cells, can you complete this Sudoku?
- Cuboids (Stage 3): Find a cuboid (with edges of integer values) that has a surface area of exactly 100 square units. Is there more than one? Can you find them all?
- Multiples Sudoku (Stage 3): Each clue number in this sudoku is the product of the two numbers in adjacent cells.
- How Old Are the Children? (Stage 3): A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?"
- (Stage 3): If you take a three by three square on a 1-10 addition square and multiply the diagonally opposite numbers together, what is the difference between these products? Why?
- Ones Only (Stage 3): Find the smallest whole number which, when multiplied by 7, gives a product consisting entirely of ones.
- Where Can We Visit? (Stage 3): Charlie and Abi put a counter on 42. They wondered if they could visit all the other numbers on their 1-100 board, moving the counter using just these two operations: x2 and -5. What do you think?
- Ben's Game (Stage 3): Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
- (Stage 3): A mathematician goes into a supermarket and buys four items. Using a calculator she multiplies the cost instead of adding them. How can her answer be the same as the total at the till?
- LCM Sudoku (Stage 4): Here is a Sudoku with a difference! Use information about lowest common multiples to help you solve it.
- Special Numbers (Stage 3): My two digit number is special because adding the sum of its digits to the product of its digits gives me my original number. What could my number be?
- Peaches Today, Peaches Tomorrow.... (Stage 3): Whenever a monkey has peaches, he always keeps a fraction of them each day, gives the rest away, and then eats one. How long could he make his peaches last for?
- Cinema Problem (Stage 3): A cinema has 100 seats. Show how it is possible to sell exactly 100 tickets and take exactly £100 if the prices are £10 for adults, 50p for pensioners and 10p for children.
- Latin Squares (Stages 3, 4 and 5): A Latin square of order n is an array of n symbols in which each symbol occurs exactly once in each row and exactly once in each column.
- Factor Lines (Stages 2 and 3): Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
- American Billions (Stage 3): Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
- Twinkle Twinkle (Stages 2 and 3): A game for 2 people. Take turns placing a counter on the star. You win when you have completed a line of 3 in your colour.
- Two and Two (Stage 3): How many solutions can you find to this sum? Each of the different letters stands for a different number.
- Number Daisy (Stage 3): Can you find six numbers to go in the Daisy from which you can make all the numbers from 1 to a number bigger than 25?
- LCM Sudoku II (Stages 3, 4 and 5): You are given the Lowest Common Multiples of sets of digits. Find the digits and then solve the Sudoku.
- Summing Consecutive Numbers (Stage 3): Many numbers can be expressed as the sum of two or more consecutive integers. For example, 15=7+8 and 10=1+2+3+4. Can you say which numbers can be expressed in this way?
- Difference Sudoku (Stage 4): Use the differences to find the solution to this Sudoku.
- Multiplication Equation Sudoku (Stages 4 and 5): The puzzle can be solved by finding the values of the unknown digits (all indicated by asterisks) in the squares of the $9\times9$ grid.
- Product Sudoku 2 (Stages 3 and 4): Given the products of diagonally opposite cells, can you complete this Sudoku?
- More Magic Potting Sheds (Stage 3): The number of plants in Mr McGregor's magic potting shed increases overnight. He'd like to put the same number of plants in each of his gardens, planting one garden each day. How can he do it?
- M, M and M (Stage 3): If you are given the mean, median and mode of five positive whole numbers, can you find the numbers?
- Magic Potting Sheds (Stage 3): Mr McGregor has a magic potting shed. Overnight, the number of plants in it doubles. He'd like to put the same number of plants in each of three gardens, planting one garden each day. Can he do it?
- Reach 100 (Stages 2 and 3): Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
- (Stage 3): How many different symmetrical shapes can you make by shading triangles or squares?
- Pair Sums (Stage 3): Five numbers added together in pairs produce: 0, 2, 4, 4, 6, 8, 9, 11, 13, 15. What are the five numbers?
- Colour Islands Sudoku (Stage 3): An extra constraint means this Sudoku requires you to think in diagonals as well as horizontal and vertical lines and boxes of nine.
- More Plant Spaces (Stages 2 and 3): This challenging activity involves finding different ways to distribute fifteen items among four sets, when the sets must include three, four, five and six items.
- Integrated Product Sudoku (Stages 3 and 4): This Sudoku puzzle can be solved with the help of small clue-numbers on the border lines between pairs of neighbouring squares of the grid.
- More Children and Plants (Stages 2 and 3): This challenge extends the Plants investigation so now four or more children are involved.
- Squares in Rectangles (Stage 3): A 2 by 3 rectangle contains 8 squares and a 3 by 4 rectangle contains 20 squares. What size rectangle(s) contain(s) exactly 100 squares? Can you find them all?
- Counting on Letters (Stage 3): The letters of the word ABACUS have been arranged in the shape of a triangle. How many different ways can you find to read the word ABACUS from this triangular pattern?
- Cayley (Stage 3): The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
- The Best Card Trick? (Stages 3 and 4): Time for a little mathemagic! Choose any five cards from a pack and show four of them to your partner. How can they work out the fifth?
- Isosceles Triangles (Stage 3): Draw some isosceles triangles with an area of $9$cm$^2$ and a vertex at (20,20). If all the vertices must have whole number coordinates, how many is it possible to draw?
- Tea Cups (Stages 2 and 3): Place the 16 different combinations of cup/saucer in this 4 by 4 arrangement so that no row or column contains more than one cup or saucer of the same colour.
- Crossing the Town Square (Stages 2 and 3): This tricky challenge asks you to find ways of going across rectangles, going through exactly ten squares.
- Consecutive Negative Numbers (Stage 3): Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers?
- More on Mazes (Stages 2 and 3): There is a long tradition of creating mazes throughout history and across the world. This article gives details of mazes you can visit and those that you can tackle on paper.
- (Stage 3): Rather than using the numbers 1-9, this sudoku uses the nine different letters used to make the words "Advent Calendar".
- Crossing the Bridge (Stage 3): Four friends must cross a bridge. How can they all cross it in just 17 minutes?
- Coins (Stage 3): A man has 5 coins in his pocket. Given the clues, can you work out what the coins are?
- Weights (Stage 3): Different combinations of the weights available allow you to make different totals. Which totals can you make?
- Warmsnug Double Glazing (Stage 3): How have "Warmsnug" arrived at the prices shown on their windows? Which window has been given an incorrect price?
- Building with Longer Rods (Stages 2 and 3): A challenging activity focusing on finding all possible ways of stacking rods.
## What is a Load Bearing Structural System?

It is a structural system in which the loads of a building, i.e. the weight of the building itself plus the live loads, are transferred to the ground through walls. The walls bear the load of the roofs and floors, and of course their own self weight. The most constructive use of load bearing is seen in the load bearing structural system, wherein the walls perform a range of functions: supporting loads, subdividing the space (creating rooms), providing thermal and acoustic insulation, and giving fire and weather protection. In a framed building, these functions normally have to be provided separately by means of walls. However, as the loads coming on the structure are carried by its walls, the thickness of the walls at the bottom increases to a considerable extent. Hence masonry structures are found to be very uneconomical beyond 3 to 4 storeys.

In Asian countries there has not been much progress in the construction of tall load bearing masonry structures, mainly because of the poor quality of workmanship for masonry and of clay bricks, which even today have only 3.5 to 10 MPa strength. However, mechanized brick plants have recently begun producing brick units with strengths of 17.5 to 25 MPa, which makes it possible to construct 5 to 6 storey load bearing structures at a cost lower than that of an RCC framed structural system. Reinforcement in masonry can further improve its load carrying capacity and its flexure and shear behaviour under earthquake loads. Nowadays masonry units are manufactured in shapes and sizes that make embedding reinforcement in masonry less cumbersome.
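The 3 to 4 storey limit can be illustrated with a back-of-envelope stress check: the compressive stress at the base of a wall grows linearly with the number of storeys, so brick strength caps the height. All numeric loads and the wall thickness below are assumptions chosen for illustration, not values from the text:

```python
def base_stress_mpa(storeys, load_per_storey_kn_per_m=50.0, thickness_m=0.23):
    """Axial stress (MPa) at the wall base for a given number of storeys.

    load_per_storey_kn_per_m: assumed vertical load per metre run of wall
    per storey (self weight + floor + live load); thickness_m: assumed
    wall thickness. Both are illustrative, not code values.
    """
    load_kn_per_m = storeys * load_per_storey_kn_per_m  # kN per metre run
    area_m2_per_m = thickness_m * 1.0                   # bearing area per metre run
    return load_kn_per_m / area_m2_per_m / 1000.0       # kN/m^2 -> MPa

# With ~3.5 MPa bricks and a safety factor of ~4, allowable stress is ~0.9 MPa,
# so 3 storeys pass while 6 or 10 storeys do not (without thicker walls):
for n in (3, 6, 10):
    print(n, round(base_stress_mpa(n), 2), "MPa")
```

Stronger 17.5 to 25 MPa units raise the allowable stress several-fold, which is why the text's 5 to 6 storey structures become feasible.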
There are different building codes for masonry structures around the world, including:

• IS (Indian Standard) 1905-1987
• ACI (American Concrete Institute) 530-02
• ASCE (American Society of Civil Engineers) 5-02
• TMS (The Masonry Society) 402-02
• ICC (International Building Code) 2000
• NZS (New Zealand Standard) 4230: Part 1:1990
• Eurocode 6

Of all the above codes, the Indian code provides a semi-empirical approach to the design of unreinforced masonry, especially for stresses arising from vertical and moderate lateral loads, such as wind. Its permissible stress values are not directly linked to prism test values and do not address the strength and ductility of masonry members under large lateral loads due to earthquakes. Further, the use of reinforcement is necessary to improve the flexural resistance and ductility required for seismic loads. The load bearing system needs many elements, such as a plinth band, lintel band, roof band, corner reinforcement and reinforcement at openings, to make it safer against earthquake loads. This requires somewhat skilled and experienced manpower.
# Math Help - Need Help with Logarithmic function problem???

1. ## Need Help with Logarithmic function problem???

Logarithmic Function Problem

The public health service monitors the spread of an epidemic of a particularly long-lasting strain of the flu in a city of 500,000 people using the logistic function. At the beginning of the first week (time zero), 200 cases had been reported. During the first week 300 new cases were reported.

a. Determine the logistic function.
b. Estimate the number of individuals infected after 5 weeks.
c. When will the epidemic spread at the greatest rate?
d. At what rate will the epidemic spread when 40% of the population has been infected?
e. Graph the logistic function for the first 20 weeks of the epidemic's spread.

2. Originally Posted by CalculusChallenge
Logarithmic Function Problem. The public health service monitors the spread of an epidemic of a particularly long-lasting strain of the flu in a city of 500,000 people using the logistic function. At the beginning of the first week (time zero), 200 cases had been reported. During the first week 300 new cases were reported. a. Determine the logistic function. b. Estimate the number of individuals infected after 5 weeks. c. When will the epidemic spread at the greatest rate? d. At what rate will the epidemic spread when 40% of the population has been infected? e. Graph the logistic function for the first 20 weeks of the epidemic's spread.

What is the logistic function?

CB

3. Your first difficulty is thinking that it involves a logarithm when it doesn't! Your problem says the spread of the disease is modeled by a logistic function, not a logarithm. So I repeat CaptainBlack's question, "What is a logistic function?"

4. Originally Posted by CaptainBlack
What is the logistic function?

CB

Calculus Challenge: Ahh, thank you.

5. Originally Posted by CalculusChallenge
Calculus Challenge: Ahh, thank you.
logistic function is y = a / [1 + b e^(-kt)]

But still, how do I figure out a, b, and k????

you were given information about points on the curve ...

in a city of 500,000 people ... what constant in the function represents this value?

At the beginning of the first week (time zero), 200 cases had been reported. ... (0, 200), correct?

During the first week 300 new cases were reported. ... for t measured in weeks, this would be (1, 300), correct?

use this info to determine each constant (a, b, and k) in the function.

6. Originally Posted by skeeter
you were given information about points on the curve ... what constant in the function represents this value? ... (0, 200), correct? ... for t measured in weeks, this would be (1, 300), correct? use this info to determine each constant (a, b, and k) in the function.

CalculusChallenge: Thank you for the enlightenment. So I used the two points to find the slope to be (300 - 200)/(1 - 0) = 100. Using y = mx + b to find 200 = 100(0) + b, therefore b = 200. So the constant a = 500000, constant b = 200 and constant k = 100. Is this correct???

7. Originally Posted by CalculusChallenge
CalculusChallenge: Thank you for the enlightenment. So I used the two points to find the slope to be (300 - 200)/(1 - 0) = 100. Using y = mx + b to find 200 = 100(0) + b, therefore b = 200. So the constant a = 500000, constant b = 200 and constant k = 100. Is this correct???

sorry ... not even close. $b$ is not a y-intercept ... you're dealing with a logistic growth curve, not a linear function.

8. Can you give me some formulas and hints on how to find constants a, b and k, because I haven't got a clue. Thanks.

9. $y = \frac{a}{1 + be^{-kt}}$

$a$ is the limiting value; the maximum possible (also called the carrying capacity) value for $y$.

$y = \frac{500000}{1 + be^{-kt}}$

at $t = 0$, $y = 200$:

$200 = \frac{500000}{1 + be^{0}} = \frac{500000}{1 + b}$

solve for $b$ ... $b = 2499$

$y = \frac{500000}{1 + 2499e^{-kt}}$

now use the point $(1, 300)$ and solve for $k$

10.
Originally Posted by skeeter
$y = \frac{a}{1 + be^{-kt}}$ $a$ is the limiting value; the maximum possible (also called the carrying capacity) value for $y$. $y = \frac{500000}{1 + be^{-kt}}$ at $t = 0$, $y = 200$: $200 = \frac{500000}{1 + be^{0}} = \frac{500000}{1 + b}$ solve for $b$ ... $b = 2499$ $y = \frac{500000}{1 + 2499e^{-kt}}$ now use the point $(1, 300)$ and solve for $k$

Oddly my prof. gave me the exact same problem. The only thing is that I don't know how to solve the equation in terms of k. I got dy/dt = ky(1 - y/L) for the growth model y = L/(1 + b e^(-kt)). So when solving for the point (1, 500) (I believe it's 500 and not 300, because they said 300 new cases, not 300 total), I got that k = 500.5. That doesn't seem to make sense, though, because for every different number of weeks I put in I always get 500,000. Any help is much appreciated!

11. Originally Posted by duriliim
Oddly my prof. gave me the exact same problem. The only thing is that I don't know how to solve the equation in terms of k. I got dy/dt = ky(1 - y/L) for the growth model y = L/(1 + b e^(-kt)). So when solving for the point (1, 500) (I believe it's 500 and not 300, because they said 300 new cases, not 300 total), I got that k = 500.5. That doesn't seem to make sense, though, because for every different number of weeks I put in I always get 500,000. Any help is much appreciated!

I did more thinking, and I messed up how I got 500.5 big time. I did it again and found that it does come to k = 0.9168. I tested it and it came out right. So for part b I got 18,850 infections. The only trouble I'm having is parts c and d. How do I find the greatest rate, and how do I find the rate when 40% of the population is infected? Thanks!
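Putting skeeter's constants together with duriliim's reading of the data (500 total cases at t = 1), a short script (a sketch, not from the thread) reproduces k ≈ 0.9169 and answers parts (b) through (d):

```python
import math

L = 500_000                        # carrying capacity: the city population
b = L / 200 - 1                    # from y(0) = 200  ->  b = 2499
k = math.log(b / (L / 500 - 1))    # from y(1) = 500  ->  k = ln(2499/999)

def y(t):
    """Logistic model y(t) = L / (1 + b e^{-kt})."""
    return L / (1 + b * math.exp(-k * t))

def rate(y_val):
    """The logistic curve satisfies dy/dt = k y (1 - y/L)."""
    return k * y_val * (1 - y_val / L)

print(round(k, 4))           # ~0.9169 per week, matching duriliim's value
print(round(y(5)))           # part (b): roughly 18,800-18,900 infected at week 5
# part (c): growth is fastest at the inflection point, where y = L/2
print(round(rate(0.4 * L)))  # part (d): new infections per week at 40% infected
```

Part (c) follows because dy/dt = k y (1 - y/L) is maximized at y = L/2, i.e. when 250,000 people are infected.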
# zbMATH — the first resource for mathematics ## Liu, James Hetao Compute Distance To: Author ID: liu.james-hetao Published as: Liu, J. H.; Liu, James; Liu, James H.; Liu, James Hetao Documents Indexed: 52 Publications since 1988, including 2 Books all top 5 #### Co-Authors 16 single-authored 10 Liang, Jin 10 Xiao, Ti-Jun 9 Nguyen Van Minh 7 Ezzinbi, Khalil 4 N’Guérékata, Gaston Mandata 3 Grimmer, Ronald 3 Lin, Yanping 2 Li, Fang 2 Naito, Toshiki 1 Ma, Chenglu 1 Vũ Quôc Phóng 1 Xu, Hong-Kun all top 5 #### Serials 4 Journal of Mathematical Analysis and Applications 3 Journal of Integral Equations and Applications 2 Applicable Analysis 2 Applied Mathematics and Computation 2 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 2 Proceedings of the American Mathematical Society 2 Semigroup Forum 2 Mathematical and Computer Modelling 2 Dynamic Systems and Applications 2 Dynamics of Continuous, Discrete and Impulsive Systems 2 Dynamics of Continuous, Discrete & Impulsive Systems. Series A. Mathematical Analysis 2 International Journal of Evolution Equations 2 Nonlinear Analysis. Theory, Methods & Applications 1 Rocky Mountain Journal of Mathematics 1 Funkcialaj Ekvacioj. 
Serio Internacia 1 Journal of Applied Mathematics and Stochastic Analysis 1 Electronic Journal of Differential Equations (EJDE) 1 Discrete and Continuous Dynamical Systems 1 Nonlinear Studies 1 Journal of Inequalities and Applications 1 Electronic Journal of Qualitative Theory of Differential Equations 1 Communications on Pure and Applied Analysis 1 Analysis and Applications (Singapore) 1 Series on Concrete and Applicable Mathematics 1 International Journal of Qualitative Theory of Differential Equations and Applications 1 Journal of Abstract Differential Equations and Applications (JADEA) all top 5 #### Fields 27 Ordinary differential equations (34-XX) 22 Integral equations (45-XX) 10 Operator theory (47-XX) 7 Partial differential equations (35-XX) 4 Mechanics of deformable solids (74-XX) 2 Numerical analysis (65-XX) 1 Dynamical systems and ergodic theory (37-XX) 1 Abstract harmonic analysis (43-XX) #### Citations contained in zbMATH 43 Publications have been cited 668 times in 436 Documents Cited by Year Semilinear integrodifferential equations with nonlocal Cauchy problem. Zbl 0916.45014 Lin, Yanping; Liu, James H. 1996 Nonlinear impulsive evolution equations. Zbl 0932.34067 Liu, James H. 1999 Nonlocal impulsive problems for nonlinear differential equations in Banach spaces. Zbl 1173.34048 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2009 Nonlocal Cauchy problems governed by compact operator families. Zbl 1083.34045 Liang, Jin; Liu, James; Xiao, Ti-Jun 2004 Nonlocal problems for integrodifferential equations. Zbl 1163.45010 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2008 Nondensely defined evolution equations with nonlocal conditions. Zbl 1035.34063 Ezzinbi, K.; Liu, James H. 2002 Bounded and periodic solutions of finite delay evolution equations. Zbl 0934.34066 Liu, James H. 1998 A Massera type theorem for almost automorphic solutions of differential equations. 
Zbl 1081.34054 Liu, James; N’Guérékata, Gaston; Nguyen Van Minh 2004 Bounded and periodic solutions of differential equations in Banach space. Zbl 0819.34041 Liu, James H. 1994 Bounded and periodic solutions of infinite delay evolution equations. Zbl 1045.34052 Liu, James; Naito, Toshiki; Nguyen Van Minh 2003 A remark on the mild solutions of non-local evolution equations. Zbl 1015.37045 Liu, James H. 2003 Periodic solutions of delay impulsive differential equations. Zbl 1242.34134 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2011 Periodic solutions of infinite delay evolution equations. Zbl 1056.34085 Liu, James H. 2000 Integrated semigroups amd integrodifferential equations. Zbl 0788.45011 Grimmer, Ronald; Liu, James H. 1994 Topics on stability and periodicity in abstract differential equations. Zbl 1158.34002 Liu, James H.; N’Guérékata, Gaston M.; Minh, Nguyen Van 2008 Periodic solutions of non-densely defined delay evolution equations. Zbl 1018.34063 Ezzinbi, Khalil; Liu, James H. 2002 Bounded and periodic solutions of semilinear evolution equations. Zbl 0833.34054 Liu, James H. 1995 Almost automorphic solutions of second order evolution equations. Zbl 1085.34045 Liu, James; N’Guérékata, Gaston M.; Van Minh, Nguyen 2005 Non-autonomous integrodifferential equations with non-local conditions. Zbl 1044.45002 Liu, James H.; Ezzinbi, Khalil 2003 Nonlocal Cauchy problems for nonautonomous evolution equations. Zbl 1143.34320 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2006 Singular perturbations of integrodifferential equations in Banach space. Zbl 0814.47058 Liu, James H. 1994 Singular perturbations in viscoelasticity. Zbl 0805.73029 Grimmer, Ronald; Liu, James H. 1994 Bounded solutions of parabolic equations in continuous function spaces. Zbl 1151.34049 Liu, James; N’Guérékata, Gaston; Nguyen Van Minh; Vu Quoc Phong 2006 Hyperbolic singular perturbations for integrodifferential equations. 
Zbl 1074.45009 Liang, Jin; Liu, James; Xiao, TiJun 2005 Integrodifferential equations with non-autonomous operators. Zbl 0928.45008 Liu, James H. 1998 A first course in the qualitative theory of differential equations. Zbl 1257.34001 Liu, James Hetao 2003 Resolvent operators and weak solutions of integrodifferential equations. Zbl 0792.45004 Liu, J. H. 1994 Periodicity of solutions to the Cauchy problem for nonautonomous impulsive delay evolution equations in Banach spaces. Zbl 1364.47047 Liang, Jin; Liu, James H.; Xiao, Ti-Jun; Xu, Hong-Kun 2017 Periodic solutions of some evolution equations with infinite delay. Zbl 1123.35084 Ezzinbi, Khalil; Liu, James H. 2007 Uniform asymptotic stability via Liapunov-Razumikhin technique. Zbl 0831.45011 Liu, James H. 1995 Dynamic response of a rigid plastic clamped beam struck by a mass at any point on the span. Zbl 0629.73043 Liu, J. H.; Jones, Norman 1988 Note on multiplicative perturbation of local $$C$$-regularized cosine functions with nondensely defined generators. Zbl 1217.47083 Li, Fang; Liu, James H. 2010 Periodic solutions of impulsive evolution equations. Zbl 1198.34106 Ezzinbi, Khalil; Liu, James H.; Nguyen Van Minh 2009 Nonlocal impulsive problems for integrodifferential equations. Zbl 1183.45004 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2009 Limiting equations of integrodifferential equations in Banach space. Zbl 0813.45005 Grimmer, Ronald; Liu, James H. 1994 A singular perturbation problem in integrodifferential equations. Zbl 0809.45008 Liu, James H. 1993 Exponential decay in integrodifferential equations with nonlocal conditions. Zbl 1263.45009 Ezzinbi, Khalil; Li, Fang; Liu, James H.; Van Minh, Nguyen 2007 Convergence for hyperbolic singular perturbation of integrodifferential equations. Zbl 1188.45006 Liang, Jin; Liu, James; Xiao, Ti-Jun 2007 Periodic solutions in fading memory spaces. Zbl 1157.34346 Ezzinbi, Khalil; Liu, James H.; Nguyen Van Minh 2005 On the bounded solutions of Volterra equations. 
Zbl 1056.45004 Naito, Toshiki; Nguyen Van Minh; Liu, James H. 2004 A concurrent hierarchical evolution approach to assembly process planning. Zbl 1032.90014 Guan, Q.; Liu, J. H.; Zhong, Y. F. 2002 Commutativity of resolvent operators in integrodifferential equations. Zbl 0957.45019 Liu, James H. 2000 Large deflections of an elastoplastic strain-hardening cantilever. Zbl 0702.73034 Liu, J. H.; Stronge, W. J.; Yu, T. X. 1989 Periodicity of solutions to the Cauchy problem for nonautonomous impulsive delay evolution equations in Banach spaces. Zbl 1364.47047 Liang, Jin; Liu, James H.; Xiao, Ti-Jun; Xu, Hong-Kun 2017 Periodic solutions of delay impulsive differential equations. Zbl 1242.34134 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2011 Note on multiplicative perturbation of local $$C$$-regularized cosine functions with nondensely defined generators. Zbl 1217.47083 Li, Fang; Liu, James H. 2010 Nonlocal impulsive problems for nonlinear differential equations in Banach spaces. Zbl 1173.34048 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2009 Periodic solutions of impulsive evolution equations. Zbl 1198.34106 Ezzinbi, Khalil; Liu, James H.; Nguyen Van Minh 2009 Nonlocal impulsive problems for integrodifferential equations. Zbl 1183.45004 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2009 Nonlocal problems for integrodifferential equations. Zbl 1163.45010 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2008 Topics on stability and periodicity in abstract differential equations. Zbl 1158.34002 Liu, James H.; N’Guérékata, Gaston M.; Minh, Nguyen Van 2008 Periodic solutions of some evolution equations with infinite delay. Zbl 1123.35084 Ezzinbi, Khalil; Liu, James H. 2007 Exponential decay in integrodifferential equations with nonlocal conditions. Zbl 1263.45009 Ezzinbi, Khalil; Li, Fang; Liu, James H.; Van Minh, Nguyen 2007 Convergence for hyperbolic singular perturbation of integrodifferential equations. 
Zbl 1188.45006 Liang, Jin; Liu, James; Xiao, Ti-Jun 2007 Nonlocal Cauchy problems for nonautonomous evolution equations. Zbl 1143.34320 Liang, Jin; Liu, James H.; Xiao, Ti-Jun 2006 Bounded solutions of parabolic equations in continuous function spaces. Zbl 1151.34049 Liu, James; N’Guérékata, Gaston; Nguyen Van Minh; Vu Quoc Phong 2006 Almost automorphic solutions of second order evolution equations. Zbl 1085.34045 Liu, James; N’Guérékata, Gaston M.; Van Minh, Nguyen 2005 Hyperbolic singular perturbations for integrodifferential equations. Zbl 1074.45009 Liang, Jin; Liu, James; Xiao, TiJun 2005 Periodic solutions in fading memory spaces. Zbl 1157.34346 Ezzinbi, Khalil; Liu, James H.; Nguyen Van Minh 2005 Nonlocal Cauchy problems governed by compact operator families. Zbl 1083.34045 Liang, Jin; Liu, James; Xiao, Ti-Jun 2004 A Massera type theorem for almost automorphic solutions of differential equations. Zbl 1081.34054 Liu, James; N’Guérékata, Gaston; Nguyen Van Minh 2004 On the bounded solutions of Volterra equations. Zbl 1056.45004 Naito, Toshiki; Nguyen Van Minh; Liu, James H. 2004 Bounded and periodic solutions of infinite delay evolution equations. Zbl 1045.34052 Liu, James; Naito, Toshiki; Nguyen Van Minh 2003 A remark on the mild solutions of non-local evolution equations. Zbl 1015.37045 Liu, James H. 2003 Non-autonomous integrodifferential equations with non-local conditions. Zbl 1044.45002 Liu, James H.; Ezzinbi, Khalil 2003 A first course in the qualitative theory of differential equations. Zbl 1257.34001 Liu, James Hetao 2003 Nondensely defined evolution equations with nonlocal conditions. Zbl 1035.34063 Ezzinbi, K.; Liu, James H. 2002 Periodic solutions of non-densely defined delay evolution equations. Zbl 1018.34063 Ezzinbi, Khalil; Liu, James H. 2002 A concurrent hierarchical evolution approach to assembly process planning. Zbl 1032.90014 Guan, Q.; Liu, J. H.; Zhong, Y. F. 2002 Periodic solutions of infinite delay evolution equations. 
Zbl 1056.34085 Liu, James H. 2000 Commutativity of resolvent operators in integrodifferential equations. Zbl 0957.45019 Liu, James H. 2000 Nonlinear impulsive evolution equations. Zbl 0932.34067 Liu, James H. 1999 Bounded and periodic solutions of finite delay evolution equations. Zbl 0934.34066 Liu, James H. 1998 Integrodifferential equations with non-autonomous operators. Zbl 0928.45008 Liu, James H. 1998 Semilinear integrodifferential equations with nonlocal Cauchy problem. Zbl 0916.45014 Lin, Yanping; Liu, James H. 1996 Bounded and periodic solutions of semilinear evolution equations. Zbl 0833.34054 Liu, James H. 1995 Uniform asymptotic stability via Liapunov-Razumikhin technique. Zbl 0831.45011 Liu, James H. 1995 Bounded and periodic solutions of differential equations in Banach space. Zbl 0819.34041 Liu, James H. 1994 Integrated semigroups amd integrodifferential equations. Zbl 0788.45011 Grimmer, Ronald; Liu, James H. 1994 Singular perturbations of integrodifferential equations in Banach space. Zbl 0814.47058 Liu, James H. 1994 Singular perturbations in viscoelasticity. Zbl 0805.73029 Grimmer, Ronald; Liu, James H. 1994 Resolvent operators and weak solutions of integrodifferential equations. Zbl 0792.45004 Liu, J. H. 1994 Limiting equations of integrodifferential equations in Banach space. Zbl 0813.45005 Grimmer, Ronald; Liu, James H. 1994 A singular perturbation problem in integrodifferential equations. Zbl 0809.45008 Liu, James H. 1993 Large deflections of an elastoplastic strain-hardening cantilever. Zbl 0702.73034 Liu, J. H.; Stronge, W. J.; Yu, T. X. 1989 Dynamic response of a rigid plastic clamped beam struck by a mass at any point on the span. Zbl 0629.73043 Liu, J. 
H.; Jones, Norman 1988 all top 5 #### Cited by 459 Authors 32 Ezzinbi, Khalil 26 Wang, Jinrong 24 Liang, Jin 21 Li, Yongxiang 19 Xiao, Ti-Jun 18 N’Guérékata, Gaston Mandata 16 Chen, Pengyu 16 Liu, James Hetao 15 Wei, Wei 14 Benchohra, Mouffak 14 Li, Fang 12 Balachandran, Krishnan 12 Li, Gang 12 Lizama, Carlos 12 Zhang, Xuping 11 Fan, Zhenbin 11 Fu, Xianlong 11 Nguyen Van Minh 11 Xiang, Xiaoling 9 Diop, Mamadou Abdoul 9 Hernández, Eduardo M. 9 Wang, Rongnian 8 Henríquez, Hernán R. 7 Chang, Jung-Chan 7 Ntouyas, Sotiris K. 7 Zhou, Yong 6 Ahmed, Nasir Uddin 6 Fečkan, Michal 6 Ghnimi, Saifeddine 6 Park, Jong Yeoul 5 Chang, Yong-Kui 5 Elazzouzi, Abdelhai 5 Ji, Shaochun 5 Naito, Toshiki 5 Yu, Xiulan 5 Zhu, Lanping 4 Abada, Nadjet 4 Bahuguna, Dhirendra 4 Cao, Junfei 4 Cuevas, Claudio 4 Hammouche, Hadda 4 Hernández Morales, Eduardo 4 Liu, Hsiang 4 Mophou, Gisèle Massengo 4 Nguyen Thieu Huy 4 Radhakrishnan, Bheeman 4 Trujillo, Juan J. 4 Wang, Huiwen 4 Xue, Xingmei 4 Yan, Zuomao 4 Yang, He 3 Anguraj, Annamalai 3 Arjunan, Mani Mallika 3 Azevedo, Katia A. G. 3 Caraballo Garrido, Tomás 3 Chen, Dehan 3 Chen, Qian 3 Diagana, Toka 3 Ding, Huisheng 3 Essebbar, Brahim 3 Hilal, Khalid 3 Ke, Tran Dinh 3 Kucche, Kishor D. 3 Kyelem, Bila Adolphe 3 Liu, Shengda 3 Mahmudov, Nazim Idrisoglu 3 Malar, Kandasamy 3 Maniar, Lahcen 3 McKibben, Mark Anthony 3 Oka, Hirokazu 3 Olszowy, Leszek 3 O’Regan, Donal 3 Ouaro, Stanislas 3 Ouhinou, Aziz 3 Taoudi, Mohamed Aziz 3 Wedrychowicz, Stanislaw 3 Zhang, Jun 2 Adimy, Mostafa 2 Agarwal, Ravi P. 2 Ait Dads, El Hadi 2 Aki, Sueli M. Tanaka 2 Akrid, Thami 2 Alves, José Ferreira 2 Anthoni, Selvaraj Marshal 2 Araya, Daniela 2 Balasubramaniam, Pagavathigounder 2 Belmekki, Mohammed 2 Bohner, Martin J. 2 Bryngelson, Spencer H. 2 Byszewski, Ludwik 2 Cardinali, Tiziana 2 Cheng, Yi 2 de Andrade, Bruno 2 de Carvalho, Maria Pires 2 deLaubenfels, Ralph J. 2 Dhakne, Machindra Baburao 2 Dong, Qixiang 2 dos Santos, Jair Silvério 2 Dubey, Shruti A. 
2 Ferreira, José Augusto
...and 359 more Authors

#### Cited in 125 Serials

66 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods
30 Advances in Difference Equations
28 Journal of Mathematical Analysis and Applications
19 Applied Mathematics and Computation
14 Abstract and Applied Analysis
11 Computers & Mathematics with Applications
11 Journal of Integral Equations and Applications
10 Mediterranean Journal of Mathematics
8 Proceedings of the American Mathematical Society
8 Semigroup Forum
8 Mathematical and Computer Modelling
7 Journal of Differential Equations
7 Journal of Function Spaces and Applications
6 Applied Mathematics Letters
6 Journal of Mathematical Sciences (New York)
6 Nonlinear Analysis. Hybrid Systems
5 Applicable Analysis
5 Journal of Computational and Applied Mathematics
5 Journal of Fixed Point Theory and Applications
4 Journal of Functional Analysis
4 Journal of Optimization Theory and Applications
4 Journal of Applied Mathematics
4 Boundary Value Problems
4 Nonlinear Analysis. Theory, Methods & Applications
3 Mathematical Methods in the Applied Sciences
3 Results in Mathematics
3 Journal of Inequalities and Applications
3 Fractional Calculus & Applied Analysis
3 Discrete Dynamics in Nature and Society
3 Communications in Nonlinear Science and Numerical Simulation
3 Journal of Dynamical and Control Systems
3 Nonlinear Analysis. Real World Applications
3 Discrete and Continuous Dynamical Systems. Series B
3 Journal of Applied Mathematics and Computing
3 Fixed Point Theory and Applications
3 African Diaspora Journal of Mathematics
3 Journal of Control Theory and Applications
3 Afrika Matematika
3 Evolution Equations and Control Theory
2 Indian Journal of Pure & Applied Mathematics
2 ZAMP. Zeitschrift für angewandte Mathematik und Physik
2 Chaos, Solitons and Fractals
2 Demonstratio Mathematica
2 Numerical Functional Analysis and Optimization
2 Applications of Mathematics
2 Electronic Journal of Differential Equations (EJDE)
2 Computational and Applied Mathematics
2 Opuscula Mathematica
2 Differential Equations and Dynamical Systems
2 Acta Mathematica Sinica. English Series
2 International Journal of Nonlinear Sciences and Numerical Simulation
2 Nonlinear Analysis. Modelling and Control
2 Bulletin of the Malaysian Mathematical Sciences Society. Second Series
2 Central European Journal of Mathematics
2 Analysis and Applications (Singapore)
2 Cubo
2 Journal of Nonlinear Science and Applications
2 International Journal of Differential Equations
2 Electronic Journal of Mathematical Analysis and Applications EJMAA
2 Mathematics
2 Open Mathematics
2 Transactions of A. Razmadze Mathematical Institute
1 Acta Mechanica
1 Bulletin of the Australian Mathematical Society
1 Computer Methods in Applied Mechanics and Engineering
1 International Journal of Solids and Structures
1 Journal of Fluid Mechanics
1 Journal of the Franklin Institute
1 Journal of Mathematical Biology
1 Journal of Mathematical Physics
1 Journal of Statistical Physics
1 Reports on Mathematical Physics
1 Rocky Mountain Journal of Mathematics
1 Acta Mathematica Vietnamica
1 Annales Polonici Mathematici
1 Automatica
1 Collectanea Mathematica
1 Kybernetika
1 Le Matematiche
1 Mathematics and Computers in Simulation
1 Mathematische Nachrichten
1 Mathematica Slovaca
1 Osaka Journal of Mathematics
1 Zeitschrift für Analysis und ihre Anwendungen
1 Acta Applicandae Mathematicae
1 International Journal of Production Research
1 Applied Numerical Mathematics
1 Acta Mathematicae Applicatae Sinica. English Series
1 Journal of Applied Mathematics and Stochastic Analysis
1 Aequationes Mathematicae
1 Glasnik Matematički. Serija III
1 International Journal of Computer Mathematics
1 Journal de Mathématiques Pures et Appliquées. Neuvième Série
1 Proceedings of the Indian Academy of Sciences. Mathematical Sciences
1 Indagationes Mathematicae. New Series
1 International Journal of Bifurcation and Chaos in Applied Sciences and Engineering
1 Discrete and Continuous Dynamical Systems
1 Mathematical Problems in Engineering
1 Journal of Vibration and Control
1 Positivity
...and 25 more Serials

#### Cited in 28 Fields

293 Ordinary differential equations (34-XX)
182 Operator theory (47-XX)
95 Partial differential equations (35-XX)
91 Integral equations (45-XX)
47 Systems theory; control (93-XX)
20 Real functions (26-XX)
19 Abstract harmonic analysis (43-XX)
17 Numerical analysis (65-XX)
15 Probability theory and stochastic processes (60-XX)
13 Calculus of variations and optimal control; optimization (49-XX)
10 Mechanics of deformable solids (74-XX)
10 Biology and other natural sciences (92-XX)
9 Difference and functional equations (39-XX)
8 Dynamical systems and ergodic theory (37-XX)
3 Functional analysis (46-XX)
2 Computer science (68-XX)
2 Fluid mechanics (76-XX)
1 General and overarching topics; collections (00-XX)
1 Combinatorics (05-XX)
1 Potential theory (31-XX)
1 Special functions (33-XX)
1 Approximations and expansions (41-XX)
1 Harmonic analysis on Euclidean spaces (42-XX)
1 Global analysis, analysis on manifolds (58-XX)
1 Classical thermodynamics, heat transfer (80-XX)
1 Quantum theory (81-XX)
1 Statistical mechanics, structure of matter (82-XX)
1 Operations research, mathematical programming (90-XX)
## Polynomials

How can I see that there is no product of two irreducible quadratic polynomials that has a root modulo all primes but no root in Z?
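For contrast, here is a small script (my own illustration, not from the original question) showing why two quadratic factors are not enough: for the pair $(x^2+1)(x^2+2)$ a prime with no root appears almost immediately, whereas the classical three-factor example $(x^2+1)(x^2+2)(x^2-2)$ does have a root modulo every prime, because the product of $-1$, $-2$ and $2$ is the square $4$, so they cannot all be non-residues.

```python
def two_factor_has_root(p):
    # does (x^2 + 1)(x^2 + 2) have a root mod p?
    return any((x * x + 1) * (x * x + 2) % p == 0 for x in range(p))

def three_factor_has_root(p):
    # does (x^2 + 1)(x^2 + 2)(x^2 - 2) have a root mod p?
    return any((x * x + 1) * (x * x + 2) * (x * x - 2) % p == 0 for x in range(p))

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23]
bad = [p for p in primes if not two_factor_has_root(p)]
print(bad)  # primes where the two-factor product has no root: [7, 23]
print(all(three_factor_has_root(p) for p in primes))  # True
```

For $p = 7$ and $p = 23$ both $-1$ and $-2$ are quadratic non-residues, so the two-factor product has no root there.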
# Brute force passwords in Java - follow-up

Here is the updated code. The original question can be found here. I would love it if you posted more ways I can improve this code. I am still fairly new to Java, so I am not entirely sure how to make it any better.

```java
package hexTox;

import java.util.Scanner;
import java.util.concurrent.TimeUnit;

public class Main {

    static String newPass = "";
    // default charset; crack() swaps in a version with symbols when they are allowed
    static String chars = "0123456789aABbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyzZ";

    public static void main(String[] args) {
        Scanner userIn = new Scanner(System.in);
        String choose;
        boolean decideSymb = true;
        do {
            System.out.println("Is using symbols an option? if so type in [Y] if not type in [N]");
            choose = userIn.nextLine();
        } while (!choose.equalsIgnoreCase("y") && !choose.equalsIgnoreCase("n"));
        // This was a long while loop with a lot of if and elses; thanks to SirPython's notice I changed it as he suggested.
        if (choose.equalsIgnoreCase("n")) {
            decideSymb = false;
        }
        // NOTE: the password prompt and the call to crack() were missing from the
        // pasted code; restored here so the program actually runs.
        System.out.println("Enter the password to find:");
        String password = userIn.nextLine();
        long start = System.currentTimeMillis();
        crack(password, decideSymb);
        long end = System.currentTimeMillis();
        long milliSecs = TimeUnit.MILLISECONDS.toSeconds(end - start);
        System.out.println("The password is: " + newPass);
        time(milliSecs);
        System.exit(0);
    }

    private static void time(long milliSecs) {
        // I put this in a method rather than the Main method and changed it up as BenC and SirPython noted
        long secs = milliSecs / 1000;
        long mins = secs / 60;
        long hours = mins / 60;
        long days = hours / 24;
        long years = days / 365;
        days -= (years * 365);
        hours -= (days * 24);
        mins -= (hours * 60);
        secs -= (mins * 60);
        milliSecs -= (secs * 1000);
        System.out.println("it took " + pluralFormat("year", years) + pluralFormat("day", days)
                + pluralFormat("hour", hours) + pluralFormat("min", mins) + pluralFormat("sec", secs)
                + pluralFormat("millisecond", milliSecs) + "to find the password");
    }

    private static String pluralFormat(String word, long value) {
        // This here was put as BenC noted to make my code more efficient
        return Long.toString(value) + " " + word + (value > 1 ? "s" : "") + ", ";
    }

    private static void crack(String password, boolean decideSymb) {
        if (decideSymb == true) {
            // In my original code it started with 1 and now it starts with 0 as it is supposed to do.
            chars = "0123456789#%&@aABbCcDdEeFfGgHh!IiJjKkLlMmNnOoPpQqRr$SsTtUuVvWwXxYyzZ";
        }
        for (int length = 2; length <= 15; length++) {
            newPass = "";
            newPass = repeatString("0", length);
            int lastInd = length - 1;
            while (!newPass.equals(password)) {
                String end = repeatString("Z", newPass.length());
                if (newPass.equals(end)) {
                    break;
                }
                int indInChars = chars.indexOf(newPass.charAt(lastInd));
                if (indInChars < chars.length() && indInChars >= 0) {
                    boolean t = true;
                    int howManyZs = 0;
                    // This will replace the last Zs that are in order with 0s, then update the one
                    // in front by +1 char. For example abcZZZ will evaluate to abD000 and will go on.
                    while (t == true) {
                        if (newPass.charAt(newPass.length() - 1 - howManyZs) == 'Z') {
                            howManyZs++;
                        } else {
                            t = false;
                        }
                    }
                    String reset0s = "";
                    for (int l = 0; l < howManyZs; l++) {
                        reset0s += "0";
                    }
                    if (lastInd < newPass.length() - 1 && lastInd > 0) {
                        lastInd--;
                        indInChars = chars.indexOf(newPass.charAt(lastInd)) + 1;
                        newPass = newPass.substring(0, lastInd) + chars.charAt(indInChars)
                                + newPass.substring(lastInd + 1);
                    } else if (newPass.length() - howManyZs == 1) {
                        newPass = chars.substring(chars.indexOf(newPass.charAt(0)) + 1,
                                chars.indexOf(newPass.charAt(0)) + 2) + reset0s;
                    } else if (newPass.length() - howManyZs > 1 && howManyZs != 0) {
                        newPass = newPass.substring(0, newPass.length() - 1 - howManyZs)
                                + chars.substring(chars.indexOf(newPass.charAt(newPass.length() - 1 - howManyZs)) + 1,
                                        chars.indexOf(newPass.charAt(newPass.length() - 1 - howManyZs)) + 2)
                                + reset0s;
                    } else {
                        indInChars = chars.indexOf(newPass.charAt(lastInd)) + 1;
                        newPass = newPass.substring(0, lastInd) + chars.charAt(indInChars);
                    }
                    System.out.println(newPass);
                }
            }
            if (newPass.equals(password)) {
                break;
            }
        }
    }

    private static String repeatString(String s, int n) {
        // This here was put as BenC noted to make my code more efficient
        StringBuilder sb = new StringBuilder(n);
        while (n-- > 0) {
            sb.append(s);
        }
        return sb.toString();
    }
}
```

• To prevent this from looking like a possible duplicate, consider adding the specific changes that were made. – Jamal Dec 21 '15 at 19:15
• Can you give a brief explanation of how your brute-forcing approach works? I'm also intrigued by why you're counting the number of Zs (howManyZs)... – h.j.k. Dec 22 '15 at 1:09
• @h.j.k. Sorry for not telling you earlier. Anyways I put it in as a comment in the code explaining what it is. By the way I am also working on a newer and faster way of brute forcing right now and will try to put up the code when done! – Oybek Kamolov Dec 22 '15 at 1:25
• @OybekKamolov ok, thanks for the update... so it starts with a and loops upwards by going to b, then abcZZZ to abD000 and so on? – h.j.k. Dec 22 '15 at 1:35
• Brute force isn't really a nice (albeit sometimes the only) solution. It will be O(n^m) in worst case, where n is the length of the password and m is the length of the charset being used. (please correct me if I am wrong with the Big O notation) – Emz Dec 22 '15 at 9:49

## 1 Answer

### Make use of constants

Instead of having `static String chars` with the default numbers and letters and then, in the middle of the code, overriding it with symbols as well, I suggest storing two constants, `static final String CHARACTERS` and `static final String SYMBOLS`; if you choose to use symbols as well, simply add them to the string `activeCharset`. That way it is easier at a first glance to see what characters exist, instead of having to scroll mid-way down the code.

```java
private static final String CHARACTERS = "0123456789aABbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyzZ";
private static final String SYMBOLS = "#%&@!$";

private static String activeCharset = CHARACTERS;
```

You then add to your `activeCharset` instead of working with `boolean decideSymb`.

### Magic values

```java
for (int length = 2; length <= 15; length++) {
```

I again recommend storing the 2 and 15 as constants. Tell what they represent; right now you have to read and understand the code to figure it out.

### Return values

This is more of an opinion.

```java
long start = System.currentTimeMillis();
long end = System.currentTimeMillis();
```

Your `crack` can return the time it took to execute, simply by taking `long start = System.currentTimeMillis();` at the start and subtracting it from `long end = System.currentTimeMillis();` at the end. Reasoning: if we are measuring the time the brute-force algorithm takes, it makes sense that the `crack` method itself keeps track of it.

```java
// get password from user
Scanner userIn = new Scanner(System.in);

// ask if symbols should be used
do {
    System.out.println("Is using symbols an option? if so type in [Y] if not type in [N]");
    choose = userIn.nextLine();
} while (!choose.equalsIgnoreCase("y") && !choose.equalsIgnoreCase("n"));

// if it is then add them
if (choose.equalsIgnoreCase("y")) {
    activeCharset += SYMBOLS;
}
```

Reference

Footnote: I think it is great that you are doing follow-up rounds on the code. It shows that you do actually care about the feedback.

• @OybekKamolov, I stand corrected, you should not use try-with-resource. Reference – Emz Dec 22 '15 at 11:41
• Try-with-resources can work. Note that closing the scanner closes the underlying stream. If you close the scanner on System.in, then you cannot reopen it. – 200_success Dec 22 '15 at 16:41
# Tate sequences and Fitting ideals of Iwasawa modules @article{Greither2016TateSA, title={Tate sequences and Fitting ideals of Iwasawa modules}, author={Cornelius Greither and Masato Kurihara}, journal={St Petersburg Mathematical Journal}, year={2016}, volume={27}, pages={941-965} } • Published 30 September 2016 • Mathematics • St Petersburg Mathematical Journal We consider abelian CM extensions L/k of a totally real field k, and we essentially determine the Fitting ideal of the dualized Iwasawa module studied by the second author [Ku3] in the case that only places above p ramify. In doing so we recover and generalise results of loc. cit. Remarkably, our explicit description of the Fitting ideal, apart from the contribution of the usual Stickelberger element Θ at infinity, only depends on the group structure of the Galois group Gal(L/k) and not on the… Fitting Ideals of Iwasawa Modules and of the Dual of Class Groups • Mathematics • 2017 In this paper we study some problems related to a refinement of Iwasawa theory, especially questions about the Fitting ideals of several natural Iwasawa modules and of the dual of the class groups, Fitting ideals of p-ramified Iwasawa modules over totally real fields • Mathematics Selecta Mathematica • 2021 We completely calculate the Fitting ideal of the classical p-ramified Iwasawa module for any abelian extension K/k of totally real fields, using the shifted Fitting ideals recently developed by the Fitting invariants in equivariant Iwasawa theory The main conjectures in Iwasawa theory predict the relationship between the Iwasawa modules and the $p$-adic $L$-functions. Using a certain proved formulation of the main conjecture, Greither and Fitting Ideals in Number Theory and Arithmetic • C. Greither • Mathematics Jahresbericht der Deutschen Mathematiker-Vereinigung • 2021 We describe classical and recent results concerning the structure of class groups of number fields as modules over the Galois group. 
When presenting more modern developments, we can only hint at the Fitting ideals of class groups for CM abelian extensions • Mathematics • 2021 Let K/k be a finite abelian CM-extension and T a suitable finite set of finite primes of k. In this paper, we determine the Fitting ideal of the minus component of the T -ray class group of K, except An unconditional proof of the abelian equivariant Iwasawa main conjecture and applications • Mathematics • 2020 Let $p$ be an odd prime. We give an unconditional proof of the equivariant Iwasawa main conjecture for totally real fields for every admissible one-dimensional $p$-adic Lie extension whose Galois Fitting Ideals in Two-variable Equivariant Iwasawa Theory and an Application to CM Elliptic Curves We study equivariant Iwasawa theory for two-variable abelian extensions of an imaginary quadratic field. One of the main goals of this paper is to describe the Fitting ideals of Iwasawa modules using The second syzygy of the trivial $G$-module, and an equivariant main conjecture • Mathematics • 2020 For a finite abelian $p$-extension $K/k$ of totally real fields and the cyclotomic $\mathbb{Z}_{p}$-extension $K_{\infty}/K$, we prove a strong version of an equivariant Iwasawa main conjecture by ## References SHOWING 1-10 OF 11 REFERENCES Ideal Class Groups of CM-fields with Non-cyclic Galois Action • Mathematics • 2012 Suppose that L/k is a finite and abelian extension such that k is a totally real base field and L is a CM-field. We regard the ideal class group ClL of L as a Gal(L/k)-module. As a sequel of the On Stronger Versions of Brumer's Conjecture Let k be a totally real number field and L a CM-field such that L/k is finite and abelian. In this paper, we study a stronger version of Brumer’s conjecture that the Stickelberger element times the On the structure of ideal class groups of CM-fields. 
For a CM-field K which is abelian over a totally real number field k and a prime number p, we show that the structure of the χ-component AχK of the p-component of the class group of K is determined by

Stickelberger ideals and Fitting ideals of class groups for abelian number fields • Mathematics • 2011 In this paper, we determine completely the initial Fitting ideal of the minus part of the ideal class group of an abelian number field over Q up to the 2-component. This answers an open question of

Computing Fitting ideals of Iwasawa modules Abstract. This paper determines, in an equivariant sense, the Fitting ideals of several Iwasawa modules including the most canonical one. The connection between the modules themselves, which are

Determining Fitting ideals of minus class groups via the equivariant Tamagawa number conjecture Abstract We assume the validity of the equivariant Tamagawa number conjecture for a certain motive attached to an abelian extension K/k of number fields, and we calculate the Fitting ideal of the

Stickelberger elements, Fitting ideals of class groups of CM-fields, and dualisation • Mathematics • 2008 In this paper, we systematically construct abelian extensions of CM-fields over a totally real field whose Stickelberger elements are not in the Fitting ideals of the class groups. Our evidence

Fitting Ideals of Class Groups of Real Fields with Prime Power Conductor • Mathematics, Physics • 1998 Abstract For a totally real field of prime power conductor, we determine the Fitting ideal over the Galois group ring of the ideal class group and of the narrow ideal class group.

Class fields of abelian extensions of Q • Mathematics • 1984

The Iwasawa conjecture for totally real fields Let F be a totally real number field. Let p be a prime number and for any integer n let $\mu_n$ denote the group of nth roots of unity. Let $\psi$ be a p-adic valued Artin character for F and let $F_{\psi}$ be the
axiom-developer

## Re: [Axiom-developer] putting pamphlets on MathAction

From: Ralf Hemmecke
Subject: Re: [Axiom-developer] putting pamphlets on MathAction
Date: Thu, 14 Sep 2006 19:18:25 +0200
User-agent: Thunderbird 1.5.0.5 (X11/20060719)

Sorry to say, Martin, but even though you tried to document the API of the package constructors, your code is much in the spirit of the old Axiom files. :-( Your code may be excellent, but it is clearly written for a machine and not for humans. So I cannot read it. You probably know that.

Take for example mantepse.spad.pamphlet:

\begin{abstract}
The packages defined in this file enable {\Axiom} to guess formulas for sequences of, for example, rational numbers or rational functions, given the first few terms. It extends and complements Christian Krattenthaler's program \Rate\ and the relevant parts of Bruno Salvy and Paul Zimmermann's \GFUN.
\end{abstract}

That is actually ALL there is to describe the algorithms you use inside your code. :-( Even worse, the macros \Rate and \GFUN are not references to the literature or URLs. If Axiom were a journal and your code were a contribution to that journal, I am sure you yourself would reject it.

Sorry for such harsh criticism, but I simply think that if new things are to be accepted into Axiom they should come with a certain standard. We already have enough legacy code to deal with. I would change my mind if your contribution brings (at least) two more contributors to Axiom Algebra and you promise to document your code in an LP style till the end of the year. Maybe you should have a nap... ;-)

Good luck with the presentation of your Guess package.

Ralf

On 09/14/2006 05:56 PM, Martin Rubey wrote:

Dear Bill, could you please put the three attached pamphlets on mathaction. (mantepse depends on the other two) I got an error message about noweave not found, and including it via spad doesn't seem to like the dollar signs...
(sorry for not being particularly useful anymore, I didn't sleep last night...)

Thanks for everything,

Martin
## my next vacations location [with no bike]

Posted in Statistics on July 2, 2017 by xi'an

## fast ε-free ABC

Posted in Books, Mountains, pictures, Running, Statistics, Travel, University life on June 8, 2017 by xi'an

Last Fall, George Papamakarios and Iain Murray from Edinburgh arXived an ABC paper on fast ε-free inference on simulation models with Bayesian conditional density estimation, a paper that I missed. The idea there is to approximate the posterior density by maximising the likelihood associated with a parameterised family of distributions on θ, conditional on the associated x. The data being then the ABC reference table. The family chosen there is a mixture of K Gaussian components, whose parameters are then estimated by a (Bayesian) neural network using x as input and θ as output. The parameter values are simulated from an adaptive proposal that aims at approximating the posterior better and better. As in population Monte Carlo, actually. Except for the neural network part, for which I fail to understand why it makes a significant improvement when compared with EM solutions. The overall difficulty with this approach is that I do not see a way out of the curse of dimensionality: when the dimension of θ increases, the approximation to the posterior distribution of θ does deteriorate, even in the best of cases, as with any other non-parametric resolution. It would have been of (further) interest to see a comparison with a most rudimentary approach, namely the one we proposed based on empirical likelihoods.

Posted in Mountains, pictures, Statistics, Travel, University life on March 10, 2017 by xi'an

## ISBA 2018, Edinburgh, 24-28 June

Posted in Statistics on March 1, 2017 by xi'an

The ISBA 2018 World Meeting will take place in Edinburgh, Scotland, on 24-29 June 2018.
(Since there was some confusion about the date, it is worth stressing that these new dates are definitive!) Note also that there are other relevant conferences and workshops in the surrounding weeks:

• a possible ABC in Edinburgh the previous weekend, 23-24 June [to be confirmed!]
• the Young Bayesian Meeting (BaYSM) in Warwick, 2-3 July 2018 [with a potential short course on fundamentals of simulation in the following days, to be confirmed!]
• MCqMC 2018 in Rennes, 1-6 July 2018
• ICML 2018 in Stockholm, 10-15 July 2018
• the 2018 International Biometrics Conference in Barcelona, 8-13 July 2018

## asymptotically exact inference in likelihood-free models

Posted in Books, pictures, Statistics on November 29, 2016 by xi'an

“We use the intuition that inference corresponds to integrating a density across the manifold corresponding to the set of inputs consistent with the observed outputs.”

Following my earlier post on that paper by Matt Graham and Amos Storkey (University of Edinburgh), I now read through it. The beginning is somewhat unsettling, albeit mildly!, as it starts by mentioning notions like variational auto-encoders, generative adversarial nets, and simulator models, by which they mean generative models represented by a (differentiable) function g that essentially turns basic variates with density p into the variates of interest (with intractable density). A setting similar to Meeds’ and Welling’s optimisation Monte Carlo. Another proximity pointed out in the paper is Meeds et al.’s Hamiltonian ABC.

“…the probability of generating simulated data exactly matching the observed data is zero.”

The section on the standard ABC algorithms mentions the fact that ABC MCMC can be (re-)interpreted as a pseudo-marginal MCMC, albeit one targeting the ABC posterior instead of the original posterior.
The starting point of the paper is the above quote, which echoes a conversation I had with Gabriel Stolz a few weeks ago, when he presented me his free energy method and when I could not see how to connect it with ABC, because having an exact match seemed to cancel the appeal of ABC, all parameter simulations then producing an exact match under the right constraint. However, the paper maintains this can be done, by looking at the joint distribution of the parameters, latent variables, and observables. Under the implicit restriction imposed by keeping the observables constant. Which defines a manifold. The mathematical validation is achieved by designing the density over this manifold, which looks like $p(u)\left|\frac{\partial g^0}{\partial u}\frac{\partial g^0}{\partial u}^\text{T}\right|^{-1/2}$ if the constraint can be rewritten as g⁰(u)=0. (This actually follows from a 2013 paper by Diaconis, Holmes, and Shahshahani.) In the paper, the simulation is conducted by Hamiltonian Monte Carlo (HMC), the leapfrog steps consisting of an unconstrained move followed by a projection onto the manifold. This however sounds somewhat intense in that it involves a quasi-Newton resolution at each step. I also find it surprising that this projection step does not jeopardise the stationary distribution of the process, as the argument found therein about the approximation of the approximation is not particularly deep. But the main thing that remains unclear to me after reading the paper is how the constraint that the pseudo-data be equal to the observable data can be turned into a closed form condition like g⁰(u)=0. As mentioned above, the authors assume a generative model based on uniform (or other simple) random inputs but this representation seems impossible to achieve in reasonably complex settings.
## travelling from pub to pub [in a straight line]

Posted in pictures, Wines on October 30, 2016 by xi'an

Above is the solution produced by a team at the University of Waterloo to the travelling salesman problem of linking all pubs in the UK (which includes pubs in Northern Ireland as well as some Scottish islands—even though I doubt there is no pub at all on the Island of Skye! They also missed a lot of pubs in Glasgow! And worst gaffe of all, they did not include the Clachaigh Inn, probably the best pub on Earth…). This path links over 24 thousand pubs, which is less than the largest travelling salesman problem solved at the current time, except that this case used the exact distances provided by Google maps. Of course, it would somehow make more sense to increase the distances by random amounts as the pub visits increase, unless the visitor sticks to tonic. Or tea.

## MCqMC 2016 [#4]

Posted in Mountains, pictures, Running, Statistics, Travel, University life on August 21, 2016 by xi'an

In his plenary talk this morning, Arnaud Doucet discussed the application of pseudo-marginal techniques to the latent variable models he has been investigating for many years. And its limiting behaviour towards efficiency, with the idea of introducing correlation in the estimation of the likelihood ratio. Reducing complexity from O(T²) to O(T√T). With the very surprising conclusion that the correlation must go to 1 at a precise rate to get this reduction, since perfect correlation would induce a bias. A massive piece of work, indeed!
The next session of the morning was another instance of conflicting talks and I hopped from one room to the next to listen to Hani Doss’s empirical Bayes estimation with intractable constants (where maybe SAME could be of interest), Youssef Marzouk’s transport maps for MCMC, which sounds like an attractive idea provided the construction of the map remains manageable, and Paul Russel’s adaptive importance sampling that somehow sounded connected with our population Monte Carlo approach. (With the additional step of considering transform maps.) An interesting item of information I got from the final announcements at MCqMC 2016 just before heading to Monash, Melbourne, is that MCqMC 2018 will take place in the city of Rennes, Brittany, on July 2-6. Not only is it a nice location on its own, but it is most conveniently located in space and time to attend ISBA 2018 in Edinburgh the week after! Just moving from one Celtic city to another Celtic city. Along with other planned satellite workshops, this occurrence should make ISBA 2018 more attractive [if need be!] for participants from overseas.
# Methodological notes

## The relativistic virial theorem and scale invariance

The virial theorem is related to the dilatation properties of bound states, as seen in particular from the relativistic virial theorem formulated (by Landau and Lifshitz) in terms of the energy-momentum tensor trace. In the Hamiltonian formulation of dilatations we propose here, the relativistic virial theorem naturally arises as a stability condition against dilatations. A bound state becomes scale invariant in the ultrarelativistic limit, in which its energy vanishes. However, for very relativistic bound states, scale invariance is broken by quantum effects, which necessitates including the energy-momentum tensor trace anomaly in the virial theorem. This quantum field theory virial theorem is directly related to the Callan-Symanzik equations. The virial theorem is applied to QED and then to QCD, focusing on the hadronic bag model. In massless QCD, according to the virial theorem, 3/4 of the hadron mass corresponds to quarks and gluons and 1/4 to the trace anomaly.

J. Gaite, "The relativistic virial theorem and scale invariance", Physics-Uspekhi (Phys. Usp.) 56 (9), 919-931 (10 September 2013). DOI: 10.3367/UFNe.0183.201309f.0973. https://ufn.ru/en/articles/2013/9/e/
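In symbols (a schematic summary in standard conventions, not the paper's exact notation), the chain of statements in the abstract reads:

```latex
% Relativistic virial theorem (Landau & Lifshitz): for a stationary
% bound state the integrated spatial stresses vanish,
\int \mathrm{d}^3x \,\langle T^{ii}(x)\rangle = 0 ,
% so the mass M = \int d^3x <T^{00}> reduces to the integrated trace:
M = \int \mathrm{d}^3x \,\langle T^{\mu}{}_{\mu}(x)\rangle .
% Splitting T^{00} into its traceless part plus one quarter of the trace,
% T^{00} = \bar{T}^{00} + \tfrac14\, T^{\mu}{}_{\mu},
% the traceless (quark and gluon) part carries 3/4 of M and the trace
% part 1/4 of M. In massless QCD the trace is pure anomaly,
T^{\mu}{}_{\mu} = \frac{\beta(g)}{2g}\, G^{a}_{\mu\nu} G^{a\,\mu\nu} ,
% which is the 3/4 vs. 1/4 split quoted above.
```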
# The gain or loss on the effective portion of a U.S. parent company's hedge of a net investment

• Question 1 The gain or loss on the effective portion of a U.S. parent company's hedge of a net investment in a foreign entity should be treated as:
• Question 2 If the U.S. dollar is the currency in which the foreign affiliate's books and records are maintained, and the U.S. dollar is also the functional currency,
• Question 3 Detroit-based Auto Corporation purchased ancillaries from a Japanese firm on December 1, 20X8, for 1,000,000 Yen, when the spot rate for Yen was $.0095. On December 31, 20X8, the spot rate stood at $.0096. On January 10, 20X9, Auto paid 1,000,000 Yen acquired at a rate of $.0094. Auto's income statements should report a foreign exchange gain or loss for the years ended December 31, 20X8 and 20X9 of:
• Question 4 Company X denominated a December 1, 20X9, purchase of goods in a currency other than its functional currency. The transaction resulted in a payable fixed in terms of the amount of foreign currency, and was paid on the settlement date, January 10, 2010. Exchange rates moved unfavorably at December 31, 20X9, resulting in a loss that should:
• Question 5 Simon Company has two foreign subsidiaries. One is located in France, the other in England. Simon has determined the U.S. dollar is the functional currency for the French subsidiary, while the British pound is the functional currency for the English subsidiary. Both subsidiaries maintain their books and records in their respective local currencies. What methods will Simon use to convert each of the subsidiary's financial statements into U.S. dollars?
English Subsidiary's Financial Statements / French Subsidiary's Financial Statements
A. Translation / Translation
B. Remeasurement / Remeasurement
C. Remeasurement / Translation
D. Translation / Remeasurement
• Question 6 Which of the following describes a situation when a parent company would not consolidate a foreign subsidiary?
• Question 7 Which of the following observations is true of forward contracts?
• Question 8 Which of the following observations is true of futures contracts?
• Question 9 Which of the following statements is true regarding the SEC's timeline for convergence?
• Question 10 Gains from remeasuring a foreign subsidiary's financial statements from the local currency, which is not the functional currency, into the company's functional currency should be reported as a(n)
• Question 11 Mazeppa, Inc. is a multinational entity with its head office located in Toronto, Canada. Its main foreign subsidiary is in Paris, France, but the primary economic environment in which the foreign subsidiary generates and expends cash is in the United States. Based on this information, which of the following statements is most likely true for Mazeppa, Inc.?
• Question 12 Dover Company owns 90% of the capital stock of a foreign subsidiary located in Italy. Dover's accountant has just translated the accounts of the foreign subsidiary and determined that a debit translation adjustment of $80,000 exists. If Dover uses the fully adjusted equity method for its investment, what entry should Dover record in order to recognize the translation adjustment?
A. Investment in Italian Subsidiary 72,000 / Other Comprehensive Income—Translation Adjustment 72,000
B. Other Comprehensive Income—Translation Adjustment 80,000 / Investment in Italian Subsidiary 80,000
C. Other Comprehensive Income—Translation Adjustment 72,000 / Investment in Italian Subsidiary 72,000
D. No entry required
• Question 13 How is the fair value of a Forward Contract determined by U.S. GAAP? Be sure to cite the proper FASB Code as part of your answer.
• Question 14 Explain the purpose of a hedge to reduce foreign exchange risk. Give an example as part of your explanation.
• Question 15 Identify the 2 factors that will cause a foreign exchange gain.
• Question 16 Briefly explain the following terms associated with accounting for foreign entities:
a) Functional Currency
b) Translation
c) Remeasurement
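As a quick sanity check of the arithmetic behind Question 3 (my own illustration, not an official answer key): the yen payable is remeasured at each balance-sheet date, so the 20X8 movement and the 20X9 movement are recognized separately.

```python
yen = 1_000_000

booked   = yen * 0.0095  # payable recorded December 1, 20X8
year_end = yen * 0.0096  # remeasured December 31, 20X8
settled  = yen * 0.0094  # paid January 10, 20X9

loss_20x8 = round(year_end - booked, 2)   # payable increased -> 20X8 loss
gain_20x9 = round(year_end - settled, 2)  # payable decreased -> 20X9 gain
print(loss_20x8, gain_20x9)  # 100.0 200.0
```

That is, a $100 loss in 20X8 and a $200 gain in 20X9.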
# SciPy incomplete gamma function

I got tripped up by this recently when doing eigenvalue calculations in Python. I wanted to evaluate the incomplete gamma function $\gamma (a, x)\;=\;\int_{0}^{x}t^{a-1}e^{-t}dt$. After using the SciPy ${\tt gammainc}$ function I was scratching my head as to why I was seeing a discrepancy between my numerical calculations for the eigenvalues and my theoretical calculation. Then I came across this post by John D. Cook that helped clarify things. The SciPy function ${\tt gammainc}$ actually calculates $\gamma(a,x)/\Gamma(a)$, i.e. the regularized lower incomplete gamma function.
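A quick numerical check of this behavior (a sketch, assuming SciPy is installed): multiply SciPy's regularized value by $\Gamma(a)$ and compare with the closed form of $\gamma(a,x)$ for $a=3$.

```python
from math import exp
from scipy.special import gammainc, gamma

a, x = 3.0, 1.0

# SciPy's gammainc returns the *regularized* lower incomplete gamma,
# P(a, x) = gamma(a, x) / Gamma(a).
regularized = gammainc(a, x)

# Multiply by Gamma(a) to recover the unregularized integral gamma(a, x).
unregularized = regularized * gamma(a)

# Closed form for a = 3: gamma(3, x) = 2 - e^{-x} (x^2 + 2x + 2).
closed_form = 2 - exp(-x) * (x * x + 2 * x + 2)

print(abs(unregularized - closed_form) < 1e-12)  # True
```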
# American Institute of Mathematical Sciences June  2012, 7(2): 315-336. doi: 10.3934/nhm.2012.7.315 ## New numerical methods for mean field games with quadratic costs 1 UFR de Math, Universit, 175, rue du Chevaleret, 75013 Paris, France Received  November 2011 Revised  March 2012 Published  June 2012 Mean field games have been introduced by J.-M. Lasry and P.-L. Lions in [13, 14, 15] as the limit case of stochastic differential games when the number of players goes to $+\infty$. In the case of quadratic costs, we present two changes of variables that allow to transform the mean field games (MFG) equations into two simpler systems of equations. The first change of variables, introduced in [11], leads to two heat equations with nonlinear source terms. The second change of variables, which is introduced for the first time in this paper, leads to two Hamilton-Jacobi-Bellman equations. Numerical methods based on these equations are presented and numerical experiments are carried out. Citation: Olivier Guéant. New numerical methods for mean field games with quadratic costs. Networks & Heterogeneous Media, 2012, 7 (2) : 315-336. doi: 10.3934/nhm.2012.7.315 ##### References: show all references ##### References: [1] Marco Cirant, Diogo A. Gomes, Edgard A. Pimentel, Héctor Sánchez-Morgado. On some singular mean-field games. Journal of Dynamics & Games, 2021  doi: 10.3934/jdg.2021006 [2] Sandrine Anthoine, Jean-François Aujol, Yannick Boursier, Clothilde Mélot. Some proximal methods for Poisson intensity CBCT and PET. Inverse Problems & Imaging, 2012, 6 (4) : 565-598. doi: 10.3934/ipi.2012.6.565 [3] Seung-Yeal Ha, Jinwook Jung, Jeongho Kim, Jinyeong Park, Xiongtao Zhang. A mean-field limit of the particle swarmalator model. Kinetic & Related Models, , () : -. doi: 10.3934/krm.2021011 [4] Hong Seng Sim, Wah June Leong, Chuei Yee Chen, Siti Nur Iqmal Ibrahim. Multi-step spectral gradient methods with modified weak secant relation for large scale unconstrained optimization. 
# Bonding and Phase Changes?

When water boils into water vapor, its temperature (average kinetic energy) does not change, because the input energy is used to break its bonds. But how does that work? Also, in a liquid, aren't intermolecular bonds broken and reformed constantly anyway? Why does a phase change require a time interval for breaking bonds?

Temperature involves kinetic energy; however, bonds and intermolecular forces (IMFs) involve potential energy. The potential energy of the system increases when water vaporizes, but its kinetic energy does not change. In a liquid, for each IMF broken, another is made, resulting in a net change of zero. A phase change requires a time interval simply because it takes time to add the necessary energy to the system.

• +1. Indeed the answer lies within the word "potential". – M.A.R. Apr 9 '15 at 17:38
• Wait, the potential energy of the system increases when bonds are being broken? – lightweaver Apr 10 '15 at 11:19
• Yes. When two objects that are attracted to each other by a force are moved farther apart, their potential energy increases. – Brinn Belyea Apr 13 '15 at 13:39

The trivial reason that the phase change from liquid to vapour takes time is that it can only occur at the surface of the liquid, but I suspect that this is not what you mean when you talk about a 'time interval' to break bonds. In fact, time does not come into it. I try to explain what happens below. I've added some background in case you are not familiar with it; if you are, you need only read the paragraphs around the figure.

Boiling occurs when the vapour pressure of the liquid equals the external pressure, normally 1 atm (or 1 bar; almost the same thing). To boil a liquid, energy has to be continuously supplied, because molecules escaping as vapour take energy away with them. (A container of rapidly evaporating solvent feels cold.) At the boiling point the temperature remains constant until all the liquid has evaporated.
The average translational kinetic energy per molecule in a liquid or gas at temperature $T$ is $3k_BT/2$. Trouton's rule states that the latent heat (enthalpy) of vaporisation is related to the boiling temperature $T_B$ by $L_{vap}/T_B \approx 80\ \pu{J K^{-1} mol^{-1}}$. This is borne out by many liquids, which fall in a range of $\pu{70 - 90 J K^{-1} mol^{-1}}$; those with values above this range are hydrogen bonded, e.g. water and ethanol, and those below it form dimers in the vapour, e.g. acetic acid. From this value the cohesive energy can be estimated, and it comes to $\approx 9k_BT$ per molecule. By cohesive energy is meant all the pairwise interactions between one molecule and its neighbours. Thus molecules will condense to form liquids when their cohesive energy exceeds $\approx 9k_BT$, and to reverse the process, which is most quickly done by boiling, this much energy has to be supplied per molecule.

Energy is present in a molecule as both kinetic (translational, plus vibrational and rotational) and potential (bonds). In addition, in a liquid there is always intermolecular potential energy, e.g. via pairwise dispersion forces, which are always present between any atoms or molecules, and dipole and induced-dipole forces, etc., which are collectively called van der Waals forces. These form the cohesive energy. (We shall assume that chemical bonds are so strong that boiling does not change them, which is usually the case. On this view, hydrogen bonds count as intermolecular bonds.)

As you mention in your question, intermolecular bonds are made and broken all the time, and this is because there is a distribution of energies. The Boltzmann factor $\exp(-E/(k_BT))$ gives the chance of having energy $E$. This distribution of energies is also why it is possible for a liquid to evaporate at room temperature, albeit slowly compared to boiling, as a few molecules per second will randomly gain enough energy to escape the liquid.

The nature of these intermolecular forces is important.
At infinite separation the intermolecular potential energy is zero; as molecules approach one another, the potential energy falls (an attractive interaction) and reaches a minimum value at, say, 0.5 nm or some similar small separation, somewhat larger than a chemical bond length. At yet smaller separations there is strong repulsion between the electrons on either molecule and the potential energy rises very rapidly. In practice, 'infinite separation' can mean just 3 or 4 times the minimum, equilibrium separation. The intermolecular potential typically has the shape given by a Lennard-Jones or Morse equation. See the figure for a sketch, which is not to scale; the minimum energy is negative. Note that if the intermolecular potential were harmonic in form, then liquids could never evaporate, as a harmonic potential energy well extends to infinity.

As the average energy increases, the cohesive (intermolecular) potential energy rises towards zero, until only kinetic energy (internal, translational and rotational) is left. The molecules can now escape the liquid. (Finally, note that a molecule has the same average kinetic energy, $3k_BT/2$, in the liquid and in the vapour at the same temperature; in the liquid the motion is simply restricted to a smaller region around the potential energy minimum than in the gas phase.)
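As a quick numerical check of the Trouton estimate above (the water numbers are standard textbook values, not taken from the answer itself):

```python
import math

R = 8.314  # gas constant, J/(K*mol)

# Trouton's rule: L_vap / T_B ~ 80 J/(K*mol) for "normal" liquids.
# Dividing by R gives the cohesive energy per molecule in units of k_B*T_B:
cohesive_kT = 80.0 / R
print(f"cohesive energy ~ {cohesive_kT:.1f} k_B T_B per molecule")  # ≈ 9.6

# Water is hydrogen bonded, so it sits above the 70-90 J/(K*mol) range:
L_vap_water = 40650  # J/mol, approximate enthalpy of vaporisation
T_B_water = 373      # K
print(f"water: L_vap/T_B ~ {L_vap_water / T_B_water:.0f} J/(K*mol)")  # ≈ 109

# Boltzmann factor for gaining ~9 k_B*T: roughly why evaporation
# below the boiling point is slow compared to boiling.
print(f"exp(-9) ~ {math.exp(-9):.1e}")  # ≈ 1.2e-04
```

So only about one molecule in $10^4$ has enough energy at any instant to escape, consistent with slow room-temperature evaporation.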
How can I reuse objects going off the left side of the screen by putting them offscreen on the right side?

I have an array of roughly 10,000 tiles over a 2D grid. When a tile goes off screen, I'd like to recycle it by disabling its renderer, repositioning it just before it comes on screen again, then turning the renderer back on. For example, if the camera is moving right in a platformer, the tiles going off screen on the left would be recycled and repositioned on the right side, just ahead of the camera. The player could be moving in any 2D direction (up, up-right, right, down-right, down, down-left, left, up-left), so I need to account for all directions.

    0,0--------------->100, 0
    |   Move Direction-->  |
    |                      |
    x|xxxxxxxxxxxxxxxxxxx|xxx   <- Recycled objects taken
    x|xxxxxxxxxxxxxxxxxxx|xxx   <- from left and repositioned on right
    0,-100<----------->100,-100

I've tried two approaches. One is using a wider camera view (maybe 1.3x), finding its edges with ViewportToWorldPoint and tracking the player's horizontal and vertical movement direction. If the camera is moving horizontally one way, my tile-manager script runs down the opposite edge and repositions those objects just before they come on screen. The problem is that if a tile is ever missed, it goes astray, and over time missed tiles accumulate into further problems. I am using a tile manager rather than putting a script in each tile, as it seems wasteful to have a script running in 10k+ tiles.

The other approach is literally to scan every item in the array to see if any are outside the viewport bounding box. I have a 2D array with just the positions saved; nonetheless, scanning through an array of 10k+ items every frame seems like a huge waste of resources and the wrong way to go in terms of efficiency.
All of this work is being done in a coroutine, but what is frustrating is that Unity doesn't support threading here, which would make this task faster, as I could divide up the work. Has anyone used or created something that will efficiently find all objects off screen so they can be reused?

• Your use of the term "garbage collection" here is extremely confusing, because it has a very specific meaning that I don't think you mean to use. Possibly edit your question and use a different name. I think the term you want is "object pool". – Andrew Russell Dec 14 '13 at 9:24

If I understand correctly, you are using a different object (a plane, for example) for each tile. Tilemaps usually aren't done this way, for several reasons:

• As you seem to already know, it comes with massive performance issues, because you might have to render up to 10,000 tiles at once
• Things like elevated tiles, with adjacent tiles forming a nice-looking slope, are very hard to do this way

For these and other reasons, you may find it useful to build the tiles from a single mesh. There is a great tutorial on YouTube which guides you through the process. Creating your tilemap this way will resolve the performance issues you are trying to fix by "reusing" your tiles.

You can use collision volumes. You could either set up very large volumes enclosing the play area on each side and warp the entity when the collision starts, or make a single volume enclosing the play area and warp the entity when the collision ends (which is the way I'd go). Generally the collision detection system will be more efficient at detecting these things than you can be in game code.
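A third, engine-agnostic option (sketched here in plain Python rather than Unity C#, with a hypothetical `wrapped_tile_x` helper) is to avoid detecting off-screen tiles entirely: compute each tile's position from the camera with modular arithmetic every frame, so a tile can never be "missed" and go astray. Applied per axis, it covers all eight movement directions.

```python
def wrapped_tile_x(tile_index, tile_width, num_tiles, cam_left):
    """World x position of tile `tile_index`, wrapping the strip of
    `num_tiles` tiles so it always covers [cam_left, cam_left + strip).

    Tiles that would fall left of the camera are shifted right by whole
    strip-widths, which is exactly the "recycle from left to right" idea,
    but as a pure function of the camera position (no per-tile tracking).
    """
    strip_width = tile_width * num_tiles
    base = tile_index * tile_width
    # Python's % always returns a value in [0, strip_width), even for
    # negative operands, so this works for leftward camera motion too.
    offset = (base - cam_left) % strip_width
    return cam_left + offset

# 4 tiles of width 10, camera's left edge at x = 5:
# tile 0 (normally at x = 0) wraps around to x = 40, ahead of the camera.
print([wrapped_tile_x(i, 10, 4, 5) for i in range(4)])  # [40, 10, 20, 30]
```

The same expression on the y axis handles vertical scrolling, and because positions depend only on the current camera position, a skipped frame cannot leave a tile stranded the way an edge-detection scheme can.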