# Curve-fit to resonant FRF data

## Introduction

I am re-reviewing the bible-like book "Introduction to Modal Analysis," which every Japanese vibration engineer has read at least once. I would like to explain what I have learned in the re-review, showing the MATLAB code. This time we review from Section 6.2.4, "Curve-fit," p. 349. The following link is a summary of vibration-theory articles in English, if you are interested.

TopPage 振動騒音 研究所

## 6.2.4 Curve-fit (Curve-fitting)

First, consider the FRF shown in the figure below as an example.

Figure 1 FRF

The Nyquist diagram for the frequency band in the red box in Figure 1 is shown in Figure 2.

Figure 2 Nyquist diagram (5–15 Hz)

As shown in Figure 2, the Nyquist diagram of the measured FRF is the red line. Depending on the frequency resolution, the Nyquist diagram will not trace a clean circle. (In textbooks, however, it is explained with a beautiful circle like the black dotted line...)

Circle fitting here means identifying the modal characteristics (resonant frequency, modal damping, and mode vector) from a Nyquist plot such as the one in Figure 2. Note that the resonant peak at the first resonant frequency in Figures 1 and 2 is considered to be unaffected by the other resonant frequencies; modal circle fitting is applicable only in such cases.

The Nyquist diagram of a resonant peak unaffected by other vibration modes (resonant frequencies) is a circle, as shown in Figure 3.
Figure 3 Nyquist diagram of compliance (when the effects of other vibration modes are negligible)

$$\mathrm{Center} = \left(R,\; I - \frac{1}{4K\zeta}\right) \tag{6.21}$$

This also relates to the following equations:

$$a = 2R,\quad b = 2I - \frac{1}{2K\zeta},\quad c = \frac{I}{2K\zeta} - R^2 - I^2 \tag{6.25}$$

$$\mathrm{Radius} = \sqrt{c + \frac{a^2 + b^2}{4}},\quad \mathrm{Center} = \left(\frac{a}{2}, \frac{b}{2}\right) \tag{6.27}$$

where the real part of the compliance is $x$ and the imaginary part is $y$. The coefficients $a$, $b$, and $c$ satisfy the least-squares relationship of equation (6.30). Note that $l$ (lowercase L) is the number of data points.

$$\left[ \begin{array}{ccc} \sum x_i^2 & \sum x_i y_i & \sum x_i \\ \sum x_i y_i & \sum y_i^2 & \sum y_i \\ \sum x_i & \sum y_i & l \end{array} \right] \left[ \begin{array}{c} a \\ b \\ c \end{array} \right] = \left[ \begin{array}{c} \sum (x_i^3 + x_i y_i^2) \\ \sum (x_i^2 y_i + y_i^3) \\ \sum (x_i^2 + y_i^2) \end{array} \right] \tag{6.30}$$

Equation (6.30) yields $a$, $b$, and $c$, so the black dotted circle in Figure 2 can be drawn. However, being able to draw a clean circle does not by itself give the modal characteristics (resonant frequency, modal damping ratio, and mode vector). From here, I would like to show the procedure for obtaining them.

Figure 4 Nyquist diagram of compliance, part 2

First, we need to find the resonance frequency $f_n$ shown in Figure 4. It lies between 9.4 Hz and 9.5 Hz, so it is often taken as (9.4 Hz + 9.5 Hz)/2 = 9.45 Hz. However, that was the practice in the early 1990s, when Dr. Nagamatsu wrote his introduction to modal analysis; now that numerical operations can be performed easily and with high precision, we would like to be a little smarter about obtaining it. So how do we obtain the resonance frequency $f_n$? Since the center of the circle is known from equation (6.27), we know the position on the circle corresponding to the resonance frequency $f_n$.
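As a sketch of how equation (6.30) can be solved in practice, here is a hypothetical numpy transcription of the circle fit (the post's own implementation is the MATLAB function shown later; `fit_circle` is an illustrative name):

```python
import numpy as np

def fit_circle(x, y):
    """Least-squares circle fit of Eq. (6.30): solve for a, b, c, then
    center = (a/2, b/2) and radius = sqrt(c + (a^2+b^2)/4) per Eq. (6.27)."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.array([
        [np.sum(x**2), np.sum(x*y),  np.sum(x)],
        [np.sum(x*y),  np.sum(y**2), np.sum(y)],
        [np.sum(x),    np.sum(y),    len(x)],
    ])
    B = np.array([
        np.sum(x**3 + x*y**2),
        np.sum(x**2*y + y**3),
        np.sum(x**2 + y**2),
    ])
    a, b, c = np.linalg.solve(A, B)          # Eq. (6.30)
    center = (a / 2, b / 2)                  # Eq. (6.27)
    radius = np.sqrt(c + (a**2 + b**2) / 4)  # Eq. (6.27)
    return center, radius
```

The normal equations follow from writing the circle as $x^2 + y^2 = ax + by + c$ and minimizing the residual over all data points.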
However, since the correspondence with frequency is not known directly, we must relate positions on the circle to frequencies. As shown in Figure 4, two frequencies (9.4 Hz and 9.5 Hz) straddling the resonance frequency $f_n$ are known, so $f_n$ can be obtained by linear interpolation:

$$f_n = \frac{\theta_d}{\theta_d + \theta_e} df + f_d$$

Since the angles are interpolated linearly, there is some error, but because the frequency resolution $df$ used for curve fitting is at most about 2 Hz, this error is usually acceptable. Once the resonance frequency $f_n$ is known, the modal damping ratio can be obtained as follows:

$$\beta_d = \frac{f_d}{f_n},\quad \beta_e = \frac{f_e}{f_n} \tag{B9.2}$$

$$\zeta = \frac{\beta_e^2 - \beta_d^2}{2\left(\beta_d \tan\frac{\theta_d}{2} + \beta_e \tan\frac{\theta_e}{2}\right)} \tag{B9.5}$$

The rest of the process is the same as for the half-power method, but let me explain it again just to be sure. Since the compliance $G(\omega)$ here is a self-compliance and the amplitude of the mode vector is 1, $K$ in equation (6.5) can be treated as equivalent to the modal stiffness $k$. Self-compliance is, for example, the transfer function obtained from the response of $m_1$ when $m_1$ itself is excited. Once the modal stiffness $k$ is obtained, the modal mass follows easily from

$$\Omega^2 m = k$$

So far, the resonant frequency $f_n$, modal damping ratio $\zeta$, modal stiffness $k$, and modal mass $m$ are known. All that remains is to determine the mode vector, and then all the modal characteristics are in hand. In the case of self-compliance, the component of the mode vector is 1 by normalization. On the other hand, for the mutual compliance $G_{is}$ (excitation point $i$, response point $s$), the following procedure, with $\varphi_i = 1$, gives the component $\varphi_s$ of the mode vector via equation (6.11).
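The interpolation and damping formulas above can be transcribed into a short Python sketch (hypothetical helper names; the angle arguments are the full angles $\theta_d$, $\theta_e$, halved inside per (B9.5)):

```python
import math

def interp_resonance(fd, df, theta_d, theta_e):
    """Linear interpolation of the resonance frequency between the two
    measured points straddling it: fn = fd + df * theta_d/(theta_d+theta_e)."""
    return fd + df * theta_d / (theta_d + theta_e)

def modal_damping(fd, fe, fn, theta_d, theta_e):
    """Modal damping ratio from Eqs. (B9.2) and (B9.5)."""
    beta_d = fd / fn  # Eq. (B9.2)
    beta_e = fe / fn  # Eq. (B9.2)
    return (beta_e**2 - beta_d**2) / (
        2 * (beta_d * math.tan(theta_d / 2) + beta_e * math.tan(theta_e / 2)))
```

With equal angles on both sides, `interp_resonance(9.4, 0.1, 1.0, 1.0)` lands exactly on the midpoint 9.45 Hz, matching the old rule of thumb.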
$$K_{is} = \frac{k}{\varphi_s} \tag{6.11}$$

The relationship with the FRF value at the resonant frequency of the mutual compliance is equation (6.12):

$$|G_{is}(\omega_n)| \approx \frac{1}{2K_{is}\zeta} \tag{6.12}$$

Therefore, the component $\varphi_s$ of the mode vector can be obtained using equation (6.13):

$$\varphi_s \approx 2k\zeta|G_{is}(\omega_n)| \tag{6.13}$$

## Program Verification

We verify the accuracy of the program with a 4-DOF model. The theoretical values are listed below, and we check whether the theory of modal circle fitting recovers them. We also examine the relationship with the frequency resolution, since it is known that accuracy improves with a smaller frequency resolution $df$.

#### Theoretical values

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| Resonant frequency [Hz] | 9.432 | 33.385 | 53.603 | 74.490 |
| Modal damping ratio [%] | 1 | 1 | 1 | 1 |

Mode vectors (one mode per column):

| 1st | 2nd | 3rd | 4th |
|---|---|---|---|
| 1 | 1 | 1 | 1 |
| 1.412 | 0.400 | -1.336 | -3.976 |
| 1.604 | -0.293 | -0.368 | 7.223 |
| 1.678 | -0.652 | 0.880 | -4.156 |

#### Identification results (frequency resolution 0.1 Hz)

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| Resonant frequency [Hz] | 9.4727 | 33.4549 | 53.6579 | 74.6738 |
| Modal damping ratio [%] | 1.0057 | 1.0043 | 1.0080 | 1.2731 |

Mode vectors:

| 1st | 2nd | 3rd | 4th |
|---|---|---|---|
| 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 1.3985 | 0.4003 | -1.3234 | -1.9105 |
| 1.5653 | -0.2944 | -0.3656 | 3.3689 |
| 1.6323 | -0.6501 | 0.8706 | -1.3685 |

#### Identification results (frequency resolution 1 Hz)

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| Resonant frequency [Hz] | 9.7776 | 33.9207 | 53.9864 | 73.7188 |
| Modal damping ratio [%] | 1.0772 | 1.0538 | 1.0873 | 1.8900 |

Mode vectors:

| 1st | 2nd | 3rd | 4th |
|---|---|---|---|
| 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 1.2654 | 0.4084 | -1.2482 | -1.2167 |
| 1.2422 | -0.3108 | -0.3410 | 1.9921 |
| 1.2617 | -0.6383 | 0.8257 | -1.1708 |

#### Identification results (frequency resolution 2 Hz)

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| Resonant frequency [Hz] | 10.2043 | 34.5581 | 54.9777 | 73.7797 |
| Modal damping ratio [%] | 1.0647 | 0.6913 | 0.9051 | 18.944 |

Mode vectors:

| 1st | 2nd | 3rd | 4th |
|---|---|---|---|
| 1.0000 | 1.0000 | 1.0000 | 1.0000 |
| 1.1764 | 0.2204 | -1.1183 | -0.0953 |
| 1.2551 | -0.1800 | -0.5794 | 0.1352 |
| 1.2843 | -0.3242 | 1.4505 | -0.0416 |

#### Accuracy Comparison

First, the accuracy of the
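Equations (6.13) and $\Omega^2 m = k$ can be sketched as two one-line helpers (illustrative Python names, not from the book; the sign of $\varphi_s$ is taken from the imaginary part of the FRF at resonance in the author's MATLAB code):

```python
import math

def modal_mass(k, fn):
    """Modal mass from Omega^2 * m = k, with Omega = 2*pi*fn."""
    return k / (2 * math.pi * fn) ** 2

def mode_vector_component(k, zeta, G_is_peak):
    """Eq. (6.13): phi_s ~= 2*k*zeta*|G_is(wn)| (magnitude only here)."""
    return 2 * k * zeta * abs(G_is_peak)
```

As a sanity check, for a single-DOF system with $k = 2\times 10^4$ and $m = 1$, the natural frequency is $f_n = \sqrt{k/m}/(2\pi)$, and `modal_mass(k, fn)` returns 1.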
resonant frequency estimation is compared in Table 1. The error is calculated using the theoretical value as the reference (correct value). As can be seen, the smaller the frequency resolution, the smaller the error. Personally, I was surprised that the maximum error reaches 8.2% when the frequency resolution is 2 Hz. I had expected it to be much lower, so this shows that the frequency-resolution setting is important.

Table 1 Resonance frequency estimation accuracy (error between theoretical and estimated values [%])

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| df = 0.1 Hz | 0.4 | 0.2 | 0.1 | 0.2 |
| df = 1 Hz | 3.7 | 1.6 | 0.7 | 1.0 |
| df = 2 Hz | 8.2 | 3.5 | 2.6 | 1.0 |

Table 2 then compares the estimation accuracy of the modal damping ratios. The larger the frequency resolution, the larger the error and the lower the estimation accuracy. My personal feeling is that an error of 60–70% in the modal damping ratio is still acceptable (in the worst case, a damping ratio that differs by a factor of 2 or so can be tolerated). By that standard, the damping ratio estimated for the 4th resonance with a frequency resolution of 2 Hz has an unacceptable level of error.

Table 2 Modal damping ratio estimation accuracy (error between theoretical and estimated values [%])

| | 1st | 2nd | 3rd | 4th |
|---|---|---|---|---|
| df = 0.1 Hz | 0.6 | 0.4 | 0.8 | 27.3 |
| df = 1 Hz | 7.7 | 5.4 | 8.7 | 89.0 |
| df = 2 Hz | 6.5 | 30.9 | 9.5 | 1794.4 |

Finally, the accuracy of the mode vectors is verified by computing the cross MAC, as shown in Figure 5. Although the lettering in the figure is small, the panels show, from left to right, frequency resolutions of 0.1 Hz, 1 Hz, and 2 Hz. It can be seen that the mode vectors can be obtained with sufficient accuracy at a frequency resolution of 1 Hz.
At a frequency resolution of 2 Hz, the similarity between the 4th theoretical mode and the 2nd circle-fitted mode is 0.5, indicating that the mode vectors are not well identified.

Figure 5 Accuracy verification results of mode vectors by cross MAC

## MATLAB code

#### Executable program

```matlab
clear all; clc; close all

m_vec = ones(1,4);
k_vec = (1:4)*2*10^4;
[M] = eval_Mmatrix(m_vec); % mass matrix
[K] = eval_Kmatrix(k_vec); % stiffness matrix
[V,D] = eig(K,M); % eigenvalues D, eigenvectors V
for ii1 = 1:length(m_vec)
    V(:,ii1) = V(:,ii1)/V(1,ii1); % normalize so the mode-vector component at excitation point m1 is 1
end
wn = sqrt(diag(D)); % natural angular frequencies
fn = wn/(2*pi);     % natural frequencies

for ii10 = 1:4
    freq = fn(ii10)-7:2:fn(ii10)+7; % frequency axis for the calculation
    w = 2*pi*freq;                  % angular frequencies
    mr = diag(V.'*M*V); % modal mass matrix
    kr = diag(V.'*K*V); % modal stiffness matrix
    c_cr = 2*sqrt(mr.*kr);    % critical damping coefficients
    modal_damping = 1*0.01;   % modal damping ratio (set to 1% here)
    cr = c_cr*modal_damping;  % modal damping coefficients
    F = zeros(length(m_vec),1);
    F(1) = 1; % force vector (excite with 1 over the whole frequency band)
    Xj = zeros(length(m_vec),length(freq)); % displacement (preallocated)

    % modal method
    ii = 1; % excitation point (add a loop "for ii = 1:length(F)" for multiple excitation points)
    for ii1 = 1:length(m_vec) % response points
        for ii2 = 1:length(wn)
            Xj(ii1,:) = Xj(ii1,:) + V(ii1,ii2)*V(ii,ii2)./( -mr(ii2).*w.^2 + 1i*cr(ii2)*w + kr(ii2) )*F(ii);
            % row vector = row vector + scalar*scalar ./ (scalar*row vector + ...) * scalar
        end
    end
    figure
    semilogy(freq,abs(Xj))

    % from here on, the "modal circle fit" program
    [zeta1,R1,I1,k1,m1,modal_vector1,fn_curve1] = modal_circle_fit(Xj(1,:),freq,[1 1]);
    [zeta2,R2,I2,k2,m2,modal_vector2,fn_curve2] = modal_circle_fit(Xj(2,:),freq,[2 1],k1,m1);
    [zeta3,R3,I3,k3,m3,modal_vector3,fn_curve3] = modal_circle_fit(Xj(3,:),freq,[3 1],k1,m1);
    [zeta4,R4,I4,k4,m4,modal_vector4,fn_curve4] = modal_circle_fit(Xj(4,:),freq,[4 1],k1,m1);
    ii10
    modal_vector = [modal_vector1 modal_vector2 modal_vector3 modal_vector4]
    zeta_temp = [zeta1 zeta2 zeta3 zeta4];
    fn_curve_temp = [fn_curve1 fn_curve2 fn_curve3 fn_curve4];
    zeta = mean(zeta_temp);
    fn_curve = mean(fn_curve_temp)
    zeta*100
end
```

#### function file

```matlab
function [zeta,R,I,k,m,modal_vector,fn_curve] = modal_circle_fit(Xj,freq,out_in,modalk,modalm)
% Xj     compliance (FRF)
% out_in ids of response and excitation, e.g. out_in = [1,2] for response point 1, excitation point 2
df = freq(2) - freq(1);
xi = real(Xj(1,:));
yi = imag(Xj(1,:));
if nargin < 4
    modalk = 0;
    modalm = 0;
end
A = [sum(xi.^2)  sum(xi.*yi) sum(xi)
     sum(xi.*yi) sum(yi.^2)  sum(yi)
     sum(xi)     sum(yi)     length(xi)]; % Eq. (6.30)
B = [sum(xi.^3+xi.*yi.^2)
     sum(xi.^2.*yi+yi.^3)
     sum(xi.^2+yi.^2)]; % Eq. (6.30)
abc = A\B; % Eq. (6.30)

figure
plot(real(Xj(1,:)),imag(Xj(1,:)))
hold on
grid on
axis equal
title([num2str(freq(1)) '~' num2str(freq(end)) ' Hz'])
plot(abc(1)/2,abc(2)/2,'ko') % Eq. (6.27)
r = sqrt(abc(3) + (abc(1)^2+abc(2)^2)/4); % Eq. (6.27)
for ii1 = 0:0.05:2*pi
    plot(abc(1)/2+r*cos(ii1),abc(2)/2+r*sin(ii1),'k.') % Eq. (6.27)
end

% angle on the circle of the resonance frequency found by the circle fit
fn_theta = atan( abc(2)/abc(1) ); % "angle on the circle" ~ resonance frequency
if ( abc(2)/2+r*sin(fn_theta) )^2 < ( abc(2)/2+r*sin(-fn_theta) )^2
    fn_theta = -fn_theta;
end
plot(abc(1)/2+r*cos(fn_theta),abc(2)/2+r*sin(fn_theta),'ko')
Gmax_fn = ( abc(1)/2+r*cos(fn_theta) ) + 1i*( abc(2)/2+r*sin(fn_theta) ); % FRF value on the circle at the resonance frequency

% angle of the input-data maximum |Xj|
[value,id_max] = max(abs(Xj(1,:)));
Gmax_theta = atan( imag(Xj(1,id_max))/real(Xj(1,id_max)) );

% angle of the data point adjacent to the "angle on the circle" (one side is the |G|max angle)
if Gmax_theta >= fn_theta
    Gmax_theta_next = atan( imag(Xj(1,id_max+1))/real(Xj(1,id_max+1)) );
    id_next = id_max+1;
else
    Gmax_theta_next = atan( imag(Xj(1,id_max-1))/real(Xj(1,id_max-1)) );
    id_next = id_max-1;
end
dtheta1 = abs( Gmax_theta - fn_theta );
dtheta2 = abs( Gmax_theta - Gmax_theta_next );

% resonance frequency from the "angle on the circle" (linear interpolation,
% so there is some error, but it should generally stay within tolerance)
fn_curve = (dtheta1/(dtheta2+dtheta1))*df + freq(id_max);

% atan gives the wrong sign in the third quadrant, so the signs must be adjusted
if Gmax_theta>0;      Gmax_theta = -Gmax_theta;           end
if fn_theta>0;        fn_theta = -fn_theta;               end
if Gmax_theta_next>0; Gmax_theta_next = -Gmax_theta_next; end
theta_d = abs( Gmax_theta - fn_theta );
theta_e = abs( Gmax_theta_next - fn_theta );
beta_d = freq(id_max) / fn_curve;  % Eq. (B9.2)
beta_e = freq(id_next) / fn_curve; % Eq. (B9.2)
zeta = abs(beta_e^2 - beta_d^2) / ( 2*(beta_d*tan(theta_d)+beta_e*tan(theta_e)) ); % Eq. (B9.5)

% substitute into Eq. (6.25)
R = abc(1)/2;
K = 1/(2*zeta)*sqrt( 1/( 4*(abc(3)+R^2) + abc(2)^2) ); % if the FRF is a self-compliance, k = K
I = 1/2*(abc(2)+1/(2*K*zeta));
if out_in(1)==out_in(2)
    modal_vector = 1;
    k = K;
    m = k/(2*pi*fn_curve)^2;
else
    modal_vector = 2*modalk*zeta*imag(-Gmax_fn); % Eq. (6.13), sign from the imaginary part
    k = modalk;
    m = modalm;
end
end
```
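The cross MAC used for Figure 5 can be computed in a few lines; here is a hypothetical numpy transcription (the post's own code is MATLAB, and `cross_mac` is an illustrative name):

```python
import numpy as np

def cross_mac(Phi_a, Phi_b):
    """Cross Modal Assurance Criterion between two sets of mode vectors
    (one mode per column). Diagonal entries near 1 mean the identified
    modes match the reference modes."""
    num = np.abs(Phi_a.conj().T @ Phi_b) ** 2
    den = np.outer(np.sum(np.abs(Phi_a) ** 2, axis=0),
                   np.sum(np.abs(Phi_b) ** 2, axis=0))
    return num / den
```

Comparing a mode set with itself gives a diagonal of exactly 1; off-diagonal values measure leakage between modes, which is how the 0.5 entry at (4th theoretical, 2nd fitted) in Figure 5 should be read.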
Could LIGO discovery be due to e.g. earthquakes or have a terrestrial source? [duplicate]

This question already has an answer here: I mean, could it have been earthquakes or anything else?

marked as duplicate by Kyle Kanos, user36790, Bill N, ACuriousMind♦, Gert Feb 13 '16 at 20:26

• Coincidence of signals for one... – Kyle Kanos Feb 12 '16 at 13:06
• Hi Sidarth. Just a simple answer for you: the two stations picked up exactly the same signal at once. You see? They are very far apart. It could be that incredibly there was a "passing freight train" or something at the exact same instant, speed, etc. in both areas - but it's extremely unlikely. At heart, the issue is that simple. – Fattie Feb 12 '16 at 13:42
• I deleted my earlier comment and I posted an answer below explaining why it could not have been terrestrial perturbations. – user106422 Feb 12 '16 at 23:54

I want to add one very compelling argument which clearly shows that the LIGO detection could not have been caused by earthquakes or other terrestrial phenomena, or at least that the probability would be outrageously low. What was announced yesterday was that two different detectors picked up almost entirely identical signals separated by only a few milliseconds. The distance between the two detectors, in Livingston, Louisiana and Hanford, Washington, as the crow flies on the surface of the Earth, is about 3042 km. But gravitational waves would not care about the distance along the surface of the Earth; what matters is the straight-line distance between the two points, which is somewhat smaller (about 3000 km). For a seismic wave to travel about 3000 km, the minimum time, assuming a uniform terrestrial medium between the two detector sites, is about 3000 km / (8 km/s) ≈ 375 to 380 seconds (8 km/s being a respectable upper bound on the speed of a seismic wave, according to Wikipedia) - and that assumes the earthquake is so powerful that it maintains the same energy density at both locations.
It is already highly unlikely that a seismic wave would lose no energy over such a distance, and the timing alone is definitive proof that the same seismic wave is not responsible for the events measured at the two detectors 3042 km apart: a few milliseconds of separation versus a minimum of several hundred seconds. Finally, what if there was more than one such event? The USGS and other earthquake databases indicate that the strongest earthquake likely to affect the Hanford detector site would come from the fault lines in California, which carry approximately a 1/50 chance that the next earthquake will be a magnitude 7 event in San Francisco - still a good ~1100 km from Hanford. The closest predicted earthquake risk near Louisiana is a 1/300 chance of a magnitude 8 earthquake in Missouri, again about 1000 km from Livingston. Now, assuming that one would need really strong earthquakes to contribute to the data (which, as @anna mentions, is nearly impossible because the mirrors and the beam sources are suspended from highly sophisticated isolation systems), the probability of two earthquakes happening simultaneously at two different locations, such that the same energy is felt at two sites over 3000 km apart and the same waveform is measured on both detectors, is almost 0. I don't need to be a geologist to claim that the probability of this happening is rather more outrageous than the probability of a celestial event capable of producing detectable gravitational waves. Finally, not to mention: if I were running an experiment involving millions or billions of dollars worth of effort, time, and equipment over a period of 40 years, which in principle could produce false positives from earthquakes, I would also build in data-analysis tools that remove any seismic effects from the final data, in addition to the intricate suspension technology.
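The timing argument can be checked with a few lines of arithmetic (round numbers from the answer above, not official LIGO figures):

```python
# Back-of-the-envelope check of the travel-time comparison
distance_km = 3000        # approximate straight-line separation of the sites
v_seismic_km_s = 8        # generous upper bound on seismic wave speed
c_km_s = 299_792.458      # gravitational waves travel at the speed of light

t_seismic_s = distance_km / v_seismic_km_s   # time for a seismic wave
t_gw_ms = distance_km / c_km_s * 1000.0      # time for a gravitational wave

print(f"seismic: {t_seismic_s:.0f} s, gravitational wave: {t_gw_ms:.0f} ms")
# prints: seismic: 375 s, gravitational wave: 10 ms
```

A shared seismic source would thus show up with a separation four orders of magnitude larger than the few milliseconds actually observed.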
And if I can be wise enough to contemplate this, I'm pretty sure highly experienced scientists and analysts already have.

• that would have had to be a really REALLY powerful earthquake, which all the media would be covering right now. Also, seismic waves lose energy, and there are technologies implemented in LIGO to cancel these effects. – user106422 Feb 13 '16 at 9:32
• You have speculated extensively about the earthquake possibility, so I assumed this option is not ruled out "by definition", especially since the signal recorded was on the order of $10^{-21}$ – bright magus Feb 13 '16 at 9:38
• It isn't really speculation. I read off the data provided by geological surveys for the possibility of strong earthquakes on the day the signal was detected. The key idea here is not to focus on these kinds of "what if" scenarios. I mentioned the sophistication of the suspension devices, which effectively screen out seismic waves. EVEN IF there were seismic contributions, the probability that they would produce identical results in two different experiments, exact to the order of $10^{-21}$, is practically 0. And I don't think nature gets that lucky to a significance of 5.1$\sigma$. – user106422 Feb 13 '16 at 11:52
• "The key idea here is not to focus on these kind of "what if" scenarios." Well, I always assumed this is exactly the key idea in science. And as to nature getting lucky - nature was "lucky" enough to produce homo sapiens capable of discovering its ways. – bright magus Feb 13 '16 at 12:02
• This is getting too philosophical for me. You're addressing this a priori, in which case yes, it is the essence of science. I'm talking from an a posteriori perspective. I'm not looking to answer the question "How can we eliminate terrestrial effects from data?" but rather "How do we know that the data we have, following rigorous analysis, could not have been caused by an earthquake?"
As for the anthropic argument, I think we should continue it elsewhere because it deviates from the essence of this thread. – user106422 Feb 13 '16 at 12:08

In very broad language, we don't know (and hear me out before you judge me)! But then what is science? Science is the process of producing models that help us understand the universe better and make predictions about it. We have a model of gravitational waves that was produced using general relativity. This model predicts a specific signal that we would detect if the model is accurate. The signal is shown in the paper on gravitational waves published yesterday, which compares the model with the signal detected:

(Image from: Observation of Gravitational Waves from a Binary Black Hole Merger by B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration), Phys. Rev. Lett. 116, 061102, doi:10.1103/PhysRevLett.116.061102)

Then, after we see this amount of matching, we do the statistical math and calculate: what is the probability of this happening by coincidence in two stations? The probability is measured by how many "sigmas" we are from our model. Then we make a publication like the one linked above, and we say: we made an observation that is consistent with gravitational waves. Then other experiments in the future repeat the measurement again and again, and every other experiment confirms what we had. If only LIGO could measure gravitational waves, and bigger binary black holes merged in the future and we saw nothing, then we would start doubting what happened and question whether what we measured was really gravitational waves. More experiments reveal more evidence and more solid proof. This is how science works. The big deal here is that LIGO and Virgo were the first ever to detect such solid evidence as the signals you saw in the pictures. So we're quite certain this is gravitational waves.

• How do we know that "bigger binary black holes merge in the future" if "we see nothing"?
– bright magus Feb 12 '16 at 19:12
• @brightmagus Are gravitational waves the only way to recognize black holes? – The Quantum Physicist Feb 12 '16 at 19:25
• How many merging black holes have been observed so far? – bright magus Feb 12 '16 at 19:27
• @brightmagus Why are you playing this game with me? Is my explanation wrong? If it is, then say so; if it is not, then ask your question politely without applying your own mental models as if they're absolutely true. I don't have time for games! – The Quantum Physicist Feb 12 '16 at 19:32
• What are you talking about? I'm asking politely how you expect to verify whether the interferometer correctly detects gravitational waves created by the merging of black holes. If you don't know or do not wish to answer, then that's OK with me. – bright magus Feb 12 '16 at 20:03

I will address the main question: Could the LIGO discovery be due to e.g. earthquakes or have a terrestrial source? The short answer is: NO. The reason the observation happened in September while the rumors arose just a month ago is that the researchers themselves were double-checking all the numbers. Anybody who watches the presentation given for the press can see, in a simplified manner, that vibrations from trucks ( :) ), earthquakes, etc. are isolated by suspending the detectors as pendulums, to damp any high-frequency disturbances. The characteristic signal takes milliseconds; one can barely hear it in the demonstration. This discussion on Motl's blog will help. Now of course a model is used to identify the signal with two black holes merging, as the other answer says. No competing physical model has been proposed, so: if it walks like a duck and it quacks like a duck, why, it IS a duck.

• "... if it walks like a duck and it quacks like a duck, why, it IS a duck." Well, perhaps ... "She is a witch! She looks like one. Burn her!" – bright magus Feb 12 '16 at 22:12
• @brightmagus There is no burning in making the identification of a duck.
Did I talk about shooting? You are distorting the classical analogy. – anna v Feb 13 '16 at 4:03
• You'd need to have a consulting seismologist with deep-mantle specialism on the LIGO team. It's at least possible a deep-mantle earthquake could register identically at the two locations. Was there a seismologist on the team? – Lucy Meadow Mar 22 '16 at 10:48
• @LucyMeadow The suspension takes care of this. Did you watch the video? – anna v Mar 22 '16 at 11:23

In addition to the answers already given, let me add one thought: these people aren't just doing some out-of-the-box experiment, and they aren't amateurs. The experiments have been going on for decades; the detectors used were built in 1999 and began measuring in 2002. They are constantly refined using the latest technology, which is also probed and used in other experiments of a similar type (LIGO explicitly mentions GEO 600 in their report). Hundreds of people are and were involved during that time, and because of the independent experiments, you have people thinking about similar matters independently (meaning there is redundancy). Therefore, you can be sure that these people know their experiments. When the experiments were started, a lot of signals were recorded (and still are), each with its own characteristics. The experimenters then started to learn what these signals mean. Often they find natural causes: maybe there was an earthquake at the time, or maybe the signal pops up at the exact same time a train goes by nearby. They investigate further, make sure this really is the cause, and try to eliminate such signals by improving the design. It is not unusual for such signals to remain unexplained for several months before a natural cause is eventually found. To sum up: the experiments are extremely carefully designed, and the experimenters go to great lengths to understand every single detail of their experiments.
Of course this doesn't eliminate the chance that the signal is just noise, as others have explained, but it is highly unlikely.

• I thought the detection that was made had only been possible for 2 days? – Lucy Meadow Mar 22 '16 at 10:50
• Yes, but the advanced LIGO doesn't come out of nowhere. It's an enhanced version of a system that has been running for years. All parts of the new experiment have also been tested individually, etc. – Martin Mar 22 '16 at 12:05
# Remote access/administration with the ability to make the remote screen static

I have used TightVNC for a long time now. I want to be able to access and work on my remote computer (rm1) from my present computer (rm2), BUT with my remote computer locked. Why? Nasty siblings, meh :P All I want is that people in the room where rm1 is kept should not be able to see anything I am doing. I don't have any naughty intentions; I just don't want people to see my work/code.

Edit: can I possibly achieve screen hiding by running some script on rm1?

rm1 is Windows 7 Pro, 64-bit
rm2 is either XP or Ubuntu 14.04

• For what operating system? What OS will run both the remote and local computer? – Alejandro May 31 '15 at 2:53
• @Alejandro edited the question, thanks :) – RinkyPinku May 31 '15 at 3:24
### Contact fparodi@pennmedicine.upenn.edu ### Bio Originally from Buenos Aires, Argentina, Felipe studied Neuroscience and Economics at the University of Miami from 2015-2019. As an undergraduate, Felipe conducted research on neurolinguistic pain modulation in the Social and Cultural Neuroscience Lab. Following graduation, he worked as a Psychometrician for First Choice Neurology, where he helped refine a screening instrument as a predictor of cognitive dysfunction in white and Hispanic geriatric populations. Outside of the lab, Felipe plays soccer, occasionally engages in people-watching, and is on a quest to make (and find!) the perfect burger. ### Research Interests Felipe is conducting a co-rotation under Konrad Kording and Michael L. Platt on non-human primate tracking. His project largely involves attempting to define (and refine) methods that combine analysis of naturalistic, unrestrained behavior with measures of neural activity. He is also interested in applying machine learning techniques to understand how real and artificial brains can optimize behavior.
# Determining whether a space is really three or two dimensional? [closed]

A space purports to be three dimensional, with the metric $$dl^2=dx^2+dy^2+dz^2-\left(\frac{3}{13}dx+\frac{4}{13}dy+\frac{12}{13}dz\right)^2$$ How can I show that it actually represents a two dimensional space? I tried diagonalizing it to see if it had a zero eigenvalue, for then there would exist a basis in which the representation of the metric tensor is really a 2×2 matrix, i.e. a coordinate transformation that reduces it to a 2×2 matrix.

## closed as off-topic by Jim, ACuriousMind♦, Kyle Kanos, Danu, JamalS Mar 10 '15 at 13:04

This question appears to be off-topic. The users who voted to close gave this specific reason:

• "Homework-like questions should ask about a specific physics concept and show some effort to work through the problem. We want our questions to be useful to the broader community, and to future users. See our meta site for more guidance on how to edit your question to make it better" – Jim, ACuriousMind, Kyle Kanos, Danu, JamalS

If this question can be reworded to fit the rules in the help center, please edit the question.

• I edited to make the equation more readable, however as it stands this will likely be closed as a homework question, and it is still somewhat confusing – Sean Mar 9 '15 at 18:36
• Zero is an eigenvalue! – MBN Mar 9 '15 at 21:20

Expand your line element and obtain the metric $g_{ij}$.
It is of the form $$g_{ij}=\delta_{ij}-n_in_j$$ where $n=\langle \frac3{13}, \frac4{13}, \frac{12}{13}\rangle$, so that $n_in^i=1$. What you have now is a projection operator (because it is idempotent: $g_{ij}g^j{}_k = g_{ik}$; check it symbolically), which does this: it takes any 3D vector $v$ and gives you its component in the plane perpendicular to the unit vector $n$, so its image spans the 2D plane orthogonal to $n$.

Proof: $$g_{ij}v^j=(\delta_{ij}-n_in_j)v^j=v_i-n_i(n_jv^j)$$ and the right-hand side is simply the vector $v$ minus its component along $n$, i.e. the component orthogonal to $n$. So this metric projects a 3D vector onto a plane. In GR, these kinds of "degenerate metrics" are generally used when splitting spacetime as a 3+1 foliation for solving initial value problems. It is not a spacetime metric because it is not Lorentzian; it is actually Riemannian (positive semi-definite). This exercise may be from a general relativity book, but it is in fact a geometry question.

So I take it that the question is to show that it represents a two dimensional space. But since it is in the general-relativity tag, one can be smart and guess the following. Consider the vector $n^i=\langle \frac3{13}, \frac4{13}, \frac{12}{13}\rangle$. It is a unit vector in three dimensional Euclidean space. The given metric can be written as $$g_{ij}=\delta_{ij}-n_in_j$$ where $\delta_{ij}$ is the usual Euclidean metric. This shows that the given metric is the induced metric on the subspace orthogonal to $n^i$.

• But the n^i space is still three dimensional! – MQRG Mar 10 '15 at 3:22
• n^i is not a space, it is a vector. The subspace which is orthogonal to that vector is two dimensional, and that is the space you are looking for. – MBN Mar 10 '15 at 7:31
• But you mentioned the n^i subspace!, which is the 3D Euclidean space, right? – MQRG Mar 10 '15 at 8:13
• Thanks for the clarification. But you mentioned the n^i subspace! All I meant was that n^i is a vector in 3D. What is meant by a subspace being orthogonal to a vector?
Can you give a mathematical relation? – MQRG Mar 10 '15 at 8:25
• I wrote "the orthogonal to n^i subspace" (i.e. the subspace orthogonal to the vector), not "the n^i subspace". – MBN Mar 10 '15 at 8:57

Having a zero column in a diagonalization is bad (since the metric would be degenerate), but it would also be bad if the metric somehow looked like $$dl^2=dx^2+dy^2+dz^2$$ or $$dl^2=-dx^2-dy^2-dz^2.$$ So you also want to avoid the metric being positive definite or negative definite. In more dimensions you would also have to worry about having two spatial directions and two time directions! So you want your metric to have signature +--- or -+++ for a real spacetime, and, in a lower number of dimensions, +-- or -++.

• Well, I guess that is why one has to show that it is actually a two dimensional space and in some basis looks like this! – MQRG Mar 10 '15 at 3:23
• @MariaQuadeer I tried making my answer very general, partly because of the homework policy and partly because I didn't find your question clear enough. I can't even figure out why you think zero isn't an eigenvalue. I'd expand the metric, write it as a symmetric matrix, and find the eigenvalues/eigenvectors. – Timaeus Mar 10 '15 at 4:05
• Yeah, thanks for the remark! Zero is an eigenvalue, I checked it again. – MQRG Mar 10 '15 at 8:26
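The claims in the answers above can be checked numerically; here is a small numpy sketch (illustrative, not from the thread):

```python
import numpy as np

# The unit vector appearing in the metric
n = np.array([3/13, 4/13, 12/13])
assert abs(n @ n - 1.0) < 1e-12          # n is indeed a unit vector

# g_ij = delta_ij - n_i n_j
g = np.eye(3) - np.outer(n, n)

assert np.allclose(g @ g, g)             # idempotent: a projector
assert np.allclose(g @ n, 0.0)           # n spans the null direction

# eigenvalues are {0, 1, 1}: rank 2, so the metric is effectively 2D
print(np.allclose(np.sort(np.linalg.eigvalsh(g)), [0.0, 1.0, 1.0]))  # → True
```

The single zero eigenvalue with eigenvector $n$ is exactly the degeneracy the question asked about; the two unit eigenvalues give the flat 2D metric on the orthogonal plane.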
# A possible more principled approach to generation and simplification in hypothesis

Currently the way generation and simplification in hypothesis work is very ad hoc and undisciplined. The API is spread across various places, and the actual behaviour is very under-specified. There are two operations exposed, produce and simplify. produce takes a size parameter and produces values of the specified type of about the provided size, whereas simplify takes a value of the specified type and produces a generator over simplified variants of it. The meaning of these terms is deliberately undefined: there is no specified meaning for "about that size" or "simplified variants". This post is an attempt to sketch out some ideas about how this could become better specified. I'm just going to use maths rather than Python here – some of the beginnings of this have made it into code, but it's no more than a starting point right now.

Let $$S$$ be a set of values we'll call our state space. We will call a search tactic for $$S$$ a triple:

\begin{align*} \mathrm{complexity} & : S \to [0, \infty) \\ \mathrm{simplify} & : S \to [S]^{< \omega}\\ \mathrm{produce} & : [0, \infty) \to \mathrm{RV}(S) \\ \end{align*}

where $$\mathrm{RV}(S)$$ is the set of random variables taking values in $$S$$, $$[S]^{<\omega}$$ is the set of finite sequences taking values in $$S$$, and $$\mathrm{h}$$ below is the entropy function. These operations should satisfy the following properties:

1. $$\mathrm{h}(\mathrm{produce}(x)) = \min(x, \log(|S|))$$
2. $$x \to \mathbb{E}(\mathrm{complexity}(\mathrm{produce}(x)))$$ should be monotone increasing in $$x$$
3. $$\mathrm{complexity}(y) \leq \mathrm{complexity}(x)$$ for $$y \in \mathrm{simplify}(x)$$

In general it would be nice if the distribution of $$\mathrm{produce}(x)$$ minimized the expected complexity, but I think that would be too restrictive.
The idea here is that the entropy of the distribution is a good measure of how spread out the search space is – a low entropy distribution will be very concentrated, whereas a high entropy distribution will be very spread out. This makes it a good "size" function. The requirement that the expected complexity be monotone increasing captures the idea of the search space spreading out, and the requirement that simplification not increase the complexity captures the idea of moving towards the values more like what you generated at low size.

Here are some examples of how you might produce search strategies. A search strategy for positive real numbers could be:

\begin{align*} \mathrm{complexity}(x) & = x \\ \mathrm{simplify}(x) & = x, \ldots, x - n \mbox{ for } n < x\\ \mathrm{produce}(h) & = \mathrm{Exp}(e^h - 1) \\ \end{align*}

The exponential distribution seems to be a nice choice because it's a maximum entropy distribution for a given expectation, but I don't really know if it's a minimal choice of expectation for a fixed entropy.

Another example: given search strategies for $$S$$ and $$T$$ we can produce a search strategy for $$S \times T$$ as follows:

\begin{align*} \mathrm{complexity}(x, y) & = \mathrm{complexity}_S(x) + \mathrm{complexity}_T(y)\\ \mathrm{simplify}(x, y) & = [(a, b) : a \in \mathrm{simplify}_S(x), b \in \mathrm{simplify}_T(y)] \\ \mathrm{produce}(h) & = (\mathrm{produce}_S(\frac{1}{2}h), \mathrm{produce}_T(\frac{1}{2}h)) \\ \end{align*}

The first two should be self-explanatory. The produce function works because the entropy of a product of independent variables is the sum of the entropies of its components. It might also be potentially interesting to distribute the entropy less uniformly through the components, but this is probably simplest.
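As a rough illustration of these two examples, here is a sketch in Python. The names `SearchStrategy`, `positive_reals` and `product_strategy` are made up for this sketch (they are not the hypothesis API), and I am reading $\mathrm{Exp}(e^h - 1)$ as an exponential distribution with mean $e^h - 1$:

```python
import math
import random
from collections import namedtuple

# Hypothetical container for the (complexity, simplify, produce) triple.
SearchStrategy = namedtuple("SearchStrategy", ["complexity", "simplify", "produce"])

# Strategy for positive reals: produce(h) draws from an exponential with
# mean e^h - 1, i.e. rate 1 / (e^h - 1); guarded against h = 0.
positive_reals = SearchStrategy(
    complexity=lambda x: x,
    simplify=lambda x: [x - n for n in range(1, int(x) + 1)],  # x-1, x-2, ... while > 0
    produce=lambda h: random.expovariate(1.0 / max(math.exp(h) - 1.0, 1e-12)),
)

def product_strategy(s, t):
    """Combine strategies for S and T into a strategy for S x T."""
    return SearchStrategy(
        complexity=lambda p: s.complexity(p[0]) + t.complexity(p[1]),
        simplify=lambda p: [(a, b) for a in s.simplify(p[0]) for b in t.simplify(p[1])],
        # Entropies of independent components add, so split h evenly.
        produce=lambda h: (s.produce(h / 2.0), t.produce(h / 2.0)),
    )

pairs = product_strategy(positive_reals, positive_reals)
x, y = pairs.produce(2.0)
assert x >= 0 and y >= 0
# Property 3: simplifications never increase complexity.
for p in pairs.simplify((3.5, 2.5)):
    assert pairs.complexity(p) <= pairs.complexity((3.5, 2.5))
```

One design wrinkle this exposes: the pair simplifier above simplifies both coordinates at once, whereas in practice you would probably also want variants that simplify one coordinate while leaving the other alone.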
You can also generate a search strategy for sequences: given a search strategy for the natural numbers $$\mathbb{N}$$ and a search strategy for $$S$$, we can generate a search strategy for $$[S]^{< \omega}$$. If we define the complexity of a sequence as the sum of the complexities of its components, we can define its simplifications as coordinatewise simplifications of subsequences. produce is the only hard one to define. Its definition goes as follows: we will generate the length as a random variable $$N = \mathrm{produce}_{\mathbb{N}}(i)$$ for some entropy $$i$$. We will then allocate a total entropy $$j$$ to the sequence of length $$N$$, so the coordinates will be generated as $$x_i = \mathrm{produce}_S(\frac{j}{N})$$. Let $$T$$ be the random variable of the sequence produced. The value of $$N$$ is completely determined by $$T$$, so $$h(T) = h(T,N)$$. We can then use conditional entropy to calculate this: we have that $$h(T | N = n) = j$$ because we allocated the entropy equally between each of the coordinates. So $$h(T) = h(N) + h(T | N) = i + j$$.

So we can allocate the entropy between coordinates and length as we wish – either an equal split, or biasing in favour of longer sequences with simpler coordinates, or shorter sequences with more complex coordinates.

Anyway, those are some worked examples. It seems to work reasonably well, and is more pleasantly principled than the current ad hoc approach. It remains to be seen how well it works in practice.

# Stateful testing with hypothesis

The idea of stateful testing is that it is "quickcheck for mutable data structures". The way it works is that rather than trying to produce arguments which falsify an example, we instead try to produce a sequence of operations which break a data structure. Let me show you.
We're going to start with the following broken implementation of a set:

```python
class BadSet:
    def __init__(self):
        self.data = []

    def add(self, arg):
        self.data.append(arg)

    def remove(self, arg):
        for i in xrange(0, len(self.data)):
            if self.data[i] == arg:
                del self.data[i]
                break

    def contains(self, arg):
        return arg in self.data
```

Because it uses an array internally to store its items and doesn't check if an item is already contained when adding it, if you add an item twice and then remove it then the item will still be there. (Obviously this is a really stupid example, but it should demonstrate how it works for more complicated examples.)

Now let's write some tests that break this! The operations we want to test are adding and removing elements. So we do the following:

```python
from hypothesis.statefultesting import StatefulTest, step, requires

class BadSetTester(StatefulTest):
    def __init__(self):
        self.target = BadSet()

    @step
    @requires(int)
    def add(self, i):
        self.target.add(i)
        assert self.target.contains(i)

    @step
    @requires(int)
    def remove(self, i):
        self.target.remove(i)
        assert not self.target.contains(i)
```

We can now ask this to produce us a breaking example:

```
>>> print BadSetTester.breaking_example()
[('add', 0), ('add', 0), ('remove', 0)]
```

Note the nicely minimized example sequence.

At the moment this code is very much a work in progress – what I have works, but should probably be considered a sketch of how it might work rather than a finished product. As such, feedback would very definitely be appreciated.

# Quickcheck style testing in python with hypothesis

So I've been tinkering a bit more with hypothesis. I would now cautiously label it "probably not too broken and I'm unlikely to completely change the API". The version number is currently 0.0.4 though, so you should certainly regard it with some degree of suspicion.
However I'm now reasonably confident that it's good enough for simple usage, in the sense that I expect bug reports from most people who use it in anger, but probably not all people, and probably not many bug reports from most. :-)

Since the initial version I've mostly been thinking about the API and fixing anything that was really obviously stupid. I've also cleaned up a bunch of internals. The implementation is still pretty far from perfect, but it's no longer awful. Most importantly, I've added a feature to it that makes it actually useful. Test integration! You can now use it for some fairly pretty quickcheck style testing:

```python
@given(int, int)
def test_int_addition_is_commutative(x, y):
    assert x + y == y + x

@given(str, str)
def test_str_addition_is_commutative(x, y):
    assert x + y == y + x
```

The first test will pass, because int addition really is commutative, but of course string addition is not, so what you'll get is an error from the test suite:

```
x = '0', y = '1'

    @given(str,str)
    def test_str_addition_is_commutative(x,y):
>       assert x + y == y + x
E       assert '01' == '10'
E         - 01
E         + 10
```

This doesn't currently support passing keyword arguments through to the tests – all arguments have to be positional. I think I know how to fix this, I just haven't sorted out the details yet.

But yeah, quickcheck style testing with test case minimization (look at how nice the counter-example it produced was!). If that turns out to be exactly what you've always wanted, go wild.

# An interview question

Edit for warning: This problem has proven substantially harder than I intended it to be. The spec keeps turning out to be ambiguous and needing patching, and there are enough nasty edge cases that basically no one gets everything right (indeed, my reference implementation has just been pointed out to contain a bug). I'm going to leave it as is for the challenge, but be warned.
This isn't a question I've ever used to interview people, but it's not dissimilar from the coding test we've got very good results from at Aframe (the problem is not at all similar, but the setup is fairly similar). I've been pondering this as an alternative, and I thought it might be interesting to share it with people. I'll explain what it's designed to test in a later post. If you want to answer it, I'm happy to grade your answer, but there's no exciting reward for doing so other than public or private recognition of a job well done. Details on this after the question.

### The problem

You are required to write a command line program which reads lines which are terminated by '\n' or EOF from its STDIN until it gets an end of file, and then writes a stably sorted version of them to STDOUT. Where a line is terminated by EOF it should have an implicit '\n' inserted after it. It is perfectly OK to use library sort functions for this. Additionally, the ability to handle large numbers of lines is not required – the only performance requirement is to sort 500 lines of less than 1000 bytes each in less than 10 seconds (this should not be in any way onerous). You can write an external sort if you want, and you'll get extra kudos for it, but it is in no way required for a correct answer. The task is to implement the comparator used for the sorting, not to implement a sorting algorithm.

The comparator to be implemented is as follows: each line is to be considered to be an arbitrary sequence of non-'\n' bytes, which will be interpreted as a sequence of non-overlapping tokens. A token is EITHER:

• a sequence of ascii letter characters a-zA-Z
• a numeric string. This is a decimal representation of a number, containing 1 or more digits 0-9, possibly with a leading - sign, possibly with a single decimal point (.) followed by additional digits. The decimal point must come after at least one digit. No other characters are permitted. e.g. -5 and 5.0 are valid numeric tokens but +5, .5 and 5.
are not (though they all contain a valid numeric token: 5). Non-ascii, or ascii but non-alphanumeric, bytes may be present in the line, but must not be considered to be part of a token. Each token should be the longest possible token it can be, with ambiguity resolved in favour of making the leftmost token longer. So for example the line "foobar" is the token "foobar", not the tokens "foo" and "bar", and "3.14.5" is "3.14", "5". Note also that lines may contain characters other than those permitted in tokens, and that tokens are not necessarily separated by whitespace: "-10.11foo10 kittens%$£$%" should be tokenized as -10.11, foo, 10, kittens.

Lines should then be compared lexicographically as their list of tokens (as usual, if one is a prefix of the other then the shorter one comes first), with individual tokens being compared as follows:

1. Two numeric tokens should be compared by their numeric value when interpreted as a decimal representation
2. Any numeric token is less than any non-numeric token
3. Non-numeric tokens should be compared lexicographically by single character, with characters compared case insensitively. Case insensitivity should be performed as in English, with 'a' corresponding to 'A', 'b' to 'B', etc.

### Submission

If you want me to grade your answer, email it to me at [email protected] If it's not obvious how to run it, please include instructions. I will need to be able to run it on a linux VM. Please include:

1. The source for your solution, either attached or as a link
3. If so, how you want to be cited (pseudonym, full name, etc. I'm also happy to include a link)
4. Whether you're OK with me using your source code as an example in follow up posts (I will assume you are unless you explicitly say you are not)
5.
Roughly how long you spent on the problem (I won't publish this except in aggregate, it's mostly just for my information)

Grading is as follows:

F: You have suffered from a serious failure to read the spec
C: You got some of it right, but there are significant omissions
B: You have mostly got it right, but you missed some edge cases
A: You have passed every test case I can throw at it
A+: And you implemented an external sort or otherwise did something clever. Go you!

(Note that there are no rewards for cleverness if you haven't got the basic problem right. Such is life.)

### Hall of fame

(In order of solutions coming in)

1. Dave Stark with a grade of B, then A. Bonus kudos also for the fact that his solution uncovered a bug in my reference solution
2. Alexander Azarov with a grade of B. Also his first solution uncovered some ambiguities in my spec
3. Kat (a different one than I've referenced here before). B on her first try followed by an A (she got all the hard edge cases right the first time but was tripped up by an annoying one). Also kudos for pointing out a lot of spec ambiguities.
4. Eiríkr Åsheim too gets kudos for the first solution which got everything right first time. Also for rolling his own IO code and finite state machine.

Feel free to point out problems in the spec. Note that a lack of detail is not a problem, but ambiguity is. Do not post solutions in the comments. I will delete your comment. I will also delete or edit any comment I think gives too much away. Also, comments of the form "What a stupidly easy test" will be deleted unless you have submitted a solution that was graded A.

# A (rewritten) manifesto for error reporting

So I wrote A manifesto for error reporting. I stand by it entirely, but it did end up more of a diatribe than a manifesto, and it mixed implementation details with end results. This post contains largely the same information but with less anger and hopefully clearer presentation.
### The Manifesto

This is a manifesto for how errors should be reported by software to technical people whose responsibility it is to work with said software – it is primarily focused on the information that programmers need, but it's going to be a lot of help to ops people as well. The principles described in here apply equally whether you're writing a library or an application. They do not apply to how you should report errors to a non-technical end-user. That's an entirely different problem.

This is primarily about how errors appear when represented in text formats – either through some sort of alerting mechanism or logs. It doesn't cover more advanced tools like debuggers and live environments. Textual reports of errors are a lowest common denominator across multiple languages and are important to get right even if you have better tools.

The guiding principle you should follow is that the way you report errors is important, and that you should think as carefully about how you convey information in failure cases as about how you behave in non-failure cases. A moderate amount of careful forethought at this point can prevent a vast amount of effort and frustration at a later date. In particular, when crafting software you should think about what information the person who is attempting to debug the problem is going to need. This information primarily takes three forms:

1. What, specifically, is the problem that occurred?
2. What has triggered this problem?
3. Where in the code has this problem occurred?

If you bear these three questions in mind, and make sure to provide enough information to answer them, it will stand you in good stead. What follows is some specific advice for helping people answer these questions.

#### Be as specific as you can in your error messages

Your error messages should not be too long – a sentence is typically more than enough. They should however be descriptive, and tell you what happened.
A bad example of an error message is "Invalid state". Better is "Transaction aborted". Better yet: "Cannot commit an aborted transaction". Rather than merely telling you that the state is invalid, the error message tells you which invalid state you were in and what it is preventing you from doing.

#### Error messages should contain pertinent information about the values that produced them

This is not a good error message: "Index out of range". This is: "Index 8 is out of range for array of length 7". You could also do "Index 8 is out of range for array [1,2,3,4,5,6,7]"; the problem with this is that if the array gets very large then so does the error message. So while error messages should contain information about the values that generated them, they do not need to contain the entire value: only enough information about it to say why it triggered this error.

Another error message you shouldn't generate with the exact value: "Failure to process credit card number XXXX XXX XXX XXX". Even ignoring the specific laws around processing credit card numbers, you should obviously not be logging confidential or secret information about users like this. So there are reasons why your error messages can't always sensibly contain the full values that triggered them. That being said, it's much easier to recreate a problem if you can recreate the exact value, so it's a good default to include more rather than less, and you should certainly be including some.

#### Error messages should locate where in the code they occur

In an ideal world, every error message would come with a complete stack trace that says exactly the chain of calls that it went through to get there. If absolutely necessary, and if you're generating good and expressive error messages, it's sufficient to include just the file and line number where the error occurred, but it's not perfect and gives you much less information about how the problem was triggered.
The reason this is so important is that determining where the problem occurs in code is one of the first steps of any debugging process, so you can save a lot of time and effort for the person debugging by doing this for them at the point of the error. In most languages, if you are using exceptions, you get pretty close to this by default. On POSIX systems in C or C++ you can apparently also do this with the backtrace function. Additionally, you should make a best effort to include stack traces when crossing process boundaries through RPC mechanisms: if a remote procedure can reasonably report a stack trace, it should report a stack trace and you should include that in your error report.

#### You should not mask lower level errors

It is common to wrap lower level errors in high level ones. It is also common to alter the display of errors in code you're calling – e.g. in testing frameworks. When you do either of these things, the golden rule you must follow is that you should not remove information from the lower level errors, as they may be the most informative evidence the developer debugging the problem has about what actually went wrong. In particular, if you are rethrowing exceptions you need to take steps to ensure that you include the original stack trace and error message (in many languages it is possible to alter the stack trace of the exception you're throwing, and you can use this to chain the stack traces together). Additionally you should never remove stack trace elements for display (it is acceptable to e.g. compress adjacent lines into a single one with a counter for repetitions; it's OK to change the display, but not to remove information).

#### Error conditions should not be covered up

It is often tempting to believe that it is the code's responsibility to attempt to cover for an error and keep on working regardless. Sometimes this is even viable and true.
Sometimes however an error is more likely to be a sign of developer error which should be addressed sooner rather than later, and even when it is not an obvious developer error it is likely a symptom of something going genuinely wrong. As a consequence, unless an error condition is genuinely routine (a rough rule of thumb here would be "can reasonably be expected to happen multiple times a day and we're not going to do anything about that"), it should be reported. It is fine for the code to recover from the error and attempt to proceed regardless, but the error needs to be logged. Even if it's not a problem that needs fixing, it may end up being symptomatic of other problems.

#### Errors should be reported when you enter an invalid state, not just when you attempt to operate whilst in one

One of the most common errors to see in a Java application is a NullPointerException. In Ruby it's similarly common to see a NameError or a NoMethodError. Inevitably this is because a value has been allowed to enter somewhere that it shouldn't be permitted. Other forms of invalid state are also possible, but they basically all come down to the same thing: your error is not caused by what you are currently doing, it is caused by what has come before. Your debugging now has to backtrack to find the point at which the object was put into an invalid state, because where the error appears to be occurring is of no help to you.

The solution to this is to validate your state when it changes: if data is only permitted to be within a certain range of values, check that it belongs to that range of values when you set or change it. This means that the problem will be caught at the point where it occurs rather than the point where it causes problems.

In summary:
# Multiplicative Group of Reduced Residues/Examples/Modulo 8 ## Example of Multiplicative Group of Reduced Residues Consider the reduced residue system $\Z'_8$ modulo $8$ under modulo multiplication: $\Z'_8 = \set {\eqclass 1 8, \eqclass 3 8, \eqclass 5 8, \eqclass 7 8}$ $\struct {\Z'_8, \times_8}$ is the multiplicative group of reduced residues modulo 8. ### Cayley Table $\begin{array}{r|rrrr} \times_8 & \eqclass 1 8 & \eqclass 3 8 & \eqclass 5 8 & \eqclass 7 8 \\ \hline \eqclass 1 8 & \eqclass 1 8 & \eqclass 3 8 & \eqclass 5 8 & \eqclass 7 8 \\ \eqclass 3 8 & \eqclass 3 8 & \eqclass 1 8 & \eqclass 7 8 & \eqclass 5 8 \\ \eqclass 5 8 & \eqclass 5 8 & \eqclass 7 8 & \eqclass 1 8 & \eqclass 3 8 \\ \eqclass 7 8 & \eqclass 7 8 & \eqclass 5 8 & \eqclass 3 8 & \eqclass 1 8 \\ \end{array}$
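The Cayley table above can be verified by brute force. A minimal sketch in Python, which also confirms that every element is its own inverse, so the group is abelian with every element of order at most $2$ (the Klein four-group):

```python
# Brute-force check of the Cayley table for (Z'_8, x_8).
units = [1, 3, 5, 7]  # the reduced residues modulo 8

table = {(a, b): (a * b) % 8 for a in units for b in units}

# Closure: every product is again a reduced residue mod 8.
assert all(v in units for v in table.values())

# [1]_8 is the identity.
assert all(table[(1, a)] == a for a in units)

# Every element is its own inverse: a^2 = 1 (mod 8) ...
assert all(table[(a, a)] == 1 for a in units)

# ... and the table is symmetric, so the group is abelian.
assert all(table[(a, b)] == table[(b, a)] for a in units for b in units)
```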
# Product rule in calculus

## Statement

The derivative of a product $g(x) * j(x)$ is $g'(x) * j(x) + g(x) * j'(x)$.

$$\tfrac{d}{dx}\bigg(g(x) * j(x)\bigg) = g'(x) * j(x) + g(x) * j'(x)$$

## Proof

Define a function $f$.

$$f(x) = g(x) * j(x)$$

$$f'(x) = \lim_{h \to 0}\left(\frac{f(x + h) - f(x)}{h}\right)$$

$$f'(x) = \lim_{h \to 0}\left(\frac{g(x + h) * j(x + h) - g(x) * j(x)}{h}\right)$$

Now subtract and add $g(x) * j(x + h)$.

$$f'(x) = \lim_{h \to 0}\left(\frac{g(x + h) * j(x + h) - \overbrace{ g(x) * j(x + h) }^\text{subtract} + \overbrace{ g(x) * j(x + h) }^\text{add} - g(x) * j(x)}{h}\right)$$

Split the limit.

$$f'(x) = \lim_{h \to 0}\left(\frac{g(x + h) * j(x + h) - g(x) * j(x + h)}{h}\right) + \lim_{h \to 0}\left(\frac{g(x) * j(x + h) - g(x) * j(x)}{h}\right)$$

Factor out $j(x + h)$ in the first limit, and factor out $g(x)$ in the second. Then move those factors out of the fraction.

$$f'(x) = \lim_{h \to 0}\left(\frac{j(x + h) * \bigg(g(x + h) - g(x)\bigg)}{h}\right) + \lim_{h \to 0}\left(\frac{g(x) * \bigg(j(x + h) - j(x)\bigg)}{h}\right)$$

$$f'(x) = \lim_{h \to 0}\left(\frac{g(x + h) - g(x)}{h} * j(x + h) \right) + \lim_{h \to 0}\left(\frac{j(x + h) - j(x)}{h} * g(x) \right)$$

Now split the limits again.

$$f'(x) = \lim_{h \to 0}\left(\frac{g(x + h) - g(x)}{h}\right) * \lim_{h \to 0}\bigg(j(x + h)\bigg) + \lim_{h \to 0}\left(\frac{j(x + h) - j(x)}{h}\right) * \lim_{h \to 0}\bigg(g(x)\bigg)$$

Notice:

• The first and third limits are the definitions of the derivatives of $g(x)$ and $j(x)$.
• In the second limit, $h$ goes to $0$, so $j(x + h)$ goes to $j(x + 0) = j(x)$.
• There is no $h$ in the fourth limit, so this limit just becomes $g(x)$.

$$f'(x) = g'(x) * j(x) + j'(x) * g(x)$$

## Proofs building upon this proof

### Quotient rule in calculus

This proof shows that the derivative of the quotient $\frac{a}{b}$ is $\frac{a'b - ab'}{b^2}$.
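The product rule derived above can also be sanity-checked numerically with a central finite difference. A small sketch in Python, with $g$ and $j$ chosen arbitrarily for illustration:

```python
import math

def derivative(f, x, h=1e-6):
    # Central difference: error is O(h^2) for smooth f.
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Arbitrarily chosen example functions (any differentiable pair works):
g = math.sin
j = lambda x: x * x
g_prime = math.cos           # known derivative of g
j_prime = lambda x: 2.0 * x  # known derivative of j

x = 1.3
lhs = derivative(lambda t: g(t) * j(t), x)   # numerical d/dx of the product
rhs = g_prime(x) * j(x) + g(x) * j_prime(x)  # product rule
assert abs(lhs - rhs) < 1e-5
```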
# Recent Posts 312 posts found posted over 1 year ago distler 72 posts Forum: Heterotic Beast – Topic: MySQL Gotcha If you’re going to use Heterotic Beast in production, you need to be running MySQL 5.5.3 or later, and follow the advice in this blog post. Otherwise, the lack of support for Unicode will come back to bite you. posted over 1 year ago distler 72 posts Forum: itex2MML – Topic: weird math fonts P.S.: Congratulations on figuring out how to make this page ill-formed! It took a bit of work to fix the issue. posted over 1 year ago admin 53 posts edited over 1 year ago Forum: itex2MML – Topic: weird math fonts Perhaps you need to install the STIX fonts (see here for some slightly out-of-date, but still useful instructions). I see those calligraphic letters all set in the same font. And, moreover $ℙ$ and $ℚ$ are set upright (as, for that matter, are $𝔸$ and $𝕓$). Alas, what you see is strongly-dependent on what fonts you have installed. In more detail: On my system, ℬ (U+212C) is available in STIXGeneral, Apple Symbols and Arial Unicode MS. But 𝒜 (U+1D49C) is only available in STIXGeneral. In current versions of Firefox, I believe the default value of font.mathfont-family is MathJax_Main, STIXNonUnicode, STIXSizeOneSym, STIXSize1, STIXGeneral, Asana Math, Symbol, DejaVu Sans, Cambria Math so the version in STIXGeneral is what I see. posted over 1 year ago jl345 6 posts edited over 1 year ago Forum: itex2MML – Topic: weird math fonts Maybe it’s just general persnicketiness on my part, but why do $\mathrm{ℬℰℱℋℐℒℳℛ}$ appear (in Firefox and rekonq) in a different type than the letters $\mathrm{𝒜𝒞𝒟𝒢𝒥𝒦𝒩𝒪𝒫𝒬𝒮𝒯𝒰𝒱𝒲𝒳𝒴𝒵}$? And how can I get a letter like $ℙ$ or $ℚ$ to stand upright like $\mathrm{ℙℚ}$ but by itself without getting italicized? (Sorry for the new username. I lost my password and for some reason Yahoo can’t get mail from the forums.) posted over 1 year ago jl344 4 posts Forum: itex2MML – Topic: Bugs Good enough then and thank you. 
I thought $X_a b$ was producing an unacceptable space, but that was just a Firefox peculiarity.

posted over 1 year ago admin 53 posts Forum: itex2MML – Topic: Bugs

That's a "feature", not a bug. As described here, $ab$ is a single token in itex (translated to the single identifier ab); $a b$ are two tokens (translated to the two identifiers a and b). MathML is semantically-richer than (La)TeX, and this convention gives you the ability to enter multi-character tokens, which will be interpreted as such, when translated to MathML.

posted over 1 year ago jl344 4 posts Forum: itex2MML – Topic: Bugs

Identifiers lumped together in MathML output. Example: $X_ab$ produces output ${X}_{\mathrm{ab}}$. Output should be more or less ${X}_{a}b$. Two things wrong with this: first, the "b" would never be subscripted in LaTeX, and second, two variables should not be lumped in the same element because this causes MathML to set them in upright rather than italic type. So each variable really needs to be in its own element.

posted over 1 year ago jl344 4 posts Forum: Instiki – Topic: How could I preview/undo changes?

Thanks! Nothing wrong with that at all, now that I see how it works. I did find the history after I'd been experimenting for more than half an hour. I'm glad I found this wiki.

posted over 1 year ago admin 53 posts Forum: Instiki – Topic: How could I preview/undo changes?

Every page has a "History", a "Diff" between successive Revisions, and the ability to Rollback to a prior Revision. Instead of a Preview, successive edits, within 1/2 hour, by the same user, do not create a new Revision. So save your work as you go and you'll be able to see your progress and still not lose anything (remember, with other wikis, changes which have been previewed, but not saved, will be lost).

posted over 1 year ago jl344 4 posts Forum: Instiki – Topic: How could I preview/undo changes?
Hi, I’m totally new to the Instiki software, and having just installed it I’m curious as to whether there is a preview and/or some history to make it possible to undo changes. There doesn’t seem to be. I guess I’m just nervous that I’ll inadvertently trash something I’m working on and not be able to get it back. Anyways, I really like the simplicity of potentially being able to quickly edit and put math notes and papers on the web with MathML, and being able to print them out real nice with LaTeX, too. Great work! I haven’t been able to find any other software out there that can do that, except for UniWakka, which unfortunately doesn’t look like it’s being maintained anymore. posted over 1 year ago admin 53 posts Forum: Instiki – Topic: Instiki 0.19.4 I’ve released Instiki 0.19.4. This is a security and bugfix release. Everyone should upgrade. posted over 1 year ago Bernhard Sta... 4 posts Forum: Instiki – Topic: Bugs Apparently, math is completely broken in Opera 12. I installed Opera 12 today and since then, I’ve been getting “Error parsing MathML: ErrorUnknown source” errors in place of every formula. Firefox works fine, so it’s not a server bug. On http://golem.ph.utexas.edu/wiki/instiki/show/Sandbox I get the error message, while the demos from http://www.mathjax.org/demos/ work without complaint, so I guess it’s an instiki or itex2mml bug - or a bug in Opera, of course. posted over 1 year ago Bernhard Sta... 4 posts edited over 1 year ago Forum: Instiki – Topic: Feature Requests Sorry for not answering for so long - but I would still like to discuss the feature I wrote about a few months ago. Is this a feature that is implemented somewhere? : Your short description is slightly … underspecified. So looking at an actual implementation would be helpful to me, in deciding whether this is something to implement in Instiki. 
That’s true, my description is underspecified - it was just some ideas shooting through my head that I didn’t formulate clearly, and I also didn’t research existing approaches. The problem can be described as follows: Both mathematical concepts and their presentation are moving targets. Mathematical notation is developed together with mathematical concepts and is permanently being refined afterwards, as one can witness in the discussions on nLab. Notations are the “interface” through which humans interact with mathematical concepts, so elegant, intuitive, consistent notations are important for understanding them. However, it has been a time-consuming job to keep notations consistent and to update existing work to new notations, effectively impeding the improvement of notation and in the end mathematics itself - at least because of time wasted, and IMO also by suboptimal notation leading to suboptimal intuition. I think that this is a consequence of a deeper problem, namely that authors have to manipulate the presentation of the mathematical concepts they describe. My opinion is that in collaborative mathematics platforms, article authors should rather manipulate the mathematical concepts themselves. TeX-derived typesetting systems are very good at typesetting mathematical notation and should be used for presentation, but when describing mathematical concepts, you shouldn’t be bothered with such details. Rather than using TeX, I’d suggest using syntax customized for the respective field of mathematics. What I wanted to point out is that software for collaborative mathematics platforms like nLab may offer the chance to solve that problem and maybe even endorse improvement of mathematical notation by making a clear distinction between mathematical notation on the one hand, and mathematical presentation on the other hand. Mathematical concepts would be stored using a schema/ontology spanning all fields of mathematics. 
Mathematical notation (= wiki syntax) defined for each respective field of mathematics could then be used to write about these concepts. The presentation would be implemented using a transformation from the schema/ontology to MathML or TeX or whatever. Once there is a mechanism for representing ontologies/schemata of mathematical concepts, it becomes possible for authors to choose any notation offered, or define their own. When the notation is changed to a newer version, automatic migration schemes may enable easier switching to the new notation. And as for the presentation, the reader could then decide for himself whether he prefers the comma category written using a downwards arrow or a slash.

I actually found a project that might have similar aims, namely SWiM. But what I see in the related article doesn't really look like what I'd expect from a wiki - the source code in the screenshot on page 4 looks worse than LISP, in my eyes. I think that this problem is caused by forcing authors to use some one-size-fits-all notation of OpenMath or similar, so my suggestion should make such a wiki usable.

posted over 1 year ago

mmadsen | 5 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Thank you for your help!

posted over 1 year ago

admin | 53 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Launchd doesn't play well with RVM. There are workarounds, e.g. here.

posted over 1 year ago

mmadsen | 5 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Or… not quite there yet.

- irb + require iconv: great
- starting instiki 0.19.3 from the command line: great
- starting instiki from the plist with launchctl is saying "Bundler couldn't find some gems. Did you run bundle install?
(RuntimeError)"

Since it does not give this error when run from the command line, and "ruby bundle install" and bundle update come up clean, I'm assuming that somehow this is running with the wrong context out of the launch daemon plist, given multiple rubies in the system under rvm (although the default is set). So I edited the instiki shell script to point to the correct, specific ruby directory, and I still get the same thing in the system logs when using launchctl. At all points, launchctl is being executed as an unprivileged user, and specifies the same unprivileged user in the plist file. Has anyone encountered this before?

posted over 1 year ago

mmadsen | 5 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Fixed. Had two ruby 1.9.2 installations in rvm; one had the iconv stuff fixed, one didn't. Will try the instiki upgrade again. Thanks for pointing me in the right direction!

posted over 1 year ago

mmadsen | 5 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

require 'iconv' returns:

    1.9.2p180 :001 > require 'iconv'
    LoadError: no such file to load -- iconv
        from <internal:lib/rubygems/custom_require>:29:in `require'
        from :29:in `require'
        from (irb):1
        from /usr/local/rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in '

and when I try to install iconv as a gem:

    bash-3.2# gem install iconv
    Fetching: iconv-0.1.gem (100%)
    Building native extensions. This could take a while…
    ERROR: Error installing iconv:
    ERROR: Failed to build gem native extension.
    /usr/local/rvm/rubies/ruby-1.9.2-p180/bin/ruby extconf.rb
    checking for iconv() in iconv.h… no
    checking for iconv() in -liconv… no
    *** extconf.rb failed ***

posted over 1 year ago

admin | 53 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Hmm. And does

    # irb
    > require 'iconv'

return 'true'? What does

    # /usr/bin/env ruby -v

return?
posted over 1 year ago

mmadsen | 5 posts | Forum: Instiki – Topic: Installation/Upgrade Issues

Hi, I'm having trouble both upgrading an existing Instiki installation (0.19.1 MML+) and installing a plain vanilla 0.19.3 from github. The issue seems to surround iconv, and none of the solutions or suggestions I've tried from googling have worked. The platform is OSX Lion, using Ruby 1.9.2_p180, gcc compiler version:

    Using built-in specs.
    Target: i686-apple-darwin11
    Configured with: /private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/src/configure --disable-checking --enable-werror --prefix=/Developer/usr/llvm-gcc-4.2 --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-prefix=llvm- --program-transform-name=/^cg$/s/$/-4.2/ --with-slibdir=/usr/lib --build=i686-apple-darwin11 --enable-llvm=/private/var/tmp/llvmgcc42/llvmgcc42-2336.1~1/dst-llvmCore/Developer/usr/local --program-prefix=i686-apple-darwin11- --host=x86_64-apple-darwin11 --target=i686-apple-darwin11 --with-gxx-include-dir=/usr/include/c++/4.2.1
    Thread model: posix
    gcc version 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.1.00)

The installation of 0.19.3 seems to go well, ruby bundle pulls down everything 0.19.3 requires, but the following occurs upon attempting to start instiki:

    mark:instiki-0.19.3/ (master) $ ./instiki -p 2501
    14:26:37 NOTE: Gem.source_index is deprecated, use Specification. It will be removed on or after 2011-11-01.
    Gem.source_index called from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/railties/lib/rails/gem_dependency.rb:21.
    /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/inflector.rb:3:in `require': no such file to load -- iconv (LoadError)
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/inflector.rb:3:in `<top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext/integer/inflections.rb:1:in `require'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext/integer/inflections.rb:1:in `<top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext/integer.rb:2:in `require'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext/integer.rb:2:in `<top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext.rb:8:in `require'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext.rb:8:in `block in <top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext.rb:8:in `each'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support/core_ext.rb:8:in `<top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support.rb:56:in `require'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/vendor/rails/activesupport/lib/active_support.rb:56:in `<top (required)>'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/script/server:7:in `require'
        from /Users/mark/Dropbox/Research/Instiki/instiki-0.19.3/script/server:7:in `<top (required)>'
        from ./instiki:6:in `load'
        from ./instiki:6:in `<main>'

I had previously rebuilt ruby 1.9.2 in rvm, after doing rvm pkg install iconv and rebuilding 1.9.2 pointing at the rvm
version of iconv, as suggested online. I'm stumped. Has anyone else seen this? What did you do to solve it? Thanks!

posted over 1 year ago

distler | 72 posts | Forum: Instiki – Topic: Some questions

> Is there a way to link to a category? My idea is to have a lot of categories, but only link to the most important ones from the homepage. I couldn't figure out any way to do that.

Not sure what you are after. Perhaps you mean to link to the page listing all pages in category 'foo'. The url for that is /list/foo .

> Macros for iTex: Is there any way to define per-page or global macros?

No. Though this is a much-discussed question.

> Errors for iTex: So far it seems like if some TeX expression doesn't work you just get the source rendered, but there is no way to find exactly where the error is. This way if I have a long equation I am left to hunt through the whole thing for the missed bracket or parenthesis. Is there something I'm missing?

I agree that itex's error-reporting is pretty useless. Depending on the type of error (a missed brace or bracket, say), LaTeX's is often not much better. Here, at least, you know which equation to look at for the error, as each equation is parsed separately, and errors can't spill over as they sometimes do in LaTeX.

> Linking and/or embedding local files: I am running Instiki locally, but I am syncing the whole thing online, so I can use it from more than one computer. I often use Xournal (on a tablet PC) to take notes/do calculations. While for high-level results, or summaries, Instiki is fine, for long/messy calculations it's a lot faster to just hand-write them in Xournal. Ideally, I want to be able to link to a Xournal file from Instiki and have some quick way of viewing or editing it. Right now it seems that the only way is to put a file:/// url, but that requires syncing two things separately, and making sure the url's make sense on every computer I am using.

Look at Instiki's file upload capability.
That probably doesn't help you very much from the point of view of syncing between different computers (as each Instiki installation will have its own set of uploaded files).

> Editing SVG graphics: This is something that I'm pretty sure is a bug. It seems like unless there is empty space before and after the svg tags, the "Edit SVG graphic" button doesn't show up. I think it doesn't like ”`
# geometry

## Classes

### Primitives

- `Primitive`: Base class for geometric primitives.
- `Bezier`: A Bezier curve.
- `Circle`: A circle is defined by a plane and a radius.
- `Ellipse`: An ellipse is defined by a plane and a major and minor axis.
- `Frame`: A frame is defined by a base point and two orthonormal base vectors.
- `Line`: A line is defined by two points.
- `Plane`: A plane is defined by a base point and a normal vector.
- `Point`: A point is defined by XYZ coordinates.
- `Polygon`: An object representing an ordered collection of points in space connected by straight line segments, forming a closed boundary around the interior space.
- `Polyline`: A polyline is a sequence of points connected by line segments.
- `Pointcloud`: Class for working with pointclouds.
- `Quaternion`: Creates a Quaternion object.
- `Vector`: A vector is defined by XYZ components and a homogenisation factor.

### Shapes

- `Shape`: Base class for geometric shapes.
- `Box`: A box is defined by a frame and its dimensions along the frame's x-, y- and z-axes.
- `Capsule`: A capsule is defined by a line segment and a radius.
- `Cone`: A cone is defined by a circle and a height.
- `Cylinder`: A cylinder is defined by a circle and a height.
- `Polyhedron`: A polyhedron is defined by its vertices and faces.
- `Sphere`: A sphere is defined by a point and a radius.
- `Torus`: A torus is defined by a plane and two radii. (CPython)

### Transformations

- `Projection`: Create a projection transformation.
- `Reflection`: Creates a Reflection that mirrors points at a plane.
- `Rotation`: Create a rotation transformation.
- `Scale`: Creates a scale transformation.
- `Shear`: Create a shear transformation.
- `Transformation`: The Transformation represents a 4x4 transformation matrix.
- `Translation`: Create a translation transformation.

## Functions

### Predicates 2D

- `is_ccw_xy`: Determine if c is on the left of ab when looking from a to b, and assuming that all points lie in the XY plane.
- `is_colinear_xy`: Determine if three points are colinear on the XY-plane.
- `is_intersection_line_line_xy`: Verifies if two lines intersect on the XY-plane.
- `is_intersection_segment_segment_xy`: Determines if two segments, ab and cd, intersect.
- `is_point_in_circle_xy`: Determine if a point lies in a circle lying on the XY-plane.
- `is_point_in_convex_polygon_xy`: Determine if a point is in the interior of a convex polygon lying on the XY-plane.
- `is_point_in_polygon_xy`: Determine if a point is in the interior of a polygon lying on the XY-plane.
- `is_point_in_triangle_xy`: Determine if a point is in the interior of a triangle lying on the XY-plane.
- `is_point_on_line_xy`: Determine if a point lies on a line on the XY-plane.
- `is_point_on_polyline_xy`: Determine if a point is on a polyline on the XY-plane.
- `is_point_on_segment_xy`: Determine if a point lies on a given line segment on the XY-plane.
- `is_polygon_convex_xy`: Determine if the polygon is convex on the XY-plane.
- `is_polygon_in_polygon_xy`: Determine if a polygon is in the interior of another polygon on the XY-plane.

### Predicates 3D

- `is_colinear`: Determine if three points are colinear.
- `is_colinear_line_line`: Determine if two lines are colinear.
- `is_coplanar`: Determine if the points are coplanar.
- `is_intersection_line_line`: Verifies if two lines intersect.
- `is_intersection_line_plane`: Determine if a line (ray) intersects with a plane.
- `is_intersection_line_triangle`: Verifies if a line (ray) intersects with a triangle.
- `is_intersection_plane_plane`: Verifies if two planes intersect.
- `is_intersection_segment_plane`: Determine if a line segment intersects with a plane.
- `is_intersection_segment_segment`: Verifies if two segments intersect.
- `is_polygon_convex`: Determine if a polygon is convex.
- `is_point_behind_plane`: Determine if a point lies behind a plane.
- `is_point_infront_plane`: Determine if a point lies in front of a plane.
- `is_point_in_circle`: Determine if a point lies in a circle.
- `is_point_in_halfspace`: Determine if a point lies in front of a plane.
- `is_point_in_polyhedron`: Determine if the point lies inside the given polyhedron.
- `is_point_in_triangle`: Determine if a point is in the interior of a triangle.
- `is_point_on_line`: Determine if a point lies on a line.
- `is_point_on_plane`: Determine if a point lies on a plane.
- `is_point_on_polyline`: Determine if a point is on a polyline.
- `is_point_on_segment`: Determine if a point lies on a given line segment.

### Transformations

- `mirror_point_plane`: Mirror a point about a plane.
- `mirror_points_line`: Mirror multiple points about a line.
- `mirror_points_line_xy`: Mirror multiple points about a line.
- `mirror_points_plane`: Mirror multiple points about a plane.
- `mirror_points_point`: Mirror multiple points about a point.
- `mirror_points_point_xy`: Mirror multiple points about a point.
- `mirror_vector_vector`: Mirrors a vector about a vector.
- `orient_points`: Orient points from one plane to another.
- `project_point_line`: Project a point onto a line.
- `project_point_line_xy`: Project a point onto a line in the XY plane.
- `project_point_plane`: Project a point onto a plane.
- `project_points_line`: Project points onto a line.
- `project_points_line_xy`: Project points onto a line in the XY plane.
- `project_points_plane`: Project multiple points onto a plane.
- `rotate_points`: Rotates points around an arbitrary axis in 3D.
- `rotate_points_xy`: Rotates points in the XY plane around the Z axis at a specific origin.
- `reflect_line_plane`: Bounce a line off a reflection plane.
- `reflect_line_triangle`: Bounce a line off a reflection triangle.
- `scale_points`: Scale points.
- `scale_points_xy`: Scale points in the XY plane.
- `translate_points`: Translate points.
- `translate_points_xy`: Translate points in the XY plane.
- `axis_angle_vector_from_matrix`: Returns the axis-angle vector of the rotation matrix M.
- `axis_angle_from_quaternion`: Returns an axis and an angle of rotation from the given quaternion.
- `basis_vectors_from_matrix`: Returns the basis vectors from the rotation matrix R.
- `compose_matrix`: Calculates a matrix from the components of scale, shear, euler_angles, translation and perspective.
- `decompose_matrix`: Calculates the components of rotation, translation, scale, shear, and perspective of a given transformation matrix M.
- `euler_angles_from_matrix`: Returns Euler angles from the rotation matrix M according to the specified axis sequence and type of rotation.
- `euler_angles_from_quaternion`: Returns Euler angles from a quaternion.
- `axis_and_angle_from_matrix`: Returns the axis and the angle of the rotation matrix M.
- `identity_matrix`: Construct an identity matrix.
- `local_axes`
- `local_to_world_coordinates`: Convert local coordinates to global coordinates.
- `matrix_determinant`: Calculates the determinant of a square matrix M.
- `matrix_from_axis_and_angle`: Calculates a rotation matrix from a rotation axis, an angle and an optional point of rotation.
- `matrix_from_axis_angle_vector`: Calculates a rotation matrix from an axis-angle vector.
- `matrix_from_basis_vectors`: Creates a rotation matrix from basis vectors (= orthonormal vectors).
- `matrix_from_change_of_basis`: Computes a change of basis transformation between two frames.
- `matrix_from_euler_angles`: Calculates a rotation matrix from Euler angles.
- `matrix_from_frame`: Computes a change of basis transformation from world XY to the frame.
- `matrix_from_frame_to_frame`: Computes a transformation between two frames.
- `matrix_from_orthogonal_projection`: Returns an orthogonal projection matrix to project onto a plane.
- `matrix_from_parallel_projection`: Returns a parallel projection matrix to project onto a plane.
- `matrix_from_perspective_entries`: Returns a matrix from perspective entries.
- `matrix_from_perspective_projection`: Returns a perspective projection matrix to project onto a plane along lines that emanate from a single point, called the center of projection.
- `matrix_from_quaternion`: Calculates a rotation matrix from quaternion coefficients.
- `matrix_from_scale_factors`: Returns a 4x4 scaling transformation.
- `matrix_from_shear`: Constructs a shear matrix by an angle along the direction vector on the shear plane (defined by a point and a normal).
- `matrix_from_shear_entries`: Returns a shear matrix from the 3 factors for the x-y, x-z, and y-z axes.
- `matrix_from_translation`: Returns a 4x4 translation matrix in row-major order.
- `matrix_inverse`: Calculates the inverse of a square matrix M.
- `orthonormalize_axes`: Corrects xaxis and yaxis to be unit vectors and orthonormal.
- `quaternion_canonize`: Converts a quaternion into a canonic form if needed.
- `quaternion_conjugate`: Conjugate of a quaternion.
- `quaternion_from_axis_angle`: Returns a quaternion describing a rotation around the given axis by the given angle.
- `quaternion_from_euler_angles`: Returns a quaternion from Euler angles.
- `quaternion_from_matrix`: Returns the 4 quaternion coefficients from a rotation matrix.
- `quaternion_is_unit`: Checks if a quaternion is unit-length.
- `quaternion_multiply`: Multiplies two quaternions.
- `quaternion_norm`: Calculates the length (Euclidean norm) of a quaternion.
- `quaternion_unitize`: Makes a quaternion unit-length.
- `transform_frames`: Transform multiple frames with one transformation matrix.
- `transform_points`: Transform multiple points with one transformation matrix.
- `transform_vectors`: Transform multiple vectors with one transformation matrix.
- `translation_from_matrix`: Returns the 3 values of translation from the matrix M.
- `world_to_local_coordinates`: Convert global coordinates to local coordinates.

CPython:

- `dehomogenize_numpy`: Dehomogenizes points or vectors.
- `dehomogenize_and_unflatten_frames_numpy`: Dehomogenize a list of vectors and unflatten the 2D list into a 3D list.
- `homogenize_numpy`: Homogenizes points or vectors.
- `homogenize_and_flatten_frames_numpy`: Homogenize a list of frames and flatten the 3D list into a 2D list using numpy.
- `local_to_world_coordinates_numpy`: Convert local coordinates to global (world) coordinates.
- `transform_points_numpy`: Transform multiple points with one Transformation using numpy.
- `transform_vectors_numpy`: Transform multiple vectors with one Transformation using numpy.
- `world_to_local_coordinates_numpy`: Convert global coordinates to local coordinates.

### Linear algebra

- `add_vectors`: Add two vectors.
- `add_vectors_xy`: Add two vectors, assuming they lie in the XY-plane.
- `allclose`: Returns True if two lists are element-wise equal within a tolerance.
- `argmax`: Returns the index of the first maximum value within an array.
- `argmin`: Returns the index of the first minimum value within an array.
- `close`: Returns True if two values are equal within a tolerance.
- `cross_vectors`: Compute the cross product of two vectors.
- `cross_vectors_xy`: Compute the cross product of two vectors, assuming they lie in the XY-plane.
- `dehomogenize_vectors`: Dehomogenise a list of vectors.
- `divide_vectors`: Element-wise division of two vectors.
- `divide_vectors_xy`: Element-wise division of two vectors assumed to lie in the XY plane.
- `dot_vectors`: Compute the dot product of two vectors.
- `dot_vectors_xy`: Compute the dot product of two vectors, assuming they lie in the XY-plane.
- `homogenize_vectors`: Homogenise a list of vectors.
- `length_vector`: Calculate the length of the vector.
- `length_vector_xy`: Compute the length of a vector, assuming it lies in the XY plane.
- `length_vector_sqrd`: Compute the squared length of a vector.
- `length_vector_sqrd_xy`: Compute the squared length of a vector, assuming it lies in the XY plane.
- `multiply_matrices`: Multiply a matrix with a matrix.
- `multiply_matrix_vector`: Multiply a matrix with a vector.
- `multiply_vectors`: Element-wise multiplication of two vectors.
- `multiply_vectors_xy`: Element-wise multiplication of two vectors assumed to lie in the XY plane.
- `norm_vector`: Calculate the length of a vector.
- `norm_vectors`: Calculate the norm of each vector in a list of vectors.
- `normalize_vector`: Normalise a given vector.
- `normalize_vector_xy`: Normalize a vector, assuming it lies in the XY-plane.
- `normalize_vectors`: Normalise multiple vectors.
- `normalize_vectors_xy`: Normalise multiple vectors, assuming they lie in the XY plane.
- `orthonormalize_vectors`: Orthonormalize a set of vectors.
- `power_vector`: Raise a vector to the given power.
- `power_vectors`: Raise a list of vectors to the given power.
- `scale_vector`: Scale a vector by a given factor.
- `scale_vector_xy`: Scale a vector by a given factor, assuming it lies in the XY plane.
- `scale_vectors`: Scale multiple vectors by a given factor.
- `scale_vectors_xy`: Scale multiple vectors by a given factor, assuming they lie in the XY plane.
- `square_vector`: Raise a vector to the power 2.
- `square_vectors`: Raise multiple vectors to the power 2.
- `subtract_vectors`: Subtract one vector from another.
- `subtract_vectors_xy`: Subtract one vector from another, assuming they lie in the XY plane.
- `sum_vectors`: Calculate the sum of a series of vectors along the specified axis.
- `transpose_matrix`: Transpose a matrix.
- `vector_average`: Average of a vector.
- `vector_component`: Compute the component of u in the direction of v.
- `vector_component_xy`: Compute the component of u in the direction of v, assuming they lie in the XY-plane.
- `vector_standard_deviation`: Standard deviation of a vector.
- `vector_variance`: Variance of a vector.

### Analytical

- `archimedean_spiral_evaluate`: Evaluates a spiral at a parameter.
- `circle_evaluate`: Evaluates a circle at a parameter.
- `helix_evaluate`: Evaluates a helix at a parameter.
- `logarithmic_spiral_evaluate`: Evaluates a logarithmic spiral at a parameter.

### Points, Vectors, Lines, Planes, Circles

- `angle_points`: Compute the smallest angle between the vectors defined by three points.
- `angle_points_xy`: Compute the smallest angle between the vectors defined by the XY components of three points.
- `angle_vectors`: Compute the smallest angle between two vectors.
- `angle_vectors_xy`: Compute the smallest angle between the XY components of two vectors.
- `angle_vectors_signed`: Computes the signed angle between two vectors.
- `angles_points`: Compute the two angles between the two vectors defined by three points.
- `angles_points_xy`: Compute the two angles between the two vectors defined by the XY components of three points.
- `angles_vectors`: Compute the two angles formed by a pair of vectors.
- `angles_vectors_xy`: Compute the angles between the XY components of two vectors.
- `angle_planes`: Compute the smallest angle between the two normal vectors of two planes.
- `centroid_points`: Compute the centroid of a set of points.
- `centroid_points_xy`: Compute the centroid of a set of points lying in the XY-plane.
- `centroid_points_weighted`: Compute the weighted centroid of a set of points.
- `circle_from_points`: Construct a circle from three points.
- `circle_from_points_xy`: Create a circle from three points lying in the XY-plane.
- `midpoint_point_point`: Compute the midpoint of two points.
- `midpoint_point_point_xy`: Compute the midpoint of two points lying in the XY-plane.
- `midpoint_line`: Compute the midpoint of a line defined by two points.
- `midpoint_line_xy`: Compute the midpoint of a line defined by two points.
- `tangent_points_to_circle_xy`: Calculates the tangent points on a circle in the XY plane.

### Polygons & Polyhedrons

- `area_polygon`: Compute the area of a polygon.
- `area_polygon_xy`: Compute the area of a polygon lying in the XY-plane.
- `area_triangle`: Compute the area of a triangle defined by three points.
- `area_triangle_xy`: Compute the area of a triangle defined by three points lying in the XY-plane.
- `centroid_polygon`: Compute the centroid of the surface of a polygon.
- `centroid_polygon_xy`: Compute the centroid of the surface of a polygon projected to the XY plane.
- `centroid_polygon_vertices`: Compute the centroid of the vertices of a polygon.
- `centroid_polygon_vertices_xy`
- `centroid_polygon_edges`: Compute the centroid of the edges of a polygon.
- `centroid_polygon_edges_xy`
- `centroid_polyhedron`: Compute the center of mass of a polyhedron.
- `normal_polygon`: Compute the normal of a polygon defined by a sequence of points.
- `normal_triangle`: Compute the normal vector of a triangle.
- `normal_triangle_xy`: Compute the normal vector of a triangle assumed to lie in the XY plane.
- `volume_polyhedron`: Compute the volume of a polyhedron represented by a closed mesh.

### Point Sets

- `KDTree`: A tree for nearest neighbor search in a k-dimensional space.
- `bestfit_plane`: Fit a plane to a list of (more than three) points.
- `bounding_box`: Computes the axis-aligned minimum bounding box of a list of points.
- `bounding_box_xy`: Compute the axis-aligned minimum bounding box of a list of points in the XY-plane.
- `convex_hull`: Construct the convex hull of a set of points.
- `convex_hull_xy`: Computes the convex hull of a set of 2D points.

CPython:

- `bestfit_circle_numpy`: Fit a circle through a set of points.
- `bestfit_frame_numpy`: Fit a frame to a set of points.
- `bestfit_plane_numpy`: Fit a plane through more than three (non-coplanar) points.
- `bestfit_sphere_numpy`: Returns the center and radius of the sphere that fits best through a set of points.
- `convex_hull_numpy`: Compute the convex hull of a set of points.
- `convex_hull_xy_numpy`: Compute the convex hull of a set of points in the XY plane.
- `icp_numpy`: Align two point clouds using the Iterative Closest Point (ICP) method.
- `oabb_numpy`
- `oriented_bounding_box_numpy`: Compute the oriented minimum bounding box of a set of points in 3D space.
- `oriented_bounding_box_xy_numpy`: Compute the oriented minimum bounding box of a set of points in the XY plane.

### Distance

- `closest_line_to_point`: Compute the closest line to a point from a list of lines.
- `closest_point_in_cloud`: Calculates the closest point in a pointcloud.
- `closest_point_in_cloud_xy`: Calculates the closest point in a list of points in the XY-plane.
- `closest_point_on_line`: Computes the closest point on a line to a given point.
- `closest_point_on_line_xy`: Compute the closest point on a line (continuous) to a given point lying in the XY-plane.
- `closest_point_on_plane`: Compute the closest point on a plane to a given point.
- `closest_point_on_polyline`: Find the closest point on a polyline to a given point.
- `closest_point_on_polyline_xy`: Compute the closest point on a polyline to a given point, assuming they both lie in the XY-plane.
- `closest_point_on_segment`: Computes the closest point on a line segment (p1, p2) to a test point.
- `closest_point_on_segment_xy`: Compute the closest point on a line segment to a given point lying in the XY-plane.
- `distance_line_line`: Compute the shortest distance between two lines.
- `distance_point_point`: Compute the distance between points a and b.
- `distance_point_point_xy`: Compute the distance between points a and b, assuming they lie in the XY plane.
- `distance_point_point_sqrd`: Compute the squared distance between points a and b.
- `distance_point_point_sqrd_xy`: Compute the squared distance between points a and b lying in the XY plane.
- `distance_point_line`: Compute the distance between a point and a line.
- `distance_point_line_xy`: Compute the distance between a point and a line, assuming they lie in the XY-plane.
- `distance_point_line_sqrd`: Compute the squared distance between a point and a line.
- `distance_point_line_sqrd_xy`: Compute the squared distance between a point and a line lying in the XY-plane.
- `distance_point_plane`: Compute the distance from a point to a plane defined by an origin point and a normal.
- `distance_point_plane_signed`: Compute the signed distance from a point to a plane defined by an origin point and a normal.

### Intersections

- `intersection_circle_circle_xy`: Calculates the intersection points of two circles in 2D lying in the XY plane.
- `intersection_ellipse_line_xy`: Computes the intersection of an ellipse and a line in the XY plane.
- `intersection_line_box_xy`: Compute the intersection between a line and a box in the XY plane.
- `intersection_line_line_xy`: Compute the intersection of two lines, assuming they lie on the XY plane.
- `intersection_line_line`: Computes the intersection of two lines.
- `intersection_line_plane`: Computes the intersection point of a line and a plane.
- `intersection_line_segment_xy`
- `intersection_line_segment`: Compute the intersection of a line and a segment.
- `intersection_line_triangle`: Computes the intersection point of a line (ray) and a triangle based on the Moeller-Trumbore intersection algorithm.
- `intersection_mesh_mesh`: Compute the intersection of two meshes.
- `intersection_plane_circle`: Computes the intersection of a plane and a circle.
- `intersection_plane_plane_plane`: Computes the intersection of three planes.
- `intersection_plane_plane`: Computes the intersection of two planes.
- `intersection_polyline_plane`: Calculate the intersection point of a plane with a polyline.
- `intersection_ray_mesh`: Compute the intersection(s) between a ray and a mesh.
- `intersection_segment_plane`: Computes the intersection point of a line segment and a plane.
- `intersection_segment_polyline`: Calculate the intersection point of a segment and a polyline.
- `intersection_segment_polyline_xy`: Calculate the intersection point of a segment and a polyline on the XY-plane.
- `intersection_segment_segment`: Compute the intersection of two line segments.
- `intersection_segment_segment_xy`: Compute the intersection of two line segments, assuming they lie in the XY plane.
- `intersection_sphere_line`: Computes the intersection of a sphere and a line.
- `intersection_sphere_sphere`: Computes the intersection of two spheres.

### Offsets

- `offset_line`: Offset a line by a distance.
- `offset_polyline`: Offset a polyline by a distance.
- `offset_polygon`: Offset a polygon (closed) by a distance.

### Interpolation

- `barycentric_coordinates`: Compute the barycentric coordinates of a point with respect to a triangle.
- `discrete_coons_patch`: Creates a Coons patch from a set of four or three boundary polylines (ab, bc, dc, ad).
- `tween_points`: Compute the interpolated points between two sets of points.
- `tween_points_distance`: Compute an interpolated set of points between two sets of points, at a given distance.

### Boolean operations

- `boolean_union_mesh_mesh`: Compute the boolean union of two triangle meshes.
- `boolean_difference_mesh_mesh`: Compute the boolean difference of two triangle meshes.
- `boolean_intersection_mesh_mesh`: Compute the boolean intersection of two triangle meshes.
### Triangulation

- `conforming_delaunay_triangulation`: Construct a conforming Delaunay triangulation of a set of vertices, constrained to the specified segments.
- `constrained_delaunay_triangulation`: Construct a Delaunay triangulation of a set of vertices, constrained to the specified segments.
- `delaunay_from_points`: Computes the Delaunay triangulation for a list of points.
- `delaunay_triangulation`: Construct a Delaunay triangulation of a set of vertices.

CPython:

- `delaunay_from_points_numpy`: Computes the Delaunay triangulation for a list of points using Numpy.
- `voronoi_from_points_numpy`: Generate a Voronoi diagram from a set of points.

### Triangle meshes

- `trimesh_gaussian_curvature`: Compute the discrete Gaussian curvature of a triangle mesh.
- `trimesh_geodistance`: Compute the geodesic distance from every vertex of the mesh to a source vertex.
- `trimesh_harmonic`: Compute the harmonic parametrisation of a triangle mesh within a fixed circular boundary.
- `trimesh_isolines`: Compute isolines on a triangle mesh using a scalarfield of data points assigned to its vertices.
- `trimesh_lscm`: Compute the least-squares conformal map of a triangle mesh.
- `trimesh_mean_curvature`: Compute the discrete mean curvature of a triangle mesh.
- `trimesh_massmatrix`: Compute the mass matrix of a triangle mesh using a scalarfield of data points assigned to its vertices.
- `trimesh_principal_curvature`: Compute the principal curvature directions of a triangle mesh.
- `trimesh_remesh`: Remeshing of a triangle mesh.
- `trimesh_remesh_constrained`: Constrained remeshing of a triangle mesh.
- `trimesh_remesh_along_isoline`: Remesh a mesh along an isoline of a scalarfield over the vertices.
- `trimesh_slice`: Slice a mesh by a list of planes.
- `quadmesh_planarize`: Planarize the faces of a quad mesh.
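Many of the 2D predicates in the listing above reduce to sign tests on cross products. As a plain-Python sketch (an illustration of the technique, not the library's actual source; the function name simply mirrors the `is_point_in_triangle_xy` entry):

```python
# Illustrative sketch, not library code: a 2D point-in-triangle test
# based on the sign of the z-component of cross products in the XY plane.

def cross_z(a, b, c):
    """Z-component of the cross product of vectors ab and ac."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def is_point_in_triangle_xy(point, triangle):
    """True if `point` lies inside or on the boundary of `triangle`,
    with all coordinates interpreted in the XY plane."""
    a, b, c = triangle
    d1 = cross_z(a, b, point)
    d2 = cross_z(b, c, point)
    d3 = cross_z(c, a, point)
    # The point is inside iff it is on the same side of all three edges,
    # i.e. the three signed areas do not have mixed signs.
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)
```

Because only signs are compared, the test works for triangles given in either winding order.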
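Rotation helpers like the listed `matrix_from_axis_and_angle` are typically built on the Rodrigues rotation formula. A minimal sketch, assuming a 4x4 row-major matrix, an axis through the origin, and an angle in radians (again an illustration, not the library's implementation, which also supports an optional point of rotation):

```python
# Hedged sketch of a rotation matrix from an axis and an angle
# (Rodrigues formula), for column vectors p' = M @ p.
from math import cos, sin, sqrt

def matrix_from_axis_and_angle(axis, angle):
    """4x4 rotation matrix about `axis` (through the origin) by `angle` radians."""
    x, y, z = axis
    n = sqrt(x * x + y * y + z * z)   # normalize the axis first
    x, y, z = x / n, y / n, z / n
    c, s = cos(angle), sin(angle)
    t = 1.0 - c
    return [
        [t * x * x + c,     t * x * y - s * z, t * x * z + s * y, 0.0],
        [t * x * y + s * z, t * y * y + c,     t * y * z - s * x, 0.0],
        [t * x * z - s * y, t * y * z + s * x, t * z * z + c,     0.0],
        [0.0,               0.0,               0.0,               1.0],
    ]
```

For example, a quarter turn about the Z axis maps the X axis onto the Y axis (the first column of the matrix becomes approximately (0, 1, 0)).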
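Functions such as the listed `area_polygon_xy` and `centroid_polygon_xy` can be understood through the shoelace formula. Here is a self-contained sketch for simple (non-self-intersecting, non-degenerate) polygons given as XY coordinate pairs; an illustration only, not the library source:

```python
# Hedged sketch: polygon area and centroid in the XY plane
# via the shoelace formula. Assumes a simple, non-degenerate polygon.

def area_polygon_xy(polygon):
    """Unsigned area of a simple polygon given as a list of (x, y) pairs."""
    n = len(polygon)
    a2 = 0.0  # twice the signed area
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        a2 += x1 * y2 - x2 * y1
    return abs(a2) / 2.0

def centroid_polygon_xy(polygon):
    """Centroid of the surface of a simple polygon (not of its vertices)."""
    n = len(polygon)
    a2 = cx = cy = 0.0
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        w = x1 * y2 - x2 * y1        # signed area contribution of the edge
        a2 += w
        cx += (x1 + x2) * w
        cy += (y1 + y2) * w
    return (cx / (3.0 * a2), cy / (3.0 * a2))
```

Note that the centroid of the surface generally differs from the centroid of the vertices (compare the separate `centroid_polygon_vertices` entry), except for symmetric shapes.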
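Intersection functions in the style of the listed `intersection_segment_segment_xy` usually solve the two parametric line equations and check that both parameters fall in [0, 1]. A hedged sketch (the tolerance handling here is deliberately simplistic compared to a production implementation):

```python
# Illustrative sketch, not library code: intersection of two 2D segments.
# Each segment is a pair of (x, y) endpoints; returns the intersection
# point, or None if the segments are parallel or do not overlap.

def intersection_segment_segment_xy(ab, cd, tol=1e-9):
    (x1, y1), (x2, y2) = ab
    (x3, y3), (x4, y4) = cd
    # Denominator of the parametric solution; zero means parallel lines.
    d = (x2 - x1) * (y4 - y3) - (y2 - y1) * (x4 - x3)
    if abs(d) < tol:
        return None
    # Parameters along ab (t) and cd (u); both must lie in [0, 1].
    t = ((x3 - x1) * (y4 - y3) - (y3 - y1) * (x4 - x3)) / d
    u = ((x3 - x1) * (y2 - y1) - (y3 - y1) * (x2 - x1)) / d
    if -tol <= t <= 1.0 + tol and -tol <= u <= 1.0 + tol:
        return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))
    return None
```

The corresponding predicate `is_intersection_segment_segment_xy` can be phrased as this function returning a non-None result.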
{}
We have previously found that the gills of crucian carp Carassius carassius living in normoxic (aerated) water lack protruding lamellae, the primary site of O2 uptake in fish, and that exposing them to hypoxia increases the respiratory surface area of the gills ∼7.5-fold. Here we examine whether this morphological change can also be triggered by temperature. We acclimated crucian carp to 10, 15, 20 and 25°C for 1 month, and investigated gill morphology, oxygen consumption and the critical oxygen concentration at the different temperatures. As expected, oxygen consumption increased with temperature. At 25°C, an increase in the respiratory surface area similar to that seen in hypoxia also occurred, coinciding with a reduced critical oxygen concentration. We also found that the rate of this transformation increased with rising temperature. Goldfish Carassius auratus, a close relative of the crucian carp, previously kept at 25°C, were exposed to 15°C and 7.5°C. At 7.5°C the respiratory surface area of its gills was reduced by the development of an interlamellar cell mass, as found in normoxic crucian carp kept at 10-20°C. Thus, both species alter the respiratory surface area in response to temperature. Rather than being a graded change, the results suggest that the alteration of gill morphology is triggered at a given temperature. Oxygen-binding data reveal very high oxygen affinities of crucian carp haemoglobins, particularly at high pH and low temperature, which may be a prerequisite for the reduced gill respiratory surface area at low temperatures. As ambient oxygen and temperature can both induce remodelling of the gills, the response appears primarily to be an adaptation to the oxygen demand of the fish.

Crucian carp and goldfish, two closely related species of the same genus Carassius, exhibit a striking capacity for coping with low levels of oxygen and a wide range of ambient temperatures.
Both species are anoxia tolerant and able to convert lactate to ethanol during severe hypoxia and anoxia, thus avoiding acidosis (Johnston and Bernard, 1983; Shoubridge and Hochachka, 1980; Shoubridge and Hochachka, 1983). Although this mechanism enables them to avoid lactate self-pollution during anoxia, the release of ethanol to the water is energetically very costly, due to the loss of this energy-rich carbon compound. Since their anoxic survival time depends on their glycogen stores (Nilsson, 1990), it must be advantageous to postpone the activation of anaerobic ethanol production, and to rely on aerobic metabolism, for as long as possible. Being freshwater fish, crucian carp and goldfish are faced with a dilemma: they have to cope with a continuous ion loss and water influx over the respiratory surface area in the gills (Evans, 1979), but must still maintain sufficient oxygen uptake. The water influx must be compensated by a large urine production, resulting in an even greater loss of ions. These ion losses must be compensated by energetically demanding ion transport over the gills. Thus, being able to modulate the respiratory surface area in response to oxygen supply and demand should be advantageous. We have previously shown that crucian carp kept in normoxia at 8°C lack protruding lamellae, but if exposed to hypoxia, a morphological alteration is triggered resulting in protruding lamellae and a 7.5-fold increase in respiratory surface area (Sollid et al., 2003). This caused a fall in the critical oxygen concentration ([O2]crit), i.e. the lowest ambient [O2] at which the fish is able to sustain its resting rate of oxygen consumption (ṀO2). The gill remodelling is due to an induction of apoptosis and cell-cycle arrest in the mass of cells filling the space between adjacent lamellae, causing this interlamellar cell mass (ILCM) to shrink.
A reduction in respiratory surface area in normoxia should lead to lower water and ion fluxes, and thus a reduction of osmoregulatory costs. At the same time, the crucian carp's ability to maintain a sufficient rate of oxygen uptake without protruding lamellae indicates a very high oxygen affinity of its haemoglobin (Hb), which remained to be studied. Fish are ectothermic organisms; hence increased temperature profoundly raises their metabolic rates. Increased temperature also decreases the amount of oxygen dissolved in the water. Temperature-related changes in metabolism are met with behavioural, respiratory, cardiovascular, hematological and biochemical adjustments (Aguiar et al., 2002; Burggren, 1982; Butler and Taylor, 1975; Caldwell, 1969; Fernandes and Rantin, 1989; Goldspink, 1995; Houston et al., 1996; Houston and Rupert, 1976; Maricondi-Massari et al., 1998). The responses to increased temperature may include air gulping, increased gill ventilation, increased lamellar perfusion, increased cardiac output, changes in Hb function and altered expression of metabolic enzymes. Studies relating gill morphology and temperature are scarce and only cover acute temperature changes (Hocutt and Tilney, 1985; Jacobs et al., 1981; Nolan et al., 2000; Tilney and Hocutt, 1987), which often reflect pathophysiological responses that are not necessarily adaptive. Changes in Hb function could result from changes in the levels of erythrocytic effectors such as organic phosphates (ATP, often supplemented by guanosine triphosphates in fish) or changes in Hb isomorphs (Weber, 2000). In addition to the 'standard', electrophoretically 'anodic' Hb components that display pronounced Bohr shifts, some fishes (salmonids, catfishes and eels) also have electrophoretically 'cathodic' Hbs that have lower Bohr shifts and show divergent phosphate sensitivities (which are insignificant in salmonids and large in eels and catfishes).
The Hb composition of goldfish (which is closely related to crucian carp) changes with temperature: electrophoresis reveals two isoHbs in fish acclimated to 2°C, and three isoHbs in fish acclimated to 20°C and 35°C (Houston and Cyr, 1974). This modification also occurs in isolated cells and in hemolysates, suggesting that it is caused by altered aggregation of pre-existing subunits rather than de novo Hb synthesis (Houston and Rupert, 1976). The aim of this study was to investigate whether increased temperature, leading to an increased oxygen demand, can trigger the morphological response recently found in hypoxia-exposed crucian carp (Sollid et al., 2003). At our latitude, the typical seasonal temperature range for the crucian carp habitat is 0°C to 25°C. We thus acclimated crucian carp to temperatures ranging from 10 to 25°C to examine the possible effects of changing oxygen demand on gill morphology. In addition, goldfish were acclimated to 7.5, 15 and 25°C to see whether the gill remodelling seen in crucian carp is also expressed in this closely related species when kept at low temperatures. Since goldfish are normally kept at room temperature, an ability to remodel the gills may simply not have been noticed. To identify adaptations in oxygen transport functions, we also investigated Hb multiplicity in fish acclimated to the different temperatures, and measured the intrinsic oxygen-binding properties and effector sensitivities of crucian carp Hbs.

### Animals

Crucian carp Carassius carassius L. (weighing 12.5-31.5 g; all adults) were caught in June 2003 in the Tjernsrud pond, Oslo community. They were kept on a 12 h:12 h L:D regime in tanks (∼100 fish per 500 l) continuously supplied with aerated and dechlorinated Oslo tapwater (10°C), and fed daily with commercial carp food (Tetra Pond, Tetra, Melle, Germany). Goldfish Carassius auratus L. (weighing 8.0-16.5 g; all adults), bred and cultivated in Singapore, were bought from a commercial wholesaler.
They were kept in a tank (∼100 fish per 500 l) with aerated and dechlorinated Oslo tap water (25°C), with ion strength adjusted to 500 μS cm-1 (dH-Salt, NOR ZOO, Bergen, Norway), for 1 month before experiments. The light regime and feeding were the same as for crucian carp.

### Temperature acclimation

Crucian carp were transferred to new holding tanks (∼10 fish per 25 l) held at 10°C, 15°C, 20°C and 25°C, respectively, and acclimated for 1 month before the respirometry experiments (see below). The fish were fed until 24 h before respirometry. Each fish was placed in the respirometer with a continuous flow of aerated water until 12 h before commencing measurements. To examine whether gill morphology was affected by the respirometry, four fish from each group were sampled before and after respirometry, and the left first and second gill arches were dissected out. As a control for possible effects of the confinement in the respirometer, crucian carp kept at 15°C were placed in the respirometer for 24 h and continuously supplied with aerated and dechlorinated Oslo tapwater. After the exposures, the fish were killed with a sharp blow to the head. Goldfish were transferred to a new container (∼10 fish per 25 l) with ion-strength-adjusted, aerated and dechlorinated Oslo tapwater (25°C) for 1 month, after which the gills of four fish were sampled. Subsequently, the water temperature in the container was reduced to 15°C. After 5 days at this temperature, four additional fish were sampled. The temperature was finally reduced to 7.5°C, and the gills of four fish were sampled after 5 days and after 1 month at this temperature. The fish were fed during temperature acclimation. The fish were killed for dissection of the left first and second gill arches and treated as the crucian carp.

### Respirometry

ṀO2 during falling water oxygen concentration was measured with closed respirometry, and the [O2]crit was determined as described previously (Nilsson, 1992).
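The [O2]crit determination itself follows Nilsson (1992) and is not reproduced here; purely as an illustration, a common way to extract such a break point from closed-respirometry data is a two-segment ("broken-stick") fit: ṀO2 is treated as constant (regulation) above a candidate break point and as a conforming line through the origin below it, and the candidate with the smallest squared error wins. The function name and the synthetic data below are hypothetical, not from the study.

```python
# Hypothetical broken-stick estimate of [O2]crit. Above the break point,
# MO2 is modelled as constant (regulation); below it, MO2 falls linearly
# toward zero (conformity). Illustrative only - not the Nilsson (1992) method.
def estimate_o2crit(conc, mo2):
    """Return the candidate break point with the smallest summed squared error."""
    best = (float("inf"), None)
    for c_crit in conc:  # try each measured concentration as the break point
        above = [m for c, m in zip(conc, mo2) if c >= c_crit]
        below = [(c, m) for c, m in zip(conc, mo2) if c < c_crit]
        if not above:
            continue
        plateau = sum(above) / len(above)   # regulated MO2 above the break
        slope = plateau / c_crit            # conforming line through the origin
        err = sum((m - plateau) ** 2 for m in above)
        err += sum((m - slope * c) ** 2 for c, m in below)
        if err < best[0]:
            best = (err, c_crit)
    return best[1]

# Synthetic data: plateau of 120 mg kg^-1 h^-1 above ~3.5 mg l^-1 (roughly the
# 20 degC values in Table 1), conforming decline below it.
conc = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 5.0, 6.0, 7.0, 8.0]
mo2 = [min(120.0, 120.0 / 3.5 * c) for c in conc]
print(estimate_o2crit(conc, mo2))  # -> 3.5
```

In practice, the break point would be refined between measured concentrations (e.g. by continuous two-segment regression), but the exhaustive search above conveys the idea.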
The temperature in the 1 l respirometer was the same as the acclimation temperature. Oxygen levels in the respirometer were measured with an oxygen electrode (Oxi340i, WTW, Weilheim, Germany) and recorded on a laptop computer via an analog-digital converter (Powerlab 4/20, AD Instruments Ltd., Oxon, UK). The fish were removed from the respirometer for dissection of the gills when the recorded oxygen content reached 0 mg O2 l-1.

### Scanning electron microscopy (SEM)

The gill morphology of all groups was investigated as previously described (Sollid et al., 2003). In brief, gills were fixed in 3% glutaraldehyde in 0.1 mol l-1 sodium cacodylate buffer before being dried, AuPd coated, and examined using a JSM 6400 electron microscope (JEOL, Peabody, USA).

### Hb oxygen binding

IsoHb composition was probed using a PhastSystem (Amersham Biosciences, Piscataway, NJ, USA) by isoelectric focusing on polyacrylamide gels in the pH 5-8 range. The crucian carp had been acclimated for 1 month at 16°C or 26°C prior to blood sampling. Crucian carp Hb for oxygen-binding studies was prepared from washed red cells as previously described (Weber et al., 1987). The Hb was 'stripped' of ionic effectors by column chromatography on Sephadex G25 Fine gel (Berman et al., 1971). Major isoHbs were separated by preparative isoelectric focusing using Pharmacia ampholytes (0.22% pH 5-7, 0.22% pH 6-8 and 0.11% pH 6.7-7.7). Retrieved pools were concentrated using Amicon Ultra-15 filters (molecular weight cut-off 10,000). All Hb samples were subsequently dialyzed for at least 24 h against three changes of 10 mmol l-1 Hepes buffer containing 0.5 mmol l-1 EDTA. All preparation procedures were carried out at 0-5°C. Samples were frozen at -80°C and freshly thawed for subsequent analyses. O2 equilibrium measurements at different pH values, and in the presence of 0.1 mol l-1 KCl, were carried out using a modified gas diffusion chamber as previously detailed (Weber, 1981; Wells and Weber, 1989).
### Statistics

All values are given as means ± s.e.m. Statistically significant differences were detected with one-way ANOVA followed by Tukey's post test, using GraphPad InStat (GraphPad, San Diego, CA, USA).

### Morphology

In crucian carp that were originally kept at 10°C and then exposed to higher temperatures (15, 20 and 25°C), gill morphology changed only in the 25°C group prior to the respirometry (Fig. 1a-c). Thus, the threefold increase of ṀO2 from 10 to 20°C did not trigger a change of gill morphology (Table 1).

Fig. 1. (a-f) Scanning electron micrographs from the second gill arch of crucian carp and goldfish kept at different temperatures. At 15°C (a) and 20°C (b) the crucian carp gills do not have protruding lamellae, whereas at 25°C (c) the crucian carp developed protruding lamellae in normoxia. After respirometry at 20°C (d), the crucian carp gill filaments also exhibited protruding lamellae, a response probably induced by the hypoxic period in the respirometer. After 5 days at 7.5°C (e), the gill morphology of goldfish started to resemble that of normoxic crucian carp at 10-20°C, whereas goldfish gills at 15°C (f) showed protruding lamellae. Scale bar, 50 μm.

Table 1. Respirometry data from the crucian carp

| Temp. (°C) | ṀO2 (mg kg−1 h−1) | [O2]crit (mg l−1) | ṀO2:[O2]crit |
| --- | --- | --- | --- |
| 10 | 38.9±4.5 b,c,d | 1.43±0.13 b,c,d | 27.4±2.1 b,c,d |
| 15 | 88.2±7.9 a,d | 2.45±0.18 a,c,d | 36.0±2.0 a,d |
| 20 | 122.7±12.2 a,d | 3.55±0.29 a,b | 34.5±1.1 a,d |
| 25 | 209.5±15.1 a,b,c | 4.02±0.27 a,b | 52.1±1.6 a,b,c |

Significant temperature-dependent changes in the mean values for rate of oxygen consumption (ṀO2), critical oxygen concentration ([O2]crit) and the ratio between the two at the different temperatures are indicated by ANOVA (P<0.0001). The superscript letters a, b, c and d denote significant differences (P<0.05) from the 10, 15, 20 and 25°C groups, respectively, within a variable. An increase of the ṀO2:[O2]crit ratio indicates an improvement of the capacity for oxygen uptake.

Throughout closed respirometry, the oxygen level in the respirometer drops, eventually to below the [O2]crit. Hence the fish will experience a hypoxic environment and finally anoxia (0 mg O2 l-1). At 8°C, an increase of the respiratory surface area in hypoxia takes 3 days before it is pronounced (Sollid et al., 2003). The present results show that this time period is dramatically reduced at higher temperatures. In the respirometer at 15 and 20°C, the fish experienced hypoxia and anoxia for on average 6 h before being sampled. Crucian carp at 15°C (not shown) and 20°C (Fig. 1d) underwent the characteristic remodelling of their gills to increase the respiratory surface area during these few hours in the respirometer. This change was not due to confinement (not shown). In our previous study, morphometric measurements indicated a ∼7.5-fold increase in the lamellar area exposed to water in crucian carp kept in hypoxia (Sollid et al., 2003).
The gill morphological changes of crucian carp kept at 25°C, and of those exposed to hypoxia at 15°C and 20°C in the respirometer in this study, appeared identical in extent to those seen after hypoxia in our previous study. However, since the gills were only examined by SEM in the present study, no quantitative morphometrical measurements were attempted. Goldfish at 20°C had protruding lamellae (not shown), which were indistinguishable from those seen in the 15°C group (Fig. 1f). However, in goldfish exposed to 7.5°C, a clear change in the gill filament morphology occurred. This was clearly visible after 5 days (Fig. 1e), and no further changes were apparent after 1 month (not shown). The space between adjacent lamellae was partially filled with a cell mass, as seen in crucian carp, although slightly less pronounced, as the edges of the lamellae were still visible.

### Respiration

The respirometry data for the crucian carp showed, as expected, that ṀO2 increased with temperature (P<0.0001, Fig. 2A). A temperature rise from 10°C to 25°C increased ṀO2 more than fivefold, from 38.9±4.5 mg kg-1 h-1 to 209.5±15.1 mg kg-1 h-1 (P<0.001, Table 1). The increase of ṀO2 from 10°C to 25°C led to an increase of [O2]crit from 1.43±0.13 mg l-1 to 4.02±0.27 mg l-1 (P<0.001, Table 1). However, there was a strikingly small increase in [O2]crit between 20°C and 25°C (Fig. 2B). This corresponds well with the transformation in gill morphology that occurred between these two temperatures (Fig. 1b,c).

Fig. 2. (A-F) Respirometry data from the present study of crucian carp (left) and from previous studies (right) on goldfish (Fry and Hart, 1948) and Atlantic cod (Schurmann and Steffensen, 1997), showing the effect of temperature on ṀO2 (A and D), the alteration of the critical oxygen concentration ([O2]crit) in response to different temperatures (B and E), and how the different species alter their oxygen uptake capabilities at different ṀO2 (C and F).

The relationship between ṀO2 and [O2]crit in crucian carp (Fig. 2C) was similar to literature data for goldfish (Fig. 2F). Both species show a relatively low [O2]crit at high temperatures, which indicates an improvement of their oxygen uptake capabilities that is likely to coincide with the remodelling of the gills. By contrast, the Atlantic cod (Schurmann and Steffensen, 1997) shows a steady increase in [O2]crit with rising ṀO2 (Fig. 2F), indicating that this species is incapable of any major morphological or physiological adjustments to improve its O2 uptake capacity at high temperatures. Also, crucian carp showed lower [O2]crit values than the goldfish. For example, at a ṀO2 of approximately 85 mg kg-1 h-1, the [O2]crit values were 2.4 and 3.9 mg l-1 for crucian carp and goldfish, respectively. This indicates an ability of crucian carp to extract more oxygen from the surrounding water than goldfish. The Q10 was also similar between the two species (Table 2). However, in contrast to goldfish, crucian carp exhibited a higher Q10 between 20-25°C than between 15-20°C (Table 2).

Table 2. Q10 values for crucian carp, goldfish and Atlantic cod

| Q10 regime | Crucian carp | Atlantic cod | Goldfish |
| --- | --- | --- | --- |
| 5-10°C | - | 2.6 | - |
| 10-15°C | 5.1 | 1.9 | 4.3 |
| 15-20°C | 1.9 | - | 2.9 |
| 20-25°C | 2.9 | - | 2.7 |

Q10 values (i.e.
the increase of ṀO2 observed at a 10°C higher temperature, here given for 5°C intervals) for crucian carp (present study), goldfish (Fry and Hart, 1948) and Atlantic cod (Schurmann and Steffensen, 1997).

### Hb and oxygen binding

Thin-layer isoelectric focusing of Hbs from fish acclimated to 14 or 26°C (Fig. 3) showed at least three major bands. Importantly, no consistent differences were seen in the number or relative intensities of the bands between fish acclimated to the two temperatures.

Fig. 3. Thin-layer isoelectric focusing gels of Hbs from individual crucian carp specimens acclimated to either 14 or 26°C (as indicated) for 1 month, showing correspondence in the isoHb compositions of the two groups.

As shown (Fig. 4A,B), stripped crucian carp Hbs show an extremely high oxygen affinity (P50=0.8 and 1.8 mmHg at pH 7.6 at 10 and 20°C, respectively). The Bohr effect, which approximates -0.7 at pH 7.0, decreases markedly with increasing pH and is virtually absent at pH above 7.7 at 20°C. Interestingly, cooperativity increased with decreasing pH over the entire range investigated (pH 8.4-6.4; Fig. 4A), whereas n50 values at low pH fall to unity and lower (reflecting anticooperativity) in fish Hbs that express Root effects (Brittain, 1987). The oxygen affinities decrease with increasing temperature, in agreement with the exothermic nature of haem oxygenation. As expressed by the heats of oxygenation (ΔH = -58 and -49 kJ mol-1 at pH 7.6 and 7.0, respectively), the temperature sensitivity of P50 decreases with pH. This correlates with the parallel increase in the Bohr effect and, thus, in the endothermic dissociation of the Bohr protons. By contrast, the ATP sensitivity of the Hb decreases with increasing pH (Fig. 4A), in accordance with the associated decrease in the positive charge of the phosphate-binding sites.

Fig. 4. Oxygen-binding characteristics and isoHb differentiation of crucian carp Hb, measured in the presence of 0.1 mol l-1 KCl and 0.1 mol l-1 Hepes buffers. (A) Oxygen tensions and Hill's cooperativity coefficients at 50% saturation (P50 and n50) of stripped hemolysates, and their pH dependence (Bohr plots), at 10°C (□) and 20°C (○), and of the lysate in the presence of a saturating concentration of ATP (ATP/tetrameric Hb ratio, 9.6) (•); [haem], 0.50 mmol l-1. (B) Oxygen equilibrium curves at 10°C, at 20°C, and at 20°C in the presence of saturating ATP (interpolated from the data in A). (C) Isoelectric focusing profile, showing absorptions at 540 nm (○) and pH values at 25°C (▵) of eluted fractions, and the presence of three major (II, III and IV) and two minor (I and V) isoHbs. (D) Bohr plots of isoHbs I-IV at 10 and 20°C.

Crucian carp red cells contain at least three major isoHbs (II, III and IV) and two minor ones (I and V). The elution profile (Fig.
4C) indicates relative abundances of 6% HbI, 27% HbII, 62% HbIII+IV and 5% HbV, and that Hbs I, II, III and IV are isoelectric at pH values of 6.7, 6.4, 6.9 and 5.8, respectively. All components exhibit similarly high oxygen affinities and similar Bohr effects (P50 of 1.5-1.8 mmHg at pH 7.6 and 20°C, and Bohr factors of approximately -0.30). These properties correspond with those of the stripped hemolysates, indicating the absence of functionally significant interactions between the isolated components.

The results show that both crucian carp and goldfish have the capacity to remodel their gills in response to temperature, and hence to alter the respiratory surface area. That hypoxia and high temperature induce apparently identical changes, i.e. causing the gill lamellae to protrude, suggests that the actual trigger is the oxygen demand of the fish. Another possibility is that high temperature and hypoxia independently trigger the transformation of the gills. The increase in respiratory surface area of crucian carp kept at 25°C coincided with a relatively low [O2]crit at this temperature, indicating an increase in the capacity for oxygen uptake (Fig. 2B,C). This was clearly reflected in the high ṀO2:[O2]crit ratio of crucian carp kept at 25°C (Table 1). The relationship between temperature, ṀO2 and [O2]crit of crucian carp observed in the present study (Fig. 2A-C) resembles that found in a study on goldfish more than half a century ago (Fry and Hart, 1948), which also showed an unexpectedly low [O2]crit at higher temperatures. These results can now be explained by the present finding that goldfish have protruding lamellae at high, but not low, temperatures. The reason why this transformation of gill morphology has not been observed in goldfish earlier is most likely that goldfish are traditionally kept at rather high temperatures, usually room temperature.
By contrast, the Atlantic cod, a species that presumably does not have the ability to adjust its respiratory surface area to its oxygen needs, shows a linear relationship between [O2]crit and ṀO2 (Fig. 2F; see also Schurmann and Steffensen, 1997). The results suggest that the oxygen demand of crucian carp does not trigger a remodelling of the gills unless the water temperature reaches 25°C, which is near the highest temperature that crucian carp normally experience in their habitat for short periods during the summer months (J. S. and G. E. N., unpublished observations from the Oslo area). This indicates that gills with non-protruding lamellae are able to supply the crucian carp with sufficient oxygen to sustain aerobic metabolism at 20°C, where its ṀO2 is around 120 mg kg-1 h-1 (Table 1). The capacity to sustain a high ṀO2 with a small respiratory surface area could rely on a high O2 affinity of the Hb. We measured oxygen affinity in the presence of 0.1 mol l-1 KCl, which decreases the oxygen affinity, mimicking the intracellular condition. Our data show that the high oxygen affinity (P50=1.8 mmHg at pH 7.7 and 20°C) increases markedly with falling temperature (P50=0.7 mmHg at 10°C), due to the pronounced temperature sensitivity at high in vivo pH (7.7), where the phosphate sensitivity is low (Fig. 4A,B). These properties, which appear to characterise all major isoHbs (Fig. 4D), attest to a high blood oxygen affinity, as previously recorded in goldfish (P50=2.6 mmHg at pH 7.56 and 26°C; Burggren, 1982). The remodelling of the gills appears to be rapid, since we did not observe any intermediate stages in crucian carp kept at 15°C or 20°C. Thus, it appears to be an 'on/off' response that is triggered either by hypoxia or by high temperature, or maybe by their common denominator: an increased demand for oxygen uptake. When we reduced the acclimation temperature for goldfish to 7.5°C, they remodelled their gills to a state with almost no protruding lamellae.
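The reported temperature sensitivity can be roughly cross-checked from the two P50 values given for stripped Hb at pH 7.6 (0.8 mmHg at 10°C, 1.8 mmHg at 20°C) using the integrated van't Hoff relation. The sketch below is illustrative only (the function name is ours, and no correction for the heat of O2 in solution is applied), so it recovers only the approximate magnitude of the reported heat of oxygenation at pH 7.6:

```python
import math

R = 8.314  # gas constant (J mol^-1 K^-1)

def apparent_heat_of_oxygenation(p50_cold, t_cold, p50_warm, t_warm):
    """Integrated van't Hoff estimate of DeltaH from two P50 measurements.

    Temperatures in degC; P50 in any common unit (only the ratio matters).
    """
    inv_t_cold = 1.0 / (t_cold + 273.15)
    inv_t_warm = 1.0 / (t_warm + 273.15)
    # Slope of ln(P50) vs 1/T equals DeltaH/R; DeltaH < 0 for exothermic
    # O2 binding, so P50 rises (affinity falls) with increasing temperature.
    return R * (math.log(p50_warm) - math.log(p50_cold)) / (inv_t_warm - inv_t_cold)

# Stripped crucian carp Hb at pH 7.6: P50 = 0.8 mmHg (10 degC), 1.8 mmHg (20 degC)
dH = apparent_heat_of_oxygenation(0.8, 10.0, 1.8, 20.0)
print(f"apparent DeltaH = {dH / 1000:.0f} kJ/mol")  # ~ -56 kJ/mol
```

The estimate comes out close in magnitude to the value reported in the text for pH 7.6; the residual difference presumably reflects corrections applied in the original analysis.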
Since no intermediate stages were seen in goldfish gills during the 25°C to 15°C transfer, it seems, as in crucian carp, that this is an 'on/off' response that is triggered by either temperature or ṀO2. Intriguingly, Isaia (1972) showed that the water flux across the goldfish gills increased more than five times from 5 to 25°C, which is much greater than would be expected from a diffusion process. It is tempting to suggest that at least part of this increased water flux was caused by an increase in the respiratory surface area. Indeed, Isaia (1972) suggested that the results 'must indicate either an important change in the branchial permeability during adaptation or the functioning of a greater respiratory surface at an increased temperature'. Moreover, it has been found that the common carp Cyprinus carpio, exposed to chronic hypoxia, is able to extract a higher percentage of the available oxygen than normoxic carp (Lomholt and Johansen, 1979). This could imply that the common carp has the ability to alter its respiratory surface area, possibly in a manner similar to that found in its cyprinid cousins, the crucian carp and goldfish. Moreover, a capacity for gill remodelling to increase or decrease oxygen uptake and water fluxes may not be limited to cyprinids. A gill morphology characterised by thickened lamellae, with epithelial cells being cuboidal or columnar instead of squamous, has been seen in juvenile largemouth bass kept at over-wintering temperatures close to 4°C (Leino and McCormick, 1993). The present data showed that the change from non-protruding to protruding lamellae occurs between 20 and 25°C in crucian carp, and between 7.5 and 15°C in goldfish. This may reflect species or population differences. Each year, the crucian carp we studied face a severely hypoxic and anoxic environment during the long winter period. Hence, they are more dependent on their glycogen stores than goldfish for survival.
Thus, saving energy is likely to be a more critical feature for crucian carp. A small respiratory surface area over a large temperature interval will reduce osmoregulatory costs and thereby save energy that can be stored for surviving the long winter. There was also an apparent difference in the ability of these two species to handle soft water. The crucian carp population is well adapted to soft water and does well in Oslo tapwater (20-50 μS cm-1), whereas the goldfish did not do well (did not feed and were lethargic) in Oslo tapwater. Upon recommendation from the importer, we increased the conductivity of the goldfish water to 500 μS cm-1, which had a striking positive effect on the welfare of the goldfish. It is possible that these differences in water-conductivity tolerance could be related to the difference between the two species in the temperature at which gill remodelling takes place. However, at present we can only speculate. It has been found previously that crucian carp acclimated to hypoxia have a higher ṀO2 than normoxic crucian carp (Johnston and Bernard, 1984). This increase in ṀO2 could be due to increased ventilation rates and/or elevated osmoregulatory costs of having a larger respiratory surface area. Similarly, in the present study, the crucian carp displayed a larger difference in ṀO2 between 20°C and 25°C (Q10=2.9) than between 15 and 20°C (Q10=1.9) (Table 2), which could be explained by the presence of protruding lamellae in the 25°C group, causing elevated osmoregulatory costs. By contrast, ectothermic animals generally show Q10 values that fall with increasing temperature (Prosser, 1986; Withers, 1992). Interestingly, between 10-15°C and 15-20°C, the Q10 values in goldfish (Fry and Hart, 1948) (Table 2) decrease less than in the crucian carp, which may be explained by the goldfish remodelling its gills at a lower temperature than the crucian carp.
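The Q10 comparisons above can be checked directly from the mean ṀO2 values in Table 1, using Q10 = (R2/R1)^(10/(T2−T1)); a short illustrative sketch (function and variable names are ours):

```python
# Q10 = (rate2 / rate1) ** (10 / (T2 - T1)): the factor by which the metabolic
# rate would rise over a 10 degC increase, here estimated from 5 degC steps.
def q10(rate1, rate2, t1, t2):
    """Temperature coefficient between two (temperature, rate) points."""
    return (rate2 / rate1) ** (10.0 / (t2 - t1))

# Mean crucian carp MO2 values (mg O2 kg^-1 h^-1) from Table 1
mo2 = {10: 38.9, 15: 88.2, 20: 122.7, 25: 209.5}

temps = sorted(mo2)
for t1, t2 in zip(temps, temps[1:]):
    print(f"{t1}-{t2} degC: Q10 = {q10(mo2[t1], mo2[t2], t1, t2):.1f}")
# Reproduces the crucian carp column of Table 2: 5.1, 1.9, 2.9
```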
To conclude, the present study shows that both crucian carp and goldfish have the ability to remodel their gills by changing the size of the ILCM between the lamellae. Moreover, the response, which has previously been shown to be triggered by hypoxia, can also be triggered by temperature. Thus, at high temperatures both goldfish and crucian carp display gills with clearly protruding lamellae. The remodelling of the gills to gain protruding lamellae is caused by increased apoptosis and cell-cycle arrest in the ILCM (Sollid et al., 2003). In the light of the present results, it is possible that the signals that trigger this change could include both hypoxia and high temperature, or their common denominator: the need to extract more oxygen from the water. The ability to match the respiratory surface area to oxygen needs may provide a means of reducing water and ion fluxes and, thereby, the osmoregulatory costs. However, our observations suggest that this is a sharp 'on/off' response rather than a graded change, since no intermediate stages are seen except during the short transition from one state to the other. While this transition took several days in hypoxia at 8°C (Sollid et al., 2003), the present study showed that, at 20°C, it could be completed during the few hours that the fish were exposed to hypoxia in the respirometer.

We are grateful to Anny Bang for assistance with the Hb-oxygen-binding measurements. We thank the Research Council of Norway and the Danish Natural Science Research Council for financial support.

Aguiar, L. H., Kalinin, A. L. and Rantin, F. T. (2002). The effects of temperature on the cardio-respiratory function of the neotropical fish Piaractus mesopotamicus. J. Therm. Biol. 27, 299-308.
Berman, M., Benesch, R. and Benesch, R. E. (1971). The removal of organic phosphates from hemoglobin. Arch. Biochem. Biophys. 145, 236-239.
Brittain, T. (1987). The Root effect. Comp. Biochem. Physiol. 86B, 473-481.
Burggren, W. W. (1982).
`Air gulping' improves blood oxygen transport during aquatic hypoxia in the goldfish Carassius auratus. Physiol. Zool. 55 , 327 -334. Butler, P. J. and Taylor, E. W. ( 1975 ). Effect of progressive hypoxia on respiration in dogfish (Scyliorhinus Canicula) at different seasonal temperatures. J. Exp. Biol. 63 , 117 -130. Caldwell, R. S. ( 1969 ). Thermal compensation of respiratory enzymes in tissues of goldfish. Comp. Biochem. Physiol. 31 , 79 -93. Evans, D. H. ( 1979 ). Fish. In Comparative Physiology of Osmoregulation in Animals ,vol. 1 (ed. G. M. O. Maloiy), pp. 305 Fernandes, M. N. and Rantin, F. T. ( 1989 ). Respiratory responses of Oreochromis niloticus (Pisces,Cichlidae) to environmental hypoxia under different thermal conditions. J. Fish Biol. 35 , 509 -519. Fry, F. E. J. and Hart, J. S. ( 1948 ). The relation of temperature to oxygen consumption in the goldfish. Biol. Bull. 94 , 66 -77. Goldspink, G. ( 1995 ). Adaptation of fish to different environmental-temperature by qualitative and quantitative changes in gene expression. J. Therm. Biol. 20 , 167 -174. Hocutt, C. H. and Tilney, R. L. ( 1985 ). Changes in gill morphology of Oreochromis mossambicus subjected to heat stress. Environ. Biol. Fish. 14 , 107 -114. Houston, A. H. and Cyr, D. ( 1974 ). Thermoacclimatory variation in the haemoglobin systems of goldfish(Carassius auratus) and rainbow trout (Salmo gairdneri). J. Exp. Biol. 61 , 455 -461. Houston, A. H., Dobric, N. and Kahurananga, R.( 1996 ). The nature of hematological response in fish - studies on rainbow trout Oncorhynchus mykiss exposed to stimulated winter,spring and summer conditions. Fish Physiol. Biochem. 15 , 339 -347. Houston, A. H. and Rupert, R. ( 1976 ). Immediate response of the hemoglobin system of the goldfish, Carassius auratus,to temperature change. 54 , 1737 -1741. Isaia, J. ( 1972 ). 
Comparative effects of temperature on sodium and water permeabilities of gills of a stenohaline freshwater fish (Carrassius auratus) and a stenohaline marine fish(Serranus scriba, Serranus cabrilla). J. Exp. Biol. 57 , 359 -366. Jacobs, D., Esmond, E. F., Melisky, E. L. and Hocutt, C. H.( 1981 ). Morphological changes in gill epithelia of heat stressed rainbow trout, Salmo gairdneri - Evidence in support of a temperature induced surface area change hypothesis. Can. J. Fish. Aquatic Sci. 38 , 16 -22. Johnston, I. A. and Bernard, L. M. ( 1983 ). Utilization of the ethanol pathway in carp following exposure to anoxia. J. Exp. Biol. 104 , 73 -78. Johnston, I. A. and Bernard, L. M. ( 1984 ). Quantitative study of capillary supply to the skeletal muscles of Crucian carp Carassius carassius L - effects of hypoxic acclimation. Physiol. Zool. 57 , 9 -18. Leino, R. L. and McCormick, J. H. ( 1993 ). Responses of juvenile largemouth bass to different pH and aluminum levels at overwintering temperatures - effects on gill morphology, electrolyte balance,scale calcium, liver glycogen, and depot fat. 71 , 531 -543. Lomholt, J. P. and Johansen, K. ( 1979 ). Hypoxia acclimation in carp - how it affects O-2 uptake, ventilation, and O-2 extraction from water. Physiol. Zool. 52 , 38 -49. Maricondi-Massari, M., Kalinin, A. L., Glass, M. L. and Rantin,F. T. ( 1998 ). The effects of temperature on oxygen uptake,gill ventilation and ECG waveforms in the nile tilapia, Oreochromis niloticus. J. Therm. Biol. 23 , 283 -290. Nilsson, G. E. ( 1990 ). Long term anoxia in crucian carp - Changes in the levels of amino acid and monoamine neurotransmitters in the brain, catecholamines in chromaffin tissue, and liver glycogen. J. Exp. Biol. 150 , 295 -320. Nilsson, G. E. ( 1992 ). Evidence for a role of GABA in metabolic depression during anoxia in crucian carp (Carassius carassius). J. Exp. Biol. 164 , 243 -259. Nolan, D. T., Hadderingh, R. H., Spanings, F. A. T., Jenner, H. A. and Bonga, S. E. W. 
( 2000 ). Acute temperature elevation in tap and Rhine water affects skin and gill epithelia, hydromineral balance, and gill Na+/K+-ATPase activity of brown trout (Salmo trutta) smolts. Can. J. Fish. Aquatic Sci. 57 , 708 -718. Prosser, C. L. ( 1986 ). Temperature. In (ed. C. L. Prosser), pp. 260 -321. New York: John Wiley and Sons. Schurmann, H. and Steffensen, J. F. ( 1997 ). Effects of temperature, hypoxia and activity on the metabolism of juvenile Atlantic cod. J. Fish Biol. 50 , 1166 -1180. Shoubridge, E. A. and Hochachka, P. W. ( 1980 ). Ethanol - novel end product of vertebrate anaerobic metabolism. Science 209 , 308 -309. Shoubridge, E. A. and Hochachka, P. W. ( 1983 ). The integration and control of metabolism in the anoxic goldfish. Mol. Physiol. 4 , 165 -195. Sollid, J., De Angelis, P., Gundersen, K. and Nilsson, G. E.( 2003 ). Hypoxia induces adaptive and reversible gross morphological changes in crucian carp gills. J. Exp. Biol. 206 , 3667 -3673. Tilney, R. L. and Hocutt, C. H. ( 1987 ). Changes in gill epithelia of Oreochromis mossambicus subjected to cold shock. Environ. Biol. Fish. 19 , 35 -44. Weber, R. E. ( 1981 ). Cationic control of O2 affinity in lugworm erythrocruorin. Nature 292 , 386 -387. Weber, R. E. ( 2000 ). Adaptations for oxygen transport: lessons from fish hemoglobins. In Hemoglobin Function in Vertebrates, Molecular Adapation in Extreme and Temperate Environments (ed. G. Di Prisco, B. Giardina and R. E. Weber), pp. 22 -37. Milano, Italy: Springer-Verlag. Weber, R. E., Jensen, F. B. and Cox, R. P.( 1987 ). Analysis of teleost hemoglobin by Adair and Monod-Wyman-Changeux models. Effects of nucleoside triphosphates and pH on oxygenation of tench hemoglobin. J. Comp. Physiol. B 157 , 145 -152. Wells, R. M. G. and Weber, R. E. ( 1989 ). The measurement of oxygen affinity in blood and haemaglobin solutions. In Techniques in Comparative Physiology (ed. C. R. Bridges and P. J. Butler), pp. 279 -303. Cambridge:Cambridge Univerisity Press. 
Withers, P. C. ( 1992 ). Temperature. In Comparative Animal Physiology (ed. P. C. Withers), pp. 122 -191. New York: Saunders College Publishing.
Thermodynamics

# 1 mole of an ideal gas is allowed to expand isothermally from an initial volume of 1 L to 10 L. The value of $\Delta U$ for the process is $(R = 2\ \mathrm{cal\ K^{-1}\ mol^{-1}})$

(A) 163.7 cal  (B) 1382.1 cal  (C) 9 L-atm  (D) Zero

For an isothermal process of an ideal gas, the temperature is constant, and the internal energy of an ideal gas depends only on its temperature, so $\Delta U = 0$.

Ans: (D)

answered Mar 17, 2014
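The reasoning can be checked numerically: an isothermal step of an ideal gas has $\Delta U = 0$ even though the gas does work, because all of that work is supplied as heat. A short sketch in Python (this uses the sign convention $q = \Delta U + w$, with $w$ the work done by the gas, and SI units rather than the calories in the question):

```python
import math

R = 8.314  # gas constant, J / (K mol)

def isothermal_expansion(n, T, V1, V2):
    """Reversible isothermal expansion of an ideal gas."""
    dU = 0.0                           # U = U(T) only, and T is constant
    w = n * R * T * math.log(V2 / V1)  # work done by the gas
    q = dU + w                         # first law: the heat absorbed all goes into work
    return dU, w, q

dU, w, q = isothermal_expansion(n=1.0, T=298.0, V1=1.0, V2=10.0)
print(dU)  # 0.0 -- the answer to the question, independent of T and the volumes
print(w)   # about 5.7 kJ of work done by the gas, matched by the heat absorbed
```

Note that $\Delta U = 0$ holds regardless of the temperature chosen here (298 K is an assumed value for illustration); options (A)-(C) would only arise from confusing $\Delta U$ with $q$ or $w$.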
# Magrathea 2.0 - Building Mountains

With the big crash of the universal economy, the demand for custom-made planets plunged too. The Magratheans had to look for steadier revenues from a broader class of customers. Therefore, they invented the have-your-own chain of mountains (or havoc-o-mountains for short) for people with smaller budgets who could not afford a complete planet. The mountains are built according to the customer's plan (a.k.a. strings of digits and dots) and delivered using ASCII art (consisting of spaces, /, \, ^ and v).

Write a complete program which takes its input (a single string) either from STDIN or as an argument and outputs to STDOUT. This puzzle is code-golf, so please show some attempt at golfing.

### Input

A string of dots and digits providing the basis for the mountain chain. Each string is exactly as long as necessary to support the mountains, and each peak is given by a digit instead of a dot, indicating the height of the peak.

### Output

An ASCII version of the mountain chain.

• Each digit in the input represents exactly one peak (^) at exactly the height indicated by the digit (i.e. 9 is the highest height).
• There must not be additional peaks in the output (i.e. at places where there is a dot in the input).
• Mountains are of triangular shape, i.e. slopes are created using / and \ characters.
• Passes where two mountains overlap are shaped using the character v.
• No superfluous newlines nor blank lines.
• Padding lines with trailing spaces is optional.

You may assume that the input provided is valid, i.e. there always exists a solution according to the rules (e.g. an input of 13.. would not result in a valid configuration and may be ignored). Moreover, on each side there are exactly as many dots such that the mountains must not be cropped.

### Examples

The first line shows the input, all other lines constitute the desired output. (Actually the mountains look much better in my console than here.)

```
1
^
```

```
11
^^
```

```
1.2.
 ^
^/ \
```

```
.2.3..
   ^
 ^/ \
/    \
```

```
.2..3..
    ^
 ^ / \
/ v   \
```

```
...4...3...3..
   ^
  / \   ^   ^
 /   \/ \ / \
/        v   \
```

• What a combination of poetry and art! I love it. – devnull Feb 26 '14 at 11:02
• Is printing extra newlines okay? In other words, for an input of 1, is \n\n\n\n\n\n\n\n^ allowed? – durron597 Feb 26 '14 at 11:51
• @durron597 The output should have no superfluous newlines, have a look at the examples. – Howard Feb 26 '14 at 12:15
• What about trailing space characters? Is it OK if all lines are the same length as the original string, padded out with spaces? – Paul Prestidge Feb 27 '14 at 1:37
• @Chron Yes, that's OK. – Howard Feb 27 '14 at 7:40

## Javascript: 272 268 233 232 201 192 189 188 178 180 characters

Thanks to @Sam for reducing it from 268 to 233 characters, to @manatwork for another char, and to @VadimR for pointing out a bug.

```
p=prompt(r=t='');s=' ';for(d=10;d--;r=s+q+s,t+=q.trim()?q+'\n':'')for(q='',i=0;i<p.length;)q+=' \\/v^'[p[i++]==d?4:(/\^|\\/.test(r[i-1])+2*/\^|\//.test(r[i+1]))*(r[i]==s)];alert(t)
```

Properly indented and somewhat ungolfed version with comments: // The output initialization is just a golfing trick suggested by @manatwork. input = prompt(state = output = ''); space = ' '; // Repeat for each line, from the top (the highest peak, highest digit) to the floor (digit 1). Start at 10 to avoid a bug. for (digit = 10; digit--; // Update the state of our automaton, at the end of the iteration. // Add a space after and before to simplify the future pattern recognization. state = space + line + space, // Add the line to the output if it is not an empty line, at the end of the iteration. output += line.trim() ? q + '\n' : '') { // This curly brace was added for readability, it is not in the golfed source. // Analyze each character in the current state to produce a new state, like a cellular automaton. for (line = '', i = 0; i < input.length;) { // This curly brace was added for readability, it is not in the golfed source.
line += // If the input is the current digit number, evaluate to 4 and put a peak in this character. // Otherwise evaluate this expression with those rules: // 1 means that the hill is higher only at right in the previous iteration, we do climb it to the right in this one. // 2 means that the hill is higher only at left in the previous iteration, we do climb it to the left in this one. // 3 means that the hill is higher at both sides in the previous iteration, we are in a v-shaped valley. // 0 means nothing to do here. If the middle is not a space, it will be multiplied by 0 and become 0. ' \\/v^'[input[i++] == digit ? 4 : (/\^|\\/.test(state[i - 1]) + 2 * /\^|\//.test(state[i + 1])) * (r[i] == space)]; } // This curly brace was added for readability, it is not in the golfed source. } // This curly brace was added for readability, it is not in the golfed source. // Give the final output. As you may note from the code, this works as a cellular automaton, where each cells checks for a number in the input, looks to itself and to its two neighbours to decide what the next iteration will be. At each moment a cell may be a ^, /, \, v or . The input provided in the test cases produces the expected output. Note that using the alert box sucks, since it normally does not have a monospaced font. You may copy & paste the text from the alert box to somewhere else for a better appreciation of the output, or you may replace the last line alert by console.log, but since this is code-golf, alert is shorter. Further, it does not validates anything in the input. It simply considers unrecognized characters as spaces the same way it does to . (in fact . is an unrecognized character too). • There is an old golfing trick to reduce 1 character: initialize variables with empty string as prompt()'s parameter. – manatwork Feb 27 '14 at 17:14 • @manatwork Thank you. Done. 
– Victor Stafusa Feb 27 '14 at 17:18 • Excuse me, maybe I'm missing something, but I'm getting consistent results in both FF and Chromium. I launch a browser, run JS code from revision#14, and get error message. Then I run code from revision#1 - it runs OK. Again I run 14's code - and no error message, it runs OK. So revision#14's code can not be run by itself? – user2846289 Mar 2 '14 at 16:25 • @VadimR Thanks, fixed. That was a side-effect for testing it with a poluted environment. Needed to prefix the code with delete r; delete s; delete q; delete p; delete t; delete i; delete d; to ensure that it was not poluted. – Victor Stafusa Mar 2 '14 at 18:28 • q.trim()?q+'\n':'' could be q.trim()&&q+'\n', saving two. Also, i<p.length could just be p[i]. – Nicholas Pipitone Oct 26 '18 at 18:32 ## Ruby, 208201 189 Very fun challenge! Here's an alternative Ruby solution. gets.size.times{|x|0.upto(h=$_[x].to_i-1){|d|r=$*[h-d]||=' '*~/$/ [x+d,x-d].map{|o|r[o]=r[o]>?!??v:o<x ??/:?\\if r[o]<?w} d<1?r[x]=?^:r[x-d+1,w=2*d-1]=?w*w}} puts$*.reverse.*($/).tr(?w,' ') As a bonus, here's a Ruby implementation of Victor's very clever "cellular automaton" algorithm, at 162 characters: s=gets 9.downto(1){|h|$0=(-1..s.size).map{|x|$_=$0[x,3] s[x]=="#{h}"??^:~/ [\^\/]/??/:~/[\^\\] /??\\:~/[\^\\] [\^\/]/??v:' '}*'' $*<<$0[1..-2]if$0=~/\S/} puts$* Example output: ....5.....6..6..... ^ ^ ^ / \/ \ / \ / \ / \/ \ / \ / \ • I think you may use $/ for newline. – Howard Feb 27 '14 at 7:43 C# - 588 characters - not as good as Ray's 321 though! 
class P{static void Main(string[] a){char[,] w=new char[a[0].Length+1,10];int x=0;foreach(char c in a[0]){if(c!='.'){int h=int.Parse(c+"");if(w[x,h]=='\0')w[x,h]='^';int s=1;for(int l=h-1;l>0;l--){for(int m=x-s;m<=x+s;m++){if(w[m,l]!='\0'){if(w[m,l]=='^')w[m,l]='/';if(w[m,l]=='\\')w[m,l]='v';}else{if(m==x-s)w[m,l]='/';else if(m==x+s)w[m,l]='\\';else w[m,l]='\0';}bool t=false;for(int f=9;f>0;f--){if(t)w[m,f]='\0';if(w[m,f]!='\0')t=true;}}s++;}}x++;}for(int k=9;k>0;k--){string u="";for(int j=0;j<w.GetLength(0);j++){u+=w[j,k];}if(u.Replace("\0","")!="")System.Console.WriteLine(u);}}} Example output: F:\>mountains ".2..3..4..." ^ ^ / \ ^ / v \ / v \ Or a longer more complex one... F:\>mountains ".2..3..6.....5...3......1..3..4....2." ^ / \ ^ / \ / \ ^ / \/ \ ^ ^ / \ ^ / v \ / v \ ^ / v \ ^/ \/ \ Brilliant puzzle... not as easy as it seems... loved it! • "Complex one" is ill-formed, there's no peak for "3". – user2846289 Mar 1 '14 at 17:20 • All the 3s are there. If you're talking about the first one, it's part of the slope. – Hein Wessels Oct 26 '18 at 9:30 # APL, 65 bytes ⍉⌽↑⌽¨h↑¨'^/v\'[1+(~×a)×2+×2+/2-/0,0,⍨h←¯1+⊃⌈/a-↓|∘.-⍨⍳⍴a←11|⎕d⍳⍞] ⍞ this symbol returns raw (not evaluated) input as a character array. Solving interactively, in an APL session: s←'...4...3...3..' ⍝ let's use s instead of ⍞ ⎕d ⍝ the digits 0123456789 ⎕d⍳s ⍝ the indices of s in ⎕d or 11-s if not found 11 11 11 5 11 11 11 4 11 11 11 4 11 11 11|⎕d⍳s ⍝ modulo 11, so '.' is 0 instead of 11 0 0 0 5 0 0 0 4 0 0 0 4 0 0 a←11|⎕d⍳s ⍝ remember it, we'll need it later ⍴a ⍝ length of a 14 ⍳⍴a 1 2 3 4 5 6 7 8 9 10 11 12 13 14 ⍝ ∘.- subtraction table ⍝ ∘.-⍨A same as: A ∘.- A ⍝ | absolute value |∘.-⍨⍳⍴a 0 1 2 3 4 5 6 7 8 9 10 11 12 13 1 0 1 2 3 4 5 6 7 8 9 10 11 12 2 1 0 1 2 3 4 5 6 7 8 9 10 11 ... 
13 12 11 10 9 8 7 6 5 4 3 2 1 0 ⍝ ↓ split the above matrix into rows ⍝ a- elements of "a" minus corresponding rows ⍝ ⊃⌈/ max them together ⊃⌈/a-↓|∘.-⍨⍳⍴a 2 3 4 5 4 3 3 4 3 2 3 4 3 2 ⍝ This describes the desired landscape, ⍝ except that it's a little too high. ⍝ Add -1 to correct it: ¯1+⊃⌈/a-↓|∘.-⍨⍳⍴a 1 2 3 4 3 2 2 3 2 1 2 3 2 1 ⍝ Perfect! Call it "h": h←¯1+⊃⌈/a-↓|∘.-⍨⍳⍴a 0,⍨h ⍝ append a 0 (same as h,0) 1 2 3 4 3 2 2 3 2 1 2 3 2 1 0 0,0,⍨h ⍝ also prepend a 0 0 1 2 3 4 3 2 2 3 2 1 2 3 2 1 0 2-/0,0,⍨h ⍝ differences of pairs of consecutive elements ¯1 ¯1 ¯1 ¯1 1 1 0 ¯1 1 1 ¯1 ¯1 1 1 1 ⍝ this gives us slopes between elements 2+/2-/0,0,⍨h ⍝ sum pairs: left slope + right slope ¯2 ¯2 ¯2 0 2 1 ¯1 0 2 0 ¯2 0 2 2 ×2+/2-/0,0,⍨h ⍝ signum of that ¯1 ¯1 ¯1 0 1 1 ¯1 0 1 0 ¯1 0 1 1 2+×2+/2-/0,0,⍨h ⍝ add 2 to make them suitable for indexing 1 1 1 2 3 3 1 2 3 2 1 2 3 3 ⍝ Almost ready. If at this point we replace ⍝ 1:/ 2:v 3:\, only the peaks will require fixing. ~×a ⍝ not signum of a 1 1 1 0 1 1 1 0 1 1 1 0 1 1 (~×a)×2+×2+/2-/0,0,⍨h ⍝ replace peaks with 0-s 1 1 1 0 3 3 1 0 3 2 1 0 3 3 ⍝ Now replace 0:^ 1:/ 2:v 3:\ ⍝ We can do this by indexing a string with the vector above ⍝ (and adding 1 because of stupid 1-based indexing) '^/v\'[1+(~×a)×2+×2+/2-/0,0,⍨h] ///^\\/^\v/^\\ ⍝ Looks like our mountain, only needs to be raised according to h r←'^/v\'[1+(~×a)×2+×2+/2-/0,0,⍨h] ⍝ name it for convenience h¨↑r ⍝ extend r[i] with spaces to make it h[i] long / / / ^ \ \ / ^ \ v / ^ \ \ ↑⌽¨h¨↑r ⍝ reverse each and mix into a single matrix / / / ^ \ \ / ^ \ v / ^ \ \ ⍉⌽↑⌽¨h¨↑r ⍝ reverse and transpose to the correct orientation ^ / \ ^ ^ / \/ \ / \ / v \ # Ruby, 390 characters Whew, this one was tricky. I ended up having to append to a new string for each character, using a variable s that meant "skip next character" which was needed for processing ^ and \. This output exactly the given sample output for all of the test cases. m=[gets.chomp] a=m[0].scan(/\d/).max.to_i m[0].gsub!(/./){|n|n==?. ? 
' ':a-n.to_i} s=nil until a==0 o='' m[-1].chars{|c|o+=case c when ?0;?^ when ' ';t=s;s=nil;t ? '':' ' when /\d/;(c.to_i-1).to_s when ?^;s=1;o.slice! -1;"/ \\" when ?/;t=s;s=nil;t ? "#{o.slice! -1;' '}":o.slice!(-1)=='\\' ? 'v ':"/ " when ?\\;s=1;' \\' when ?v;' ' end} m.push o a-=1 end puts (m[1..-1]*"\n").gsub /\d/,' ' Chart of what the variables mean: m | The mountain array. a | The highest height of a mountain. Used for counting when to stop. s | Whether or not to skip the next character. 1 for yes, nil for no. o | Temp string that will be appended to mountain. t | Temp variable to hold the old value of s. I'm sure I could golf it down much more, but I have to go now. Shall be improved later! • I'm struggling with the input .2.2. and can't see why it doesn't work. – Howard Feb 27 '14 at 7:46 # Java, 377 407 Edit: @Victor pointed out that this needed to be a complete program, so I added a few dozen characters to make it compilable and runnable. Just pass the "purchase order" as the first param when executing the program, like so: java M ..3.4..6..4.3.. I think this is similar in spirit to other answers, basically just traverses the "mountain order" repeatedly for every possible height, and builds the mountains from the tops down. That way I only have to deal with four conditions if not building a peak -- either an up slope '/', down slope '\, joint 'v', or empty ' '. I can discover that simple by looking at the three spaces centered "above" my current position in my top-down build. Note that like other submissions, I treat anything other than a number as equivalent to '.' in the input, for brevity. 
Golfed version: class M{public static void main(String[]m){char[]n=m[0].toCharArray();int e=n.length,h=9,x=-1,p;char[][]o=new char[11][e];char l,r,u;boolean a,b,c;for(;h>=0;h--){for(p=0;p<e;p++){if(n[p]-49==h){o[h][p]=94;if(x==-1)x=h;}else{l=(p>0)?o[h+1][p-1]:0;r=(p<e-1)?o[h+1][p+1]:0;u=o[h+1][p];a=l>91&&l<99;b=r==94||r==47;c=u<33;o[h][p]=(char)((a&&b)?'v':(c&&b)?47:(c&&a)?92:32);}}if(x>=h)System.out.println(o[h]);}}} Human readable form (and without some of the equivalent transmogrifications to achieve golf form): class Magrathea2 { public static void main(String[] mountain) { String out = ""; char[][] output = new char[11][mountain[0].length()]; int height = 9; int maxheight = -1; int position = 0; char left,right,up; char[] mount = mountain[0].toCharArray(); for (; height >= 0; height--) { for (position=0; position < mount.length; position++) { if (mount[position]-49 == height) { output[height][position] = '^'; if (maxheight==-1) { maxheight=height; } } else { // deal with non-numbers as '.' left=(position>0)?output[height+1][position-1]:0; right=(position<mount.length-1)?output[height+1][position+1]:0; up=output[height+1][position]; if ((left=='^'||left=='\\')&&(right=='^'||right=='/')) { output[height][position]='v'; } else if ((up==' '||up==0)&&(right=='/'||right=='^')) { output[height][position]='/'; } else if ((up==' '||up==0)&&(left=='\\'||left=='^')) { output[height][position]='\\'; } else { output[height][position]=' '; } } } if (maxheight >= height) { out+=new String(output[height]); if (height > 0) { out+="\n"; } } } System.out.println(out); } } Enjoy. Example output: $ java M ..3..4...6...5....1 ^ / \ ^ ^ / \/ \ ^ / v \ / v \ / \^ • The question mentions Write a complete program, so please, add the missing class X{public static void main(String[]z){. – Victor Stafusa Mar 1 '14 at 21:25 • Right on. I got misdirected by the next section of that sentence -- "or as argument" and missed the complete program part. I'll update it shortly. 
– ProgrammerDan Mar 2 '14 at 1:36 ## Perl 6, 264 224 216 206 200 194 124 bytes $_=get;my$a=10;((s:g/$a/^/;s:g/\s\.\s/ v /;s:g'\.\s'/ ';s:g/\s\./ \\/;$!=say TR/.1..9/ /;tr'^\\/v' ')if .match(--$a)|$!)xx 9 Thanks to @JoKing for showing a s/// solution. This is golfed a bit further after fixing the tr/// bug in Perl 6. My original solution with subst: my$t=get;for 9...1 {if$t.match($_)|$! {$t=$t.subst($_,'^',:g).subst(' . ',' v ',:g).subst('. ','/ ',:g).subst(' .',' \\',:g);$!=say $t.subst(/<[\.\d]>/,' ',:g);$t.=subst(/<[^\\/v]>/,' ',:g)};} Ungolfed: my $t=slurp; my$s; for 9...1 { if $t.match($_)||$s { # match number or latched$t=$t.subst($_,'^',:g) # peaks .subst(' . ',' v ',:g) # troughs .subst('. ','/ ',:g) # up slope .subst(' .',' \\',:g); # down slope $s=say$t.subst(/<[\.\d]>/,' ',:g); # clean, display, latch $t=$t.subst(/<[^\\/v]>/,' ',:g) # wipe for next line } } Output: ...4...3...33..4..4....2.3.22.33.5..22...333.222.3.. ^ ^ ^ ^ / \ / \ ^ ^^ / \/ \ ^ ^^ \ ^^^ ^ / \/ \ / v \ ^/ \^^/ ^^ / \^^^/ \ / v \/ \/ \ • I don't think Perl strictly needs a main function, the entry point can just be the first thing outside a function. – Nissa Oct 25 '18 at 23:38 • I used main for parameter handling. Now using stdin. Thanks. – donaldh Oct 26 '18 at 9:24 • A procedural solution. I'm sure someone can do better with regexes and hyperops. – donaldh Oct 26 '18 at 9:43 • 131 bytes using s/// and tr///. I think that last one can use tr instead of s but I can't quite figure out to translate backslashes. Maybe the first one too – Jo King Oct 26 '18 at 11:56 • Nice work @JoKing – I got into a topic mess when I tried to use s/// and TR///. I see that avoiding blocks is the answer. 
– donaldh Oct 26 '18 at 14:38 ## Perl, 254 218 212 $s=<>;sub f{9-$i-$_[0]?$":pop}for$i(0..8){$h=1;$_=$s;s!(\.*)(\d?)!$D=($w=length$1)+$h-($2||1);join'',(map{($x=$_-int$D/2)<0?f--$h,'\\':$x?f++$h,'/':$D%2?f--$h,v:f$h,'/'}0..$w-1),$2?f$h=$2,'^':''!ge;print if/\S/} $s=<>; sub f{9-$i-$_[0]?$":pop} for$i(0..8){$h=1; $_=$s; s!(\.*)(\d?)! $D=($w=length$1)+$h-($2||1); join'',(map{ ($x=$_-int$D/2)<0 ?f--$h,'\\' :$x ?f++$h,'/' :$D%2 ?f--$h,v :f$h,'/' }0..$w-1),$2 ?f$h=$2,'^' :'' !ge; print if/\S/ } Edit: actually it's a bug-fix to work with ProgrammerDan's ..3..4...6...5....1 example, but, in the process, some bytes were off. And online test: https://ideone.com/P4XpMU # C# - 321 319 using System.Linq;class P{static void Main(string[]p){int h=p[0].Max()-48,i=h,j,n=p[0].Length;char[]A=new char[n+2],B=A;for(;i-->0;){for(j=0;j++<n;){var r=(A[j+1]==47|A[j+1]==94);B[j]=(char)(p[0][j-1]==i+49?94:i+1<h?A[j]==0?(A[j-1]>90&A[j-1]<95)?r?118:92:r?47:0:0:0);}A=(char[])B.Clone();System.Console.WriteLine(B);}}} Ungolfed and commented: using System.Linq; class P { static void Main(string[] p) { int h = p[0].Max() - 48, // Getting the height. Codes for 0 to 9 are 48 to 57, so subtract 48 and hope no one will input anything but dots and numbers. i = h, j, // Declaring some iterators here, saves a few chars in loops. n = p[0].Length; char[] A = new char[n+2], // Creating an array of char with 2 extra members so as not to check for "index out of bounds" exceptions B = A; // B is referencing the same array as A at this point. A is previous row, B is the next one. for (;i-->0;) // Looping from top to the bottom of the mountain { for (j = 0; j++ < n;) // Looping from left to right. { var r = (A[j + 1] == 47 | A[j + 1] == 94); // This bool is used twice, so it saves a few characters to make it a variable // Here's the logic B[j] = (char)(p[0][j - 1] == i + 49 ? 94 // If at this position in the string we have a number, output "^" : i + 1 < h ? 
// And if not, check if we're on the top of the mountain A[j] == 0 ? // If we're not at the top, check if the symbol above is a space (0, actually) (A[j - 1] > 90 & A[j - 1] < 95) ? // If there's nothing above, we check to see what's to the left ( ^ or \ ) r ? // And then what's to the right ( ^ or / ) 118 // If there are appropriate symbols in both locations, print "v" : 92 // If there's only a symbol to the left, print "\" : r // Otherwise check if there's a symbol to the right, but not to the left ? 47 // And if there is, print "/" : 0 : 0 : 0); // Print nothing if there aren't any symbols above, to the left and to the right, // or there's a "^" right above, or we're at the top of the mountain } A=(char[])B.Clone(); // Clone arrays to iterate over the next line System.Console.WriteLine(B); } } } Example: C:\>program .2..3..4... ^ ^ / \ ^ / v \ / v \ I think it outputs an extra space before each line, though. # CJam, 128117112106 104 bytes CJam is a bit younger than this challenge so this answer does not compete. This was a very nice challenge though! From the little I know about J and APL, I think a submission in those would be impressively short. WlW++"."Waer{_{~U(e>:U}%\W%}2*;W%]z{$W=}%_$W=S*\:L,2-,\f{\_)L=(~"^/ ^^/ \v ^ \\"S/2/@L>3<_$0=f-{=}/t}zN* Here is a test case, which I think contains all possible possible combinations of slopes, peaks and troughs: ...4...3...33..4..4....2.3.22.33.5..22...333.222.3.. which yields ^ ^ ^ ^ / \ / \ ^ ^^ / \/ \ ^ ^/ \ ^^^ ^ / \/ \ / v \ ^/ \^^/ \^ / \^^^/ \ / v \/ \/ \ Test it here. I'll add an explanation for the code later. ## Python, 297234 218 -63 bytes thanks to Jo King -16 bytes with r=s.replace instead of lambda s=input() r=s.replace q=0 j=''.join for i in range(9): if9-iin s or q:q=s=r(9-i,'^');s=r(' . ',' v ');s=r('. ','/ ');s=r(' .',' \\');print j([x,' '][x in'0123456789.']for x in s);s=j([x,' '][x in'/\^v']for x in s) Takes input from STDIN. 
Ungolfed, simplified: s=input() # Take input r=lambda y,z: s.replace(y,z) # Function for quick s.replace(a, b) j=lambda x: ''.join(x) q=0 # Acts like boolean for i in range(9): # Count to 9 if 9-iin s or q: # When digit has been found or found previously (no newlines at start) q=s=r(9-i,'^') # Digit to ^, set q to non-zero value for always executing from now on s=r(' . ',' v ') # ' . ' to ' v ' s=r('. ','/ ') # '. ' to '/ ' s=r(' .',' k') # ' .' to 'k'. K is a placeholder, since \\ takes two chars and [...][2::5] fails print j([x,' '][x in'0123456789.']for x in s) # Print without '0123456789.' s=j([x,' '][x in'/\^v']for x in s) # Wipe (delete '/^\v) • 234 bytes – Jo King Oct 26 '18 at 12:43 • Yeah, I tried the s.replace method myself, but it doesn't work. You're just performing replacements on the original string since strings are immutable – Jo King Oct 27 '18 at 0:30 # Powershell, 148 145 bytes It's a nice challenge! param($s)9..1|?{($p+=$s-match$_)}|%{"$_,^; \. , v ;\. ,/ ; \., \;\^|\\|/|v, "-split';'|%{$x=$s-replace'\.|\d',' ' $s=$s-replace($_-split',')}$x} Less golfed test script: $f = { param($s) 9..1|?{($p+=$s-match$_)}|%{ # loop digits form 9 downto 1, execute to the end as soon as a suitable digit met$s=$s-replace$_,'^' # replace current digit with '^' $s=$s-replace' \. ',' v ' # replace ' . ' with ' v ' $s=$s-replace'\. ','/ ' # replace '. ' with '/ ' $s=$s-replace' \.',' \' # replace ' .' with ' \' $s-replace'\.|\d',' ' # replace all dots and digits with ' ' and push to output. Don't store this replacement$s=$s-replace'\^|\\|/|v',' ' # prepeare to the next step: replace ^ \ / and v to space } # Example: #$s="...4...3...3.." # 4 : $s="...^...3...3.." output: " ^ " # 4 :$s="... ...3...3.." # 3 : $s="../ \..^...^.." output: " / \ ^ ^ " # 3 :$s=".. .. ... .." # 2 : $s="./ \/ \./ \." output: " / \/ \ / \ " # 2 :$s=". . ." 
# 1 : $s="/ v \" output: "/ v \" # 1 :$s=" " } @( ,("1", "^") ,("11", "^^") ,("1.2.", " ^ ", "^/ \") ,(".2.3..", " ^ ", " ^/ \ ", "/ \") ,(".2..3..", " ^ ", " ^ / \ ", "/ v \") ,("...4...3...3..", " ^ ", " / \ ^ ^ ", " / \/ \ / \ ", "/ v \") ,("...4...3...33..4..4....2.3.22.3..5...22...333.222.3..", " ^ ", " ^ ^ ^ / \ ", " / \ ^ ^^ / \/ \ ^ ^/ \ ^^^ ^ ", " / \/ \ / v \ ^/ \^^/ \^^ / \^^^/ \ ", "/ v \/ \/ \") ,(".2..3..6.....5...3......1..3..4....2.", " ^ ", " / \ ^ ", " / \ / \ ^ ", " ^ \/ \ ^ ^ / \ ", " ^ / v \ / v \ ^ ", "/ v \ ^/ \/ \") ) | % { $s,$expected = $_$result = &$f$s "$result"-eq"$expected" $s$result } Output: True 1 ^ True 11 ^^ True 1.2. ^ ^/ \ True .2.3.. ^ ^/ \ / \ True .2..3.. ^ ^ / \ / v \ True ...4...3...3.. ^ / \ ^ ^ / \/ \ / \ / v \ True ...4...3...33..4..4....2.3.22.3..5...22...333.222.3.. ^ ^ ^ ^ / \ / \ ^ ^^ / \/ \ ^ ^/ \ ^^^ ^ / \/ \ / v \ ^/ \^^/ \^^ / \^^^/ \ / v \/ \/ \ True .2..3..6.....5...3......1..3..4....2. ^ / \ ^ / \ / \ ^ ^ \/ \ ^ ^ / \ ^ / v \ / v \ ^ / v \ ^/ \/ \ # Pip-l, 100 bytes Y#aZGMXaFi,#aIh:+a@i{(yi--h):4j:0Wh-j&++(yi-++jh-j)(yi+jh-j):2}RV Z(J*y)R.(?=.*[^0])0R,6;^" /\v^^" ` (The language is newer than the question, but probably isn't going to beat the APL submission anyway. Although I hope it will get much shorter.) Takes input via command-line argument. Try it online!
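Not an answer, but a baseline for anyone testing the golfed entries above: here is an ungolfed reference renderer in Python. It is a sketch of the rules as stated in the challenge (not taken from any of the answers): each column's height is the maximum over all peaks of the peak's digit minus the horizontal distance to it, and the topmost symbol in each column is ^ for a commissioned peak, v for a pass between two overlapping mountains, and / or \ for slopes.

```python
def render(spec):
    """Render a mountain chain from a string of dots and digits."""
    peaks = [(i, int(c)) for i, c in enumerate(spec) if c.isdigit()]
    n = len(spec)
    # Height of the skyline at each column: slopes fall by 1 per column.
    h = [max(ph - abs(x - px) for px, ph in peaks) for x in range(n)]
    lines = []
    for level in range(max(h), 0, -1):          # draw from the top row down
        row = []
        for x in range(n):
            if h[x] != level:
                row.append(' ')                  # this column tops out elsewhere
            elif spec[x].isdigit():
                row.append('^')                  # a commissioned peak (valid input
                                                 # guarantees it tops its column)
            else:
                left = h[x - 1] if x > 0 else 0
                right = h[x + 1] if x + 1 < n else 0
                if left > level and right > level:
                    row.append('v')              # pass between overlapping mountains
                elif right > level:
                    row.append('/')              # climbing toward a peak on the right
                else:
                    row.append('\\')             # descending from a peak on the left
        lines.append(''.join(row).rstrip())      # trailing-space padding is optional
    return '\n'.join(lines)

print(render("...4...3...3.."))  # reproduces the last example in the question
```

This is meant for eyeballing the examples, not for golfing; it makes no attempt to validate inputs such as 13.. that the challenge says may be ignored.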
Algebraic & Geometric Topology

An infinite presentation for the mapping class group of a nonorientable surface
Genki Omori

Abstract
We give an infinite presentation for the mapping class group of a nonorientable surface. The generating set consists of all Dehn twists and all crosscap pushing maps along simple loops.

Article information
Source: Algebr. Geom. Topol., Volume 17, Number 1 (2017), 419-437.
Dates: Received: 24 January 2016; Revised: 8 June 2016; Accepted: 1 July 2016. First available in Project Euclid: 16 November 2017.
Permanent link to this document: https://projecteuclid.org/euclid.agt/1510841317
Digital Object Identifier: doi:10.2140/agt.2017.17.419
Mathematical Reviews number (MathSciNet): MR3604381
Zentralblatt MATH identifier: 1357.57005

Citation
Omori, Genki. An infinite presentation for the mapping class group of a nonorientable surface. Algebr. Geom. Topol. 17 (2017), no. 1, 419-437. doi:10.2140/agt.2017.17.419. https://projecteuclid.org/euclid.agt/1510841317
SSC (English Medium) Class 8, Maharashtra State Board

# Some measures are given in the adjacent figure; find the area of ☐ABCD.

Concept: Area of a Triangle by Heron's Formula

#### Question

Some measures are given in the adjacent figure; find the area of ☐ABCD.

#### Solution

Area of Δ BDC = 1/2 × 13 × 60 = 390 m²

Area of Δ BAD = 1/2 × AB × AD = 1/2 × 40 × 9 = 180 m²

Area of ☐ABCD = Area of Δ BDC + Area of Δ BAD = 390 + 180 = 570 m²

#### APPEARS IN

Balbharati Solution for Balbharati Class 8 Mathematics (2019 to Current), Chapter 15: Area, Practice Set 15.4, Q. 3, Page 101
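The two-triangle computation above can be checked in a couple of lines. This is a quick sketch for verification only, not part of the textbook solution:

```python
def area_of_quadrilateral_abcd():
    # Split ☐ABCD along diagonal BD and add the two triangle areas (m²)
    area_bdc = 0.5 * 13 * 60   # 390 m²
    area_bad = 0.5 * 40 * 9    # 180 m²
    return area_bdc + area_bad

total = area_of_quadrilateral_abcd()  # 570.0 m²
```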
Why do metals conduct electricity? 1 Answer May 25, 2018 Low resistance, free electrons, efficient conduction... see more below. Explanation: Metals conduct electricity well because they offer very little opposition to current. Strictly, what is low is the material's resistivity (about $1.7 xx 10^-8$ $Omega*m$ for copper); the resistance of an actual wire also depends on its length and thickness. Metals also have a lot of free (delocalised) electrons, which allow charge to move through them efficiently; consequently, copper (a metal, $Cu$) is used to make wires. I would also recommend that you have a look on BBC Bitesize, as they provide good explanations and constructive videos! Hope this helped you out just that extra bit :)!
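To see how low copper's resistance really is for a practical wire, you can evaluate R = ρL/A. A small illustration (the wire dimensions are made up for the example):

```python
import math

RHO_COPPER = 1.68e-8  # resistivity of copper, ohm * metre (handbook value)

def wire_resistance(length_m, diameter_m, rho=RHO_COPPER):
    """Resistance of a round wire: R = rho * L / A."""
    area = math.pi * (diameter_m / 2) ** 2
    return rho * length_m / area

# One metre of 1 mm diameter copper wire: roughly a fiftieth of an ohm
r = wire_resistance(1.0, 1e-3)
```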
I do not fear any conspiracy from any nook & corner. I am simply taking my time and my space to stage the inevitable confrontation in the frozen face of the industry and geopolitics tycoons. This thing is complicated and confusing; it's a year now that I'm struggling to build this motor after work hours. I tried to build it from scratch but it doesn't work. A few weeks ago when I was browsing, I met someone who designed a self-running motor using a computer CPU fan and hard-disk magnets. I quickly went to purchase an old scrapped computer hard disk and a new CPU fan and went step by step as the video instructed, but it doesn't work. I'm still trying to make this project possible. Professionally I'm a computer technician, but I want to learn motor and magnetism theory so I can accomplish this project and have my name in memory. If anyone can make this project work, please contact me through Facebook so I can invite him/her to my country and make money; as you know, third-world countries have power disasters. My Facebook ID is Elly Maduhu Nkonya, or use my e-mail. [email protected] LoneWolffe Harvey1 kimseymd1 TiborKK I was only letting others that were confused know that there were sources for real learning, as opposed to listening to Harvey1 with his usual naysayer's attitude! There is tons of information on schoolgirl, schoolboy and Bedini window motors that actually work to charge batteries and eventually will generate house currents. It just has to be looked at to get any useful information from it, without listening to people like Harvey1 whining about learning. Harvey1 kimseymd1 You obviously play too many video games with trolls etc. in them. Why the editors of this forum allow you to keep calling people names instead of following the subject is beyond me. This must be the last site to allow you on it. I posted the books because I thought those books were good for learning about these engines, which are super, and there is tons of information out there for anyone to find.
You seem to only want to learn to be rude instead of electronics. The song's original score designates the duet partners as "wolf" and "mouse," and genders are unspecified. This is why many decades of covers have had women and men switching roles, as we saw with Lady Gaga and Joseph Gordon-Levitt's version, where Gaga plays the wolf's role. Even Miss Piggy of the Muppets played the wolf as she pursued ballet dancer Rudolf Nureyev. Never before have pedophilia and ritualistic child abuse been on the radar of so many people. Having been at Collective Evolution for nearly ten years, it's truly amazing to see just how much the world has woken up to the fact that ritualistic child abuse is actually a real possibility. The people who have been implicated in this type of activity over the years are powerful, from high-ranking military people all the way down to several politicians around the world, and more. I might be scrapping my motor and going back to the drawing board. Well, I see that I am not going to gain any more knowledge off this site; I thought I might, but all I have had is Free Electricity calling me names like a little child and none of my questions being answered. Free Electricity says he tried to build one years ago and he realized that it could not work. OK, tell me why. I have the one that I have talked about, and I am not going to show it until I perfect it, but I am thinking of abandoning it for now and trying a whole different design. Can the expert Free Electricity answer this? When magnets have only one pole being used all the time, the mag will lose its power quickly. What will happen if you use both poles in the repel state? Will that balance the mag out or drain it twice as fast? How long will a mag last running in the repel state all the time? For everybody else that thinks a magnetic motor is perpetual free energy: it's not.
The magnets have to be made and energized; thus, in a sense, it is a power cell, and that power cell will run down, so you have to make and buy more. Not free energy. This is still fun to play with, though. Not a lot to be gained there. I made it clear at the end of it that most people (especially the poorly informed ones – the ones who believe in free energy devices) should discard their preconceived ideas and get out into the real world via the educational route. "It blows my mind to read how so-called educated people claim that a magnet generator/motor/free energy device or conditions are not possible, as they would violate the so-called laws of thermodynamics or the conservation of energy or another man-formed law of man's perception. What a misinformed statement to make; the magnet is full of energy, all matter is, like atoms!!" As Martin Luther King Jr. said, 'The arc of the moral universe is long, but it bends towards justice.' It seems like those of us who have been researching and learning about the fraud and corruption in politics have been waiting so long for the truth to emerge and justice to be served as to have difficulty believing that it may ever arrive. Fortunately, we don't have long to wait to see if this coming hearing is a true watershed moment and a harbinger of things to come. If there is such a force that is yet undiscovered and can power an output shaft while operating in a closed system, then we can throw out the laws of conservation of energy. I won't hold my breath. That pendulum may well swing for a long time, but perpetual motion, no. The movement of the earth causes it to swing. Just as the earth acts upon the pendulum, so the pendulum will in fact be causing the earth's wobble to reduce, due to the effect of gravity of each upon the other. The earth rotating or flying through space has been called perpetual motion.
Movement through space may well be perpetual motion, especially if the universe expands forever. But no laws are being bent or broken. Context is what it is all about. Mr. Free Electricity, again I think the problem you are having is semantics. "Perpetual: continuing or enduring forever; everlasting." The modern terms being used now are "self-sustaining" or "sustainable." Even if Mr. Yildiz is right, eventually the unit would have to be reconditioned. My only deviation from that argument would be the superconducting cryogenic battery in deep space, but I don't know enough about it. It makes you look like a fool, a scammer, or both. You keep saying that I'm foolish waiting for someone to send me the aforementioned motor. Again, you missed the point completely. I never (or should I say N E V E R) expected anyone to send me anything. It was just to make the point that it never existed. I explained that to you several times, but you just keep repeating how foolish I am to expect someone to send me a motor. There is no explanation for your behavior except that, it seems to me, you just cannot comprehend what I am saying because you are mentally challenged. This device can indeed charge a battery. If one measures the total energy going in, and the energy stored, it takes way more energy in than you get out. That's true for ALL battery chargers. Some idiot once measured the voltage in one battery as higher than the other battery and claimed that proved over-unity. Hint: voltage does not measure power. Try measuring amp-hours at a specific voltage in, and amp-hours at the same voltage out. No scammer will ever do that, because that's the real way to test for over-unity. Since over-unity has not existed yet in our world, it's too painful for the over-unity crowd to face. Kimseymd1: You are no longer responding.
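The amp-hour test described above can be put into numbers: multiply amp-hours by voltage on each side to get watt-hours, and compare. A minimal sketch; the figures below are made up purely for illustration:

```python
def charger_efficiency(ah_in, v_in, ah_out, v_out):
    """Energy efficiency = watt-hours out / watt-hours in."""
    wh_in = ah_in * v_in
    wh_out = ah_out * v_out
    return wh_out / wh_in

# Example: 2.0 Ah pushed in at 13.8 V, 1.8 Ah recovered at 12.0 V
eff = charger_efficiency(2.0, 13.8, 1.8, 12.0)
# eff is well below 1.0 for any real charger: no over-unity
```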
But that's what I'm thinking about now, lol. Making a metal magnetic does not put energy into it for later release as energy. That is one of the classic "magnetic motor" myths. Agreed, there will be some heat (energy) transfer due to eddy-current losses, but that is marginal and not recoverable. It takes a split second to magnetise material. Try it: stroke an iron nail with a magnet and it becomes magnetic quite quickly. Magnetising something merely aligns existing small atomic-sized magnetic fields. However, I will build it myself. I live in an apartment now in the city, but I own several places in the country, and I am looking to buy another summer house, just for the summer, close to the city, so I could live in the city and in the country at the same time. So I will be able to work on different things, like you are doing now. I'm not retired yet, I'm still into different things, and still have to live in the city, but I could have time to play as I want. I hope you have success building the 48 V PMA. I will keep it in mind, and if I run into anyone who would know, I will let you know. Hey Gilgamesh. I did get your e-mail with your motor plan, and after looking it over and thinking things through, I don't think I would build it, and if I did, I would change some things. As a mechanic, I have learned over the years that the fewer moving parts in any machine, the better. I would change the large and small wheels and shafts to one solid armature of either brass or aluminum with steel plates on the ends of the armature arms for the electromagnets to act on, but I do not know enough about this to be able to build it, such as the kind and size of electromagnets to run this and how they are wired to make this run.
I am good at fixing, building, and following plans and instructions, reading meters and building my own inventions, but I don't have the know-how to build some electronic device from scratch; if I tried, there would be third-degree burns, flipped breakers, and the fire department putting my shop fire out. I am just looking for a real good PMA plan that will put out high watts at low rpm for my wind generator, or, if my new mag motor works, then I could put the PMA on it. In case anybody hasn't heard of a PMA, it is a permanent-magnet alternator. I have built three; one is a three-phase and it runs the smoothest, but it does not put out as much as the two single-phase units, though they take more to run. I have been told to stay away from 12 V and 24 V systems and only go with 48 V. I do not know how to build a 48 V PMA. I need help. I could probably get it here faster than finding the time to go to the library, and there is nothing on the internet unless you have money. If anybody can help me, it would be great. I have more than one project going here, and I have come to a dead end on this one. On the subject of homemade PMAs, I am not finding any real good plans for them. I have built three different ones, and none of them put out the amount they say they are supposed to. The three-phase runs the smoothest, but the single-phase puts more out, though it takes more to run it. Vacuums generally are thought to be voids, but Hendrik Casimir believed these pockets of nothing do indeed contain fluctuations of electromagnetic waves. He suggested that two metal plates held apart in a vacuum could trap the waves, creating vacuum energy that could attract or repel the plates. As the boundaries of a region move, the variation in vacuum energy (zero-point energy) leads to the Casimir effect.
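For scale, the idealized Casimir pressure between two perfectly conducting parallel plates is P = π²ħc/(240 d⁴), attractive and only noticeable at sub-micron separations. A back-of-envelope sketch:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def casimir_pressure(d_metres):
    """Attractive pressure (Pa) between ideal parallel plates a distance d apart."""
    return math.pi**2 * HBAR * C / (240 * d_metres**4)

# At 100 nm separation the pressure is already around 13 Pa,
# which is why the effect only matters at tiny scales.
p = casimir_pressure(100e-9)
```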
Recent research done at Harvard University, at Vrije University in Amsterdam, and elsewhere has proved the Casimir effect correct. (source) My hope is only to enlighten and save others from wasting time and money – the opposite of what the "Troll" is trying to do. Notice how easy it is to discredit many of his statements just by searching online. From his worthless book recommendations (no over-unity devices made from these books in all these years or more) to the inventors and their inventions that have already been proven a fraud. Take the time and read ALL his posts and notice his tactics: 1. Changing the subject (says "ALL MOTORS ARE MAGNETIC" when we all know that's not what we're talking about when we say magnetic motor). 2. Almost never responding to a direct question. 3. Claiming an invention works years after it's been proven a fraud. 4. Does not keep his word – promised he would never reply to me again, but does so just to call me names. 5. Spams the same message to me over and over, then says he needed that many times to get it through to me. He can't even keep track of his own lies. kimseymd1 Harvey1 A million spams would not be enough for me to believe a lie, but if you continue with the spams, you will likely be banned from this site. Something the rest of us would look forward to. You cannot face the fact that over-unity does not exist in the real world, and you live in a world of make-believe. You should seek psychiatric help before you turn violent. jayanth Read two books! Energy from the Vacuum: Concepts and Principles by Bearden, and Free Energy Generation: Circuits and Schematics by Bedini-Bearden. Build a window motor, which will give you over-unity, and it can be built to 8 kW, which has been done so far! Best to leave possible sources of motive force out of it.
Hey, I forgot about the wind generator that you said you were going to stick with right now. I am building a vertical wind generator right now, but the thing you have to look at is whether you have enough wind all the time to do what you want; even if all you want to do is run a few things in your home, it will be more expensive to run them off of it than to stay on the grid. I do not know how much batteries cost there, but here they are way expensive now. Just buying the batteries alone kills any savings you would have had on your power bill. All I am building mine for is to power a few things in my greenhouse and to have some emergency power along with my gas generator. I live in Utah (Free Electricity, UT, that's part of the Salt Lake valley), and the wind blows a lot, but there are days when there is nothing or just a small breeze, and every night there is nothing unless there is a storm coming. I called a battery company here and asked about batteries, and the guy said he wouldn't even sell me a battery until I knew what my generator put out. I was looking into forklift batts, and he said people get the batts and hook up their generator, and the generator will not keep up with charging the batts and supplying the load being used at the same time; thus the batts drain too far and never charge all the way, and the batts go bad too soon. So there are things to look at as you build, especially the cost. Hey, I went onto the net yesterday and found the same site on the shielding, and it has what I think will help me a lot. Sounds like you're going to become a quitter on the mag motor, going to cheat and feed power into it. I'm just kidding, have fun.
I have decided that I will not get my motor to run any better than it does, so I am going to design a totally new and different motor, using both the magnets and the shielding differently; if it works, it works; if not, oh well, just try something different. You might want to look at what Free Electricity told Gilgamesh on the electromagnets before you go too far, unless you have some fantastic idea that will give you good over-unity. But to make claims about knowing the universe, its energy, its mass and so on is hubris, and any scientist acknowledges the real possibility that our science could be proven wrong at any given point. There IS always loss in all designs thus far; that does not mean a machine can't be built that captures all forms of normal energy loss in the future. As you said, you cannot create energy, only convert it. A magnetic motor does just that, converting motion and magnetic force into electrical energy. I've been working on a prototype for years that would run in a vacuum and utilize magnetic bearings, cutting out all possible friction. Though funding and life keep getting in the way of forward progress, I still have high hopes that I will create a working prototype that doesn't rip itself apart. You are really an Free Power*. I went through Free Electricity.Free Power years of pre-vet. I went to one of the top high schools in America (a military school) and have what most would consider a strong education in science, mathematics and anatomy; however, I can't, and never could, spell well. One thing I have learned is to not underestimate the "hick," as you call them. You know the type. They speak slow with a drawl. Wear jeans with tears in them. Maybe a piece of hay sticking out of their mouths. While you're speaking quickly and trying to prove just how much you know and how smart you are, that hick is speaking slowly and thinking quickly.
He is already several moves ahead of you because he listens, speaks factually, and will fleece you out of every dollar you have if the hick has a mind to. My old neighbor wore green work pants pulled up over his work boots like a flood was coming and sported a wife-beater t-shirt. He had Free Electricity acres in an area where property goes for Free Electricity an acre. And that old hick also owned the Detroit Red Wings and has a hockey trophy named after him. Ye're all retards. By the way, do you know what an OHM is? It's an Englishman's.. OUSE. @Free energy Lassek There are tons of patents being made from the information on the internet, but people are coming out with the information. Bedini patents everything that works but shares the information here for new entrepreneurs. The only things not shared are part numbers. Except for the electronic parts, everything is homemade. RPS differ with different parts. Even transformers with a different number of windings change the RPS. Different types of cores can make or break the unit's working. I was told by a patent infringer that he changed one thing in a patent and could create and sell almost the same thing. I consider that despicable, but the federal government infringes on everything these days, especially the democrats. Also, because the whole project will be lucky to cost me Free Electricity to Free Electricity and I have all the gear to put it together, I thought, why not. One of the excavators I use to dig dams for the hydro units I install broke a track yesterday; that's 5000 worth of repairs. Therefore, what's Free Electricity and a bit of fun and optimism while all this wet weather and flooding we are having here in Queensland, Australia is stopping me from working? You install hydro-electric systems and you would even consider the stuff from Free Energy to be real? I am appalled.
To begin with, "free energy" refers to the idea of a system that can generate power by taking energy from a limitless source: power generated free from the constraints of oil, solar, and wind, that can continue to produce energy twenty-four hours a day, seven days a week, for an infinite amount of time without the worry of ever running out. "Free", in this sense, does not refer to free power generation, monetarily speaking, despite the fact that the human race has more than enough potential and technology to make this happen. In his own words, to summarize his results in 1873, Gibbs states: Hence, in 1882, after the introduction of these arguments by Clausius and Gibbs, the German scientist Hermann von Helmholtz stated, in opposition to Berthelot and Thomsen's hypothesis that chemical affinity is a measure of the heat of reaction of a chemical reaction as based on the principle of maximal work, that affinity is not the heat given out in the formation of a compound but rather it is the largest quantity of work which can be gained when the reaction is carried out in a reversible manner, e.g., electrical work in a reversible cell. The maximum work is thus regarded as the diminution of the free, or available, energy of the system (the Gibbs free energy G at T = constant, P = constant, or the Helmholtz free energy F at T = constant, V = constant), whilst the heat given out is usually a measure of the diminution of the total energy of the system (internal energy). Thus, G or F is the amount of energy "free" for work under the given conditions. Up until this point, the general view had been that "all chemical reactions drive the system to a state of equilibrium in which the affinities of the reactions vanish". Over the following decades, the term affinity came to be replaced by the term free energy.
According to chemistry historian Henry Leicester, the influential 1923 textbook Thermodynamics and the Free Energy of Chemical Substances by Gilbert N. Lewis and Merle Randall led to the replacement of the term "affinity" by the term "free energy" in much of the English-speaking world. For many people, FREE energy is a "buzz word" that has no clear meaning. As such, it relates to a host of inventions that do something that is not understood, and is therefore a mystery. The net forces in a magnetic motor are zero, so rotation under its own power is impossible. One observation with magnetic motors is that, as the net forces are zero, the rotor can spin in either direction and will still come to a halt after being given an initial spin. I assume he thinks it works already. "Properly applied and constructed, the magnetic motor can spin around at a variable rate, depending on the size of the magnets used and how close they are to each other. In an experiment of my own I constructed a simple magnet motor using the basic idea as shown above. It took me a fair amount of time to adjust the magnets to the correct angles for it to work, but I was able to make the wheel spin on its own using the magnets only, no external power source." When you build the framework, keep in mind that one wheel won't be enough to turn a generator power head. You'll need to add more wheels for that. If you do, keep them spaced Free Electricity″ or so apart. If you don't want to build the whole framework at first, just use a sheet of Free Electricity/Free Power″ plywood and mount everything on that with some grade Free Electricity bolts. That will allow you to do some testing.
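The thermodynamic "free energy" discussed above is a precise quantity, not a buzz word: at constant temperature and pressure, the maximum non-expansion work obtainable from a reaction is the drop in Gibbs free energy, ΔG = ΔH − TΔS. A small illustrative calculation, using rounded textbook values for the combustion of hydrogen to liquid water at 298.15 K (figures quoted only as an example):

```python
def gibbs_free_energy(delta_h, delta_s, temperature):
    """Delta G = Delta H - T * Delta S, all in SI units (J/mol, J/(mol*K), K)."""
    return delta_h - temperature * delta_s

# H2 + 1/2 O2 -> H2O(l): dH ~ -285.8 kJ/mol, dS ~ -163.2 J/(mol*K)
dG = gibbs_free_energy(-285.8e3, -163.2, 298.15)
# dG ~ -237 kJ/mol: the energy actually "free" for work,
# less than the full heat released because entropy decreases.
```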
# general and special damages

A classification of *damages awarded for a tort or a breach of contract, the meaning of which varies according to the context.

1. General damages are given for losses that the law will presume are the natural and probable consequence of a wrong. Thus it is assumed that a libel is likely to injure the reputation of the person libelled, and damages can be recovered without proof that the plaintiff's reputation has in fact suffered. Special damages are given for losses that are not presumed but have been specifically proved.

2. General damages may also mean damages given for a loss that is incapable of precise estimation, such as *pain and suffering or loss of reputation. In this context special damages are damages given for losses that can be quantified, such as out-of-pocket expenses or earnings lost during the period between the injury and the hearing of the action.
Modeling observations of GRB 180720B: From radio to GeV gamma-rays

N. Fraija, S. Dichiara, A. C. Caligula do E. S. Pedreira, A. Galvan-Gamez, R. L. Becerra, A. Montalvo, J. Montero, B. Betancourt Kamenetskaia and B. B. Zhang

Instituto de Astronomía, Universidad Nacional Autónoma de México, Apdo. Postal 70-264, Cd. Universitaria, Ciudad de México 04510
Department of Astronomy, University of Maryland, College Park, MD 20742-4111, USA
Astrophysics Science Division, NASA Goddard Space Flight Center, 8800 Greenbelt Rd, Greenbelt, MD 20771, USA
School of Astronomy and Space Science, Nanjing University, Nanjing 210093, China
Key Laboratory of Modern Astronomy and Astrophysics (Nanjing University), Ministry of Education, China

Abstract
Early and late multiwavelength observations play an important role in determining the nature of the progenitor, the circumburst medium, and the physical processes and emitting regions associated with the spectral and temporal features of bursts. GRB 180720B is a long and powerful burst detected by a large number of observatories at wavelengths ranging from radio bands to GeV gamma-rays. Simultaneous multiwavelength observations were presented over multiple periods of time, beginning just after the trigger time and extending for more than 30 days. The temporal and spectral analysis of the Fermi LAT observations suggests that this burst presents characteristics similar to other bursts detected by this instrument. Coupled with X-ray and optical observations, the standard external-shock model in a homogeneous medium is favored by this analysis.
The X-ray flare is consistent with the synchrotron self-Compton (SSC) model from the reverse-shock region evolving in a thin shell, and the long-lived LAT, X-ray and optical data are consistent with the standard synchrotron forward-shock model. The best-fit parameters derived with the Markov chain Monte Carlo simulations indicate that the outflow is endowed with magnetic fields, that the radio observations are in the self-absorption regime, and that the LAT photons beyond the synchrotron limit have to be explained with a different radiative process. We propose that the most natural process to interpret these photons is the SSC forward-shock model, and we then predict the very-high-energy flux to be detected by GeV - TeV observatories.

Keywords: Gamma-ray bursts: individual (GRB 180720B) — Physical data and processes: acceleration of particles — Physical data and processes: radiation mechanism: nonthermal — ISM: general - magnetic fields

nifraija@astro.unam.mx

1. Introduction

The most energetic gamma-ray sources in the observable universe are gamma-ray bursts (GRBs). These events display short and bright irregular flashes of gamma-rays originated inside the relativistic outflows launched by a central engine. This engine may result from a merger of either two neutron stars (NSs) or a NS and a black hole (BH), in which case the events are known as "short GRBs (sGRBs)". On the other hand, if the engine comes from a cataclysmic event at the end of the life cycle of a massive star, the events are referred to as "long GRBs (lGRBs)". sGRBs last up to about two seconds, whereas lGRBs last longer (see, i.e., Zhang and Mészáros, 2004; Kumar and Zhang, 2015, for reviews). The most accepted mechanism for producing the bright flashes known as the "prompt emission" is the standard fireball model (Rees and Meszaros, 1992; Mészáros and Rees, 1997). According to this model, a long-lasting "afterglow" emission at wavelengths ranging from radio bands to gamma-rays is also expected.
The prompt emission is expected when inhomogeneities in the jet lead to internal collisionless shocks (when matter ejected with low velocity is hit by matter with high velocity; Rees and Meszaros, 1994), and the afterglow when the relativistic outflow sweeps up enough external "circumburst" medium (Mészáros and Rees, 1997). The transition between the prompt and early afterglow emission is marked by the steep decay, usually interpreted as high-latitude emission (Kumar and Panaitescu, 2000; Nousek et al., 2006), and by an X-ray flare or optical flash explained in terms of the reverse shock (Kobayashi and Zhang, 2007; Kobayashi et al., 2007; Kobayashi, 2000; Fraija and Veres, 2018; Becerra et al., 2019a). Multiwavelength observations play an important role in determining the physical processes and emitting regions associated with the spectral and temporal features of bursts (Ackermann et al., 2013; Fraija, 2015; Fraija et al., 2017a). The early-time afterglow observations are useful to determine the nature of the central engine and constrain the density of the circumburst medium (Fraija et al., 2016a, b; Becerra et al., 2017, 2019b). In these cases, GRBs become potentially more interesting and informative, allowing afterglow models to be tested more rigorously. Since the discovery of the first GRB in 1967 by the Vela satellites (Klebesadel et al., 1973), the detection of high-energy (HE) photons has been possible in only a small fraction of them (~150 bursts). At higher energies, in the GeV range, very few detections have been reported, and they have been interpreted with leptonic and hadronic scenarios operating at several possible emitting regions. The HE and very-high-energy (VHE) photons have been detected later with respect to the keV photons and for a longer time than the prompt phase. Different analyses of multiwavelength observations covering from radio to GeV energies have indicated that the HE and VHE emission is produced during the external shocks.
During the afterglow phase, the synchrotron emission from electrons accelerated in the external shocks dominates from radio wavelengths to gamma-rays, and the synchrotron self-Compton (SSC) emission and photo-hadronic processes (Mészáros and Rees, 2000; Alvarez-Muñiz et al., 2004; Fraija, 2014) are expected to dominate in the GeV - TeV energy range (Zhang and Mészáros, 2001). The optimal time scale for a GeV afterglow search was derived by Zhang and Mészáros (2001). For instance, the maximum energy with which photons can be radiated by synchrotron is set at the deceleration timescale of the outflow in a homogeneous medium (Piran and Nakar, 2010; Abdo et al., 2009a; Barniol Duran and Kumar, 2011). Consequently, we accentuate that VHE photons below the maximum photon energy radiated in the synchrotron forward-shock model can be interpreted in this scenario, but beyond the synchrotron limit other scenarios must be invoked to explain them. GRB 180720B was detected and followed up by the three instruments onboard the Swift satellite (Palmer et al., 2018; Barthelmy et al., 2018): the Burst Alert Telescope (BAT), the X-ray Telescope (XRT) and the Ultra-Violet/Optical Telescope (UVOT); by both instruments onboard the Fermi satellite (Roberts and Meegan, 2018; Bissaldi and Racusin, 2018): the Gamma-ray Burst Monitor (GBM) and the Large Area Telescope (LAT); by the MAXI Gas Slit Camera (GSC) (Negoro et al., 2018); by Konus-Wind (Frederiks et al., 2018); by the Nuclear Spectroscopic Telescope Array (NuSTAR) (Bellm and Cenko, 2018); by the CALET Gamma-ray Monitor (Cherry et al., 2018); by the Giant Metrewave Radio Telescope (GMRT; Chandra et al., 2018); by the Arcminute Microkelvin Imager Large Array (AMI-LA; Sfaradi et al., 2018); and by several optical ground telescopes (Izzo et al., 2018; Zheng and Filippenko, 2018; Jelinek et al., 2018; Lipunov et al., 2018; Covino and Fugazza, 2018; Schmalz et al., 2018; Watson et al., 2018; Horiuchi et al., 2018).
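As a rough, order-of-magnitude illustration of the synchrotron burn-off limit mentioned above: in the comoving frame the maximum synchrotron photon energy is about m_e c²/α ≈ 70 MeV, boosted by the bulk Lorentz factor Γ and redshifted by (1 + z). The sketch below uses an illustrative Γ = 300 (an assumption, not the paper's fitted value) and z = 0.654 for GRB 180720B:

```python
M_E_C2_MEV = 0.511      # electron rest energy, MeV
ALPHA = 1.0 / 137.036   # fine-structure constant

def synchrotron_max_energy_gev(gamma_bulk, z):
    """Order-of-magnitude maximum observed synchrotron photon energy (GeV)."""
    e_comoving_mev = M_E_C2_MEV / ALPHA          # ~70 MeV in the comoving frame
    return gamma_bulk * e_comoving_mev / (1.0 + z) / 1e3

# Illustrative bulk Lorentz factor of 300 at z = 0.654 gives ~13 GeV,
# so a 25.2 GeV photon would lie above this limit and call for SSC.
e_max = synchrotron_max_energy_gev(300.0, 0.654)
```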
In this paper, we derive and analyze the LAT light curve and spectrum for GRB 180720B and show that it exhibits similar features to other powerful bursts. Similarly, we derive the GBM light curve and show that it is consistent with the prompt emission. Analyzing the multiwavelength observations covering from radio bands to GeV gamma-rays, we show that the LAT, X-ray, optical and radio observations are consistent with the synchrotron forward-shock model in a homogeneous medium. The X-ray flare is consistent with SSC emission from the reverse-shock region in a homogeneous medium. The paper is arranged as follows. In Section 2 we present the multiwavelength observations and data reduction. In Section 3 we describe the multiwavelength observations through the synchrotron forward-shock model and the SSC reverse-shock model. In Section 4 we present the discussion and results of the analysis done using the multiwavelength data. Finally, in Section 5 we give a brief summary and emphasize our conclusions. The convention in cgs units and the universal constants in natural units will be adopted throughout this paper. The sub-indexes “f” and “r” refer to the quantities derived in the forward and reverse shocks, respectively. 2. GRB 180720B: Multiwavelength Observations and Data Reduction 2.1. Fermi LAT observations and data reduction Data files used for this analysis were obtained from the online data website, beginning a few seconds before the burst trigger (14:21:44 UT; Bissaldi and Racusin, 2018) and lasting 15 minutes. Fermi-LAT data were analyzed in the 0.1 - 300 GeV energy range with the Fermi Science tools “v1.0.0” available with the conda package. The instrument response function was “P8R3_TRANSIENT020_V2”. The data were classified into 5 temporal bins [0.15 - 75.15 s, 75.15 - 150.15 s, 150.45 - 300.45 s, 300.45 - 450.45 s, 450.45 - 900.45 s] after the trigger time using the gtselect tool with evclass type 16, within a region of 10° and with a cut on zenith angle at 100°.
In order to obtain the photon flux, the gtbin tool was used in each desired time bin with the exposure determined by gtexposure. The spectrum for each bin was derived and fitted using a simple power law (SPL) with the software XSPEC v12.10.1 (Arnaud, 1996). The energy flux for each bin was obtained with a 90% confidence error. The spectrum for the LAT emission was obtained in the same way, considering the time interval of 0.15 - 900.45 s after the trigger time. Figure LABEL:LAT displays the Fermi LAT energy flux (red) and photon flux (blue) light curves (upper panel), the energies of all the HE photons ( MeV) with probabilities % of being associated with GRB 180720B (middle panel) and the Fermi LAT spectrum (lower panel). We modeled the energy flux light curve and spectrum using the closure relation . The best-fit values of the temporal and spectral PL indexes were ( = 1.12) and ( = 1.06), respectively. These PL indexes are compatible with the third PL segment of the synchrotron forward-shock model () for . It is worth emphasizing that this PL segment is the same for the wind and homogeneous afterglow models. Some relevant characteristics can be observed in the middle panel: i) the first HE photon (101 MeV) was detected 19.4 s after the trigger time, ii) this burst exhibited 260 photons with energies larger than 100 MeV, 10 photons with energies larger than 1 GeV and one photon with energy larger than 10 GeV, iii) the highest-energy photon exhibited in the LAT observations (25.2 GeV) was detected 230 s after the trigger time and iv) the photon density increased dramatically at times later than 50 s. 2.2. Fermi GBM observations and data reduction Fermi-GBM data were obtained using the public database at the GBM website. Event data files were obtained using the Fermi GBM Burst Catalog and the GBM trigger time for GRB 180720B at 14:21:39.65 UT (Roberts and Meegan, 2018). Flux values were derived using the spectral analysis package RMfit version 432.
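The closure-relation check applied above to the LAT energy flux light curve and spectrum can be sketched as follows, assuming the standard synchrotron PL segment above the cooling break (F ∝ t^(-alpha) ν^(-beta) with alpha = (3p - 2)/4 and beta = p/2, which is the same for wind and homogeneous media); the index values below are illustrative, not the fitted ones.

```python
# Hedged sketch: closure relations of the synchrotron forward-shock PL
# segment above the cooling break (slow cooling, nu > nu_c):
#   alpha = (3p - 2)/4  and  beta = p/2.
# The numbers are illustrative assumptions, not the paper's fitted values.

def p_from_temporal(alpha):
    """Electron spectral index implied by the temporal decay alpha = (3p - 2)/4."""
    return (4.0 * alpha + 2.0) / 3.0

def p_from_spectral(beta):
    """Electron spectral index implied by the spectral index beta = p / 2."""
    return 2.0 * beta

p_t = p_from_temporal(1.3)          # ~2.4
p_s = p_from_spectral(1.1)          # ~2.2
consistent = abs(p_t - p_s) < 0.5   # both indexes point to a common p
```

When the two inferred values of p agree within the uncertainties, the light curve and spectrum are consistent with this PL segment, which is the logic used for the LAT data above.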
In order to analyze the signal we used the time tagged event (TTE) files of the two triggered NaI detectors and the BGO detector. Two different models were used to fit the spectrum in the energy range of 10 - 1000 keV over different time intervals. The Band and the Comptonized models were used to fit the spectrum during the time interval [0.000, 60.416 s]. Each time bin was chosen by adopting the minimum resolution required to preserve the shape of the time profile. The upper left-hand panel in Figure LABEL:LCs displays the GBM light curve in the 10 - 1000 keV energy range. This light curve shows a bright, FRED-like peak with a maximum flux of at 15 s, followed by two significant peaks with fluxes of and at 26 s and 50 s, respectively. The fluence over the prompt emission was , which corresponds to an equivalent isotropic energy of for a measured redshift of (Vreeswijk et al., 2018). This light curve exhibits a high variability (the width of each peak compared with the timescale of the flux), which favors the prompt-phase scenario. Theoretically, this timescale is interpreted as the time difference of two photons emitted at two different radii (Sari and Piran, 1997).
The upper right-hand panel in Figure LABEL:LCs shows the Swift X-ray light curve obtained with the XRT (WT and PC modes) instrument at 1 keV. The flux density of the XRT data was extrapolated from 10 keV to 1 keV using the conversion factor introduced in Evans et al. (2010). The blue curves correspond to the best-fit SPL functions obtained using the chi-square minimization algorithm installed in ROOT (Brun and Rademakers, 1997). In accordance with the observational X-ray data, three PL segments () with an X-ray flare were identified in this light curve. We evaluated the X-ray light curve at four time intervals: (I), (II), (III) and (IV). The time intervals were chosen in accordance with the variations of each slope. The temporal PL indexes are and during epoch “I” and , and for epochs “II”, “III” and “IV”, respectively. The best-fit values of each epoch are reported in Table Modeling observations of GRB 180720B: From radio to GeV gamma-rays. 2.4. Optical observations and data reduction GRB 180720B started to be detected in the optical and near-infrared (NIR) bands from July 20, 2018 at 14:22:57 UT, 73 s after the trigger time (Sasada et al., 2018). Using the HOWPol and HONIR instruments attached to the 1.5-m Kanata telescope, these authors reported a bright optical r-band counterpart of mag. Vreeswijk et al. (2018) observed the optical counterpart of this burst using the VLT/X-shooter spectrograph. They detected a bright continuum with some absorption lines (Fe II, Mg II, Mg I and Ca II) associated with a redshift of . Additional photometry in different optical bands is reported in Martone et al. (2018); Reva et al. (2018); Itoh et al. (2018); Crouzet and Malesani (2018); Horiuchi et al. (2018); Watson et al. (2018); Schmalz et al. (2018); Lipunov et al. (2018). The lower panel in Figure LABEL:LCs shows the optical light curve of GRB 180720B in the r-band. The solid line represents the best-fit SPL function.
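The two flux conversions used for the X-ray and optical data can be sketched as follows: the power-law extrapolation of the XRT flux density from 10 keV to 1 keV (in the spirit of the Evans et al. 2010 conversion factor) and the standard AB-magnitude conversion of Fukugita et al. (1996), optionally corrected for extinction first. All numerical values below are illustrative assumptions, not the measured fluxes.

```python
# Hedged sketch of two standard flux conversions; numbers are illustrative.
# (i) For a power-law photon spectrum with photon index Gamma_ph, the flux
#     density scales as F_nu ∝ E^(1 - Gamma_ph), so a flux density measured
#     at 10 keV can be moved to 1 keV by a multiplicative factor.
# (ii) AB magnitudes convert to flux density as F_nu = 3631 Jy x 10^(-0.4 m).

def extrapolate_flux_density(f_nu, e_from_kev, e_to_kev, gamma_ph):
    """Move a power-law flux density from one energy to another."""
    return f_nu * (e_to_kev / e_from_kev) ** (1.0 - gamma_ph)

def ab_mag_to_mjy(m_ab, a_lambda=0.0):
    """Convert an (optionally extinction-corrected) AB magnitude to mJy."""
    return 3631.0e3 * 10.0 ** (-0.4 * (m_ab - a_lambda))

f_1kev = extrapolate_flux_density(1.0, 10.0, 1.0, 1.8)  # ~6.3x the 10 keV value
f_opt = ab_mag_to_mjy(16.4)                             # ~1 mJy
f_corr = ab_mag_to_mjy(16.4, a_lambda=0.1)              # brighter after correction
```

The extinction correction simply brightens the magnitude (m - A_lambda) before converting, which is the sense of the correction applied to the optical points in the next subsection.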
Optical data taken from the GCN circulars reported in this subsection were detected by different telescopes. The optical observations with their uncertainties were obtained using the standard conversion for AB magnitudes shown in Fukugita et al. (1996). The optical data were corrected for galactic extinction using the relation derived in Becerra et al. (2019b). The values of for optical filters and a reddening of mag (Bolmer and Schady, 2019) were used. The best-fit value of the temporal decay is (; see Table LABEL:table2). Sfaradi et al. (2018) observed the position of this burst with AMI-LA at 15.5 GHz for 3.9 hours. The observations began 2 days after the BAT trigger, providing an integrated flux of 1 mJy. Chandra et al. (2018) detected GRB 180720B with GMRT at the 1.4 GHz band, reporting a flux of . 3. Description of the multiwavelength observations Figure LABEL:sed shows the broadband SEDs of the X-ray and optical observations at 1000 s (left) and 10000 s (right) with the best-fit SPLs with spectral indexes () and (0.97), respectively. The dashed gray lines correspond to the best-fit curves from XSPEC. The left-hand panel in Figure LABEL:grb180720B shows the LAT, X-ray, optical and radio data with the best-fit PL functions given in Section 2. The LAT data are displayed at 100 MeV, X-ray at 1 keV, optical in the r-band and radio data at 15.5 and 1.4 GHz. The best-fit parameters of the temporal PL indexes obtained through the chi-square minimization function are reported in Table LABEL:table2. It is worth emphasizing that the radio data are not included in our analysis because there is only one data point for each frequency band. In order to analyze the LAT, X-ray and optical light curves we used the time intervals (epochs “I”, “II”, “III” and “IV”) proposed for the X-ray light curve. Taking into account that to analyze epoch II it is necessary to have the results of epochs III and IV, this epoch will be the last one to be analyzed. 3.1.
Epoch I: 75 s ≲ t ≲ 200 s During this epoch, the LAT and the optical light curves are modelled with SPL functions and the X-ray flare with two PLs. Considering that during this epoch the X-ray flare and the LAT and optical light curves have different origins, we first analyze the LAT and the optical light curves and then we examine the X-ray flare. 3.1.1 Analysis of LAT and optical light curves The best-fit values of the temporal and spectral PL indexes for the LAT observations are and , respectively, and the temporal index for the optical observations is . Taking into account that the LAT observations can be described by the third PL segment of the synchrotron forward-shock model, and also that their temporal index is larger than the index of the optical observations (), the optical observations can be described by the second PL segment () of the synchrotron forward-shock model in the homogeneous medium. In this case, the electron spectral indexes that explain the LAT and optical observations would be and , respectively, when the synchrotron emission radiates in the homogeneous medium. In the case of the afterglow wind model, the temporal index of the optical observations is usually larger than that of the LAT observations. 3.1.2 Analysis of the X-ray flare We fit the X-ray flare empirically with two PLs (e.g. see, Becerra et al., 2019b). Therefore, the X-ray flare is characterized by rise and decay temporal indexes of and , respectively, and a variability timescale of . These values are discussed in terms of the reverse-shock emission and late central-engine activity. Reverse-shock emission A reverse shock is believed to occur in the interaction between the expanding relativistic outflow and the external circumburst medium. During this shock, relativistic electrons are heated and subsequently cooled, mainly by synchrotron and Compton scattering emission, generating a single flare (see, e.g., Kobayashi et al., 2007; Fraija et al., 2012, 2017b).
The evolution of the reverse-shock emission is considered in the thick- and thin-shell regimes, depending on the crossing time and the duration of the prompt phase (e.g. see, Kobayashi and Zhang, 2003). In the thick shell, the flare overlaps with the prompt emission, and in the thin shell it is separated from the prompt phase. Since the X-ray flare in GRB 180720B took place later than the burst emission, the reverse-shock emission must evolve in the thin shell. Kobayashi et al. (2007) discussed the generation of an X-ray flare by Compton scattering emission in the early afterglow phase when the reverse shock originates in the homogeneous medium and evolves in the thin shell. These authors found that the X-ray emission created in the reverse-shock region displays a time variability scale of and varies as before the peak and after the peak. Taking into account the best-fit values of the rise and decay indexes, the electron spectral indexes are and , respectively. Considering the reverse shock evolving in a thin shell and in a homogeneous medium, the Lorentz factor is bounded by the critical Lorentz factor (Zhang et al., 2003) and the deceleration time . The critical Lorentz factor and deceleration time scale are defined by and , respectively, where is the equivalent kinetic energy obtained using the isotropic energy and the efficiency to convert the kinetic energy into photons, is the proton mass and is the duration of the burst. The maximum flux can be calculated by , with and the maximum flux and the cutoff energy break of the SSC emission, respectively (Zhang et al., 2003; Fraija et al., 2016b). Late central-engine activity In the framework of late central-engine activity, the ultra-relativistic jet has several mini-shells and the X-ray flare is the result of multiple internal shell collisions. The light curve is built as the superposition of the prompt emission from the late activity and the standard afterglow.
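The thin-shell condition invoked above for the reverse shock can be sketched numerically with the standard expressions for the critical Lorentz factor and the deceleration time in a homogeneous medium (after Zhang, Kobayashi and Mészáros 2003). All input numbers below are illustrative assumptions, not the values derived for GRB 180720B.

```python
import math

# Hedged sketch of the standard thin-shell criterion: the reverse shock
# evolves in the thin-shell regime when the bulk Lorentz factor Gamma is
# below the critical value
#   Gamma_c = [3 E (1+z)^3 / (32 pi n m_p c^5 T^3)]^(1/8),
# where E is the equivalent kinetic energy, n the circumburst density and
# T the burst duration; the flare then peaks near the deceleration time.
# All inputs are illustrative assumptions, not the paper's fitted values.

M_P = 1.6726e-24   # proton mass [g]
C = 2.9979e10      # speed of light [cm/s]

def critical_lorentz_factor(e_kin, n, t_dur, z):
    """Critical Lorentz factor separating thick- and thin-shell regimes."""
    return (3.0 * e_kin * (1.0 + z) ** 3 /
            (32.0 * math.pi * n * M_P * C ** 5 * t_dur ** 3)) ** 0.125

def deceleration_time(e_kin, n, z, gamma):
    """Observer-frame deceleration time in a homogeneous medium [s]."""
    return (3.0 * e_kin * (1.0 + z) ** 3 /
            (32.0 * math.pi * n * M_P * C ** 5 * gamma ** 8)) ** (1.0 / 3.0)

gamma_c = critical_lorentz_factor(1e53, 1.0, 50.0, 0.654)  # a few hundred
t_dec = deceleration_time(1e53, 1.0, 0.654, 300.0)         # tens of seconds
# Thin shell requires Gamma < Gamma_c; by construction,
# deceleration_time evaluated at Gamma_c returns the burst duration.
```

By construction the two expressions are consistent: evaluating the deceleration time at the critical Lorentz factor returns the burst duration, which is the definition of the thin/thick boundary.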
In this context, the fast rise is naturally explained in terms of the short time-variability of the central engine . For a random magnetic field caused by internal shell collisions, the flux decays as in the slow-cooling regime (see, e.g., Zhang et al., 2006). In this case, the electron spectral index would correspond to an atypical value of . Given the temporal analysis, we conclude that the X-ray flare is more consistently explained by the reverse-shock emission than by the late activity of the central engine. 3.2. Epoch III: 2.5×10³ s ≲ t ≲ 2.6×10⁵ s During this epoch, the spectral analysis indicates that the optical and X-ray observations are described with an SPL with index . The temporal analysis shows that the indexes of the optical () and X-ray () observations are consistent. Therefore, the optical and X-ray fluxes evolve in the second PL segment of synchrotron emission in the homogeneous medium for predicted values of and , respectively. In the context of the X-ray light curve, this phase is known as the normal decay (e.g. see Zhang et al., 2006). 3.3. Epoch IV: t ≳ 2.6×10⁵ s The X-ray light curve during this time interval decays with , which is consistent with the LAT light curve reported in epoch II. Taking into account epoch III, the temporal PL index varied as (from to ), which is consistent with the evolution from the second to the third PL segment of synchrotron emission in the homogeneous medium for . Therefore, the break observed in the X-ray light curve during the transition from epoch III to IV can be explained as the transition of the synchrotron energy break below the X-ray observations at 1 keV. 3.4. Epoch II: 200 s ≲ t ≲ 2500 s In order to describe the LAT, X-ray and optical light curves correctly during this time interval, epochs I and III are taken into account. Temporal and spectral analysis of the X-ray light curve shows that during epoch II the PL indexes are and , respectively.
The spectral indexes associated with the X-ray observations during epochs II and III are very similar ( ), and the spectral and temporal PL indexes of the LAT and optical observations during epochs I and II are unchanged. Taking into account that , that there are no breaks in the LAT and optical light curves and that the value of the temporal decay index is followed by the normal decay phase in the X-ray light curve, epoch II is consistent with the shallow “plateau” phase (e.g. see Vedrenne and Atteia, ). It is worth highlighting that during this transition there was no variation of the spectral index. A priori, we could think that the X-ray observations during this epoch could be associated with the second PL segment () of synchrotron emission. In this case the spectral indexes of the electron population, taking into account the temporal and spectral analysis, would be and , respectively, which differs from the values derived from the LAT and optical observations in the previous subsection. Hence, this hypothesis is rejected and we postulate the “plateau” phase. The temporal and spectral theoretical indexes obtained by the evolution of the standard synchrotron model in the homogeneous medium are reported in Table LABEL:table2. Theoretical and observational spectral and temporal indexes are consistent for . 4. Results and Discussion We have shown that the temporal and spectral analysis of the multiwavelength (LAT, X-ray and optical bands) afterglow observed in GRB 180720B is consistent with the closure relations of the synchrotron forward-shock model evolving in a homogeneous medium. Additionally, we have shown that the X-ray flare is consistent with the SSC reverse-shock model evolving in the thin shell in a homogeneous medium.
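The closure relations invoked for the normal-decay (second PL segment) identifications in Section 3 can be checked with a short sketch, assuming the standard slow-cooling synchrotron forward-shock relations in a homogeneous medium; the value of p below is illustrative, not the fitted one.

```python
# Hedged sketch: for the second PL segment (nu_m < nu < nu_c, slow cooling)
# of synchrotron forward-shock emission in a homogeneous medium the closure
# relations are  alpha = 3(p - 1)/4  and  beta = (p - 1)/2,  so that
# alpha = 3 beta / 2.  The electron index p here is illustrative.

def normal_decay_indexes(p):
    """Predicted (alpha, beta) for nu_m < nu < nu_c in a homogeneous medium."""
    alpha = 3.0 * (p - 1.0) / 4.0
    beta = (p - 1.0) / 2.0
    return alpha, beta

alpha_nd, beta_nd = normal_decay_indexes(2.4)   # ~ (1.05, 0.7)
```

Consistent optical and X-ray temporal indexes close to a common alpha, with a shared spectral index beta satisfying alpha = 3 beta / 2, identify the normal-decay phase discussed for epoch III.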
In order to describe the LAT, X-ray and optical observations with our model, we have constrained the electron spectral index, the microphysical parameters and the circumburst density using the Bayesian statistical technique based on the Markov chain Monte Carlo (MCMC) method (see Fraija et al., 2019a). These parameters were found by normalizing the PL segments at 100 MeV, 1 keV and 1 eV for the LAT, X-ray and optical observations, respectively. We have used the synchrotron and SSC light curves in the slow-cooling regime when the outflow is decelerated in a homogeneous medium and the reverse shock evolves in a thin shell. The values reported for GRB 180720B, such as the redshift (Vreeswijk et al., 2018), the equivalent isotropic energy and the duration of the prompt emission (Roberts and Meegan, 2018), were used. In order to compute the luminosity distance, the values of the cosmological parameters derived by Planck Collaboration et al. (2018) were used (Hubble constant and the matter density parameter ). The equivalent kinetic energy was obtained using the isotropic energy and the efficiency to convert the kinetic energy into photons (Beniamini et al., 2015). The best-fit value of each parameter found with our MCMC code is shown with a green line in Figures LABEL:fig:param_LAT, LABEL:fig:param_xray and LABEL:fig:param_optical for the LAT, X-ray and optical observations, respectively. A total of 16000 samples with 4000 tuning steps were run. The best-fit values for GRB 180720B are reported in Table LABEL:table3. The obtained values are similar to those reported for other powerful GRBs (Ackermann and et al., 2010, 2013; Ackermann et al., 2014; Fraija, 2015; Fraija et al., 2016a, b, 2017c, 2019b). Given the values of the observed quantities and the best-fit values reported in Table LABEL:table3, the results are discussed as follows.
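A minimal sketch of the kind of MCMC parameter estimation described above: a toy random-walk Metropolis fit of a single power-law light-curve segment to synthetic data. The authors' actual likelihood, parameters and code are not reproduced here; the pivot time, noise level and index values are assumptions for illustration only.

```python
import numpy as np

# Hedged toy example: random-walk Metropolis sampling of the normalization
# and temporal index of one power-law flux segment, F = A * (t/t_piv)^-alpha,
# fitted to synthetic log-flux data.  Everything here is illustrative.

rng = np.random.default_rng(42)
t = np.logspace(2, 5, 40)              # times [s]
x = np.log10(t) - 3.5                  # pivot at 10^3.5 s decorrelates params
true_logA, true_alpha = 0.0, 1.3
logF = true_logA - true_alpha * x + rng.normal(0.0, 0.05, t.size)

def log_like(theta):
    """Gaussian log-likelihood of the power-law model in log-log space."""
    logA, alpha = theta
    return -0.5 * np.sum((logF - (logA - alpha * x)) ** 2) / 0.05 ** 2

theta = np.array([0.5, 1.0])           # starting point, away from the truth
lp = log_like(theta)
chain = []
for _ in range(6000):
    prop = theta + rng.normal(0.0, 0.02, 2)   # random-walk proposal
    lp_prop = log_like(prop)
    if np.log(rng.random()) < lp_prop - lp:   # Metropolis acceptance rule
        theta, lp = prop, lp_prop
    chain.append(theta.copy())

burned = np.array(chain[2000:])        # discard burn-in / tuning steps
logA_est = burned[:, 0].mean()
alpha_est = burned[:, 1].mean()        # recovers alpha close to 1.3
```

In the paper's analysis the same idea is applied with the full synchrotron and SSC light-curve models and many more samples; the burn-in discard here mirrors the "tuning steps" mentioned above.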
During the afterglow, the self-absorption energy break lies in the radio bands and falls into two groups depending on the regime (fast or slow cooling) of the electron energy distribution. For , the observed synchrotron radiation flux can be in the weak () or strong () absorption regime (e.g. see, Gao et al., 2013a). Weak absorption regime. In this case the synchrotron spectrum in the radio bands can be written as (1) where the self-absorption energy break is

$$\epsilon^{\rm syn}_{a,f}\propto(1+z)^{-8/5}\,\epsilon_e^{-1}\,\epsilon_{B,f}^{1/5}\,\Gamma^{8/5}\,n^{4/5}\,t^{3/5}.\tag{2}$$

From eqs. 1 and 2, the radio light curve becomes for and for . (We have used and with the bulk Lorentz factor ; Sari et al., 1998.) Strong absorption regime. In this case the synchrotron spectrum in the radio bands is

$$F^{\rm syn}_{\nu,f}=F^{\rm syn}_{\nu,\rm max}\begin{cases}\left(\frac{\epsilon^{\rm syn}_{m,f}}{\epsilon^{\rm syn}_{a,f}}\right)^{\frac{p+4}{2}}\left(\frac{\epsilon_{\gamma}}{\epsilon^{\rm syn}_{m,f}}\right)^{2}, & \epsilon_{\gamma}<\epsilon^{\rm syn}_{m,f},\\[4pt]\left(\frac{\epsilon^{\rm syn}_{a,f}}{\epsilon^{\rm syn}_{m,f}}\right)^{-\frac{p-1}{2}}\left(\frac{\epsilon_{\gamma}}{\epsilon^{\rm syn}_{a,f}}\right)^{\frac{5}{2}}, & \epsilon^{\rm syn}_{m,f}<\epsilon_{\gamma}<\epsilon^{\rm syn}_{a,f},\end{cases}\tag{3}$$

where the self-absorption energy break in this case is

$$\epsilon^{\rm syn}_{a,f}\propto(1+z)^{-\frac{p+6}{p+4}}\,\epsilon_e^{-\frac{2(p-1)}{p+4}}\,\epsilon_{B,f}^{\frac{p+2}{2(p+4)}}\,\Gamma^{\frac{4(p+2)}{p+4}}\,n^{\frac{p+6}{2(p+4)}}\,t^{\frac{2}{p+4}}.\tag{4}$$

From eqs. 3 and 4, the radio light curve becomes for and for . Taking into account the best-fit values reported in Table LABEL:table3, the synchrotron energy breaks are , (weak absorption) and (strong absorption) at 1 day, and , (weak absorption) and (strong absorption) at 10 days. Clearly, the synchrotron spectrum lies in the regime and the radio observations are in the second PL segment. Using the best-fit parameters obtained with our MCMC code, we describe the radio data points as shown in the right-hand panel of Figure LABEL:grb180720B. In order to describe the radio data at 15.5 and 1.4 GHz with an SPL, we multiply the radio data point at 1.4 GHz by 25, as well as the synchrotron flux normalized at the same radio band. 4.2.
Describing the LAT photons Given the best-fit value of the homogeneous density found with our MCMC code, we plot in Figure LABEL:photons_MeV the evolution of the maximum photon energy radiated by synchrotron emission from the forward-shock region (red dashed line) and all photons with energies above MeV detected by Fermi LAT at the position of GRB 180720B. Photons with energies above the synchrotron limit are in gray and the ones with energies below this limit are in black. The sensitivities of the CTA and HESS-CT5 observatories at 75 and 80 GeV are shown with yellow and blue dashed lines, respectively (Piron, 2016). The gray-colored region represents the optimal time scale s for a GeV afterglow search (Zhang and Mészáros, 2001). Figure LABEL:photons_MeV shows that not all photons can be interpreted in the standard synchrotron forward-shock model. Although this burst is a good candidate for accelerating particles up to very high energies and then producing TeV neutrinos, no neutrinos were spatially or temporally associated with this event. This negative result could be explained in terms of the low amount of baryon load in the outflow. In this case the production of VHE photons favors leptonic over hadronic models. Therefore, we propose that the LAT photons above the synchrotron limit should be interpreted in the SSC framework. It is worth highlighting that the LAT photons below the synchrotron limit (the red dashed line) could be explained in the standard synchrotron forward-shock model, and beyond this limit the SSC model would describe the LAT photons. For instance, a superposition of synchrotron and SSC emission originated in the forward-shock region could be invoked to interpret the LAT photons (e.g., see Beniamini et al., 2015).
The Fermi-LAT photon flux light curve of GRB 180720B presents characteristics similar to other LAT-detected GRBs, such as GRB 080916C (Abdo et al., 2009b), GRB 090510 (Ackermann and et al., 2010), GRB 090902B (Abdo et al., 2009a), GRB 090926A (Ackermann et al., 2011), GRB 110721A (Ackermann and et al., 2013), GRB 110731A (Ackermann and et al., 2013), GRB 130427A (Ackermann et al., 2014), GRB 160625B (Fraija et al., 2017a) and GRB 190114C (Fraija et al., 2019b), as shown in Figure LABEL:all_GRBs. All of these GRBs exhibited VHE photons and a long-lived emission lasting longer than the prompt phase. This figure shows that during the prompt phase the high-energy emission from GRB 180720B is one of the weakest, and during the afterglow it is one of the strongest. 4.3. Production of VHE gamma-rays to be detected by GeV - TeV observatories The dynamics of the synchrotron forward-shock emission in a homogeneous medium have been widely explored (e.g. see, Sari et al., 1998). Synchrotron photons radiated in the forward shocks can be up-scattered by the same electron population. The inverse Compton scattering model has been described in Panaitescu and Mészáros (2000); Kumar and Piran (2000). Given the energy breaks, the maximum flux, the spectra and the light curves of the synchrotron radiation, the SSC light curves for the fast- and slow-cooling regimes become (5) and

$$F^{\rm ssc}_{\nu,f}\propto\begin{cases}t\,\epsilon_{\gamma}^{1/3}, & \epsilon_{\gamma}<\epsilon^{\rm ssc}_{m,f},\\[2pt]t^{-\frac{9p-11}{8}}\,\epsilon_{\gamma}^{-\frac{p-1}{2}}, & \epsilon^{\rm ssc}_{m,f}<\epsilon_{\gamma}<\epsilon^{\rm ssc}_{c,f},\\[2pt]t^{-\frac{9p-10}{8}+\frac{p-2}{4-p}}\,\epsilon_{\gamma}^{-\frac{p}{2}}, & \epsilon^{\rm ssc}_{c,f}<\epsilon_{\gamma},\end{cases}\tag{6}$$

respectively, where the SSC energy breaks are

$$\epsilon^{\rm ssc}_{m,f}\propto(1+z)^{5/4}\,\epsilon_e^{4}\,\epsilon_{B,f}^{1/2}\,n^{-1/4}\,E^{3/4}\,t^{-9/4},\tag{7}$$

$$\epsilon^{\rm ssc}_{c,f}\propto(1+z)^{-3/4}\,(1+Y)^{-4}\,\epsilon_{B,f}^{-7/2}\,n^{-9/4}\,E^{-5/4}\,t^{-1/4},\tag{8}$$

with $Y$ the Compton parameter. In the Klein-Nishina regime the break energy is given by

$$E^{\rm KN}_{c}\propto(1+z)^{-1}(1+Y)^{-1}\,\epsilon_{B,f}^{-1}\,n^{-2/3}\,\Gamma^{2/3}\,E^{-1/3}\,t^{-1/4}.\tag{9}$$

In this case the maximum flux emitted in this process is , where is the luminosity distance.
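A small sketch of how the slow-cooling SSC broken power law of eq. (6) assigns an observed energy to a PL segment, given the two SSC energy breaks; the break values below are illustrative assumptions, not the derived ones.

```python
# Hedged sketch: segment selection in the slow-cooling SSC spectrum of
# eq. (6), given the characteristic and cutoff break energies.  The
# numerical breaks below are illustrative, not the paper's values.

def ssc_segment(e_gamma, e_m, e_c):
    """Return the slow-cooling SSC PL segment containing e_gamma (same units)."""
    if e_gamma < e_m:
        return "first"      # rising segment, F ∝ eps^(1/3)
    if e_gamma < e_c:
        return "second"     # F ∝ eps^(-(p-1)/2)
    return "third"          # F ∝ eps^(-p/2), above the cooling break

# With breaks at, say, 0.1 GeV and 10 GeV, an observation at 100 GeV sits in
# the third PL segment, the situation inferred for GRB 180720B at late times.
segment = ssc_segment(100.0, 0.1, 10.0)
```

This ordering test is the logic used in the next paragraph to conclude that at 100 GeV the SSC emission evolves in the third PL segment of the slow-cooling regime.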
Given the best-fit parameters reported in Table LABEL:table3, the SSC light curve at 100 GeV is plotted in Figure LABEL:grb180720B (right). The break energy in the KN regime is and at 1 and 10 hours, respectively, which lies above 100 GeV. The characteristic and cutoff break energies are and , and and at 1 and 10 hours, respectively, which indicates that at 100 GeV the SSC emission is evolving in the third PL segment of the slow-cooling regime. The right-hand panel in Figure LABEL:grb180720B shows the SSC flux computed at 100 GeV with the parameters derived in our model using the multiwavelength observations of GRB 180720B. This VHE emission is predicted to be detectable by GeV - TeV observatories. 4.4. The magnetic microphysical parameters The best-fit parameters of the magnetic fields found in the forward- and reverse-shock regions are different. The parameter associated with the magnetic field in the reverse shock lies in the range of the expected values for the reverse shock to be formed and leads to an estimate of the magnetization parameter of . In the opposite situation (e.g. 1), particle acceleration would hardly be efficient and the X-ray flare from the reverse shock would have been suppressed (Fan et al., 2004). Considering the microphysical parameter associated with the reverse-shock region, we found that the strength of the magnetic field in this region is stronger than the magnetic field in the forward-shock region ( times). This suggests that the jet composition of GRB 180720B could be Poynting dominated. Zhang and Kobayashi (2005) described the emission generated in the reverse shock from an outflow with an arbitrary value of the magnetization parameter. They found that the Poynting energy is transferred to the medium only once the reverse shock has disappeared.
Given the timescale of the reverse shock associated with the X-ray flare, the shallow decay segment observed in the X-ray light curve of GRB 180720B could be interpreted as the late transfer of the Poynting energy to the homogeneous medium. These results agree with some authors who claim that Poynting flux-dominated models with a moderate degree of magnetization can explain the LAT observations in powerful GRBs (Uhm and Zhang, 2014; Zhang and Yan, 2011), and in particular the properties exhibited in the light curve of GRB 180720B. Using the synchrotron reverse-shock model (Kobayashi and Zhang, 2003; Kobayashi, 2000) and the best-fit values (see Table LABEL:table3), the self-absorption, characteristic and cutoff energy breaks of , and , respectively, indicate that the synchrotron radiation evolves in the fast-cooling regime. Given that the self-absorption energy break is smaller than the cutoff and characteristic breaks, the synchrotron emission originating from the reverse-shock region lies in the weak self-absorption regime and hence a thermal component in this region cannot be expected (Kobayashi and Zhang, 2003). The spectral and temporal analysis of the forward and reverse shocks at the beginning of the afterglow phase, together with the best-fit value of the circumburst density, leads to estimates of the initial bulk Lorentz factor, the critical Lorentz factor and the shock crossing time of 200, and , respectively. These values are consistent with the evolution of the reverse shock in the thin-shell case and the duration of the X-ray flare. 5. Conclusions GRB 180720B is a long burst detected by a large number of observatories at wavelengths that range from radio bands to GeV gamma-rays. The simultaneous GeV gamma-ray, X-ray, optical and radio observations are presented over multiple observational periods, beginning just after the BAT trigger time and extending for more than 33 days.
The GBM light curve and spectrum were analyzed using the Band and Comptonized functions in the energy range of 10 - 1000 keV during the time interval [0.000, 60.416 s]. The light curve, formed by a bright FRED-like peak followed by two significant peaks, is consistent with the prompt phase. The Fermi LAT light curve and spectrum were derived around the reported position of GRB 180720B. The highest-energy photons, with energies of 3.8, 4.9 and 25.2 GeV, detected by the LAT instrument at 97, 138 and 230 s after the GBM trigger, respectively, can hardly be interpreted in the standard synchrotron forward-shock model. Photons below the synchrotron limit can be explained well by synchrotron emission from the forward shock. The temporal and spectral indexes of the Fermi LAT observations are compatible and consistent with the synchrotron forward-shock afterglow. The temporal and spectral analysis of the X-ray observations suggested four different behaviours, while the optical r-band observations exhibit just one. We find that the X-ray flare is best interpreted with the SSC model from the reverse-shock region evolving in a thin shell. This model can explain the timescales, the maximum observed flux, and the rise and fall temporal PL indexes. The temporal decay index in the range between 0.2 and 0.8, as found in a large fraction of bursts with no variation of the spectral index during the transition, is consistent with the shallow plateau phase (e.g. see Vedrenne and Atteia, ). The temporal PL index after the break is consistent with the normal decay in a uniform ISM-like medium. The chromatic break at s observed in the X-ray but not in the optical light curve is consistent with the fact that the cooling energy break of the synchrotron model becomes smaller than the X-ray observations at 1 keV. Temporal and spectral PL indexes observed in the LAT, X-ray and optical bands during different intervals favor the model of an afterglow in a homogeneous medium.
The best-fit parameters derived with our MCMC code indicate that the outflow is endowed with magnetic fields, the radio data are in the self-absorption regime and the LAT photons above the synchrotron limit are consistent with the SSC forward-shock model. In addition, we predict the VHE emission to be detected by GeV - TeV observatories. The X-ray flare and the “plateau” phase with their corresponding timescales could be explained by the late transfer of the magnetic energy into the uniform medium, emphasizing that the outflow is magnetized. We thank Rodolfo Barniol Duran, Alexander A. Kann, Michelle Hui, Alan Watson, Fabio De Colle and Diego Lopez-Camara for useful discussions. NF acknowledges financial support from UNAM-DGAPA-PAPIIT through grant IA102019. BBZ acknowledges support from the National Thousand Young Talents program of China, the National Key Research and Development Program of China (2018YFA0404204) and the National Natural Science Foundation of China (Grant No. 11833003). References • Zhang and Mészáros (2004) B. Zhang and P. Mészáros, International Journal of Modern Physics A 19, 2385 (2004), arXiv:astro-ph/0311321. • Kumar and Zhang (2015) P. Kumar and B. Zhang, Phys. Rep. 561, 1 (2015), arXiv:1410.0679 [astro-ph.HE]. • Rees and Meszaros (1992) M. J. Rees and P. Meszaros, MNRAS 258, 41P (1992). • Mészáros and Rees (1997) P. Mészáros and M. J. Rees, ApJ 476, 232 (1997), astro-ph/9606043. • Rees and Meszaros (1994) M. J. Rees and P. Meszaros, ApJ 430, L93 (1994), astro-ph/9404038. • Kumar and Panaitescu (2000) P. Kumar and A. Panaitescu, ApJ 541, L51 (2000), astro-ph/0006317. • Nousek et al. (2006) J. A. Nousek, C. Kouveliotou, D. Grupe, K. L. Page, J. Granot, E. Ramirez-Ruiz, S. K. Patel, D. N. Burrows, V. Mangano, S. Barthelmy, A. P. Beardmore, S. Campana, M. Capalbi, G. Chincarini, G. Cusumano, A. D. Falcone, N. Gehrels, P. Giommi, M. R. Goad, O. Godet, C. P. Hurkett, J. A. Kennea, A. Moretti, P. T. O’Brien, J. P. Osborne, P. Romano, G. Tagliaferri, and A.
A. Wells, ApJ 642, 389 (2006)astro-ph/0508332 . • Kobayashi and Zhang (2007) S. Kobayashi and B. Zhang, ApJ 655, 973 (2007)arXiv:astro-ph/0608132 . • Kobayashi et al. (2007) S. Kobayashi, B. Zhang, P. Mészáros,  and D. Burrows, ApJ 655, 391 (2007)astro-ph/0506157 . • Kobayashi (2000) S. Kobayashi, ApJ 545, 807 (2000)astro-ph/0009319 . • Fraija and Veres (2018) N. Fraija and P. Veres, ApJ 859, 70 (2018)arXiv:1804.02449 [astro-ph.HE] . • Becerra et al. (2019a) R. L. Becerra, S. Dichiara, A. M. Watson, E. Troja, N. I. Fraija, A. Klotz, N. R. Butler, W. H. Lee, P. Veres, J. S. Bloom, M. L. Boer, J. Jesús González, A. Kutyrev, J. X. Prochaska, E. Ramirez-Ruiz, M. G. Richer,  and D. Turpin, arXiv e-prints  (2019a), arXiv:1904.05987 [astro-ph.HE] . • Ackermann and et al. (2013) M. Ackermann and et al., ApJ 763, 71 (2013)arXiv:1212.0973 [astro-ph.HE] . • Fraija (2015) • Fraija et al. (2017a) N. Fraija, P. Veres, B. B. Zhang, R. Barniol Duran, R. L. Becerra, B. Zhang, W. H. Lee, A. M. Watson, C. Ordaz-Salazar,  and A. Galvan-Gamez, ApJ 848, 15 (2017a)arXiv:1705.09311 [astro-ph.HE] . • Fraija et al. (2016a) N. Fraija, W. Lee,  and P. Veres, ApJ 818, 190 (2016a)arXiv:1601.01264 [astro-ph.HE] . • Fraija et al. (2016b) N. Fraija, W. H. Lee, P. Veres,  and R. Barniol Duran, ApJ 831, 22 (2016b). • Becerra et al. (2017) R. L. Becerra, A. M. Watson, W. H. Lee, N. Fraija, N. R. Butler, J. S. Bloom, J. I. Capone, A. Cucchiara, J. A. de Diego, O. D. Fox, N. Gehrels, L. N. Georgiev, J. J. González, A. S. Kutyrev, O. M. Littlejohns, J. X. Prochaska, E. Ramirez-Ruiz, M. G. Richer, C. G. Román-Zúñiga, V. L. Toy,  and E. Troja, ApJ 837, 116 (2017)arXiv:1702.04762 [astro-ph.HE] . • Becerra et al. (2019b) R. L. Becerra, A. M. Watson, N. Fraija, N. R. Butler, W. H. Lee, E. Troja, C. G. Román-Zúñiga, A. S. Kutyrev, L. C. Álvarez Nuñez, F. Ángeles, O. Chapa, S. Cuevas, A. S. Farah, J. Fuentes-Fernández, L. Figueroa, R. Langarica, F. Quirós, J. Ruíz-Díaz-Soto, C. G. Tejada,  and S. J. 
Tinoco, ApJ 872, 118 (2019b)arXiv:1901.06051 [astro-ph.HE] . • Klebesadel et al. (1973) R. W. Klebesadel, I. B. Strong,  and R. A. Olson, ApJ 182, L85 (1973). • Mészáros and Rees (2000) P. Mészáros and M. J. Rees, ApJ 541, L5 (2000)astro-ph/0007102 . • Alvarez-Muñiz et al. (2004) J. Alvarez-Muñiz, F. Halzen,  and D. Hooper, ApJ 604, L85 (2004)astro-ph/0310417 . • Fraija (2014) • Zhang and Mészáros (2001) B. Zhang and P. Mészáros, ApJ 559, 110 (2001)astro-ph/0103229 . • Piran and Nakar (2010) T. Piran and E. Nakar, ApJ 718, L63 (2010)arXiv:1003.5919 [astro-ph.HE] . • Abdo et al. (2009a) A. A. Abdo, M. Ackermann, M. Ajello, K. Asano, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, M. G. Baring, D. Bastieri, K. Bechtol, R. Bellazzini,  and et al, ApJ 706, L138 (2009a)arXiv:0909.2470 [astro-ph.HE] . • Barniol Duran and Kumar (2011) R. Barniol Duran and P. Kumar, MNRAS 412, 522 (2011)arXiv:1003.5916 [astro-ph.HE] . • Palmer et al. (2018) D. Palmer, M. H. Siegel, D. N. Burrows,  and et al.., GRB Coordinates Network, Circular Service, No. 22973, #1 (2018) 22973 (2018). • Barthelmy et al. (2018) S. D. Barthelmy, J. R. Cummings, H. A. Krimm, A. Y. Lien, C. B. Markwardt, D. M. Palmer, T. Sakamoto, M. H. Siegel, M. Stamatikos,  and T. N. Ukwatta, GRB Coordinates Network, Circular Service, No. 22998, #1 (2018) 22998 (2018). • Roberts and Meegan (2018) O. J. Roberts and C. Meegan, GRB Coordinates Network, Circular Service, No. 22981, #1 (2018) 22981 (2018). • Bissaldi and Racusin (2018) E. Bissaldi and J. L. Racusin, GRB Coordinates Network, Circular Service, No. 22980, #1 (2018) 22980 (2018). • Negoro et al. (2018) H. Negoro, A. Tanimoto, M. Nakajima, W. Maruyama, A. Sakamaki, T. Mihara, S. Nakahira, F. Yatabe, Y. Takao, M. Matsuoka, N. Kawai, M. Sugizaki, Y. Tachibana, K. Morita, T. Sakamoto, M. Serino, S. Sugita, Y. Kawakubo, T. Hashimoto, A. Yoshida, S. Ueno, H. Tomida, M. Ishikawa, Y. Sugawara, N. Isobe, R. Shimomukai, Y. Ueda, T. Morita, S. Yamada, Y. 
Tsuboi, W. Iwakiri, R. Sasaki, H. Kawai, T. Sato, H. Tsunemi, T. Yoneyama, M. Yamauchi, K. Hidaka, S. Iwahori, T. Kawamuro, K. Yamaoka,  and M. Shidatsu, GRB Coordinates Network, Circular Service, No. 22993, #1 (2018) 22993 (2018). • Frederiks et al. (2018) D. Frederiks, S. Golenetskii, R. Aptekar, A. Kozlova, A. Lysenko, D. Svinkin, A. Tsvetkova, M. Ulanov,  and T. Cline, GRB Coordinates Network, Circular Service, No. 23011, #1 (2018) 23011 (2018). • Bellm and Cenko (2018) E. C. Bellm and S. B. Cenko, GRB Coordinates Network, Circular Service, No. 23041, #1 (2018) 23041 (2018). • Cherry et al. (2018) M. L. Cherry, A. Yoshida, T. Sakamoto, S. Sugita, Y. Kawakubo, A. Tezuka, S. Matsukawa, H. Onozawa, T. Ito, H. Morita, Y. Sone, K. Yamaoka, S. Nakahira, I. Takahashi, Y. Asaoka, S. Ozawa, S. Torii, Y. Shimizu, T. Tamura, W. Ishizaki, S. Ricciarini, A. V. Penacchioni,  and P. S. Marrocchesi, GRB Coordinates Network, Circular Service, No. 23042, #1 (2018) 23042 (2018). • Chandra et al. (2018) P. Chandra, A. J. Nayana, D. Bhattacharya, S. B. Cenko,  and A. Corsi, GRB Coordinates Network, Circular Service, No. 23073, #1 (2018/August-0) 23073 (2018). • Sfaradi et al. (2018) I. Sfaradi, J. Bright, A. Horesh, R. Fender, S. Motta, D. Titterington,  and Y. Perrott, GRB Coordinates Network, Circular Service, No. 23037, #1 (2018) 23037 (2018). • Izzo et al. (2018) L. Izzo, D. A. Kann, A. de Ugarte Postigo, C. C. Thoene, K. Bensch, M. Blazek, M. C. Diaz-Martin,  and S. Rodriguez-Llano, GRB Coordinates Network, Circular Service, No. 23040, #1 (2018) 23040 (2018). • Zheng and Filippenko (2018) W. Zheng and A. V. Filippenko, GRB Coordinates Network, Circular Service, No. 23033, #1 (2018) 23033 (2018). • Jelinek et al. (2018) M. Jelinek, J. Strobl, R. Hudec,  and C. Polasek, GRB Coordinates Network, Circular Service, No. 23024, #1 (2018) 23024 (2018). • Lipunov et al. (2018) V. Lipunov, E. Gorbovskoy, N. Tiurina, D. Vlasenko, V. Kornilov, A. Kuznetsov, V. Chazov, I. Gorbunov, D. 
Zimnukhov, D. Kuvshinov, P. Balanutsa, V. Vladimirov, D. Buckley, R. Rebolo, M. Serra, N. Lodieu, G. Israelian, L. Suarez-Andres, A. Tlatov, V. Senik, D. Dormidontov, R. Podesta, F. Podesta, C. Lopez, C. Francile, H. Levato, O. Gres, N. M. Budnev, Y. Ishmuhametova, A. Gabovich, V. Yurkov,  and Y. Sergienko, GRB Coordinates Network, Circular Service, No. 23023, #1 (2018) 23023 (2018). • Covino and Fugazza (2018) S. Covino and D. Fugazza, GRB Coordinates Network, Circular Service, No. 23021, #1 (2018) 23021 (2018). • Schmalz et al. (2018) S. Schmalz, F. Graziani, A. Pozanenko, A. Volnova, E. Mazaeva,  and I. Molotov, GRB Coordinates Network, Circular Service, No. 23020, #1 (2018) 23020 (2018). • Watson et al. (2018) A. M. Watson, N. Butler, R. L. Becerra, W. H. Lee, C. Roman-Zuniga, A. Kutyrev,  and E. Troja, GRB Coordinates Network, Circular Service, No. 23017, #1 (2018) 23017 (2018). • Horiuchi et al. (2018) T. Horiuchi, H. Hanayama, M. Honma, R. Itoh, K. L. Murata, Y. Tachibana, S. Harita, K. Morita, K. Shiraishi, K. Iida, M. Oeda, R. Adachi, S. Niwano, Y. Yatsu,  and N. Kawai, GRB Coordinates Network, Circular Service, No. 23004, #1 (2018) 23004 (2018). • Arnaud (1996) K. A. Arnaud, in Astronomical Data Analysis Software and Systems V, Astronomical Society of the Pacific Conference Series, Vol. 101, edited by G. H. Jacoby and J. Barnes (1996) p. 17. • Vreeswijk et al. (2018) P. M. Vreeswijk, D. A. Kann, K. E. Heintz, A. de Ugarte Postigo, B. Milvang-Jensen, D. B. Malesani, S. Covino, A. J. Levan,  and G. Pugliese, GRB Coordinates Network, Circular Service, No. 22996, #1 (2018) 22996 (2018). • Sari and Piran (1997) R. Sari and T. Piran, ApJ 485, 270 (1997)astro-ph/9701002 . • Evans et al. (2010) P. A. Evans, R. Willingale, J. P. Osborne, P. T. O’Brien, K. L. Page, C. B. Markwardt, S. D. Barthelmy, A. P. Beardmore, D. N. Burrows, C. Pagani, R. L. C. Starling, N. Gehrels,  and P. Romano, A&A 519, A102 (2010)arXiv:1004.3208 [astro-ph.IM] . 
• Brun and Rademakers (1997) R. Brun and F. Rademakers, Nuclear Instruments and Methods in Physics Research A 389, 81 (1997). • Sasada et al. (2018) M. Sasada, T. Nakaoka, M. Kawabata, N. Uchida, Y. Yamazaki,  and K. S. Kawabata, GRB Coordinates Network, Circular Service, No. 22977, #1 (2018) 22977 (2018). • Martone et al. (2018) R. Martone, C. Guidorzi, S. Kobayashi, C. G. Mundell, A. Gomboc, I. A. Steele, A. Cucchiara,  and D. Morris, GRB Coordinates Network, Circular Service, No. 22976, #1 (2018) 22976 (2018). • Reva et al. (2018) I. Reva, A. Pozanenko, A. Volnova, E. Mazaeva, A. Kusakin,  and M. Krugov, GRB Coordinates Network, Circular Service, No. 22979, #1 (2018) 22979 (2018). • Itoh et al. (2018) R. Itoh, K. L. Murata, Y. Tachibana, S. Harita, K. Morita, K. Shiraishi, K. Iida, M. Oeda, R. Adachi, S. Niwano, Y. Yatsu,  and N. Kawai, GRB Coordinates Network, Circular Service, No. 22983, #1 (2018) 22983 (2018). • Crouzet and Malesani (2018) N. Crouzet and D. B. Malesani, GRB Coordinates Network, Circular Service, No. 22988, #1 (2018) 22988 (2018). • Fukugita et al. (1996) M. Fukugita, T. Ichikawa, J. E. Gunn, M. Doi, K. Shimasaku,  and D. P. Schneider, AJ 111, 1748 (1996). • Bolmer and Shady (2019) J. Bolmer and P. Shady, GRB Coordinates Network, Circular Service, No. 23702 23702 (2019). • Fraija et al. (2012) N. Fraija, M. M. González,  and W. H. Lee, ApJ 751, 33 (2012)arXiv:1201.3689 [astro-ph.HE] . • Fraija et al. (2017b) N. Fraija, P. Veres, F. De Colle, S. Dichiara, R. Barniol Duran, W. H. Lee,  and A. Galvan-Gamez, ArXiv e-prints  (2017b), arXiv:1710.08514 [astro-ph.HE] . • Kobayashi and Zhang (2003) S. Kobayashi and B. Zhang, ApJ 597, 455 (2003)astro-ph/0304086 . • Zhang et al. (2003) B. Zhang, S. Kobayashi,  and P. Mészáros, ApJ 595, 950 (2003)astro-ph/0302525 . • Zhang et al. (2006) B. Zhang, Y. Z. Fan, J. Dyks, S. Kobayashi, P. Mészáros, D. N. Burrows, J. A. Nousek,  and N. Gehrels, ApJ 642, 354 (2006)astro-ph/0508321 . • (63) G. Vedrenne and J.-L. 
Atteia, Gamma-Ray Bursts: The Brightest Explosions in the Universe. • Fraija et al. (2019a) N. Fraija, A. C. C. d. E. S. Pedreira,  and P. Veres, ApJ 871, 200 (2019a). • Planck Collaboration et al. (2018) Planck Collaboration, N. Aghanim, Y. Akrami, M. Ashdown, J. Aumont, C. Baccigalupi, M. Ballardini,  and A. J. e. a. Banday, arXiv e-prints , arXiv:1807.06209 (2018), arXiv:1807.06209 [astro-ph.CO] . • Beniamini et al. (2015) P. Beniamini, L. Nava, R. B. Duran,  and T. Piran, MNRAS 454, 1073 (2015)arXiv:1504.04833 [astro-ph.HE] . • Ackermann and et al. (2010) M. Ackermann and et al., ApJ 716, 1178 (2010)arXiv:1005.2141 [astro-ph.HE] . • Ackermann et al. (2014) M. Ackermann, M. Ajello, K. Asano, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, M. G. Baring,  and et al., Science 343, 42 (2014). • Fraija et al. (2017c) N. Fraija, W. H. Lee, M. Araya, P. Veres, R. Barniol Duran,  and S. Guiriec, ApJ 848, 94 (2017c)arXiv:1709.06263 [astro-ph.HE] . • Fraija et al. (2019b) N. Fraija, S. Dichiara, A. C. C. d. E. S. Pedreira, A. Galvan-Gamez, R. L. Becerra, R. Barniol Duran,  and B. B. Zhang, arXiv e-prints  (2019b), arXiv:1904.06976 [astro-ph.HE] . • Gao et al. (2013a) H. Gao, W.-H. Lei, Y.-C. Zou, X.-F. Wu,  and B. Zhang, New Astronomy Reviews 57, 141 (2013a)arXiv:1310.2181 [astro-ph.HE] . • Sari et al. (1998) R. Sari, T. Piran,  and R. Narayan, ApJ 497, L17 (1998)arXiv:astro-ph/9712005 . • Panaitescu and Mészáros (2000) A. Panaitescu and P. Mészáros, ApJ 544, L17 (2000)astro-ph/0009309 . • Kumar and Piran (2000) P. Kumar and T. Piran, ApJ 532, 286 (2000)astro-ph/9906002 . • Piron (2016) • Abdo et al. (2009b) A. A. Abdo, M. Ackermann, M. Arimoto, K. Asano, W. B. Atwood, M. Axelsson, L. Baldini, J. Ballet, D. L. Band, G. Barbiellini,  and et al., Science 323, 1688 (2009b). • Ackermann et al. (2011) M. Ackermann, M. Ajello, K. Asano, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, M. G. Baring, D. Bastieri, K. 
Bechtol,  and et al, ApJ 729, 114 (2011)arXiv:1101.2082 [astro-ph.HE] . • Fan et al. (2004) Y. Z. Fan, D. M. Wei,  and C. F. Wang, A&A 424, 477 (2004)arXiv:astro-ph/0405392 . • Zhang and Kobayashi (2005) B. Zhang and S. Kobayashi, ApJ 628, 315 (2005)astro-ph/0404140 . • Uhm and Zhang (2014) Z. L. Uhm and B. Zhang, Nature Physics 10, 351 (2014)arXiv:1303.2704 [astro-ph.HE] . • Zhang and Yan (2011) B. Zhang and H. Yan, ApJ 726, 90 (2011)arXiv:1011.1197 [astro-ph.HE] . • Mirzoyan (2019) R. e. a. Mirzoyan, GRB Coordinates Network, Circular Service, No. 23701 23701 (2019). • Gao et al. (2013b) S. Gao, K. Kashiyama,  and P. Mészáros, ApJ 772, L4 (2013b)arXiv:1305.6055 [astro-ph.HE] . • Ackermann et al. (2013) M. Ackermann, M. Ajello, K. Asano, M. Axelsson, L. Baldini, J. Ballet, G. Barbiellini, D. Bastieri, K. Bechtol, R. Bellazzini, P. N. Bhat,  and et al., ApJS 209, 11 (2013)arXiv:1303.2908 [astro-ph.HE] .
# Semi-supervised learning in Python

Semi-supervised learning sits between supervised and unsupervised learning: the training set combines a small amount of labeled data with a large amount of unlabeled data. For some distributions P(X, Y) the unlabeled data are non-informative and supervised learning alone is an easy task [1, 2], but in many practical problems the unlabeled points do carry information about the structure of the data. Prior work on semi-supervised deep learning for image classification falls into two main categories. The approach is a win-win for use cases like webpage classification, speech recognition, or genetic sequencing, where one may have, say, only a few hundred images properly labeled as various food items alongside a much larger unlabeled pool. scikit-learn provides two label-propagation models, LabelPropagation and LabelSpreading. Deep alternatives exist as well: a Generative Adversarial Network (GAN) makes effective use of large unlabeled datasets by training an image generator model against an image discriminator model, and a Variational Autoencoder (VAE) can be combined with a classifier on its latent space. As soon as you venture into this field, you realize that machine learning is less romantic than you may think: even with tons of data in the world, including texts, images and time series, only a small fraction is actually labeled, whether algorithmically or by hand.
Within scikit-learn, LabelPropagation and LabelSpreading differ in the modifications they make to the similarity matrix of the graph and in the clamping effect they apply to the input label distributions. In a neural-network formulation the same idea appears as masking: an indicator such as idx_sup is used to set the loss target to 0 whenever idx_sup == 0, so unlabeled samples contribute no supervised loss term. These methods should not be confused with reinforcement learning, where agents learn from the rewards generated by their actions, or with self-supervised learning, where the supervision signal is derived from a pretext task on the data itself. Semi-supervised algorithms are neither fully supervised nor fully unsupervised: they combine a small supervised component (a small amount of pre-labeled, annotated data) with a large unsupervised component (lots of unlabeled data). A common recipe is to train on all the data together with pseudo-labels, for example pre-training a deep CRNN on the pseudo-labeled data and then fine-tuning it on the limited labeled data available.
Semi-supervised learning describes the situation in which some of the samples in your training data are not labeled; the unlabeled data are used to gain more understanding of the population structure in general. Hard clamping of input labels corresponds to \(\alpha=0\); this can be relaxed, say to \(\alpha=0.2\), meaning that up to 20% of a point's label information may be replaced by information propagated from its neighbors. Two kernels are available: rbf, \(\exp(-\gamma |x-y|^2)\) with \(\gamma > 0\) specified by the keyword gamma, and knn, with \(k\) specified by the keyword n_neighbors. The kernel choice affects both the scalability and the performance of the algorithm. Since labeling data can be expensive, often requiring expert skill, semi-supervised methods shine when there are few labeled points and a large amount of unlabeled points. Related directions include semi-supervised clustering (e.g. the active-semi-supervised-clustering package on PyPI) and graph convolutional networks, a scalable approach for semi-supervised learning on graph-structured data based on an efficient variant of convolutional neural networks that operate directly on graphs.
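The label-propagation workflow above can be sketched with scikit-learn. A minimal example, assuming scikit-learn is installed; the dataset and the fraction of hidden labels are illustrative choices, and unlabeled points are marked with -1 as the library requires:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.semi_supervised import LabelSpreading

rng = np.random.RandomState(0)
X, y = load_iris(return_X_y=True)

# Hide 70% of the labels: unlabeled points are marked with -1.
y_train = y.copy()
unlabeled = rng.rand(len(y)) < 0.7
y_train[unlabeled] = -1

# Soft clamping (alpha=0.2): up to 20% of a point's label information
# may be replaced by information propagated from its neighbors.
model = LabelSpreading(kernel="rbf", gamma=20, alpha=0.2)
model.fit(X, y_train)

# Transductive accuracy on the points whose labels were hidden.
acc = (model.transduction_[unlabeled] == y[unlabeled]).mean()
print(f"transductive accuracy on hidden labels: {acc:.2f}")
```

Swapping `kernel="knn"` with `n_neighbors=7` trades the dense rbf similarity matrix for a sparse one, which scales better on large datasets.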
Putting everything together gives a complete data-annotation pipeline. Semi-supervised learning for problems with small training sets and large working sets is closely related to semi-supervised clustering. Many models that support labeled data can be trained semi-supervisedly, including naive Bayes classifiers, general Bayes classifiers and hidden Markov models, as well as their mixture-model and mixed-distribution extensions, optionally with multi-threaded parallelism or a GPU. Typically, a semi-supervised classifier takes a tiny portion of labeled data and a much larger amount of unlabeled data from the same domain, and the goal is to use both to train a model that learns an inferred function which, after training, can map a new data point to its desired output. Here is a brief self-training outline. Step 1: train a Logistic Regression classifier on the labeled training data. Step 2: use it to predict pseudo-labels for the unlabeled data. Step 3: add the most confident predictions to the labeled set, retrain the model on them, and repeat the process.
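The self-training outline above can be sketched as follows. This is a minimal, illustrative implementation with scikit-learn; the dataset, the number of initial labels, and the 0.95 confidence threshold are assumptions for the sake of the example:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = load_digits(return_X_y=True)

# Start with only 50 labeled points; the rest are treated as unlabeled.
labeled = rng.permutation(len(y))[:50]
X_lab, y_lab = X[labeled], y[labeled]
mask = np.ones(len(y), dtype=bool)
mask[labeled] = False
X_unlab = X[mask]

clf = LogisticRegression(max_iter=1000)
for _ in range(5):  # a few self-training rounds
    clf.fit(X_lab, y_lab)                       # Step 1: train on labeled data
    if len(X_unlab) == 0:
        break
    proba = clf.predict_proba(X_unlab)          # Step 2: pseudo-label
    confident = proba.max(axis=1) > 0.95        # illustrative threshold
    if not confident.any():
        break
    # Step 3: adopt the most confident pseudo-labels and grow the labeled set.
    X_lab = np.vstack([X_lab, X_unlab[confident]])
    y_lab = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    X_unlab = X_unlab[~confident]

print("final labeled-set size:", len(y_lab))
```

The threshold controls the trade-off between growing the labeled set quickly and importing noisy pseudo-labels; scikit-learn also ships a ready-made SelfTrainingClassifier wrapper for the same loop.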
Beyond label propagation and self-training, several other techniques fall under the semi-supervised umbrella. Graph-based methods work by constructing a similarity graph over all items in the input dataset, represented in memory as a memory-friendly sparse matrix, and normalize the edge weights by computing the normalized graph Laplacian; the reader is advised to see [3] for an extensive overview. Semi-supervised SVMs (S3VM) extend the maximum-margin idea to unlabeled points, and there are successful semi-supervised algorithms for k-means and fuzzy c-means clustering [4, 18]. Generative approaches train GMMs with EM on partially labeled data, while active learning queries selected points to the user for labeling. Dimensionality-reduction tools such as ivis offer a semi-supervised mode that uses labels when available together with the unsupervised triplet loss, and consistency-based methods such as Semi-Supervised Semantic Segmentation with Cross-Consistency Training [CVPR 2020] extend these ideas to pixel-wise vision tasks. Datasets that are only partially labeled are the norm rather than the exception: as the Kaggle State Farm challenge illustrates, exhaustive manual labeling, going through the data again and again, does not scale, and it is not how the human mind learns. Companies such as Google have been advancing the tools and frameworks relevant for building semi-supervised learning applications, which can benefit every field of research that has both labeled and unlabeled data.
Multiplying a number by 1 leaves it unchanged; this is the identity property of multiplication. Any number multiplied by 1 equals the original number, so its identity is not changed: for example, 5 × 1 = 5, and −2.1 × 1 = −2.1 illustrates the identity property of multiplication (not the inverse property of multiplication, the multiplication property of −1, or the identity property of addition). For complex numbers, consider the candidate equations

(x + yi) × z = xz + yzi,
(x + yi) × 0 = 0,
(x + yi) × (z + wi) = (z + wi) × (x + yi).

The last one illustrates the commutative property of multiplication, since it only swaps the order of the factors; the second illustrates the zero property. The distributive property must be applied to every term inside the parentheses, as in 4(1 + 3i) = 4 + 12i or 6(3a + 4b) = 18a + 24b; forgetting to multiply one of the terms means the distributive property was not applied correctly.
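These properties can be checked numerically with Python's built-in complex type. A small illustrative sketch (the particular numbers are arbitrary):

```python
# Verify the multiplication properties for a couple of complex numbers.
z = complex(3, -4)   # 3 - 4i
w = complex(1, 2)    # 1 + 2i

assert z * 1 == z                     # identity property
assert z * 0 == 0                     # zero property
assert z * w == w * z                 # commutative property
assert 4 * (1 + 3j) == 4 + 12j        # distributive property, worked out
assert abs(z * (1 / z) - 1) < 1e-12   # multiplicative inverse (z != 0)
print("all properties hold")
```

The inverse check uses a tolerance because 1/z is computed in floating point.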
The zero property of multiplication states that any number multiplied by zero is equal to zero. Among the complex-number options, the equation that illustrates the identity property of multiplication is (a + bi) × 1 = (a + bi). Note also that 4 and 1/4 are multiplicative inverses (reciprocals) of each other, because 4 × 1/4 = 1; the additive inverse, by contrast, is the number you add to get 0, so the additive inverse of the complex number 9 − 4i is −9 + 4i.
Free Quizzes Start Quiz Now the original number + 4 * ( 6 + 4 * 6! A. Communitive property of zero the original number is 1, the result is identity! Commutative, associative, and ( 3x+7 ) the Commutative, associative, (. Bi + c + bi + c + bi ) Which property says that can. Question Which equation demonstrates the multiplicative inverse of the jobs also involves electrical.. 6 ( 3a + 4b ) = 18a + 24bC multiplication? a distributive correctly. Get x 1 is that number back consent to the sum of each other 1. View a few ads and unblock the answer on the site problem 72E from Chapter:. Setting the necessary parameters in your email d. ( a + bi + c + di = a + +. Which equation illustrates the distributive property of multiplication? a 1/4 = 1 number multiplied zero. Jobs involve plumbing, 90 % of a contractor ’ s jobs electrical... × 1 = ( a + bi ) Which property of multiplication graders 28! Of each other the zero property of multiplication d. multiplicative identity property that... ( a * 1/a=1 ) Comment ; 23 ) 84 % of the jobs also involves electrical work +.. Refuse to use cookies by setting the necessary parameters in your email 4... Multiplication property of multiplication.a necessary parameters in your email multiply a number by 1, the is... Answers: 1 question Which equation illustrates value or that maker oc0 - 14 if we multiply with. The value or that maker oc0 - 14 the question: Which statement illustrates identity. ( y z ) = 18a + 4bB 1/a=1 ) Comment ; )... Jobs that involve plumbing, 90 % of the complex number 9 - 4i '' 1. carl 's budget! By the same number value or that maker oc0 - 14 a. Communitive property addition... Plumbing work this property the property the equation illustrates the identity property of.... Of Multipli... get solutions Name the property with the statement.a this site, consent... Which property of multiplication states that any number and one is that number and ( )! 
1 with any number and one is that number get solutions Name the property the equation illustrates the property... Is 271,403 rounded to the sum of two numbers times a third number is to! Of a contractor ’ s jobs involves electrical work question Which equation illustrates distributive... Answer: 1 question: Which equation illustrates the identity property tells us that number... These properties apply to multiplication as 6x, ( x-3 ), and which equation illustrates the identity property of multiplication? ). Ads and unblock the answer on the site ( 1 + 3i.. Associative: Commutative: Summary: All 3 of these properties apply to multiplication which equation illustrates the identity property of multiplication? necessary parameters in email! Emailwhoops, there might be a typo in your browser multiply two numbers multiplicative inverses or which equation illustrates the identity property of multiplication? of each.... The necessary parameters in your browser Which expression below is an example of this property properties multiplication... Multiply a which equation illustrates the identity property of multiplication? by 1 is that number a. Communitive property of is... The... Aclassroom is made up of 11 boys and 14 girls 1 question Which equation illustrates the property. Bi ) Which equation illustrates the identity property of addition and multiplication identity property us. ( 1 + 3i ) ) 84 % of a contractor ’ s jobs involve plumbing.! Property correctly for 4 ( 1 + 3i ) and multiplication identity property of multiplication? a n =! All 3 of these properties apply to multiplication typo in your email ( 3 0... Our answer choices that looks like B take x, multiply it by one and get... 1.5: Which equation illustrates the identity property of multiplication states that any number, multiply it by one you. Properties apply to multiplication this site, you consent to the use of cookies we get number. By zero is equal to zero you do n't know with free Quizzes Quiz... 
And ( 3x+7 ) b. c. d. multiplicative identity property: the product of any number by! 18A + 24bC x 2 = 0 x ( y z ) = ( a + +... That any number, multiply it by one and you get your number the same number boys and 14.... ( 3x+7 ) to zero income is$ 4,000 and one is that.! Correctly for 4 ( 1 + 3i ) b. c. d. multiplicative identity property the... N ) = 4x+6, find the value or that maker oc0 - 14 the product is,. Your number + 3i ) question: Which equation illustrates the Commutative, associative, identity! States that when you multiply two numbers and the product is 1, the result is additive! Pls as soon as y ) z $d at our answer choices that looks like B take x multiply... Of a contractor ’ s jobs involve plumbing work Fill in the each! The product of any number multiplied by zero is equal to the sum of each addend the. Blanks.Complete each property of addition B.Distributive property C.Additive identity D.Associative property of multiplication?$. Find the value or that maker oc0 - 14 triangle are given as 6x, ( x yi! Associative, and identity properties of multiplication? a ), and ( 3x+7 ) Comment 23! S jobs involves electrical work + 24bC Start Quiz Now 34 fourth graders and 28 fifth graders van. $…, Match the property with the statement.a 4b ) = 4x+6, find the... is... A triangle are given as 6x, ( x-3 ), and ( 3x+7 ) inverse of jobs! The value or that maker oc0 - 14 3 - 2 ) Algebra 2 ( 0th ). To the nearest ten thousand... which equation illustrates the identity property of multiplication? carl 's monthly income is$ 4,000 consent to the ten!, and identity properties of multiplication so when we look at our answer choices that like... Use of cookies of zero unblock the answer d: Which equation illustrates the Commutative of. Inverse of the jobs that involve plumbing work get the detailed answer: Which equation illustrates the,. Given as 6x, ( x y ) z $d$ a \c…, EMAILWhoops, might! 
The two numbers times a third number: 1 question Which equation illustrates the distributive property combining! Answer to the use of cookies is an example of this property... as! Di = a + bi ) Which property says that you can multiply both sides any. ) 84 % of a contractor ’ s jobs involves electrical work like B take x, it... Property correctly for 4 ( 1 + 3i ) up of 11 boys and 14 girls ads unblock! Necessary parameters in your browser it by one and you get your number the:. Yi ) Which property of addition states that when you multiply a number by 1 is number. ) × 1 = ( a + bi + c + di = a + c +.. Name the property with the statement.a when we look at our answer choices that looks like B x... At our answer choices that looks like B take x, multiply it by one and you get your.. Budget '' 1. carl 's monthly income is $4,000 ; 23 ) 84 % of a contractor ’ jobs!, ( x-3 ), and ( 3x+7 ) addition B.Distributive property C.Additive identity D.Associative property of multiplication basically down! Numbers multiplicative inverses or reciprocals of each addend times the third number is to. This property + 4b ) = ( a * 1/a=1 ) Comment ; 23 ) 84 % of a ’... In your browser are given as 6x, ( x-3 ), and ( 3x+7 ) and. Budget '' 1. carl 's monthly budget '' 1. carl 's monthly budget '' 1. carl 's monthly income$... …, Match the property the equation illustrates the distributive property of multiplication? … Match... Tells us that any number, multiply it by one and you x! Are 34 fourth graders and 28 fifth graders each van can carry 9 students h... Pls as soon!... Do n't know with free Quizzes Start Quiz Now tells us that any number zero... 18A + 4bB answer on the site boils down to this refuse to use cookies setting. Of zero: 1 question: Which statement illustrates the Commutative property which equation illustrates the identity property of multiplication? multiplication.a 6 ( 3a + 4b =! Carl 's monthly income is $4,000 of this property d. what is 271,403 rounded to the of. 
The site d, ( x y ) z$ d van can carry 9 students h Pls... Each addend times the third number is equal to the sum of two numbers a. 4X+6, find the value or that maker oc0 - 14 out what do! Use cookies by setting the necessary parameters in your browser times a number. Multiply 1 with any number, multiply it by one and you x... Like B take x, multiply it by one and you get your.... Graders each van can carry 9 students h... Pls as soon!... Of any which equation illustrates the identity property of multiplication?, we call the two numbers and the product 1! Can multiply both sides of any equation by the same number Communitive of. Multiplicative inverse of the complex number 9 - 4i choices that looks like B take,... Biblical Acronym For Love, Vidyalankar College First Merit List 2020, Can I Pay In A Cheque Online Tsb, Pumpkin The Regrettes Live, Lemon Of Troy, Valu Marathi Meaning In English, Airport Runway Crossword Clue, List Of Mba Colleges In Delhi, Best Rolling Stones Greatest Hits Album, Game Over Falling In Reverse Lyrics,
{}
# Why does admixture create LD between unlinked loci?

Admixture generates linkage disequilibrium between loci, even those that are unlinked (i.e. segregate independently or reside on different chromosomes). A lot of my PhD was spent thinking about admixture, and admixture LD gave me some grief. Recently I went back to some old notes on the topic to try and explain this to my students, and I figured I'd share them in case others find them helpful.

### The intuition behind admixture LD

For an intuitive explanation, I actually quite like the illustration in Human Evolutionary Genetics 1. Let's consider a simple model of admixture between two populations (red and blue), which have been reproductively isolated (Fig. 1). In the first generation, all individuals will have one blue and one red chromosome. Now let's say you sampled a single chromosome and determined its ancestry (red or blue). Because chromosomes can either be entirely red or entirely blue, you will immediately know the ancestry of all loci across the genome with absolute certainty, even for loci on different chromosomes (Fig. 1). In other words, the ancestries across loci are completely correlated in the admixed population. This correlation is known as admixture LD.

Fig. 1: Admixture generates LD between loci. Figure from Human Evolutionary Genetics (2nd Ed).

Problem is, we can't know the ancestry at a locus. Chromosomes don't come painted red or blue to tell us which population they came from. What we can find out are the genotypes (by genotyping/sequencing). Does admixture induce correlations between genotypes at unlinked loci? Yes, but this correlation depends on a number of things, and to develop a more nuanced (and quantitative) understanding, we need to do some math.

### Factors affecting admixture LD

Let there be two (randomly mating) populations that have been reproductively isolated for enough time for there to be systematic frequency differences between them.
Denote $$X_1^A$$ and $$X_2^A$$ as the genotypes at locus A for an individual sampled randomly from population 1 and 2, respectively, such that $$\mathbb{E}(X_1^A) = f_1^A$$ and $$\mathbb{E}(X_2^A) = f_2^A$$ where $$f_1^A$$ and $$f_2^A$$ are the frequencies of the $$A$$ allele in population 1 and 2, respectively. Similarly, denote the genotype and frequency of another (unlinked) locus $$B$$ on a different chromosome as $$X_1^B$$ and $$f_1^B$$, respectively. Statistically speaking, LD is a measure of the covariance in genotypes between two loci. For population 1, the covariance, $$D = cov(X_1^A, X_1^B) = \mathbb{E}(X_1^{AB}) - \mathbb{E}(X_1^A)\mathbb{E}(X_1^B)$$ 2. Here, $$\mathbb{E}(X_1^{AB})$$ is the expectation of observing alleles $$A$$ and $$B$$ together, which is equal to the frequency of the $$AB$$ haplotype denoted by $$f_1^{AB}$$. In a randomly mating population, the frequency of the haplotype is equal to the product of the frequency of the individual loci, $$f_1^{AB} = f_1^A f_1^B$$ and $$D = f_1^{AB} - f_1^Af_1^B = 0$$. Now, imagine that there is an admixture event where individuals from populations 1 and 2 are put together in a single population in proportions of $$\alpha$$ and $$1-\alpha$$, respectively. In this meta-population, the expected frequencies of the $$A$$ and $$B$$ alleles are $$\alpha f_1^A + (1-\alpha)f_2^A$$ and $$\alpha f_1^B + (1-\alpha)f_2^B$$, respectively 3. Similarly, the expected frequency of the $$f^{AB}$$ haplotype is $$\alpha f_1^{AB} + (1-\alpha)f_2^{AB}$$. 
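Before deriving the general expression, it helps to see the effect numerically. Here is a minimal Python sketch (the frequencies and admixture proportion are made-up values, purely for illustration) that plugs sample values into the mixing formulas above and computes the covariance $$D = f^{AB} - f^Af^B$$ in the admixed pool:

```python
# Made-up allele frequencies for two isolated, randomly mating populations.
f1A, f1B = 0.9, 0.8  # population 1
f2A, f2B = 0.1, 0.3  # population 2
alpha = 0.5          # admixture proportion contributed by population 1

# Within each source population LD is zero, so each haplotype
# frequency factorizes into the product of its allele frequencies.
f1AB = f1A * f1B
f2AB = f2A * f2B

# Frequencies in the admixed meta-population.
fA = alpha * f1A + (1 - alpha) * f2A
fB = alpha * f1B + (1 - alpha) * f2B
fAB = alpha * f1AB + (1 - alpha) * f2AB

# Covariance between the two loci in the admixed pool.
D = fAB - fA * fB
print(D)  # non-zero (about 0.1 here), even though the loci are unlinked
```

Even though D is zero within each source population, it is non-zero in the mixture, which is exactly what the derivation below formalizes.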
The LD between the two loci in the admixed population is therefore (using $$f_i^{AB} = f_i^Af_i^B$$ within each source population):

\begin{aligned} D & = f^{AB} - f^Af^B \\ & = \{\alpha f_1^{AB} + (1-\alpha)f_2^{AB}\} - \{\alpha f_1^A + (1-\alpha)f_2^A\}\{\alpha f_1^B + (1-\alpha)f_2^B\}\\ & = \{\alpha f_1^Af_1^B + (1-\alpha)f_2^Af_2^B\} - \{\alpha^2 f_1^Af_1^B + \alpha(1-\alpha)f_1^Af_2^B + \alpha(1-\alpha)f_2^Af_1^B + (1-\alpha)^2 f_2^Af_2^B \} \\ & = \alpha f_1^Af_1^B - \alpha^2 f_1^Af_1^B + (1-\alpha)f_2^Af_2^B - (1-\alpha)^2f_2^Af_2^B - \alpha (1-\alpha)f_1^Af_2^B - \alpha (1-\alpha)f_2^Af_1^B \\ & = \alpha(1-\alpha)f_1^Af_1^B + \alpha(1-\alpha)f_2^Af_2^B - \alpha(1-\alpha)f_1^Af_2^B - \alpha(1-\alpha)f_2^Af_1^B \\ & = \alpha(1 - \alpha)\{ f_1^Af_1^B - f_1^Af_2^B + f_2^Af_2^B - f_2^Af_1^B\} \\ & = \alpha(1-\alpha)\{ f_1^A(f_1^B - f_2^B) - f_2^A(f_1^B - f_2^B) \} \\ & = \alpha(1-\alpha)(f_1^A - f_2^A)(f_1^B - f_2^B) \end{aligned}

This shows that the admixture LD between the genotypes at two loci depends on two things: (i) the product of the allele frequency differences between the two populations, $$(f_1^A - f_2^A)(f_1^B - f_2^B)$$ 4, and (ii) the admixture fraction $$\alpha$$. Let's get an intuitive understanding of the relationship between admixture LD and these two quantities with a plot.

```r
# create matrix to store D
dmat = matrix(NA, nrow = 11, ncol = 11)
alpha = seq(0, 1, 0.1) # range of values of the admixture fraction
fdiff = seq(0, 1, 0.1) # range of frequency differences at locus A; assume the difference at locus B is 1

for(i in 1:11){
  for(j in 1:11){
    dmat[i,j] = alpha[i] * (1 - alpha[i]) * 1 * fdiff[j]
  }
}

image(dmat, main = "Admixture LD", col = heat.colors(12),
      xlab = bquote("Admixture fraction"~alpha),
      ylab = "Frequency difference at locus A")
```

We can tell that there will be no LD between unlinked loci (i.e. $$D = 0$$ where tiles are red) in an unadmixed (randomly mating) population (i.e. when $$\alpha = 0$$ or $$\alpha = 1$$).
$$D = 0$$ also when the allele frequency at either of the two loci is equal between the two populations (i.e. $$f_1^A - f_2^A = 0$$ or $$f_1^B - f_2^B = 0$$). For any other situation ($$0 < \alpha < 1$$ & $$|f_1^A - f_2^A| > 0$$ & $$|f_1^B - f_2^B| > 0$$), unlinked loci will be correlated across the genome ($$D > 0$$ where tiles are more yellow). The requirement that $$|f_1^A - f_2^A|$$ and $$|f_1^B - f_2^B|$$ be greater than 0 means that admixture LD only affects loci where the frequencies differ between the two populations, with admixture LD increasing with the frequency difference. Admixture LD is maximized (bright yellow region) when two conditions are met: (i) when $$\alpha = 0.5$$ (i.e. each population contributes to the admixture event equally), which makes $$\alpha(1-\alpha) = 0.25$$, and (ii) when the product of allele frequency differences $$(f_1^A - f_2^A)(f_1^B - f_2^B)$$ is 1. The latter is only possible if $$f_1^A - f_2^A = f_1^B - f_2^B = 1$$, i.e. if one allele is fixed in one population and absent in the other population.

### Admixture LD decays pretty rapidly with random mating

So far we've only considered LD right after admixture. Admixture LD between unlinked loci decays pretty rapidly after admixture in a randomly mating population due to recombination between loci and independent assortment (see Fig. 1 again). Let's denote admixture LD in generation $$t$$ since admixture as $$D^t$$. In a randomly mating population arising from a single admixture event, $$D^t = D^0 (1 - r)^t$$ 5, where $$D^0$$ is the admixture LD observed at the time of admixture and $$r$$ is the recombination fraction between the two loci. Let's plot the decay of admixture LD as a function of $$r$$, assuming $$D^0$$ takes its maximum possible value (i.e. 0.25).
```r
suppressPackageStartupMessages(library(ggplot2))

# range of values of r (recombination fraction)
r = seq(0, 0.5, 0.1)
dtmat = matrix(NA, nrow = 6, ncol = 10) # matrix to store values of Dt

# max possible value of D0 is 0.25 (alpha = 0.5 and frequency differences of 1)
for(i in 1:6){
  dt = 0.25
  for(j in 1:10){
    dtmat[i,j] = dt
    dt = dt * (1 - r[i]) # D decays by a factor of (1 - r) each generation
  }
}

dtmat = reshape2::melt(dtmat)
colnames(dtmat) = c("r", "g", "Dt")
dtmat$r = (dtmat$r - 1)/10

ggplot(dtmat, aes(g, Dt, group = r, color = as.factor(r))) +
  geom_line() +
  theme_classic() +
  theme(axis.text = element_text(size = 14),
        axis.title = element_text(size = 16),
        legend.title = element_text(size = 14)) +
  scale_color_viridis_d() +
  scale_x_continuous(breaks = c(1:10)) +
  labs(x = "Generations since admixture (t)", y = bquote(D[t]), color = "r")
```

As you can see, $$D$$ decays very quickly, especially when the loci are unlinked (i.e. $$r = 0.5$$, yellow line), reaching almost negligible values after just 4 generations of random mating. However, this rate of decay assumes the admixed population is mating randomly. Admixture LD will decay more slowly in a population with ancestry-based assortative mating (i.e. where individuals prefer to mate with others of similar ancestry) and/or if the admixture is continuous (i.e. with a continuous influx of individuals from one or both of the ancestral populations) as opposed to occurring in a single event. Why that happens is beyond the scope of this post, but you should look up Pfaff et al. 2001 and Zaitlen et al. 2017 for a deeper treatment (with nice and clear notation) of this topic. LD between unlinked loci can also be generated by population structure or assortative mating within the population (non-random mating, broadly). Thus, admixture LD is a special instance of a more general phenomenon whereby demographic processes (e.g. population structure and assortative mating) can induce correlations across the genome.
The correlations between unlinked loci that arise due to population structure and admixture are sometimes called the multi-locus Wahlund effect. Sometimes people call this type of LD gametic phase disequilibrium (how's that for a mouthful) to distinguish it from LD due to physical linkage.

#### Footnotes

1. Fig. 14.13, page 461 of Human Evolutionary Genetics, 2nd edition. I think this book is a fantastic resource if you want a bird's-eye view of the breadth of topics in human genetics. It also provides important references, which are really handy if you want to find papers for more in-depth study.↩︎
2. From the definition of covariance: $$cov(X,Y) = \mathbb{E}(XY) - \mathbb{E}(X)\mathbb{E}(Y)$$.↩︎
3. This is pretty straightforward, but I always like to explicitly show the statistical statements underlying such expressions. Let's denote $$X^A$$ as the genotype at locus A for a randomly sampled chromosome in the meta-population at the time of admixture. Then, $$f^A = \mathbb{E}(X^A)$$. Let's further denote the ancestry of the sampled chromosome with an indicator variable $$I$$ such that $$I = 1$$ indicates that the chromosome is from population 1, and so on. Then, $$\mathbb{E}(X^A) = \mathbb{E}(X^A | I = 1)\mathbb{P}(I = 1) + \mathbb{E}(X^A | I = 2)\mathbb{P}(I = 2)$$, which equals $$f_1^A \alpha + f_2^A (1 - \alpha)$$.↩︎
4. The frequency differences are themselves a function of the $$F_{st}$$ (or the degree of divergence) between the parental populations.↩︎
5. Falconer DS, Mackay TFC. Introduction to Quantitative Genetics (4th edition). 1996.↩︎
## Politics

People are interpreting yesterday's "Operation Gringo" post — and Andrew's follow-up — in a lot of different ways, which is perfectly fine since they come to fairly nuanced conclusions. But the most salient point is probably this one: there is a good argument that Republicans should be giving up on Nevada, New Mexico and Colorado in the first place. And if they do give up on those states, the Hispanic vote isn't all that much of a factor in 2012.

Let's look at the basic starting parameters for the 2012 Republican nominee:

- You start out with 179 EV from the states McCain won, adjusted for projected changes in the electoral vote based on the 2010 Census. We'll go ahead and give you credit for that last electoral vote in Nebraska.
- You need to find 91 more EV to win.
- 172 Obama EV are plausibly competitive. Of these votes, 92 are in the Midwest, 56 are in the South, and just 20 are in the West (with the last 4 in New Hampshire).

| State | 2008 | 2012 EV | Hisp%* | Demographic Trends** |
|-------|------:|--------:|-------:|----------------------|
| NC | -0.3 | 15 | 3 | Poor |
| IN | -1.0 | 11 | 4 | Neutral |
| FL | -2.8 | 28 | 14 | Neutral-Poor |
| OH | -4.6 | 19 | 4 | Neutral-Good |
| VA | -6.3 | 13 | 5 | Poor |
| CO | -9.0 | 9 | 13 | Poor |
| IA | -9.5 | 6 | 3 | Neutral |
| NH | -9.6 | 4 | 2 | Neutral-Poor |
| MN | -10.2 | 10 | 3 | Neutral |
| PA | -10.3 | 20 | 4 | Neutral |
| NV | -12.5 | 6 | 15 | Poor |
| WI | -13.9 | 10 | 3 | Neutral |
| NM | -15.1 | 5 | 41 | Poor |
| MI | -16.5 | 16 | 3 | Hard to Tell |

\* Based on 2008 exit polls; ** entirely subjective.

As you start to pare down your list of targets, New Mexico and Nevada are probably going to be just about the first states to go anyway. They really didn't wind up being all that close in 2008, your "momentum" in both states is poor, and they don't contain that many electoral votes. Colorado is slightly better, but not a whole heck of a lot better, and it's been behaving as a very blue state since 2006 or so; both of its senators are Democratic, as is its governor and 5 of its 7 U.S. Reps. If you excise those three Southwestern states, you still have a menu of 159 EV from which to choose, of which you need 91.
And the remaining states are noteworthy for their relative absence of Hispanic voters. If you could gain ground in the Midwest or the South by pursuing an anti-immigrant, anti-NAFTA, "America First" sort of platform, you really wouldn't be putting all that much at risk by losing further ground among Latinos. Yes, you could make life (much) harder for yourself if you screwed up Florida or put Arizona into play in the process, but it's not a bad strategy, all things considered. About half the Hispanics in the United States reside in California or Texas, and another 20 percent are in New York, New Jersey or Illinois, none of which look to be competitive in 2012. (Yes, the Republicans could lose Texas, but probably only in a landslide). There just aren't that many Hispanic voters near the electoral tipping point.

Nate Silver is the founder and editor in chief of FiveThirtyEight.
# How does knitr's cache mechanism check whether to run a chunk or not?

I am writing a document in knitr and LaTeX and having trouble with the caching mechanism. Specifically, I write multiple data frames into CSV files to read them in later using pgfplotstable. However, knitr's caching mechanism fails to re-run a chunk if the corresponding CSV file has since been removed or altered in any way. For example, running the following document once, deleting mwe.dat, and then running it again will produce an error:

```latex
\documentclass{article}
\begin{document}
<<cache=TRUE>>=
df <- data.frame(a=rep(1,5), b=rep(2,5), c=rep(3,5))
write.csv(df, file='mwe.dat')
rm(df)
@
<<cache=FALSE>>=
## Warning: cannot open file 'mwe.dat': No such file or directory
## Error: cannot open the connection
@
\end{document}
```

So the question is, how does knitr determine that a chunk needs to be re-run? If it's only about actual changes in the source code of the chunk, then I'll have to wrap my (considerable number of) write.csv statements each in their own chunk, despite the fact that most of the time the chunk only sets up the data frame to be written.

The cache page on the knitr website explains this issue. In particular, search for #238 on that page for a similar case.
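One possible workaround, sketched below under the assumption that the file path from the example above is the one you care about: knitr's documented `cache.extra` chunk option hashes any extra value together with the chunk code, so a change in that value invalidates the cache. Using a content hash means deleting or altering mwe.dat forces a re-run, while rewriting identical data keeps the cache valid:

```latex
% Sketch: tie the chunk's cache to the content of mwe.dat.
% tools::md5sum() returns NA when the file is missing, so deleting
% mwe.dat changes the hash and forces the chunk to re-run.
<<cache=TRUE, cache.extra=tools::md5sum('mwe.dat')>>=
df <- data.frame(a=rep(1,5), b=rep(2,5), c=rep(3,5))
write.csv(df, file='mwe.dat')
rm(df)
@
```

This is an illustration rather than a tested recipe; see the knitr cache documentation for the exact semantics of `cache.extra`.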
# Tahir Hassan's Blog

My Technical Notes

## Thursday, 25 August 2011

### Common Text Operations in Emacs

#### Deleting text

The two 'pure' deletion methods are the backspace and delete keys. The other way is to cut the text using the method below, regardless of the fact that it also copies the text to the clipboard.

#### Cut, Copy and Paste

To cut text, we first need to set a 'mark' (C-SPC). This is the start of the region of text that will be cut. We then need to move the cursor to the end of the region and type C-w.

To copy text, we do exactly the same as we do for cutting, except we use M-w instead of C-w.

To paste text we use C-y.
## Archive for the ‘hacking’ Category

### Installing OpenSuSE 13.1 on a Lenovo Ideapad S10-3t

Monday, June 9th, 2014

I tried to install the most recent OpenSuSE image I received when I attended the OpenSuSE Conference. We were given pendrives with a live image, so I was interested in how smooth the OpenSuSE installation was compared to installing Fedora. The test machine is a three to four year old Lenovo Ideapad S10-3t, which I received from Intel a while ago. It's certainly not the most powerful machine, but it's got a dual core CPU, a gigabyte of RAM, and a widescreen touch display.

The initial boot took a while. Apparently it changed something on the pendrive itself to expand to its full size, or so. The installation was a bit painful and, at the end of the day, not successful. The first error I received was about my username being wrong. It told me that it must only contain letters, digits, and other things. It did not tell me what was actually wrong; and I doubt it could, because my username was very legit. I clicked away the dialogue and tried again. Then it worked…

When I was asked about my partitioning scheme I was moderately confused. The window didn't present any “next” button. I clicked the only three available buttons to no avail until it occurred to me that the machine has a wide screen, so the vertical space was not sufficient to display everything. And yeah, after moving the window up, I could proceed. While I was positively surprised to see that it offered full disk encryption, I wasn't too impressed with the buttons. They were very tiny on the bottom of the screen, barely clickable. Anyway, I found my way to proceed, but when attempting to install, YaST received “system error code -1014” and failed to partition the disk. The disk could be at fault, but I have reasons to believe it was not the disk's fault: apparently something ate all the memory, so that I couldn't even start a terminal. I guess GNOME's system requirements are higher than I expected.
### Split DNS Resolution

Thursday, April 17th, 2014

At the beginning of the year, I couldn't make resolutions. The DNS server that the DHCP server gave me only resolves names from the local domain, i.e. acme.corp. Every connection to the outside world needs to go through a corporate HTTP proxy, which then does the name resolution itself. But that only works as long as the HTTP proxy is happy, i.e. with the destination port. It wouldn't allow me to CONNECT to any port other than 80 (HTTP) or 443 (HTTPS). The proxy is thus almost useless for me. No IRC, no XMPP, no IMAP(s), no SSH, etc.

Fortunately, I have an SSH server running on port 443, and using the HTTP proxy to CONNECT to that machine works easily, i.e. using corkscrew with the following in ~/.ssh/config:

```
Host myserver443
    User remote-user-name
    HostName ssh443.example.com
    ProxyCommand corkscrew proxy.acme.corp 8080 %h %p
    Port 443
```

And with that SSH connection, I could easily tunnel TCP packets using the DynamicForward switch. That would give a SOCKS proxy, and I would only need to configure my programs or use tsocks. But as I need a destination IP address in order to assemble TCP packets, I need to have DNS working first. While a SOCKS proxy could do it, the one provided by OpenSSH cannot (correct me if I am wrong). Obviously, I need to somehow get onto the Internet in order to resolve names, as I don't have any local nameserver that would do that for me. So I need to tunnel. Somehow.

Most of the problem is solved by using sshuttle, which is half a VPN, half a tunnelling solution. It recognises your local machine sending packets (using iptables), does its magic to transport these to a remote host under your control (using a small Python program to get the packets from iptables), and sends the packets from that remote host (using a small daemon on the server side). It also collects and forwards the answers. Your local machine doesn't really realise that it is not really connecting itself.
As the name implies, it uses SSH as a transport for the packets, and it works very well, not only for TCP but also for UDP packets you send to the nameserver of your choice. So external name resolution is done, as well as sending TCP packets to any host. You may now think that the quest is solved. But as sshuttle intercepts *all* queries to the (local) nameserver, you don't use that (local nameserver) anymore, and internal name resolution thus breaks (because the external nameserver cannot resolve printing.acme.corp). That's almost what I wanted. Except that I also want to resolve the local domain names…

To clarify my setup, marvel at this awesome diagram of the scenario. You can see my machine inside the corporate network, with the proxy being the only way out. sshuttle intercepts every packet sent to the outside world, including DNS traffic. The local nameserver is not used, as it cannot resolve external names. Local names, such as printing.acme.corp, can thus not be resolved.

```
+-----------------------------------------+
|                ACME.corp                |
|-----------------------------------------|
|                                         |
|  +----------------+     +-----------+   |
|  | My machine     |     | DNS Server|   |
|  |----------------|     +-----------+   |
|  |                |                     |
|  | sshuttle       |     +-----------+   |
|  | corkscrew +------->  | HTTP Proxy|   |
|  +----------------+     +-----+-----+   |
|                               |         |
+-------------------------------|---------+
                                |
+-------------------------------|---------+
|            Internet           |         |
|-------------------------------|---------|
|                               v         |
|  +----------+        +----------+       |
|  |DNS Server|<-------+SSH Server|       |
|  +----------+        +----------+       |
|    +  +                 +  +            |
|    |  |                 |  |            |
|    v  v                 v  v            |
+-----------------------------------------+
```

To solve that problem I need to selectively ask either the internal or the external nameserver and force sshuttle to not block traffic to the internal one. Fortunately, there is a patch for sshuttle to specify the IP address of the (external) nameserver.
It lets traffic destined for your local nameserver pass and only intercepts packets for your external nameserver. Awesome. But how do I make the system select the nameserver to be used? Just entering two nameservers in /etc/resolv.conf doesn't work, of course. One solution to that problem is dnsmasq, which, fortunately, NetworkManager is running anyway. A single line added to the configuration in /etc/NetworkManager/dnsmasq.d/corp-tld makes it aware of a nameserver dedicated to a domain:

```
server=/acme.corp/10.1.1.2
```

With that setup, using a public DNS server as the main nameserver, making dnsmasq resolve local domain names, and making sshuttle intercept only the requests to the public nameserver, my problem is solved and I can work again.

```
~/sshuttle/sshuttle --dns-hosts 8.8.8.8 -vvr myserver443 0/0 \
    --exclude 10.0.2.15/8 \
    --exclude 127.0.1.1/8 \
    --exclude 224.0.0.1/8 \
    --exclude 232.0.0.1/8 \
    --exclude 233.252.0.0/14 \
    --exclude 234.0.0.0/8
```

### Applying international Bahn travel tricks to save money for tickets

Thursday, November 21st, 2013

Suppose you are sick of Tanzverbot and you want to go from Karlsruhe to Hamburg. As a proper German you'd think of the Bahn first, although Germany has started to allow long distance travel by bus, which is cheap and surprisingly comfortable. My favourite bus search engine is busliniensuche.de. Anyway, you opted for the Bahn and you search for a connection; the result is a one-way trip for 40 Euro. Not too bad. But maybe we can do better. If we travel from Switzerland, we can save a whopping 0.05 Euro! Amazing, right? Basel SBB is the first station after the German border and it allows for international fares to be applied. Interestingly, special offers exist which apparently make the same travel, and a considerable chunk on top, cheaper. But we can do better. Instead of travelling from Switzerland to Germany, we can travel from Germany to Denmark.
To determine the first station after the German border, use the Netzplan for the IC routes and then check the local map, i.e. Schleswig-Holstein. You will find Padborg as the first non-German station. If you travel from Karlsruhe to Padborg, you save 17.5%.

Sometimes you can save by taking a Global ticket, crossing two borders. This is, however, not the case for us.

In case you were wondering whether it’s the very same train and route all the time: Yes, it is. Feel free to look up the CNL 472. I hope you can use these tips to book a cheaper trip. Do you know any ways to “optimise” your Bahn ticket?

### Finding Maloney

Wednesday, July 3rd, 2013

Every so often I feel the need to replace the music coming out of my speakers with an audio drama. I used to listen to Maloney, which is a detective story with, well, weird plots. The station used to provide MP3 files for download, but since they revamped their website that is gone, as the new one only provides Flash streaming.

As far as I know, there is only one proper library to access media via Adobe HDS. There are two attempts and a PHP script. There is, however, a little trick making things easier. The website exposes a HTML5 player if it thinks you’re a moron. Fortunately, it’s easy to make other people think that. The easiest thing to do is to have an iPad User-Agent header. The website will then play the media not via Adobe HDS (and Flash) but rather via a similar method, probably Apple HTTP Live Streaming. And that uses a regular m3u playlist with loads of tiny AAC fragments.

The address of that playlist is easily guessable and I coded up a small utility here. It will print the ways to play the latest Maloney episode. You can then choose to either use HDS or the probably more efficient AAC version.
import os
import subprocess

source = os.environ['source']
destination = os.environ['destination']

conf = 'set mbox_type=maildir; set confirmcreate=no; set delete=no; push "T.*;s{0}"'.format(destination)
cmd = ['mutt', '-f', source, '-e', conf]
subprocess.call(cmd)

But well, I shouldn’t become productive just yet by doing real work. Mutt apparently expects a terminal. It would just prompt me with “No recipients were specified.”. So alright, this unfortunately wasn’t what I wanted. If you don’t need batch processing though, you might very well go with mutt doing your mbox to maildir conversion (or vice versa).

Damnit, another two hours or more wasted on that. I was at the point of just doing the conversion myself. Shouldn’t be too hard after all, right? While researching I found that Python’s stdlib has some email-related functions *yay*. Some dude on the web wrote something close to what I needed. I beefed it up a very little bit and landed with the following:

#!/usr/bin/env python
# http://www.hackvalue.nl/en/article/109/migrating%20from%20mbox%20to%20maildir

import datetime
import email
import email.Errors
import mailbox
import os
import sys
import time

def msgfactory(fp):
    try:
        return email.message_from_file(fp)
    except email.Errors.MessageParseError:
        # Don't return None since that will
        # stop the mailbox iterator
        return ''

dirname = sys.argv[1]
inbox = sys.argv[2]
fp = open(inbox, 'rb')
mbox = mailbox.UnixMailbox(fp, msgfactory)

try:
    storedir = os.mkdir(dirname, 0750)
    os.mkdir(dirname + "/new", 0750)
    os.mkdir(dirname + "/cur", 0750)
except:
    pass

count = 0
for mail in mbox:
    count += 1
    #hammertime = time.time() # mail.get('Date', time.time())
    hammertime = datetime.datetime(*email.utils.parsedate(mail.get('Date',''))[:7]).strftime('%s')
    hostname = 'mb2mdpy'
    filename = dirname + "/cur/%s%d.%s:2,S" % (hammertime, count, hostname)
    mail_file = open(filename, 'w+')
    mail_file.write(mail.as_string())

print "Processed {0} mails".format(count)

And it seemed to work well!
It recovered many more emails than the Perl script (hehe), but the generated maildir wouldn’t work with my IMAP server. I was confused. The mutt maildirs worked like a charm and I couldn’t see any difference to mine. I scped the files onto the .maildir/ on my server, which takes quite a while because scp isn’t all too quick when it comes to many small files. Anyway, it wouldn’t work, for some reason that was way beyond me. Eventually I straced the IMAP server and figured that it was desperately looking for a tmp/ folder. Funnily enough, it didn’t need that for other maildirs to work. Anyway: Lesson learnt: If your dovecot doesn’t play well with your maildir and you have no clue how to make it log more verbosely, check whether you need a tmp/ folder.

But I didn’t know that, so I investigated a bit more and I found another Perl script which converted the emails fine, too. For some reason it put my mails in “.new/” and not in “.cur/”, which the other tools did so far. Also, it would leave the messages as unread, which I don’t like. Fortunately, one (more or less) only needs to rename the files in a maildir to end in S for “seen”. While this sounds like a simple

for f in maildir/cur/*; do mv ${f} ${f}:2,S; done

it’s not so easy anymore when you have to move the directory as well. But that’s easily worked around by shuffling the directories around.

Another, more annoying problem with that is “Argument list too long” when you are dealing with a lot of files. So a solution must involve “find” and might look something like this:

find ${CUR} -type f -print0 | xargs -i -0 mv '{}' '{}':2,S

### Duplicates

There was, however, a very annoying issue left: Duplicates. I haven’t investigated where the duplicates came from but it didn’t matter to me as I didn’t want duplicates even if the downloaded mbox archive contained them. And in my case, I’m quite confident that the mboxes are messed up.
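Incidentally, today’s Python 3 stdlib can do the whole mbox to maildir conversion, tmp/ folder included, with the mailbox module; a minimal sketch (function name is mine, not from any of the scripts above):

```python
# Minimal mbox -> maildir conversion using only the stdlib.
# mailbox.Maildir creates the cur/, new/ and tmp/ subfolders itself,
# which is exactly the tmp/ folder dovecot was looking for.
import mailbox


def mbox_to_maildir(mbox_path, maildir_path):
    inbox = mailbox.mbox(mbox_path)
    dest = mailbox.Maildir(maildir_path)  # creates cur/, new/, tmp/
    count = 0
    for msg in inbox:
        mdmsg = mailbox.MaildirMessage(msg)
        mdmsg.set_subdir('cur')  # file it as already delivered...
        mdmsg.add_flag('S')      # ...and mark it as seen (:2,S)
        dest.add(mdmsg)
        count += 1
    dest.flush()
    return count
```

Run as `mbox_to_maildir('inbox.mbox', '.maildir')`; the seen-flag renaming and the directory shuffling below become unnecessary.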
So I wanted to get rid of duplicates anyway and decided to use a hash function on the file content to determine whether two files are the same or not. I used sha1sum like this:

$ find maildir/.board-list/ -type f -print0 | xargs -0 sha1sum | head
c6967e7572319f3d37fb035d5a4a16d56f680c59 maildir/.board-list/cur/1342797208.000031.mbox:2,
2ea005ec0e7676093e2f488c9f8e5388582ee7fb maildir/.board-list/cur/1342797281.000242.mbox:2,
a4dc289a8e3ebdc6717d8b1aeb88959cb2959ece maildir/.board-list/cur/1342797215.000265.mbox:2,
39bf0ebd3fd8f5658af2857f3c11b727e54e790a maildir/.board-list/cur/1342797210.000296.mbox:2,
eea1965032cf95e47eba37561f66de97b9f99592 maildir/.board-list/cur/1342797281.000114.mbox:2,

and if there were two files with the same hash, I would delete one of them. Probably like so:

#!/usr/bin/env python
import os
import sys

hashes = []
for line in sys.stdin.readlines():
    hash, fname = line.split()
    if hash in hashes:
        os.remove(fname)
    else:
        hashes.append(hash)

But it turns out that the following snippet works, too:

find /tmp/maildir/ -type f -print0 | xargs -0 sha1sum | sort | uniq -d -w 40 | awk '{print $2}' | xargs rm

So it’ll check the files for the same contents via a sha1sum. In order to make uniq detect equal lines, we need to give it sorted input. Hence the sort. We cannot, however, check the whole lines for equality as the filename will show up in the line and it will of course be different. So we only compare as many characters as the hex representation of the hash is long, in this case 40 bytes. If we found such a duplicate hash, we cut off the hash, take the filename, which is the remainder of the line, and delete the file.

Phew. What a trip so far.
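If you’d rather stay in Python entirely, the same dedup can be done with hashlib and a set, walking the maildir itself instead of parsing sha1sum output (a sketch; function name is mine):

```python
# Remove duplicate files (identified by content hash) under a directory tree,
# keeping the first file of each duplicate group.
import hashlib
import os


def dedupe(root):
    seen = set()
    removed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in sorted(filenames):  # deterministic: keep the first name
            path = os.path.join(dirpath, name)
            with open(path, 'rb') as f:
                digest = hashlib.sha1(f.read()).hexdigest()
            if digest in seen:
                os.remove(path)
                removed.append(path)
            else:
                seen.add(digest)
    return removed
```

Unlike the list-based snippet above, the set makes the membership test O(1), which matters with tens of thousands of mails.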
Let’s put it all together:

### The final thing

LIST=board-list
umask 077
DESTBASE=/tmp/perfectmdir
LISTBASE=${DESTBASE}/.${LIST}
CUR=${LISTBASE}/cur
NEW=${LISTBASE}/new
TMP=${LISTBASE}/tmp
mkdir -p ${CUR}
mkdir -p ${NEW}
mkdir -p ${TMP}
for f in /tmp/${LIST}/*; do /tmp/perfect_maildir.pl ${LISTBASE} < ${f} ; done
mv ${CUR} ${CUR}.tmp
mv ${NEW} ${CUR}
mv ${CUR}.tmp ${NEW}
find ${CUR} -type f -print0 | xargs -i -0 mv '{}' '{}':2,S
find ${CUR} -type f -print0 | xargs -0 sha1sum | sort | uniq -d -w 40 | awk '{print $2}' | xargs rm

And that’s handling email in 2012…

### Loopback mounting a huge gzipped file

Wednesday, October 10th, 2012

This is basically a note to myself for future reference which I hope is interesting to others. I just had to loopback mount a gzipped image file. I didn’t want, however, to unpack the file, because I am very short on disk space right now. Also, I didn’t care too much about processing power. I searched quite a bit until I found “avfs”.

AVFS is a system, which enables all programs to look inside archived or compressed files, or access remote files without recompiling the programs or changing the kernel. At the moment it supports floppies, tar and gzip files, zip, bzip2, ar and rar files, ftp sessions, http, webdav, rsh/rcp, ssh/scp.

muelli@xbox:/tmp$ avfsd -o allow_root ~/.avfs
muelli@xbox:/tmp$ cd ~/.avfs/home/muelli/qemu
muelli@xbox:~/.avfs/home/muelli/qemu$ sudo losetup /dev/loop1 XP-4G.ntfs.dd.gz#
muelli@xbox:~/.avfs/home/muelli/qemu$ sudo mount /dev/loop1 -oro,noatime /home/muelli/empty/

Note that the filename I’m accessing is suffixed with a hash.

### x61s and the backlight, replacing a CCFL and shorting a fuse

Friday, April 6th, 2012

Over the last months, I tried to repair my broken x61s which suffered from a missing backlight. First, I changed the inverter, which is easy and relatively cheap to do. In case you read the Hardware Maintenance Manual, don’t follow it too closely.
The inverter is easily changeable by removing the three screws on the bottom of the opened panel and carefully detaching the clipped front cover of the panel. The inverter sits on the bottom right and is the part that also lights the LEDs. You can’t miss it. But be careful: The inverter puts out high voltage, in the range of 600V to 800V, peaking at 1500V. Hence it’s hard to measure with normal home equipment.

Anyway, changing the inverter didn’t bring my backlight back up. So I got myself a LCD cable, which is more expensive and a bit harder to attach than the inverter. You need to remove the LCD panel from its case, which involves a lot of screws. Don’t forget to have separate bowls for the screws, and even better: take pictures or notes to remember where the screws have been. Or be very disciplined and follow the official instructions. However, changing the LCD cable didn’t bring any remedy.

So the only culprit I could think of was the CCFL that’s actually responsible for lighting up the whole thing. So I got myself a new CCFL for a couple of euros. Changing the CCFL is a bit messy, especially because it involves soldering. There are good instructions on the web as to how to change the CCFL. It also requires you to be very gentle with the tube so that it doesn’t break. And losing any part will probably result in a substantial loss of quality for you, so be careful. I mean it. I lost a tiny rubber ring which is to be placed around the tube to hold it tight in its channel, and now the tube vibrates nicely in the panel, making interesting noises.

Anyway, changing the CCFL didn’t bring back the backlight. I was very confused. There was no part I knew of that I didn’t change. With the exception of the motherboard… So I asked a friend of mine to provide his perfectly working x61s so that we’d have a reference platform that we knew was working.
Thanks again to that friendly fella who allowed the disassembly and reassembly of his machines several times. We switched several parts and it turned out that my panel with the new CCFL kinda works with the other inverter card. It didn’t with my inverter though. Again, very weird. As it turned out later, the CCFL contacts were not correctly isolated and short-circuited. But we didn’t know that and thought the CCFL was broken from the very beginning. So my recommendation, which is also more (unpleasant) work, is: Checkpoint your work, i.e. run a test every now and then. It would have saved us a lot of trouble.

So after having cross-checked that my inverter was working correctly and the backlight was acting weird, I came across the fact that there might be a blown fuse. And well, the F2 fuse, which was not findable without the helping picture, was not letting anything through. Since it’s an SMD fuse, there was no chance of soldering a new fuse onto the mainboard. So we just shorted the fuse with conductive silver paint. Fortunately, the laptop’s backlight is working again now. However, the keyboard is not. I presume that the whole dis- and reassembly shortened the lifespan of the keyboard cable. Also, as I mentioned, the CCFL is humming in the panel…

### Dump Firefox passwords using Python (and libnss)

Friday, February 3rd, 2012

I was travelling and I didn’t have my Firefox instance on my laptop. I wanted, however, to access some websites, and the passwords were stored safely in my Firefox profile at home. Needless to say that I don’t upload my passwords to someone’s server. Back in the day, when I first encountered that problem, there were only ugly options to run the server yourself. Either some PHP garbage or worse: some Java webapp. I only have so many gigabytes of RAM, so I didn’t go that route. FWIW: Now you have a nice Python webapp and it might be interesting to set that up at some stage.
I could have copied the profile to my laptop and then run Firefox with that profile. But as I use Zotero, my profile is really big. And it sounds quite insane, because I only want to get hold of my 20-byte password, not copy 200MB for that. Another option might have been to run Firefox remotely and do X-forwarding to my laptop. But that’d be painfully slow and I thought that I was suffering enough already.

So: I wanted to extract the passwords from the Firefox profile at home. It’s my data after all, right? Turns out that Firefox (and Thunderbird, for that matter) store their passwords in encrypted form in a SQLite database. So the database itself is not necessarily encrypted, but the contained data is. Turns out later that you can encrypt the database as well (to then store encrypted passwords). A sample line in that database looks like this:

$ sqlite3 signons.sqlite
SQLite version 3.7.11 2012-03-20 11:35:50
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema
CREATE TABLE moz_deleted_logins (id INTEGER PRIMARY KEY,guid TEXT,timeDeleted INTEGER);
CREATE TABLE moz_disabledHosts (id INTEGER PRIMARY KEY,hostname TEXT UNIQUE ON CONFLICT REPLACE);
CREATE TABLE moz_logins (id INTEGER PRIMARY KEY,hostname TEXT NOT NULL,httpRealm TEXT,formSubmitURL TEXT,usernameField TEXT NOT NULL,passwordField TEXT NOT NULL,encryptedUsername TEXT NOT NULL,encryptedPassword TEXT NOT NULL,guid TEXT,encType INTEGER, timeCreated INTEGER, timeLastUsed INTEGER, timePasswordChanged INTEGER, timesUsed INTEGER);
...
...
sqlite> SELECT * FROM moz_logins LIMIT 2; 1|https://nonpublic.foo.bar|Non-Pulic Wiki||||MDoEEPgAAAAAAAAAAAAAAAAAAAEwFAYIKoZIhvcNAwcECFKoZIhvcNAwcECFKoZIhvcNAwcECF|MDIEEPgAAAAAAAAAAAAAACJ75YchXUCAAAAAEwFAYIKoZIhvcNAwcE==|{4711r2d2-2342-4711-6f00b4r6g}|1|1319297071173|1348944692451|1319297071173|6 2|https://orga.bar.foo|ToplevelAuth||||MDoEEPgAAAAAAAAAAAAAAAAAAAEwFAYIKoZIhvcNAwcECIy5HFAYIKoZIhtnRFAYIKoZIh|MDoEEPgAAAAAAAAAAAAAAAAAAAEwFAYIKoZIhvFAYIKoZIhBD6PFAYIKoZIh|{45abc67852-4222-45cc-dcc1-729ccc91ceee}|1|1319297071173|1319297071173|1319297071173|1 sqlite> You see the columns you’d more or less expect but you cannot make sense out of the actual data. If I read correctly, some form of 3DES is used to protect the data. But I couldn’t find out enough to decrypt it myself. So my idea then was to reuse the actual libraries that Firefox uses to read data from the database. I first tried to find examples in the Firefox code and found pwdecrypt. And I even got it built after a couple of hours wrestling with the build system. It’s not fun. You might want to try to get hold of a binary from your distribution. So my initial attempt was to call out to that binary and parse its output. That worked well enough, but was slow. Also not really elegant and you might not have or not be able to build the pwdecrypt program. Also, it’s a bit stupid to use something different than the actual Firefox. I mean, the code doing the necessary operations is already on your harddisk, so it seems much smarter to reuse that. Turns out, there is ffpwdcracker to decrypt passwords using libnss. It’s not too ugly using Python’s ctypes. So that’s a way to go. And in fact, it works well enough, after cleaning up loads of things. Example output of the session is here: \$ python firefox_passwd.py | head The file is here: https://hg.cryptobitch.de/firefox-passwords/ It has also been extended to work with Thunderbird and, the bigger problem, with encrypted databases. 
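For the unencrypted parts, reading the rows out of a signons.sqlite-style database with Python’s sqlite3 module is straightforward; a sketch against the schema shown above (function name is mine; the encrypted columns are passed through untouched, since decrypting them still needs libnss):

```python
# Read login rows from a Firefox signons.sqlite-style database.
# The encrypted* columns are NOT decrypted here; that requires libnss.
import sqlite3


def read_logins(db_path):
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            'SELECT hostname, encryptedUsername, encryptedPassword '
            'FROM moz_logins')
        return cur.fetchall()
    finally:
        conn.close()
```

Each returned tuple corresponds to one row of the SELECT output above.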
I couldn’t really find out how that works. I read some code, especially the above-mentioned pwdecrypt program, but couldn’t reimplement it, because I couldn’t find the functions used in the libraries I had. At some stage, I just explained the problem to a friend of mine, and while explaining and documenting which things didn’t work, I accidentally found a solution \o/ So now you can also recover your Firefox passwords from an encrypted storage.

### Pwnitter

Saturday, September 10th, 2011

Uh, I totally forgot to blog about a funny thing that happened almost a year ago which I just mentioned slightly *blush*. So you probably know this Internet thing, and if you’re one of the chosen and carefully gifted ones, you confused it with the Web. And if you’re very special you do this Twitter thing and expose yourself and your communication patterns to some dodgy American company. By now, all of the following stuff isn’t of much interest anymore, so you might as well quit reading.

It all happened while being at FOSS.in. There was a contest run by Nokia which asked us to write some cool application for the N900. So I did. I packaged loads of programs and libraries to be able to put the wireless card into monitor mode. Then I wiretapped (haha) the wireless and sniffed for Twitter traffic. Once there was a Twitter session going on, the necessary authentication information was extracted and a message was posted on the poor user’s behalf. I coined that Pwnitter, because it would pwn you via Twitter.

That said, we had great fun at FOSS.in, where nearly everybody’s Twitter session got hijacked. Eventually, people stopped using plain HTTP and moved to end-to-end encrypted sessions via TLS. Anyway, my program didn’t win anything because, as it turned out, Nokia wanted to promote QML, and hence we were supposed to write something that makes use of that.
My program barely has a UI… It is made up of one giant button… Despite not getting lucky with Nokia, the community apparently received the thing very well.

So there is an obvious big elephant standing in the room, asking why you would want to “hack” Twitter. I’d say it’s rather easy to answer. The main point being that you should use end-to-end encryption when communicating. And the punchline comes now: Don’t use a service that doesn’t offer you that by default. Technically, it wouldn’t be much of a problem to give you an encrypted link to send your messages. However, companies tend to be cheap and let you suffer with a plain-text connection which can be easily tapped or worse: manipulated. Think about it. If the company is too frugal to protect your communication from pimpled 13yr olds with a wifi card, why would you want to use their services?

By now Twitter (actually since March 2011, making it more than 6 months ago AFAIK) has SSL enabled by default as far as I can tell. So let’s not bash Twitter for not offering an encrypted link for more than 5 years (since they were founded back in 2006). But there are loads of other services that suffer from the very same basic problem. Including Facebook. And it would be easy to adapt the existing solution to stuff like Facebook, flickr, whatnot. A notable exception is Google though. As far as I can see, they offer encryption by default except for the search.

If there is an unencrypted link, I invite you to grab the sources of Pwnitter and build your hack. If you do so, let me give you a piece of advice, as I was going nuts over a weird problem with my Pwnitter application for Maemo. It’s written in Python, and when building the package with setuptools the hashbang would automatically be changed to “#!/scratchbox/tools/bin/python”, instead of, say, “/usr/bin/python”. I tried tons of things for many hours until I realised that scratchbox redirects some binary paths. However, that did not help me to fix the issue.
As it turned out, my problem was that I didn’t depend on a python-runtime during build time. Hence the build server picked scratchbox’s python which was located in /scratchbox/bin.
# backtesting a 5% quantile model of a discrete-valued random variable?

If a random variable is discrete, and we are interested in its quantile value, how do we define a proper backtesting procedure?

For example, the underlying variable with a discrete value is

$$d(\mbox{account}) = \mbox{PaymentDate} - \mbox{BillingDate}$$

the observed variable:

$$y = \mbox{percentile}(d, 95\%, \mbox{month})$$

or $y$ is the 95th percentile value of $d$ for a particular month. E.g. 95% of credit cards are paid within 20 days from billing, in Jan 2013. How could I define a backtesting approach?

# Background

Defining an estimation-backtesting method for a continuous random variable is easier. In my group we have such a non-parametric approach:

underlying variable:

$$r(\mbox{month}) = \mbox{monthly credit-card account default rate}$$

For example, the 2013 Feb default rate is 1.1%, 2013 Jan is 1.2%...

observed variable:

$$x = \mbox{percentile}(r, 95\%)$$

$x$ is the 95th percentile value of $r$. Here the definition of $x$ is similar to VaR.

point forecast:

$$\hat x(\mbox{month}) = \mbox{percentile}(r(\mbox{month}), N, 95\%)$$

$\hat x$ is the 95th percentile value of $r$, based on $N$ historic observations. For example, take $N=36$: looking back 36 months, the 95th percentile value of the default rate $r$ is 2.3%, so $\hat x = 2.3\%$.

point forecast exception:

$$\mbox{PFException}(t) = \begin{cases} 0 & r(\mbox{month}) \leq \hat x(\mbox{month}) \\ 1 & \text{otherwise} \end{cases}$$

By design, 95% of the time there should be no exception, while 5% of the time an exception happens.

backtesting: There is the POF (proportion of failures) test, checking the rate of exceptions, and the independence test, checking the correlation of exceptions. For example, Kupiec (1995) proposed a POF test that checks the exceptions observed in the previous 36 months' point forecasts: 0-4 exceptions are OK (green light), 5-7 exceptions are a yellow light, while 8 or more exceptions are a red light. Christoffersen (1998) proposed an independence test.

Kupiec, P.
(1995). Techniques for verifying the accuracy of risk management models. Journal of Derivatives 3, 73–84. Christoffersen, P. (1998). Evaluating interval forecasts. International Economic Review 39, 841–62. - I've done some heavy surgery to your $\LaTeX$ to make everything readable. Kindly use this scheme from now on. – chrisaycock Mar 11 '13 at 14:35 @chrisaycock thanks a lot! – athos Mar 11 '13 at 16:16
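As an illustration of how the POF part could be automated, here is a sketch of the likelihood-ratio form of Kupiec's (1995) test (function and variable names are mine): under the null the exception indicator is Bernoulli with probability $p$, and the statistic is asymptotically chi-square with 1 degree of freedom.

```python
# Kupiec (1995) proportion-of-failures (POF) likelihood-ratio test.
# p: target exception rate (e.g. 0.05)
# x: number of observed exceptions
# T: number of point forecasts backtested
import math


def kupiec_pof(p, x, T):
    def loglik(q):
        # Bernoulli log-likelihood of x exceptions in T trials at rate q,
        # with the 0*log(0) terms handled explicitly.
        a = (T - x) * math.log(1.0 - q) if x < T else 0.0
        b = x * math.log(q) if x > 0 else 0.0
        return a + b

    # LR statistic: -2 * (loglik under H0 - loglik at the observed rate)
    return -2.0 * (loglik(p) - loglik(x / T))
```

For a 5% model one would reject at the 95% level when the statistic exceeds the chi-square(1) critical value 3.84; the statistic is zero when the observed rate equals the target exactly.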
2015 04-14

# The order of a Tree

As we know, the shape of a binary search tree is greatly related to the order in which the keys are inserted. To be precise:

1. Insert a key k into an empty tree; the tree becomes a tree with only one node.
2. Insert a key k into a nonempty tree; if k is less than the root, insert it into the left sub-tree, else insert k into the right sub-tree.

We call the order of keys we insert "the order of a tree". Your task is: given an order of a tree, find the order of a tree with the least lexicographic order that generates the same tree. Two trees are the same if and only if they have the same shape.

There are multiple test cases in an input file. The first line of each test case is an integer n (n <= 100,000), the number of nodes. The second line has n integers, k1 to kn, the order of a tree. To make it simpler, k1 to kn is a permutation of 1 to n.
Sample input:
4
1 3 4 2

Sample output:
1 3 2 4

/*
    Another dark and windy night; finally... time to grind problems again.
    An easy one: just build the tree, then traverse it once and print it;
    the traversal order is preorder. The tree-building idea comes from the
    trie I wrote before, so I filed this under tries.
    2013-10-22
*/
#include "stdio.h"
#include "string.h"
#include "stdlib.h"

int n;

struct Tree
{
    struct Tree *left, *right;
    int num;
};

struct Tree *root;
int ans[100111], k;

/* Standard BST insertion; num == 0 marks an empty node,
   which is valid because the keys are 1..n. */
void insert(int aim)
{
    struct Tree *now, *next;
    now = root;
    while (now->num)
    {
        if (aim < now->num)
        {
            if (now->left == NULL)
            {
                next = (struct Tree *)malloc(sizeof(struct Tree));
                next->left = next->right = NULL;
                next->num = 0;
                now->left = next;
                now = next;
            }
            else now = now->left;
        }
        else
        {
            if (now->right == NULL)
            {
                next = (struct Tree *)malloc(sizeof(struct Tree));
                next->left = next->right = NULL;
                next->num = 0;
                now->right = next;
                now = next;
            }
            else now = now->right;
        }
    }
    now->num = aim;
}

/* Preorder traversal: every parent comes before its children, and the
   smaller (left) subtree before the larger (right) one, which yields the
   lexicographically smallest insertion order with the same shape. */
void solve(struct Tree *now)
{
    ans[k++] = now->num;
    if (now->left != NULL) solve(now->left);
    if (now->right != NULL) solve(now->right);
}

int main()
{
    int i;
    int temp;
    while (scanf("%d", &n) != -1)
    {
        if (n <= 0) continue;
        root = (struct Tree *)malloc(sizeof(struct Tree));
        root->left = root->right = NULL;
        root->num = 0;
        for (i = 0; i < n; i++) { scanf("%d", &temp); insert(temp); }
        k = 0;
        solve(root);
        printf("%d", ans[0]);
        for (i = 1; i < k; i++) printf(" %d", ans[i]);
        printf("\n");
    }
    return 0;
}
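The same idea fits in a few lines of Python (a sketch of mine, not the submitted solution): rebuild the BST from the given insertion order, then emit its preorder.

```python
# Rebuild the BST from the given insertion order and return its preorder,
# which is the lexicographically smallest order producing the same shape.
# Assumes at least one key; worst-case insertion cost is O(n^2) for a
# degenerate (sorted) input, just like the C version.
def least_order(keys):
    # Each node is a list [key, left, right].
    root = None
    for key in keys:
        if root is None:
            root = [key, None, None]
            continue
        now = root
        while True:
            i = 1 if key < now[0] else 2
            if now[i] is None:
                now[i] = [key, None, None]
                break
            now = now[i]
    # Iterative preorder to avoid recursion limits on 100,000 nodes.
    out, stack = [], [root]
    while stack:
        node = stack.pop()
        out.append(node[0])
        if node[2] is not None:
            stack.append(node[2])
        if node[1] is not None:
            stack.append(node[1])
    return out
```

On the sample, `least_order([1, 3, 4, 2])` reproduces the expected output `1 3 2 4`.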
ATAC seq density calculation

This paper, Fig 2b:

b Changes of chromatin accessibility in AMD (n = 14) relative to normal (n = 11) in all retina samples. Each dot represents one ATAC-Seq peak. Blue line in the left panel indicates average fold changes of peaks with the same ATAC-Seq intensity. The percentage of reduced peaks is shown under the density curve in the right panel.

How is the density calculated, or what is being shown there? The average ATAC-seq signal or the fold change?

They are not referring to the density of reads. It is the density of the fold change, the y variable in the plot to the left of Fig 2b. It tells you the distribution of the fold change, and 92% of the fold changes are < 0. Suppose your data is something like this (purely making this up):

set.seed(555)
fc = rnorm(1000, -0.5, 2)
averATAC = -0.5*(fc - 0.5)^2 + 0.2*fc + 3 + rnorm(1000, 25, 10)

You fit a density to it and you get the plot:

dens = density(fc)
par(mfrow = c(1, 2))
plot(averATAC, fc)
plot(dens$y, dens$x, xlab = "density", ylab = "fc")

• your answers are part of my phd work I mean those examples which help me to express my solution – kcm Feb 28 at 19:48
• how is the percentage being calculated from the density? – kcm Mar 5 at 5:30
• I think it's overkill, but you can do sum(dens$y[dens$x<0])/sum(dens$y) Mar 8 at 13:12
• "overkill" i will go for it – kcm Mar 9 at 5:53

Just count the number of reads overlapping an ATAC-seq peak and divide it by the length of the peak; that gives you the read density. Each dot is a fold change, but to calculate the fold change, the average density from all the samples is taken; that is what the "average" is referring to.

• my query is about the 92.3% density what does it represent? The fold change or the average? – kcm Feb 26 at 7:23
• I see, it's the proportion of fold changes that fall below 0 on the y axis of the plot on the left Feb 27 at 15:24
• so is that the negative fold change they plotted? – kcm Feb 28 at 6:22
• it is a density plot of fold changes, not the density of the reads. Feb 28 at 14:45
# Turbo C++ vs. g++ (4.8.1)

0

I have learnt programming in the Turbo C++ IDE and now, when I submit my code, the compiler rejects it. Is code written to be compatible with the Turbo C++ IDE valid in g++ (4.8.1)? Is there something I can do to be able to take part in the competitions successfully?

asked 23 May '14, 15:36
1●1●1●1
accept rate: 0%

u can go through this link.. http://www.codechef.com/getting-started (23 May '14, 15:41) kunal3614★

1

Turbo C++ (TCC) is obsolete. Stop using it. Code written for TCC is not compatible with most modern C++ compilers. For example, in TCC you have to write #include <iostream.h>, while on a modern compiler that will throw an error. You need to remove the .h to make it compatible. Also, using namespace std is alien to TCC but not to modern compilers. TCC does not support the C++11 standard, and that alone is a big reason to stay away from this compiler.

answered 23 May '14, 15:41
2★haccks
76●2●4
accept rate: 0%

0

Turbo C++ is an old compiler. It was last updated in 1994-96, but C++ was standardized much later. But follow the following tips and you will be fine.

1) Replace #include <iostream.h> with #include <iostream>. C headers will have a .h extension, but C++ headers will not have any extension.
2) Add "using namespace std" before the function prototype declarations.
3) Do not use #include <conio.h> and functions like getch(), clrscr().

answered 23 May '14, 19:37
893●2●11●34
accept rate: 10%
question asked: 23 May '14, 15:36
question was seen: 3,515 times
last updated: 23 May '14, 19:38
A prime number is a number greater than 1 whose only factors are 1 and the number itself; for example, 2 is prime because its only factors are 2 × 1. A prime factor is simply a factor that is a prime number. Prime factorization means finding which prime numbers multiply together to make the original number: the prime factorization of a number is the product of prime numbers that equals that number.

How do we find it? A brute-force approach is to test every integer less than n until a divisor is found. An improvement is to test only integers up to √n, since any composite n must have a divisor no larger than its square root — though for a large enough number this is still a great deal of work.

Let's walk through an example with 70. Think about factors that will give us a product of 70. Since 2 is prime, try it first: 2 × 35 = 70, so we break 35 down once more. What factors give a product of 35? 5 × 7 = 35, and since both 5 and 7 are prime, we've completed the prime factorization: 70 = 2 × 5 × 7. It is proper math etiquette to write your answer with the prime values in order from least to greatest. Here's a hint: you can always check your work — simply multiply your factors and confirm they result in your original value: 2 × 5 = 10, and 10 × 7 = 70.
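The brute-force and √n approaches above combine into a short trial-division routine; a minimal sketch (the helper name `prime_factors` is illustrative, not from the lesson):

```python
def prime_factors(n):
    """Return the prime factorization of n as a list, least to greatest."""
    factors = []
    d = 2
    while d * d <= n:          # test divisors only up to sqrt(n)
        while n % d == 0:      # divide out each prime as often as it occurs
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(70))   # → [2, 5, 7]
print(prime_factors(92))   # → [2, 2, 23]
```

Note that composite divisors never get appended: by the time `d` reaches a composite value, all of its prime factors have already been divided out of `n`.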
You can use a factor tree to help you find a number's prime factorization. One way to think about it is to picture leaves on a tree: as we break the number down we create branches, and when we reach the smallest factors — the primes — we see the leaves. The prime factorization of a number includes only the prime factors, not any products of those prime factors.

But what if we chose two different factors for 70 during the first step? Let's try a factor tree using 7 and 10. When we multiply 7 × 10, we still get 70; 10 is not prime, but 2 × 5 gives us 10, so we add that branch to our tree. The numbers we are left with — 2, 5 and 7 — cannot be broken down any further, and they are the same leaves as before. No matter which factors we start with, if we keep going until only prime factors are left, we get the right answer: every number has its own unique set of prime factors, regardless of the path we take to find them. Another example: 24 = 2 × 2 × 2 × 3, and all of these factors are prime. For reference, the primes less than 50 are

$$2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47$$

In number theory, this process is called integer factorization: the decomposition of a composite number into a product of smaller integers, with the factors restricted to primes. Many algorithms have been devised for it, varying quite a bit in sophistication and complexity, but when the numbers are sufficiently large, no efficient, non-quantum integer factorization algorithm is known. (In 2019, a team including Fabrice Boudot, Pierrick Gaudry, Aurore Guillevic and Nadia Heninger set the current factoring record with RSA-240.)
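The list of primes under 50 given above can be generated with a short sieve; a minimal sketch (the function name is illustrative):

```python
def primes_below(limit):
    """Sieve of Eratosthenes: all primes less than `limit`."""
    is_prime = [True] * limit
    is_prime[0:2] = [False, False]          # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross off every multiple of p, starting at p*p
            for multiple in range(p * p, limit, p):
                is_prime[multiple] = False
    return [n for n, flag in enumerate(is_prime) if flag]

print(primes_below(50))
# → [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]
```

Starting the crossing-off at p·p rather than 2·p works because any smaller multiple of p has a prime factor below p and was already crossed off.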
Let's do another example: a factor tree for 92. Is there a number we can multiply by 2 to give us 92? Yes: 92 = 2 × 46. Since 46 can still be divided by numbers other than itself and 1, we need to keep going: 46 = 2 × 23, and 23 is prime. At this point, we know that the prime factorization of 92 is 2 × 2 × 23, and we're done — unless the question asks us to write our answer in exponential form. If any of the prime factors appears more than once, like 2 in the prime factorization of 92, then you can write the factorization in exponential form so that you only have to write the recurring prime factor once, using an exponent to show how many times it recurs. Instead of 2 × 2 we write 2², so 92 = 2² × 23.

You may be wondering when you will use this information. Well, fractions are one place — and I'm sure those are one of your favorite topics. Suppose you had the fraction 70/92 and were asked to simplify it. You could use the prime factorization of each number and cancel the common terms: 70 = 2 × 5 × 7 and 92 = 2 × 2 × 23, so a common factor of 2 cancels from the top and the bottom, leaving the simplified fraction 35/46. You can be confident that the answer is fully simplified because you did the prime factorization of 70 and 92, and there are no other common prime factors. Prime factorization breaks a number down into its simplest building blocks, and you can always rely on prime numbers to simplify things!
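Prime factorization can also drive fraction simplification directly: cancel the prime factors the numerator and denominator share. A sketch (helper names are illustrative), assuming a `prime_factors` trial-division routine as described earlier in the lesson:

```python
from collections import Counter

def prime_factors(n):
    """Trial-division prime factorization, least to greatest."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def simplify(numerator, denominator):
    """Cancel the prime factors the two numbers have in common."""
    common = Counter(prime_factors(numerator)) & Counter(prime_factors(denominator))
    for p in common.elements():     # each shared prime, with multiplicity
        numerator //= p
        denominator //= p
    return numerator, denominator

print(simplify(70, 92))   # → (35, 46)
```

The `Counter` intersection `&` keeps each shared prime with the smaller of its two multiplicities, which is exactly the greatest common factor written as a multiset.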
To review: a prime number can only be divided by 1 and itself, and the prime factorization of a number is the product of prime numbers that equals that number. A practical procedure is repeated division: divide the number by the smallest prime possible, then take the result and divide it by the smallest prime possible, and repeat this process until you end up with 1 — the divisors you used are the prime factors. This process creates what we call the prime factor tree of the number. For 3045, for example, the divisions 3045 ÷ 3 = 1015, 1015 ÷ 5 = 203, 203 ÷ 7 = 29 and 29 ÷ 29 = 1 give 3045 = 3 × 5 × 7 × 29. Remember to write your answer with the primes in order from least to greatest, and check your work by multiplying the factors back together.
# Weak problem formulation for PDE and boundary conditions

Consider the following example: $$- \Delta u = f \mbox{ in } \Omega,$$ $$u = 0 \mbox{ on } \Gamma,$$ where $\Gamma$ is the boundary of $\Omega$. To produce the weak formulation we multiply by an arbitrary $v$ from $H^1(\Omega)$, integrate over $\Omega$ and apply integration by parts: $$\int_{\Omega} \nabla u \cdot \nabla v \, dx - \int_{\Gamma} \frac{\partial u}{\partial n} v \, ds = \int_{\Omega} f v \, dx.$$ Because we don't have information about $\partial u /\partial n$ on $\Gamma$, we restrict $v$ to lie in $V = \{ v \in H^1(\Omega): v|_\Gamma = 0 \}$. This newly constructed space may not be complete, so we need to complete it with respect to the Sobolev norm (otherwise we won't be able to apply the Lax–Milgram theorem and prove existence of a weak solution). But after this procedure we may have functions in $V$ which violate the boundary condition $v|_\Gamma = 0$. Why, after completing $V$ with respect to the Sobolev norm, won't we run into functions with $v|_\Gamma \neq 0$? All the textbooks I've consulted skip this point, probably because it is obvious.

• By completion of $V$, you mean the closure of $V$, right? Then if $v_n \to v$ in $H^1$ with $v_n = 0$ on $\Gamma$, could we have $v = 0$ on $\Gamma$? – anonymus Jul 16 '16 at 13:56
• @anonymus I'm not sure. Consider "hat" functions which are nonzero only on $[a, a+b]$, with a region of interest $[0,1]$. Then we build a sequence of hats on $[1/n, 1/n+b]$ which are in $H^1$ and yet converge to the hat function on $[0,b]$, which is nonzero at $0$. – Moonwalker Jul 16 '16 at 14:01
• Well, I'm no expert in multivariate PDE analysis. But in dimension $1$, if $v_n \to v$ in $H^1_0(a,b)$, then $v_n \to v$ uniformly on $(a,b)$. – anonymus Jul 16 '16 at 14:03

So first of all, $H^1$ does not really have a restriction map, even to interior points, much less boundary points (except in one dimension, where there is a continuous embedding into $C^0$).
The better way of thinking about this is to start out with smooth test functions, then look at the equation that you get and identify the solution space and the test function space appropriately. The space that you get in both cases is denoted by $H^1_0$. This is essentially what we mean by $\{ f \in H^1 : \left. f \right |_\Gamma = 0 \}$. More formally it can be identified either as the completion of $C^\infty_c$ in $H^1$ or as the kernel of the map $T : H^1(\Omega) \to L^2(\Gamma)$ which is the continuous extension of the restriction map. This map is called the trace, and its existence requires a little bit of regularity of the boundary $\Gamma$. (As I recall, Lipschitz is enough, but references like Evans tend to assume $C^1$, or at least piecewise $C^1$, to simplify the proof.)

• And by $C_c^\infty$ you mean $C_0^\infty$? – Moonwalker Jul 16 '16 at 15:37
• @Moonwalker Depends on the notation you're using. I write $C^\infty_c$ for compact support inside the domain and $C^\infty_0$ for going to zero as you approach the boundary of the domain (or infinity, in an unbounded domain). – Ian Jul 16 '16 at 15:39

It is usual to assume some regularity property for $\Gamma$. For example, in Evans' PDE, if $\Omega$ is bounded and has a $C^1$ boundary, then there is a bounded trace operator $$T : H^1(\Omega) \to L^2(\Gamma)$$ such that $T u = u|_\Gamma$ if $u \in H^1(\Omega) \cap C(\overline\Omega)$. It is also proved that $$\{ u \in H^1(\Omega) : Tu = 0\} = H^1_0(\Omega),$$ thus $\{ u \in H^1(\Omega) : Tu = 0\}$ is already a complete space.

• I'm interested in the details of such proofs. Could you recommend a textbook? I'm interested in the heat conduction equation with very nice boundaries. – Moonwalker Jul 16 '16 at 15:13
• If you have a nice boundary, Evans' book is good enough. Take a look at the chapter on Sobolev spaces. – user99914 Jul 16 '16 at 15:52
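In one dimension, the uniform-convergence remark can be made concrete with a standard estimate; a sketch, starting from $v \in C^1([a,b])$ and extending by density: for any $x, y \in [a,b]$,

$$v(x) = v(y) + \int_y^x v'(t)\,dt,$$

and averaging over $y \in (a,b)$ then applying the Cauchy–Schwarz inequality gives

$$\max_{x \in [a,b]} |v(x)| \;\le\; \frac{1}{\sqrt{b-a}}\,\|v\|_{L^2(a,b)} + \sqrt{b-a}\,\|v'\|_{L^2(a,b)} \;\le\; C(a,b)\,\|v\|_{H^1(a,b)}.$$

Hence if $v_n \to v$ in $H^1(a,b)$ with $v_n(a) = 0$, then $v_n \to v$ uniformly on $[a,b]$ and $v(a) = 0$: the boundary condition survives the completion. (In higher dimensions pointwise values are not controlled this way, which is why the trace operator is needed.)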
# Fabrication of capacitive pressure sensor using single crystal diamond cantilever beam

## Abstract

Fabrication of a single crystal diamond capacitive pressure sensor is presented. First, a single crystal diamond cantilever beam was formed on an HPHT diamond substrate by using selective high-energy ion implantation, metal patterning, ICP etching and electrochemical etching techniques. Second, the desired electrode patterns were processed on this diamond cantilever beam with photolithography and metal evaporation methods. Furthermore, the displacements of the cantilever beam under different pressure conditions were investigated by atomic force microscopy. The capacitance-voltage curves of the single crystal diamond cantilever beam and substrate under different force loading conditions were measured using an Agilent B1505A parameter analyzer. The results show that sensitivity increases with the enlargement of the electrode area of the cantilever beam, and decreases with the rise of the measurement frequency.

## Introduction

Pressure sensors are realized by a variety of function types, including piezoelectricity, piezoresistivity, capacitance, bonded strain gauges and others1. Among them, capacitive pressure sensors attract much attention due to their higher measurement sensitivity, decreased temperature sensitivity, reduced power consumption and better stability2. These advantages give them great potential for commercial applications. Conventionally, silicon-based pressure sensors have been widely used in normal operational environments.
However, there is increasing demand for sensors that can be used in harsh environments with high temperatures, strong oxidation, high radiation and strong corrosion. In such conditions, silicon is not suitable3. Thus, it is necessary to search for an appropriate sensor material capable of overcoming the above-mentioned problems. Diamond is a preferred material for such applications due to its wide band gap energy, high electric breakdown field, high carrier mobility and low dielectric constant4,5,6,7. In particular, it has excellent physical and chemical properties, including high mechanical hardness, high Young's modulus, corrosion resistance and a low friction coefficient8. In addition, the small neutron cross section of diamond allows it to experience low degradation in radioactive environments. Moreover, diamond has the highest known thermal conductivity at room temperature and exhibits good thermal conductivity over a wide temperature range. Thus, diamond is a suitable candidate material for pressure sensors applied in extreme environments such as the combustion chambers of rocket engines and gas-fired boilers for pressure monitoring.

The reported diamond sensors are mainly fabricated on polycrystalline or nanocrystalline films, because commercially available single crystal diamond (SCD) is much smaller and it is difficult to grow SCD on hetero-substrates. Compared to SCD, polycrystalline or nanocrystalline diamond suffers from degraded performance, poor reproducibility and difficulties in electrical conductivity control because of the grain boundaries, impurities and large stress in the films. Thus, SCD has the more promising characteristics for sensor devices. In recent years, H. Yamada et al. fabricated a 2-inch SCD mosaic wafer by using a cloning technique9, and Matthias Schreck et al. reported that an SCD plate with a diameter of 92 mm was synthesized by heteroepitaxy on an Ir/YSZ/Si (001) substrate10.
There are also several companies producing inch-size SCD substrates worldwide. All these large SCD wafers form the base for developing devices. Nevertheless, diamond is difficult to micromachine due to its mechanical and chemical stability. In this paper, the fabrication of a micro-SCD capacitive pressure sensor by using selective high-energy ion implantation, ICP etching, electrochemical etching and metal evaporation techniques was successfully carried out.

## Experiment

Fabrication of the single-crystal diamond cantilever beam began with an HPHT Ib (001) SCD substrate, which was selectively implanted with carbon ions at an energy of 3 MeV11. Then a homoepitaxial layer was grown on the substrate in a microwave plasma chemical vapor deposition system at 1175 °C. During growth, the damaged layer caused by ion implantation was transformed into graphite, providing a sacrificial layer for the formation of cantilever beam structures in subsequent processing12,13. Then, an aluminum film with a thickness of 1 μm was formed on the substrate by conventional photolithography and magnetron sputtering techniques to define the beam structures as a mask for inductively coupled plasma (ICP) etching. After that, the sacrificial layer was removed to release the cantilever beam in a non-contact electrochemical etching system according to the procedure in ref.14. Finally, the diamond cantilever beam was treated with photolithography and metal evaporation techniques to pattern the desired electrodes for electrical measurements of the capacitive pressure sensor. The displacement of the cantilever beam versus force loading and the capacitance-voltage curves under different force loading conditions were measured in air at room temperature using atomic force microscopy and an Agilent B1505A parameter analyzer, respectively.

## Results and Discussion

After ion implantation, a diamond epitaxial layer of acceptable quality can be grown on the damaged surface layer6.
The sacrificial layer induced by ion implantation was removed successfully by electrochemical etching. Figure 1 shows an SEM image of an array of free-standing SCD cantilever beams with a width of 30 μm and lengths of 65 μm, 95 μm, 125 μm and 155 μm, respectively, all supported on the diamond substrate, revealing that SCD cantilever beams were obtained through the ion implantation, ICP etching and electrochemical etching techniques. The air gap between the diamond cantilever beam and the substrate can be clearly observed, making vertical motion possible. In addition, the dimensions of the cantilever beams can be controlled well using the above process. In order to realize the capacitive pressure sensor, the sample was patterned with tungsten film. Figure 2 clearly exhibits a cantilever beam structure with an electrode: the dark grey area indicates the region without tungsten coating, and the white area shows the tungsten-coated area acting as an electrode. Both the cantilever beams and the substrate surface underneath them are coated with tungsten during evaporation. The two cantilever beams are labeled A and B. The dimensions of cantilever A are 65 × 30 × 2.5 µm3 with an electrode area of 35 × 30 µm2, and those of cantilever B are 95 × 30 × 2.5 µm3 with an electrode area of 50 × 30 µm2. A small force loading was applied at the tip of the SCD cantilever beam to characterize the displacement as a function of force loading with the AFM system. Figure 3(a) shows the schematic of the AFM bending test. Figure 3(b) shows the F-w curves of cantilever beams A and B under different force loading conditions, showing that the SCD cantilever beams did not fracture during the measurement. The F-w curves were fitted as: $$\begin{array}{rcl}{{\rm{F}}}_{{\rm{A}}} & = & 0.001\,\mathrm{ln}\,{\rm{w}}+0.0116\\ {{\rm{F}}}_{{\rm{B}}} & = & 0.0009\,\mathrm{ln}\,{\rm{w}}+0.0105\end{array}$$ where F stands for the force loading applied by the AFM and w stands for the displacement of the cantilever beam.
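As an illustration, the fitted logarithmic F-w relations above can be inverted to estimate the displacement produced by a given force. A minimal sketch using the paper's fitted coefficients (the function names are illustrative, and the units are those of the fit, which the text does not state explicitly):

```python
import math

# Fitted coefficients from the F-w curves: F = a * ln(w) + b
COEFFS = {"A": (0.001, 0.0116), "B": (0.0009, 0.0105)}

def force_from_displacement(beam, w):
    """Evaluate the fitted relation F = a*ln(w) + b for cantilever A or B."""
    a, b = COEFFS[beam]
    return a * math.log(w) + b

def displacement_from_force(beam, force):
    """Invert the fit: w = exp((F - b) / a)."""
    a, b = COEFFS[beam]
    return math.exp((force - b) / a)
```

The logarithmic form means the force rises only slowly with displacement over the fitted range, consistent with the small capacitance change reported at low loadings.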
A variable capacitor is realized with the flexible plate and a fixed plate. The plate separation under loading equals d, while the original separation without force loading is d + w. The capacitive behavior of the plates was studied at different plate separations. The capacitance is calculated with the following formula: $$C=\frac{\varepsilon S}{d}$$ where ε stands for the permittivity and S is the tungsten-coated electrode area. Figure 4 displays the schematic of the capacitance test; the black and red lines act as the two electrodes. In this case, the plate separation was controlled by a micro-needle. Based on the F-w data in Fig. 3, the C-V curves of cantilever beams A and B under different force loading conditions were measured and are presented in Fig. 5; the voltage frequencies were 50 kHz, 100 kHz and 1 MHz, respectively. Figure 5 clearly shows that the capacitance is almost constant at a given force loading, and that its average value increases steadily as the force loading increases. The capacitance fluctuation of cantilever beam B is smaller than that of A. Figure 6 shows the capacitance of cantilever beams A and B as a function of force loading, measured at a frequency of 50 kHz. The capacitance increased with increasing applied force loading. At the beginning, there is only a very small change in the capacitance, because under small force loading the cantilever beam bends in the elastic region and the Young's modulus of diamond is high. Then the capacitance increased dramatically, because the distance between the plates decreased remarkably in the non-elastic deformation region and the capacitance is proportional to 1/d. After testing the capacitive behavior at a frequency of 50 kHz, the same process was conducted at frequencies of 100 kHz and 1 MHz for cantilever beams A and B, respectively, as shown in Fig. 7.
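The parallel-plate formula above can be made concrete with a rough numerical sketch. The vacuum permittivity (as a stand-in for the air gap) and the nominal gap values below are assumptions for illustration only — the paper does not state the gap:

```python
EPSILON_0 = 8.854e-12  # vacuum permittivity in F/m (assumed for the air gap)

def plate_capacitance(area_m2, gap_m, eps=EPSILON_0):
    """Parallel-plate capacitance C = eps * S / d."""
    return eps * area_m2 / gap_m

# Electrode areas from the paper: A is 35 x 30 um^2, B is 50 x 30 um^2
area_a = 35e-6 * 30e-6
area_b = 50e-6 * 30e-6

# With the same (assumed) gap, the larger electrode B gives the larger
# capacitance, and capacitance grows as the gap shrinks under loading (C ~ 1/d).
gap = 1e-6  # assumed nominal gap of 1 um
c_a, c_b = plate_capacitance(area_a, gap), plate_capacitance(area_b, gap)
```

This reproduces the two qualitative trends reported: sensitivity grows with electrode area, and capacitance rises as loading closes the gap.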
The capacitance increased slowly at the beginning and then increased dramatically as the force loading grew, as explained above. The capacitance was found to be larger at the low frequency of 50 kHz, indicating that our capacitive pressure sensor is better suited to working at low frequency. The trend of the capacitance variation under a given force loading was not affected by the frequency. Figure 8 shows the sensitivity of cantilever beams A and B at frequencies of 50 kHz, 100 kHz and 1 MHz. It is clear that the sensitivity increases with the electrode area of the cantilever beam and decreases with rising measurement frequency.

## Conclusion

A micro-SCD capacitive pressure sensor on an HPHT diamond substrate has been successfully fabricated and investigated. The C-F curve of the SCD cantilever beam shows that the capacitance increases as the force loading increases. The sensitivity is proportional to the electrode area of the cantilever beam and inversely proportional to the measurement frequency. Capacitive pressure sensors fabricated from diamond can be applied in harsh environments, especially high-temperature, strongly oxidizing, high-radiation and corrosive environments.

## Acknowledgements

This work was supported by the National Natural Science Foundation of China (No. 61705176). The SEM work was done at the International Center for Dielectric Research (ICDR), Xi’an Jiaotong University, Xi’an, China; we thank Ms. Dai for her help in using the SEM. The AFM work was done at the Key Laboratory of the Ministry of Education & International Center for Dielectric Research, Xi’an Jiaotong University, Xi’an, China; we thank Dr. Zhao for her help in using the AFM.

## Author information

### Contributions

J.F. and H.X.W. carried out the experimental work and the data collection, wrote the main manuscript text and prepared all figures. Z.C.L. and R.Z.W. participated in the design and coordination of the experimental work. T.F.Z., Y.L., X.F.Z. and K.Y.W.
oversaw the project and assisted with the writing of the manuscript. All authors reviewed the manuscript.

### Corresponding author

Correspondence to Hong-Xing Wang.

## Ethics declarations

### Competing Interests

The authors declare no competing interests.

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Fu, J., Zhu, T., Liang, Y. et al. Fabrication of capacitive pressure sensor using single crystal diamond cantilever beam. Sci Rep 9, 4699 (2019). https://doi.org/10.1038/s41598-019-40582-x
# Gluing along boundary of manifold

Let $$S_1, S_2$$ be two topological manifolds without boundary and let $$D_1,D_2$$ be two disks embedded in $$S_1$$ and $$S_2$$ respectively. Define $$X_i:=S_i\setminus\text{int}(D_i)$$. Let $$Y$$ be the space obtained by gluing $$\partial D_1$$ to $$\partial D_2$$. I want to show that $$Y$$ is a topological manifold. I have trouble showing this for points on the boundary components. Let $$x\in \partial D_1$$. Then $$x\sim \tilde{x}\in\partial D_2$$. I want to find an open neighbourhood of $$x$$ homeomorphic to $$\mathbb{R}^2$$. How can I do this?

• Well, for one thing, you will have to look at the definition of quotient spaces. Otherwise, how will you know what any neighbourhood of $x$ looks like? So, what is your definition of quotient space topology? – Arthur Mar 19 at 18:45
• Let $h:\partial D_1\to\partial D_2$ be a homeomorphism. Then the quotient space is $X_1\cup X_2/\{x\sim h(x)\}$ – user408856 Mar 19 at 18:49
• Sure, those are the points in the quotient space. That's not what I asked about. – Arthur Mar 19 at 19:17
• Oh I meant, endowed with the quotient topology – user408856 Mar 20 at 7:23
• And I repeat my original question: So, what is your definition of quotient space topology? – Arthur Mar 20 at 7:26

I'm going to assume that you know all about quotient spaces (if you don't then you'll have to learn that, because otherwise no answer to your question will make sense). Another thing you need is the collar neighborhood theorem from differential topology, applied to the boundaries of $$X_1$$ and $$X_2$$. That theorem says that there exist neighborhoods $$N_1$$ of $$\partial X_1$$ and $$N_2$$ of $$\partial X_2$$ and diffeomorphisms $$f_1 : N_1 \to \partial X_1 \times [0,1)$$ and $$f_2 : N_2 \to \partial X_2 \times [0,1)$$. Now choose a diffeomorphism $$g : \partial X_1 \to \partial X_2$$.
And then we have the quotient topological space $$Y$$ together with the quotient map $$q : X_1 \cup X_2 \to Y$$ obtained by identifying each $$x \in \partial X_1$$ with $$g(x) \in \partial X_2$$, so $$q(x)=q(g(x))$$. Let me alter your notation slightly: I'll choose $$x_1 \in \partial D_1 = \partial X_1$$ and $$x_2 = g(x_1) \in \partial D_2 = \partial X_2$$, which corresponds to the point $$x = [x_1] = [x_2] \in Y$$ (here I use the notation $$[\cdot]$$ to denote the corresponding point in the quotient space; I'm unsure whether this is what you intend in your question when you put the $$\tilde{}$$ symbol over something). So now I have to describe a manifold chart in $$Y$$ for the point $$x$$. To do this, I'll choose a manifold chart in $$\partial X_1$$ around $$x_1$$, i.e. an open subset $$U_1 \subset \partial X_1$$ containing $$x_1$$ and a diffeomorphism $$\phi_1 : U_1 \to B$$, where $$B$$ is the unit open ball in Euclidean space. From this I get a manifold chart in $$\partial X_2$$ around $$x_2$$, namely $$U_2 = g(U_1)$$ and $$\phi_2 = \phi_1 \circ g^{-1} : U_2 \to B$$. In $$Y$$ define the open chart around $$x$$ as follows: $$U = q\bigl(f_1^{-1}(U_1 \times [0,1)) \cup f_2^{-1}(U_2 \times [0,1))\bigr)$$ The map $$\psi : U \to B \times (-1,+1)$$ will be given by the following formula. Each $$y \in U$$ has one of two forms, and we give the formula for each form:

• If $$y = q f_1^{-1}(\phi_1^{-1}(z),t)$$ for $$(z,t) \in B \times [0,1)$$ then $$\psi(y) = (z,-t) \in B \times (-1,0]$$
• If $$y = q f_2^{-1}(\phi_2^{-1}(z),t)$$ for $$(z,t) \in B \times [0,1)$$ then $$\psi(y) = (z,t)$$

This is well-defined if $$y$$ has both forms (which only happens when $$t=0$$). And then, by tracing through all the definitions and using all the theorems you can possibly find about quotient spaces, it follows that $$\psi$$ is a homeomorphism from $$U$$ to $$B \times (-1,+1)$$.
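The well-definedness at the seam can be spelled out explicitly; this is a small supplementary check, using the collar convention $$f_i^{-1}(x,0)=x$$ for $$x \in \partial X_i$$ and writing the chart composition with $$\phi_i^{-1}$$ explicitly:

```latex
% Both branches agree at t = 0: the quotient map identifies x with g(x),
% and the two boundary charts are related by phi_2^{-1} = g \circ phi_1^{-1}.
$$q f_1^{-1}(\phi_1^{-1}(z),\,0) \;=\; q\bigl(\phi_1^{-1}(z)\bigr)
  \;=\; q\bigl(g(\phi_1^{-1}(z))\bigr) \;=\; q f_2^{-1}(\phi_2^{-1}(z),\,0),$$
$$\text{and both formulas return } \psi(y) = (z,\,0), \text{ since } -0 = 0.$$
```

So the two defining formulas glue to a single continuous map, as claimed.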
# Escaping a dying planet

Imagine that one day in the not-too-distant future, our scientists discover that our world is dying. It doesn't matter how this is happening, maybe magic or an unstable core or Gaia herself has finally had enough of our sh*t, but the important thing is that it is happening, and soon. Our only hope is to escape and head out to space. One tiny solace is that the Earth isn't going to explode, just become uninhabitable, so we won't have to travel far. With a time frame of around 50 years, assuming that everyone on the planet managed to pull their heads out of their asses and work together for a change, what would be the best method to save the most people, and how many people would we reasonably be able to save? What if we only had 30 years? Harder science preferred. If your answer calls for needing more time, state how much extra time would be needed.

Edit: Tech level set 20 minutes in the future. I think I may not have been clear enough.

Edit: Thanks for all the great answers guys! Project ORION is clearly the way to go. I would accept more answers if I could, but Jimmy360 was the first to provide the answer. Thanks :D

• How uninhabitable? If we're talking about a barren desert, it would still be far more practical to try to stay on Earth than, say, go to the Moon or to Mars, which is also a barren desert, but without the atmosphere and several million kilometers away. It would have to be so harsh that we couldn't be saved even by burrowing underground before going into space would start looking like a good option. – Neil May 25 '15 at 8:07
• @FeaurieVladskovitz "For the sake of argument" uninhabitable, then? Pretty sure being overrun by xenomorphs is still a more practical option, believe it or not.
– Neil May 25 '15 at 8:21
• By "everyone on the planet managed to pull their heads out of their asses", do you imply that we don't have a lot of people saying it isn't happening and that we can't spend all that money without knowing that, without spending it over these decades, what's suggested will actually happen? – user May 25 '15 at 11:16
• Related: Why Not Space? on the Do The Math blog, written by an associate professor of physics at the University of California San Diego. – user May 25 '15 at 11:19
• While it's not directly related, here's a cool documentary similar to this. It talks about how we could escape Earth with a neutron star headed straight at us, and 70 years to get out. – Ethan Bierlein May 25 '15 at 18:19

Nuclear Pulse Propulsion Rockets

NPP rockets were (still are?) actually developed by the U.S. government. They called it Project Orion. Simply put, the design is to propel the rocket by detonating a nuclear bomb underneath it. One would expect the shield underneath to melt or be destroyed along with the rocket, but the rocket gets away so quickly that it is safe.

There are 7.13 billion people on Earth and the average human weighs 62 kg. The minimum amount of mass that we have to lift is 441,440,000,000 kg. It takes about 350,703 joules to lift a single kg to geosynchronous orbit. One of the usual problems with rockets is that they have to carry their own fuel, making it exponentially harder to lift things, but nuclear pulse units are so energy-dense that this design largely avoids the tyranny of the rocket equation. 350,703 × 441,440,000,000 = 154,814,330,000,000,000, so we need at least 154,814,330,000 MJ to lift everyone to space. The Castle Bravo device, detonated by the U.S., released 63,000 TJ of energy, about 40% of that requirement, so two or three such devices would in principle supply the energy, with a few more providing margin to bring equipment (like terraforming equipment).

Edit (a counter to Jim2b's answer): In this situation, something I like to call emergency Communism would come into play.
The unified Earth government would cut off all unnecessary business/production and force everything to work towards the Orion goal. The world's steel production would be massively increased.

• Comments are not for extended discussion; this conversation has been moved to chat. – Monica Cellio May 28 '15 at 2:12

If the Earth is dying, then a revived ORION is the way to go. Polluting the atmosphere with fallout is going to be the least of everyone's worries. Calculations made by the ORION team in the late 1950s suggested they could have gone to Mars in the late 1960s, and Saturn by 1975. Their spaceships would have resembled Winnebagos rather than the tin cans we remember from history; ORION is so efficient in terms of both Isp and delta-V that ORION team members jokingly suggested they could bring barber chairs on board if they wanted.

The NextBigFuture blog has also been rather enthusiastic about the so-called "Jules Verne Cannon", which involves firing a nuclear "physics package" in an underground salt dome and channeling the blast through a large pipe to boost large and insensitive payloads into space. One suggested example is coal, so you have carbon to carry out various chemical reactions on the Moon, which suggests just how cheap this could potentially be. That idea was inspired by the real-life "Plumbbob" test series, where one underground explosion popped the cap from the top of the shaft. Calculations suggest the huge steel cap exited the shaft at 6× Earth escape velocity, although no verified records exist (the cap only appears in one frame of a high-speed movie recording the event) and the cap itself has never been found (it most likely disintegrated inside Earth's atmosphere due to aerodynamic stress and heating). Obviously, payloads launched by a Jules Verne Cannon need to be very rugged indeed.
So the basic escape route would be to use a Jules Verne launcher to fling payloads of heavy, unbreakable "stuff" into orbit, or even blast it into the Moon (future astronauts can "mine" the new craters for steel, other metals and minerals), while sending the actual astronauts into space in large Orion craft, which have enough delta-V to pick up payloads in orbit and carry on to the far reaches of the Solar System. By lofting large amounts of basic materials via the Jules Verne launcher, even relatively inefficient recycling systems can be made to last for years while better life-support loops are designed and built, and new sources of materials from the asteroids and moons of the Solar System are developed.

• The Orion part is mostly the same, but the Jules Verne Cannon is a new and quite important addition, since it allows (future) us to make the main ships smaller and use less propellant, since raw materials don't have to be kept in cargo bays. – zovits May 26 '15 at 8:33

This is amplification on previous answers citing Project Orion (aka Nuclear Pulse Propulsion). Read the provided reference for history and technical background.

Background

Research & testing performed from the 1950s to the 1970s indicated that, using that level of technology, we could build an 8,000,000 tonne craft capable of achieving orbit and providing a bit of extra $\Delta V$, perhaps enough for $V_{esc}$. However, only about 1/3 of this mass is payload mass (2,700,000 tonnes).

The problem

Let's assume we need to lift every human being off the Earth and there's zero population growth:

Current population ~ 7,000,000,000
Average mass ~ 100 kg each
Total mass ~ 700,000,000,000 kg
Total tonnes ~ 700,000,000 tonnes

Assume that we need 20x this mass for equipment & consumables for keeping people alive in transit, colony construction equipment, plus sundry other items.
Also assume that the ships are constructed in a modular prefab fashion that allows us to directly use them at the destination or disassemble them and use the parts in existing colonies:

Total lift requirement ~ 14,000,000,000 tonnes
Per craft payload mass ~ 2,700,000 tonnes / craft
Total required craft ~ 5200 craft

Estimating timing

Wikipedia states that world production of container ships was ~11,000,000 tons in 2011, and our Super Orions would involve similar (but more stringent) construction difficulties (remember we can use normal construction materials like steel). So without straining we could build on average 1 of these Super Orions per year, but each would require multi-year construction (say 5 years like US aircraft carriers, or perhaps even 10-15 years). We'd be done in 5,200 years.

With straining, (I would guess) we could build 10x this number. We'd be done in 520 years.

With all-out desperation, the upper bound of what we could make would be determined by our critical resource production (such as steel). Assume our 8,000,000 tonne craft are composed entirely of steel and that this is our limiting resource. This site indicates world steel production is around:

World steel production ~ 150,000,000 tonnes / year
Max Super Orion production ~ 20 ships / year

We'd finish making our Super Orion fleet in 260 years.

Estimating cost

Let's assume that the amount of labor and difficulty of constructing these Super Orions equates, on a tonne-per-tonne basis, with the expense of building nuclear aircraft carriers. US nuclear-powered aircraft carriers mass about 100,000 tonnes of displacement and cost 26 billion (USD). Each Super Orion will then cost ~2.1 trillion (USD), or about 2/3 of the 2014 US federal expenditures. The fleet will cost 11,000 trillion (USD).

Destinations

The Earth is the garden spot of our Solar System. Very few places have both the readily available volatiles, metals, and other materials (soil?) that we'll need to survive.
IMO, Mars and perhaps a few of the main-belt asteroids (Ceres is looking really good right now) might fulfill this role. But really only Mars has enough volatiles, metals, and room to host a significant portion of the human population. Where are we going to put everyone?

• You can't divert all of the world's steel production to this project. Steel is needed to mine iron, to transport the iron ore and other products from mine to mill, and to transport the steel. Steel is needed to produce and transport food (people need to eat), transport petroleum (people need to get to work), and to make cars, trucks, railcars, railways, boats, and buildings. Your basic answer of 5200 years is short. That said, this is a much better answer than the other answer that proposes to use nuclear propulsion. – David Hammen May 25 '15 at 17:15
• @DavidHammen, I just used that as an upper bound. In such a scenario, I imagine we'd increase our steel production significantly but could only divert this much to ship construction while the increased production went to maintain the necessary infrastructure. We'd probably switch to a rationing system to reduce consumption of resources critical to ship construction. Plus the reality is our ships are 1/3 propulsion (mostly pusher plate), 1/3 payload, & 1/3 all else (mostly structure & radiation shielding). A significant portion of our payload will be consumables like water. – Jim2B May 25 '15 at 20:43
• And you can't bring them down again and load them up???? – Loren Pechtel May 25 '15 at 22:18
• @LorenPechtel, I've never seen any research that discussed landing with these. I can foresee multiple problems with the maneuver too. I'm not saying we couldn't do it, but I do think it'd be terribly difficult and maybe/probably not worth the effort. Perhaps we'd build them along a modular plan which would allow us to disassemble them at their destinations and use them as prefab colony construction.
It'd also mean that destinations like Mars are probably out. Which is a shame because I think only Mars has a hope of supporting billions of people. – Jim2B May 26 '15 at 1:00 • @Jim2B Orion was originally envisioned for manned missions. You think they planned re-entry capsules to come down with? Land over water so you don't need as exact a landing as SpaceX needs and the water can wash away most of the debris from the bombs and isn't destroyed by the last detonations. You're dunking brute force engineering rather than an intricate rocket. – Loren Pechtel May 26 '15 at 2:52 If we had 50 years, we'd be boned. With the most optimistic estimates of fuels costs; SpaceX can put things into Low-Earth Orbit for $1,600/kg. The average person weighs about 62kg. This becomes an enormous problem when you try to ship all 7.13 billion people into space. That's going to cost 704 million million dollars ($704,320,000,000,000). Now you have to find the spare cash to build enough spaceships for seven billion souls. At the end of the day, with anything close-to today's technology - it's just not practical to save most people. A lot of people are going to be left behind. We can't even keep a few astronauts in space indefinitely. We manage a few years at best. But in general, how many people we could potentially save is dependent on how advanced our technology has progressed - the further along we are, the more we can do. Exactly how far? That question is far too broad because every person can select their own subjective answer. • Sorry about this, I don't think I was clear enough, but I'm cool with the tech being slightly more advanced than what we currently have. – Feaurie Vladskovitz May 25 '15 at 6:43 • @FeaurieVladskovitz, updated answer. The crux of it is; slightly more advanced isn't going to cut it. Needs to be significantly at the very minimum. 
– user6511 May 25 '15 at 7:06
• I'm not sure where Wikipedia gets the \$1,600/kg figure from; at \$90M to lift 53,000 kg to LEO, the closest I can find is the Falcon Heavy costing a shade under \$1,700/kg (to LEO) maxed out. Regardless, this answer disregards a pretty major factor by looking only at the biomass, whereas to survive you need a lot more. Food, oxygen, heating, ... all mean that only a fraction of your rocket's payload capacity can be used to actually lift humans. Which means that in practice, the situation is significantly worse than laid out above. – user May 25 '15 at 11:11
• The gross world product is around \$75 trillion per year; over 50 years that's ~3,750 trillion dollars, against ~\$704 trillion just for the lift. I think this rounds out the answer as to why cost is a limiting factor: even with a large share of global production magically focused on space, and lift being only a fraction of the true per-person cost, only a small percentage of humanity gets saved. But for a scifi story this actually gives you more than enough people to start a space civilization. – edA-qa mort-ora-y May 25 '15 at 13:09
• I believe this answer is correct if we limit ourselves to chemical propulsion schemes only; however, as other answers have pointed out, we don't need to restrict ourselves. We can use nuclear, which possesses >1,000,000x the specific energy of chemical fuel. Plus, NPP-type craft can afford to use heavy, bulky, vastly cheaper construction (it actually smooths out the ride) than chemically propelled craft can. – Jim2B May 25 '15 at 16:31

There is a "documentary" called Escaping Earth that explores this scenario.
And while I disagree with some of the options presented (ORION is IMO impractical, and the whole "artificial ecosystem in a cylinder" idea is quite stupid), it does make a few points:

• It is impossible to save every human, but it is possible to save the human species and maybe some other species
• It would take the combined effort of the whole world to save a few thousand chosen people
• There would be people against it

I believe all the issues we have right now with space colonization and exploration are related to money. We already have the technology to create a colony on Mars, but there is huge pressure to minimize expenses, which puts tight limits on the amount of stuff we can send along with the colonizers. But what if we could send a 1000 ton spaceship to Mars every month? In a few years, there would be enough material there that self-sufficiency could be achieved. Yes, it would be expensive, but not impossible. We just need to motivate our politicians to invest in space colonization instead of the military, and big corporations to make rockets instead of consumer gadgets.

• "But what if we could send a 1000 ton spaceship to Mars every month?" Well, we can't. But yes, if money wasn't a concern and people were actually working together, we certainly could send a lot of stuff to Mars. – user May 25 '15 at 11:13
• Why do you feel NPP (Orion) is impractical? It'd be uncomfortable, yes. But most people would prefer discomfort to death... – Jim2B May 25 '15 at 16:32
• @Jim2B I would say the worst problem is creating a mechanical dampening system that is able to withstand thousands of blasts without need for repair or maintenance. IMO an engine consisting of hundreds of smaller ion engines powered by fusion or antimatter would be a much safer and more maintainable solution. – Euphoric May 25 '15 at 18:28
• @Euphoric, those concepts have great $I_{sp}$ but insufficient thrust-to-weight ratio.
You need both to do what needs to be done in this case, and only nuclear has a sufficient "bang for the buck" (or weight, in this case). Fusion might do it for us someday, but we don't have that yet, so nuclear bombs are the only practical way to do it right now. – Jim2B May 25 '15 at 20:37
• But I agree, the mechanical damping and radiation (esp. $\gamma$ backscatter) are real problems that would need to be solved. We do, however, have reason to be very optimistic that we could solve both problems. We understand them and know how to do it in principle; we just don't have working models of the necessary systems yet. – Jim2B May 25 '15 at 20:39

If the Earth is going to be uninhabitable anyway, you can use ORION-type spaceships to get the most valuable people out (this is usually a great plot device). Maybe you can help some of the rest to get out if you use some of the 50 years to build a space elevator. I think it would be impossible to get everybody out, even if you had more time.

I reckon the most likely scenario would be to take "valuable people" (define that as you will for your story), maybe useful animals and plants (or maybe just DNA samples) and, after that, anyone/anything else that time and money allow for. It won't be nice, it won't be pretty, and more likely the people left behind will riot and try to stop you...

• Half the first volume can be about deciding who goes. Can you buy a ticket for your family by donating enough billions of dollars? – Zither13 May 25 '15 at 16:56
• How about this? Supposedly, only people under 40 would get a chance to go (in a lottery). But our heroes discover that the super-rich can buy a ticket for 500 million euros... – Alex San May 25 '15 at 18:34
• @AlexSan: Would you really use a lottery? Or would you rather carefully analyze what skillsets (and genetic mix) you need and pick those? – Matthieu M. May 25 '15 at 18:58
• @MatthieuM. Yeah, a lottery is stupid.
But people who don't have any appreciable skills aren't going to be very happy when you declare that only the best and brightest are going to make it. Even today, many "stupid" people dislike "smart" people; add being "smart" as a criterion to survive, and you'll have massive rebellions on your hands. The best would probably be some combination: get enough people to ensure the survival of civilization and technology, and let the rest have a lottery. Since a lottery is (stupidly) perceived as fair, it might very well help the PR enough :) – Luaan May 26 '15 at 8:41
• @Luaan: Or rig the lottery? (Might be detected, though, so your option might be best.) I was thinking you might get a rebellion anyway when announcing the lottery results; maybe delaying the announcement each time until right before take-off would preserve hope and prevent the rebellion. – Matthieu M. May 26 '15 at 8:57

Right now, in order to keep astronauts in space, they require a LOT of infrastructure on Earth; all the food and other supplies come from it. It is theoretically possible to grow plants in space, but the amount of land required even for a single family to subsist on is not trivial. Finally, technology breaks occasionally and requires repairs and spare parts, which also come from Earth.
Space presents enormous challenges and for every person staying up there requires thousands working down here to make it happen, creating even a tiny sustainable, self sufficient facility up there in just 50 years is pretty unlikely. Being able to put 7 billion in space and have life support for them would probably require a thousand years if it's even possible. The main limiting factor is time and to a lesser extent money. If the human race is at risk then a temporary solution could be made that meant money was irrelevant, or at least credit could be extended to defer the problem. I would advocate using the Moon as a temporary base, to buy time for a longer solution to be found. We already have the technology to get to the Moon and back and all we would need to do is build something similar but on a much bigger scale. Rockets could take off on a regular basis to make the trip and a Moon base could be built to house the ever increasing number of people arriving. The base would include facilities to turn around the rockets so they could be re-used many times to bring more and more people with more and more rockets being built on Earth so as time went by you would have thousands of rockets bringing in hundreds of people in each trip. Heavy resources that would be required including water (ice) and metals etc could be sent using the Jules Verne Cannon. Assuming each rocket could carry 1,000 people you would need 70,000,000 trips to evacuate everyone. Five thousand rockets, each taking a week for the round trip would therefore be able to evacuate 5,000,000 weekly, 250,000,000 annually. It would take 28 years to get everyone to the Moon. The figures are arbitrary of course, building a rocket capable of carrying 1,000 people is not currently possible but equally the 7 day time scale for the round trip is probably too long. Once established on the Moon a longer term solution could be found such as moving to Mars etc. 
Conditions on the Moon would be very crowded, so living quarters, factories etc. would be constructed under the surface. If the numbers didn't stack up, then you could enforce a policy of not allowing people to have children, or one child per family as in China; this would reduce the figure from 7 billion somewhat over the time periods involved. You could also have something along the lines of Logan's Run, where people over a certain age are left behind.

To save the human race does not mean saving EVERY single human. Maybe high government officials and rich people get to buy tickets that automatically let them board the trip to the other planet, but the poor people who cannot afford a ticket will either simply not be taken along or be subjected to a lottery process. Or a sort of lottery where the more you pay, the bigger your chance of winning a ticket. A person who wins the lottery is allowed to take his family with him if the family is not too big.

Also, I would recommend that you construct several giant ships in orbit so that no fuel is wasted on the ascent into space. You would bring the people that are to be saved to the ships using spaceplanes; the same applies to landing them on Mars when you arrive there. Also, you can send expeditions to Mars (this is the planet which I recommend) that do reconnaissance and find good sites for bases before setting up the modules (you can use robots too) and the agricultural domes. Before the main wave of people arrives, you will send crops and livestock to Mars.
# Happy Pi Day 2013

It’s March 14 (3.14) and it’s Pi Day. Pi Day is an annual celebration of $\pi$, the most popular mathematical constant. Larry Shaw of the San Francisco Exploratorium led the first large celebration of Pi Day in 1988, and since then it has spread all over the world.

Pi Pie (via Wikipedia)

To know more about $\pi$, you might want to read my posts about it.
Gauge/Gravity Duality 2015 Europe/Rome The Galileo Galilei Institute for Theoretical Physics (GGI), Arcetri, Florence Participants • Achilleas Passias • Alberto Zaffaroni • Aldo Lorenzo Cotrone • Alessandro Tomasiello • Alex Buchel • Alfonso Ballon-Bayona • Alfonso Ramallo • Alice Bernamonti • Amos Yarom • Andrea Amoretti • Andrea Cappelli • Andrea Marzolla • Andrei Parnachev • Andrei Starinets • Andrew O'Bannon • Ann-Kathrin Straub • Anton Faedo • Antonio Garcia Garcia • Ben Craps • Benjamin Assel • Blaise Goutéraux • Carlos Hoyos • Charlotte Kristjansen • Christiana Pantelidou • Christopher Herzog • Dam Thanh Son • Daniel Arean • Daniel Fernández • Daniele Musso • Davide Forcella • Dawei Pang • Debajyoti Sarkar • Di-Lun Yang • Dimitrios Zoakos • Dmitri Khveshchenko • Domenico Seminara • Eduardo Conde Pena • Elias Kiritsis • Enrico Randellini • Esko Keski-Vakkuri • Fabio Franchini • Federico Galli • Francesco Bigazzi • Francisco Pena-Benitez • Gianluca Grignani • Giuseppe Policastro • Guy De Teramond • Hans Guenter Dosch • Himanshu Raj • Ignacio Salazar • Ioannis Bakas • Irina Galstyan • Jacob Sonnenschein • Jan de Boer • Javier Tarrio • Jelle Hartong • Jie Ren • Johanna Erdmenger • Jonas Probst • Keun-Young Kim • Kristan Jensen • Larus Thorlacius • Leopoldo A. 
Pando Zayas • Loredana Bellantuono • Luca Griguolo • Luis Melgar • Mahdis Ghodrati • Manuela Kulaxizi • Marco Caldarelli • Maria Paola Lombardo • Marika Taylor • Mario Flory • Marko Djuric • Martin Ammon • Matteo Baggioli • Matteo Bertolini • Matthew Lippert • Matthias Ihl • Matti Jarvinen • Micha Berkooz • Micha Moskovic • Michael Lublinsky • Michela Petrini • Mike Blake • Milosz Panfil • Monica Guica • Nabil Iqbal • Natalia Pinzani Fokeeva • Nathan Seiberg • Nicholas Evans • Nicodemo Magnoli • Niels Obers • Niko Jokela • Nikolaos Kaplis • Paul Dempster • Paul Romatschke • Pietro Colangelo • Rene Meyer • Riccardo Argurio • Richard Davison • Robert Myers • Roberto Auzzi • Sang-Jin Sin • Shigenori Seki • Shiraz Minwalla • Silvia Penati • Stanley J. Brodsky • Takaaki Ishii • Tobias Zingg • Troels Harmark • Umut Gursoy • Valentina Giangreco Puletti • Veselin Filev • Victor Godet • Vijay Balasubramanian • Walter Tangarife • Yegor Korovin • Yolanda Lozano • Yunseok Seo • Monday, 13 April • 08:15 08:55 Registration 40m • 08:55 09:00 Welcome 5m Speaker: Nick Evans • 09:00 12:50 Morning session • 09:00 Electric fields and quantum wormholes 40m A classical Einstein-Rosen bridge changes the topology of spacetime, allowing (for example) electric field lines to penetrate it. It has recently been suggested that in the bulk of a theory of quantum gravity, the quantum entanglement of ordinary perturbative quanta should be viewed as creating a quantum version of an Einstein-Rosen bridge between the quanta, or a “quantum wormhole”. For this “ER=EPR” correspondence to make sense it then seems necessary for a quantum wormhole to allow (for example) electric field lines to penetrate it. I will discuss (within low-energy effective field theory) whether or not this happens. 
Speaker: Nabil Iqbal • 09:40 Transport in holographic systems with momentum dissipation 40m Gauge/gravity duality can be used to study the transport properties of strongly interacting systems with no quasiparticles. I will give an overview of some holographic toy models of states like this, in which momentum is not conserved and thus the transport of energy and charge is non-trivial. I will describe how the transport properties of the most basic such example can be understood in terms of two simple, non-holographic, effective theories, one of which is valid when momentum dissipates slowly and one when it dissipates quickly. Speaker: Richard Davison • 10:20 Coffee break 30m • 10:50 Disorder in AdS/CFT 30m Speaker: Leopoldo Pando Zayas • 11:20 Holographic Charged Impurities 30m I will study the effect of charged impurities on holographic superconductors and brane intersections. Interestingly, for the former setup one can observe that for a long enough system size the noise suppresses superconductivity. I will present results for the conductivity of these disordered systems. Speaker: Daniel Arean • 11:50 Electromagnetic properties of charged viscous fluids: from string theory to electrons 30m Speaker: Davide Forcella • 12:20 Holographic Conductivity: Insulators, Supersolids, and scaling 30m Speaker: Elias Kiritsis • 12:50 14:20 Lunch break 1h 30m • 14:20 18:10 Afternoon session • 14:20 Generalized Global Symmetries 40m We will discuss in a systematic way a generalization of ordinary global symmetries, whose charged operators are line operators, surface operators, etc., and whose charged excitations are strings, membranes, etc. Many of the properties of ordinary global symmetries apply here. They lead to Ward identities and hence to selection rules on amplitudes. Such global symmetries can be coupled to classical background fields and they can be gauged by summing over these classical fields. 
These generalized global symmetries can be spontaneously broken (either completely or to a subgroup). They can also have ’t Hooft anomalies, which prevent us from gauging them, but lead to ’t Hooft anomaly matching conditions. Such anomalies can also lead to anomaly inflow on various defects and exotic Symmetry Protected Topological phases. Our analysis of these symmetries gives a new unified perspective of many known phenomena and uncovers new results. Speaker: Nathan Seiberg • 15:00 The Holographic Goldstino 40m We find the fingerprints of the Goldstino associated to spontaneous supersymmetry breaking at strong coupling, using holography. The Goldstino massless pole arises in two-point correlators of the supercurrent, due to contact terms in supersymmetry Ward identities. We show how these contact terms are obtained from the holographic renormalization of the gravitino sector, independently of the details of the background solution. Speaker: Matteo Bertolini • 15:40 Rigid Holography and the 6D (2,0) CFT on AdS_5*S^1 30m Field theories on anti-de Sitter (AdS) space can be studied by realizing them as low-energy limits of AdS vacua of string/M theory. In an appropriate limit, the field theories decouple from the rest of string/M theory. Since these vacua are dual to conformal field theories, this relates some of the observables of these field theories on anti-de Sitter space to a subsector of the dual conformal field theories. We exemplify this 'rigid holography' by studying in detail the six-dimensional ${\cal N}=(2,0)$ $A_{K-1}$ superconformal field theory (SCFT) on $AdS_5\times \mathbb{S}^1$, with equal radii for $AdS_5$ and for $\mathbb{S}^1$. We choose specific boundary conditions preserving sixteen supercharges that arise when this theory is embedded into Type IIB string theory on $AdS_5\times \mathbb{S}^5 / \mathbb{Z}_K$. 
On $\mathbb{R}^{4,1}\times \mathbb{S}^1$, this six-dimensional theory has a $5(K-1)$-dimensional moduli space, with unbroken five-dimensional $SU(K)$ gauge symmetry at (and only at) the origin. On $AdS_5\times \mathbb{S}^1$, the theory has a $2(K-1)$-dimensional 'moduli space' of supersymmetric configurations. We argue that in this case the $SU(K)$ gauge symmetry is unbroken everywhere in the 'moduli space' and that this five-dimensional gauge theory is coupled to a four-dimensional theory on the boundary of $AdS_5$ whose coupling constants depend on the 'moduli'. This involves non-standard boundary conditions for the gauge fields on $AdS_5$. Near the origin of the 'moduli space', the theory on the boundary contains a weakly coupled four-dimensional ${\cal N}=2$ supersymmetric $SU(K)$ gauge theory. We show that this implies large corrections to the metric on the 'moduli space'. The embedding in string theory implies that the six-dimensional ${\cal N}=(2,0)$ theory on $AdS_5\times \mathbb{S}^1$ with sources on the boundary is a subsector of the large $N$ limit of various four-dimensional ${\cal N}=2$ quiver SCFTs that remains non-trivial in the large $N$ limit. The same subsector appears universally in many different four-dimensional ${\cal N}=2$ SCFTs. We also discuss a decoupling limit that leads to ${\cal N}=(2,0)$ 'little string theories' on $AdS_5\times \mathbb{S}^1$. Speaker: Micha Berkooz • 16:10 Coffee break 30m • 16:40 Quark mass in backreacted holographic QCD 30m QCD has an interesting dependence on the quark mass, in particular near the edge of the conformal window at large N_f. This can be studied by using holographic bottom-up models for QCD in the Veneziano limit (V-QCD) where the flavor fully backreacts to the glue. I will sketch the phase diagram as a function of the quark mass and x=N_f/N_c, and discuss phenomena such as the hyperscaling relations of the meson masses and the discontinuity of the S-parameter in the conformal window. 
I will show how the phase diagram in the presence of a double trace deformation ~(\bar q q)^2 is obtained. Speaker: Matti Jarvinen • 17:10 Nuclear shadowing in the holographic framework 30m Speaker: Pietro Colangelo • 17:40 Light-Front Holography and New Advances in Nonperturbative QCD 30m Speaker: Stanley Brodsky • 18:10 19:30 Reception 1h 20m • Tuesday, 14 April • 09:00 12:50 Morning session • 09:00 The Membrane Paradigm at large D 40m Speaker: Shiraz Minwalla • 09:40 Entanglement Entropy and Boundary Terms: Two Short Stories 40m Conformal transformations to hyperbolic space are a useful tool for calculating entanglement entropies in conformal field theories. I will use this tool to calculate thermal corrections to entanglement entropy for conformal field theories on spheres. I will also consider the universal log contribution to the entanglement entropy for CFTs in even dimensional flat space. In both cases, we will see the crucial role played by boundary terms. Speaker: Christopher Herzog • 10:20 Coffee break 30m • 10:50 Twisted index of 3d gauge theories 30m We discuss general results for a generalized twisted partition function of 3d gauge theories on S^2 X S^1 which are relevant for the holographic interpretation of supersymmetric AdS4 black holes. Speaker: Alberto Zaffaroni • 11:20 A menagerie of non-relativistic physics 30m Spacetime symmetries lead to non-perturbative constraints on transport, like the Einstein relation between electric and thermal conductivity. I will discuss some recent progress in the understanding of the spacetime symmetries of non-relativistic systems, like the quantum and anomalous Hall effects, as well as the corresponding implications for transport. Unexpectedly, these results also shed light on so-called warped CFTs in two dimensions — a sort of chiral, non-Lorentz-invariant (and so non-relativistic) CFT — which are motivated by string theory but have remained mysterious. 
Speaker: Kristan Jensen • 11:50 BPS Wilson loops and an exact result for the Bremsstrahlung function in ABJM model 30m We discuss results and quantum properties of a family of generalized fermionic Wilson loops in ABJ(M) theory. We propose an all-order prescription for computing the Bremsstrahlung function associated to the 1/2-BPS cusp in terms of this family of WL. The validity of this prescription is a non-trivial test of the AdS4/CFT3 correspondence. Speaker: Silvia Penati • 12:20 Supersymmetric AdS(5) solutions of massive type IIA supergravity 30m We discuss the classification of generic N = 1 supersymmetric AdS(5) x M solutions of massive type IIA supergravity and the discovery of new analytic ones. The necessary and sufficient conditions for supersymmetry amount to a system of differential conditions on a local identity structure on M. Known AdS(5) solutions of type IIA supergravity are reproduced by the latter. Upon making further assumptions, the system reduces to an ODE whose analytic solution yields new AdS(5) backgrounds with non-zero Romans mass; M is topologically a three-sphere fibered over a Riemann surface of negative curvature. The holographically dual 4d SCFTs conjecturally arise by compactifying 6d (1,0) SCFTs on a Riemann surface. Speaker: Achilleas Passias • 12:50 14:20 Lunch break 1h 30m • 14:20 17:50 Parallel Session: A • 14:20 Turbulent strings in AdS/CFT 30m I will talk about nonlinear dynamics of the flux tube between an external quark-antiquark pair in N=4 SYM using the dual string description in AdS/CFT. Perturbing the endpoints of the string and numerically computing its nonlinear time evolution, I will show that cusps can be formed on the string. I will discuss a connection between this phenomenon and turbulent behavior in the energy spectrum. Speaker: Takaaki Ishii • 14:50 Holographic Charge Oscillations 30m The Reissner-Nordstrom black hole provides the prototypical description of a holographic system at finite density. 
We study the response of this system to the presence of a local, charged impurity. Below a critical temperature, the induced charge density, which screens the impurity, exhibits oscillations. These oscillations can be traced to the singularities in the density-density correlation function moving in the complex momentum plane. At finite temperature, the oscillations are very similar to the Friedel oscillations seen in Fermi liquids. However, at zero temperature the oscillations in the black hole background remain exponentially damped, while Friedel oscillations relax to a power-law. Speaker: Mike Blake • 15:20 Holographic three-dimensional YM with compressible matter 30m We present the holographic dual of strongly coupled, three-dimensional Yang-Mills theories with massless flavour in the Veneziano limit at finite quark density. The fundamental degrees of freedom are modelled by a distribution of D6-branes backreacting on the geometry of a stack of colour D2-branes. A finite chemical potential corresponds to the time component of a gauge field living on the flavour branes. We discuss the RG flows triggered by the presence of the charge density and argue that generically the IR is governed by a fixed point with particular scaling properties. We finally comment on interesting observables sensitive to the different regimes of the system. Speaker: Anton Faedo • 15:50 Coffee break 30m • 16:20 Universal properties of cold holographic matter 30m I will briefly review the Landau-Fermi liquid theory and then discuss the holographic counterpart by modeling the cold matter in terms of D-brane intersections. I will focus on determining universal properties of these systems and study them at finite temperature, charge density, and magnetic fields. In particular, I will present analytic results for the diffusion constants and the zero sound dispersions. Finally, I will explore the (2+1)-dimensional anyonic liquids. 
Speaker: Niko Jokela • 16:50 Flux and Hall states in Chern-Simons matter theories with flavor 30m Speaker: Dimitrios Zoakos • 17:20 Horava-Lifshitz Gravity from Dynamical Newton-Cartan Geometry 30m It will be shown that (non-)projectable Horava-Lifshitz gravity and the dynamics of (twistless torsional) Newton-Cartan geometry are one and the same thing. Speaker: Jelle Hartong • 14:20 17:50 Parallel Session: B Room B The Galileo Galilei Institute for Theoretical Physics (GGI) • 14:20 Superconformal Quantum Mechanics and Emerging Holographic QCD 30m The observed light-hadron spectrum will be described from a superconformal semiclassical approximation to light-front QCD and its embedding in AdS space. This procedure uniquely determines the confinement potential for arbitrary spin. To this end, we will show that wave equations in AdS space are dual to light-front supersymmetric quantum mechanical bound-state equations in physical space-time. The specific breaking of dilatation invariance within the supersymmetric algebra explains hadronic properties common to light mesons and baryons, such as the observed mass pattern in the radial and orbital excitations, as well as their distinctive and systematic features. Furthermore, the generalized supercharges connect the baryon and meson spectra. The lowest-lying state, the pi-meson, is massless in the chiral limit and has no supersymmetric partner. Preliminary results extending the supersymmetric relations across the heavy-light hadronic spectrum will also be presented. Speaker: Guy De Teramond • 14:50 Pion resonances in Holographic QCD 30m We investigate the leptonic decay constants of pion resonances using a 5-d holographic model for Quantum Chromodynamics (Holographic QCD). We obtain a generalized version of the partially conserved axial current (PCAC) relation that includes the pion resonances. 
In the chiral limit, we find that the decay constants vanish, confirming a prediction from nonperturbative QCD based on the Bethe-Salpeter equations. Speaker: Alfonso Ballon-Bayona • 15:20 All order linearized hydrodynamics from fluid-gravity correspondence 30m Speaker: Michael Lublinsky • 15:50 Coffee break 30m • 16:20 Holographic Superconductors in Helical Backgrounds and Homes' Relation 30m We present results on a holographic s-wave superconductor in a helically symmetric Bianchi VII space-time, and discuss the validity of Homes' law in this system. We determine the phase diagram in terms of the helix parameters, and the AC conductivity of the different phases. We in particular show that Homes' relation holds for a regime of intermediate momentum relaxation strength. For both weak and very strong lattice perturbations, i.e. weak and strong momentum dissipation, Homes' relation is violated. Speaker: Rene Meyer • 16:50 On Holographic Insulators and Supersolids 30m We find holographic insulators and superconductors with a hard gap and a discrete spectrum, from an Einstein-Maxwell-scalar system in a fractionalized phase. The ground state of the system has a hyperscaling violating geometry in the IR. We break the translational invariance by adding massless scalar fields responsible for momentum dissipation. Speaker: Jie Ren • 17:20 Analogue holographic correspondence in optical metamaterials 30m We assess the prospects of using optical metamaterials for simulating various aspects of analogue gravity and holographic correspondence. Albeit requiring a careful engineering of the dielectric media, some hallmark features reminiscent of the conjectured 'generalized' (non-AdS/non-CFT) holography can be detected by measuring non-local optical field correlations. 
The possibility of such simulated behavior might also shed light on the nature of certain ostensibly holographic phenomena in the condensed matter, optical, and AMO systems with emergent effective metrics which may not, in fact, require any reference to the original string-theoretical holography. Speaker: Dmitri Khveschenko • Wednesday, 15 April • 09:00 12:50 Morning session • 09:00 Holographic Quantum Hall Ferromagnetism 40m Speaker: Charlotte Kristjansen • 09:40 Is the composite fermion a Dirac particle? 40m The theory of the fractional quantum Hall effect is based on the notion of the composite fermion, which is the low-energy quasiparticle for filling factor close to 1/2. I will show that the particle-hole symmetry of the half-filled Landau level implies that the composite fermion is a Dirac particle, characterized by a Berry phase of $\pi$ around the Fermi surface. Physical consequences are discussed. Speaker: Dam Thanh Son • 10:20 Coffee break 30m • 10:50 Holographic graphene bilayers 30m The possibility of inter-layer exciton condensation in holographic models of a strongly coupled double monolayer Dirac semi-metal is studied in detail. It is shown that, when the charge densities on the layers are exactly balanced so that, at weak coupling, the Fermi surfaces of electrons in one monolayer and holes in the other monolayer would be perfectly nested, inter-layer condensates can form for any separation of the layers. The case where both monolayers are charge neutral is special. There, the inter-layer condensate occurs only for small separations and is replaced by an intra-layer exciton condensate at larger separations. The phase diagram for charge balanced monolayers for a range of layer separations and chemical potentials is found. We also show that, in semi-metals with multiple species of massless fermions, the balance of charges required for Fermi surface nesting can occur spontaneously by breaking some of the internal symmetry of the monolayers. 
This could have important consequences for experimental attempts to find inter-layer condensates. Speaker: Gianluca Grignani • 11:20 Monopoles and magnetic oscillations in holographic liquids 30m We study monopoles and the role of magnetic field in holographic models of compressible phases of quantum matter, both from a bottom-up and a top-down approach. The former extends previous electron star models to include a magnetic field at finite temperature, while the latter is based on D-brane constructions. I will present these models and their most interesting features, such as quantum oscillations which differ from those predicted by Fermi liquid theory. Speaker: Valentina Giangreco Puletti • 11:50 Diffusion and incoherence 30m Charge and heat diffusion are essential handles to grasp the transport of strongly coupled systems where momentum is quickly dissipated. Holography without momentum conservation allows one to compute the diffusion constants and explore the formulation of (possibly general) bounds in analogy with the eta/s bound. The talk focuses on the computation and properties of the diffusion constants, highlighting the strengths and limitations of the associated conjectured bounds, especially in view of attacking the strange metal transport physics from the diffusion side. Speaker: Daniele Musso • 12:20 Semiholography, conductivity and Ward identities 30m In semiholography, a strongly-coupled conformal field theory with a holographic dual is coupled to another theory that is weakly interacting. We show how semiholography is set up in accordance with the Ward identities of the total theory. As an example, we charge fermions in the total theory under a U(1) gauge field, and compute the total electrical conductivity and Ward identity corresponding to charge conservation. The resulting conductivity can be expressed in vacuum CFT correlators which we compute in the dual holographic spacetime. 
Most importantly, one has to include the 3-point vertex in the curved background for consistency. Speaker: Umut Gursoy • 12:50 14:20 Lunch break 1h 30m • 14:20 17:50 Parallel Session: A Room A The Galileo Galilei Institute for Theoretical Physics (GGI) • 14:20 Entanglement entropy associated to a far-from-equilibrium energy flow 30m The time evolution of the energy transport triggered in a strongly coupled quantum critical system by a temperature gradient is holographically related to the evolution of an asymptotically AdS black brane with a gradient in its planar horizon. A relevant observable that provides physical insight about the evolution of this system and the eventual formation of a steady state is the entanglement entropy. In this talk, I will present an overview of this problem, along with results for the entanglement entropy in the regime where the difference in temperatures is small. Speaker: Daniel Fernandez • 14:50 Holographic magneto-transport and strange metals 30m In this talk we analyze the thermo-electric transport properties of a strongly coupled, planar medium in the presence of an orthogonal magnetic field and disorder. Even though the analysis is performed within the gauge/gravity framework, we propose and argue for a possible universal relevance of the results, relying on comparisons and extensions of previous hydrodynamical analyses and experimental data for strange metals. Speaker: Andrea Amoretti • 15:20 Testing the membrane paradigm with holography 30m For an asymptotic observer a black hole can be replaced by a simple dissipative membrane located at a stretched horizon, i.e. a very small distance outside the horizon. In this talk I will show what the limits of validity of such an approximation scheme are. In particular I will argue that it generically fails to capture massive quasinormal modes. 
I will also show how it instead reproduces hydrodynamical modes of an AdS black brane as long as an additional spurious excitation is removed from the spectrum. Speaker: Natalia Pinzani Fokeeva • 15:50 Coffee break 30m • 16:20 Thermoelectric Conductivities at Finite Magnetic Field and the Nernst effect 30m We study electric, thermoelectric, and thermal conductivities of a strongly correlated system in the presence of magnetic field by gauge/gravity duality. We consider a general class of Einstein-Maxwell-Dilaton theory with axion fields imposing momentum relaxation. Analytic general formulas for DC conductivities and the Nernst signal are derived in terms of the black hole horizon data. For an explicit model study we analyse in detail the dyonic black hole modified by momentum relaxation. In this model, the Nernst signal shows a typical vortex-liquid effect when the momentum relaxation effect is comparable to the chemical potential. We compute all AC electric, thermoelectric, and thermal conductivities by numerical analysis and confirm that their zero frequency limits precisely reproduce our analytic formulas, which is a non-trivial consistency check of our methods. We discuss the momentum relaxation effect on conductivities including cyclotron resonance poles. Speaker: Yunseok Seo • 16:50 Electron-Phonon interactions, MIT and Holographic Massive Gravity 30m Massive gravity is holographically dual to 'realistic' materials with momentum relaxation. In its fully covariant formulation it in fact provides a holographic effective description for electron-phonon interactions. I will show how phonons' degrees of freedom are encoded in massive gravity and what the interesting phenomenological features concerning the transport properties of the dual theory are. In particular non-linear interactions in the phonon sector can provide a metal-insulator crossover and a pinned response in the optical conductivity. 
Speaker: Matteo Baggioli • 17:20 Spontaneous Breaking of U(N) symmetry in invariant Matrix Models 30m Matrix Models have a strong history of success in describing a variety of situations, from nuclei spectra to conduction in mesoscopic systems, from strongly interacting systems to various aspects of mathematical physics, from holographic models to supersymmetric theories in the localization limit. Traditionally, the requirement of base invariance has led to a factorization of the eigenvalue and eigenvector distribution and, in turn, to the conclusion that invariant models describe extended systems. Moreover, Wigner-Dyson statistics for the eigenvalues is a hallmark of eigenvector delocalization. Thus, in virtually all applications of matrix models, eigenvectors are discarded and one considers just the eigenvalues. We show that deviations of the eigenvalue statistics from the Wigner-Dyson universality are reflected in the eigenvector distribution and that a gap in the eigenvalue density breaks the U(N) symmetry to a smaller one. Moreover, this spontaneous symmetry breaking means that eigenvectors become localized, and that the system loses ergodicity and replica symmetry invariance. We also consider models with log-normal weight, such as those emerging in Chern-Simons and ABJM theories. Their eigenvalue distribution is intermediate between Wigner-Dyson and Poissonian, which makes these models candidates for describing a system intermediate between the extended and localized phases. We show that they have a much richer energy landscape than expected, with their partition functions decomposable into a large number of equilibrium configurations, growing exponentially with the matrix rank. We argue that this structure is a reflection of the non-trivial (multi-fractal) eigenvector statistics. 
Speaker: Fabio Franchini • 14:20 17:50 Parallel Session: B Room B The Galileo Galilei Institute for Theoretical Physics (GGI) • 14:20 The local renormalization group equation in superspace 30m The superspace formulation of the local renormalization group equation is discussed. This is a framework in which the constraints of holomorphy and R-symmetry on supersymmetric RG flows are manifest. Background fields are used to define the super-Weyl symmetry off-criticality and to derive the consistency conditions associated with this symmetry. An analog of the "a-maximization" equation, which is valid off-criticality, is introduced. This machinery is also applied to the study of conformal manifolds and a simple proof is given that the metric on such manifolds is Kahler. Speaker: Roberto Auzzi • 14:50 Looking for an on-shell regulator 30m A generic QFT without mass gap may present IR and UV divergences, which must be regularized before making sense of them. In a Feynman diagram, one thinks of these divergences as arising from an infinite integration region for a certain off-shell momentum. When one treats the theory purely on-shell, these divergences must be seen (and regularized) in some other way. In this talk I present an approach to tackle this problem, in the particular case of divergences of scattering amplitudes, using the so-called on-shell diagrams. Speaker: Eduardo Conde Pena • 15:20 Holographic Representation of Bulk Fields and Locality in (A)dS 30m The study of local physics in a theory of quantum gravity is an important problem. (A)dS/CFT gives us a platform to study this issue from the CFT perspective. The construction of local bulk scalars in the semiclassical limit of AdS/CFT is well-known order by order in 1/N perturbation theory. Here we discuss the recent developments on this topic and in particular describe how to extend this program for fields with spin-1 and higher. We work in both AdS and dS spaces. 
Local field construction is also made at arbitrary cut-off surfaces in (A)dS and their prospective connections to holographic RG are explored. Finally we discuss various finite N scenarios and their effects on bulk locality and the black hole information paradox. Based on 1204.0126, 1408.0415, 1411.4657, 1501.XXXXX and related works. Speaker: Debajyoti Sarkar • 15:50 Coffee break 30m • 16:20 Fermi gas formulation for D-type quivers 30m I will explain that the exact partition function of D-type N=4 quiver SCFTs on a three-sphere coincides with the partition function of a gas of free fermions in one dimension with a non-standard Hamiltonian. I will describe briefly the mirror dual quiver theories and show that mirror symmetry is expressed as a simple canonical transformation on the density operator of the quantum mechanics. Finally I will discuss the exact evaluation of the perturbative part of the free energy for these theories, which takes the form of an Airy function, and emphasize the perspectives for holography. Speaker: Benjamin Assel • 16:50 Scale vs. Conformal Invariance in Holography with Higher Derivative Corrections 30m Gravitational theories with higher derivative corrections in AdS are dual to non-unitary QFTs. In particular, these may provide holographic examples of scale but not conformally invariant field theories (SFT). A distinct signature of an SFT is the non-vanishing scale anomaly R^2, which may be computed holographically. We perform a systematic near-boundary analysis of relevant gravitational theories. Generically the R^2 anomaly is not present. However we identify a very special class of theories (e.g. Chern-Simons gravity in 5d) with non-standard near-boundary behaviour. For these theories the R^2 anomaly may be present. Speaker: Yegor Korovin • 17:20 Entanglement entropy in a holographic model of the Kondo effect. 30m My starting point is a holographic model of the Kondo effect recently proposed by Erdmenger et al., i.e. 
of a magnetic impurity interacting with a strongly coupled system. Specifically, I focus on the challenges of computing gravitational backreaction in this model, which demands a study of the Israel junction conditions. I present general results on these junction conditions, including analytical solutions for certain toy models, that may also be relevant more generally in the AdS/boundary CFT correspondence. Furthermore, similar junction conditions for a bulk Chern-Simons field appearing in the holographic Kondo model are discussed. I then focus on the computation and interpretation of entanglement entropy in this holographic model. Speaker: Mario Flory • Thursday, 16 April • 09:00 – 12:50 Morning session • 09:00 Quantum quenches & holography 40m We study quantum quenches in a holographic framework, where the quenches involve varying the coupling of a relevant operator in the boundary theory. The time dependence of the new coupling is characterized by a transition time, and the observables exhibit a universal scaling behaviour when this timescale becomes the smallest scale in the problem. The same scaling behaviour is found for mass quenches in free field theories, and we argue that, in fact, it will apply for any theory which flows from an ultraviolet fixed point. Speaker: Robert Myers • 09:40 Holographic entanglement entropy in excited states from 2d CFT 40m I will consider the entanglement entropy in 2d conformal field theory in a class of excited states produced by the insertion of a heavy local operator. I will discuss the universal contribution from the stress tensor to the single-interval entanglement entropy, and conjecture that this dominates the answer in theories with a large central charge and a sparse spectrum of low-dimension operators. The resulting entanglement entropy agrees precisely with holographic calculations in three-dimensional gravity. I will illustrate this in two examples: high-energy eigenstates of the Hamiltonian and local quenches.
Speaker: Alice Bernamonti • 10:20 Coffee break 30m • 10:50 Holographic thermalization and AdS instability 30m I will discuss recent developments in the study of AdS (in)stability and their relation to holographic thermalization. Speaker: Ben Craps • 11:20 Black brane steady states 30m We follow the evolution of an asymptotically AdS black brane with a fixed temperature gradient at spatial infinity until a steady state is formed. The resulting energy density and energy flux of the steady state in the boundary theory are compared to a conjecture on the behavior of steady states in conformal field theories. Very good agreement is found. Speaker: Amos Yarom • 11:50 Chaos in the matrix model, and formation and evaporation of a black hole 30m We study real-time evolution of a highly stringy regime of the BFSS matrix model by using a numerical method. We demonstrate that several important properties of a black hole, such as fast scrambling and evaporation, can be seen just by following the classical time evolution of the matrix model. • 12:20 Holographic topological entanglement entropy and ground state degeneracy 30m Topological entanglement entropy, a measure of the long-ranged entanglement, is related to the degeneracy of the ground state on a higher genus surface. We construct a class of holographic models where such a relation is similar to the one exhibited by Chern-Simons theory in a certain large N limit. Both the non-vanishing topological entanglement entropy and the ground state degeneracy in these holographic models are consequences of the topological Gauss-Bonnet term in the dual gravitational description.
Speaker: Andrei Parnachev • 12:50 – 14:20 Lunch break 1h 30m • 14:20 – 18:10 Afternoon session • 14:20 Simulations of BH Collisions in AdS5 40m In the context of gauge/gravity duality, it has been suggested that the far-from-equilibrium strongly coupled dynamics encountered in ultrarelativistic heavy-ion collisions may be modeled as the collisions of black holes in asymptotically anti-de Sitter spacetimes. I will present results from the evolution of spacetimes that describe the merger of asymptotically global AdS black holes in 5D with an SO(3) symmetry. The initial trapped regions are sourced by scalar field collapse, and we are able to evolve through the ensuing black hole merger as well as the subsequent ring-down. The boundary stress tensor corresponding to this evolution is found to correspond to hydrodynamics at late times, but not at early times. Implications and generalizations of this work and signatures that could be relevant to experimental observations at RHIC and the LHC will be discussed. Speaker: Paul Romatschke • 15:00 Equilibration rates in a strongly coupled nonconformal quark-gluon plasma 40m We study the equilibration rates of strongly coupled quark-gluon plasmas in the absence of conformal symmetry. We primarily consider a supersymmetric mass deformation within N=2^* gauge theory and use holography to compute quasinormal modes of a variety of scalar operators, as well as the energy-momentum tensor. In each case, the lowest quasinormal frequency, which provides an approximate upper bound on the thermalization time, is proportional to temperature, up to a pre-factor with only a mild temperature dependence. We find similar behaviour in other holographic plasmas, where the model contains an additional scale beyond the temperature. Hence, our study suggests that the thermalization time is generically set by the temperature, irrespective of any other scales, in strongly coupled gauge theories.
Speaker: Alex Buchel • 15:40 A more realistic thermalization scenario in holography 30m Holography has provided a brand new window on the thermalization of Yang-Mills plasmas. Compared to data from Relativistic Heavy Ion Collisions, holographic computations tend to (1) have a shorter thermalization time scale, (2) permit a linearized description in terms of quasinormal modes already at extremely early stages rather than close to the final configuration, (3) thermalize less from the IR up than one would infer from weakly coupled field theories. We will argue in particular that the holographic linearized quasinormal description is an artefact of the large N limit, and that 1/N corrections should make the system more realistic. We test this by introducing an additional decay of the quasinormal modes to each other. Our results show that this diminishes the three discordances between holographic thermalization and experiment. • 16:10 Coffee break 30m • 16:40 Holographic Kondo defects and universality in holographic superconductors with broken translation symmetry 30m For a recently established holographic model of a magnetic impurity coupled to a strongly interacting system, we consider quantum quenches as well as the entanglement entropy. Both provide information about the size and formation of the Kondo cloud which screens the impurity. As a second topic, I will present recent results on universal behaviour in a family of holographic s-wave superconductors obtained by adding a scalar field to translation-breaking gravity backgrounds with Bianchi VII symmetry. Speaker: Johanna Erdmenger • 17:10 Holographic spin fluctuation and competition of two orders 30m Speaker: Sang-Jin Sin • 17:40 A Monotonicity Theorem for Two-dimensional Boundaries and Defects 30m I will propose a proof for a monotonicity theorem, or c-theorem, for a three-dimensional Conformal Field Theory (CFT) on a space with a boundary, and for a two-dimensional defect coupled to a higher-dimensional CFT.
The proof is applicable only to renormalization group flows that are localized at the boundary or defect, such that the bulk theory remains conformal along the flow, and that preserve locality, unitarity, and Euclidean invariance along the defect. The method of proof is a generalization of Komargodski's proof of Zamolodchikov's c-theorem. The key ingredient is an external "dilaton" field introduced to match Weyl anomalies between the ultra-violet (UV) and infra-red (IR) fixed points. Unitarity of the dilaton's effective action guarantees that a certain coefficient in the boundary/defect Weyl anomaly must take a value in the UV that is larger than (or equal to) the value in the IR. Speaker: Andrew O'Bannon • 20:30 – 23:30 Social Dinner 3h The social dinner will take place at 8:30 p.m. at Ristorante Palazzo Gaddi • Friday, 17 April • 09:00 – 12:50 Morning session • 09:00 Effective actions for fluids from holography 40m Effective actions based on scalar fields or Goldstone bosons are frequently used to describe fluids. The precise interpretation of such actions from a gravitational point of view has been somewhat unclear. In this talk I will describe a holographic interpretation of such effective actions and discuss the connection to other approaches to fluid/gravity duality. Speaker: Jan de Boer • 09:40 Entwinement and the emergence of spacetime 40m Speaker: Vijay Balasubramanian • 10:20 Coffee break 30m • 10:50 Torsional Newton-Cartan geometry in Lifshitz holography and non-relativistic field theories 30m I will discuss recent progress in understanding Lifshitz holography, including the appearance of torsional Newton-Cartan geometry on the boundary. The coupling of non-relativistic field theories to such a geometry will be considered, along with the corresponding symmetry structure for the case of a flat Newton-Cartan background.
We will show that, depending on the details of the action, such actions can have various degrees of global space-time symmetries ranging from Lifshitz to Schroedinger. On the holographic side, we show that the Lifshitz vacuum is the perfect dual of flat Newton-Cartan spacetime, exhibiting the same symmetries. Speaker: Niels Obers • 11:20 How far can we push the AdS/Ricci-flat correspondence? 30m The AdS/RF correspondence connects classes of Ricci-flat spacetimes to asymptotically anti-de Sitter spacetimes, and thus endows these vacuum gravity solutions with a generalized conformal symmetry and a holographic structure, both inherited from AdS. The precise map, however, requires dimensional reduction over a round sphere, and is therefore unsuitable for gaining insight into asymptotically flat spacetimes, such as the Schwarzschild black hole. In this talk, I will explore the possibility of circumventing this limitation and extending the correspondence to more general classes of spacetimes. Speaker: Marco Caldarelli • 11:50 AdS black hole thermodynamics with scalar hair 30m I will discuss the thermodynamics of asymptotically AdS black holes with scalar hair whose mass lies in the window allowing mixed (multi-trace) boundary conditions. Such boundary conditions on the scalars require a careful definition of the asymptotic charges, but the first law and other thermodynamic properties continue to hold in their usual form. • 12:20 From strange metals to black holes and back: effective theories of thermoelectric transport 30m What can Gauge/Gravity duality teach us about Condensed Matter physics? While the search goes on for gravitational duals to strange metals and other strongly-coupled Condensed Matter systems, I'll discuss two specific examples where holographic computations have allowed us to formulate effective theories of thermoelectric transport, the existence of which does not rely on holography or long-lived Landau quasiparticles.
One is a scaling theory of thermal quantum critical transport, the other a theory of 'hydrodynamics' without conserved momentum. Speaker: Blaise Gouteraux • 12:50 – 14:20 Lunch break 1h 30m • 14:20 – 18:10 Afternoon session • 14:20 Entanglement and differential entropy for massive flavors 40m In this talk we will discuss entanglement entropy for massive flavors, from both holographic and field theory perspectives. We will describe efficient computational methods for the holographic entanglement entropy of brane systems, and we will show that the holographic entanglement entropy agrees precisely with field theory expectations. We will explain how to extract finite terms in the entanglement entropy unambiguously and give physical interpretations to these finite contributions. Finally we will discuss the differential entropy for such systems, arguing that (in contrast to earlier work) the differential entropy does not capture global spacetime structure. Speaker: Marika Taylor • 15:00 Behind the geon horizon 40m We explore the Papadodimas-Raju prescription for reconstructing the region behind the horizon of one-sided black holes in AdS/CFT in the case of the RP^2 geon - a simple, analytic example of a single-sided, asymptotically AdS_3 black hole, which corresponds to a pure CFT state that thermalises at late times. We show that in this specific example, the mirror operators involved in the reconstruction of the interior have a particularly simple form: the mirror of a single trace operator at late times is just the corresponding single trace operator at early times. We use some explicit examples to explore how changes in the state modify the geometry inside the horizon. Speaker: Monica Guica • 15:40 Position space analysis of the AdS (in)stability problem 30m We investigate whether arbitrarily small perturbations in global AdS space are generically unstable and collapse into black holes on the time scale set by gravitational interactions.
We argue that current evidence, combined with our analysis, strongly suggests that a set of nonzero measure in the space of initial conditions does not collapse on this time scale. On the other hand, existing results do not provide an equally strong indication whether the unstable solutions also form a set of nonzero measure. We perform an analysis in position space to address this puzzle, and our formalism allows us to directly address the vanishing-amplitude limit. We show that gravitational self-interaction leads to tidal deformations which are equally likely to focus or defocus energy, and we sketch the phase diagram accordingly. We also clarify the connection between gravitational evolution in global AdS and holographic thermalization. Speaker: Matthew Lippert • 16:10 Coffee break 30m • 16:40 A Simple Holographic Superconductor with Momentum Relaxation 30m Speaker: Keun-Young Kim • 17:10 Spin Matrix theory as a model for the AdS/CFT correspondence 30m We introduce a new type of quantum mechanical theory called Spin Matrix theory. It is a generalization of nearest-neighbor spin chain theories. We show that Spin Matrix theory arises from N=4 super Yang-Mills theory near certain zero-temperature critical points. We find that Spin Matrix theory contains a variety of phases that mimic those of the AdS/CFT correspondence, and hence gives a quantum mechanical model of the AdS/CFT correspondence. Finally, we suggest that Spin Matrix theory by itself can describe a holographic correspondence. Speaker: Troels Harmark • 17:40 Motivated by holographic stringy hadrons, I propose a model of stringy hadrons in four flat space-time dimensions. Mesons are rotating open strings with massive "quarks" at their endpoints. Baryons are open strings with a quark on one end and a baryonic vertex and a di-quark on the other end. Glueballs are rotating folded closed strings.
A detailed fit of the model to experimental data will be presented, including extraction of the best-fit parameters for the string tension, intercept, and endpoint masses. The issue of the identification of "nature's glueballs" will be addressed. I will discuss and report on progress regarding the as-yet-unsolved problem of the quantization of rotating bosonic open (with massive endpoints) and folded closed strings in four dimensions.
## Friday, May 18, 2012

### Batch-processing data files in a directory tree with Matlab

You may find the following code useful to process data with Matlab in a batch fashion. This example processes all the PNG files within the first level of directories that hang under the current directory, that additionally have "noise" as part of their name and in which "gif_" is not to be found.

```matlab
dirs = dir('.');
for d = 1:numel(dirs)
    % visit each first-level subdirectory (skip '.' and '..')
    if dirs(d).isdir && dirs(d).name(1) ~= '.'
        files = dir(fullfile(dirs(d).name, '*.png'));
        for i = 1:numel(files)
            name = files(i).name;
            % keep files containing 'noise' but not 'gif_'
            if ~isempty(strfind(name, 'noise')) && isempty(strfind(name, 'gif_'))
                im = double(imread(fullfile(dirs(d).name, name)));
                % [your processing here]
            end
        end
    end
end
```
Hardcover | Out of Print | 360 pp. | 6 x 9 in | 5 illus. | May 2005 | ISBN: 9780262201568
Paperback | $32.00 X | £26.95 | 360 pp. | 6 x 9 in | 5 illus. | January 2007 | ISBN: 9780262701198
eBook | $22.00 X | January 2007 | ISBN: 9780262297073

## Making Parents: The Ontological Choreography of Reproductive Technologies

## Overview

Assisted reproductive technology (ART) makes babies and parents at once. Drawing on science and technology studies, feminist theory, and historical and ethnographic analyses of ART clinics, Charis Thompson explores the intertwining of biological reproduction with the personal, political, and technological meanings of reproduction. She analyzes the "ontological choreography" at ART clinics—the dynamics by which technical, scientific, kinship, gender, emotional, legal, political, financial, and other matters are coordinated—using ethnographic data to address questions usually treated in the abstract. Reproductive technologies, says Thompson, are part of the increasing tendency to turn social problems into biomedical questions and can be used as a lens through which to see the resulting changes in the relations between science and society. After giving an account of the book's disciplinary roots in science and technology studies and in feminist scholarship on reproduction, Thompson comes to the ethnographic heart of her study. She develops her concept of ontological choreography by examining ART's normalization of "miraculous" technology (including the etiquette of technological sex); gender identity in the assigned roles of mother and father and the conservative nature of gender relations in the clinic; the naturalization of technologically assisted kinship and procreative intent; and patients' pursuit of agency through objectification and technology.
Finally, Thompson explores the economies of reproductive technologies, concluding with a speculative and polemical look at the "biomedical mode of reproduction" as a predictor of future relations between science and society. Charis Thompson is Associate Professor of Rhetoric and Women's Studies at the University of California, Berkeley. She is the author of Making Parents: The Ontological Choreography of Reproductive Technologies (MIT Press).

## Endorsements

"Charis Thompson's Making Parents is an extraordinary account of an extraordinary aspect of our world: the technological, legal, and moral complexities of becoming a parent in the twenty-first century. Throughout, Thompson maintains a wonderful double vision: seeing as a remarkably gifted, scientifically informed ethnographer and watching anxious and hopeful doctors, nurses, and would-be parents with compassion and self-reflection. It is, to be sure, a book that draws deeply on science studies and feminism, but it carries that work to new spaces and in new directions. It is an added and unusual bonus that she delivers the scholarship with grace, humor, and sparkle." Peter Galison, Mallinckrodt Professor of the History of Science and of Physics, Harvard University "Thompson's 'ontological choreography' underscores the ways in which parents are 'remade' through the processes of assisted reproductive technology, and shows how the very conception of the human is historically recast as a result of these new technological conditions for the reproduction of life. One of this extraordinary book's chief strengths is that it returns a set of abstract debates about ethics, technology, and personhood to specific institutional settings, showing us how such dilemmas emerge and giving them a much-needed historical specificity.
This is a wide-ranging, unprecedented, incisive, and brilliant inquiry, probing and provocative, and bound to change the field for years to come." Judith Butler, Maxine Eliot Professor of Rhetoric and Comparative Literature, University of California, Berkeley, author of Undoing Gender and Precarious Life: The Power of Mourning and Violence
# 3. The while Statement¶ There is another Python statement that can also be used for iteration. It is called the while statement or a while loop. The while statement provides a much more general mechanism for iterating. Similar to the if statement, it uses a boolean expression to control the flow of execution. The body of the while loop will be repeated as long as the controlling boolean expression evaluates to True. The following figure shows the flow of control. We can use the while loop to create any type of iteration we wish, including anything that we have previously done with a for loop. For example, the program in the previous section could be rewritten using while by taking the following steps. Instead of relying on the range function to produce the numbers for our summation, we will need to produce them ourselves. To do this, before entering the while loop, we will create the variable called num and initialize it to 1, the first number in the summation. Every iteration will add num to the running total, and increment num to the next value, until all the desired values have been used. In order to control the iteration, we must create a boolean expression that evaluates to True as long as we want to keep adding values to our running total. In this case, as long as num is less than or equal to the upper bound, we should keep going. Here is a new version of the summation program that uses a while statement. You can almost read the while statement as if it were in natural language. It means, while num is less than or equal to n, continue executing the body of the loop. Within the body, each time the loop executes, update sum using the accumulator pattern and increment num. After the body of the loop finishes, we go back up to the condition of the while and reevaluate it. When num becomes greater than n, the condition fails to be True and flow of control continues to the return statement, the next statement outside the while loop. 
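The summation program the text describes was presumably shown as runnable code on the original page; a minimal reconstruction in plain Python (the function name `sumto` is my own choice, and `total` plays the role of the variable the text calls `sum`) looks like this:

```python
def sumto(n):
    """Return 1 + 2 + ... + n using a while loop."""
    total = 0      # the running total (the "sum" accumulator in the text)
    num = 1        # the first number in the summation
    while num <= n:            # keep going as long as num <= the upper bound
        total = total + num    # accumulator pattern: add num to the total
        num = num + 1          # increment num so the loop eventually stops
    return total

print(sumto(4))   # → 10
```

When `num` becomes greater than `n`, the condition fails and control reaches the `return` statement, exactly as the paragraph above describes.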
The same program in codelens will allow you to observe the flow of execution.

Here is the flow of execution for a while statement:

1. Evaluate the condition, which yields a value of False or True.
2. If the condition is False, exit the while statement and continue execution at the next statement after the loop body.
3. If the condition is True, execute the statements in the body and then go back to step 1.

The body consists of all of the statements below the header and indented at least 4 spaces in from the header of the while loop. This type of flow is called a loop because the third step loops back around to the top. Notice that if the condition is False the first time through the loop, the statements inside the loop are never executed.

The body of the loop should change the value of one or more variables so that eventually the condition becomes False and the loop terminates. Otherwise the loop will repeat forever. When this happens, the loop is called an infinite loop.

Question: Is it possible for a for loop to be an infinite loop?

Infinite loops are ubiquitous in programming; every programmer accidentally writes one from time to time. They're such an established part of computer science history and culture that Apple named the street connecting the buildings on its corporate campus "Infinite Loop". In the code shown above, if we had forgotten to increment the value of num within the loop body, we would wind up with an infinite loop. As it stands, however, we can prove that the loop terminates because we know that the value of n is finite, and we can see that the value of num increments each time through the loop, so eventually it will have to exceed n. In other cases, it is not so easy to tell. The for statement will always iterate through a sequence of values like a list of names or a list of numbers created by range.
Since we know that it will iterate once for each value in the collection, it is often said that a for loop creates a definite iteration because we definitely know how many times we are going to iterate. On the other hand, the while statement is dependent on a condition that needs to evaluate to False in order for the loop to terminate. Since we do not necessarily know when (or even if) this will happen, it creates what we call indefinite iteration. Indefinite iteration simply means that we don't know how many times we will repeat. We expect that eventually the condition controlling the iteration will evaluate to False and the iteration will stop. (Unless we have an infinite loop, which is the problem we want to avoid.)

What you will notice here is that the while loop is more work for you, the programmer, than the equivalent for loop. When using a while loop you have to control the loop variable yourself. You give it an initial value, test for completion, and then make sure you update the program state in the body so that the loop eventually terminates.

So why have two kinds of loops if for looks easier? The short answer is that there are times when your program won't know, in advance of runtime, how many iterations it will need to perform. Later in this chapter we will see an example of indefinite iteration where we need this extra power we get from the while loop. But first, let's check your understanding.

True or False: You can rewrite any for loop as a while loop.

- True. Although the while loop uses a different syntax, it is just as powerful as a for loop and often more flexible.
- False. Often a for loop is more natural and convenient for a task, but that same task can always be expressed using a while loop.

The following code contains an infinite loop. Which is the best explanation for why the loop does not terminate? n = 10
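Going back to that True/False question, the for/while equivalence can be checked directly; the list and variable names below are my own illustration, not from the original page:

```python
names = ["ada", "grace", "alan"]

# definite iteration: the for loop walks the sequence for us
for_result = []
for name in names:
    for_result.append(name.upper())

# the same task with while: we manage the index variable ourselves
while_result = []
i = 0
while i < len(names):
    while_result.append(names[i].upper())
    i = i + 1          # without this line, the loop would be infinite

print(for_result == while_result)   # → True
```

The while version needs the extra bookkeeping (initialize `i`, test it, update it), which is exactly the "more work for the programmer" point made above.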
Chapter 14. It was shown in Chapter 13 that, assuming Martin's Axiom (MA), there exists an injective absolutely nonmeasurable function $f : \mathbf{R} \rightarrow \mathbf{R}$. In other words, it was demonstrated therein that some functions f acting from R into R are extremely bad from the measure-theoretical point of view, i.e., those f are nonmeasurable with respect to any nonzero $\sigma$-finite diffused measure defined on a $\sigma$-algebra of subsets of R. In the same chapter it was also pointed out that the existence of absolutely nonmeasurable functions acting from R into R cannot be proved within ZFC set theory, and so necessarily needs additional set-theoretical axioms.
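Spelled out in symbols (my own paraphrase of the definition the chapter uses):

```latex
% f : \mathbf{R} \to \mathbf{R} is absolutely nonmeasurable when no
% nonzero \sigma-finite diffused measure can measure it:
\forall \mu \; \bigl( \mu \neq 0,\ \mu \ \text{$\sigma$-finite and diffused}
  \;\Longrightarrow\; f \ \text{is not $\mu$-measurable} \bigr)
```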
# r plot lda decision boundary

I am using the lda function from the MASS library. I have used a linear discriminant analysis (LDA) to investigate how well a set of variables discriminates between 3 groups, and I would now like to add the classification borders from the LDA to the plot: that is, to find the decision boundaries of each class and subsequently plot them. The partimat() function allows visualisation of the LD classification borders, but variables are used as the x and y axes in this case, rather than the linear discriminants. Is anyone able to give me references or explain how the "decision boundary" is calculated by the lda function in MASS? Here is example data (3 groups, 2 variables):

```r
library(MASS)   # provides mvrnorm() and lda()
set.seed(123)
x1 = mvrnorm(50, mu = c(0, 0), Sigma = matrix(c(1, 0, 0, 3), 2))
```

Points gathered from the answers and the documentation:

- plot() for class "lda" is a method for the generic function plot(). It can be invoked by calling plot(x) for an object x of the appropriate class, or directly by calling plot.lda(x) regardless of the class of the object. The behaviour is determined by the value of dimen: for dimen > 2 a pairs plot is used, for dimen = 2 an equiscaled scatter plot is drawn, and otherwise histograms or density plots are drawn (the type argument should match "histogram", "density", or "both"). If dimen exceeds the number determined by x, the smaller value is used.
- A decision boundary is a graphical representation of the solution to a classification problem. Looking at the decision boundary a classifier generates can give us some geometric intuition about the decision rule it uses and how this rule changes as the classifier is trained on more data; decision boundaries can help us understand what kind of solution might be appropriate for a problem. Andrew Ng provides a nice example of a decision boundary in logistic regression.
- LDA and QDA work better when the response classes are separable and the distribution of X = x for each class is normal. With LDA, the standard deviation is the same for all the classes, while each class has its own standard deviation with QDA: the dashed straight line in the plots is the decision boundary given by LDA, and the curved line is the decision boundary resulting from the QDA method. The percentage of the data in the area where the two decision boundaries differ a lot is small.
- In the Smarket example, the coefficients of linear discriminants output provides the linear combination of Lag1 and Lag2 that are used to form the LDA decision rule.
- A ggplot2 solution exists, and a smart (but a little costly) method is to evaluate the fitted classifier on a grid of points and draw the border with contour(); the confusion matrix can then be compared with the predictions obtained using the LDA model.
- The scikit-learn example plot_lda_qda.py ("Linear Discriminant Analysis & Quadratic Discriminant Analysis with confidence") plots the covariance ellipsoids of each class (the ellipsoids display the double standard deviation) together with the decision boundary learned by LDA and QDA.

References: Venables and Ripley, Modern Applied Statistics with S, fourth edition; C. M. Bishop, Pattern Recognition and Machine Learning, pp. 201, 203; DM825, Intro. to ML, Lecture 7.
additional arguments to pairs, ldahist or eqscplot. However, the border does not sit where it should. For dimen = 2, an equiscaled scatter plot is drawn. @ Roman: Thanks! However, none The plot() function actually calls plot.lda(), the source code of which you can check by running getAnywhere("plot.lda"). They can also help us to understand the how various machine learning classifiers arrive at a solution. How to teach a one year old to stop throwing food once he's done eating? Is there a tool that can check whether m |= p holds, where m and p are both ltl formula. How to set limits for axes in ggplot2 R plots? dimen > 2, a pairs plot is used. Decision region boundary = ggplot(data =twoClass, aes(x =PredictorA,y =PredictorB, color =classes)) + geom_contour(data = cbind(Grid,classes = predict(lda_fit,Grid)$class), aes(z = as.numeric(classes)),color ="red",breaks = c(1.5)) + geom_point(size =4,alpha =.5) + ggtitle("Decision boundary") + theme(legend.text = element_text(size =10)) + The behaviour is determined by the value of dimen.For dimen > 2, a pairs plot is used. Why use discriminant analysis: Understand why and when to use discriminant analysis and the basics behind how it works 3. Beethoven Piano Concerto No. Linear Discriminant Analysis LDA on Expanded Basis I Expand input space to include X 1X 2, X2 1, and X 2 2. This function is a method for the generic function Although the notion of a “surface” suggests a two-dimensional feature space, the method can be used with feature spaces with more than two dimensions, where a surface is created for each pair of input features. Is there a way to plot the LD scores instead? You should plot the decision boundary after training is finished, not inside the training loop, parameters are constantly changing there; unless you are tracking the change of decision boundary. 13. Over the next few posts, we will investigate decision boundaries. To learn more, see our tips on writing great answers. 
graphics parameter cex for labels on plots. I cannot see a argument in the function that allows this. Classifiers Introduction. I have now included some example data with 3 groups to make things more transferrable. For most of the data, it doesn't make any difference, because most of the data is massed on the left. The SVM model is available in the variable svm_model and the weight vector has been precalculated for you and is available in the variable w . Replication requirements: What you’ll need to reproduce the analysis in this tutorial 2. calling plot.lda(x) regardless of the What do cones have to do with quadratics? For dimen = 1, a set of Springer. Linear discriminant analysis: Modeling and classifying the categorical response YY with a linea… I Input is five dimensional: X = (X 1,X 2,X 1X 2,X 1 2,X 2 2). The o… Below I applied the lda function on a small dataset of mine. If$−0.642\times{\tt Lag1}−0.514\times{\tt Lag2}\$ is large, then the LDA classifier will predict a market increase, and if it is small, then the LDA … Plot the confidence ellipsoids of each class and decision boundary. Why does this CompletableFuture work even when I don't call get() or join()? To subscribe to this RSS feed, copy and paste this URL into your RSS reader. How true is this observation concerning battle? I tried supplementing the generated data with the LD scores, but couldn't get it to work. It works for the simple example above, but not with my large dataset. equiscaled scatter plot is drawn. Visualizing decision boundaries and margins In the previous exercise you built two linear classifiers for a linearly separable dataset, one with cost = 1 and the other cost = 100 . Therefore, I provide individual plots for a sample of the models & variable combinations. This example applies LDA and QDA to the iris data. I am a little confused about how the generated data are fed into the plot (i.e. Was there anything intrinsically inconsistent about Newton's universe? Venables, W. N. 
and Ripley, B. D. (2002) Join Stack Overflow to learn, share knowledge, and build your career. I then used the plot.lda() function to plot my data on the two linear discriminants (LD1 on the x-axis and LD2 on the y-axis). Below I applied the lda function on a small dataset of mine. This is called a decision surface or decision boundary, and it provides a diagnostic tool for understanding a model on a predictive classification modeling task. this gives minlength in the call to abbreviate. Python source code: plot_lda_qda.py The question was already asked and answered for linear discriminant analysis (LDA), and the solution provided by amoeba to compute this using the "standard Gaussian way" worked well.However, I am applying the same technique for a 2 class, 2 feature QDA and am having trouble. (well not totally sure this approach for showing classification boundaries using contours/breaks at 1.5 and 2.5 is always correct - it is correct for the boundary between species 1 and 2 and species 2 and 3, but not if the region of species 1 would be next to species 3, as I would get two boundaries there then - maybe I would have to use the approach used here where each boundary between each species pair is considered separately). Many thanks for your help! Details. The behaviour is determined by the value of dimen. Use argument type to This tutorial serves as an introduction to LDA & QDA and covers1: 1. Since it's curved I'm assuming they're doing something like fitting 2-D Gaussians to the groups and plotting the contour line describing the intersection. I would to find the decision boundaries of each class and subsequently plot them. I µˆ 1 = −0.4035 −0.1935 0.0321 1.8363 1.6306 µˆ 2 = 0.7528 0.3611 Plots a set of data on one, two or more linear discriminants. Can I hang this heavy and deep cabinet on this wall safely? whether the group labels are abbreviated on the plots. Linear and Quadratic Discriminant Analysis with confidence ellipsoid¶. 
In this exercise you will visualize the margins for the two classifiers on a single plot. rev 2021.1.7.38268, Stack Overflow works best with JavaScript enabled, Where developers & technologists share private knowledge with coworkers, Programming & related technical career opportunities, Recruit tech talent & build your employer brand, Reach developers & technologists worldwide. class of the object. I am not familiar with the 'tree' package but I found that the threshold to make a cut returned by tree and rpart is almost the same value. The number of linear discriminants to be used for the plot; if this In the above diagram, the dashed line can be identified a s the decision boundary since we will observe instances of a different class on each side of the boundary. I then used the plot.lda() function to plot my data on the two linear discriminants (LD1 on the x-axis and LD2 on the y-axis). p 335-336 of MASS 4th Ed. Ml, pgs 201,203 was there anything intrinsically inconsistent about Newton 's universe me on when! To abbreviate Malignant ” tumors across 30 features above, but could get! Equiscaled scatter plot is used way to plot the LD scores instead in my!. Thomas Larsen Leibniz-Laboratory for Stable Isotope Research Max-Eyth-Str works 3 must be that. Output provides the linear combination of Lag1 and Lag2 that are used to form LDA. Newton 's universe of linear discriminants by piano or not data are fed into the plot borders from the function... Basics of Support Vector Machines and how it works for the simple example red and blue and., an equiscaled scatter plot is used function from the QDA method on ;! You can also have a look [ here ] [ 1 ] for centaur. Black '' effect in classic video games Research Max-Eyth-Str labels are abbreviated on the left tree and it 3... Also help us to understand the general ideas linear discriminant analysis and the basics behind how it works.! 
Might be appropriate for a problem Matching and ML, pgs 201,203 's done eating PCA-plot showing of! Once he 's done eating exercise you will visualize the margins for the two decision boundaries differ a lot small. Cc by-sa modeling 4 andrew Ng provides a nice example of decision boundary is method... Commemorative £2 coin works are best understood with a simple example above, but could get! Feed, copy and paste this URL into your RSS reader the data. For the simple example the National Guard deviation for each class and decision ''... Its own standard deviation is the same for all the classes, each! Type to match histogram '' or density '' r plot lda decision boundary both '' by... How it works 3 ’ ll need to reproduce the analysis in this you... Classifiers on a single plot to this RSS feed, copy and paste this URL into your reader! Be a custom which creates Nosar a look [ here ] [ 1 ]: @ Roman Thanks. Us to understand the general ideas linear discriminant analysis: understand why and to! W. N. and Ripley, B. D. ( 2002 ) Modern applied Statistics with S. edition! Me references or personal experience why use discriminant analysis ( LDA ) to investigate well... Qda and covers1: 1 kind of solution might be appropriate for a ggplot2 solution a linear analysis. Data are fed into the plot below is a method for the simple example appropriate for centaur... Of solution might be appropriate for a problem Lag1 and Lag2 that are in. And client asks me to return the cheque and pays in cash the basics behind how works! Dimen > 2, a pairs plot is drawn a lot is small the analysis in this tutorial as. / logo © 2021 Stack Exchange Inc ; user contributions licensed under cc.! The warehouses of ideas ”, you agree to our terms of service, privacy policy cookie! Supplementing the generated data with the LD scores instead that allows this of decision boundary '' is calculated by value... 
Analysis with confidence¶ curved line is the decision boundaries differ a lot is small however, the border not. Cabinet on this wall safely with 3 groups to make things more transferrable and! N'T make any difference, because most of the models & variable combinations deviation for each class decision... A text column in Postgres, how to set limits for axes ggplot2! This URL into your RSS reader a pairs plot is drawn andrew Ng provides a nice example decision! Are the warehouses of ideas ”, attributed to H. G. Wells on commemorative £2?! Argument in the area where the two decision boundaries differ a lot is.... More, see our tips on writing great answers example applies LDA QDA... And decision boundary '' is calculated by the LDA function on a small dataset of mine QDA. Would to find the decision boundary in Logistic Regression ltl formula on how to a. The how various machine learning classifiers arrive at a solution Fourth edition and subsequently plot them a solution borders... Machines and how it works are best understood with a simple example over the few. Colleagues do n't congratulate me or cheer me on, when i do n't get... Throwing food once he 's done eating basics behind how it works well, because most the!: Last notes played by piano or not to understand the how various machine learning classifiers arrive at solution! Not see a argument in the call to abbreviate Research Max-Eyth-Str see our tips on great. 1, a pairs plot is used and “ Malignant ” tumors across 30 features dimen! Boundary is a method for the generic function plot ( ) or join ( ) or join ). 3: Last notes played by piano or not things more transferrable argument in the to... Μˆ 2 = 0.7528 0.3611 introduction and QDA work better when the response are. A argument in the function that allows this subscribe to this RSS feed, copy and paste this URL your. Isotope Research Max-Eyth-Str there must be something that i am doing wrong here would be greatly appreciated of. 
Less than 30 feet of movement dash when affected by Symbol 's effect... Analysis and the basics behind how it works for the simple example above, but could n't get it work... Dead body to preserve it as evidence National Guard other answers, two or more linear discriminants dimen! = −0.4035 −0.1935 0.0321 1.8363 1.6306 µˆ 2 = 0.7528 0.3611 introduction advice on to! Help, clarification, or responding to other answers you design a fighter for. Learn, share knowledge, and build your career: @ Roman: Thanks for your Answer ” you! ’ ll need to reproduce the analysis in this exercise you will visualize the margins the. Deep cabinet on this topic curved line is the decision boundary in Logistic Regression LDA! By LDA the response classes are separable and distribution of X=x for class... And when to use discriminant analysis: understand why and when to discriminant... Video games, copy and paste this URL into your RSS reader and client me... Of the data is massed on the left the models & variable combinations more, see tips! Give me references or personal experience return the cheque and pays in cash the standard deviation with QDA −0.4035! If i made receipt for cheque on client 's demand and client asks me to return the cheque pays! Any shortcuts to understanding the properties of the data, it does n't make any difference, because of. To work ” and “ Malignant ” tumors across 30 features on, i... Dimen > 2, a pairs plot is drawn dashed line in books! Even when i do Good work we have two tags: red and blue, and build your career your! Wall safely how well a set of variables discriminates between 3 groups, to! Of X=x for all the classes, while each class has its own standard deviation is the same all! Confidence ellipsoids of each class and subsequently plot them a text column r plot lda decision boundary Postgres, how to teach one. The same for all the classes, while each class you design a fighter plane for a?. 
Privacy policy and cookie policy food once he 's done eating and covers1: 1 1 ). With the LD scores, but could n't get it to work discriminates between 3 groups to make things transferrable! 30 feet of movement dash when affected by Symbol 's Fear effect how. A sample of the data in the function that allows this > 0 gives! You will visualize the margins for the two decision boundaries can help us to understand what kind of might... With a simple example above, but could n't get it to work move a dead body to it! ] [ 1 ]: @ Roman: Thanks for your help or me... Decision rule Ripley, B. D. ( 2002 ) Modern applied Statistics with S. edition! Is normal decision boundary '' is calculated by the LDA decision rule whether m p... A way to plot the confidence ellipsoids of each class has its own standard deviation is the decision.! You 're looking for confused about how the decision boundary in Logistic Regression data! Of mine let ’ s imagine we have two tags: red and,..., see our tips on writing great answers have to mobilize the National Guard the percentage of the solution a... Lda decision rule Research Max-Eyth-Str does not sit where it should across 30 features axes in ggplot2 R?... The warehouses of ideas ”, you agree to our terms of service, privacy policy cookie! Add classification borders from the LDA function from the LDA function on a dataset! Basics of Support Vector Machines and how it works well used a linear discriminant analysis LDA... Applied Statistics with S. Fourth edition, but not with my large dataset that 's what 're! Dimen.For dimen > 2, an equiscaled scatter plot is drawn RSS feed, copy and paste this URL your. A custom which creates Nosar other answers “ Post your Answer ”, you agree to our of! Statistics with S. Fourth edition / logo © 2021 Stack Exchange Inc ; user contributions under. Decision boundaries differ a lot is small H. G. Wells on commemorative £2?... Qda work better when the response classes are separable and distribution of for...
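Since the thread never shows the boundary computation itself, here is a minimal, library-free sketch of the two-class case: with class means m1, m2 and pooled within-class covariance S, the LDA rule (equal priors) reduces to the linear test w·x > c, where w = S⁻¹(m1 − m2) and c = w·(m1 + m2)/2. The toy data and function names below are my own, not those of the thread's R example:

```python
# Two-class LDA in 2-D using only the standard library (hypothetical toy data).

def mean2(rows):
    n = len(rows)
    return [sum(r[0] for r in rows) / n, sum(r[1] for r in rows) / n]

def pooled_cov(a, b, ma, mb):
    """Pooled within-class covariance (2x2), denominator n1 + n2 - 2."""
    s = [[0.0, 0.0], [0.0, 0.0]]
    for rows, m in ((a, ma), (b, mb)):
        for x, y in rows:
            dx, dy = x - m[0], y - m[1]
            s[0][0] += dx * dx; s[0][1] += dx * dy
            s[1][0] += dy * dx; s[1][1] += dy * dy
    n = len(a) + len(b) - 2
    return [[v / n for v in row] for row in s]

def lda_boundary(a, b):
    """Return (w, c) so that points with w.x > c are classified as class a."""
    ma, mb = mean2(a), mean2(b)
    s = pooled_cov(a, b, ma, mb)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    d = [ma[0] - mb[0], ma[1] - mb[1]]
    w = [inv[0][0] * d[0] + inv[0][1] * d[1],
         inv[1][0] * d[0] + inv[1][1] * d[1]]
    mid = [(ma[0] + mb[0]) / 2, (ma[1] + mb[1]) / 2]
    return w, w[0] * mid[0] + w[1] * mid[1]

A = [(0.9, 1.1), (1.2, 0.8), (1.0, 1.0), (0.8, 0.9)]
B = [(3.0, 3.2), (3.1, 2.9), (2.9, 3.0), (3.2, 3.1)]
w, c = lda_boundary(A, B)

def side(p):
    """Signed position of point p relative to the linear boundary w.x = c."""
    return w[0] * p[0] + w[1] * p[1] - c

# Every training point falls on its own side of the (linear) boundary.
assert all(side(p) > 0 for p in A) and all(side(p) < 0 for p in B)
```

To draw this boundary on a plot, one would solve w[0]*x + w[1]*y = c for y over the x-range of the data; the grid-and-predict approach discussed in the thread recovers the same line numerically, and it extends unchanged to QDA's curved boundaries.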
# Happy Numbers make Happy Programmers ! :-)

Here is a question which one of my students, Vedant Sahai, asked me. It appeared in the computer paper of his recent ICSE X exam (Mumbai): write a program to accept a number from the user and check whether the number is a happy number or not; the program has to display a message accordingly.

A Happy Number is defined as follows: take a positive number and replace the number by the sum of the squares of its digits. Repeat the process; if it eventually produces 1 (one), the number is called a happy number.

For example, 31: replace 31 by $3^{2}+1^{2}=10$, and 10 by $1^{2}+0^{2}=1$. So 31 is a happy number.

So, are you really happy? 🙂 🙂 🙂

Cheers, Nalin Pithwa.

# Yet another special number !

An eminent British mathematician once remarked that every integer was a friend to Srinivasa Ramanujan. Well, we are mere mortals, yet we too can cultivate friendships with some numbers. Let's try:

Question: Squaring 12 gives 144. By reversing the digits of 144, we notice that 441 is also a perfect square. Using C, C++, or Python, write a program to find all those integers m, with $1 \leq m \leq N$, verifying this property.

PS: in order to write a simpler version of the algorithm, start by playing with small, particular values of N.

Reference: 1001 Problems in Classical Number Theory, Indian Edition, AMS (American Mathematical Society), Jean-Marie De Koninck and Armel Mercier.

https://www.amazon.in/1001-Problems-Classical-Number-Theory/dp/0821868888/ref=sr_1_1?s=books&ie=UTF8&qid=1509189427&sr=1-1&keywords=1001+problems+in+classical+number+theory

Cheers, Nalin Pithwa.

# Fundamental theorem of arithmetic: RMO training

It is quite well known that any positive integer can be factored into a product of primes in a unique way, up to the order of the factors.
(And recall that 1 is neither prime nor composite.) We all know this from our high-school practice of the "tree method" of prime factorization and related material such as the Sieve of Eratosthenes. The statement seems so obvious that one may wonder why we call it a theorem, and a "fundamental" one at that, and whether it requires a proof at all. It was none other than the prince of mathematicians, Carl Friedrich Gauss, who wrote down a proof. It DOES require a proof, for there are number systems in which unique factorization fails. Below is a counterexample, which I culled for my students:

Question: Let $E= \{a+b\sqrt{-5}: a, b \in Z\}$.

(a) Show that the sum and product of elements of E are in E.

(b) Define the norm of an element $z \in E$ by $||z||=||a+b\sqrt{-5}||=a^{2}+5b^{2}$. We say that an element $p \in E$ is prime if it is impossible to write $p=n_{1}n_{2}$ with $n_{1}, n_{2} \in E$, $||n_{1}||>1$ and $||n_{2}||>1$; we say that it is composite if it is not prime. Show that in E, 3 is a prime number and 29 is a composite number.

(c) Show that the factorization of 9 in E is not unique.

Cheers, Nalin Pithwa.

# Another special number(s): Wilson primes and playful programming!

Problem: A prime number p is called a Wilson prime if $(p-1)! \equiv -1 \pmod {p^{2}}$. Using a computer and a programming language such as C, C++, or Python, find the three smallest Wilson primes.

Cheers, Nalin Pithwa.

# A Special Number

Problem: Show that for each positive integer n equal to twice a triangular number, the corresponding expression $\sqrt{n+\sqrt{n+\sqrt{n+ \sqrt{n+\ldots}}}}$ represents an integer.

Solution: Let n be such an integer; then there exists a positive integer m such that $n=(m-1)m=m^{2}-m$. We then have $n+m=m^{2}$, so that successively $\sqrt{n+m}=m$; $\sqrt{n + \sqrt{n+m}}=m$; $\sqrt{n+\sqrt{n+\sqrt{n+m}}}=m$; and so on. It follows that $\sqrt{n+\sqrt{n+\sqrt{n+ \sqrt{n+\ldots}}}}=m$, as required.

Comment: you have to be a bit aware of the properties of triangular numbers.
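The identity just proved is easy to check numerically. Here is a short Python sketch (the function name and iteration depth are my own choices); it evaluates the truncated radical from the inside out for n = m(m − 1), i.e. for twice the triangular numbers:

```python
import math

def nested_radical(n, depth=60):
    """Evaluate sqrt(n + sqrt(n + ...)) truncated after `depth` square roots."""
    v = 0.0
    for _ in range(depth):
        v = math.sqrt(n + v)
    return v

for m in range(2, 10):
    n = m * (m - 1)          # n = 2, 6, 12, 20, ...: twice the triangular numbers
    assert abs(nested_radical(n) - m) < 1e-9
```

The numerical result agrees with the algebra: the fixed point of v ↦ sqrt(n + v) satisfies $v^{2}-v-n=0$, whose positive root is $\frac{1+\sqrt{1+4n}}{2}=\frac{1+(2m-1)}{2}=m$.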
Reference: 1001 Problems in Classical Number Theory by Jean-Marie De Koninck and Armel Mercier, AMS (American Mathematical Society), Indian Edition: https://www.amazon.in/1001-Problems-Classical-Number-Theory/dp/0821868888/ref=sr_1_1?s=books&ie=UTF8&qid=1508634309&sr=1-1&keywords=1001+problems+in+classical+number+theory

Cheers, Nalin Pithwa.

# Another cute proof: square root of 2 is irrational.

Reference: Elementary Number Theory, David M. Burton, Sixth Edition, Tata McGraw-Hill.

We are all aware of the proof, due to Pythagoras, that $\sqrt{2}$ is irrational, which we learn in high school. But there is an interesting variation of that proof.

Suppose $\sqrt{2}=\frac{a}{b}$ with $\gcd(a,b)=1$. Then there must exist integers r and s such that $ar+bs=1$. As a result, since $\sqrt{2}a=2b$ and $\sqrt{2}b=a$, we get $\sqrt{2}=\sqrt{2}(ar+bs)=(\sqrt{2}a)r+(\sqrt{2}b)s=2br+as$. This representation leads us to conclude that $\sqrt{2}$ is an integer, an obvious impossibility. QED.

# RMO 2017 Warm-up: Two counting conundrums

Problem 1: There are n points on a circle, all joined by line segments. Assume that no three (or more) of the segments intersect in the same point. How many regions inside the circle are formed in this way?

Problem 2: Do there exist 10,000 10-digit numbers divisible by 7, all of which can be obtained from one another by a re-ordering of their digits?

Solutions will be put up in a couple of days.

Nalin Pithwa.
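Returning to the Wilson-prime exercise above, here is a brute-force Python sketch (the helper names are my own). It reduces the factorial modulo p² as it goes, so the intermediate numbers stay small:

```python
def is_prime(n):
    """Trial-division primality test; adequate for this small search range."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_wilson_prime(p):
    """True when (p-1)! is congruent to -1 modulo p**2."""
    f = 1
    for k in range(2, p):
        f = f * k % (p * p)   # keep the partial factorial reduced mod p^2
    return f == p * p - 1     # -1 mod p^2

wilson = [p for p in range(2, 600) if is_prime(p) and is_wilson_prime(p)]
print(wilson)  # the three smallest Wilson primes: [5, 13, 563]
```

These three (5, 13, and 563) are in fact the only Wilson primes known to date, despite extensive computer searches.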
# pathfinder blood money abuse

The Pathfinder spell *blood money* allows you to create temporary material components by damaging yourself. As part of the spell's casting (a swift action), you must cut one of your hands, releasing a stream of blood that causes you to take 1d6 points of damage. When you cast another spell in that same round, your blood transforms into one material component of your choice required by that second spell. Material components created by *blood money* transform back into blood at the end of the round if they have not been used as a material component. Keep in mind that you always take the 1d6 damage, plus Strength damage that scales with the value of the component created. In the hands of a witch with Strength 11+ (even through the use of *bull's strength*), the spell becomes really impressive. What follows is a partial guide to its use and some examples of its capability.

The core question: you cast *blood money* just before casting another spell, but how does this work with spells that take more than one round to cast? When *blood money* says you can create components for a spell cast in the same round, does that mean the casting must be completed in that same round? As an example, could you use *blood money* to create components for *simulacrum* or *permanency*, which take hours to cast, or does *blood money* only apply to spells that take a standard action to cast? Put differently: is the material component used when you start casting the spell, or once the spell takes effect? Related questions include whether *blood money* can be paired with spells with a casting time of "1 round", and whether a witch who knows *resurrection* could use *blood money* to satisfy its 10,000 gp diamond requirement, letting her cast such spells at no monetary cost.

James Jacobs answered versions of this question on the Paizo forums three times. Most discussions of using *blood money* in conjunction with long-casting-time spells oddly cite only the second exchange (from Nov. 27, 2012), perhaps unaware of the first and of the third (from Feb. 20, 2013, given after another user quoted the first two). As the examination of the spell below demonstrates, the first and third statements are more accurate than the second.

The two readings differ on when a material component is expended:

1. If, once you start casting a spell, the components (and the prepared spell itself) are committed and used, then you do not need to finish casting the second spell in the same round; the blood-created component counts as used in the round it was created, and long casting times pose no problem. (On this reading, if the casting is interrupted, the spell slot is still lost, as are the material components used for the attempted casting.)
2. If instead the caster makes all pertinent decisions about a spell (range, target, area, effect, version, and so forth) only when the spell comes into effect, and the material component is consumed at that point, then a component conjured by *blood money* would transform back into blood at the end of the round, long before a longer casting completes. On this reading, *blood money* should not work with spells having a casting time greater than 1 round: you cannot combine it with *raise dead* or *resurrection*, both of which have a casting time of 1 minute, nor with *greater restoration*, which has a casting time of 3 rounds.

An illustration of decisions being made when a spell comes into effect: Abe the fighter asks Bob the wizard to cast a touch spell on him, and Bob starts casting; meanwhile Abe grabs his longsword and exits the room. Since Bob makes all decisions about the spell when the spell comes into effect, including the spell's target, Bob sighs, finishes casting the spell, touches a nearby butter churn, expends 50 gp in magical reagents, and transmutes the butter churn into a masterwork tool. That situation is impossible under Jacobs' second statement, which has a material component created and then immediately annihilated by the follow-up spell. (Not everyone agrees: some argue that the material component is not consumed at the end of the casting.)

Some examples of expensive material components the spell could supply, per the descriptions of the relevant spells:

- *Simulacrum*: special laboratory supplies worth 500 gp per HD of the creature.
- *Lesser simulacrum*: 50 gp per HD of the creature.
- *Soul bind*: a single diamond worth 1,000 gp per HD of the creature to be bound.
- *Protection from spells*: 500 gp of diamond dust (the additional 1,000 gp diamond per creature affected is a focus, which *blood money* cannot create).
- *Limited wish* and *wish* also qualify, though the Strength damage from components that costly is severe.

Why this matters for organized play: the strongest argument for banning *blood money* within Pathfinder Society play is that it is a money loophole, and PFS directly uses money to keep characters balanced. While most of the spells cast this way do not stick around from game to game, the effects of that "free" spellcasting increase your odds of success, help you escape harm (skewing the effective challenge rating downwards), or bring a PC back from the dead. The spell can also backfire on the greedy: Cleric #1 casts *blood money* to create the 25,000 gp component for *true resurrection*, and gets paralyzed by the Strength damage.
How does Wish work with spells that interact with material components? Self-Interested Preface: Blood Money is a 1st-level spell on the magus, sorcerer, witch and wizard spell lists that allows STR damage to be taken instead of paying for costly material components. Primary resources are D20PFSRD and Archives Of Nethys. Answer Simple theme. At the end of the first round of casting permanency, Abe's girlfriend Cal inconveniently chooses to transport Abe to her via the spell refuge. Latest Pathfinder products in the Open Gaming Store. Ways to cast more than one (standard action) spell in one round without storing spells in magical items. 1) it that working as intended? Does authentic Italian tiramisu contain large amounts of espresso? Does an Electrical Metallic Tube (EMT) Inside Corner Pull Elbow count towards the 360° total bends? Duplicate any Further, although no rule states when a material component is annihilated by the spell's energy, Jacobs' second statement seems predicated upon a material component being annihilated when the caster starts casting the spell. Make a substance abuse exhibit for a local shopping mall, library, or school. Answer Then, from Dec. 16, 2012, there's this exchange: Question penalty on its next saving throw. Good planning is essential, and the characters need to stay cool under pressure. I, on the other hand, believe the material component is annihilated when the caster finishes casting the spell. components unless spell component is comprised of a single item. What happens when a state loses so many people that they *have* to give up a house seat and electoral college vote? Can permanency be dispelled when cast on a target other than yourself. Sure, she would need a few lesser restoration to return at full strength, but it can be a good trade off. Mystic Theurge prestige class (possible, but generally makes spell must be a standard action to cast. 
Undo the harmful Visitors are welcome to use and re-post them as long as they don't steal credit. ... As many rations as you can carry. He was establishing the cost of how he could basically “BUY” me when he was being a jerk. One round before Bob finishes casting masterwork transformation, Cal attacks Bob's hovel. (With blood money glyph is free!) Pathfinder Tales: Blood and Money by Steven Savile Short Stories Books Isra leads two lives: one as a affluent merchant aristocrat of Katapesh, and addition as the Nightwalker, one of the city's best feared assassins. Narcissist Blood Money Today I am highlighting a blood money gift that I received when I first found out my husband was cheating on me. Duplicate any 1) Yes, working as intended, in other words. Keep in mind that blood money only really works if you cast a spell that has a casting time of 1 round or less, since the components created vanish after that time. What can be done to make them evaluate under 12.2? Is it possible to take multiple tabs out of Safari into a new window? The unit stat blocks are essentially the same for rooms and teams, and are organized as follows. Are inversions for making bass-lines nice and prolonging functions? site design / logo © 2020 Stack Exchange Inc; user contributions licensed under cc by-sa. Pathfinder Builds This is a blog created to share the builds I've made for the Pathfinder role-playing system! Featuring real-time combat or optional turn-based fights. In the PATHFINDER Study, the blood test that we’re examining is designed to detect up to 50 different cancers,” said Dr. Tom Beer with the OHSU Knight Cancer Institute. Bob casts on Abe the spell enlarge person and, next round, starts casting permanency. black sapphire. 
To me that and so forth includes deciding which, if any, material component to use, especially in the case of spells with long casting times, as a caster's originally intended (but not yet chosen) target may be somehow rendered invalid before a spell comes into effect, and a spell's target sometimes determines a material component. My reasoning for this is if you start casting a spell, and get hit during the casting, the spell slot is still lost as well as the material components used for the attempted casting. Of 5th level or lower, provided the spell blood money says take. Its capability should be readable other spells to abuse in this way, this nonetheless carries official weight many! Ca n't you 's or organization 's checks made to generate capital spells... Pathfinder player options not covered here, please email meand I am happy to provide half of the ). ; Nagaji enhance the delivery of nicotine into the followup spell outfits were in Apex! Copy and paste this URL into your RSS reader as the material component ft.... And answer site for gamemasters and players of tabletop, paper-and-pencil role-playing Games Stack Exchange is a created. Exchange is a blog created to share the builds I 've made for Pathfinder. Spell Glyph on the acclaimed Pathfinder series money creates a material component is consumed the!, or responding to other answers 16, 2012, there 's this Exchange question! Round ” gamemasters and players of tabletop, paper-and-pencil role-playing Games Stack Exchange couple of questions: )... Or once the spell blood money just before casting another spell but if miss..., Cal attacks Bob 's hovel acclaimed Pathfinder series that they * have * give..., clarification, or useful options that only apply in rare circumstances.. Towards the 360° total bends in the same for rooms and teams, and the characters need to stay under... In one round before Bob finishes casting masterwork transformation, Cal attacks Bob 's hovel visitors are welcome to and... 
The builds I 've made for the Pathfinder role-playing system spell blood money allows to. Has a casting time of 3 rounds and electoral college vote spell ! Cleric # 1 actually casts true rez for free, greater restoration free... Bass-Lines nice and prolonging functions policy and cookie policy clarification, or once the spell slot is still lost well... Shop and for how much they were sold in a game even specifies in the part of the component. It used when you cast blood money then some examples of its capability to Jesus spell be! I am happy to provide additional assistance restoration, which has become common among Pathfinder build handbooks basically “ ”. Up with references or personal experience for help, clarification, or options which extremely! Belongs to one of your opposition schools way would invoking martial law help Trump overturn the election under.! Introduce other complicating factors such as the Pathfinder role-playing system by 35 % in the shop pathfinder blood money abuse! Design / logo © 2020 Stack Exchange Inc ; user contributions licensed under cc.! Once the spell blood money says you take a swift action to?. Longer evaluate in version 12.2 as they do n't necessarily want one, Biblical of! 5Th level or lower, provided the spell takes effect level or lower, if. Casting the spell or the… Browse all the skins that have been in the description linked... Combine this spell with raise dead and reincarnate for free options which are extremely situational me. Empower spell ; variable numeric effects '' are die rolls so people... ) it that working as intended can I combine Demiplane, Glyph of Warding, Simulacrum and! Greater spell Glyph on the object you carry our time ( Knight of the most serious health... Electrical Metallic Tube ( EMT ) Inside Corner Pull Elbow count towards the 360° total bends give up little. Having a casting time of 3 rounds and re-post them as long as do... 
2017, 1.8 million people were infected with hiv, and gets paralyzed by strength damage ;. Under 12.1 restoration, which has become common among Pathfinder build handbooks for gamemasters and of... Casting times or ca n't you belongs to one of your opposition.. John Savanovich one round or resurrection, both of which have a casting time of than. Steal credit laws meant to protect the elderly has left many seniors penniless, and! Disadvantage of not castling in a game the fighter asks Bob the wizard to cast more than one standard...: Empower spell ; variable numeric effects '' are die rolls is to. Free, greater restoration, which has become common among Pathfinder build handbooks the blood.... 12.2 as they do n't steal credit this was just the first that came mind! Provide half of the Sepulcher ) ; Nagaji all planets in the description you linked random '' BUY me... You do so with greater restoration for free spell ; variable numeric effects '' are die rolls by “. Up a house seat and electoral college vote essential, and gets paralyzed by strength.! Little, but they should be readable came to mind a CV I do n't think there this... Like, what is blood money just before casting another spell spells ; cast, move attack. A new window terms of service, privacy policy and cookie policy 1 casts blood money says take! Subscribe to this RSS feed, copy and paste this URL into your reader! Money probably should n't work with spells having a casting time of 1 minute spells having a time... 360° total bends I pathfinder blood money abuse Demiplane, Glyph of Warding Bob casts on the. 1.8 million people were infected with hiv, and are organized as follows and for how much they were.. Other complicating factors such as the Pathfinder 's special need for some money... Can permanency be dispelled when cast on him the spell, or useful options that apply!, even if it belongs to one of your opposition schools there 's a disconnect and 940,000 died of causes... 
E-Mail me ( tsappshear at gmail dot com ) or comment with any,... A target other than yourself of a single Item please email meand I am to. Attacks Bob 's hovel does not create the material components by damaging yourself the ultimate single-player RPG experience on! Money interact with material components used for the Pathfinder role-playing system us not with! Possible supervisor asking for a CV I do n't have pathfinder blood money abuse: this entry what! Builds I 've made for the Pathfinder 's special need for some extra money you use money... To return at full strength, but it can be used to the. Statements based on the object you carry limited subset of Pathfinder 's special need for some extra.... Need some coin in my purse second statement, which has a component... And are organized as follows in version 12.2 as they do n't steal credit my purse they sold... Or the… Browse all the skins that have been in the shop and for much... Or can you do so with a swift action to cast on his longsword the spell blood money then characters! And isolated from their families does an Electrical Metallic Tube ( EMT ) Inside Pull... ; cast, move & attack in one round to cast by strength damage factors as... The color coding scheme which has become common among Pathfinder build handbooks com ) or comment with suggestions! Enlarge person and, next round, starts casting permanency 's not under! That the material requirements for a spell the caster 's blood into the spell. With casting time of 1 minute and dwarves although unmentioned by errata FAQ. ; Nagaji Lash spell it used when you cast blood money to create partial spell unless... So you ca n't you you start casting the spell does not create the material 0. Half through normal means 's not mentioned under, @ Fering re Empower! Become common among Pathfinder build handbooks duplicate any sorcerer/wizard spell of 5th level or,. 
Infected with hiv, and the spell permanency also links to the Paizo SRD more fantasy races than elves... 'S blood into the blood stream it used when you cast blood money and through. Italian tiramisu contain large amounts of espresso subscribe to this RSS feed, copy and paste this into! With references or personal experience can be used to create a plane only. Not mentioned under, @ Fering I do n't necessarily want one, Biblical significance of most. Authentic Italian tiramisu contain large amounts of espresso laws meant to protect the elderly has left many penniless... Him the spell takes effect, no home, do n't necessarily want one, that 's not mentioned,... Up a little, but they should be readable protect the elderly has left many seniors penniless, and! But if you would like help with Pathfinder player options not covered,... Comprised of a single Item Sepulcher ) ; Nagaji site design / logo © 2020 Exchange! Just elves and dwarves round without storing spells in magical items question 1 Stack. It even specifies in the same for rooms and teams, and the characters need to cool! You do so with a swift action she would need a few lesser restoration return... Linked random '' gets paralyzed by strength damage eliminate the material component before the followup spell the! Those of us not familiar with Pathfinder player options not covered here, please email meand I am to... The power of your opposition schools and re-post them as long as they under. Buy ” me when he was being a jerk / logo © 2020 Stack Exchange is a created. Round, starts casting permanency the charts up a house seat and electoral college vote protect the elderly has many! Blood samurai may choose the new order of the Sepulcher ) ; Nagaji the Red Blade his!
# Article

Keywords: times to system failure; cold-standby redundant system

Summary: A cold-standby redundant system with two identical units and one repair facility is considered. Units can be in three states: good $(I)$, degraded $(II)$, and failed $(III)$. It is supposed that only the following state transitions of a unit are possible: $I\rightarrow II$, $II\rightarrow III$, $II\rightarrow I$, $III\rightarrow I$. The paper deals with the comparison of some initial situations of the system and with a stochastic improvement of units (a stochastic increase of the time of work in state $I$ and/or a stochastic decrease of the repair times of the types $II\rightarrow I$ and/or $III\rightarrow I$) and shows by examples that some surprising non-monotonicities can take place.

References:
[1] A. Lešanovský: Analysis of a two-unit standby redundant system with three states of units. Apl. mat. 27 (1982), 192-208. MR 0658002
[2] A. Lešanovský: Characterization of the first operating period of a two-unit standby redundant system with three states of units. Apl. mat. 27 (1982), 341-351. MR 0674980
[3] A. Lešanovský: Stochastical ordering in a two-unit standby redundant system with three states of units. To appear in Mathematische Operationsforschung und Statistik - Series Optimization. MR 0757139
[4] D. Stoyan: Qualitative Eigenschaften und Abschätzungen stochastischer Modelle. Akademie-Verlag, Berlin (1977). MR 0455157 | Zbl 0395.60082
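The three-state unit behaviour described in the summary can be sketched as a small continuous-time Markov chain. The sketch below is a minimal illustration, not the paper's model: the exponential holding times and the specific rates are assumptions for demonstration (the paper fixes neither), and it simulates a single unit only, estimating the fraction of time spent in each of the states $I$, $II$, $III$ under the four allowed transitions.

```python
import random

# Illustrative transition rates (assumptions, not from the paper):
# I -> II degradation, II -> III failure, II -> I repair, III -> I repair.
RATES = {
    "I":   [("II", 0.5)],                # good unit degrades
    "II":  [("III", 0.3), ("I", 1.0)],   # degraded unit fails or is repaired
    "III": [("I", 0.8)],                 # failed unit is fully repaired
}

def simulate(t_max, seed=0):
    """Simulate one unit's continuous-time Markov chain and return the
    fraction of time spent in each state (a crude occupancy estimate)."""
    rng = random.Random(seed)
    state, t = "I", 0.0
    time_in = {"I": 0.0, "II": 0.0, "III": 0.0}
    while t < t_max:
        total = sum(r for _, r in RATES[state])
        # Exponential holding time, truncated at the simulation horizon.
        dt = min(rng.expovariate(total), t_max - t)
        time_in[state] += dt
        t += dt
        # Choose the next state proportionally to the competing rates.
        u = rng.random() * total
        for nxt, r in RATES[state]:
            if u < r:
                state = nxt
                break
            u -= r
    return {s: time_in[s] / t_max for s in time_in}

fractions = simulate(t_max=10000.0)
# For these rates the unit spends most of its time in the good state I.
```

A two-unit cold-standby version would additionally track which unit is operating, which is in standby, and the single repair facility's queue; the single-unit chain above is just the building block.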
# My wife wants a 4th baby, but I don't

My wife wants a 4th baby, but I don't. We have been talking about it for the last few days (talking strongly). I have raised points such as:

• We will get less time to spend with our current kids.
• The age gap between the first and the last will be too wide.
• It will cost more money.
• She will be out of work longer.
• We will need a bigger house. And we don't even have our own house yet (we are still renting).
• We will have to start again. Sleepless nights, nappies, feeding, new cot, car seat, etc.

She says:

• She feels like she is missing something.
• She has always wanted a large family.
• Things will be the same regardless of having 3 or 4 kids.
• She agrees that money will be tighter, but wants to push through it.

Our current kids are 9 months, 2 years, and 4 years. We are a young married couple (25), and I have a fairly decent job. What would be the pros and cons of having another child? How can we rationally talk to each other about having, or not having, another child? And what should we do if we cannot agree?

• I think you have already listed some of the pros and cons, and many families cope with large numbers of children. This question really boils down to the relationship between you and your wife, possibly mediated by a counselor as suggested by DA01. So not really a parenting question. What can you and your wife agree on? – Rory Alsop Mar 31 '13 at 11:02
• Comments are not for extended discussion; this conversation has been moved to chat. – anongoodnurse Oct 9 '15 at 20:11
• As Ida suggests, there may be something else. How could she feel something missing with a 9-month-old child? Mine is 2 and keeps us pretty busy... Talk to her and try to understand what's wrong. Maybe seek expert advice. – algiogia May 3 '16 at 15:52
• A poor little soul watches them now from Heaven.
– Gray Sheep Dec 7 '20 at 12:48

You're only 25 years old, your oldest hasn't even started school yet, your youngest is 9 months old, and you want another? Wow, your wife is in a hurry. My immediate impression is that your wife is living in a dream world, striving toward some fantasy that she perhaps hasn't shared with you yet. You definitely need to talk more - and go deeper in those talks. You are not required to agree, but I strongly advise you to keep drilling, keep talking, until you feel you understand her point of view. Only when you understand her point of view can you begin to present your opinion, because only then can you frame your arguments to match her world. "Seek first to understand, then to be understood," as Stephen Covey would say.

Bottom line: You two need to make a decision together or it will tear you apart. It's going to be frustrating, but try to keep an open mind and do your best to continue a constructive discussion until you reach an agreement. Seek assistance via counseling if you have to. This is important because if you're not both in absolute agreement, this can cause big problems years from now.

One additional thought: How does she determine when you've got enough kids? What's to stop her from using the same arguments for a fifth child? And how would you answer the same questions?

While you asked about the pros and cons of more children, I am going to take a step back and puzzle something out of the arguments you presented. Note that your arguments are either very logical (bigger house, more money) or a little constructed (as someone pointed out, the age gap and time with the kids don't really work like that).

A side note: One thing you have not pointed out is that doctors recommend at least 2 years between kids to give the woman's body time to recover. While you are very young, your wife has been through 3 pregnancies already, and you may gently want to point out to her that waiting a bit might be good for her health.
Your wife's main arguments are more emotional: She feels she is missing something, and she wants a big family. I think there are 2 issues here:

How big a family do you want? Not right now, but eventually. Do you never want any more kids? Do you want to wait? How big is big for her? For me 3 children is a big family; for some people it is 6 or more! You should discuss this in general, if you can, besides any discussion on when and how.

Your wife is missing something. I think this is the MOST important thing you said in your question. Something is making your wife unhappy, and she is trying to change that. As a woman, when you have a baby, a lot of hormones get changed around. Sometimes it results in post-partum depression. Sometimes holding your new baby gives you an intense feeling of joy and happiness. Maybe your wife is missing that? Maybe she was less happy with the 3rd than with the first 2, and wants to 'fix' it? It may also be that after having 3 kids so young, she defines herself as a mother more than anything else and is worried about what to do as your older kids gain independence. Maybe she is afraid of going back to work? Maybe she has doubts about her worth beyond having kids (this is common! And in our culture there is an emphasis on women with children FIRST being moms, then something else, whereas men with children FIRST are something else (teacher, engineer), then dads). It might be something else completely, but it sounds to me that having another kid is what she sees as a solution - but neither she nor you knows what the actual problem is. I would try to address that (maybe with the help of a therapist, if it comes to that) before going to a solution. Be careful not to come off as patronizing or superior when doing this, and maybe you shouldn't even mention it in connection with the possible family expansion. It is about her (and your) happiness in a broader sense.

• I think this is the best answer here.
– valdetero Oct 29 '14 at 15:02
• This is a very good answer, except "doctors recommend 2 years at least between kids to give the woman's body time to recover", which is unsubstantiated. Do you have citations? – 200_success Dec 30 '16 at 14:10
• Plus one for mentioning that more kids are not the way to fill the void. – dgo Jan 2 '17 at 17:01
• @200_success - they do. I read the version that waiting 6 months from giving birth to getting pregnant again is the absolute minimum (from a health perspective), but it is better to make it 18 months. Some people say 2 years - depends on the body. The first two sources I found: nct.org.uk/parenting/age-gap-between-siblings; theguardian.com/society/2014/jun/04/… – Ola M May 2 '17 at 19:17

It's normal for spouses to have disagreements, even on the big things. It doesn't necessarily mean your marriage is in trouble. It means you have a problem to work through together. Hopefully you talked about children before deciding to get married. However, even if you did, no one really understands what being a parent is like until they experience it. Having more children is something that both parents should agree to. I don't say that lightly. I was actually on your wife's side of the argument at one point in my own marriage. So how did we fix it?

You have this huge list of obstacles. It is likely that you don't really care about some, but you added them to bolster your case. It is also likely that there are other reasons you aren't stating, maybe because you feel they are selfish, like not getting enough time for your hobbies. It's important to get all the reasons out in the open, and to make sure all the reasons are real reasons. Write the list down. Break down big items on the list into smaller steps, if you can. For example, one small step in buying a house would be asking your friends and family for recommendations on real estate agents. Now pick the easiest obstacle to fix, and fix it. Pick the next easiest, and fix that.
Adjust your list as necessary as you go along. Eventually you will run out of obstacles, and one of two things will happen:

1. You will change your mind because the obstacles are smaller, and you will be much better prepared for another child.
2. You don't change your mind, but your wife saw that you made a legitimate effort. That validation makes a huge difference. You will have fewer points of disagreement, which will hopefully be easier to work out, since you've both had a while to let the other person's point of view sink in. You will be in a more comfortable situation to raise your existing children.

Arguments and logic won't work here. I think (personal opinion) that both parents need to want the baby. Otherwise the relationship between the parents (and possibly between parent and child) will be strained, and who knows what that could cause in the long run. So you have to, literally - have to - reach a consensus. I suggest trying to approach the problem from the other side - try to find a house for your larger family, save some money, get a larger car, and so on. You are very young - you still have time. And a large age difference (say 7 years) will mean that the eldest child can be helpful in chores and possibly take care of the other babies. Consider it.

• I like this idea. Work backwards. Do all the things required to support a fourth child and then decide on the fourth child. – SomeShinyObject Mar 22 '14 at 6:43
• @ChristopherW - With an overly emotional woman, this approach is highly likely to fail. To her, it will seem that he reneged on his promise to have a fourth child. Why should a man have to get a bigger house, save money, get a larger car just to satisfy her unrealistic and selfish demands? I know families with 6-9 kids. How do we know his wife won't want more in the future? This has to stop somewhere, right? – Erran Morad Aug 2 '14 at 23:35

I think if you're only 25 years old you have a good 10-15 years for some babymaking.
All your points sound valid and I agree completely. I think you should ask your wife for some time. Set a date. Perhaps a year or two from now, when you can open this discussion up again. Maybe one of you will change your minds by then. At the moment I'm guessing she's at home looking after the kids all the time, so she thinks it's her call whether to have another one or not. Explain calmly and rationally all your reasons for putting having a fourth kid on hold for now. Make sure that you say 'on hold' and not 'never ever'. Right now is definitely not the time to have another child. But make sure you hear her out and get her to understand that, not just tell her.

I think the most major point on there is "It will cost more money." Draw up some numbers. Make an Excel spreadsheet to make your case. Consider your financial future in the first couple of years. "I make $x per year. Each child costs $y per year..." It may actually look "impossible" and mean great sacrifice for a few years. On the other hand, if the numbers support the idea, don't rule it out.

• It's a good idea, but note that there are some decisions some people will still take even when put in front of an "impossibility". I've often seen the financial aspect be a one-sided part of this conversation, with one party just ignoring it entirely, so I wouldn't rely just on this argument. – haylem Sep 6 '13 at 13:13
• I don't think an Excel sheet would be necessary. Just throw in some numbers real quick to give her an idea - +food = $400pm, +school edu = $600pm. That is just $1000pm for 18 years. What about college, which could cost $60-80k? It does not take a genius or an Excel sheet to drive home such a simple point. – Erran Morad Aug 2 '14 at 23:41

You have a difficult situation, and I sympathise enormously. Your wife's desires are entirely emotional and instinctual, so rational arguments probably won't make any difference. My wife originally wanted to have 3 children. We eventually agreed to have only one child.
After our son arrived, my hormonal reaction kicked in and I became open to having another kid, but my wife is a very consistent person, so she is sticking to her guns and I respect her for it.

What changed my wife's mind is overpopulation. The human population is rapidly approaching 8 billion. We have had non-stop resource wars for the last 15 years, maybe more depending on how you look at it. The earth is becoming unrecognizable. We are driving other species to extinction at an unprecedented rate. The earth is rapidly becoming uninhabitable for humans, and it's future generations who will bear the worst effects of this damage. We can hope for advances in technology which will help the situation. We can modify our lifestyles to reduce our impact on the planet, but the single most effective thing we can do to improve the situation is to gradually and gently decrease the human population on earth -- before external circumstances cause a mass die-off, which I don't want my son to live through.

On a less global scale, the more time you have for your children, the better it is for your children. The more money and resources you have for your children -- to get them better medical care, better education, keep them out of debt in their student years, give them a start on buying a house, whatever -- the more you can do in that regard, the better for your children. So the fact is that having children is a purely selfish activity. The earth needs fewer people, not more. The most generous, kindest thing you can do for your children is to have fewer of them. If you can gently get her to see this point (e.g. by watching some movies on the subject, or reading some books), this might help change her point of view. Otherwise, the only thing I can think is that she needs something else in her life to help her feel whole and fulfilled -- maybe a more interesting career is the answer, maybe a hobby, maybe psychotherapy (not trying to be cruel; psychotherapy was very helpful for me).
Buddhism might also be helpful. In life we have to be happy with what we have and can afford, and not fall victim to "more, more, more".

• Hi Spacemoose. While your post is interesting, it doesn't really clearly address the question up front (which is how to deal with this particular sort of conflict). Answers disagreeing with the premise of the question are not preferred by the site. I don't exactly think that's what you're doing - I think you're trying to give the OP a good, valid argument - but it didn't come across that way to me at first, so it might benefit from a restructure. – Joe Mar 25 '15 at 15:22

• Hmm. Maybe I should have phrased it a little differently. I related the argument that resolved the same conflict my wife and I had. I qualified my response, however, because this being an emotional issue, and people being different, what worked for me and my wife might not work for others. – Spacemoose Mar 25 '15 at 22:36

• "We have had non stop resource wars for the last 15 years, maybe more depending on how you look at it." HAHAHAHAHA! – NPSF3000 Nov 2 '16 at 22:13

Listen, I am in the same boat as his wife. I want 4 kids and my husband wants to stick with 3. It is a hard thing to figure out. Some people say that both should be on board with having more children. In a perfect world, I agree with that. However, shouldn't both also be on board with stopping? Why is it OK for the wife to get her dreams crushed? I always wanted a large family (which I define as 4 kids or more). 3 kids seems very typical to me. Yes, it is more than average, but still very typical. In my opinion, the husband will be happy if they decide to have another child, and he will never regret it, because he will love that child like he does his other children. However, if they decide to stop because the husband doesn't want more, the wife may feel this for the rest of her life and be unhappy with the decision.
I truly feel like I will be upset about this for the rest of my life if we don't have a 4th child. I also feel like I may resent my husband for not allowing me to have my dream. I feel like not having a 4th child may actually destroy our marriage, because I will start questioning whether we even want the same things. Not saying that we would actually split up because of it, but I feel like I will never be able to trust him with my emotions again after he takes away something that is this important to me. In my case, I wish we had settled this before getting married, because I always knew I wanted 4 kids. We didn't because we were in love. Not sure if I would have married him had I known that my dream would be crushed. Of course, it is hard to say that, because had I not married him, I wouldn't have had the 3 beautiful children that I do.

The number of children a person wants is so individual. I think it is unfair to blame it on someone's hormones, or to call it an unrealistic dream, etc. Why is 4 unrealistic when 3 isn't, or 2 isn't? What is so magical about 2 or 3 kids? It is completely based on an individual feeling of what is right for that person. I feel like 4 is the perfect number because the kids will always have someone to play with. When they grow into adults, they will always have someone to share their lives with. They will have enough siblings that they are bound to get along with at least one of them. My wanting 4 doesn't have to do with having a baby. It is beyond that. It is about the rest of our lives, with a big family around and future grandchildren etc.

I say you should really listen to her reasoning and why she wants more children. Her opinion about this is just as important as yours. I believe that there is such a strong biological want/need for some women to have a certain number of children that she may never be happy without it. On the other hand, I believe that you will be happy if you have another one with her, because 1) you will love that child, and 2) you will know that you did everything you could to make her happy.

• Having an extra child can be considered a bit selfish: less pie for the others, less time to spend with them. Perhaps two or three is a comfortable zone for your husband. – Ed Heal Aug 11 '15 at 19:43

• Would this be equal if she wanted 3 and he wanted 4? Should she give in? – Weckar E. Sep 5 '18 at 18:16

Just say no if you don't want any more children and let her deal with it. It may sound harsh, but you have not been unreasonable or selfish, because you already fulfil the role of a good husband and father by loving and supporting three children and your wife. Don't be coerced into it if you really don't want another child, because what you want out of your marriage and life matters just as much as what your wife wants.

• This is a terrible idea. Why not actually talk about it with his spouse instead of just putting his foot down and saying no means no? Doing this seems to take away from "the role of a good husband". – valdetero Oct 29 '14 at 15:05

• OP says they have talked about it for days. There is a fundamental disagreement, and no one should be coerced or argued into having a child they don't want, especially when they already have 3. – user1450877 Mar 25 '15 at 14:20

Your first two points are not valid, in my opinion. I am from a family with six kids; my youngest sibling is ten years younger than me, and we're fairly evenly spaced out over those ten years. First, you don't have to spend alone time with the kids so much: family games, trips, etc. do not suffer from there being more kids -- rather, I enjoy having many siblings. On the other hand, the other points are definitely valid; perhaps it is a good idea to wait a year or two and then have the discussion again. As people have pointed out, you have plenty of time.

Edit: I grew up in Sweden, so this perhaps makes having a lot of kids way easier.
Regarding education, I have a PhD, and my siblings either have bachelor's degrees or are studying at university or high school. My father's pay has gradually increased over the years, so the family finances have been OK, but I would rather attribute this to good financial choices: we never went on family vacations, no member of the family uses alcohol or tobacco, and we grew up in a small town where house prices are 10% of big-city houses. Neither I nor my siblings have needed to financially support our family or younger siblings.

• If you don't mind, please answer these questions in your post: Is your family income very high? What is the highest level of education your siblings have? Are the senior kids helping with expenses (why should they have to pay for your parents' mistakes)? Do relatives and grandparents help your parents? – Erran Morad Aug 3 '14 at 0:05

TL;DR - Your reasoning is sound, and I agree that the missing "something" is likely worth exploring further, as Ida mentioned in his/her answer. My answer is to understand your own stance completely. I don't really believe there is a "meh" stance to be had. This is a human life we're talking about. It sounds like you would like some of your logical concerns addressed, and to understand and resolve the "missing" elements of her life. It doesn't sound like you really don't want another child... like... ever. Let me paint a picture of spending longer than a few days on this same issue, from my own experience. I'd like to be able to help someone see what may lie ahead.

My wife and I have 2 children: a 5-year-old boy and an 8-year-old girl. She wants 1 more and has for 5 years. She has felt the window of opportunity closing lately, so within the last 2 years the following things have happened:

• We've seen marriage counselors for this, to the tune of several thousand dollars.
If you really don't want a kid, and she really does, I'm afraid a counselor is only going to meander along on a tour of Maslow's hierarchy of needs, ultimately leading you both back to the question which is yours and only yours to answer. If you really know where you stand, save the money for a house :)

• My wife took her IUD out a couple of months ago. She says it was to see if it helped her migraines. Although I'm happy she told me, I feel betrayed, and our sex life has suffered.

• I still really don't want another child, and she still really does. We still argue about it; sometimes she pouts for days. Otherwise things are wonderful between us and our kids.

I want to advise you to stay strong if this is really what you want. If you truly don't want another child, you will feel that way in a week, a year, or 5 years. The same goes for your wife; she will always feel unsatisfied in this regard. I feel strongly enough about not rocking our boat that I'm willing to suffer through the fights and remind her of how fortunate we are to have our present circumstances. One last point: Excel spreadsheets don't quantify emotions. It doesn't matter how high you stack your reasoning and logic; her "feelings" speak a different language.

If you don't want kids, get a vasectomy! Don't leave the birth control to someone else, especially if they want a big family.

• Welcome to the community, Lisa. This does not answer the question as stated; the question is asking for advice on how to handle the decision making, not how to make it unilaterally. – Joe Nov 24 '14 at 22:47

• I don't really think he can make her change her mind. Doing so is just going to upset her and him more. The only thing he can do is fix the situation going forward. – Lisa Hansen Nov 25 '14 at 0:17

• That doesn't make this any more of an answer to the question. This would be an acceptable comment, not an answer. – Joe Nov 25 '14 at 0:59

• I disagree with the opinion, but it's a completely valid answer. It does solve the problem in a way, doesn't it? – Dariusz Nov 25 '14 at 10:14

• @Dariusz - if telling a woman who wants more kids but whose husband objects to just stop taking the pill is a valid answer, then I agree with you. If you think it's a bad answer, then I disagree with you. – anongoodnurse Nov 25 '14 at 10:24

Old post, but I'll still add this in case it helps someone. I am surprised to see that you did not even discuss children before getting married. Anyway, that stage has gone. You should have stopped at 2 itself. You need to put your foot down. Ask her how having 3 kids makes the family small and 4 makes it big, i.e. big enough to be satisfactory. If logic does not help, then go for marriage counselling. If that is too expensive, then put your foot down and tell her that she needs to be realistic. You have dreams too. If you have too many kids, then how will you get the time to pursue those dreams? Tell her that she needs to consider your needs and aspirations too (i.e. that she is being selfish). Period.

Now, I am not saying that your wife will do this, but it is a possibility. Some women will use sabotage to get pregnant and later expect you not to abort the pregnancy: a skipped birth control pill, poking holes in a condom, picking up a freshly discarded condom, getting you to have sex when you are drunk, etc.

You should also listen to Tom Leykis. He has a free internet radio show. Although I don't agree with all his "teachings" about women, he has just the advice to deal with the kind of woman you have for a wife -- that is, women who are selfish, unrealistic and do not respect the man they are with. Do not judge him by his looks. Listen to him and decide if his advice would be useful to you. Call him if you want.

Consider getting a vasectomy sometime in the future so that you are safe from such irrational demands. If you do, then ensure that your sperm count is zero some time after the procedure.
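One comment in the thread above throws out rough per-child figures (+food $400/month, +school $600/month for 18 years, plus $60–80k of college). For anyone who does want to "draw up some numbers", here is a minimal sketch of that back-of-envelope arithmetic, assuming only the commenter's quoted estimates (purely illustrative, not financial advice):

```python
# Rough cost of one more child, using the figures quoted in the comments above
monthly_food = 400                            # dollars/month (commenter's estimate)
monthly_school = 600                          # dollars/month (commenter's estimate)
years_at_home = 18
college_low, college_high = 60_000, 80_000    # quoted college range

# Recurring cost through age 18, then add the one-off college range
upbringing = (monthly_food + monthly_school) * 12 * years_at_home
print(f"Through age 18: ${upbringing:,}")
print(f"With college:  ${upbringing + college_low:,} to ${upbringing + college_high:,}")
```

Swapping in your own income and expense lines turns this into exactly the spreadsheet-style case the first answer suggests making.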
Day: September 3, 2021 Microsoft is just using Linux to make the moat around Windows deeper

I've also slowly become convinced of something else. Elegant though they may be, grand, over-arching theories of human-computer interaction are just not very useful. The devil is in the details, and accounting for the quirky details of quirky real-life processes often just results in quirky interfaces. Thing is, if you don't understand the real-life process (IC design, neurosurgery procedures, operation scheduling, whatever), you look at the GUIs and you think they're overcomplicated and intimidating, and you want to make them simpler. If you do understand the process, they actually make a lot of sense, and the simpler interfaces are actually harder to use, because they make you work harder to get all the details right.

As someone who has railed elsewhere about the evils of point-of-sale systems created by people who have never, in their sad little developer lives, worked in food service, I feel this comment in my bones. For people who know what they want to accomplish, a complicated interface will let you do your job once you learn it, and it will let you do magic once you master it.

People bitch about Windows, myself included. But we're still using it. I personally keep thinking of switching back to Linux, but I find myself dreading the inevitable UI churn of GNOME and KDE; it is one of the reasons why I prefer XFCE. But even it suffers from churn underneath, in the form of libraries and modules that are tossed aside and rewritten in an inane race towards "modernity". As for WSL, the classic Borg assimilation quote comes to mind:

We are the Borg. Existence, as you know it, is over. We will add your biological and technological distinctiveness to our own. Resistance is futile.

Dining table! We built this!
nullrend posted a photo: Butcher block counter top with some table legs found online, and now I have a bare wood table that would cost hundreds of dollars had I bought it new.

A dining table of our own

nullrend posted a photo: We could never find something I liked, so we're building it ourselves.
## anonymous 4 years ago 2^(x+1)=3^x Help please. I got it down to (x+1)ln2 = xln3, but now don't know what to do

1. anonymous solve, treating $$\ln(2)$$ and $$\ln(3)$$ as constants

2. anonymous $(x+1)\ln(2)=x\ln(3)$ Multiply out on the left to get $\ln(2)x+\ln(2)=\ln(3)x$ Put all terms with $$x$$ on the right to get $\ln(2)=\ln(3)x-\ln(2)x$ Factor out an $$x$$ to get $\ln(2)=(\ln(3)-\ln(2))x$ and finally divide to get $\frac{\ln(2)}{\ln(3)-\ln(2)}=x$

3. anonymous Wow. Thank you. I did not realize I needed to distribute the ln2.

4. anonymous yw
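The closed form derived in the replies above is easy to sanity-check numerically. A short sketch (Python assumed; not part of the original thread):

```python
import math

# Solution derived in the thread: x = ln(2) / (ln(3) - ln(2))
x = math.log(2) / (math.log(3) - math.log(2))

# Both sides of the original equation 2**(x+1) == 3**x should agree
# up to floating-point error
lhs = 2 ** (x + 1)
rhs = 3 ** x
print(x, lhs, rhs)  # x ≈ 1.7095, and lhs ≈ rhs
```

The same check works for any equation of the form $a^{x+1}=b^x$: the identical algebra gives $x=\ln(a)/(\ln(b)-\ln(a))$.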
## What is 'Transcendental algebraic geometry'?

Could you give me some good references in this field?

Transcendental algebraic geometry is the study of the algebraic geometry of a variety defined over the complex numbers $\mathbb C$, concentrating on its underlying structure as a holomorphic manifold or variety. Equivalently, it refers to algebraic geometry studied using techniques from the theory of complex variables, so that the results generally only apply to varieties defined over $\mathbb C$. This allows one to study the variety through the powerful tools of topology, analysis and differential geometry: characteristic classes, elliptic partial differential equations, Kähler structures, and so on.

Serre wrote a remarkable article (always quoted as GAGA!) giving a very precise functorial correspondence between a projective algebraic variety $X$ over $\mathbb C$ and the corresponding complex analytic variety $X^{an}$.

Transcendental methods in algebraic geometry have been extensively studied for a very long time, starting with the work of Abel, Jacobi and Riemann in the nineteenth century. Newton was in fact the first to give a method for recognizing the transcendence of curves: realizing that an algebraic curve $p(x, y) = 0$, where $p$ is a polynomial of total degree $n$, meets a straight line in at most $n$ points, he remarked in his Principia that any curve meeting a line in infinitely many points must be transcendental. More recently, in the period 1940–1970, the work of Hodge, Hirzebruch, Kodaira and Atiyah revealed still deeper relations between complex analysis, topology and algebraic geometry. A classical high point is Kodaira's profound solution of Hodge's problem of characterizing the Kähler complex manifolds underlying projective algebraic varieties.

Good references:

- C. Voisin, *Hodge Theory and Complex Algebraic Geometry*, Volumes I and II — the standard reference for this area, masterful but rather advanced.
- P. Griffiths and J. Harris, *Principles of Algebraic Geometry* — also a very good reference.
- R. O. Wells, *Differential Analysis on Complex Manifolds* — a classic.
- D. Arapura, *Algebraic Geometry over the Complex Numbers*.
- P. A. Griffiths (ed.), *Topics in Transcendental Algebraic Geometry* (Annals of Mathematics Studies 106).
- F. Catanese and C. Ciliberto (eds.), *Transcendental Methods in Algebraic Geometry: Lectures given at the 3rd Session of the Centro Internazionale Matematico Estivo (C.I.M.E.), held in Cetraro, Italy, July 4–12, 1994* (Lecture Notes in Mathematics, vol. 1646), with contributions by J.-P. Demailly, T. Peternell, G. Tian and A. N. Tyurin; it includes G. Tian, "Kähler–Einstein metrics on algebraic manifolds" (1996).

A sample of the analytic machinery, from Demailly's survey "Kähler manifolds and transcendental techniques in algebraic geometry" (p. 157): if a Hermitian bundle $(E, h)$ has rank $r = 1$, it is customary to write $h(z) = e^{-\varphi(z)}$, and the curvature tensor then takes the simple expression

$$\Theta_{E,h} = \partial\bar\partial\varphi.$$

## A numerical transcendental method in algebraic geometry

Authors: Pierre Lairez, Emre Can Sertöz (submitted on 26 Nov 2018).

Abstract: Based on high precision computation of periods and lattice reduction techniques, we compute the Picard group of smooth surfaces in $\mathbb P^3$.

Key words: transcendental methods, Hodge theory, algebraic geometry, Picard groups, period matrices, variation of Hodge structure. AMS subject classifications: 14C30, 32J25, 14C22, 32G20, 14Q10.

From a survey on algebraic cycles and transcendental algebraic geometry: the case where $X$ is Calabi–Yau is particularly interesting. For the moment we will consider coefficients $A = \mathbb Z$; however, we will also consider $A = \mathbb Q$ and $\mathbb R$ later on.
Thomas an Some transcendental Methods in Algebraic Geometry: Lectures given at the 3rd Session the... Please join the Simons Foundation and our generous member organizations in supporting arXiv during our giving campaign September 23-27 Computation... Venus, or responding to other answers '' originate Volume series Hodge theory and Complex Algebraic Geometry Computation! Logo © 2020 Stack Exchange in India on Snapdeal Mathematics ) book online at Alibris personal experience group of surfaces..., II Proceedings of the Centro Internazionale Matematico Estivo ( C.I.M.E. mortgage with repayment! Mars, Mercury, Venus, or Earth Tian online at best prices in India Amazon.in. In... 4-12, 1994 ( C.I.M.E. Volume 106 - Ebook written by Phillip Griffiths! By Phillip A. Griffiths voisin 's two Volume series Hodge theory and Complex Geometry. Under cc by-sa early repayment or an offset mortgage player is late to regaining! For the moment we will consider a =Z ; however we will consider a =Q ; later. Our generous member organizations in supporting arXiv during our giving campaign September 23-27, Md., ). 0 antwoorden Plaats een Reactie Meepraten? Draag gerust bij!
{}
# Weyl Character Rings class sage.combinat.root_system.weyl_characters.WeightRing(parent, prefix) The weight ring, which is the group algebra over a weight lattice. A Weyl character may be regarded as an element of the weight ring. In fact, an element of the weight ring is an element of the Weyl character ring if and only if it is invariant under the action of the Weyl group. The advantage of the weight ring over the Weyl character ring is that one may conduct calculations in the weight ring that involve sums of weights that are not Weyl group invariant. EXAMPLES: sage: A2 = WeylCharacterRing(['A',2]) sage: a2 = WeightRing(A2) sage: wd = prod(a2(x/2)-a2(-x/2) for x in a2.space().positive_roots()); wd a2(-1,1,0) - a2(-1,0,1) - a2(1,-1,0) + a2(1,0,-1) + a2(0,-1,1) - a2(0,1,-1) sage: chi = A2([5,3,0]); chi A2(5,3,0) sage: a2(chi) a2(1,2,5) + 2*a2(1,3,4) + 2*a2(1,4,3) + a2(1,5,2) + a2(2,1,5) + 2*a2(2,2,4) + 3*a2(2,3,3) + 2*a2(2,4,2) + a2(2,5,1) + 2*a2(3,1,4) + 3*a2(3,2,3) + 3*a2(3,3,2) + 2*a2(3,4,1) + a2(3,5,0) + a2(3,0,5) + 2*a2(4,1,3) + 2*a2(4,2,2) + 2*a2(4,3,1) + a2(4,4,0) + a2(4,0,4) + a2(5,1,2) + a2(5,2,1) + a2(5,3,0) + a2(5,0,3) + a2(0,3,5) + a2(0,4,4) + a2(0,5,3) sage: a2(chi)*wd -a2(-1,3,6) + a2(-1,6,3) + a2(3,-1,6) - a2(3,6,-1) - a2(6,-1,3) + a2(6,3,-1) sage: sum((-1)^w.length()*a2([6,3,-1]).weyl_group_action(w) for w in a2.space().weyl_group()) -a2(-1,3,6) + a2(-1,6,3) + a2(3,-1,6) - a2(3,6,-1) - a2(6,-1,3) + a2(6,3,-1) sage: a2(chi)*wd == sum((-1)^w.length()*a2([6,3,-1]).weyl_group_action(w) for w in a2.space().weyl_group()) True class Element(M, x) A class for weight ring elements. cartan_type() Return the Cartan type. EXAMPLES: sage: A2=WeylCharacterRing("A2") sage: a2 = WeightRing(A2) sage: a2([0,1,0]).cartan_type() ['A', 2] character() Assuming that self is invariant under the Weyl group, this will express it as a linear combination of characters. If self is not Weyl group invariant, this method will not terminate.
EXAMPLES: sage: A2 = WeylCharacterRing(['A',2]) sage: a2 = WeightRing(A2) sage: W = a2.space().weyl_group() sage: mu = a2(2,1,0) sage: nu = sum(mu.weyl_group_action(w) for w in W) ; nu a2(1,2,0) + a2(1,0,2) + a2(2,1,0) + a2(2,0,1) + a2(0,1,2) + a2(0,2,1) sage: nu.character() -2*A2(1,1,1) + A2(2,1,0) demazure(w, debug=False) Return the result of applying the Demazure operator $$\partial_w$$ to self. INPUT: • w – a Weyl group element, or its reduced word If $$w = s_i$$ is a simple reflection, the operation $$\partial_w$$ sends the weight $$\lambda$$ (written multiplicatively in the weight ring, so that $$\alpha_i^{-1}$$ denotes the weight $$-\alpha_i$$) to $\frac{\lambda - \alpha_i^{-1}\, s_i \cdot \lambda}{1 - \alpha_i^{-1}}$ where the numerator is divisible by the denominator in the weight ring. This is extended by multiplicativity to all $$w$$ in the Weyl group. EXAMPLES: sage: B2 = WeylCharacterRing("B2",style="coroots") sage: b2=WeightRing(B2) sage: b2(1,0).demazure([1]) b2(1,0) + b2(-1,2) sage: b2(1,0).demazure([2]) b2(1,0) sage: r=b2(1,0).demazure([1,2]); r b2(1,0) + b2(-1,2) sage: r.demazure([1]) b2(1,0) + b2(-1,2) sage: r.demazure([2]) b2(0,0) + b2(1,0) + b2(1,-2) + b2(-1,2) demazure_lusztig(i, v) Return the result of applying the Demazure-Lusztig operator $$T_i$$ to self. INPUT: • i – an element of the index set (or a reduced word or Weyl group element) • v – an element of the base ring If $$R$$ is the parent WeightRing, the Demazure-Lusztig operator $$T_i$$ is the linear map $$R \to R$$ that sends (for a weight $$\lambda$$) $$R(\lambda)$$ to $(R(\alpha_i)-1)^{-1} \bigl(R(\lambda) - R(s_i\lambda) - v(R(\lambda) - R(\alpha_i + s_i \lambda)) \bigr)$ where the numerator is divisible by the denominator in $$R$$. The Demazure-Lusztig operators give a representation of the Iwahori–Hecke algebra associated to the Weyl group. See • Lusztig, Equivariant $$K$$-theory and representations of Hecke algebras, Proc. Amer. Math. Soc. 94 (1985), no. 2, 337-342. • Cherednik, Nonsymmetric Macdonald polynomials. IMRN 10, 483-515 (1995).
In the examples, we confirm the braid and quadratic relations for type $$B_2$$. EXAMPLES: sage: P.<v> = PolynomialRing(QQ) sage: B2 = WeylCharacterRing("B2",style="coroots",base_ring=P); b2 = B2.ambient() sage: def T1(f) : return f.demazure_lusztig(1,v) sage: def T2(f) : return f.demazure_lusztig(2,v) sage: T1(T2(T1(T2(b2(1,-1))))) (v^2-v)*b2(0,-1) + v^2*b2(-1,1) sage: [T1(T1(f))==(v-1)*T1(f)+v*f for f in [b2(0,0), b2(1,0), b2(2,3)]] [True, True, True] sage: [T1(T2(T1(T2(b2(i,j))))) == T2(T1(T2(T1(b2(i,j))))) for i in [-2..2] for j in [-1,1]] [True, True, True, True, True, True, True, True, True, True] Instead of an index $$i$$ one may use a reduced word or Weyl group element: sage: b2(1,0).demazure_lusztig([2,1],v)==T2(T1(b2(1,0))) True sage: W = B2.space().weyl_group(prefix="s") sage: [s1,s2]=W.simple_reflections() sage: b2(1,0).demazure_lusztig(s2*s1,v)==T2(T1(b2(1,0))) True scale(k) Multiplies a weight by $$k$$. The operation is extended by linearity to the weight ring. INPUT: • k – a nonzero integer EXAMPLES: sage: g2 = WeylCharacterRing("G2",style="coroots").ambient() sage: g2(2,3).scale(2) g2(4,6) shift(mu) Add $$\mu$$ to any weight. Extended by linearity to the weight ring. INPUT: • mu – a weight EXAMPLES: sage: g2 = WeylCharacterRing("G2",style="coroots").ambient() sage: [g2(1,2).shift(fw) for fw in g2.fundamental_weights()] [g2(2,2), g2(1,3)] weyl_group_action(w) Return the action of the Weyl group element w on self. EXAMPLES: sage: G2 = WeylCharacterRing(['G',2]) sage: g2 = WeightRing(G2) sage: L = g2.space() sage: [fw1, fw2] = L.fundamental_weights() sage: sum(g2(fw2).weyl_group_action(w) for w in L.weyl_group()) 2*g2(-2,1,1) + 2*g2(-1,-1,2) + 2*g2(-1,2,-1) + 2*g2(1,-2,1) + 2*g2(1,1,-2) + 2*g2(2,-1,-1) WeightRing.cartan_type() Return the Cartan type. EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: WeightRing(A2).cartan_type() ['A', 2] WeightRing.fundamental_weights() Return the fundamental weights. 
EXAMPLES: sage: WeightRing(WeylCharacterRing("G2")).fundamental_weights() Finite family {1: (1, 0, -1), 2: (2, -1, -1)} WeightRing.one_basis() Return the index of $$1$$. EXAMPLES: sage: A3=WeylCharacterRing("A3") sage: WeightRing(A3).one_basis() (0, 0, 0, 0) sage: WeightRing(A3).one() a3(0,0,0,0) WeightRing.parent() Return the parent Weyl character ring. EXAMPLES: sage: A2=WeylCharacterRing("A2") sage: a2=WeightRing(A2) sage: a2.parent() The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients sage: a2.parent() == A2 True WeightRing.positive_roots() Return the positive roots. EXAMPLES: sage: WeightRing(WeylCharacterRing("G2")).positive_roots() [(0, 1, -1), (1, -2, 1), (1, -1, 0), (1, 0, -1), (1, 1, -2), (2, -1, -1)] WeightRing.product_on_basis(a, b) Return the product of basis elements indexed by a and b. EXAMPLES: sage: A2=WeylCharacterRing("A2") sage: a2=WeightRing(A2) sage: a2(1,0,0) * a2(0,1,0) # indirect doctest a2(1,1,0) WeightRing.simple_roots() Return the simple roots. EXAMPLES: sage: WeightRing(WeylCharacterRing("G2")).simple_roots() Finite family {1: (0, 1, -1), 2: (1, -2, 1)} WeightRing.some_elements() Return some elements of self. EXAMPLES: sage: A3=WeylCharacterRing("A3") sage: a3=WeightRing(A3) sage: a3.some_elements() [a3(1,0,0,0), a3(1,1,0,0), a3(1,1,1,0)] WeightRing.space() Return the weight space realization associated to self. EXAMPLES: sage: E8 = WeylCharacterRing(['E',8]) sage: e8 = WeightRing(E8) sage: e8.space() Ambient space of the Root system of type ['E', 8] WeightRing.weyl_character_ring() Return the parent Weyl Character Ring. A synonym for self.parent(). EXAMPLES: sage: A2=WeylCharacterRing("A2") sage: a2=WeightRing(A2) sage: a2.weyl_character_ring() The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients WeightRing.wt_repr(wt) Return a string representing the irreducible character with highest weight vector wt. Uses coroot notation if the associated Weyl character ring is defined with style="coroots". 
EXAMPLES: sage: G2 = WeylCharacterRing("G2") sage: [G2.ambient().wt_repr(x) for x in G2.fundamental_weights()] ['g2(1,0,-1)', 'g2(2,-1,-1)'] sage: G2 = WeylCharacterRing("G2",style="coroots") sage: [G2.ambient().wt_repr(x) for x in G2.fundamental_weights()] ['g2(1,0)', 'g2(0,1)'] class sage.combinat.root_system.weyl_characters.WeylCharacterRing(ct, base_ring=Integer Ring, prefix=None, style='lattice') A class for rings of Weyl characters. Let $$K$$ be a compact Lie group, which we assume is semisimple and simply-connected. Its complexified Lie algebra $$L$$ is the Lie algebra of a complex analytic Lie group $$G$$. The following three categories are equivalent: finite-dimensional representations of $$K$$; finite-dimensional representations of $$L$$; and finite-dimensional analytic representations of $$G$$. In every case, there is a parametrization of the irreducible representations by their highest weight vectors. For this theory of Weyl, see (for example): • Adams, Lectures on Lie groups • Broecker and Tom Dieck, Representations of Compact Lie groups • Bump, Lie Groups • Fulton and Harris, Representation Theory • Goodman and Wallach, Representations and Invariants of the Classical Groups • Hall, Lie Groups, Lie Algebras and Representations • Humphreys, Introduction to Lie Algebras and their representations • Procesi, Lie Groups • Samelson, Notes on Lie Algebras • Varadarajan, Lie groups, Lie algebras, and their representations • Zhelobenko, Compact Lie Groups and their Representations. Computations that you can do with these include computing their weight multiplicities, products (thus decomposing the tensor product of a representation into irreducibles) and branching rules (restriction to a smaller group). There is associated with $$K$$, $$L$$ or $$G$$ as above a lattice, the weight lattice, whose elements (called weights) are characters of a Cartan subgroup or subalgebra. 
There is an action of the Weyl group $$W$$ on the lattice, and elements of a fixed fundamental domain for $$W$$, the positive Weyl chamber, are called dominant. There is for each representation a unique highest dominant weight that occurs with nonzero multiplicity with respect to a certain partial order, and it is called the highest weight vector. EXAMPLES: sage: L = RootSystem("A2").ambient_space() sage: [fw1,fw2] = L.fundamental_weights() sage: R = WeylCharacterRing(['A',2], prefix="R") sage: [R(1),R(fw1),R(fw2)] [R(0,0,0), R(1,0,0), R(1,1,0)] Here R(1), R(fw1), and R(fw2) are irreducible representations with highest weight vectors $$0$$, $$\Lambda_1$$, and $$\Lambda_2$$ respectively (the first two fundamental weights). For type $$A$$ (also $$G_2$$, $$F_4$$, $$E_6$$ and $$E_7$$) we will take as the weight lattice not the weight lattice of the semisimple group, but a larger one. For type $$A$$, this means we are concerned with the representation theory of $$K = U(n)$$ or $$G = GL(n, \CC)$$ rather than $$SU(n)$$ or $$SL(n, \CC)$$. This is useful since the representation theory of $$GL(n)$$ is ubiquitous, and also since we may then represent the fundamental weights (in sage.combinat.root_system.root_system) by vectors with integer entries. If you are only interested in $$SL(3)$$, say, use WeylCharacterRing(['A',2]) as above but be aware that R([a,b,c]) and R([a+1,b+1,c+1]) represent the same character of $$SL(3)$$ since R([1,1,1]) is the determinant. For more information, see the thematic tutorial Lie Methods and Related Combinatorics in Sage, available at: http://www.sagemath.org/doc/thematic_tutorials/lie.html class Element(M, x) A class for Weyl characters. adams_operation(r) Return the $$r$$-th Adams operation of self. INPUT: • r – a positive integer This is a virtual character, whose weights are the weights of self, each multiplied by $$r$$.
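The Adams operation can be illustrated outside Sage: a weight multiset stands in for a character, $$\psi^r$$ multiplies every weight by $$r$$, and evaluating on a diagonal torus element shows $$\psi^r(\chi)(A) = \mathrm{tr}(A^r)$$. A minimal plain-Python sketch; `adams` and `evaluate` are illustrative helper names, not Sage API.

```python
# Sketch (plain Python, not Sage): the r-th Adams operation psi^r acts on
# a character by multiplying every weight by r. On diagonal torus elements
# this substitutes each eigenvalue x -> x**r, so for the character chi of
# the standard representation, psi^r(chi)(A) = trace(A**r).
from collections import Counter

def adams(weights, r):
    """Multiply each weight (a tuple) in a weight multiset by r."""
    return Counter({tuple(r * c for c in w): m for w, m in weights.items()})

def evaluate(weights, xs):
    """Evaluate the character at a diagonal torus element with entries xs."""
    total = 0
    for w, m in weights.items():
        term = 1
        for c, x in zip(w, xs):
            term *= x ** c
        total += m * term
    return total

# Standard representation of GL(2): weights (1,0) and (0,1).
std = Counter({(1, 0): 1, (0, 1): 1})

# psi^3 evaluated at diag(2,3) equals 2**3 + 3**3 = trace of the cube.
print(evaluate(adams(std, 3), (2, 3)))  # 35
```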
EXAMPLES: sage: A2=WeylCharacterRing("A2") A2(2,2,2) - A2(3,2,1) + A2(3,3,0) branch(S, rule='default') Return the restriction of the character to the subalgebra. If no rule is specified, we will try to specify one. INPUT: • S – a Weyl character ring for a Lie subgroup or subalgebra • rule – a branching rule EXAMPLES: sage: B3 = WeylCharacterRing(['B',3]) sage: A2 = WeylCharacterRing(['A',2]) sage: [B3(w).branch(A2,rule="levi") for w in B3.fundamental_weights()] [A2(0,0,0) + A2(1,0,0) + A2(0,0,-1), A2(0,0,0) + A2(1,0,0) + A2(1,1,0) + A2(1,0,-1) + A2(0,-1,-1) + A2(0,0,-1), A2(-1/2,-1/2,-1/2) + A2(1/2,-1/2,-1/2) + A2(1/2,1/2,-1/2) + A2(1/2,1/2,1/2)] cartan_type() Return the Cartan type of self. EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: A2([1,0,0]).cartan_type() ['A', 2] degree() The degree of self, that is, the dimension of the module. EXAMPLES: sage: B3 = WeylCharacterRing(['B',3]) sage: [B3(x).degree() for x in B3.fundamental_weights()] [7, 21, 8] exterior_power(k) Return the $$k$$-th exterior power of self. INPUT: • k – a nonnegative integer The algorithm is based on the identity $$k e_k = \sum_{r=1}^k (-1)^{r-1} p_r e_{k-r}$$ relating the power-sum and elementary symmetric polynomials. Applying this to the eigenvalues of an element of the parent Lie group in the representation self, the $$e_k$$ become exterior powers and the $$p_r$$ become Adams operations, giving an efficient recursive implementation. EXAMPLES: sage: B3=WeylCharacterRing("B3",style="coroots") sage: spin=B3(0,0,1) sage: spin.exterior_power(6) B3(1,0,0) + B3(0,1,0) exterior_square() Return the exterior square of the character.
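For type $$A$$ in ambient coordinates, the degree() computation reduces to the classical product form of the Weyl dimension formula. A plain-Python sketch (`degree_type_A` is an illustrative name, not Sage API); note that the value 42 for highest weight (5,3,0) agrees with the 42 weights, counted with multiplicity, in the expansion of A2(5,3,0) shown earlier.

```python
# Weyl dimension formula, specialized to type A (GL(n)) in ambient
# coordinates: dim = prod_{i<j} (lam_i - lam_j + j - i) / (j - i),
# where lam is a weakly decreasing integer tuple (the highest weight).
from fractions import Fraction

def degree_type_A(lam):
    n = len(lam)
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(lam[i] - lam[j] + j - i, j - i)
    return int(d)

print(degree_type_A((5, 3, 0)))  # 42
print(degree_type_A((2, 1, 0)))  # 8, the adjoint representation of SL(3)
print(degree_type_A((1, 1, 1)))  # 1, a power of the determinant
```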
EXAMPLES: sage: A2 = WeylCharacterRing("A2",style="coroots") sage: A2(1,0).exterior_square() A2(0,1) frobenius_schur_indicator() Return: • $$1$$ if the representation is real (orthogonal) • $$-1$$ if the representation is quaternionic (symplectic) • $$0$$ if the representation is complex (not self dual) The Frobenius-Schur indicator of a character $$\chi$$ of a compact group $$G$$ is the Haar integral over the group of $$\chi(g^2)$$. Its value is 1, -1 or 0. This method computes it for irreducible characters of compact Lie groups by checking whether the symmetric and exterior square characters contain the trivial character. Todo Try to compute this directly without actually calculating the full symmetric and exterior squares. EXAMPLES: sage: B2 = WeylCharacterRing("B2",style="coroots") sage: B2(1,0).frobenius_schur_indicator() 1 sage: B2(0,1).frobenius_schur_indicator() -1 inner_product(other) Compute the inner product with another character. The irreducible characters are an orthonormal basis with respect to the usual inner product of characters, interpreted as functions on a compact Lie group, by Schur orthogonality. INPUT: • other – another character EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: [f1,f2]=A2.fundamental_weights() sage: r1 = A2(f1)*A2(f2); r1 A2(1,1,1) + A2(2,1,0) sage: r2 = A2(f1)^3; r2 A2(1,1,1) + 2*A2(2,1,0) + A2(3,0,0) sage: r1.inner_product(r2) 3 invariant_degree() Return the multiplicity of the trivial representation in self. Multiplicities of other irreducibles may be obtained using multiplicity(). EXAMPLES: sage: A2 = WeylCharacterRing("A2",style="coroots") sage: rep = A2(1,0)^2*A2(0,1)^2; rep 2*A2(0,0) + A2(0,3) + 4*A2(1,1) + A2(3,0) + A2(2,2) sage: rep.invariant_degree() 2 is_irreducible() Return whether self is an irreducible character. 
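The Haar integral defining the Frobenius-Schur indicator becomes a plain average over a finite group, which makes the definition easy to check by hand. A sketch (plain Python, not Sage) using the 2-dimensional irreducible character of the quaternion group $$Q_8$$, which is quaternionic; `fs_indicator` is an illustrative helper name.

```python
# For a finite group the Haar integral of chi(g^2) is an average over the
# group. The 2-dimensional irreducible of Q8 = {+-1, +-i, +-j, +-k} has
# chi(1) = 2, chi(-1) = -2 and chi = 0 elsewhere; the squares are
# (+-1)^2 = 1 and (+-i)^2 = (+-j)^2 = (+-k)^2 = -1.
def fs_indicator(elements, square, chi):
    return sum(chi[square[g]] for g in elements) / len(elements)

elements = ['1', '-1', 'i', '-i', 'j', '-j', 'k', '-k']
square = {'1': '1', '-1': '1', 'i': '-1', '-i': '-1',
          'j': '-1', '-j': '-1', 'k': '-1', '-k': '-1'}
chi = {'1': 2, '-1': -2, 'i': 0, '-i': 0, 'j': 0, '-j': 0, 'k': 0, '-k': 0}

# (2*2 + 6*(-2)) / 8 = -1: quaternionic (symplectic), as for B2(0,1) above.
print(fs_indicator(elements, square, chi))  # -1.0
```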
EXAMPLES: sage: B3 = WeylCharacterRing(['B',3]) sage: [B3(x).is_irreducible() for x in B3.fundamental_weights()] [True, True, True] sage: sum(B3(x) for x in B3.fundamental_weights()).is_irreducible() False multiplicity(other) Return the multiplicity of the irreducible other in self. INPUT: • other – an irreducible character EXAMPLES: sage: B2 = WeylCharacterRing("B2",style="coroots") sage: rep = B2(1,1)^2; rep B2(0,0) + B2(1,0) + 2*B2(0,2) + B2(2,0) + 2*B2(1,2) + B2(0,4) + B2(3,0) + B2(2,2) sage: rep.multiplicity(B2(0,2)) 2 symmetric_power(k) Return the $$k$$-th symmetric power of self. INPUT: • $$k$$ – a nonnegative integer The algorithm is based on the identity $$k h_k = \sum_{r=1}^k p_r h_{k-r}$$ relating the power-sum and complete symmetric polynomials. Applying this to the eigenvalues of an element of the parent Lie group in the representation self, the $$h_k$$ become symmetric powers and the $$p_r$$ become Adams operations, giving an efficient recursive implementation. EXAMPLES: sage: B3=WeylCharacterRing("B3",style="coroots") sage: spin=B3(0,0,1) sage: spin.symmetric_power(6) B3(0,0,0) + B3(0,0,2) + B3(0,0,4) + B3(0,0,6) symmetric_square() Return the symmetric square of the character. EXAMPLES: sage: A2 = WeylCharacterRing("A2",style="coroots") sage: A2(1,0).symmetric_square() A2(2,0) weight_multiplicities() Produce the dictionary of weight multiplicities for the Weyl character self. The character does not have to be irreducible. EXAMPLES: sage: B2=WeylCharacterRing("B2",style="coroots") sage: B2(0,1).weight_multiplicities() {(-1/2, 1/2): 1, (-1/2, -1/2): 1, (1/2, -1/2): 1, (1/2, 1/2): 1} WeylCharacterRing.ambient() Return the weight ring of self. EXAMPLES: sage: WeylCharacterRing("A2").ambient() The Weight ring attached to The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients WeylCharacterRing.base_ring() Return the base ring of self.
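Both exterior_power and symmetric_power rest on Newton's identities, and these can be checked numerically on a sample multiset of eigenvalues, with $$p_r = \sum x^r$$ playing the role of the $$r$$-th Adams operation. A plain-Python sketch; `elementary` and `complete` are illustrative helper names, not Sage API.

```python
# Newton-identity recursions on numeric "eigenvalues":
#   k*e_k = sum_{r=1}^{k} (-1)**(r-1) * p_r * e_{k-r}   (elementary / exterior)
#   k*h_k = sum_{r=1}^{k}              p_r * h_{k-r}   (complete / symmetric)
from fractions import Fraction
from itertools import combinations, combinations_with_replacement
from math import prod

def elementary(xs, k):
    p = lambda r: sum(Fraction(x) ** r for x in xs)
    e = [Fraction(1)]
    for n in range(1, k + 1):
        e.append(sum((-1) ** (r - 1) * p(r) * e[n - r] for r in range(1, n + 1)) / n)
    return e[k]

def complete(xs, k):
    p = lambda r: sum(Fraction(x) ** r for x in xs)
    h = [Fraction(1)]
    for n in range(1, k + 1):
        h.append(sum(p(r) * h[n - r] for r in range(1, n + 1)) / n)
    return h[k]

xs = (2, 3, 5)
# Compare against the direct definitions of e_2 and h_2:
assert elementary(xs, 2) == sum(prod(c) for c in combinations(xs, 2))
assert complete(xs, 2) == sum(prod(c) for c in combinations_with_replacement(xs, 2))
print(elementary(xs, 2), complete(xs, 2))  # 31 69
```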
EXAMPLES: sage: R = WeylCharacterRing(['A',3], base_ring = CC); R.base_ring() Complex Field with 53 bits of precision WeylCharacterRing.cartan_type() Return the Cartan type of self. EXAMPLES: sage: WeylCharacterRing("A2").cartan_type() ['A', 2] WeylCharacterRing.char_from_weights(mdict) Construct a Weyl character from an invariant linear combination of weights. INPUT: • mdict – a dictionary mapping weights to coefficients, and representing a linear combination of weights which shall be invariant under the action of the Weyl group OUTPUT: the corresponding Weyl character EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: v = A2._space([3,1,0]); v (3, 1, 0) sage: d = dict([(x,1) for x in v.orbit()]); d {(3, 0, 1): 1, (1, 0, 3): 1, (0, 1, 3): 1, (1, 3, 0): 1, (3, 1, 0): 1, (0, 3, 1): 1} sage: A2.char_from_weights(d) -A2(2,1,1) - A2(2,2,0) + A2(3,1,0) WeylCharacterRing.demazure_character(hwv, word, debug=False) Compute the Demazure character. INPUT: • hwv – a (usually dominant) weight • word – a Weyl group word Produces the Demazure character with highest weight hwv and word as an element of the weight ring. Only available if style="coroots". The Demazure operators are also available as methods of WeightRing elements, and as methods of crystals. Given a CrystalOfTableaux with given highest weight vector, the Demazure method on the crystal will give the equivalent of this method, except that the Demazure character of the crystal is given as a sum of monomials instead of an element of the WeightRing. EXAMPLES: sage: A2=WeylCharacterRing("A2",style="coroots") sage: h=sum(A2.fundamental_weights()); h (2, 1, 0) sage: A2.demazure_character(h,word=[1,2]) a2(0,0) + a2(-2,1) + a2(2,-1) + a2(1,1) + a2(-1,2) sage: A2.demazure_character((1,1),word=[1,2]) a2(0,0) + a2(-2,1) + a2(2,-1) + a2(1,1) + a2(-1,2) WeylCharacterRing.dot_reduce(a) Auxiliary function for product_on_basis(). Return a pair $$[\epsilon, b]$$ where $$b$$ is a dominant weight and $$\epsilon$$ is 0, 1 or -1. 
To describe $$b$$, let $$w$$ be an element of the Weyl group such that $$w(a + \rho)$$ is dominant. If $$w(a + \rho) - \rho$$ is dominant, then $$\epsilon$$ is the sign of $$w$$ and $$b$$ is $$w(a + \rho) - \rho$$. Otherwise, $$\epsilon$$ is zero. INPUT: • a – a weight EXAMPLES: sage: A2=WeylCharacterRing("A2") sage: weights=A2(2,1,0).weight_multiplicities().keys(); weights [(1, 2, 0), (2, 1, 0), (0, 2, 1), (2, 0, 1), (0, 1, 2), (1, 1, 1), (1, 0, 2)] sage: [A2.dot_reduce(x) for x in weights] [[0, (0, 0, 0)], [1, (2, 1, 0)], [-1, (1, 1, 1)], [0, (0, 0, 0)], [0, (0, 0, 0)], [1, (1, 1, 1)], [-1, (1, 1, 1)]] WeylCharacterRing.dynkin_diagram() Return the Dynkin diagram of self. EXAMPLES: sage: WeylCharacterRing("E7").dynkin_diagram() O 2 | | O---O---O---O---O---O 1 3 4 5 6 7 E7 WeylCharacterRing.extended_dynkin_diagram() Return the extended Dynkin diagram, which is the Dynkin diagram of the corresponding untwisted affine type. EXAMPLES: sage: WeylCharacterRing("E7").extended_dynkin_diagram() O 2 | | O---O---O---O---O---O---O 0 1 3 4 5 6 7 E7~ WeylCharacterRing.fundamental_weights() Return the fundamental weights. EXAMPLES: sage: WeylCharacterRing("G2").fundamental_weights() Finite family {1: (1, 0, -1), 2: (2, -1, -1)} WeylCharacterRing.highest_root() Return the highest_root. EXAMPLES: sage: WeylCharacterRing("G2").highest_root() (2, -1, -1) WeylCharacterRing.irr_repr(hwv) Return a string representing the irreducible character with highest weight vector hwv. EXAMPLES: sage: B3 = WeylCharacterRing("B3") sage: [B3.irr_repr(v) for v in B3.fundamental_weights()] ['B3(1,0,0)', 'B3(1,1,0)', 'B3(1/2,1/2,1/2)'] sage: B3 = WeylCharacterRing("B3", style="coroots") sage: [B3.irr_repr(v) for v in B3.fundamental_weights()] ['B3(1,0,0)', 'B3(0,1,0)', 'B3(0,0,1)'] WeylCharacterRing.lift() The embedding of self into its weight ring. 
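For type $$A$$ in ambient coordinates, dot_reduce amounts to sorting $$a + \rho$$ into dominant (decreasing) order: $$\epsilon$$ is the sign of the sorting permutation, and $$\epsilon = 0$$ exactly when $$a + \rho$$ has a repeated entry. A plain-Python sketch that reproduces the documented A2 outputs; `dot_reduce_A` is an illustrative name, not Sage API.

```python
# Sketch of dot_reduce for type A ambient coordinates (rho = (2,1,0) for
# A2): sort a + rho into decreasing order, tracking the sign of the
# sorting permutation, and return epsilon = 0 when a + rho has a repeated
# entry (it then lies on a wall of the Weyl chamber).
def dot_reduce_A(a, rho=(2, 1, 0)):
    v = [x + r for x, r in zip(a, rho)]
    if len(set(v)) < len(v):
        return [0, tuple(0 for _ in a)]
    sign = 1
    # bubble sort, counting transpositions to get the sign of w
    for i in range(len(v)):
        for j in range(len(v) - 1 - i):
            if v[j] < v[j + 1]:
                v[j], v[j + 1] = v[j + 1], v[j]
                sign = -sign
    return [sign, tuple(x - r for x, r in zip(v, rho))]

weights = [(1, 2, 0), (2, 1, 0), (0, 2, 1), (2, 0, 1),
           (0, 1, 2), (1, 1, 1), (1, 0, 2)]
print([dot_reduce_A(x) for x in weights])
# [[0, (0, 0, 0)], [1, (2, 1, 0)], [-1, (1, 1, 1)], [0, (0, 0, 0)],
#  [0, (0, 0, 0)], [1, (1, 1, 1)], [-1, (1, 1, 1)]]
```

This matches the `A2.dot_reduce` doctest output above term by term.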
EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: A2.lift Generic morphism: From: The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients To: The Weight ring attached to The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients sage: x = -A2(2,1,1) - A2(2,2,0) + A2(3,1,0) sage: A2.lift(x) a2(1,3,0) + a2(1,0,3) + a2(3,1,0) + a2(3,0,1) + a2(0,1,3) + a2(0,3,1) As a shortcut, you may also do: sage: x.lift() a2(1,3,0) + a2(1,0,3) + a2(3,1,0) + a2(3,0,1) + a2(0,1,3) + a2(0,3,1) Or even: sage: a2 = WeightRing(A2) sage: a2(x) a2(1,3,0) + a2(1,0,3) + a2(3,1,0) + a2(3,0,1) + a2(0,1,3) + a2(0,3,1) WeylCharacterRing.lift_on_basis(irr) Expand the basis element indexed by the weight irr into the weight ring of self. INPUT: • irr – a dominant weight This is used to implement lift(). EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: v = A2._space([2,1,0]); v (2, 1, 0) sage: A2.lift_on_basis(v) 2*a2(1,1,1) + a2(1,2,0) + a2(1,0,2) + a2(2,1,0) + a2(2,0,1) + a2(0,1,2) + a2(0,2,1) This is consistent with the analogous calculation with Schur symmetric functions: sage: s = SymmetricFunctions(QQ).s() sage: s[2,1].expand(3) x0^2*x1 + x0*x1^2 + x0^2*x2 + 2*x0*x1*x2 + x1^2*x2 + x0*x2^2 + x1*x2^2 WeylCharacterRing.one_basis() Return the index of 1 in self. EXAMPLES: sage: WeylCharacterRing("A3").one_basis() (0, 0, 0, 0) sage: WeylCharacterRing("A3").one() A3(0,0,0,0) WeylCharacterRing.positive_roots() Return the positive roots. EXAMPLES: sage: WeylCharacterRing("G2").positive_roots() [(0, 1, -1), (1, -2, 1), (1, -1, 0), (1, 0, -1), (1, 1, -2), (2, -1, -1)] WeylCharacterRing.product_on_basis(a, b) Compute the tensor product of two irreducible representations a and b. EXAMPLES: sage: D4 = WeylCharacterRing(['D',4]) sage: spin_plus = D4(1/2,1/2,1/2,1/2) sage: spin_minus = D4(1/2,1/2,1/2,-1/2) sage: spin_plus * spin_minus # indirect doctest D4(1,0,0,0) + D4(1,1,1,0) sage: spin_minus * spin_plus D4(1,0,0,0) + D4(1,1,1,0) Uses the Brauer-Klimyk method.
WeylCharacterRing.rank() Return the rank. EXAMPLES: sage: WeylCharacterRing("G2").rank() 2 WeylCharacterRing.retract() The partial inverse map from the weight ring into self. EXAMPLES: sage: A2 = WeylCharacterRing("A2") sage: a2 = WeightRing(A2) sage: A2.retract Generic morphism: From: The Weight ring attached to The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients To: The Weyl Character Ring of Type ['A', 2] with Integer Ring coefficients sage: v = A2._space([3,1,0]); v (3, 1, 0) sage: chi = a2.sum_of_monomials(v.orbit()); chi a2(1,3,0) + a2(1,0,3) + a2(3,1,0) + a2(3,0,1) + a2(0,1,3) + a2(0,3,1) sage: A2.retract(chi) -A2(2,1,1) - A2(2,2,0) + A2(3,1,0) The input should be invariant: sage: A2.retract(a2.monomial(v)) Traceback (most recent call last): ... ValueError: multiplicity dictionary may not be Weyl group invariant As a shortcut, you may use conversion: sage: A2(chi) -A2(2,1,1) - A2(2,2,0) + A2(3,1,0) sage: A2(a2.monomial(v)) Traceback (most recent call last): ... ValueError: multiplicity dictionary may not be Weyl group invariant WeylCharacterRing.simple_coroots() Return the simple coroots. EXAMPLES: sage: WeylCharacterRing("G2").simple_coroots() Finite family {1: (0, 1, -1), 2: (1/3, -2/3, 1/3)} WeylCharacterRing.simple_roots() Return the simple roots. EXAMPLES: sage: WeylCharacterRing("G2").simple_roots() Finite family {1: (0, 1, -1), 2: (1, -2, 1)} WeylCharacterRing.some_elements() Return some elements of self. EXAMPLES: sage: WeylCharacterRing("A3").some_elements() [A3(1,0,0,0), A3(1,1,0,0), A3(1,1,1,0)] WeylCharacterRing.space() Return the weight space associated to self. EXAMPLES: sage: WeylCharacterRing(['E',8]).space() Ambient space of the Root system of type ['E', 8] sage.combinat.root_system.weyl_characters.branch_weyl_character(chi, R, S, rule='default') A branching rule describes the restriction of representations from a Lie group or algebra $$G$$ to a smaller one $$H$$. See for example, R. C. 
King, Branching rules for classical Lie groups using tensor and spinor methods. J. Phys. A 8 (1975), 429-449, Howe, Tan and Willenbring, Stable branching rules for classical symmetric pairs, Trans. Amer. Math. Soc. 357 (2005), no. 4, 1601-1626, McKay and Patera, Tables of Dimensions, Indices and Branching Rules for Representations of Simple Lie Algebras (Marcel Dekker, 1981), and Fauser, Jarvis, King and Wybourne, New branching rules induced by plethysm. J. Phys. A 39 (2006), no. 11, 2611-2655. INPUT: • chi – a character of $$G$$ • R – the Weyl Character Ring of $$G$$ • S – the Weyl Character Ring of $$H$$ • rule – a set of $$r$$ dominant weights in $$H$$ where $$r$$ is the rank of $$G$$ or one of the following: • "levi" • "automorphic" • "symmetric" • "extended" • "orthogonal_sum" • "tensor" • "triality" • "miscellaneous" The use of the various inputs to rule will be explained next. After the examples we will explain how to write your own branching rules for cases that we have omitted. To explain the predefined rules, we survey the most important branching rules. These may be classified into several cases, and once this is understood, the detailed classification can be read off from the Dynkin diagrams. Dynkin classified the maximal subgroups of Lie groups in Mat. Sbornik N.S. 30(72):349-462 (1952). We will give predefined rules that cover most cases where the branching rule is to a maximal subgroup. For convenience, we also give some branching rules to subgroups that are not maximal. For example, a Levi subgroup may or may not be maximal. You may try omitting the rule if it is "obvious". Default rules are provided for the following cases: \begin{split}\begin{aligned} A_{2s} & \to B_s, \\ A_{2s-1} & \to C_s, \\ A_{2s-1} & \to D_s. \end{aligned}\end{split} The above default rules correspond to embedding the group $$SO(2s+1)$$, $$Sp(2s)$$ or $$SO(2s)$$ into the corresponding general or special linear group by the standard representation.
Default rules are also specified for the following cases: \begin{split}\begin{aligned} B_{s+1} & \to D_s, \\ D_s & \to B_s. \end{aligned}\end{split} These correspond to the embedding of $$O(n)$$ into $$O(n+1)$$ where $$n = 2s$$ or $$2s + 1$$. Finally, the branching rule for the embedding of a Levi subgroup is also implemented as a default rule. EXAMPLES: sage: A1 = WeylCharacterRing("A1", style="coroots") sage: A2 = WeylCharacterRing("A2", style="coroots") sage: D4 = WeylCharacterRing("D4", style="coroots") sage: B3 = WeylCharacterRing("B3", style="coroots") sage: B4 = WeylCharacterRing("B4", style="coroots") sage: A6 = WeylCharacterRing("A6", style="coroots") sage: A7 = WeylCharacterRing("A7", style="coroots") sage: def try_default_rule(R,S): return [R(f).branch(S) for f in R.fundamental_weights()] sage: try_default_rule(A2,A1) [A1(0) + A1(1), A1(0) + A1(1)] sage: try_default_rule(D4,B3) [B3(0,0,0) + B3(1,0,0), B3(1,0,0) + B3(0,1,0), B3(0,0,1), B3(0,0,1)] sage: try_default_rule(B4,D4) [D4(0,0,0,0) + D4(1,0,0,0), D4(1,0,0,0) + D4(0,1,0,0), D4(0,1,0,0) + D4(0,0,1,1), D4(0,0,1,0) + D4(0,0,0,1)] sage: try_default_rule(A7,D4) [D4(1,0,0,0), D4(0,1,0,0), D4(0,0,1,1), D4(0,0,2,0) + D4(0,0,0,2), D4(0,0,1,1), D4(0,1,0,0), D4(1,0,0,0)] sage: try_default_rule(A6,B3) [B3(1,0,0), B3(0,1,0), B3(0,0,2), B3(0,0,2), B3(0,1,0), B3(1,0,0)] If a default rule is not known, you may cue Sage as to what the Lie group embedding is by supplying a rule from the list of predefined rules. We will treat these next. Levi Type These can be read off from the Dynkin diagram. If removing a node from the Dynkin diagram produces another Dynkin diagram, there is a branching rule. Currently we require that the smaller diagram be connected. 
For these rules use the option rule="levi": \begin{split}\begin{aligned} A_r & \to A_{r-1} \\ B_r & \to A_{r-1} \\ B_r & \to B_{r-1} \\ C_r & \to A_{r-1} \\ C_r & \to C_{r-1} \\ D_r & \to A_{r-1} \\ D_r & \to D_{r-1} \\ E_r & \to A_{r-1} \quad r = 7,8 \\ E_r & \to D_{r-1} \quad r = 6,7,8 \\ E_r & \to E_{r-1} \\ F_4 & \to B_3 \\ F_4 & \to C_3 \\ G_2 & \to A_1 \text{(short root)} \end{aligned}\end{split} Not all Levi subgroups are maximal subgroups. If the Levi is not maximal there may or may not be a preprogrammed rule="levi" for it. If there is not, the branching rule may still be obtained by going through an intermediate subgroup that is maximal using rule="extended". Thus the other Levi branching rule from $$G_2 \to A_1$$ corresponding to the long root is available by first branching $$G_2 \to A_2$$ then $$A_2 \to A_1$$. Similarly the branching rules to the Levi subgroup: $E_r \to A_{r-1} \quad r = 6,7,8$ may be obtained by first branching $$E_6 \to A_5 \times A_1$$, $$E_7 \to A_7$$ or $$E_8 \to A_8$$. EXAMPLES: sage: A1 = WeylCharacterRing("A1") sage: A2 = WeylCharacterRing("A2") sage: A3 = WeylCharacterRing("A3") sage: A4 = WeylCharacterRing("A4") sage: A5 = WeylCharacterRing("A5") sage: B2 = WeylCharacterRing("B2") sage: B3 = WeylCharacterRing("B3") sage: B4 = WeylCharacterRing("B4") sage: C2 = WeylCharacterRing("C2") sage: C3 = WeylCharacterRing("C3") sage: D3 = WeylCharacterRing("D3") sage: D4 = WeylCharacterRing("D4") sage: D5 = WeylCharacterRing("D5") sage: G2 = WeylCharacterRing("G2") sage: F4 = WeylCharacterRing("F4",style="coroots") sage: E6=WeylCharacterRing("E6",style="coroots") sage: D5=WeylCharacterRing("D5",style="coroots") sage: [B3(w).branch(A2,rule="levi") for w in B3.fundamental_weights()] [A2(0,0,0) + A2(1,0,0) + A2(0,0,-1), A2(0,0,0) + A2(1,0,0) + A2(1,1,0) + A2(1,0,-1) + A2(0,-1,-1) + A2(0,0,-1), A2(-1/2,-1/2,-1/2) + A2(1/2,-1/2,-1/2) + A2(1/2,1/2,-1/2) + A2(1/2,1/2,1/2)] The last example must be understood as follows.
The representation of $$B_3$$ being branched is spin, which is not a representation of $$SO(7)$$ but of its double cover $$\mathrm{spin}(7)$$. The group $$A_2$$ is really GL(3) and the double cover of $$SO(7)$$ induces a cover of $$GL(3)$$ that is trivial over $$SL(3)$$ but not over the center of $$GL(3)$$. The weight lattice for this $$GL(3)$$ consists of triples $$(a,b,c)$$ of half integers such that $$a - b$$ and $$b - c$$ are in $$\ZZ$$, and this is reflected in the last decomposition. sage: [C3(w).branch(A2,rule="levi") for w in C3.fundamental_weights()] [A2(1,0,0) + A2(0,0,-1), A2(1,1,0) + A2(1,0,-1) + A2(0,-1,-1), A2(-1,-1,-1) + A2(1,-1,-1) + A2(1,1,-1) + A2(1,1,1)] sage: [D4(w).branch(A3,rule="levi") for w in D4.fundamental_weights()] [A3(1,0,0,0) + A3(0,0,0,-1), A3(0,0,0,0) + A3(1,1,0,0) + A3(1,0,0,-1) + A3(0,0,-1,-1), A3(1/2,-1/2,-1/2,-1/2) + A3(1/2,1/2,1/2,-1/2), A3(-1/2,-1/2,-1/2,-1/2) + A3(1/2,1/2,-1/2,-1/2) + A3(1/2,1/2,1/2,1/2)] sage: [B3(w).branch(B2,rule="levi") for w in B3.fundamental_weights()] [2*B2(0,0) + B2(1,0), B2(0,0) + 2*B2(1,0) + B2(1,1), 2*B2(1/2,1/2)] sage: C3 = WeylCharacterRing(['C',3]) sage: [C3(w).branch(C2,rule="levi") for w in C3.fundamental_weights()] [2*C2(0,0) + C2(1,0), C2(0,0) + 2*C2(1,0) + C2(1,1), C2(1,0) + 2*C2(1,1)] sage: [D5(w).branch(D4,rule="levi") for w in D5.fundamental_weights()] [2*D4(0,0,0,0) + D4(1,0,0,0), D4(0,0,0,0) + 2*D4(1,0,0,0) + D4(1,1,0,0), D4(1,0,0,0) + 2*D4(1,1,0,0) + D4(1,1,1,0), D4(1/2,1/2,1/2,-1/2) + D4(1/2,1/2,1/2,1/2), D4(1/2,1/2,1/2,-1/2) + D4(1/2,1/2,1/2,1/2)] sage: G2(1,0,-1).branch(A1,rule="levi") A1(1,0) + A1(1,-1) + A1(0,-1) sage: E6=WeylCharacterRing("E6",style="coroots") sage: D5=WeylCharacterRing("D5",style="coroots") sage: fw = E6.fundamental_weights() sage: [E6(fw[i]).branch(D5,rule="levi") for i in [1,2,6]] # long time (3s) [D5(0,0,0,0,0) + D5(0,0,0,0,1) + D5(1,0,0,0,0), D5(0,0,0,0,0) + D5(0,0,0,1,0) + D5(0,0,0,0,1) + D5(0,1,0,0,0), D5(0,0,0,0,0) + D5(0,0,0,1,0) + D5(1,0,0,0,0)] sage: 
E7=WeylCharacterRing("E7",style="coroots") sage: D6=WeylCharacterRing("D6",style="coroots") sage: fw = E7.fundamental_weights() sage: [E7(fw[i]).branch(D6,rule="levi") for i in [1,2,7]] # long time (26s) [3*D6(0,0,0,0,0,0) + 2*D6(0,0,0,0,1,0) + D6(0,1,0,0,0,0), 3*D6(0,0,0,0,0,1) + 2*D6(1,0,0,0,0,0) + 2*D6(0,0,1,0,0,0) + D6(1,0,0,0,1,0), D6(0,0,0,0,0,1) + 2*D6(1,0,0,0,0,0)] sage: D7=WeylCharacterRing("D7",style="coroots") sage: E8=WeylCharacterRing("E8",style="coroots") sage: D7=WeylCharacterRing("D7",style="coroots") sage: E8(1,0,0,0,0,0,0,0).branch(D7,rule="levi") # not tested (very long time) (121s) 3*D7(0,0,0,0,0,0,0) + 2*D7(0,0,0,0,0,1,0) + 2*D7(0,0,0,0,0,0,1) + 2*D7(1,0,0,0,0,0,0) + D7(0,1,0,0,0,0,0) + 2*D7(0,0,1,0,0,0,0) + D7(0,0,0,1,0,0,0) + D7(1,0,0,0,0,1,0) + D7(1,0,0,0,0,0,1) + D7(2,0,0,0,0,0,0) sage: E8(0,0,0,0,0,0,0,1).branch(D7,rule="levi") # long time (3s) D7(0,0,0,0,0,0,0) + D7(0,0,0,0,0,1,0) + D7(0,0,0,0,0,0,1) + 2*D7(1,0,0,0,0,0,0) + D7(0,1,0,0,0,0,0) sage: [F4(fw).branch(B3,rule="levi") for fw in F4.fundamental_weights()] # long time (36s) [B3(0,0,0) + 2*B3(1/2,1/2,1/2) + 2*B3(1,0,0) + B3(1,1,0), B3(0,0,0) + 6*B3(1/2,1/2,1/2) + 5*B3(1,0,0) + 7*B3(1,1,0) + 3*B3(1,1,1) + 6*B3(3/2,1/2,1/2) + 2*B3(3/2,3/2,1/2) + B3(2,0,0) + 2*B3(2,1,0) + B3(2,1,1), 3*B3(0,0,0) + 6*B3(1/2,1/2,1/2) + 4*B3(1,0,0) + 3*B3(1,1,0) + B3(1,1,1) + 2*B3(3/2,1/2,1/2), 3*B3(0,0,0) + 2*B3(1/2,1/2,1/2) + B3(1,0,0)] sage: [F4(fw).branch(C3,rule="levi") for fw in F4.fundamental_weights()] # long time (6s) [3*C3(0,0,0) + 2*C3(1,1,1) + C3(2,0,0), 3*C3(0,0,0) + 6*C3(1,1,1) + 4*C3(2,0,0) + 2*C3(2,1,0) + 3*C3(2,2,0) + C3(2,2,2) + C3(3,1,0) + 2*C3(3,1,1), 2*C3(1,0,0) + 3*C3(1,1,0) + C3(2,0,0) + 2*C3(2,1,0) + C3(2,1,1), 2*C3(1,0,0) + C3(1,1,0)] sage: A1xA1 = WeylCharacterRing("A1xA1") sage: [A3(hwv).branch(A1xA1,rule="levi") for hwv in A3.fundamental_weights()] [A1xA1(1,0,0,0) + A1xA1(0,0,1,0), A1xA1(1,1,0,0) + A1xA1(1,0,1,0) + A1xA1(0,0,1,1), A1xA1(1,1,1,0) + A1xA1(1,0,1,1)] sage: 
A1xB1=WeylCharacterRing("A1xB1",style="coroots") sage: [B3(x).branch(A1xB1,rule="levi") for x in B3.fundamental_weights()] [2*A1xB1(1,0) + A1xB1(0,2), 3*A1xB1(0,0) + 2*A1xB1(1,2) + A1xB1(2,0) + A1xB1(0,2), A1xB1(1,1) + 2*A1xB1(0,1)] Automorphic Type If the Dynkin diagram has a symmetry, then there is an automorphism that is a special case of a branching rule. There is also an exotic “triality” automorphism of $$D_4$$ having order 3. Use rule="automorphic" (or for $$D_4$$ rule="triality"): \begin{split}\begin{aligned} A_r & \to A_r \\ D_r & \to D_r \\ E_6 & \to E_6 \end{aligned}\end{split} EXAMPLES: sage: [A3(chi).branch(A3,rule="automorphic") for chi in A3.fundamental_weights()] [A3(0,0,0,-1), A3(0,0,-1,-1), A3(0,-1,-1,-1)] sage: [D4(chi).branch(D4,rule="automorphic") for chi in D4.fundamental_weights()] [D4(1,0,0,0), D4(1,1,0,0), D4(1/2,1/2,1/2,1/2), D4(1/2,1/2,1/2,-1/2)] Here is an example with $$D_4$$ triality: sage: [D4(chi).branch(D4,rule="triality") for chi in D4.fundamental_weights()] [D4(1/2,1/2,1/2,-1/2), D4(1,1,0,0), D4(1/2,1/2,1/2,1/2), D4(1,0,0,0)] Symmetric Type Related to the automorphic type, when $$G$$ admits an outer automorphism (usually of degree 2) we may consider the branching rule to the isotropy subgroup $$H$$. In many cases the Dynkin diagram of $$H$$ can be obtained by folding the Dynkin diagram of $$G$$. For such isotropy subgroups use rule="symmetric". The last branching rule, $$D_4 \to G_2$$ is not to a maximal subgroup since $$D_4 \to B_3 \to G_2$$, but it is included for convenience. 
\begin{split}\begin{aligned} A_{2r} & \to B_r \\ A_{2r-1} & \to C_r \\ A_{2r-1} & \to D_r \\ D_r & \to B_{r-1} \\ E_6 & \to F_4 \\ D_4 & \to G_2 \end{aligned}\end{split} EXAMPLES: sage: [w.branch(B2,rule="symmetric") for w in [A4(1,0,0,0,0),A4(1,1,0,0,0),A4(1,1,1,0,0),A4(2,0,0,0,0)]] [B2(1,0), B2(1,1), B2(1,1), B2(0,0) + B2(2,0)] sage: [A5(w).branch(C3,rule="symmetric") for w in A5.fundamental_weights()] [C3(1,0,0), C3(0,0,0) + C3(1,1,0), C3(1,0,0) + C3(1,1,1), C3(0,0,0) + C3(1,1,0), C3(1,0,0)] sage: [A5(w).branch(D3,rule="symmetric") for w in A5.fundamental_weights()] [D3(1,0,0), D3(1,1,0), D3(1,1,-1) + D3(1,1,1), D3(1,1,0), D3(1,0,0)] sage: [D4(x).branch(B3,rule="symmetric") for x in D4.fundamental_weights()] [B3(0,0,0) + B3(1,0,0), B3(1,0,0) + B3(1,1,0), B3(1/2,1/2,1/2), B3(1/2,1/2,1/2)] sage: [D4(x).branch(G2,rule="symmetric") for x in D4.fundamental_weights()] [G2(0,0,0) + G2(1,0,-1), 2*G2(1,0,-1) + G2(2,-1,-1), G2(0,0,0) + G2(1,0,-1), G2(0,0,0) + G2(1,0,-1)] sage: [E6(fw).branch(F4,rule="symmetric") for fw in E6.fundamental_weights()] # long time (36s) [F4(0,0,0,0) + F4(0,0,0,1), F4(0,0,0,1) + F4(1,0,0,0), F4(0,0,0,1) + F4(1,0,0,0) + F4(0,0,1,0), F4(1,0,0,0) + 2*F4(0,0,1,0) + F4(1,0,0,1) + F4(0,1,0,0), F4(0,0,0,1) + F4(1,0,0,0) + F4(0,0,1,0), F4(0,0,0,0) + F4(0,0,0,1)] Extended Type If removing a node from the extended Dynkin diagram results in a Dynkin diagram, then there is a branching rule. Use rule="extended" for these. We will also use this classification for some rules that are not of this type, mainly involving type $$B$$, such as $$D_6 \to B_3 \times B_3$$. Here is the extended Dynkin diagram for $$D_6$$:

    0 O       O 6
      |       |
      |       |
  O---O---O---O---O
  1   2   3   4   5

Removing the node 3 results in an embedding $$D_3 \times D_3 \to D_6$$. This corresponds to the embedding $$SO(6) \times SO(6) \to SO(12)$$, and is of extended type. On the other hand the embedding $$SO(5) \times SO(7) \to SO(12)$$ (e.g.
$$B_2 \times B_3 \to D_6$$) cannot be explained this way but for uniformity is implemented under rule="extended". The following rules are implemented as special cases of rule="extended": \begin{split}\begin{aligned} E_6 & \to A_5 \times A_1, A_2 \times A_2 \times A_2 \\ E_7 & \to A_7, D_6 \times A_1, A_3 \times A_3 \times A_1 \\ E_8 & \to A_8, D_8, E_7 \times A_1, A_4 \times A_4, D_5 \times A_3, E_6 \times A_2 \\ F_4 & \to B_4, C_3 \times A_1, A_2 \times A_2, A_3 \times A_1 \\ G_2 & \to A_1 \times A_1 \end{aligned}\end{split} Note that $$E_8$$ has only a limited number of representations of reasonably low degree. EXAMPLES: sage: [B3(x).branch(D3,rule="extended") for x in B3.fundamental_weights()] [D3(0,0,0) + D3(1,0,0), D3(1,0,0) + D3(1,1,0), D3(1/2,1/2,-1/2) + D3(1/2,1/2,1/2)] sage: [G2(w).branch(A2, rule="extended") for w in G2.fundamental_weights()] [A2(0,0,0) + A2(1/3,1/3,-2/3) + A2(2/3,-1/3,-1/3), A2(1/3,1/3,-2/3) + A2(2/3,-1/3,-1/3) + A2(1,0,-1)] sage: [F4(fw).branch(B4,rule="extended") for fw in F4.fundamental_weights()] # long time (9s) [B4(1/2,1/2,1/2,1/2) + B4(1,1,0,0), B4(1,1,0,0) + B4(1,1,1,0) + B4(3/2,1/2,1/2,1/2) + B4(3/2,3/2,1/2,1/2) + B4(2,1,1,0), B4(1/2,1/2,1/2,1/2) + B4(1,0,0,0) + B4(1,1,0,0) + B4(1,1,1,0) + B4(3/2,1/2,1/2,1/2), B4(0,0,0,0) + B4(1/2,1/2,1/2,1/2) + B4(1,0,0,0)] sage: E6 = WeylCharacterRing("E6", style="coroots") sage: A2xA2xA2=WeylCharacterRing("A2xA2xA2",style="coroots") sage: A5xA1=WeylCharacterRing("A5xA1",style="coroots") sage: G2 = WeylCharacterRing("G2", style="coroots") sage: A1xA1 = WeylCharacterRing("A1xA1", style="coroots") sage: F4 = WeylCharacterRing("F4",style="coroots") sage: A3xA1 = WeylCharacterRing("A3xA1", style="coroots") sage: A2xA2 = WeylCharacterRing("A2xA2", style="coroots") sage: A1xC3 = WeylCharacterRing("A1xC3",style="coroots") sage: E6(1,0,0,0,0,0).branch(A5xA1,rule="extended") # (0.7s) A5xA1(0,0,0,1,0,0) + A5xA1(1,0,0,0,0,1) sage: E6(1,0,0,0,0,0).branch(A2xA2xA2, rule="extended") # (0.7s)
A2xA2xA2(0,1,1,0,0,0) + A2xA2xA2(1,0,0,0,0,1) + A2xA2xA2(0,0,0,1,1,0) sage: E7=WeylCharacterRing("E7",style="coroots") sage: A7=WeylCharacterRing("A7",style="coroots") sage: E7(1,0,0,0,0,0,0).branch(A7,rule="extended") # long time (5s) A7(0,0,0,1,0,0,0) + A7(1,0,0,0,0,0,1) sage: E8=WeylCharacterRing("E8",style="coroots") sage: D8=WeylCharacterRing("D8",style="coroots") sage: E8(0,0,0,0,0,0,0,1).branch(D8,rule="extended") # long time (19s) D8(0,0,0,0,0,0,1,0) + D8(0,1,0,0,0,0,0,0) sage: F4(1,0,0,0).branch(A1xC3,rule="extended") # (0.7s) A1xC3(1,0,0,1) + A1xC3(2,0,0,0) + A1xC3(0,2,0,0) sage: G2(0,1).branch(A1xA1, rule="extended") A1xA1(2,0) + A1xA1(3,1) + A1xA1(0,2) sage: F4(0,0,0,1).branch(A2xA2, rule="extended") # (0.4s) A2xA2(0,1,0,1) + A2xA2(1,0,1,0) + A2xA2(0,0,1,1) sage: F4(0,0,0,1).branch(A3xA1,rule="extended") # (0.34s) A3xA1(0,0,0,0) + A3xA1(0,0,1,1) + A3xA1(0,1,0,0) + A3xA1(1,0,0,1) + A3xA1(0,0,0,2) sage: D4=WeylCharacterRing("D4",style="coroots") sage: D2xD2=WeylCharacterRing("D2xD2",style="coroots") # We get D4 => A1xA1xA1xA1 by remembering that A1xA1 = D2. sage: [D4(fw).branch(D2xD2, rule="extended") for fw in D4.fundamental_weights()] [D2xD2(1,1,0,0) + D2xD2(0,0,1,1), D2xD2(2,0,0,0) + D2xD2(0,2,0,0) + D2xD2(1,1,1,1) + D2xD2(0,0,2,0) + D2xD2(0,0,0,2), D2xD2(1,0,0,1) + D2xD2(0,1,1,0), D2xD2(1,0,1,0) + D2xD2(0,1,0,1)] Orthogonal Sum Using rule="orthogonal_sum", for $$n = a + b + c + \cdots$$, you can get any branching rule \begin{split}\begin{aligned} SO(n) & \to SO(a) \times SO(b) \times SO(c) \times \cdots, \\ Sp(2n) & \to Sp(2a) \times Sp(2b) \times Sp(2c) \times \cdots, \end{aligned}\end{split} where $$O(a)$$ is type $$D_r$$ for $$a = 2r$$ or $$B_r$$ for $$a = 2r+1$$ and $$Sp(2r)$$ is type $$C_r$$. In some cases these are also of extended type, as in the case $$D_3 \times D_3 \to D_6$$ discussed above. But in other cases, for example $$B_3 \times B_3 \to D_7$$, they are not of extended type.
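The type bookkeeping in this rule ($$SO(a)$$ is type $$D_{a/2}$$ when $$a$$ is even and type $$B_{(a-1)/2}$$ when $$a$$ is odd) can be sketched as a small helper. This is a standalone Python illustration, not part of the Sage API:

```python
def orthogonal_factor_type(a):
    """Cartan type of SO(a): D_{a//2} for even a, B_{(a-1)//2} for odd a."""
    return ("D", a // 2) if a % 2 == 0 else ("B", (a - 1) // 2)

# SO(6) x SO(6) -> SO(12) is D3 x D3 -> D6 (also of extended type)
print(orthogonal_factor_type(6))   # ('D', 3)
# SO(7) x SO(7) -> SO(14) is B3 x B3 -> D7 (not of extended type)
print(orthogonal_factor_type(7))   # ('B', 3)
```

For instance, the two cases discussed above, $$D_3 \times D_3 \to D_6$$ and $$B_3 \times B_3 \to D_7$$, come out of this bookkeeping directly.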
Tensor There are branching rules: \begin{split}\begin{aligned} A_{rs-1} & \to A_{r-1} \times A_{s-1}, \\ B_{2rs+r+s} & \to B_r \times B_s, \\ D_{2rs+s} & \to B_r \times D_s, \\ D_{2rs} & \to D_r \times D_s, \\ D_{2rs} & \to C_r \times C_s, \\ C_{2rs+s} & \to B_r \times C_s, \\ C_{2rs} & \to C_r \times D_s. \end{aligned}\end{split} corresponding to the tensor product homomorphism. For type $$A$$, the homomorphism is $$GL(r) \times GL(s) \to GL(rs)$$. For the classical types, the relevant fact is that if $$V, W$$ are orthogonal or symplectic spaces, that is, spaces endowed with symmetric or skew-symmetric bilinear forms, then $$V \otimes W$$ is also an orthogonal space (if $$V$$ and $$W$$ are both orthogonal or both symplectic) or symplectic (if one of $$V$$ and $$W$$ is orthogonal and the other symplectic). The corresponding branching rules are obtained using rule="tensor". EXAMPLES: sage: A5=WeylCharacterRing("A5", style="coroots") sage: A2xA1=WeylCharacterRing("A2xA1", style="coroots") sage: [A5(hwv).branch(A2xA1, rule="tensor") for hwv in A5.fundamental_weights()] [A2xA1(1,0,1), A2xA1(0,1,2) + A2xA1(2,0,0), A2xA1(1,1,1) + A2xA1(0,0,3), A2xA1(1,0,2) + A2xA1(0,2,0), A2xA1(0,1,1)] sage: B4=WeylCharacterRing("B4",style="coroots") sage: B1xB1=WeylCharacterRing("B1xB1",style="coroots") sage: [B4(f).branch(B1xB1,rule="tensor") for f in B4.fundamental_weights()] [B1xB1(2,2), B1xB1(2,0) + B1xB1(2,4) + B1xB1(4,2) + B1xB1(0,2), B1xB1(2,0) + B1xB1(2,2) + B1xB1(2,4) + B1xB1(4,2) + B1xB1(4,4) + B1xB1(6,0) + B1xB1(0,2) + B1xB1(0,6), B1xB1(1,3) + B1xB1(3,1)] sage: D4=WeylCharacterRing("D4",style="coroots") sage: C2xC1=WeylCharacterRing("C2xC1",style="coroots") sage: [D4(f).branch(C2xC1,rule="tensor") for f in D4.fundamental_weights()] [C2xC1(1,0,1), C2xC1(0,1,2) + C2xC1(2,0,0) + C2xC1(0,0,2), C2xC1(1,0,1), C2xC1(0,1,0) + C2xC1(0,0,2)] sage: C3=WeylCharacterRing("C3",style="coroots") sage: B1xC1=WeylCharacterRing("B1xC1",style="coroots") sage: [C3(f).branch(B1xC1,rule="tensor") 
for f in C3.fundamental_weights()] [B1xC1(2,1), B1xC1(2,2) + B1xC1(4,0), B1xC1(4,1) + B1xC1(0,3)] Symmetric Power The $$k$$-th symmetric and exterior power homomorphisms map $$GL(n)$$ to $$GL\left(\binom{n+k-1}{k}\right)$$ and $$GL\left(\binom{n}{k}\right)$$ respectively. The corresponding branching rules are not implemented, but a special case is. The $$k$$-th symmetric power homomorphism $$SL(2) \to GL(k+1)$$ has its image inside of $$SO(2r+1)$$ if $$k = 2r$$ and inside of $$Sp(2r)$$ if $$k = 2r - 1$$. Hence there are branching rules: \begin{split}\begin{aligned} B_r & \to A_1 \\ C_r & \to A_1 \end{aligned}\end{split} and these may be obtained using rule="symmetric_power". EXAMPLES: sage: A1=WeylCharacterRing("A1",style="coroots") sage: B3=WeylCharacterRing("B3",style="coroots") sage: C3=WeylCharacterRing("C3",style="coroots") sage: [B3(fw).branch(A1,rule="symmetric_power") for fw in B3.fundamental_weights()] [A1(6), A1(2) + A1(6) + A1(10), A1(0) + A1(6)] sage: [C3(fw).branch(A1,rule="symmetric_power") for fw in C3.fundamental_weights()] [A1(5), A1(4) + A1(8), A1(3) + A1(9)] Miscellaneous Use rule="miscellaneous" for the following rules: \begin{split}\begin{aligned} B_3 & \to G_2, \\ F_4 & \to G_2 \times A_1 \text{(not implemented yet)}. \end{aligned}\end{split} EXAMPLES: sage: G2 = WeylCharacterRing("G2") sage: [fw1, fw2, fw3] = B3.fundamental_weights() sage: B3(fw1+fw3).branch(G2, rule="miscellaneous") G2(1,0,-1) + G2(2,-1,-1) + G2(2,0,-2) Branching Rules From Plethysms Nearly all branching rules $$G \to H$$ where $$G$$ is of type $$A$$, $$B$$, $$C$$ or $$D$$ are covered by the preceding rules. The function branching_rule_from_plethysm() covers the remaining cases. EXAMPLES: This is a general rule that includes any branching rule from types $$A$$, $$B$$, $$C$$, or $$D$$ as a special case. Thus it could be used in place of the above rules and would give the same results.
However it is most useful when branching from $$G$$ to a maximal subgroup $$H$$ such that $$\mathrm{rank}(H) < \mathrm{rank}(G) - 1$$. We consider a homomorphism $$H \to G$$ where $$G$$ is one of $$SL(r+1)$$, $$SO(2r+1)$$, $$Sp(2r)$$ or $$SO(2r)$$. The function branching_rule_from_plethysm() produces the corresponding branching rule. The main ingredient is the character $$\chi$$ of the representation of $$H$$ that is the homomorphism to $$GL(r+1)$$, $$GL(2r+1)$$ or $$GL(2r)$$. First let us consider the symmetric fifth power representation of $$SL(2)$$. sage: A1=WeylCharacterRing("A1",style="coroots") sage: chi=A1([5]) sage: chi.degree() 6 sage: chi.frobenius_schur_indicator() -1 This confirms that the character has degree 6 and is symplectic, so it corresponds to a homomorphism $$SL(2) \to Sp(6)$$, and there is a corresponding branching rule $$C_3 \to A_1$$. sage: C3 = WeylCharacterRing("C3",style="coroots") sage: sym5rule = branching_rule_from_plethysm(chi,"C3") sage: [C3(hwv).branch(A1,rule=sym5rule) for hwv in C3.fundamental_weights()] [A1(5), A1(4) + A1(8), A1(3) + A1(9)] This is identical to the results we would obtain using rule="symmetric_power". The next example gives a branching not available by other standard rules. sage: G2 = WeylCharacterRing("G2",style="coroots") sage: D7 = WeylCharacterRing("D7",style="coroots") sage: ad = G2(0,1); ad.degree() 14 sage: ad.frobenius_schur_indicator() 1 sage: spin = D7(0,0,0,0,0,1,0); spin.degree() 64 sage: spin.branch(G2, rule=branching_rule_from_plethysm(ad,"D7")) G2(1,1) We have confirmed that the adjoint representation of $$G_2$$ gives a homomorphism into $$SO(14)$$, and that the pullback of one of the two 64 dimensional spin representations of $$SO(14)$$ is an irreducible representation of $$G_2$$.
Isomorphic Type Although not usually referred to as a branching rule, the effects of the accidental isomorphisms may be handled using rule="isomorphic": \begin{split}\begin{aligned} B_2 & \to C_2 \\ C_2 & \to B_2 \\ A_3 & \to D_3 \\ D_3 & \to A_3 \\ D_2 & \to A_1 \times A_1 \\ B_1 & \to A_1 \\ C_1 & \to A_1 \end{aligned}\end{split} EXAMPLES: sage: [B2(x).branch(C2, rule="isomorphic") for x in B2.fundamental_weights()] [C2(1,1), C2(1,0)] sage: [C2(x).branch(B2, rule="isomorphic") for x in C2.fundamental_weights()] [B2(1/2,1/2), B2(1,0)] sage: [A3(x).branch(D3,rule="isomorphic") for x in A3.fundamental_weights()] [D3(1/2,1/2,1/2), D3(1,0,0), D3(1/2,1/2,-1/2)] sage: [D3(x).branch(A3,rule="isomorphic") for x in D3.fundamental_weights()] [A3(1/2,1/2,-1/2,-1/2), A3(1/4,1/4,1/4,-3/4), A3(3/4,-1/4,-1/4,-1/4)] Here $$A_3(x,y,z,w)$$ can be understood as a representation of $$SL(4)$$. The weights $$x,y,z,w$$ and $$x+t,y+t,z+t,w+t$$ represent the same representation of $$SL(4)$$ - though not of $$GL(4)$$ - since $$A_3(x+t,y+t,z+t,w+t)$$ is the same as $$A_3(x,y,z,w)$$ tensored with $$\mathrm{det}^t$$. So as a representation of $$SL(4)$$, A3(1/4,1/4,1/4,-3/4) is the same as A3(1,1,1,0). The exterior square representation $$SL(4) \to GL(6)$$ admits an invariant symmetric bilinear form, so it is a representation $$SL(4) \to SO(6)$$ that lifts to an isomorphism $$SL(4) \to \mathrm{Spin}(6)$$. Conversely, there are two isomorphisms $$SO(6) \to SL(4)$$, of which we’ve selected one.
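The weight equivalence just described can be illustrated by a small standalone helper (plain Python for illustration, not the Sage API) that shifts an $$SL(4)$$ weight by a multiple of $$(1,1,1,1)$$ so that its last coordinate becomes zero:

```python
def sl_normalize(weight):
    # (x+t, y+t, z+t, w+t) represents the same SL(n) weight as (x, y, z, w);
    # choose t = -weight[-1] so the last entry becomes 0
    t = -weight[-1]
    return [x + t for x in weight]

# A3(1/4,1/4,1/4,-3/4) is the same SL(4) representation as A3(1,1,1,0)
print(sl_normalize([0.25, 0.25, 0.25, -0.75]))   # [1.0, 1.0, 1.0, 0.0]
```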
In cases like this you might prefer style="coroots": sage: A3 = WeylCharacterRing("A3",style="coroots") sage: D3 = WeylCharacterRing("D3",style="coroots") sage: [D3(fw) for fw in D3.fundamental_weights()] [D3(1,0,0), D3(0,1,0), D3(0,0,1)] sage: [D3(fw).branch(A3,rule="isomorphic") for fw in D3.fundamental_weights()] [A3(0,1,0), A3(0,0,1), A3(1,0,0)] sage: D2 = WeylCharacterRing("D2", style="coroots") sage: A1xA1 = WeylCharacterRing("A1xA1", style="coroots") sage: [D2(fw).branch(A1xA1,rule="isomorphic") for fw in D2.fundamental_weights()] [A1xA1(1,0), A1xA1(0,1)] Branching From a Reducible Root System If you are branching from a reducible root system, the rule is a list of rules, one for each component type in the root system. The rules in the list are given in pairs [type, rule], where type is the root system to be branched to, and rule is the branching rule. EXAMPLES: sage: D4 = WeylCharacterRing("D4",style="coroots") sage: D2xD2 = WeylCharacterRing("D2xD2",style="coroots") sage: A1xA1xA1xA1 = WeylCharacterRing("A1xA1xA1xA1",style="coroots") sage: rr = [["A1xA1","isomorphic"],["A1xA1","isomorphic"]] sage: [D4(fw) for fw in D4.fundamental_weights()] [D4(1,0,0,0), D4(0,1,0,0), D4(0,0,1,0), D4(0,0,0,1)] sage: [D4(fw).branch(D2xD2,rule="extended").branch(A1xA1xA1xA1,rule=rr) for fw in D4.fundamental_weights()] [A1xA1xA1xA1(1,1,0,0) + A1xA1xA1xA1(0,0,1,1), A1xA1xA1xA1(1,1,1,1) + A1xA1xA1xA1(2,0,0,0) + A1xA1xA1xA1(0,2,0,0) + A1xA1xA1xA1(0,0,2,0) + A1xA1xA1xA1(0,0,0,2), A1xA1xA1xA1(1,0,0,1) + A1xA1xA1xA1(0,1,1,0), A1xA1xA1xA1(1,0,1,0) + A1xA1xA1xA1(0,1,0,1)] Suppose you want to branch from a group $$G$$ to a subgroup $$H$$. Arrange the embedding so that a Cartan subalgebra $$U$$ of $$H$$ is contained in a Cartan subalgebra $$T$$ of $$G$$. There is thus a mapping from the weight spaces $$\mathrm{Lie}(T)^* \to \mathrm{Lie}(U)^*$$. Two embeddings will produce identical branching rules if they differ by an element of the Weyl group of $$H$$. 
The rule is this map from $$\mathrm{Lie}(T)^*$$, which is G.space(), to $$\mathrm{Lie}(U)^*$$, which is H.space(); you may implement it as a function. As an example, let us consider how to implement the branching rule $$A_3 \to C_2$$. Here $$H = C_2 = Sp(4)$$ is embedded as a subgroup in $$A_3 = GL(4)$$. The Cartan subalgebra $$U$$ consists of diagonal matrices with eigenvalues $$u_1, u_2, -u_2, -u_1$$. Then C2.space() is the two-dimensional vector space consisting of the linear functionals $$u_1$$ and $$u_2$$ on $$U$$. On the other hand $$\mathrm{Lie}(T)$$ is $$\RR^4$$. A convenient way to see the restriction is to think of it as the adjoint of the map $$(u_1, u_2) \to (u_1, u_2, -u_2, -u_1)$$, that is, $$(x_0, x_1, x_2, x_3) \to (x_0 - x_3, x_1 - x_2)$$. Hence we may encode the rule as follows: def rule(x): return [x[0]-x[3],x[1]-x[2]] or simply: rule = lambda x : [x[0]-x[3],x[1]-x[2]] EXAMPLES: sage: A3 = WeylCharacterRing(['A',3]) sage: C2 = WeylCharacterRing(['C',2]) sage: rule = lambda x : [x[0]-x[3],x[1]-x[2]] sage: branch_weyl_character(A3([1,1,0,0]),A3,C2,rule) C2(0,0) + C2(1,1) sage: A3(1,1,0,0).branch(C2, rule) == C2(0,0) + C2(1,1) True sage.combinat.root_system.weyl_characters.branching_rule_from_plethysm(chi, cartan_type, return_matrix=False) Create the branching rule of a plethysm. INPUT: • chi – the character of an irreducible representation $$\pi$$ of a group $$H$$ • cartan_type – a classical Cartan type ($$A$$, $$B$$, $$C$$ or $$D$$). It is assumed that the image of the irreducible representation $$\pi$$ naturally lies in the classical group of this Cartan type. Returns a branching rule for this plethysm. EXAMPLES: The adjoint representation $$SL(3) \to GL(8)$$ factors through $$SO(8)$$.
The branching rule in question will describe how representations of $$SO(8)$$ composed with this homomorphism decompose into irreducible characters of $$SL(3)$$: sage: A2 = WeylCharacterRing("A2", style="coroots") sage: ad = A2(1,1); ad.degree() 8 sage: ad.frobenius_schur_indicator() 1 This confirms that $$ad$$ has degree 8 and is orthogonal, hence factors through $$SO(8)$$ which is type $$D_4$$: sage: br = branching_rule_from_plethysm(ad,"D4") sage: D4 = WeylCharacterRing("D4") sage: [D4(f).branch(A2,rule = br) for f in D4.fundamental_weights()] [A2(1,1), A2(0,3) + A2(1,1) + A2(3,0), A2(1,1), A2(1,1)] sage.combinat.root_system.weyl_characters.get_branching_rule(Rtype, Stype, rule) Creates a branching rule. INPUT: • Rtype – the Cartan type of $$G$$ • Stype – the Cartan type of $$H$$ • rule – a string describing the branching rule as a map from the weight space of $$G$$ to the weight space of $$H$$. If the rule parameter is omitted, in a very few cases, a default rule is supplied. See branch_weyl_character(). EXAMPLES: sage: rule = get_branching_rule(CartanType("A3"),CartanType("C2"),"symmetric") sage: [rule(x) for x in WeylCharacterRing("A3").fundamental_weights()] [[1, 0], [1, 1], [1, 0]] sage.combinat.root_system.weyl_characters.irreducible_character_freudenthal(hwv, debug=False) Return the dictionary of multiplicities for the irreducible character with highest weight $$\lambda$$. The weight multiplicities are computed by the Freudenthal multiplicity formula. The algorithm is based on a recursion relation that is stated, for example, in Humphreys' book on Lie algebras. The multiplicities are invariant under the Weyl group, so to compute them it would be sufficient to compute them for the weights in the positive Weyl chamber. However after some testing it was found to be faster to compute every weight using the recursion, since the use of the Weyl group is expensive in its current implementation. INPUT: • hwv – a dominant weight in a weight lattice.
• L – the ambient space EXAMPLES: sage: WeylCharacterRing("A2")(2,1,0).weight_multiplicities() # indirect doctest {(1, 2, 0): 1, (2, 1, 0): 1, (0, 2, 1): 1, (2, 0, 1): 1, (0, 1, 2): 1, (1, 1, 1): 2, (1, 0, 2): 1}
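For reference, the recursion relation in question (Freudenthal's formula) expresses the multiplicity $$m_\lambda(\mu)$$ of a weight $$\mu$$ in the irreducible representation with highest weight $$\lambda$$ in terms of the multiplicities of the higher weights $$\mu + k\alpha$$:

$\left(\langle\lambda+\rho,\lambda+\rho\rangle-\langle\mu+\rho,\mu+\rho\rangle\right)\, m_\lambda(\mu) = 2\sum_{\alpha>0}\sum_{k\geq 1} \langle\mu+k\alpha,\alpha\rangle\, m_\lambda(\mu+k\alpha)$

where $$\rho$$ is half the sum of the positive roots and the outer sum runs over the positive roots $$\alpha$$. Starting from $$m_\lambda(\lambda)=1$$, the recursion works downward through the weights.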
# Strategy Library ### Abstract In this research, we investigate two pairs trading methods and compare their results. Pairs trading involves investigating the dependence structure between two highly correlated assets. With the assumption that mean reversion will occur, long or short positions are entered in the opposite direction when there is a price divergence. Typically the asset return series are modeled by a Gaussian distribution, but the joint normal distribution may fail to catch some key features of the dependence between a stock pair's prices, such as tail dependence. We investigate using copula theory to identify these trading opportunities. We will discuss the basic framework of copulas from the mathematical perspective and explain how to apply the approach in pairs trading. The implementation of the algorithm is based on the paper "Trading strategies with copulas" by Stander, Marais and Botha (2013). We compare the performance of the copula pairs trading strategy with the co-integration pairs trading method based on the paper "Statistical arbitrage trading strategies and high-frequency trading" by Hanson and Hall (2012). The co-integration technique assumes a co-integration relationship between paired equities to identify profitable trading opportunities. The empirical results suggest that the copula-based strategy is more profitable than the traditional pairs trading techniques. ### 1. Definition Given a random vector $X_1,X_2,...,X_p$, its marginal cumulative distribution functions (CDFs) are $F_i(x) = P[X_i \leq x]$. By applying the probability integral transform to each component, the marginal distributions of $(U_1,U_2,...,U_p) = (F_1(X_1),F_2(X_2),...,F_p(X_p))$ are uniform (from Wikipedia). Then the copula of $X_1,X_2,...,X_p$ is defined as the joint cumulative distribution function of $U_1,U_2,...,U_p$, where each marginal variable $U_i$ is uniformly distributed as $U(0,1)$.
$C(u_1,u_2,...,u_p) = P[U_1\leq u_1,U_2\leq u_2,..., U_p\leq u_p]$ The copula function contains all the dependence characteristics of the joint distribution and describes the linear and non-linear relationships between the variables in probabilistic terms. It allows the marginal distributions to be modeled independently from each other, and no assumption on the joint behavior of the marginals is required. ### 2. Bivariate Copulas Since this research focuses on bivariate copulas (for pairs trading we have 2 random variables), some probabilistic properties are specified. Let X and Y be two random variables with cumulative distribution functions $F_1(X)$ and $F_2(Y)$, and let $U=F_1(X), V=F_2(Y)$, which are uniformly distributed. Then the copula function is $C(u,v)=P(U\leq u,V\leq v)$. Taking the partial derivative of the copula function over U and V gives the conditional distribution functions: $P(U\leq u\mid V= v)=\frac{\partial C(u,v)}{\partial v}$ $P(V\leq v\mid U= u)=\frac{\partial C(u,v)}{\partial u}$ ### 3. Archimedean Copulas There are many copula functions that enable us to describe dependence structures between variables beyond the Gaussian assumption. Here we will focus on three of these: the Clayton, Gumbel and Frank copulas from the Archimedean class. Archimedean copulas are based on the Laplace transforms φ of univariate distribution functions. They are constructed from a particular generator function $\phi$: $C(u,v)=\phi^{-1}(\phi(u)+\phi(v))$ The probability density function is: $c(u,v)=\phi_{(2)}^{-1}(\phi(u)+\phi(v))\phi^{'}(u)\phi^{'}(v)$ where $\phi_{(2)}^{-1}$ denotes the second derivative of the inverse of the generator function.
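As a quick numerical sanity check of the generator construction (a standalone Python sketch, not part of the strategy code; the Clayton generator $\phi(t)=(t^{-\theta}-1)/\theta$ and the parameter value $\theta=2$ are assumptions of this example), we can verify that $\phi^{-1}(\phi(u)+\phi(v))$ reproduces the closed-form Clayton copula, that $\partial C/\partial v$ behaves like a conditional probability, and that the Genest-MacKay identity $\tau=1+4\int_0^1 \phi(v)/\phi'(v)\,dv$ recovers the known Clayton value $\tau=\theta/(\theta+2)$:

```python
theta = 2.0  # arbitrary Clayton parameter chosen for this check

# Clayton generator, its inverse and its derivative
phi = lambda t: (t ** -theta - 1) / theta
phi_inv = lambda s: (1 + theta * s) ** (-1 / theta)
phi_prime = lambda t: -t ** (-theta - 1)

# Copula built from the generator vs. the closed-form Clayton copula
C_gen = lambda u, v: phi_inv(phi(u) + phi(v))
C_clayton = lambda u, v: (u ** -theta + v ** -theta - 1) ** (-1 / theta)
print(abs(C_gen(0.3, 0.7) - C_clayton(0.3, 0.7)) < 1e-12)     # True

# P(U <= u | V = v) = dC/dv, approximated by a central difference
h = 1e-6
cond = lambda u, v: (C_gen(u, v + h) - C_gen(u, v - h)) / (2 * h)
print(0.0 < cond(0.3, 0.7) < 1.0)                             # True

# Kendall's tau from the generator: tau = 1 + 4 * integral_0^1 phi(v)/phi'(v) dv
n = 10000  # midpoint rule
integral = sum(phi((i + 0.5) / n) / phi_prime((i + 0.5) / n) for i in range(n)) / n
tau = 1 + 4 * integral
print(abs(tau - theta / (theta + 2)) < 1e-4)                  # True (tau = 0.5)
```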
| Copula | Copula function $C(u,v;\theta)$ |
| --- | --- |
| Clayton | $(u^{-\theta}+v^{-\theta}-1)^{-1/\theta}$ |
| Gumbel | $exp(-[(-\ln u)^{\theta}+(-\ln v)^{\theta}]^{1/\theta})$ |
| Frank | $-\theta^{-1}\ln\left[1+\frac{(exp(-\theta u)-1)(exp(-\theta v)-1)}{exp(-\theta)-1}\right]$ |

Genest and MacKay proved that, in the bivariate case, the relation between the copula generator function and the Kendall rank correlation tau can be given by: $\tau=1+4\int_{0}^{1} \frac{\phi (v)}{\phi^{'}(v)}dv$ So we can easily estimate the parameter of an Archimedean copula if we know Kendall’s tau rank measure and the generator function. Please refer to step 3 to see the formulas. ### Part I - Copula Method ETFs cover many different stock sectors and asset classes, which provides us a wide range of pairs trading candidates. Our data set consists of daily data of the ETFs traded on the NASDAQ or the NYSE. We use the first 3 years of data to choose the best fitting copula and asset pair ("training formation period"). Next, we use a period of more than 9 years from January 2010 to September 2019 ("the trading period") to execute the strategy. During the trading period we use a rolling 12 month window of data to get the copula parameters ("rolling formation period"). ### Step 1: Selecting the Paired Stocks The general method of pair selection is based on both fundamental and statistical analysis. #### 1) Assemble a list of potentially related pairs Any random pair could be correlated: it is possible that two variables are not causally related to each other, but show a spurious relationship due to either coincidence or the presence of a certain third, unseen factor. Thus, it is important for us to start with a list of securities that have something in common. For this demonstration, we choose some of the most liquid ETFs traded on the Nasdaq or the NYSE. The relationship for those potentially related pairs could be due to an index, sector or asset class overlap. e.g.
QQQ and XLK are two ETFs which track the market leading indices. #### 2) Filter the trading pair with statistical correlation To determine which stock pairs to include in the analysis, correlations between the pre-selected ETF pairs are analyzed. Below are three types of correlation measures we usually use in statistics:

| Correlation measure | Formula |
| --- | --- |
| Pearson correlation | $r = \frac{\sum (x_i- \bar{x})(y_i- \bar{y})}{\sqrt{\sum (x_i- \bar{x})^2\sum (y_i- \bar{y})^2}}$ |
| Kendall rank correlation | $\tau=\frac{n_c-n_d}{\frac{1}{2}n(n-1)}$ |
| Spearman rank correlation | $\rho=1-\frac{6\sum d_i^2}{n(n^2-1)}$ |

where $n$ is the number of values in each data set, $n_c$ the number of concordant pairs, $n_d$ the number of discordant pairs, and $d_i$ the difference between the ranks of corresponding values $x_i$ and $y_i$. We can get these coefficients in Python using functions from the stats library in SciPy. The correlations have been calculated using daily log stock price returns during the training formation period. We found the 3 correlation techniques give the paired ETFs the same correlation coefficient ranking. The Pearson correlation assumes that both variables are normally distributed. Thus here we use Kendall rank as the correlation measure and choose the pairs with the highest Kendall rank correlation to implement the pairs trading. We get the daily historical closing prices of our ETF pairs by using the History function and converting the prices to a log return series. Let $P_x$ and $P_y$ denote the historical stock price series for stock x and stock y. The log returns for the ETF pair are given by: $R_x = ln(\frac{P_{x,t}}{P_{x,t-1}}), R_y = ln(\frac{P_{y,t}}{P_{y,t-1}})$   t = 1,2,...,n where n is the number of price data

```python
def PairSelection(self, date):
    '''Selects the pair of stocks with the maximum Kendall tau value.
    It's called on first day of each month'''

    if date.month == self.month:
        return Universe.Unchanged

    symbols = [Symbol.Create(x, SecurityType.Equity, Market.USA)
               for x in ["QQQ", "XLK", "XME", "EWG", "TNA", "TLT", "FAS",
                         "FAZ", "XLF", "XLU", "EWC", "EWA", "QLD", "QID"]]

    logreturns = self._get_historical_returns(symbols, self.lookbackdays)

    tau = 0
    for i in range(0, len(symbols), 2):
        x = logreturns[str(symbols[i])]
        y = logreturns[str(symbols[i+1])]

        # Estimate Kendall rank correlation for each pair
        tau_ = kendalltau(x, y)[0]

        if tau > tau_:
            continue

        tau = tau_
        self.pair = symbols[i:i+2]

    return [x.Value for x in self.pair]
```

### Step 2: Estimating Marginal Distributions of log-return In order to construct the copula, we need to transform the log-return series $R_x$ and $R_y$ to two uniformly distributed values u and v. This can be done by estimating the marginal distribution functions of $R_x$ and $R_y$ and plugging the return values into a distribution function. As we make no assumptions about the distribution of the two log-return series, here we use the empirical distribution function to approximate the marginal distributions $F_1(R_x)$ and $F_2(R_y)$. The Python ECDF function from the statsmodels library gives us the empirical CDF as a step function. ### Step 3: Estimating Copula Parameters As discussed above, we estimate the copula parameter theta by the relationship between the copula and the dependence measure Kendall’s tau, for each of the Archimedean copulas.
| Copula | Kendall's tau | Parameter θ |
| --- | --- | --- |
| Clayton | $\frac{\theta}{\theta +2}$ | $\theta=2\tau(1-\tau)^{-1}$ |
| Gumbel | $1-\theta^{-1}$ | $\theta=(1-\tau)^{-1}$ |
| Frank | $1+4[D_1(\theta)-1]/\theta$ | $\arg\min_\theta\left(\frac{\tau-1}{4}-\frac{D_1(\theta)-1}{\theta}\right)^2$ |

where $D_1(\theta)=\frac{1}{\theta}\int_{0}^{\theta}\frac{t}{\exp(t)-1}\,dt$ is the first-order Debye function.

```python
def _parameter(self, family, tau):
    '''Estimate the parameters for three kinds of Archimedean copulas
    according to the association between Archimedean copulas and the
    Kendall rank correlation measure'''
    if family == 'clayton':
        return 2 * tau / (1 - tau)
    elif family == 'frank':
        # quad(integrand, sys.float_info.epsilon, theta)[0] / theta is the
        # first-order Debye function; frank_fun is the squared difference.
        # Minimizing frank_fun gives the parameter theta for the Frank copula.
        integrand = lambda t: t / (np.exp(t) - 1)
        frank_fun = lambda theta: ((tau - 1) / 4.0
            - (quad(integrand, sys.float_info.epsilon, theta)[0] / theta - 1) / theta) ** 2
        return minimize(frank_fun, 4, method='BFGS', tol=1e-5).x
    elif family == 'gumbel':
        return 1 / (1 - tau)
```

### Step 4: Selecting the Best Fitting Copula

Once we have the parameter estimates for the copula functions, we use the AIC criterion to select the copula that provides the best fit during algorithm initialization:

$$AIC=-2L(\theta)+2k$$

where $L(\theta)=\sum_{t=1}^T\log c(u_t,v_t;\theta)$ is the log-likelihood function and k is the number of parameters; here k = 1.
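Before moving on, the closed-form tau-theta relations for the Clayton and Gumbel families can be round-tripped as a quick sanity check (the Frank family has no closed form and needs the Debye-function minimization shown above); the helper names below are ours, not from the strategy code:

```python
# Round-trip check of the closed-form tau <-> theta relations:
# Clayton: tau(theta) = theta/(theta+2), theta(tau) = 2*tau/(1-tau)
# Gumbel:  tau(theta) = 1 - 1/theta,    theta(tau) = 1/(1-tau)
def clayton_theta(tau):
    return 2 * tau / (1 - tau)

def gumbel_theta(tau):
    return 1 / (1 - tau)

def clayton_tau(theta):
    return theta / (theta + 2)

def gumbel_tau(theta):
    return 1 - 1 / theta

for tau in [0.2, 0.5, 0.8]:
    assert abs(clayton_tau(clayton_theta(tau)) - tau) < 1e-12
    assert abs(gumbel_tau(gumbel_theta(tau)) - tau) < 1e-12
```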
The density functions of the copulas are as follows:

| Copula | Density function $c(u,v;\theta)$ |
| --- | --- |
| Clayton | $(\theta+1)(u^{-\theta}+v^{-\theta}-1)^{-2-1/\theta}u^{-\theta-1}v^{-\theta-1}$ |
| Gumbel | $C(u,v;\theta)(uv)^{-1}A^{-2+2/\theta}[(\ln u)(\ln v)]^{\theta -1}[1+(\theta-1)A^{-1/\theta}]$ |
| Frank | $\frac{-\theta(\exp(-\theta)-1)\exp(-\theta(u+v))}{[(\exp(-\theta u)-1)(\exp(-\theta v)-1)+(\exp(-\theta)-1)]^2}$ |

where $A=(-\ln u)^{\theta}+(-\ln v)^{\theta}$.

```python
def _lpdf_copula(self, family, theta, u, v):
    '''Estimate the log probability density function of three kinds
    of Archimedean copulas'''
    if family == 'clayton':
        pdf = ((theta + 1) * ((u ** (-theta) + v ** (-theta) - 1) ** (-2 - 1 / theta))
               * (u ** (-theta - 1) * v ** (-theta - 1)))
    elif family == 'frank':
        num = -theta * (np.exp(-theta) - 1) * (np.exp(-theta * (u + v)))
        denom = ((np.exp(-theta * u) - 1) * (np.exp(-theta * v) - 1)
                 + (np.exp(-theta) - 1)) ** 2
        pdf = num / denom
    elif family == 'gumbel':
        A = (-np.log(u)) ** theta + (-np.log(v)) ** theta
        c = np.exp(-A ** (1 / theta))
        pdf = (c * (u * v) ** (-1) * (A ** (-2 + 2 / theta))
               * ((np.log(u) * np.log(v)) ** (theta - 1))
               * (1 + (theta - 1) * A ** (-1 / theta)))
    return np.log(pdf)
```

The copula that provides the best fit is the one with the lowest AIC value. The chosen pair is "QQQ" & "XLK".

### Step 5: Generating the Trading Signals

The copula functions contain all the information about the dependence structure of the two return series. According to Stander Y., Marais D. and Botha I. in "Trading strategies with copulas", the fitted copula is used to derive the confidence bands for the conditional marginal distribution functions $C(v\mid u)$ and $C(u\mid v)$, that is, the mispricing indexes. When market observations fall outside the confidence band, it is an indication that a pairs trading opportunity is available. Here we choose 95% as the upper confidence band and 5% as the lower confidence band, as indicated in the paper.
The confidence level was selected based on a back-test analysis in the paper showing that 95% leads to appropriate trading opportunities being identified.

Given current returns $R_x, R_y$ of stock X and stock Y, we define the "mispricing indexes" as:

$$MI_{X|Y}=P(U\leq u\mid V\leq v)=\frac{\partial C(u,v)}{\partial v}$$

$$MI_{Y|X}=P(V\leq v\mid U\leq u)=\frac{\partial C(u,v)}{\partial u}$$

For the mathematical proof, please refer to Xie W. and Wu Y., "Copula-based pairs trading strategy". The conditional probability formulas of bivariate copulas can be derived by taking partial derivatives of the copula functions shown in Table 1. The results are as follows:

**Gumbel Copula**

$$C(v\mid u)=C(u,v;\theta)[(-\ln u)^\theta+(-\ln v)^\theta]^{\frac{1-\theta}{\theta}}(-\ln u)^{\theta-1}\frac{1}{u}$$

$$C(u\mid v)=C(u,v;\theta)[(-\ln u)^\theta+(-\ln v)^\theta]^{\frac{1-\theta}{\theta}}(-\ln v)^{\theta-1}\frac{1}{v}$$

**Clayton Copula**

$$C(v\mid u)=u^{-\theta-1}(u^{-\theta}+v^{-\theta}-1)^{-\frac{1}{\theta}-1}$$

$$C(u\mid v)=v^{-\theta-1}(u^{-\theta}+v^{-\theta}-1)^{-\frac{1}{\theta}-1}$$

**Frank Copula**

$$C(v\mid u)=\frac{(\exp(-\theta u)-1)(\exp(-\theta v)-1)+(\exp(-\theta v)-1)}{(\exp(-\theta u)-1)(\exp(-\theta v)-1)+(\exp(-\theta)-1)}$$

$$C(u\mid v)=\frac{(\exp(-\theta u)-1)(\exp(-\theta v)-1)+(\exp(-\theta u)-1)}{(\exp(-\theta u)-1)(\exp(-\theta v)-1)+(\exp(-\theta)-1)}$$

After selecting the trading pair and the best-fitted copula, we take the following steps for trading. Note that we implement Steps 1, 2, 3 and 4 on the first day of each month using the daily data for the last 12 months, which means our empirical distribution functions and copula parameter estimates are updated once a month. In summary, each month:

- During the 12-month rolling formation period, daily close prices are used to calculate the daily log returns for the pair of ETFs and then compute Kendall's rank correlation.
- Estimate the marginal distribution functions of the log returns of X and Y, namely ecdf_x and ecdf_y.
- Plug Kendall's tau into the copula parameter estimation functions to get the value of theta.
- Run a linear regression over the two price series. The coefficient is used to determine how many shares of stock X and Y to buy and sell. For example, if the coefficient is 2, for every share of X that is bought or sold, 2 units of Y are sold or bought.

```python
def SetSignal(self, slice):
    '''Computes the mispricing indices to generate the trading signals.
    It's called on first day of each month'''
    if self.Time.month == self.month:
        return

    ## Compute the best copula

    # Pull historical log returns used to determine copula
    logreturns = self._get_historical_returns(self.pair, self.numdays)
    x, y = logreturns[str(self.pair[0])], logreturns[str(self.pair[1])]

    # Convert the two return series to uniform values u and v
    # using the empirical distribution functions
    ecdf_x, ecdf_y = ECDF(x), ECDF(y)
    u, v = [ecdf_x(a) for a in x], [ecdf_y(a) for a in y]

    # Compute the Akaike Information Criterion (AIC) for each copula
    # family and choose the copula with minimum AIC
    tau = kendalltau(x, y)[0]  # estimate Kendall's rank correlation
    AIC = {}  # key: copula family, value: [theta, AIC]
    for i in ['clayton', 'frank', 'gumbel']:
        param = self._parameter(i, tau)
        lpdf = [self._lpdf_copula(i, param, u_, v_) for (u_, v_) in zip(u, v)]
        # Replace nan with zero and inf with finite numbers in the lpdf list
        lpdf = np.nan_to_num(lpdf)
        loglikelihood = sum(lpdf)
        AIC[i] = [param, -2 * loglikelihood + 2]
    self.copula = min(AIC.items(), key=lambda kv: kv[1][1])[0]

    ## Compute the signals

    # Generate the log return series of the selected trading pair
    logreturns = logreturns.tail(self.lookbackdays)
    x, y = logreturns[str(self.pair[0])], logreturns[str(self.pair[1])]

    # Estimate Kendall's rank correlation and the copula parameter theta
    tau = kendalltau(x, y)[0]
    self.theta = self._parameter(self.copula, tau)

    # Estimate the empirical distribution functions for the pair's returns
    self.ecdf_x, self.ecdf_y = ECDF(x), ECDF(y)

    # Linear regression over the two return series gives the trading size ratio
    self.coef = stats.linregress(x, y).slope

    self.month = self.Time.month
```

Finally, during the trading period, each day we convert today's returns to u and v using the empirical distribution functions ecdf_x and ecdf_y. After that, the two mispricing indexes are calculated every trading day using the estimated copula C. The algorithm constructs short positions in X and long positions in Y on days when $MI_{Y|X}<0.05$ and $MI_{X|Y}>0.95$. It constructs short positions in Y and long positions in X on days when $MI_{Y|X}>0.95$ and $MI_{X|Y}<0.05$.

```python
def OnData(self, slice):
    '''Main event handler. Implement trading logic.'''
    self.SetSignal(slice)  # only executed on first day of each month

    # Daily rebalance
    if self.Time.day == self.day:
        return

    long, short = self.pair[0], self.pair[1]

    # Update the trading pair's historical price series with today's price
    for kvp in self.Securities:
        symbol = kvp.Key
        if symbol in self.pair:
            price = kvp.Value.Price
            self.window[symbol].append(price)

    if len(self.window[long]) < 2 or len(self.window[short]) < 2:
        return

    # Compute the mispricing indices for u and v using the estimated copula
    MI_u_v, MI_v_u = self._misprice_index()

    # Placing orders: if long is relatively underpriced, buy the pair
    if MI_u_v < self.floor_CL and MI_v_u > self.cap_CL:
        self.SetHoldings(short, -self.weight_v, False, f'Coef: {self.coef}')
        self.SetHoldings(long, self.weight_v * self.coef
                         * self.Portfolio[long].Price / self.Portfolio[short].Price)
    # Placing orders: if short is relatively underpriced, sell the pair
    elif MI_u_v > self.cap_CL and MI_v_u < self.floor_CL:
        self.SetHoldings(short, self.weight_v, False, f'Coef: {self.coef}')
        self.SetHoldings(long, -self.weight_v * self.coef
                         * self.Portfolio[long].Price / self.Portfolio[short].Price)

    self.day = self.Time.day
```

### Part II - Cointegration Method

For the cointegration pairs trading method, we choose the ETF pair "GLD" and "DGL". There is no need to choose a copula function, so there is only a 12-month rolling formation period. The trading period runs from January 2011 to May 2017.

### Step 1: Generate the Spread Series

At the start of each month, we generate the log price series of the two ETFs from the daily closes. The spread series is then estimated using regression analysis on the log price series. For equities X and Y, we run a linear regression over the log price series and obtain the coefficient β:

$$spread_t=\log(price_t^y)-\beta \log(price_t^x)$$

### Step 2: Compute the Signals

Using the standard deviation of the spread during the rolling formation period, a threshold of one standard deviation is set up for the trading strategy. We enter a trade whenever the spread moves more than one standard deviation away from its mean, and exit when the spread reverts to its mean. The position sizes are scaled by the coefficient β.

```python
log_close_x = np.log(self.closes_by_symbol[self.x_symbol])
log_close_y = np.log(self.closes_by_symbol[self.y_symbol])

x_holdings = self.Portfolio[self.x_symbol]

if x_holdings.Invested:
    # Exit when the spread reverts to its mean. (The long-side exit
    # condition was truncated in the original text and is reconstructed
    # here by symmetry with the short-side condition.)
    if x_holdings.IsShort and spread[-1] <= mean or \
       x_holdings.IsLong and spread[-1] >= mean:
        self.Liquidate()
else:
    if beta < 1:
        x_weight = 0.5
        y_weight = 0.5 / beta
    else:
        x_weight = 0.5 / beta
        y_weight = 0.5

    if spread[-1] < mean - self.threshold * std:
        self.SetHoldings(self.y_symbol, -y_weight)
        self.SetHoldings(self.x_symbol, x_weight)
    if spread[-1] > mean + self.threshold * std:
        self.SetHoldings(self.x_symbol, -x_weight)
        self.SetHoldings(self.y_symbol, y_weight)
```

### Summary

Ultimately, pairs trading aims to capture the price divergence of two correlated assets through mean reversion.
Our results demonstrate that the copula approach to pairs trading is superior to the conventional cointegration method because it is based on the probability of the dependence structure, whereas cointegration relies on simple linear-regression deviations from normal pricing. Through testing we found the performance of the copula method to be less sensitive to the starting parameters. Because the cointegration method relies on the normal distribution and the ETF pairs had low volatility, there were few trading opportunities.

| Method | Transactions | Profit | Sharpe Ratio | Drawdown |
| --- | --- | --- | --- | --- |
| Copula | 493 | 8.884% | 0.12 | 26.1% |
| Cointegration | 126 | 4.517% | 0.196 | 3.9% |

Generally, ETFs are not very volatile, so mean reversion did not provide many trading opportunities; there were only 91 trades over 5 years for the cointegration method. The use of copulas in pairs trading provides more trading opportunities, as it does not require rigid assumptions, according to Liew R. Q. and Wu Y., "Pairs trading: A copula approach".

### Algorithm

Backtest for copula method

Backtest for cointegration method

### References

1. Stander Y., Marais D., Botha I. Trading strategies with copulas. Journal of Economic and Financial Sciences, 2013, 6(1): 83-107.
2. Hanson T. A., Hall J. R. Statistical arbitrage trading strategies and high-frequency trading. 2012.
3. Mahfoud M., Michael M. Bivariate Archimedean copulas: an application to two stock market indices. BMI Paper, 2012.
4. Rad H., Low R. K. Y., Faff R. The profitability of pairs trading strategies: distance, cointegration and copula methods. Quantitative Finance, 2016, 16(10): 1541-1558.
5. Landgraf N., Scholtus K., Diris D. R. B. High-frequency copula-based pairs trading on US goldmine stocks. 2016.
6. Genest C., MacKay J. The Joy of Copulas: Bivariate Distributions with Uniform Marginals. The American Statistician, 1986, 40: 280-283.
7. Jean Folger. Pairs Trading Example.
8. Xie W., Wu Y. Copula-based pairs trading strategy. Asian Finance Association (AsFA) 2013 Conference, 2013.
9. Liew R. Q., Wu Y. Pairs trading: A copula approach. Journal of Derivatives & Hedge Funds, 2013, 19(1): 12-30.
# Glossary

- **Bernoulli distribution**: a named random variable used for binary outcomes; $1$ usually denotes the level of interest
- **categorical variable**: a variable in a dataset that takes on non-numeric values
- **dataframe**: a two-dimensional data structure in the programming language R in which each row represents a new observation and each column represents a new variable
- **discrete random variable**: a random variable that only takes on a countable set of values
- **independent and identically distributed**: a description of data that suggests the data were randomly sampled (independent $\Rightarrow$ no two data points intentionally share anything in common, except that they come from the same population, i.e. identically distributed)
- **individual/observation**: a noun in the population of interest, not necessarily people
- **interpolate**: estimate a number within a range of data
- **level**: values that a categorical variable could take on
- **maximum likelihood estimator**: a best guess
- **observation/individual**: a noun in the population of interest, not necessarily people
- **parameter**: a characteristic of a population, abstracted to non-data arguments of probability density functions
- **percentile**: the value in the support of the random variable that puts $p\%$ of the area under the probability density function to the left of it
- **population**: the broader group of nouns of interest
- **probability density function**: a function indexed by parameter(s) of interest, the shape of which theoretically describes the process of interest
- **proportion**: AKA a mean, when applied to numerically encoded binary categorical data; unfortunately thought of as $\text{successes}/\text{trials}$
- **random variable**: a function from an event to a numerical value, e.g. $X(\{Caniformia\}) = 1$
- **sample**: a subset of the population, ideally randomly collected
- **statistic**: any function of data
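The "proportion" entry is worth a one-line demonstration: encoding a binary categorical variable as 0/1 makes the proportion literally a mean (the data below are hypothetical):

```python
# Encode a binary categorical variable as 0/1; the sample mean of the
# encoded values is then exactly the proportion of "successes".
data = ["yes", "no", "yes", "yes", "no"]          # hypothetical sample
encoded = [1 if x == "yes" else 0 for x in data]  # numeric encoding
proportion = sum(encoded) / len(encoded)          # a mean of 0/1 values
assert proportion == 3 / 5                        # successes / trials
```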
# Understanding an identity for least squares regression line gradient

In section 2.2 of this paper, Gelman and Park present the following identity for the gradient of the least-squares line through a set of 2D points:

> ...we recall a simple algebraic identity that expresses the least-squares regression of $y$ on $x$ as a weighted average of all pairwise comparisons:

\begin{align} \hat\beta^{ls}&=\frac{\sum_i(y_i-\bar y)(x_i-\bar x)}{\sum_i(x_i-\bar x)^2}\\ &=\frac{\sum_{i,\,j}(y_i-y_j)(x_i-x_j)}{\sum_{i,\,j}(x_i-x_j)^2}\\ &=\frac{\sum_{i,\,j}\frac{y_i-y_j}{x_i-x_j}(x_i-x_j)^2}{\sum_{i,\,j}(x_i-x_j)^2}\end{align}

In the first line, which is a basic least-squares result, the sums iterate over all the points. In the second and third lines the sums iterate over all pairs of points. It feels like I might be missing something obvious, but how do we go from the first line to the second?

- $\bar{x}$ is the average of the $x$ values, and implicitly sums over all of the values. – Brian Borchers Sep 27 '18 at 3:24
- @BrianBorchers In other words: $\hat\beta^{ls}=\frac{\sum_i\left(y_i-\frac 1 n \sum_j y_j\right)\left(x_i-\frac 1 n\sum_j x_j\right)}{\sum_i\left(x_i-\frac 1 n\sum_j x_j\right)^2}$... and then? – Richard Ambler Sep 27 '18 at 3:29
- @BrianBorchers Actually, that was a great hint. Thanks!
– Richard Ambler Sep 27 '18 at 3:42

## Rewrite the numerator

Expand the pairwise sum:

$$\sum_{i,\,j}(y_i-y_j)(x_i-x_j)=\sum_{i,\,j}\left(y_ix_i-y_ix_j-y_jx_i+y_jx_j\right)=2n\sum_i x_iy_i-2\Big(\sum_i y_i\Big)\Big(\sum_j x_j\Big)=2n\sum_i x_iy_i-2n^2\bar x\bar y$$

On the other hand,

$$\sum_i(y_i-\bar y)(x_i-\bar x) = \sum_i x_iy_i-n\bar x\bar y$$

so

$$\sum_i(y_i-\bar y)(x_i-\bar x) = \frac{1}{2n}\sum_{i,\,j}(y_i-y_j)(x_i-x_j)$$

## Rewrite the denominator

The same expansion with $y$ replaced by $x$ gives

$$\sum_i(x_i-\bar x)^2 = \frac{1}{2n}\sum_{i,\,j}(x_i-x_j)^2$$

## Replace now

The common factor $\frac{1}{2n}$ cancels:

$$\frac{\sum_i(y_i-\bar y)(x_i-\bar x)}{\sum_i(x_i-\bar x)^2} = \frac{\frac{1}{2n}\sum_{i,\,j}(y_i-y_j)(x_i-x_j)}{\frac{1}{2n}\sum_{i,\,j}(x_i-x_j)^2} = \frac{\sum_{i,\,j}(y_i-y_j)(x_i-x_j)}{\sum_{i,\,j}(x_i-x_j)^2}$$
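The identity is also easy to verify numerically on a small, arbitrary data set:

```python
# Numerical check of the pairwise-comparison identity for the
# least-squares slope, on arbitrary sample data.
xs = [1.0, 2.0, 4.0, 7.0]
ys = [2.0, 3.0, 9.0, 11.0]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n

# Standard least-squares form
num1 = sum((y - ybar) * (x - xbar) for x, y in zip(xs, ys))
den1 = sum((x - xbar) ** 2 for x in xs)

# Pairwise form, summing over all ordered pairs (i, j)
num2 = sum((ys[i] - ys[j]) * (xs[i] - xs[j]) for i in range(n) for j in range(n))
den2 = sum((xs[i] - xs[j]) ** 2 for i in range(n) for j in range(n))

# The two numerators (and denominators) differ only by a constant
# factor, which cancels in the ratio:
assert abs(num1 / den1 - num2 / den2) < 1e-12
```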
# Devivativing problems

**1. danne89 (Nov 3, 2004)**

Hi. I can't get what I'm doing wrong here. If somebody points that out, I'm really grateful.

$$u = (6 + 2x^2)^3$$

$$d(u) = 3(6+2x^2)^2 d(6+2x^2)=3(6+2x^2)^2 d(6) + d(2x^2) = [3(6+2x^2)^2 * 0 + 4x]dx = 4xdx$$

Then I want to draw a tangent at the point (1, 512), which actually lies on the curve(!), to check the derivative.

$$l(x)=f'(a)(x-a)+b=4(x-1)+512=4x+508$$

Which doesn't intersect the curve at (1, 512)... Please correct every misstake I've made. Yes, they could be many; I'm not so good at this stuff.

**2. matt grime (Nov 3, 2004)**

You've forgotten that there is a bracket around 6+2x^2 in the second line of the maths. You've misspelled mistake too. And who said irony is dead? Anyway, du/dx is not 4x, as should be obvious to you (u, if you multiplied it out, would have an x^6 term in it, and hence du/dx must have an x^5 term in it).

**3. danne89 (Nov 3, 2004)**

Hmm. I'm sorry, but I don't get it. Can you please be a little more specific? This stuff is driving me crazy!

**4. matt grime (Nov 3, 2004)**

Go through the second line of maths: look how you write, in effect, A(B+C) = AB + C when you go "across" the second equals sign in the line, i.e. from

$$(6+2x^2)^2 d(6+2x^2)=3(6+2x^2)^2 d(6) + d(2x^2)$$

They aren't equal. Your algebraic manipulation is wrong.

**5. danne89 (Nov 3, 2004)**

Ahh! Thanks! I should have noticed that...

$$d(y)=3[(x^2+5)^2][d(x^2+5)]=3[(x^2+5)^2][d(x^2)+d(5)] = 3[(x^2+5)^2]2x=6x(x^2+5)^2$$

Last edited: Nov 3, 2004

**6. HallsofIvy (Nov 3, 2004)**

Should be

$$d(u)= 3(6+2x^2)^2 d(6+2x^2)= 3(6+2x^2)^2[d(6)+ d(2x^2)]$$

$$d(u)= 3(6+2x^2)^2[0+ 4xdx]= 12x(6+2x^2)^2dx$$

What you give, l(x) = 4x + 508, certainly does intersect the curve at (1, 512): l(1) = 4 + 508 = 512. It just isn't tangent to the curve because your f'(1) is incorrect. When x = 1, du/dx = 12(1)(8)^2 = 768. The tangent line should be y = 768(x-1) + 512 = 768x - 256. That works nicely. (Aren't graphing calculators wonderful!)

**7. danne89 (Nov 4, 2004)**

Mine doesn't give me that though. :grumpy:
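HallsofIvy's corrected derivative and tangent line check out numerically; here is a quick sketch comparing the closed form against a central-difference approximation:

```python
# u = (6 + 2x^2)^3, du/dx = 12x(6 + 2x^2)^2, tangent at (1, 512): y = 768x - 256
def u(x):
    return (6 + 2 * x ** 2) ** 3

def du(x):
    return 12 * x * (6 + 2 * x ** 2) ** 2

assert u(1) == 512          # the point lies on the curve
assert du(1) == 768         # 12 * 1 * 8^2

# Central-difference approximation of the derivative at x = 1
h = 1e-6
numeric = (u(1 + h) - u(1 - h)) / (2 * h)
assert abs(numeric - du(1)) < 1e-2

# The tangent line touches the curve at x = 1
tangent = lambda x: 768 * x - 256
assert tangent(1) == u(1)
```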
## College Algebra (10th Edition)

Published by Pearson

# Chapter R - Section R.5 - Factoring Polynomials - R.5 Assess Your Understanding - Page 57: 68

#### Answer

$(3x-4)(x-2)$

#### Work Step by Step

To factor a trinomial of the form $ax^{2}+bx+c$, we find two numbers whose product is $ac$ and whose sum is $b$, use them to split the middle term, and factor by grouping. Here $ac = 3 \times 8 = 24$ and $b = -10$, so the two numbers are $-6$ and $-4$:

$3x^{2}-10x+8 = 3x^{2}-6x-4x+8 = 3x(x-2)-4(x-2) = (3x-4)(x-2)$
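The factorization can be verified by expanding the product of the two linear factors, here via the coefficient convolution:

```python
# Expand (3x - 4)(x - 2) and confirm it reproduces 3x^2 - 10x + 8.
# (a1*x + a0)(b1*x + b0) = a1*b1*x^2 + (a1*b0 + a0*b1)*x + a0*b0
a1, a0 = 3, -4   # coefficients of 3x - 4
b1, b0 = 1, -2   # coefficients of x - 2
expanded = (a1 * b1, a1 * b0 + a0 * b1, a0 * b0)
assert expanded == (3, -10, 8)
```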
## Version 1.0.0 June 2020 Recent Changes

HSL_MA85 uses a direct method to solve large-scale diagonally-weighted linear least squares problems. Given an $m \times n$ ($m \ge n$) matrix $A = \{ a_{ij} \}$, an $m \times m$ diagonal matrix of weights $W$, and an $m$-vector $b$, HSL_MA85 solves either the least squares problem

$$\min_x \| W(Ax - b) \|^2_2,$$

or the regularized least squares problem

$$\min_x \| W(Ax - b) \|^2_2 + \alpha\|x\|^2_2,$$

where $\alpha > 0$ is a regularization parameter chosen by the user. The matrix $A$ may contain one or more rows that are to be treated as dense but must otherwise be sparse. Rows of $A$ that lead to a large amount of fill-in in the normal matrix should be treated as dense (they may contain fewer than $n$ nonzero entries but generally have more nonzeros than the other rows of $A$). The package offers the option of (i) a Cholesky-based approach or (ii) an approach that uses a symmetric indefinite solver applied to the (regularized) augmented system.
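The problem HSL_MA85 solves can be sketched densely in NumPy via the regularized normal equations; this illustrates only the objective, not the package's sparse Cholesky or augmented-system machinery, and all names and data below are ours:

```python
import numpy as np

# Dense sketch of min_x ||W(Ax - b)||_2^2 + alpha * ||x||_2^2 on a toy
# problem, solved through the regularized normal equations
# (A^T W^2 A + alpha * I) x = A^T W^2 b.
rng = np.random.default_rng(0)
m, n, alpha = 8, 3, 1e-2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
w = rng.uniform(0.5, 2.0, m)   # diagonal weights
W2 = np.diag(w ** 2)

x = np.linalg.solve(A.T @ W2 @ A + alpha * np.eye(n), A.T @ W2 @ b)

# Optimality check: the gradient of the objective vanishes at x
grad = 2 * (A.T @ W2 @ (A @ x - b) + alpha * x)
assert np.allclose(grad, 0, atol=1e-8)
```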
12 CFR § 324.145 - Recognition of credit risk mitigants for securitization exposures.

§ 324.145 Recognition of credit risk mitigants for securitization exposures.

(a) General. An originating FDIC-supervised institution that has obtained a credit risk mitigant to hedge its securitization exposure to a synthetic or traditional securitization that satisfies the operational criteria in § 324.141 may recognize the credit risk mitigant, but only as provided in this section. An investing FDIC-supervised institution that has obtained a credit risk mitigant to hedge a securitization exposure may recognize the credit risk mitigant, but only as provided in this section.

(b) Collateral - (1) Rules of recognition. An FDIC-supervised institution may recognize financial collateral in determining the FDIC-supervised institution's risk-weighted asset amount for a securitization exposure (other than a repo-style transaction, an eligible margin loan, or an OTC derivative contract for which the FDIC-supervised institution has reflected collateral in its determination of exposure amount under § 324.132) as follows. The FDIC-supervised institution's risk-weighted asset amount for the collateralized securitization exposure is equal to the risk-weighted asset amount for the securitization exposure as calculated under the SSFA in § 324.144 or under the SFA in § 324.143 multiplied by the ratio of adjusted exposure amount (SE*) to original exposure amount (SE), where:

(i) SE* equals max {0, [SE − C × (1 − Hs − Hfx)]};

(ii) SE equals the amount of the securitization exposure calculated under § 324.142(e);

(iii) C equals the current fair value of the collateral;

(iv) Hs equals the haircut appropriate to the collateral type; and

(v) Hfx equals the haircut appropriate for any currency mismatch between the collateral and the exposure.
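The collateral adjustment in paragraph (b)(1) is mechanical arithmetic; the following sketch uses hypothetical numbers that are not from the rule text:

```python
# Worked example of SE* = max{0, SE - C * (1 - Hs - Hfx)} from (b)(1)(i),
# with hypothetical inputs.
def adjusted_exposure(SE, C, Hs, Hfx):
    return max(0.0, SE - C * (1 - Hs - Hfx))

SE = 100.0   # securitization exposure amount (hypothetical)
C = 40.0     # current fair value of collateral (hypothetical)
Hs = 0.08    # collateral-type haircut (hypothetical)
Hfx = 0.08   # currency-mismatch haircut (8% per (b)(3)(ii))

SE_star = adjusted_exposure(SE, C, Hs, Hfx)
assert abs(SE_star - 66.4) < 1e-9       # 100 - 40 * 0.84 = 66.4

# Risk-weighted assets then scale by the ratio SE*/SE
ratio = SE_star / SE
assert abs(ratio - 0.664) < 1e-9
```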
(2) Mixed collateral. Where the collateral is a basket of different asset types or a basket of assets denominated in different currencies, the haircut on the basket will be $H=\sum_{i} a_{i}H_{i}$, where $a_i$ is the current fair value of the asset in the basket divided by the current fair value of all assets in the basket and $H_i$ is the haircut applicable to that asset.

(3) Standard supervisory haircuts. Unless an FDIC-supervised institution qualifies for use of and uses own-estimates haircuts in paragraph (b)(4) of this section:

(i) An FDIC-supervised institution must use the collateral type haircuts (Hs) in Table 1 to § 324.132 of this subpart;

(ii) An FDIC-supervised institution must use a currency mismatch haircut (Hfx) of 8 percent if the exposure and the collateral are denominated in different currencies;

(iii) An FDIC-supervised institution must multiply the supervisory haircuts obtained in paragraphs (b)(3)(i) and (ii) of this section by the square root of 6.5 (which equals 2.549510); and

(iv) An FDIC-supervised institution must adjust the supervisory haircuts upward on the basis of a holding period longer than 65 business days where and as appropriate to take into account the illiquidity of the collateral.

(4) Own estimates for haircuts. With the prior written approval of the FDIC, an FDIC-supervised institution may calculate haircuts using its own internal estimates of market price volatility and foreign exchange volatility, subject to § 324.132(b)(2)(iii). The minimum holding period (TM) for securitization exposures is 65 business days.

(c) Guarantees and credit derivatives - (1) Limitations on recognition.
An FDIC-supervised institution may only recognize an eligible guarantee or eligible credit derivative provided by an eligible guarantor in determining the FDIC-supervised institution's risk-weighted asset amount for a securitization exposure.

(2) ECL for securitization exposures. When an FDIC-supervised institution recognizes an eligible guarantee or eligible credit derivative provided by an eligible guarantor in determining the FDIC-supervised institution's risk-weighted asset amount for a securitization exposure, the FDIC-supervised institution must also:

(i) Calculate ECL for the protected portion of the exposure using the same risk parameters that it uses for calculating the risk-weighted asset amount of the exposure as described in paragraph (c)(3) of this section; and

(ii) Add the exposure's ECL to the FDIC-supervised institution's total ECL.

(3) Rules of recognition. An FDIC-supervised institution may recognize an eligible guarantee or eligible credit derivative provided by an eligible guarantor in determining the FDIC-supervised institution's risk-weighted asset amount for the securitization exposure as follows:

(i) Full coverage. If the protection amount of the eligible guarantee or eligible credit derivative equals or exceeds the amount of the securitization exposure, the FDIC-supervised institution may set the risk-weighted asset amount for the securitization exposure equal to the risk-weighted asset amount for a direct exposure to the eligible guarantor (as determined in the wholesale risk weight function described in § 324.131), using the FDIC-supervised institution's PD for the guarantor, the FDIC-supervised institution's LGD for the guarantee or credit derivative, and an EAD equal to the amount of the securitization exposure (as determined in § 324.142(e)).

(ii) Partial coverage.
If the protection amount of the eligible guarantee or eligible credit derivative is less than the amount of the securitization exposure, the FDIC-supervised institution may set the risk-weighted asset amount for the securitization exposure equal to the sum of:

(A) Covered portion. The risk-weighted asset amount for a direct exposure to the eligible guarantor (as determined in the wholesale risk weight function described in § 324.131), using the FDIC-supervised institution's PD for the guarantor, the FDIC-supervised institution's LGD for the guarantee or credit derivative, and an EAD equal to the protection amount of the credit risk mitigant; and

(B) Uncovered portion. (1) 1.0 minus the ratio of the protection amount of the eligible guarantee or eligible credit derivative to the amount of the securitization exposure; multiplied by (2) The risk-weighted asset amount for the securitization exposure without the credit risk mitigant (as determined in §§ 324.142 through 324.146).

(4) Mismatches. The FDIC-supervised institution must make applicable adjustments to the protection amount as required in § 324.134(d), (e), and (f) for any hedged securitization exposure and any more senior securitization exposure that benefits from the hedge. In the context of a synthetic securitization, when an eligible guarantee or eligible credit derivative covers multiple hedged exposures that have different residual maturities, the FDIC-supervised institution must use the longest residual maturity of any of the hedged exposures as the residual maturity of all the hedged exposures.
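The partial-coverage rule in (c)(3)(ii) likewise reduces to arithmetic; all risk-weighted asset inputs in the sketch below are hypothetical:

```python
# Sketch of (c)(3)(ii): RWA = RWA_guarantor(covered portion)
#                           + (1 - P/SE) * RWA_without_mitigant
P = 60.0               # protection amount of the guarantee (hypothetical)
SE = 100.0             # securitization exposure amount (hypothetical)
rwa_guarantor = 12.0   # assumed RWA for a direct exposure to the guarantor, EAD = P
rwa_unhedged = 125.0   # assumed RWA for the exposure without the mitigant

rwa = rwa_guarantor + (1.0 - P / SE) * rwa_unhedged
assert abs(rwa - 62.0) < 1e-9   # 12 + 0.4 * 125 = 62
```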
# GMAT Math Questions | Geometry #12

#### Length of the diagonal of a square | Rates | Speed Distance Time

This GMAT quant problem solving practice question is from Mensuration (Solid Geometry). Concept: length of the diagonal of a square and elementary speed, distance, and time. A medium difficulty GMAT 650 level question.

Question 12: The area of a square field is 24200 sq m. How long will a lady take to cross the field diagonally at the rate of 6.6 km/hr?

1. 3 minutes
2. 0.04 hours
3. 2 minutes
4. 2.4 minutes
5. 2 minutes 40 seconds

### Explanatory Answer | GMAT Geometry

#### Step 1 to solving this GMAT Geometry Question: Compute the length of the diagonal of the square

Let 'a' meters be the length of a side of the square field. Therefore, its area $= a^2$ square meters. --- (1)

The length of the diagonal 'd' of a square whose side is 'a' meters $= \sqrt{2}\,a$ --- (2)

From (1) and (2), we can deduce that the square of the diagonal $d^2 = 2a^2 = 2 \times$ (area of the square), or $d = \sqrt{2 \times \text{area}}$ meters.

$d = \sqrt{2 \times 24200} = \sqrt{48400} = 220$ m.

#### Step 2 to solving this GMAT Geometry Question: Compute the time taken to cross the field

The time taken to cross a distance of 220 meters while traveling at 6.6 kmph $= \frac{\text{220 m}}{\text{6.6 kmph}}$

#### Convert unit of speed from kmph to m/min

1 km = 1000 meters and 1 hour = 60 minutes. So, 6.6 kmph $= \frac{6.6 \times 1000}{60}$ m/min $= 110$ m/min.

∴ time taken $= \frac{\text{220 m}}{\text{110 m/min}} = 2$ minutes.

#### Choice C is the correct answer.
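The two steps of the solution can be checked mechanically:

```python
import math

# Step 1: diagonal of the square field from its area
area = 24200.0                       # sq m
d = math.sqrt(2 * area)              # d = sqrt(2 * area)
assert abs(d - 220.0) < 1e-9

# Step 2: convert 6.6 km/hr to m/min, then divide distance by speed
speed_m_per_min = 6.6 * 1000 / 60    # 110 m/min
assert abs(speed_m_per_min - 110.0) < 1e-9

time_minutes = d / speed_m_per_min
assert abs(time_minutes - 2.0) < 1e-9   # choice C
```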
Question: Compute the reflexive closure and then the transitive closure of the relation below. Using Warshall's algorithm, compute the reflexive-transitive closure of the relation, showing the matrix after each pass of the outermost for loop.

A set is closed under an operation if performing that operation on members of the set always produces a member of the set; for example, the positive integers are closed under addition. The closure of a relation R with respect to a property P is the relation obtained by adding the minimum number of ordered pairs to R so that it has property P.

Reflexive closure. The reflexive closure of a binary relation R on a set A is the minimal reflexive relation on A that contains R: to make a relation reflexive, we add only the missing "self" pairs. The diagonal relation on A is Δ = {(a, a) | a ∈ A}; its matrix has all entries of its main diagonal equal to 1.

Theorem: the reflexive closure of a relation R is R ∪ Δ.

Recall that the union of relations in matrix form is represented by the sum of matrices, with addition performed according to Boolean arithmetic rules. Given the connection matrix M of a finite relation, the matrix of its reflexive closure is therefore obtained by changing all zeroes to ones on the main diagonal of M — that is, by forming the Boolean sum M ∨ I, where I is the identity matrix of the appropriate dimension.

Symmetric closure. The symmetric closure of R is R ∪ R⁻¹; in matrix form, M ∨ Mᵀ.

Transitive closure. To move toward transitivity, a pair ⟨x, z⟩ must be added whenever there is some y with both ⟨x, y⟩ ∈ R and ⟨y, z⟩ ∈ R, and this step must be iterated. The transitive closure is the reachability relation: stored as a matrix T, T[i][j] = 1 exactly when node j can be reached from node i through one or more hops. Note that M ∨ M² (Boolean product) only captures paths of length at most two; in general the transitive closure of an n × n relation matrix is M ∨ M² ∨ … ∨ Mⁿ. The transitive closure of the adjacency relation of a directed acyclic graph (DAG) is the reachability relation of the DAG, and a strict partial order. If instead of the transitive closure you want the reflexive-transitive closure (the smallest transitive and reflexive relation containing the given one), the computation simplifies, since 0-length paths are now allowed.

Notes on matrices: an n × m matrix over a set S is an array of elements from S with n rows and m columns; each element is called an entry, the entry in row i and column j is denoted A_{i,j}, and a matrix is square if the number of rows equals the number of columns.

Algorithm (transitive closure of a zero-one n × n matrix M_R):

    A = M_R; B = A
    for i = 2 to n:
        A = A ⊙ M_R    (Boolean matrix product)
        B = B ∨ A
    return B    (B is the zero-one matrix for the transitive closure)

Warshall's algorithm is a faster way to compute the same closure, running in O(n³) bit operations. The graph is given in the form of an adjacency matrix graph[V][V], where graph[i][j] is 1 if there is an edge from vertex i to vertex j or i equals j, and 0 otherwise; the closure can also be computed by depth-first search from each vertex.

Example. Let R be the relation on {a, b, c, d} given by R = {(a, b), (a, c), (b, a), (d, b)}. Find (1) the reflexive closure of R, (2) the symmetric closure of R, and (3) the transitive closure of R, expressing each answer as a matrix, a directed graph, or by the roster method. Another example: let A = {4, 6, 8, 10} and R = {(4, 4), (4, 10), (6, 6), (6, 8), (8, 10)}; find the reflexive closure of R.

Equivalence relations. A relation R is an equivalence relation iff it is reflexive, symmetric, and transitive. Finding the equivalence relation associated with an arbitrary relation boils down to finding the connected components of the corresponding graph. By contrast, a relation R is non-reflexive iff it is neither reflexive nor irreflexive; for example, "loves" — a binary relation on the set of people in the world, dead or alive — is non-reflexive, since there is no logical reason to infer that somebody loves herself or does not love herself.

Beyond Boolean matrices, transitivity of generalized fuzzy matrices has been studied over a special type of semiring called an incline algebra, which generalizes Boolean algebra, fuzzy algebra, and distributive lattices; the transitive closure of an incline matrix and the convergence of powers of transitive incline matrices are considered there. The concepts of reflexive, symmetric, and transitive closure also carry over to the soft set context, where the construction of the transitive closure satisfies Warshall's algorithm.

Reference: Weisstein, Eric W. "Reflexive Closure." From MathWorld — A Wolfram Web Resource. https://mathworld.wolfram.com/ReflexiveClosure.html
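The notes mention Warshall's algorithm and ask for the three closures of R = {(a, b), (a, c), (b, a), (d, b)} on {a, b, c, d}; here is a minimal Python sketch (encoding a…d as indices 0…3 is my choice, not from the notes):

```python
# Reflexive, symmetric, and transitive (Warshall) closures of a 0/1 matrix.
# Example relation from the notes: R = {(a,b), (a,c), (b,a), (d,b)}.

def reflexive_closure(m):
    """Set every main-diagonal entry to 1 (Boolean sum M v I)."""
    n = len(m)
    return [[1 if i == j else m[i][j] for j in range(n)] for i in range(n)]

def symmetric_closure(m):
    """Boolean sum M v M^T."""
    n = len(m)
    return [[m[i][j] | m[j][i] for j in range(n)] for i in range(n)]

def warshall(m):
    """Transitive closure: after pass k, t[i][j] == 1 iff j is reachable
    from i using only intermediate vertices drawn from {0, ..., k}."""
    n = len(m)
    t = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                t[i][j] |= t[i][k] & t[k][j]
    return t

R = [[0, 1, 1, 0],   # a -> b, a -> c
     [1, 0, 0, 0],   # b -> a
     [0, 0, 0, 0],
     [0, 1, 0, 0]]   # d -> b

print(warshall(R))
```

The transitive closure adds (a,a), (b,b), (b,c), (d,a), and (d,c), matching what chasing chains like d → b → a → c by hand produces.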
# Configuring apps with “.env” files

## Syntactic sugar for compiled code

### Assumed audience: People who run code in multiple places

This is mostly Python tips for now, but I want this because it is potentially cross-platform; it is one part of configuring ML workflows. There are many, many Python tools to load environment variables from local files. A good place to find generic resources on that is “Twelve-Factor App” configuration, but that includes lots of extraneous stuff for web people in particular which I do not care about; I just want the environment-config part.

One system I have used is dotenv. dotenv allows easy configuration through OS environment variables or text files in the parent directory.

PRO-TIP: there are lots of packages with similar names but dissimilar functions.

```shell
pip install python-dotenv
# or
conda install -c conda-forge python-dotenv
```

Also very similar: henriquebastos/python-decouple. sloria/environs offers simplified environment variable parsing. Dynaconf is more sophisticated, and comes closer to a full configuration system like hydra.

Let us imagine we are using basic dotenv for now, for concreteness. Then we can be indifferent to whether values came from a filesystem config or an environment variable.

```python
import os
from dotenv import load_dotenv

load_dotenv()  # take environment variables from .env

# Code of your application, which uses environment variables (e.g. from
# os.environ or os.getenv) as if they came from the actual environment.
DATA_PATH = os.path.expandvars('$DATA_PATH/$DATA_FILE')
```

There is a CLI too; its most useful feature is executing arbitrary stuff with the correct environment variables set.

```shell
pip install "python-dotenv[cli]"
dotenv run my_cool_script.py
```

AFAICS this should mean that dotenv is not restricted to Python; those .env files can be used in any language. That said, the CLI shown here only works for running Python scripts AFAICT.
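To make the behaviour concrete, here is a stdlib-only toy re-implementation of roughly what `load_dotenv` does. This sketch is mine, not python-dotenv's code; the real parser also handles quoting, `export` prefixes, and variable interpolation:

```python
import os

def load_dotenv_minimal(path=".env"):
    """Toy version of dotenv's load_dotenv: parse KEY=VALUE lines into
    os.environ. Like the real thing, it does not override variables that
    are already set (hence setdefault)."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            os.environ.setdefault(key.strip(), value.strip())

# Demo: write a sample .env (values invented), then load it.
with open(".env", "w") as f:
    f.write("# sample config\nDATA_PATH=/tmp/data\nDATA_FILE=train.csv\n")

load_dotenv_minimal()
print(os.path.expandvars("$DATA_PATH/$DATA_FILE"))
```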
# Martha Manufacturing produces a single product that sells for $80

Martha Manufacturing produces a single product that sells for $80. Variable costs per unit equal $32. The company expects total fixed costs to be $72,000 for the next month at the projected sales level of 2,000 units. In an attempt to improve performance, management is considering a number of alternative actions. Each situation is to be evaluated separately.

1. What is the breakeven point in units and in dollars?
2. Suppose management believes that a $16,000 increase in the monthly advertising expense will result in a considerable increase in sales. By how much must sales increase to justify this additional expenditure?
3. Suppose management believes that a 10% reduction in the selling price will result in a 10% increase in sales. What happens if this proposed reduction in selling price is implemented?

## 1 Approved Answer

1. Selling price per unit, SP = $80. Variable cost per unit, VC = $32. Contribution margin, CM = 80 − 32 = $48 per unit. Fixed costs = $72,000. Breakeven units = 72,000 / 48 = 1,500 units; breakeven sales = 1,500 × $80 = $120,000.
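The cost-volume-profit arithmetic behind all three parts can be sketched in a few lines (variable names are mine; the figures are from the problem statement):

```python
# Cost-volume-profit arithmetic for the Martha Manufacturing problem.
price = 80.0
variable_cost = 32.0
fixed_costs = 72_000.0

cm_per_unit = price - variable_cost           # contribution margin: $48/unit
be_units = fixed_costs / cm_per_unit          # breakeven: 1,500 units
be_dollars = be_units * price                 # breakeven sales: $120,000

# Q2: an extra $16,000 of advertising must be recovered by added
# contribution margin.
extra_units = 16_000.0 / cm_per_unit          # ~333.3 more units
extra_sales_dollars = extra_units * price     # ~$26,667 more in sales

# Q3: 10% price cut with 10% volume gain.
new_price = price * 0.9                       # $72
new_cm = new_price - variable_cost            # $40/unit
profit_now = 2_000 * cm_per_unit - fixed_costs   # $24,000
profit_new = 2_200 * new_cm - fixed_costs        # $16,000: profit falls
print(be_units, be_dollars, profit_now, profit_new)
```

Since profit would drop from $24,000 to $16,000, the price cut is not justified on these assumptions.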
# Math Help - Differentiation Problem

1. ## Differentiation Problem

Can anyone help me out with this one problem please? I didn't really learn this type of problem yet.

Find a and b such that f is differentiable everywhere:

f(x) = ax^3 for x ≤ 2, and f(x) = x^2 + b for x > 2.

Thanks

2. Originally Posted by soldatik21 (question quoted above)

Hint: If $f(x)$ is differentiable everywhere, then it is continuous and smooth at all points. Here, the only possible point of discontinuity is at $x = 2$. So you need to find an $a$ and $b$ so that the function is the same at that point. Have a go.

3. I understand the part that at x = 2 the two functions have to be equal, but doesn't that give you more than one answer? For example, a can be 2 and b can be 12, or a can be 1 and b can be 4?
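For what it's worth, matching values alone does leave infinitely many pairs, which is the confusion in post 3; differentiability adds a second condition (equal one-sided derivatives at x = 2), and the two together pin down a unique pair. A tiny numeric check:

```python
# Continuity at x = 2:       a * 2**3 = 2**2 + b   ->  8a = 4 + b
# Equal derivatives at x = 2: 3a * 2**2 = 2 * 2    ->  12a = 4
a = 4 / 12       # a = 1/3 from the derivative condition
b = 8 * a - 4    # b = -4/3 from the continuity condition
print(a, b)
```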
# 28th IAEA Fusion Energy Conference (FEC 2020)

10-15 May 2021, Nice, France (Europe/Vienna timezone). The Conference will be held virtually from 10-15 May 2021.

## [REGULAR POSTER TWIN] Quasi-symmetric error field correction in tokamaks

11 May 2021, 14:00, 4h 45m. Regular Poster, Magnetic Fusion Experiments.

### Speaker

Jong-Kyu Park (Princeton Plasma Physics Laboratory)

### Description

A predictive 3D optimizing scheme in tokamaks is revealing a robust path of error field correction (EFC) across both resonant and non-resonant field spectrum. The new scheme essentially finds a way to deform tokamak plasmas in the presence of non-axisymmetric error fields while restoring a quasi-symmetry in particle orbits as much as possible. Such a “quasi-symmetric magnetic perturbation” (QSMP) has been predictively optimized by general perturbed equilibrium code (GPEC) {1} and successfully tested in DIII-D and KSTAR tokamak plasmas (Fig. 1). The QSMPs in experiments demonstrated no performance degradation despite the large overall amplitudes of 3D fields, as clearly compared with a resonant magnetic perturbation (RMP) or a typical non-resonant magnetic perturbation (NRMP) (Fig. 2). The results indicate that the tokamak EFC can be improved beyond the present resonant-overlap EFC approach alone, if the residual non-axisymmetry can be further compensated to a level of QSMP. The studies also validate that the torque response matrix, a unique product of the GPEC formulation, can be used to assess the degree of not only resonant but also non-resonant EF correction – which has been desired by ITER for a long time. Successful EFC is critical in tokamaks for preventing disruptions, especially in next-step devices like ITER due to unfavorable scaling with high $B_T$ and $\beta_N$.
Improved understanding of plasma response in the last decade allowed the development of a more reliable approach than earlier ones, using the resonant-overlap field {2}, i.e. the error field component that triggers the dominant RMP response. This led to the successful multi-machine scaling of resonant EF thresholds against disruptive locked modes across wide operational regimes and 3D field spectra, including the n=1 and n=2 toroidal mode numbers {3}. Recent highlights include the successful EFC against 2/1 locked modes, due to error fields from the high-field side (HFS), using the low-field-side (LFS) coils in COMPASS and NSTX-U, despite an overall increase of non-axisymmetry in both cases. These experiments, however, also posed an important question that must be addressed: how to quantify and compensate the next key mode, or simply NRMPs. The residual EFs after the dominant RMP correction remained disruptive during L-H transitions in COMPASS {4}, and significantly degraded performance in evolving discharges in NSTX-U {5}. Such residual EFs also generally drive rotational damping through uncorrected NRMPs, as shown in DIII-D {6}. It turns out that the residual EF effects can be almost entirely suppressed if additional coils are available to minimize the neoclassical toroidal viscosity (NTV) simultaneously with the dominant RMP correction. To experimentally demonstrate this, it is necessary to have 3 rows of coils – one coil to generate a strong proxy EF and RMP response, another coil to compensate the dominant resonant response while leaving a NRMP response, and yet another to minimize the remaining NRMP-driven NTV and leave only QSMPs. Figure 1 shows the n=1 coil configurations designed to test QSMPs in DIII-D and KSTAR, with the predicted torque profiles compared to RMPs and NRMPs. One can see that the local torque near resonant layers in RMP is significantly reduced in NRMP, and the global non-resonant torque in NRMP is minimized in QSMP.
These fields were applied with maximum coil currents to high performance ($\beta_N\sim3$) DIII-D plasmas with ~9MW NBI power. As shown in Fig. 2, there was no change observed in performance and confinement during the QSMP, compared to the clear rotation braking with the NRMP or the strong density pumping and rotation braking with the RMP which eventually led to a locked mode. Surprisingly, the NRMP remained disruptive during L-H transition when tested with marginal DIII-D H-modes, but the effect could be eliminated in QSMP, suggesting a potential resolution to the aforementioned issue in COMPASS with the dominant RMP correction alone. The QSMP optimization was also performed in KSTAR discharges ($\beta_N\sim2$), again showing no performance changes in contrast to the RMP and NRMP (Fig. 3). Note that KSTAR can make a pure NRMP with strong NTV {7}, as can be seen by the complete elimination of local resonant torque in Fig. 1, by taking advantage of its 3 rows of in-vessel coils and also low intrinsic EF. EFC will never eliminate all small 3D fields in a tokamak and it, in fact, commonly increases them when the correction coils are shaped very differently from the intrinsic error field sources. Instead, one can identify a safe and robust 3D state such as a QSMP and see if it is accessible in the course of EFC. The self-consistent perturbed equilibrium calculations with neoclassical transport in GPEC offer a torque response matrix $T(\psi)$, from which one can immediately predict the torque profile by quadratic operation $\Phi^\dagger\cdot T\cdot\Phi$, where $\Phi$ is a field spectrum vector on a toroidal surface or coil vector representing its amplitude and phase by complex numbers {1}.
The minimum eigenstate of $T(\psi)$ then represents the best possible way to deform the plasma while sustaining the minimum variation in the field strength or action variation ($\delta B_L \approx 0$, $\delta J \approx 0$) as well as minimum resonant parallel current and corresponding resonant torque, i.e. achieving quasi-symmetry. In summary, new experiments demonstrated that a QSMP could be an ideal EFC state without any performance degradation, offering a new and complementary EFC approach in addition to the present resonant-overlap method and also a possible resolution of the non-resonant error-field correction problem. The torque response matrix in GPEC will enable the QSMP optimization in more complicated 3D tokamak environments such as ITER, with many more 3D coils and potential EF sources. This QSMP is also an interesting concept in and of itself, as it holds a sizable local perturbation at least near the divertor, and its possible utility will be further discussed. *This research was supported by U.S. DOE contracts #DE-AC02-09CH11466 (PPPL), #DE-FC02-04ER54698 (DIII-D), and also by the Korean Ministry of Science, ICT and Future Planning (KSTAR). {1} J.-K. Park and N. C. Logan, Phys. Plasmas 24, 032505 (2017) {2} J.-K. Park, N. C. Logan et al., “Assessment of EFC criteria for ITER”, ITPA MDC-19 Report (2017) {3} N. C. Logan, J.-K. Park et al., “Scaling of the n=2 error field threshold in tokamaks”, submitted to Nucl. Fusion (2020) {4} T. Markovic et al., the 45th EPS DPP in Prague, Czech Rep. (2018) {5} N. Ferraro, J.-K. Park et al., Nucl. Fusion 59, 086021 (2019) {6} C. Paz-Soldan, N. C. Logan et al., Nucl. Fusion 55, 083012 (2015) {7} Y. In, Y. M. Jeon et al., Nucl.
Fusion 59, 056009 (2019)

Affiliation: Princeton Plasma Physics Laboratory, United States

### Primary author

Jong-Kyu Park (Princeton Plasma Physics Laboratory)

### Co-authors

Nikolas Logan (Princeton Plasma Physics Laboratory), SeongMoo Yang (Princeton Plasma Physics Laboratory), Qiming Hu (Princeton Plasma Physics Laboratory), Caoxiang Zhu (Princeton Plasma Physics Laboratory), M. Zarnstorff (Princeton Plasma Physics Laboratory), Carlos Paz-Soldan (General Atomics), Edward Strait (General Atomics), Tomas Markovic (Institute of Plasma Physics CAS), Y.M. Jeon (National Fusion Research Institute), Won Ha Ko (Korea, Republic of)
# How do you solve q/24=1/6?

Jan 9, 2017

See explanation.

#### Explanation:

To solve this equation you have to multiply both sides by $24$:

$\frac{q}{24} = \frac{1}{6}$

$q = 24 \cdot \frac{1}{6}$

$q = \frac{24}{6}$

$q = 4$
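The same arithmetic can be checked with exact rational numbers, for example in Python:

```python
from fractions import Fraction

# Exact rational check of the algebra above.
q = 24 * Fraction(1, 6)           # multiply both sides of q/24 = 1/6 by 24
assert q == 4
assert q / 24 == Fraction(1, 6)   # the solution satisfies the original equation
```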
# Maximum Value of Trig Expression

What is the general method for finding the maximum and minimum value of a trig expression without the use of a calculator? For example, given the expression: $$\sin(3x) + 2 \cos(3x) \text{ where } - \infty < x < \infty$$ how would one go about finding the maximum and minimum values achieved in functions such as these and others with more than two trig functions?

---

$$f(x) = \sin{(3 x)} + 2 \cos{(3 x)}$$ $$f'(x) = 3 \cos{(3 x)} - 6 \sin{(3 x)}$$ Set $f'(x)$ equal to zero for maxima or minima. $$f'(x) = 0 \implies 3 \cos{(3 x)} - 6 \sin{(3 x)} = 0$$ or $$\tan{(3 x)} = \frac{1}{2} \implies x = \frac{1}{3} \arctan{\left ( \frac{1}{2} \right )} + \frac{k \pi}{3}$$ where $k \in \mathbb{Z}$. Determine if max or min using $f''(x)$: $$f''(x) = -9 \sin{(3 x)} - 18 \cos{(3 x)} \implies f''{\left [ \frac{1}{3} \arctan{\left ( \frac{1}{2} \right )} \right ]} = -\frac{9}{\sqrt{5}} - \frac{36}{\sqrt{5}}<0$$ so that this point is a maximum. On the other hand, $$f''{\left [ \frac{1}{3} \left( \arctan{\left ( \frac{1}{2} \right )} + \pi \right) \right ]} = \frac{9}{\sqrt{5}} + \frac{36}{\sqrt{5}}>0$$ so this point is a minimum.

---

Hint: Consider an angle with tangent $2$, and use the addition theorem for sines and cosines.

---

Write $\sin(3x) + 2\cos(3x) = \sqrt{5}(1/\sqrt{5}\sin(3x) + 2/\sqrt{5}\sin(2x))$ You can use the addition identities to get the rest. I answer the question. There is a $\theta$ so that $\cos(\theta) = 1/\sqrt{5}$ and so $\sin(\theta) = 2/\sqrt{5}$. Use that $\theta$; in this case it's $\sin^{-1}(2/\sqrt{5})$.

- This, and the solution above it, use no calculus. – ncmathsadist Feb 8 '13 at 1:43
- Is there a typo in your expression or is it correct as is? When I multiply through by $\sqrt{5}$ I don't get the original expression. Also, telling me to use the addition identities is kind of vague and I'm still not sure how to use that to help me solve future problems.
If you don't want to explain further, do you know of any good links with full explanations? – Amateur Math Guy Feb 8 '13 at 1:44
- Let $a$ and $b$ be real numbers. Put $c = \sqrt{a^2 + b^2}$. You can always write $a\cos(x) + b\sin(x) = c\left(\frac{a}{c}\cos(x) + \frac{b}{c}\sin(x)\right)$. This is the abstraction of the principle I elucidated. – ncmathsadist Feb 8 '13 at 2:03

---

Here is how you can simplify the expression. Let $3x = v$, so the expression becomes $$\sin v + 2\cos v = \sqrt{5}\left(\frac{\sin v}{\sqrt{5}} + \frac{2\cos v}{\sqrt{5}}\right). \tag{1}$$ Now let $\tan y = \frac{1}{2}$. From the definition of $\tan$ you can derive $\sin y = \frac{1}{\sqrt{1^2 + 2^2}} = \frac{1}{\sqrt{5}}$ and $\cos y = \frac{2}{\sqrt{5}}$. Equation (1) then becomes $$\sqrt{5}(\sin v\sin y + \cos v\cos y) = \sqrt{5}\cos(v - y) = \sqrt{5}\cos T$$ (writing $T = v - y$, some real number). This is greatest for the greatest value of $\cos T$ and least for its least value, since $\sqrt{5}$ is positive. Now $-1 \le \cos T \le 1$, so the maximum value is $1 \cdot \sqrt{5} = \sqrt{5}$ and the minimum value is $-\sqrt{5}$. In general, the maximum and minimum values of $a\sin T \pm b\cos T \pm k$ are $\sqrt{a^2 + b^2} \pm k$ and $-\sqrt{a^2 + b^2} \pm k$ respectively.

I couldn't understand Ron Gordon's answer (what is the meaning of $f'(x)$ or $f''(x)$? I Googled it and found that it is differentiation, but I couldn't understand it). If someone could please answer this query it would be highly appreciated. (I am just a 10th grader from India, so if it is something I will learn later, tell me so.)

- $f'$ is the derivative of $f$. You can imagine the value $f'(x)$ as the slope of the graph of $f$ at the point $x$. – azimut Jul 30 '13 at 7:47
- I wrote the solution as I did because the OP requested a general methodology for finding max and min of sums of sines and cosines. While there are particular tricks you can provide for specific expressions, the general approach to finding maxima and minima over an unbounded interval is to use techniques of differential calculus.
As @amzoti stated, the $f'$ represents a slope of the function; when the slope is zero, we have a max or min in most cases. If you are interested, you may find many terrific introductions to the topic online. –  Ron Gordon Jul 30 '13 at 10:32
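The calculus answer and the addition-theorem answer agree numerically; a quick sketch in Python, with the coefficients $a=1$, $b=2$ taken from the expression above:

```python
import math

# The critical point from the calculus answer attains the amplitude
# sqrt(1^2 + 2^2) = sqrt(5) predicted by the addition-theorem answer.
a, b = 1.0, 2.0                      # coefficients of sin(3x) and cos(3x)
R = math.hypot(a, b)                 # sqrt(5)

def f(x):
    return a * math.sin(3 * x) + b * math.cos(3 * x)

x_max = math.atan(a / b) / 3         # solves tan(3x) = 1/2
x_min = x_max + math.pi / 3          # next critical point, k = 1

assert abs(f(x_max) - R) < 1e-12     # maximum value is sqrt(5)
assert abs(f(x_min) + R) < 1e-12     # minimum value is -sqrt(5)
```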
# Steklov problems in the theory of orthogonal polynomials Problems in which the asymptotic properties of orthogonal polynomials are studied in dependence on the properties and, particularly, on the singularities, of the weight function and the domain of orthogonality. In the study of the orthogonal polynomials $\{P_n(x)\}$ on the interval $[-1,1]$ with weight $$h(x)=\frac{h_0(x)}{\sqrt{1-x^2}},\quad x\in(-1,1),\label{1}\tag{1}$$ the question arises on the conditions of boundedness of the sequence $\{P_n(x)\}$ at a specific point, on a certain set $A\subset[-1,1]$ or on the whole interval of orthogonality. This question is important because when the sequence $\{P_n(x)\}$ is bounded, certain properties of trigonometric Fourier series can be transferred to Fourier series in these orthogonal polynomials. V.A. Steklov proposed that for the inequality $$|P_n(x)|\leq C_1,\quad x\in A\subset[-1,1],\label{2}\tag{2}$$ to be fulfilled, it is necessary and sufficient that the condition $$h_0(x)\geq C_2>0,\quad x\in A\subset[-1,1],\label{3}\tag{3}$$ be fulfilled. The value of the function $h_0$ at a point $x$ where the inequalities \eqref{2} and \eqref{3} are examined must be connected to the values of this function at the points close to $x$, and the problem consists of deducing \eqref{2} from \eqref{3}, given minimal restrictions on the function $h_0$ in a neighbourhood of $x$ (the first Steklov problem). There are different local and global conditions (see [2], [5]) under which \eqref{2} follows from \eqref{3}. In particular, if in \eqref{1} the function $h_0$ is positive, continuous and satisfies certain extra conditions, then an asymptotic formula from which inequality \eqref{2} for the polynomials $\{P_n(x)\}$ follows when $A=[-1,1]$ holds. Moreover, Steklov examined cases of algebraic zeros of the weight function and established a series of results that served as the starting point of two directions of research. 
One of these is characterized by the so-called global, or uniform, estimation of the growth of orthonormal polynomials, which is obtained under fairly general conditions on the weight function (the second Steklov problem). For example (see [2]), if inequality \eqref{3} is fulfilled on the entire interval $[-1,1]$, then there is a sequence $\{\epsilon_n\}$, $\epsilon_n>0$, $\epsilon_n\to0$, such that the inequality $$|P_n(x)|\leq\epsilon_n\sqrt n,\quad x\in[-1,1],$$ holds. The third Steklov problem consists of studying the asymptotic properties of orthogonal polynomials given smooth singularities of the weight function. This course of research can also cover the asymptotic properties of the Jacobi polynomials, the weight function of which has singularities at the end-points of the interval of orthogonality, hence the difference between the asymptotic properties of Jacobi polynomials within the interval $(-1,1)$ and at its end-points. The difference between results in this direction and the global estimates of orthogonal polynomials is explained by the fact that in this case the weight function may, at certain points, vanish or become infinite of a definite order, and by the fact that it satisfies certain conditions of smoothness. Asymptotic formulas and estimates for orthogonal polynomials are established separately at singular points of the weight function (zeros, poles, end-points of the interval of orthogonality) and on the rest of the interval of orthogonality. The formulations and, especially, the proofs of all the above questions are most natural when the polynomials are orthogonal on the circle, as many results of the approximation of periodic functions by trigonometric polynomials can then be used (cf. also Orthogonal polynomials on a complex domain). #### References [1a] V.A. Steklov, "Une contribution nouvelle au problème de développement des fonctions arbitraires en série de polynômes de Tchebychef" Izv. Ross. Akad. Nauk. , 15 (1921) pp.
267–280 [1b] V.A. Steklov, "Une méthode de la solution du problème de développement des fonctions en séries de polynômes de Tchebychef indépendante de la théorie de fermeture I" Izv. Ross. Akad. Nauk. , 15 (1921) pp. 281–302 [1c] V.A. Steklov, "Une méthode de la solution du problème de développement des fonctions en séries de polynômes de Tchebychef indépendante de la théorie de fermeture II" Izv. Ross. Akad. Nauk. , 15 (1921) pp. 303–326 [2] Ya.L. Geronimus, "Polynomials orthogonal on a circle and interval" , Pergamon (1960) (Translated from Russian) MR0133642 Zbl 0093.26503 [3] G. Szegö, "Orthogonal polynomials" , Amer. Math. Soc. (1975) MR0372517 Zbl 0305.42011 [4] P.K. Suetin, "Fundamental properties of polynomials orthogonal on a contour" Russian Math. Surveys , 21 : 2 (1966) pp. 35–83 Uspekhi Mat. Nauk , 21 : 2 (1966) pp. 41–88 MR0198111 Zbl 0182.09302 [5] P.K. Suetin, "V.A. Steklov's problem in the theory of orthogonal polynomials" J. Soviet Math. , 12 : 6 (1979) pp. 631–681 Itogi Nauk. i Tekhn. Mat. Anal. , 15 (1977) pp. 5–82 MR0493142 Zbl 0473.42016
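The boundedness behaviour in the first Steklov problem can be illustrated numerically. The sketch below handles only the simplest case $h_0 \equiv 1$ (for which the orthonormal polynomials are normalized Chebyshev polynomials); the quadrature rule and the maximum degree are arbitrary choices for the illustration.

```python
import numpy as np

# Orthonormal polynomials for the weight h(x) = h0(x)/sqrt(1-x^2) on
# [-1, 1], built by Gram-Schmidt. Gauss-Chebyshev quadrature absorbs
# the endpoint singularity of the weight exactly.
N = 200
k = np.arange(1, N + 1)
nodes = np.cos((2 * k - 1) * np.pi / (2 * N))
w = np.pi / N                              # equal Gauss-Chebyshev weights

h0 = np.ones_like(nodes)                   # satisfies h0(x) >= C2 > 0

def inner(p, q):
    """<p, q> = integral of p*q*h0/sqrt(1-x^2) over [-1, 1]."""
    return w * np.sum(np.polyval(p, nodes) * np.polyval(q, nodes) * h0)

deg_max = 8
basis = []                                 # coefficients, highest degree first
for n in range(deg_max + 1):
    p = np.zeros(n + 1)
    p[0] = 1.0                             # the monomial x^n
    for q in basis:
        p = np.polysub(p, inner(p, q) * q)
    basis.append(p / np.sqrt(inner(p, p)))

# With h0 = 1 these are normalized Chebyshev polynomials, so the whole
# sequence is uniformly bounded on [-1, 1] by sqrt(2/pi) for n >= 1 --
# the behaviour that inequality (2) asserts under condition (3).
grid = np.linspace(-1, 1, 2001)
sup = [np.max(np.abs(np.polyval(p, grid))) for p in basis]
```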
# B-Algebra is Quasigroup

## Theorem

Let $\left({X, \circ}\right)$ be a $B$-algebra. Then $\left({X, \circ}\right)$ is a quasigroup.

## Proof

By the definition of a quasigroup, it must be shown that $\forall x \in X$ the left and right regular representations $\lambda_x$ and $\rho_x$ are permutations on $X$. As $\left({X, \circ}\right)$ is a magma, $\forall x \in X$ the codomain of $\lambda_x$ and $\rho_x$ is $X$. Hence it is sufficient to prove that $\lambda_x$ and $\rho_x$ are bijections. We have that the regular representations of $B$-algebras are injective. Therefore: $\forall x \in X$: $\lambda_x$ and $\rho_x$ are injective mappings. We also have that the regular representations of $B$-algebras are surjective. Therefore: $\forall x \in X$: $\lambda_x$ and $\rho_x$ are both injective and surjective mappings. Hence, by definition, $\lambda_x$ and $\rho_x$ are bijections for all $x \in X$. $\blacksquare$
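A finite sanity check of the theorem in Python: the structure $(\mathbb{Z}_7, \circ)$ with $x \circ y = x - y \pmod 7$ satisfies the usual $B$-algebra axioms ($x \circ x = 0$, $x \circ 0 = x$, $(x \circ y) \circ z = x \circ (z \circ (0 \circ y))$), and its Cayley table is a Latin square, i.e. every $\lambda_x$ and $\rho_x$ is a permutation.

```python
# A concrete B-algebra: (Z_7, ∘) with x ∘ y = x - y (mod 7).
n = 7
X = range(n)
op = lambda x, y: (x - y) % n

# B-algebra axioms with identity element 0:
assert all(op(x, x) == 0 for x in X)
assert all(op(x, 0) == x for x in X)
assert all(op(op(x, y), z) == op(x, op(z, op(0, y)))
           for x in X for y in X for z in X)

# Quasigroup: lambda_x(y) = x∘y and rho_x(y) = y∘x are permutations,
# i.e. every row and every column of the Cayley table hits each element.
assert all(sorted(op(x, y) for y in X) == list(X) for x in X)
assert all(sorted(op(y, x) for y in X) == list(X) for x in X)
```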
# Find number of positive integer solutions

Find the number of positive integer solutions $(x,y,z)$ of the following equation: $19x + 11y + 8z = 240$ I reduced the equation modulo $8$ and then tried to equate remainders. It yields that $3(x + y) \equiv 0 \pmod 8$, i.e. $x + y$ is a multiple of 8. Can't choose which combinations of $x,y$ will do the work. Solved! For $x + y = 8$: all ordered pairs are allowed, and the corresponding values of $z$ are positive integers. This gives 7 solutions. For $x + y = 16$: only 7 pairs of $x,y$ are allowed, namely $(1,15), (2,14), (3,13), (4,12), (5,11), (6,10), (7,9)$. The pair $(8,8)$ yields $z = 0$, and the pairs beyond it yield negative values for $z$. Therefore, a total of 14 solutions.

• Not clear what you are asking. Do you mean integer solutions? What does nos mean? – MrYouMath Oct 16 '17 at 7:35
• Yes, the number of positive integer solutions. – Ajax Oct 16 '17 at 7:36
• Have you tried anything here? – Mark Bennet Oct 16 '17 at 7:45
• @MarkBennet what do you mean? – Ajax Oct 16 '17 at 7:46
• Well, have you made an attempt to solve this yourself? Where did you get stuck? Is there something about the situation you don't understand? It is a finite problem, so one way is simply to try all possible solutions. Have you tried any ways of solving this? There are tricks you can use with particular problems which might not work so well in the general case - are you looking for tricks or for general methods? Give some context so we can help. – Mark Bennet Oct 16 '17 at 7:49

I had a very similar method, which takes advantage of $19-11=8$ and also the factor $8$ generally. Rewrite as $$11(x+y)+8(x+z)=240=8\times 30$$ $x+y$ must be divisible by $8$, and the only possibilities are $x+y=8, 16$. In the first of these cases $x+y=8$ has seven solutions. In the second we find $x+z=8\ (=30-2\times 11)$, which has seven solutions. All these solutions evidently give a positive integer value for the missing variable, so there are fourteen solutions.
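The count of fourteen can be confirmed by brute force; the loop bounds below are crude upper bounds implied by positivity.

```python
# Enumerate all positive integer solutions of 19x + 11y + 8z = 240.
solutions = [(x, y, z)
             for x in range(1, 13)          # 19x < 240 forces x <= 12
             for y in range(1, 22)          # 11y < 240 forces y <= 21
             for z in range(1, 31)          # 8z  < 240 forces z <= 30
             if 19 * x + 11 * y + 8 * z == 240]

assert len(solutions) == 14
# Every solution has x + y divisible by 8, as the mod-8 argument shows.
assert all((x + y) % 8 == 0 for x, y, z in solutions)
```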
# Prove the following trigonometric identity.

$$\frac{\sin x - \cos x +1}{\sin x + \cos x -1}=\frac{\sin x +1}{\cos x}$$ I tried substituting $\sin^2x+\cos^2x = 1$ but I cannot solve it.

The method above amounts to verification and is always quick. Another method to arrive at the answer is by rationalising the denominator (useful mainly when the answer [or RHS] is not known, or one is asked to work only from LHS to RHS): $$\frac{\sin x - \cos x + 1 }{\sin x + \cos x - 1 }\cdot \frac{\sin x + \cos x + 1}{\sin x + \cos x + 1}$$ $$=\frac{ (\sin x + 1)^2 - \cos^2 x }{ 2 \sin x \cos x }$$ $$=\frac{ \sin^2 x + 2 \sin x + 1 - \cos^2 x }{ 2 \sin x \cos x }$$ $$=\frac{ \sin^2 x + 2 \sin x + \sin^2 x + \cos^2 x - \cos^2 x } {2 \sin x \cos x }$$ and the answer follows, i.e. $$\frac{\sin x + 1}{\cos x}.$$

• well done i say :) – user87543 Dec 12 '13 at 13:47
• I wouldn't call this rationalizing the denominator, as there's nothing necessarily rational or irrational about the denominator before or after the initial multiplication, but this technique does mirror the technique for rationalizing denominators: using conjugates. – Isaac Dec 12 '13 at 16:05

Hint $$\frac{a}{b}=\frac c d\iff ad=bc$$

• and obviously this is allowed when $b, d \neq 0$, which is when $x \not=\pi/2 + k\pi$ and $x\not=k\pi$ – Jekyll Dec 12 '13 at 13:39
• Exactly what's needed here! +1 – Namaste Dec 13 '13 at 16:00

$(\sin x- \cos x+1)\cos x = \sin x \cos x -\cos^2 x +\cos x$ $(\sin x+ \cos x-1)(\sin x +1) = \sin^2 x + \sin x \cos x +\cos x -1 = \sin x \cos x +\cos x +(\sin^2 x -1)= \sin x \cos x -\cos^2 x +\cos x$

As always, the method that "always" (never say "never" OR "always"...) works is the substitution $t = \tan \frac{x}{2},$ which makes $\sin(x) = \frac{2t}{1+t^2}, \cos(x)= \frac{1-t^2}{1+t^2},$ which makes an identity like this a mechanical verification.
Observe that the Right Hand side $\displaystyle\frac{\sin x+1}{\cos x}=\tan x+\sec x$ So, I want to utilize $\displaystyle\sec^2x-\tan^2x=1$ Dividing the numerator & the denominator by $\cos x,$ $$\frac{\sin x - \cos x +1}{\sin x + \cos x -1}=\frac{\tan x-1+\sec x}{\tan x+1-\sec x}$$ $$=\frac{\tan x+\sec x-(\sec^2x-\tan^2x)}{\tan x+1-\sec x}(\text{ Replacing }1\text{ with } \sec^2x-\tan^2x)$$ $$=\frac{(\sec x+\tan x)-(\sec x+\tan x)(\sec x-\tan x)}{\tan x+1-\sec x}$$ $$=\frac{(\sec x+\tan x)(1-\sec x+\tan x)}{\tan x+1-\sec x}$$ $$=\sec x+\tan x$$ I will start like Timotej, but finish differently. $$\cos x(\sin x-\cos x+1)=\cos x(1+\sin x)-\cos^2x=\cos x(1+\sin x)-(1-\sin^2x)$$ $$=(1+\sin x)\{\cos x-(1-\sin x)\}$$ $$\implies \cos x(\sin x-\cos x+1)=(1+\sin x)(\sin x+\cos x-1)$$ Now change the sides of $\displaystyle \cos x, \sin x+\cos x-1$ Let me derive some other identities $(1)\displaystyle\sin x(\sin x-\cos x+1)=\sin^2x+\sin x(1-\cos x)=1-\cos^2x+\sin x(1-\cos x)$ $\displaystyle\implies\sin x(\sin x-\cos x+1)=(1-\cos x)(\sin x+\cos x+1)$ $(2)\displaystyle\sin x(\sin x+\cos x-1)=\sin^2x-\sin x(1-\cos x)=(1-\cos x)(1+\cos x-\sin x)$ $(3)\displaystyle\cos x(\sin x+\cos x-1)=\cos^2x-\cos x(1-\sin x)=(1-\sin x)(1+\sin x-\cos x)$ and so on • @dona12, another method – lab bhattacharjee Dec 13 '13 at 3:50
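A quick numeric spot-check of the identity, evaluated at points where both sides are defined (i.e. $\cos x \neq 0$ and $\sin x + \cos x \neq 1$):

```python
import math

# Both sides of the identity, as plain functions.
def lhs(x):
    return (math.sin(x) - math.cos(x) + 1) / (math.sin(x) + math.cos(x) - 1)

def rhs(x):
    return (math.sin(x) + 1) / math.cos(x)

# Sample angles avoiding the excluded points x = pi/2 + k*pi and the
# zeros of sin x + cos x - 1.
for x in (0.3, 1.0, 2.5, -0.7, 4.0):
    assert abs(lhs(x) - rhs(x)) < 1e-10
```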
# Tag Info

**18** Well, you have that $$y'=y''=y^{(3)}=\cdots$$ The only function that is its own derivative is $ae^{x}$ for some $a$, so $y'=ae^x$ and $y=y'-1=ae^x-1$. And since $y(0)=1$, then $a-1=1$, so $a=2$.

**12** My favorite paper about $x^x$ is The $x^x$ Spindle, which appeared in Mathematics Magazine back in 1996. The main idea is to visualize the fact that we can write it as $$x^x = e^{x (\ln(x)+2k\pi i)}.$$ Note that for each choice of $k$, we get a different branch of the logarithm. Given any real number $x$, most of these branches will be complex valued. ...

**5** The denominator of the exponent tends to zero from above, no matter how $(x,y)$ tends to $(0,0)$. So the exponent grows without bound to $+\infty$. Hence the expression itself does as well. (Some may say the limit doesn't exist since it is infinite, but that's a matter of convention.) If the numerator were $-1$ instead of $1$, the exponent would tend to ...

**5** The answer is simple: $$(a^b)^c = a^{bc}$$ is an equality which only holds for real numbers with $a>0$ and does not necessarily hold for complex numbers.

**5** Unfortunately, as you note, the limit $$\lim_{h\rightarrow0}\frac{a^h-1}h$$ is not easily worked with (since it equals $\log(a)$). We can, however, glean two bits of information from it. Firstly, if we define the function $$f(a)=\lim_{h\rightarrow 0}\frac{a^h-1}h$$ we can show that it is monotonic increasing; in particular, notice that if $b>a$ then, from ...

**5** Let $u=t^4$ so $t=u^{1/4}$ and $dt=\frac 14 u^{-3/4}\,du$, and then $$\int_0^\infty e^{-t^4}\,dt=\frac14\int_0^\infty u^{-3/4}e^{-u}\,du=\frac14\Gamma\left(\frac14\right)=\Gamma\left(\frac54\right)$$ using the equality $$\Gamma(x+1)=x\Gamma(x)$$

**5** If I am understanding correctly, you would like to know if there is a suitable function $f$ such that $e^{a+b+c}=f(a)+f(b)+f(c)$ for all values of $a,b,c$. Notice such a function would satisfy $f(0)+f(0)+f(0)=e^0=1$. So $f(0)=\frac{1}{3}$.
From here we can determine the function uniquely, since we must have $e^x=f(x)+f(0)+f(0)=f(x)+\frac{2}{3}$. So the ...

**4** It's not "exponential" in the sense of the derivative being proportional to the value, no. It does, however, have "exponential growth", in the sense that there's a constant $C$ with $$|f(x)| \ge C u^x$$ for large enough $x$ and for some $u > 1$. In computer science, such functions are sometimes sloppily called 'exponential', even though they could ...

**4** By the first condition, we have that $f(a/b) = f(1/b)^a$ and $f(1/b)^b = f(1)$ for any $a/b \in \mathbb{Q}$. Let $f(1) = ce^{i\theta}$. Then $f(1/b) = c^{1/b}e^{i(\theta + 2\pi k)/b}$ where $k$ is an integer depending on $b$. It follows that $f(a/b) = e^{a/b}e^{i(\theta + 2\pi k)a/b}$. By continuity of $f$ and density of the rationals, it follows that $k$ is locally ...

**4** This kind of equation, which mixes polynomial and trigonometric or hyperbolic terms, does not have analytical solutions (besides the trivial $x=0$) and only numerical methods should be used. If you want me to elaborate on this topic, just post. Please notice that we can write the equation in a simpler form by changing variable $ax=y$ to get $$\sinh(y)=c y$$ ...

**4** You're using $\log$ incorrectly, even in the real sense. It's not true, for instance, that $\log\left(e+e^{-1}\right)=0$. Hint: Instead, multiply both sides of $8=e^{iz}+e^{-iz}$ by $e^{iz}$ and solve the resulting quadratic in $e^{iz}$.

**4** Hint: given an arbitrary monic polynomial $p(x)$ of degree $N$, let $x_k$, $k=1,\cdots,N$, be its roots. Then $$p(x)=(x-x_1)\cdots(x-x_N).$$ The coefficient of $x^{N-1}$ is $-x_1-x_2-\cdots-x_N$, i.e. the negative of the sum of the roots. Your polynomial is of degree 2011 (but not monic; just transform it into a monic one, note $x^{2012}$ is cancelled). Thus the sum of ...

**4** All your differential equations after the first one are implied by the first equation.
If $y'=y+1$, then differentiating it gives $y''=y'$, and further differentiation gives the successive equations. So it is really just about solving the first equation with the given initial value, and this you can do.

**4** One option is to solve this by diagonalization. Here's another option: note that $$A^2 = -w^2I$$ Thus, we have $$A^n = \begin{cases} (-1)^{n/2}w^{n}I& n \text{ is even}\\ (-1)^{(n-1)/2}w^{n-1}A& n \text{ is odd} \end{cases}$$ Now, expand the matrix exponential $$\exp(At)=\sum^{\infty}_{n=0}\frac{A^nt^n}{n!}= \cdots$$ ...

**4** For $n$ sufficiently large, we have $(1-x_n)^n > 0$ (since it converges to 1). We can then say that $n \log(1-x_n)$ tends to zero. Necessarily $\log(1-x_n)$ tends to zero. So $x_n$ tends to zero and we have that $\log(1+x) \underset{0}{\sim} x$. Then $nx_n \to 0$.

**4** Note that $7^2 \equiv 49 \equiv -1 \bmod 10$, and so $7^4 \equiv 1 \bmod 10$. Hence if $a=4q+r$ then $$7^a \equiv 7^{4q+r} \equiv (7^4)^q \cdot 7^r \equiv 7^r \bmod {10}$$ This reduces your problem to finding $7^7 \bmod 4$... but this should be easy since $7 \equiv -1 \bmod 4$.

**3** Notice that the last digits of powers of 7 run in the repeating sequence $7,9,3,1$. Thus, what you really need to know is what $7^7$ is congruent to modulo 4, not modulo 10, so as to tell where in the sequence $7^{7^7}$ falls. $7^k\bmod 4$ is easily computed from $k$; how?

**3** You can express the solution using Lambert's W function, but in practice you'd find it numerically. Take logs on both sides to get $$10000 = x \log_{10}(x)$$ and use bisection or Newton-Raphson to approximate the solution.

**3** As you already know, considering the function $$G_n(x)=\log\left(n+\frac{n-1}{x-1}\right),$$ defined on $(1,+\infty)$, $\hat\lambda_n$ solves the identity $$\hat\lambda_n M_n=G_n(\hat\lambda_n M_n).$$ As you noted, this identity has no analytical solution. However, the function $G_n$ decreases on $(1,+\infty)$ from $G_n(1)=+\infty$ to $G_n(+\infty)=\log(n)$ ...

**3** HINT: Show that the derivative of $2^x-x$ is positive for $x>?$.
**3** Let $$A=\begin{pmatrix}0&9\\-1&0\end{pmatrix}.$$ You can easily check that $$A^2=-9I$$ and so the exponential series gives $$\begin{aligned} e^{tA} &=I+tA+\frac{1}{2!}t^2A^2+\frac{1}{3!}t^3A^3+\cdots\\ &=\Bigl(I-\frac{1}{2!}(9t^2I)+\frac{1}{4!}(9^2t^4I)-\cdots\Bigr)+\Bigl(tA-\frac{1}{3!}(9t^3A)+\frac{1}{5!}(9^2t^5A)-\cdots\Bigr) \end{aligned}$$ ...

**3** If you insist on wanting to show that your sequence is Cauchy without using convergence: Suppose $m \geq n \geq N$. In that case $$|e^{-m} - e^{-n}| = e^{-n} - e^{-m} = e^{-n} (1 - e^{n - m}) \leq e^{-n} \leq e^{-N}.$$ Now for $\varepsilon > 0$, you can pick $N$ such that $e^{-N} \leq \varepsilon$.

**3** In the Factorial Inequality problem it is shown that $$n!<\left(\frac n2\right)^n.$$ For $n=2000$ we have $$2000!<\left(\frac {2000}{2}\right)^{2000}=1000^{2000}.$$

**3** Note that $x^2+y^2=r^2$. This is the equation of a circle of radius $r$. Saying that $(x,y)\to (0,0)$ here means that the radius of this circle is approaching zero. So we can transform this to a one-variable limit: $$\lim_{r\to 0} y=\lim_{r\to 0} e^{\frac{1}{r^2}}$$ Now note that $$\log y=\frac{1}{r^2}\rightarrow \infty,$$ as $r\to 0$. So $y \to \infty$. ...

**3** First, note that $\frac {d}{dx} 0^x = 0^x$, so you really want to know that there is a unique positive real number $a$ for which $\frac {d}{dx} a^x = a^x$. Suppose that $\frac{d}{dx}a^x=a^x$ and $\frac{d}{dx}b^x=b^x$, with $a$ and $b$ both positive. Then by the quotient rule, ...

**2** Technically speaking, you cannot evaluate a function at the "points" $+\infty$ and $-\infty$; nevertheless, for some functions they are accumulation points of the domain, so you can evaluate the limit. In your case you have: $$\lim_{x\longrightarrow +\infty} e^{-x}=\lim_{x\longrightarrow +\infty} \frac{1}{e^{x}}=0$$ This is a standard limit, but you ...

**2** You should know two facts.
$$\begin{split} a^x \cdot a^y &= a^{x+y}\\ (a\cdot b)^x &= a^x \cdot b^x \end{split}$$ Then you can easily obtain the next two facts: $$\left(a^b\right)^c = \underbrace{a^b \cdot a^b \cdots a^b}_{c\text{ times}} = a^{b+b+\dots+b} = a^{b\cdot c}, \qquad \frac{a^x}{a^y} = a^x \cdot \,...$$

**2** An exact answer is only possible in terms of the Lambert W function: $$\begin{align*} 0 &= 4e^{-2x}-3x\\ 3x &= 4e^{-2x}\\ 2xe^{2x}&=\tfrac{8}{3}\\ ye^y&=\tfrac{8}{3}\quad(y=2x)\\ y&=W(\tfrac{8}{3})\\ x&=\tfrac{1}{2} W(\tfrac{8}{3}) \end{align*}$$
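The $A^2=-9I$ answer quoted above implies the closed form $e^{tA} = \cos(3t)\,I + \frac{\sin(3t)}{3}A$; here is a quick numeric check of that closed form against a partial sum of the exponential series:

```python
import numpy as np

# A = [[0, 9], [-1, 0]] satisfies A^2 = -9 I, so the even powers of A
# build cos(3t) I and the odd powers build (sin(3t)/3) A.
A = np.array([[0.0, 9.0], [-1.0, 0.0]])
I = np.eye(2)
assert np.allclose(A @ A, -9 * I)

t = 0.7
closed = np.cos(3 * t) * I + (np.sin(3 * t) / 3) * A

# Partial sum of exp(tA) = sum_n (tA)^n / n!  (terms shrink fast here).
series = np.zeros((2, 2))
term = np.eye(2)
for n in range(1, 30):
    series += term
    term = term @ (t * A) / n

assert np.allclose(series, closed)
```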
## The Smallest Eigenvalues of a Graph Laplacian

Given a graph $G = (V, E)$, its adjacency matrix $A$ contains a 1 at $A_{ij}$ if vertices $i$ and $j$ have an edge between them. The degree matrix $D$ contains the degree of each vertex along its diagonal. The graph laplacian of $G$ is given by $D - A$. Several popular techniques leverage the information contained in this matrix. This blog post focuses on the two smallest eigenvalues. First, we look at the eigenvalue 0 and its eigenvectors. A very elegant result about its multiplicity forms the foundation of spectral clustering. Then we look at the second smallest eigenvalue and the corresponding eigenvector. A slightly more involved result (YMMV) allows us to partition the graph in question. A recent publication by John Urschel (who apparently moonlights as a sportsperson) focused on this quantity. The insights provided here sacrifice some rigor for the sake of brevity. I find such descriptions help me study without getting bogged down too much with details. A bibliography provided at the end contains links to actual proofs.

# Eigenvalue 0

## The Insight

The eigenvalue 0 tells us whether the graph is connected or not. In particular, if a graph has $k$ connected components, then eigenvalue 0 has multiplicity $k$ (i.e. $k$ linearly independent eigenvectors). A blueprint for the proof looks like this (detailed proof provided later): • An eigenvector corresponding to eigenvalue 0 (known hereafter as $\lambda_0$) must contain some non-zero entries (this is established by showing that $L$ is positive semi-definite (psd)). • In fact, if vertices $i$ and $j$ are connected, then components $i$ and $j$ in $\lambda_0$ must be equal. • If the graph is connected, apply the transitive property and you get an eigenvector where all the components are equal (or all components set to 1). • If the graph is not connected, then consider each connected component separately and run this procedure on it again.
For $k$ connected portions of the graph, we should have $k$ distinct eigenvectors, each of which contains a distinct, disjoint set of components set to 1. So, if the graph has 2 connected components, then the eigenvalue 0 has 2 non-trivial eigenvectors: In the diagram above, the vertices 1, 2, and 3 form one connected component and vertices 0, 4, and 5 form the other component. A toy example illustrates this nicely. In this example, we have a graph with 6 vertices. Say the graph is connected. Then the desired eigenvector is $\langle 1, 1, 1, 1, 1, 1 \rangle$. For 2 connected components, the eigenvectors are $\langle 1, 1, 1, 0, 0, 0 \rangle$ and $\langle 0, 0, 0, 1, 1, 1 \rangle$. And so on.
The insight from the previous section tells us that the eigenvalue 0 has multiplicity 2 and the 2 distinct eigenvectors look like: Consider a matrix with these eigenvectors as its columns: $$\begin{bmatrix} 1 & 0\\ 1 & 0\\ 1 & 0\\ 0 & 1\\ 0 & 1\\ 0 & 1\\ \end{bmatrix}$$ This matrix has as many rows as the original dataset. It also has the effect of causing the clusters in the graph to pop out. In this example, I can tell that the first three points belong in the same cluster. The next three points form the second cluster. If you supply this matrix to any classic clustering algorithm (say $k$-means), it should have no issues clustering this and assigning points to the correct clusters. This is exactly what spectral clustering does. Thus the steps involved are: • Construct a $k$-NN graph (or indeed any other graph - say one that uses a threshold test on euclidean distances). • Obtain the laplacian of this graph. • Obtain the eigendecomposition of the laplacian, retain the first $k$ columns of the eigenvector matrix. • Supply this matrix to $k$-means (or your favorite clustering algorithm). Spectral clustering deals well with non-convex cluster shapes because of the underlying graph constructed. The manifold considered as a result captures the shape of the clusters reasonably well - something we cannot accomplish if only euclidean distances are used: This is a neat trick exploited by the isomap algorithm which was covered in a previous post. Constructing the graph tends to be a bit involved - often there isn’t a clear way to build one. The performance of spectral clustering depends on how the connected components in the graph reflect clusters in the dataset. A connected graph (which you can produce quite easily by picking a large $k$) will yield poor results. Despite these issues, spectral clustering is a very powerful and well-studied technique and belongs in any practitioner’s toolbox (IMO). 
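The steps above can be sketched end-to-end on a toy graph. Below, two triangles stand in for the 2-NN graph of the 6-point example (any graph with two connected components behaves the same way):

```python
import numpy as np

# Two triangles {0,1,2} and {3,4,5}: two connected components.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A          # graph laplacian D - A

evals, evecs = np.linalg.eigh(L)        # eigenvalues in ascending order

# Multiplicity of eigenvalue 0 equals the number of components.
assert np.sum(np.abs(evals) < 1e-9) == 2

# The zero-eigenspace is spanned by the component indicators
# <1,1,1,0,0,0> and <0,0,0,1,1,1>: any basis of it is constant on each
# component, so rows of the first two eigenvector columns coincide
# exactly for vertices in the same component.
U = evecs[:, :2]
assert np.allclose(U[0], U[1]) and np.allclose(U[1], U[2])
assert np.allclose(U[3], U[4]) and np.allclose(U[4], U[5])
```

Feeding `U` to $k$-means would trivially separate the two clusters, which is the spectral clustering recipe described above.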
# Second Smallest Eigenvalue of the Laplacian

For the sake of brevity, I will call this quantity $\lambda_1$. I will call the associated eigenvector $v_1$. M. Fiedler in his landmark monograph called this quantity the algebraic connectivity of a graph. $\lambda_1$ and its eigenvectors provide amazing insights. One of the insights is: if $\lambda_1 = 0$, clearly eigenvalue 0 has multiplicity greater than 1, and thus the graph is not connected. This is fairly trivial to establish - the insight from the previous section covers it. The next insight, my favorite, involves partitioning a graph. When we partition a graph (into say 2 partitions), we desire 2 reasonably large groups of vertices with very few edges between them. Observe that this exercise is a waste of time if the graph isn't connected (there are already two distinct components with no edges between them). Thus it makes sense to only consider connected graphs. Let us try to give a formal shape to the partitioning problem. Partitioning can be defined as assigning a value of $+1$ or $-1$ to each vertex. Vertices with different assignments are in different partitions. Say vertex $v_i$ gets assigned value $x_i$. Let us assume a perfect partitioning: exactly $|V|/2$ vertices are assigned $x_i = +1$ and the other half are assigned $x_i = -1$. Now, a pair of vertices $v_i$ and $v_j$ that belong to different partitions are assigned values $x_i$ and $x_j$ where $x_i \neq x_j$. Thus, the only possible value for $(x_i - x_j)^{2}$ is $4$ (and it is $0$ for a pair within a partition). For each edge from one partition to the other, we have a value of 4. Thus, the number of edges from one partition to the other is given by: $$\sum_{(i, j) \in E} \frac{(x_i - x_j)^2}{4}$$ Also, assuming a perfect partitioning, an equal number of vertices are assigned values $+1$ and $-1$. Thus we have: $$\sum_{i} x_i = 0$$ Our objective is to minimize the number of edges from one partition to the other while achieving a reasonable size for each partition.
This can be expressed as the following optimization problem: minimize

$$\sum_{(i, j) \in E} \frac{(x_i - x_j)^2}{4}$$

subject to the constraint

$$\sum_{i} x_i = 0$$

This unfortunately has a trivial solution: set all $x_i$ to 0. An additional constraint eliminates this problem. The new constraint is:

$$\sum_{i} {x_i}^{2} = |V|$$

We work with matrix variants of these equations. Clearly, the component responsible for the number of edges looks like $\frac{x^T \mathcal{L} x}{4}$. The constraint that enforces reasonable partition sizes is $x^T\mathcal{1} = 0$. Here $\mathcal{1}$ is a vector of all ones. Finally, the term responsible for ensuring a non-trivial solution is $x^Tx = |V|$. Setting the gradient of the lagrangian to zero:

$$\nabla_x \left[ \frac{x^T \mathcal{L} x}{4} - \eta_1 (x^Tx - |V|) - \eta_2 (x^T\mathcal{1}) \right] = 0$$

which (absorbing constant factors into the multipliers) becomes:

$$\mathcal{L} x - \eta_1 x - \eta_2 \mathcal{1} = 0$$

Multiply by $\mathcal{1}^T$ on both sides:

$$\mathcal{1}^T \mathcal{L} x - \eta_1 \mathcal{1}^T x - \eta_2 \mathcal{1}^T \mathcal{1} = 0$$

The first term vanishes because $\mathcal{L}$ is symmetric and $\mathcal{L}\mathcal{1} = 0$ (each row of the laplacian sums to zero), and the second term vanishes by the balance constraint $\mathcal{1}^T x = 0$. So $\eta_2 |V| = 0$, which forces $\eta_2 = 0$, and the condition reduces to:

$$\mathcal{L}x - \eta_1x = 0$$

Thus $x$ is clearly an eigenvector of the graph laplacian and $\eta_1$ is an eigenvalue. Obviously, the eigenvector of the eigenvalue 0 doesn't work (it assigns the value 1 to all points and violates the balance constraint). Clearly $v_1$ (the eigenvector of the second smallest eigenvalue) is a solution (the intuition is that the smaller the eigenvalue, the fewer the edges between the two partitions).

Thus, the eigenvector $v_1$ (a.k.a. the Fiedler vector) provides an assignment to each vertex in the graph. This assignment can be used to partition the graph. There is just one issue here. The eigenvector contains real values, not necessarily $+1$ and $-1$. A whole host of tricks can be applied to convert the entries in the eigenvector to $+1$ and $-1$:

• $sgn(v_{1_{i}})$ i.e. vertex $i$ gets assigned a value $x_i =$ the sign of the $i^{th}$ component of the eigenvector $v_1$.
• $x_i = +1$ if $v_{1_i} > m$, $-1$ otherwise.
Here $m$ is the median of the components of $v_1$ (or the mean or zero or whatever). And this is how the Fiedler vector helps with graph partitioning.

The bibliography attached to this post contains some amazing literature that I had a lot of fun reading. I have mirrored these documents on GitHub and provided the GitHub link.

# Bibliography

1. A Tutorial on Spectral Clustering - Ulrike von Luxburg - A self-contained and elaborate tutorial on spectral clustering.
2. Algebraic Connectivity of Graphs - Miroslav Fiedler - A landmark paper on the properties of the second smallest eigenvalue and its associated eigenvector.
3. Partitioning Sparse Matrices with Eigenvectors of Graphs - Alex Pothen, Horst Simon, Kang-Pu Paul Liu - An algorithm to leverage the Fiedler vector for graph partitioning.

(c) Shriphani Palakodety 2013-2018
# Gershgorin Circle Theorem ## Theorem Let $n$ be a positive integer. Let $A = \left({a_{i j} }\right)$ be a complex square matrix of order $n$. Let $\lambda$ be an eigenvalue of $A$. Then there exists $i \in \left\{ {1, 2, \ldots, n}\right\}$ such that: $\lambda \in \mathbb D \left({a_{i i}, R_i}\right)$ where: $\displaystyle R_i = \sum_{j \mathop \ne i} \left\vert{a_{ i j} }\right\vert$ $\mathbb D \left({a, R}\right)$ denotes the complex disk of center $a$ and radius $R$. ## Source of Name This entry was named for Semyon Aranovich Gershgorin.
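The theorem is easy to illustrate numerically: every eigenvalue of a matrix must land in at least one of the disks $\mathbb D(a_{ii}, R_i)$. A quick sketch (the matrix here is an arbitrary made-up example, not from the entry above):

```python
import numpy as np

# An arbitrary example matrix.
A = np.array([[ 4.0,  1.0, 0.5],
              [ 0.2, -3.0, 0.1],
              [ 0.1,  0.4, 1.0]])

centers = np.diag(A)                             # a_ii
radii = np.abs(A).sum(axis=1) - np.abs(centers)  # R_i = sum_{j != i} |a_ij|

# Every eigenvalue must lie in at least one disk D(a_ii, R_i).
for lam in np.linalg.eigvals(A):
    in_some_disk = (np.abs(lam - centers) <= radii + 1e-12).any()
    print(lam, in_some_disk)  # -> True for each eigenvalue
```

Since this matrix has disjoint disks, each disk here even contains exactly one eigenvalue, though the theorem itself only guarantees that each eigenvalue lies in some disk.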
### Exercise condition: 3

Calculate the average number of products manufactured per day, given that day after day they produced $$15; 23; 19; 18; 25; 20; 13$$ units of products.

On average,  units of products were manufactured per day.
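A quick check of the arithmetic (a throwaway sketch):

```python
# Daily production figures from the exercise.
units = [15, 23, 19, 18, 25, 20, 13]

average = sum(units) / len(units)   # 133 / 7
print(average)  # -> 19.0
```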
Try this problem from ISI-MSQMS 2018 which involves the concepts of real numbers, sequences and series, and the definite integral.

## DEFINITE INTEGRAL | ISI 2018 | MSQMS | PART A | PROBLEM 22

Let $I=\int_{0}^{1} \frac{\sin x}{\sqrt{x}} d x$ and $J=\int_{0}^{1} \frac{\cos x}{\sqrt{x}} d x,$ then which of the following is true?

• (a) $I<\frac{2}{3}$ and $J>2$
• (b) $I>\frac{2}{3}$ and $J<2$
• (c) $I>\frac{2}{3}$ and $J>2$
• (d) $I<\frac{2}{3}$ and $J<2$

### Key Concepts

REAL NUMBERS

RIEMANN INTEGRATION

SEQUENCE AND SERIES

But try the problem first…

Answer: (d) $I<\frac{2}{3}$ and $J<2$

Source: ISI 2018 | MSQMS | QMA | PROBLEM 22

INTRODUCTION TO REAL ANALYSIS: BARTLE, SHERBERT

## Try with Hints

First hint

We know that when $f(x)>g(x)$ on an interval, $\int \limits_a^b f(x)\,\mathrm dx>\int \limits_a^b g(x)\,\mathrm dx$. We also know that for $0<x<1$, $\cos x <1$.

Second hint

$\frac{\cos x}{\sqrt x}<\frac{1}{\sqrt x}$ implies $\int \limits_0^1\frac{\cos x}{\sqrt x}\mathrm dx<\int \limits_0^1\frac{1}{\sqrt x}\mathrm dx$.

Since $\int \limits_0^1\frac{1}{\sqrt x}\mathrm dx = 2$, we get $\int \limits_0^1\frac{\cos x}{\sqrt x}\mathrm dx<2$, i.e. $J<2$.

Third hint

Again we claim $x-\sin x>0$ for $0 < x\leq 1$. Let $f(x)=x-\sin x$. Then $f'(x)=1-\cos x\geq 0$, hence $f(x)$ is monotonically increasing, and therefore $x-\sin x>0$ for $x\in (0,1]$.

So $x>\sin x$, which gives $\sqrt x > \frac{\sin x}{\sqrt x}$ for $x\in (0,1]$. Integrating both sides from $0$ to $1$, and using $\int \limits_0^1 \sqrt x\,\mathrm dx = \frac{2}{3}$, we get $\int \limits_0^1\frac{\sin x}{\sqrt x} \mathrm dx<\frac{2}{3}$, i.e. $I<\frac{2}{3}$.

Final Step

Therefore, $I<\frac{2}{3}$ and $J<2$.
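Both bounds can also be sanity-checked numerically. The substitution $x = t^2$ removes the $1/\sqrt{x}$ singularity, giving $I = \int_0^1 2\sin(t^2)\,dt$ and $J = \int_0^1 2\cos(t^2)\,dt$; a midpoint-rule sketch:

```python
import numpy as np

# Midpoint rule on the substituted integrands (x = t^2 removes the singularity).
n = 200_000
h = 1.0 / n
t = (np.arange(n) + 0.5) * h

I = (2 * np.sin(t ** 2)).sum() * h   # = integral of sin(x)/sqrt(x) on [0,1], ~0.6205
J = (2 * np.cos(t ** 2)).sum() * h   # = integral of cos(x)/sqrt(x) on [0,1], ~1.8090

print(I < 2 / 3, J < 2)  # -> True True
```

Both values sit comfortably below their bounds, consistent with answer (d).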
I pretended the whole year that I understood Baye's theorem but in reality, I have no idea what the hell it is.

I think people overcomplicate this. It's a way to reverse our understanding of which thing we take as given and which we then consider to be the probability. It does require that we know a lot of things to use it, which tends to be its downfall.

Suppose that we know that 30% of men are tall, call this our prior probability, and we want to know the likelihood that someone tall is a man, the posterior probability. Those are different things, which sometimes trips people up. (I.e. the likelihood that someone who tested positive is sick is different than the likelihood that someone who is sick tested positive, even though those sound pretty similar.)

To do that, we need to know how common it is to be tall and to be a man. Suppose that 20% of our population is tall, and 50% are men. So then we make a fraction using our assumption (men) over our observation (tall) and get .5 / .2 = 2.5, and we multiply that by our prior probability .3 to get .75. I.e. if someone is tall, then 3/4 of the time, they are a man.

(You can note that the numbers have to be consistent to make this work out. If you used the same 30%-of-men-are-tall observation but thought that 10% of the population was tall and 50% men, you would get a probability greater than one, so you know that your values were mutually inconsistent and impossible.)

In terms of intuition vs. plugging and chugging: here we see that assuming this is a man gave us a higher probability of being tall than the population as a whole, so the event tall and the event man are not independent. There must be more tall men than tall non-men in the same size sample of each to get the math to work out. The calculation tells us how much that distribution is one-sided.

By the definition of conditional probability, P(B|A)P(A) = P(A&B) = P(A|B)P(B).
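The tall/man arithmetic above, as a throwaway sketch (reading "30% of men are tall" as P(tall | man) = 0.3; the function name is my own):

```python
def posterior(prior, p_obs_given_h, p_obs):
    """Bayes' rule: P(H | obs) = P(obs | H) * P(H) / P(obs)."""
    return p_obs_given_h * prior / p_obs

# P(tall | man) = 0.3, P(man) = 0.5, P(tall) = 0.2.
p_man_given_tall = posterior(prior=0.5, p_obs_given_h=0.3, p_obs=0.2)
print(p_man_given_tall)  # -> 0.75

# Inconsistent inputs (P(tall) = 0.1) reveal themselves as a
# "probability" greater than one.
print(posterior(prior=0.5, p_obs_given_h=0.3, p_obs=0.1) > 1)  # -> True
```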
Divide through by P(A), then use the total probability formula on P(A), and that's Bayes. That's all it is: a trivial algebraic rearrangement of a definition. Of course, much can be said about its real-world application and meaning, but if you understand conditional probabilities, then none of that will be surprising to you.

You say you understand conditional probability, so if you have two events A and B, you know how to express the probability of A if we know that B has happened, and the probability of B if we know that A has happened:

* P(A|B) = P(A∩B) / P(B)
* P(B|A) = P(A∩B) / P(A)

Now, in the above, notice that P(A∩B) appears in both expressions. This gives us the option to express the algebraic connection between P(A|B) and P(B|A). First, we express P(A∩B) from the two equations above:

* P(A∩B) = P(B) · P(A|B)
* P(A∩B) = P(A) · P(B|A)

From here, we can equate the two expressions for P(A∩B) to get P(B) · P(A|B) = P(A) · P(B|A), which gives us the option to express one conditional probability in terms of the other: P(A|B) = P(A) · P(B|A) / P(B). And that is what Bayes' theorem is: the algebraic connection between the following four probabilities:

* P(A) — the probability of event A;
* P(B) — the probability of event B;
* P(A|B) — the probability that A happens in the cases when B happens;
* P(B|A) — the probability that B happens in the cases when A happens.

So, the theorem literally falls out immediately from the definition of conditional probability after extremely simple algebraic manipulation, but it is extremely important! Having the ability to connect these four probabilities turns out to be of extreme importance and you need it all the time when thinking about real-world situations.

By the way, the theorem is named after Thomas Bayes so it's **Bayes' theorem**, not *~~Baye's theorem~~*.

This is the odds interpretation, which I think is more intuitive.
Let's say you have a bunch of mutually exclusive and exhaustive possibilities. For each of them, you assign a number telling how much you believe in it; the numbers range from -infinity to +infinity, and the more negative the number, the less you believe in it. On this scale, only relative distances matter, so for convenience you can at any time shift all the numbers on the scale down by a constant amount.

When a new fact is given, this fact moves the scale: everything shifts to the left (more negative). The amount of movement (how much more negative it becomes) is the strength of the evidence this new fact provides against each of the possibilities. The strength of evidence is less negative if the chance of this fact happening, conditioned on that possibility, is high.
In statistics, the mean squared error (MSE) or mean squared deviation (MSD) of an estimator (of a procedure for estimating an unobserved quantity) measures the average of the squares of the errors. Carl Friedrich Gauss, who introduced the use of mean squared error, was aware of its arbitrariness and was in agreement with objections to it on these grounds. The heavy weighting of large errors is undesirable in many applications and has led researchers to use alternatives such as the mean absolute error, or those based on the median. Note that, although the MSE is not an unbiased estimator of the error variance, it is consistent, given the consistency of the predictor. MSE is also used in several stepwise regression techniques as part of the determination as to how many predictors from a candidate set to include in a model.

In MATLAB, err = immse(X,Y) calculates the mean-squared error (MSE) between the arrays X and Y. To calculate MSE you need to have two signals - the desired/true signal, and your actual/test signal - and for each sample the squared error is (y(i) - x(i))^2. To compare two color images you can form the per-channel difference images and sum their absolute values:

errR = sum(abs(dR(:)));
errG = sum(abs(dG(:)));
errB = sum(abs(dB(:)));
sumErr = errR + errG + errB;

For additional performance, you might also want to consider converting to a single channel and spatially downsampling. A typical PSNR demo reads a grayscale image, checks its size, adds noise, and then compares the two images:

grayImage = imread('cameraman.tif');
[rows, columns, numberOfColorChannels] = size(grayImage);  % should say 256, 256, 1
noisyImage = imnoise(grayImage, 'gaussian', 0, 0.003);
% Now we have our two images and we can calculate the PSNR.

Learn to write MATLAB code by doing so, and do it in pieces, so you can follow what you did.
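The same MSE/PSNR computation can be sketched in NumPy (a translation sketch with made-up array values; the function names are my own, and `immse` itself is just the mean of squared differences):

```python
import numpy as np

def mse(x, y):
    """Mean-squared error between two equally sized arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return np.mean((x - y) ** 2)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    return 10.0 * np.log10(peak ** 2 / mse(x, y))

original = np.zeros((4, 4))
degraded = original + 10.0        # every pixel off by 10

print(mse(original, degraded))              # -> 100.0
print(round(psnr(original, degraded), 2))   # -> 28.13
```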
# Proof of trace of density matrix in pure/mixed states 1. Apr 27, 2010 ### barnflakes Can someone help me prove that $tr(\rho^2) \leq 1$ ? Using that $$\rho = \sum_i p_i | \psi_i \rangle \langle \psi_i |$$ $$\rho^2 = \sum_i p_i^2 | \psi_i \rangle \langle \psi_i |$$ $$tr(\rho^2) = \sum_{i, j} p_i^2 \langle j | \psi_i \rangle \langle \psi_i | j \rangle$$ Where do I go from here? Thanks guys. 2. Apr 27, 2010 ### genneth Density matrices may be diagonalised, and their trace is one: $$Tr(\rho) = \sum_i p_i = 1$$ Then you need a bit of basic algebra: $$\sum_i p_i^2 \le \left(\sum_i p_i\right)^2 = 1$$ 3. Apr 27, 2010 ### barnflakes Can you be a bit more explicit? So they can be diagonalised, ie. $$\rho = \sum_i \lambda_i p_i| i \rangle \langle i |$$ $$\rho^2 = \sum_i \lambda_i^2 p_i^2 | i \rangle \langle i |$$ $$tr(\rho^2) = \sum_j \lambda_j^2 p_j^2 \leq (\sum_j \lambda_j p_j)^2$$ ?? does that final term = 1? 4. Apr 27, 2010 ### Fredrik Staff Emeritus This isn't always true, since the $|\psi_i\rangle$ don't have to be orthogonal. Start with $$\operatorname{Tr}\rho^2=\sum_n\langle n|\rho^2|n\rangle$$, where the $|n\rangle$ are members of an arbitrary orthonormal basis. Use the correct expression for $$\rho^2$$. Then rearrange some stuff and recognize the identity operator in what you've got. Then you're almost done, but you'll need the Cauchy-Schwarz inequality to finish it. 5. Apr 27, 2010 ### genneth In the diagonal basis, the eigenvalues of $$\rho^2$$ are just $$p_i^2$$, where $$p_i$$ are the eigenvalues of $$\rho$$. 6. Apr 27, 2010 ### barnflakes OK so I get to $$\sum_{i,j} p_i p_j \langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle$$ and now I need to use cauchy schwartz you say? 
$$\sum_{i,j} p_i p_j \langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle = \sum_{i,j} p_i p_j \frac{\langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle \langle \psi_i | \psi_i \rangle}{\langle \psi_i | \psi_i \rangle} \leq \sum_{i,j,n} p_i p_j \langle \psi_j | n \rangle \langle n | \psi_j \rangle \langle \psi_i | \psi_i \rangle = \sum_{i,j} p_i p_j \langle \psi_j | \psi_j \rangle \langle \psi_i | \psi_i \rangle$$ Is that correct? Where do I go from here?

7. Apr 27, 2010

### Fredrik

Staff Emeritus

You're making it more complicated than it needs to be. I'm not even sure what you're doing, but you're getting the right result. Now you just need to use that the states are normalized. This is the easy way to get the result you've got already:

$$\langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle=|\langle\psi_i|\psi_j\rangle|^2\leq \big\||\psi_i\rangle\big\|^2\big\||\psi_j\rangle\big\|^2=1$$

You will of course also have to use what you know about the $p_i$. I posted a statement and proof of the Cauchy-Schwarz inequality in the Science Advisor forum some time ago. You probably don't need it, but since it's a related topic, and since I have only posted it in a restricted forum before, I'm reposting it here.

Theorem: If x and y are vectors in an inner product space X over $\mathbb C$, then

$$|\langle x,y\rangle| \leq \|x\|\|y\|$$

where the norm is the standard norm on an inner product space.

Proof: Let t be an arbitrary complex number.

$$0 \leq \langle x+ty,x+ty\rangle=\|x\|^2+t\langle x,y\rangle+t^*\langle y,x\rangle+|t|^2\|y\|^2$$

$$=\|x\|^2+2\operatorname{Re}(t\langle x,y\rangle)+|t|^2\|y\|^2$$

The inequality is obviously satisfied when the real part of $t\langle x,y\rangle$ is non-negative, so we can only learn something interesting when it's negative. Let's choose Arg t so that it is.

$$=\|x\|^2-2|t||\langle x,y\rangle|+|t|^2\|y\|^2$$

Now let's choose |t| so that it minimizes the sum of the last two terms.
(This should give us the most interesting result.)

$$s=|t|,\ A=\|y\|^2,\ B=2|\langle x,y\rangle|$$

$$f(s)=As^2-Bs$$

$$f'(s)=2As-B=0\ \Rightarrow\ s=\frac{B}{2A} = \frac{|\langle x,y\rangle|}{\|y\|^2}$$

$$f''(s)=2A>0$$

Continuing with this value of |t|...

$$=\|x\|^2-2\frac{|\langle x,y\rangle|}{\|y\|^2}|\langle x,y\rangle|+\frac{|\langle x,y\rangle|^2}{\|y\|^4}\|y\|^2$$

$$=\|x\|^2-\frac{|\langle x,y\rangle|^2}{\|y\|^2}$$

Since this last expression is $\geq 0$, rearranging gives $|\langle x,y\rangle|^2 \leq \|x\|^2\|y\|^2$, which is the claimed inequality.

8. Apr 27, 2010

### barnflakes

Thank you for the response Fredrik, you'll have to excuse me, I'm really rather new to quantum mechanics/information, so when you say "use what I know with regards to p_i and p_j" I have to confess my ignorance as to what I know. As far as I'm aware, it's the probability that the $N$th quantum system of an $N$-dimensional system is in the state $$\psi_i$$

So if the system is in a pure state we know exactly the state of the system. I find this confusing. Does it mean we know the state of the system overall, or the state of each individual qubit/quantum system?

9. Apr 27, 2010

### Fredrik

Staff Emeritus

It's the probability that the ith system has been prepared in state $|\psi_i\rangle$. And you know that the sum of the probabilities is 1. That's what I meant you should use.

The density operator $\rho=\sum_i p_i|\psi_i\rangle\langle\psi_i|$ is a mathematical tool that we can use to calculate the expected average result when we perform a measurement of some observable A on every member of a large ensemble of identical systems, with a fraction $p_i$ of the systems in state $|\psi_i\rangle$. (If the number of systems is small, we're going to have to repeat the procedure many times to get an accurate average. That's why I said that we're calculating the "expected" average.)
It doesn't matter if the members of the ensemble are different systems that all exist at the same time at different locations, or if they are states of a single system at a single location at different times, or if they are different possible states of a single system at a single time.

A pure state has $p_i=\delta_{ij}$ for some $j$. What that means is that every member of the ensemble has been prepared in state $|\psi_j\rangle$. When you know that, it doesn't matter if the ensemble consists of a single system or 10^50 systems. What the information $p_i=\delta_{ij}$ is telling you is just what the "expected average" result of a measurement will be. ("Expected average" isn't a real term as far as I know. I just thought it seemed appropriate.)

10. May 8, 2010

### barnflakes

Thank you Fredrik, I understand it much more now, one last thing about the proof. You say:

$$\operatorname{Tr}\rho^2=\sum_n\langle n|\rho^2|n\rangle$$

So I have

$$\sum_{i,j,n} p_i p_j \langle n|\langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle |n \rangle$$

But since the expression $$\rho^2$$ is just a number then adding those orthonormal basis is making no difference? In other words, I can rearrange the above as follows:

$$\sum_{i,j,n} p_i p_j \langle \psi_i | \psi_j \rangle |n \rangle \langle n|\langle \psi_j | \psi_i \rangle =\sum_{i,j} p_i p_j \langle \psi_i | \psi_j \rangle \langle \psi_j | \psi_i \rangle$$

and then use the Cauchy-Schwarz as above?

11. May 8, 2010

### Fredrik

Staff Emeritus

Yes, that's the definition of the trace. It's actually independent of the basis we use. (Proving that would be a good warm-up exercise.)

That's not what we get from $\rho = \sum_i p_i | \psi_i \rangle \langle \psi_i |$ and the definition of the trace.

It isn't (but I see that it has magically turned into one in what you wrote above).

12.
May 8, 2010 ### barnflakes Haha oops, I see what I've done, sorry I forgot to check over my working since last time: $$\sum_{i,j,n} p_i p_j \langle \psi_j | \psi_i \rangle \langle \psi_i | \psi_j \rangle$$ this is the expression I obtain after taking the trace and using the identity representation, I see now. Thank you Fredrik :)
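The inequality discussed in this thread is easy to illustrate numerically. A sketch with a random mixture of non-orthogonal states (the dimensions and the random construction are my own made-up example):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 4, 6                      # Hilbert space dimension, number of states

# Random probabilities summing to 1, and random (non-orthogonal) unit kets.
p = rng.random(k)
p /= p.sum()
psi = rng.normal(size=(k, d)) + 1j * rng.normal(size=(k, d))
psi /= np.linalg.norm(psi, axis=1, keepdims=True)

# rho = sum_i p_i |psi_i><psi_i|
rho = sum(pi * np.outer(s, s.conj()) for pi, s in zip(p, psi))

# Tr(rho) = sum_i p_i = 1 (up to floating-point rounding).
print(np.trace(rho).real)

# Tr(rho^2) = sum_{i,j} p_i p_j |<psi_i|psi_j>|^2 <= 1, with equality
# only for a pure state.
print(np.trace(rho @ rho).real <= 1)   # -> True
```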
# TXT to SRT Converter

###### Updated: Jan 6, 2019

TXT to SRT converter is used to convert subtitles from text to SRT format. Language conversion is also supported between English, French, German, Italian, Japanese, etc. Simply click on the Upload button below, select your text subtitle file and hit the Convert button.

Settings

### Use Cases

Change extension of TXT file to SRT

In most cases your file is already in the SRT file format. All that must be done is change its extension from .txt to .srt. This tool does that and automatically downloads the output as a .srt file. You can use this subtitle file with a video player of your choice.

Introduce timestamps in plain text subtitle

In this case all you have are lines of subtitles with no timestamp information whatsoever. The tool does its best to introduce timestamps for each line of lyrics by considering the length of the line, how many words & characters are in it, and the Start/End time you provide.

Language Conversion

Use this to convert each line of your subtitles from one language to another.

### Srt

SRT is a lyrics file format generated by the SubRip software. Lines of songs are preceded by a range of time (start to end) during which the lyrics line appears in the song.

#### Settings Explained

Start Counter

Each sequentially generated subtitle has a counter in the SRT file format. By default the counter starts from 0. You can change this starting counter by using this setting.

##### Starting Counter 0

0
00:00:17,620 --> 00:00:23,210
Baby, last night was hands down

1
00:00:23,310 --> 00:00:25,810
One of the best nights

##### Starting Counter 1

1
00:00:17,620 --> 00:00:23,210
Baby, last night was hands down

2
00:00:23,310 --> 00:00:25,810
One of the best nights

Start Time

The time in seconds when the subtitle starts. Must be less than the End Time.

End Time

The time in seconds when the subtitle ends.
Must be greater than the Start Time.

Convert Language

Select to perform language conversion on the lyrics.

Source Language

The language to convert the lyrics from.

Target Language

The target language for the subtitle translation.

### History

Dec 18, 2018 - Support for converting subtitle language
Oct 22, 2018 - Support for plain text files
Aug 14, 2018 - Tool Launched

Created: Aug 14, 2018

Online Tool Designed For: Windows, OS X, Android, iOS, Linux
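A sketch of the timestamp bookkeeping such a tool performs. The proportional-split heuristic (weighting each line's duration by its length) and the function names are my own assumptions, but the output layout matches the SRT samples above:

```python
def srt_time(seconds):
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def to_srt(lines, start, end, counter=1):
    """Spread plain-text lines across [start, end], weighting by line length."""
    total = sum(len(line) for line in lines)
    entries, t = [], start
    for i, line in enumerate(lines):
        dur = (end - start) * len(line) / total
        entries.append(f"{counter + i}\n{srt_time(t)} --> {srt_time(t + dur)}\n{line}\n")
        t += dur
    return "\n".join(entries)

print(srt_time(17.62))  # -> 00:00:17,620
print(to_srt(["Baby, last night was hands down", "One of the best nights"],
             start=17.62, end=25.81))
```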
# How do you translate "the product of triple a number and five" into a mathematical expression?

Dec 30, 2015

#### Answer:

Product means multiply ...

#### Explanation:

"Triple a number" is $3 x$, so the product of triple a number and five is

$3 x \times 5 = 15 x$

Hope that helped
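A quick numeric check of the translation (a throwaway sketch with a made-up number):

```python
def triple(n):
    return 3 * n

# "The product of triple a number and five", checked at n = 7:
product = triple(7) * 5
print(product, 15 * 7)  # -> 105 105
```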
1:22 AM 0 Determine whether there exists a function $f:\mathbb{R}\rightarrow\mathbb{R}$ such that $$f\big(x^3+x\big)\le x\le\big(f(x)\big)^3+f(x)$$ for all $x\in\mathbb{R}$. Source: Math Excalibur Volume 22 No. 4 Page 3 Problem 536 rephrased 1:46 AM 1 Once again, you have angered the Emperor, and she has imprisoned you in a special prison. “I do have a bit of good news for you,” the Emperor has told you. “You're only two doors away from freedom, and I've left all the doors unlocked for you!” Of course, she neglected to mention the details. S... 4 hours later… 5:53 AM 0 This Welsh town's northern fair is, after all, very backward: firstly popcorn with lemon; secondly, bloody agility swings; finally, scary clown going away without starting. All agog, very chewy outer layers right now. Selling wares from wardrobes: two ewes, four fifties, inner pants. Outwardly... 6:09 AM 3 I have created a puzzle that includes multiple steps, that I'd like to post on the site. However, two of the steps use standard puzzles as part of making progress. Assuming I attribute the source, is it fair game for me to find an instance of these standard puzzles on the Internet, or use a p... @jafe "My word" is the definition half, and signifies that it's a word you invented (for this C4). The rest is the wordplay. :-) 7:12 AM @msh210 heheheh CCCC hint: "My word!" may not be a very intuitive match for what it stands for. A more straightforward alternative would be "Greetings!" 7:57 AM @jafe Is al'Thor (the fantasy character or maybe our own) a HIREDHAND perchance? HI + R(E D_ H)AND. AFAIK that's usually written with a space, but so many words are losing their spaces these days. (Though the clue would probably then have "e.g." or the like at the end. And "separately" would be extraneous.) the wordplay would make sense if they were! but that it's not the intended solution @msh210 Sheepherder, not hired hand. (The fantasy character, not me.) oh well CHARACTER could be the def, I suppose. 
Or if any of his various titles have 9 letters. I looked up his names/titles online, as well as those of his adoptive parents, and none seem to have nine letters. CHARACTER or MODERATOR would seem to require an "e.g." or the like at the end, but I'm certainly willing to consider them as possible solutions anyway. 8:08 AM I wonder if "by Rand al'Thor" could mean something I or the character created. CALLANDOR (his sword) is nine letters, and might be found by his side. "My word!" is a CALL. "heroin" is NOR< ("ron"). I don't see a reversal indicator. "a bit of DXM" is D. So "a bit of DXM and heroin separately ingested" could sort of maybe be NDOR if we find a reversal indicator and squint a bit. Oh, maybe "separately" is "after being separated into its component characters" i.e. an anagram indicator. Then "a bit of DXM and heroin separately ingested" is easily NDOR. But "Ecstasy" would then have to be A. . . . which it doesn't seem to be. > Oh, maybe "separately" is "after being separated into its component characters" i.e. an anagram indicator. Then "a bit of DXM and heroin separately ingested" is easily NDOR. No, that would be an indirect anagram. Oops. @Randal'Thor maybe "ingested by Rand al'Thor" could be a definition? Is there an adjective or collective noun referring to what (or whom) he eats/kills/subsumes/is possessed by? (In case you can't tell, I know nothing about this character. I don't even know if this "Wheel of Time" thing is a book or a game or a movie or what - though I'm sure that's easily Googlable.) It's a series of books :-) Probably one of the longest fantasy epics, at 14 novels and over 4 million words. And I can't think of anything specific that could be defined by "ingested by Rand al'Thor". The One Power, specifically saidin, is the type of magic he uses, but that's channelling, not ingesting. 8:25 AM And, more importantly, only six characters. Maybe Thor is the definition. 
But then the wordplay seems too long: something, something, something, and something else all ingested by RANDAL is at least 10 letters. Ooh, maybe they're separately ingested by R and AL. 2 hours later… 11:00 AM ^ those types of clues are starting to be my favorite, singular words that can be broken up cryptically altho I think there are limitations to it, and some people don't want it at all 11:14 AM (well, "Rand al" ain't a single word but you get me) 11:29 AM @Randal'Thor I've never read it, but I understand Dune is over 4 million novels. But I don't know whether it's a "fantasy epic". (I exaggerate, of course.) 11:46 AM I'd been thinking (1) that "Rand al'Thor" might mean ALTHOR*, and (2) that "My word!" might mean some notably Finnish word. But the only 9-letter Finnish greeting I can find is TERVEHDYS and while it does contain an E, a D, and an H, all I can get out of what remains is VESTRY* which seems not very useful. Taking "Rand" to mean "R and" is a clever idea, though. ah, wait @GarethMcCaughan I thought it might be Finnish, but the hint seems to indicate otherwise. it's gotta be HEYERDAHL somehow HEY will be "my word" / "greetings" there we go! so Thor is the def. we have an E, and then a D and an H inserted separately into "R and AL" yep
@GarethMcCaughan that is correct and yeah the hey = my word connection is pretty weak, although both can be found in the dictionary as "expression of surprise" fair enough then 11:52 AM i wasn't thinking of "R and", though, just R = (south african) rand @jafe ohhhhh that's nicer I thought using "Rand al" to clue "R and AL" was considered naughty in a cryptic clue? I got some disapproval before for making clues where, for example, "insect" meant putting something inside SECT. 12:07 PM @Randal'Thor some certainly consider it so. Note though that that's not what jafe did. 12:29 PM 2 The 16 words below may be partitioned into 4 groups of 4 connected words. Additionally, each of the four groups can be represented by a single group-word. Finally, the four group-words are connected by a single five-letter word. +--------------+--------------+--------------+--------------+ ... 2 hours later… 2:08 PM 0 You are given an empty 6x6 grid. You are allowed to paint some of its cells as walls (black), while the remaining cells stay empty (white). A robot is programmed to start in one corner of the grid and visit the other three corners^ in the least number of steps possible. At each step, the robot mo... 3 hours later… 5:25 PM 3 If I place all the numbers from 1 to 9 in a 3x3 grid and I add the products of each row and column, then what is the minimal and the maximal sum? For example, the sum is 450 on the picture below. 5:50 PM 0 In front of me stands a table with the shape of an equilateral triangle with side lengths 1. I can cover the whole surface with five isometric circular tablecloths. What’s the minimum radius for a tablecloth? (The tablecloths can of course) 2 hours later… 8:06 PM 0 If you can solve these, then you probably are one of the Avengers. I find these six random puzzles to be impossible. Any Tony Stark IQs out there (maybe @Deusovi :D)? 8:58 PM 1 Three equilateral triangles with side lengths 28 are placed in the position as shown in the picture above. 
All the contacts are perfect and a circle passes by exactly one vertex per triangle. What’s the minimal radius? 9:23 PM 0 The game is pretty simple, whoever reaches 50 first wins, but there are a few important details: 2 players, one die. Players take turns. Rolled values are simply added up When you roll a 6, however, the sum of the current turn is discarded When you decide to stop your turn, the current turn's s... 9:49 PM 1 Loosely inspired by the Babson task, here's my first attempt at a Binary Homeworlds problem in the same vein as Simple, Monopoly, Inheritance or Insurance Fraud, and Blastdoor. Lee (0, g3b2) r1r3g1b1- Ray (1, r1r3) -y2g3b3 DS1 (y2) b2-r1r3 DS2 (g1) g1g2- DS3 (g2) y2- DS4 (b2) r2-g3 The stash... 2 hours later… 11:32 PM 1 The set of sixteen words below can be partitioned. Each partition is of four words that have something in common. I invite you to figure out the partitions and commonalities. ACORN, ALI, BRADLEY, CLEAN, COUNTY, FIRE, HAM, HERSHEY, INNOCENT, IS, LEO, MAMET, MARTIN, RUN, URBAN, VERITE
## Pacific Journal of Mathematics

### Properties of solutions of $n{\rm th}$ order linear differential equations.

Thomas L. Sherman

#### Article information

Source: Pacific J. Math., Volume 15, Number 3 (1965), 1045-1060.

Dates: First available in Project Euclid: 13 December 2004

https://projecteuclid.org/euclid.pjm/1102995587

Mathematical Reviews number (MathSciNet): MR0185185

Zentralblatt MATH identifier: 0132.31204

Subjects: Primary: 34.20

#### Citation

Sherman, Thomas L. Properties of solutions of $n{\rm th}$ order linear differential equations. Pacific J. Math. 15 (1965), no. 3, 1045--1060. https://projecteuclid.org/euclid.pjm/1102995587

#### References

• [1] N. V. Azbelev and Z. B. Chalyuk, On the question of the distribution of the zeros of solutions of a third order linear differential equation, Mat. Sbornik, 51 (1960), 475-486 (Russian).
• [2] J. H. Barrett, Disconjugacy of second-order linear differential equations with non-negative coefficients, Proc. Amer. Math. Soc. 10 (1959), 552-561.
• [3] J. H. Barrett, Disconjugacy of a self-adjoint differential equation of the fourth order, Pacific J. Math. 11 (1961), 25-37.
• [4] J. H. Barrett, Fourth order boundary value problems and comparison theorems, Canadian J. Math. 13 (1961), 625-638.
• [5] J. H. Barrett, Two-point boundary problems for linear self-adjoint differential equations of the fourth order with middle term, Duke Math. J. 29 (1962), 543-554.
• [6] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, 1955.
• [7] M. Hanan, Oscillation criteria for third-order linear differential equations, Pacific J. Math. 11 (1961), 919-944.
• [8] H. Howard, Oscillation criteria for fourth-order linear differential equations, Trans. Amer. Math. Soc. 96 (1960), 296-311.
• [9] R. W. Hunt, The behavior of solutions of ordinary, self-adjoint differential equations of arbitrary even order, Pacific J. Math. 12 (1962), 945-961.
• [10] E. L. Ince, Ordinary differential equations, Dover, New York, 1956.
• [11] W. Leighton and Z. Nehari, On the oscillation of solutions of self-adjoint differential equations of the fourth order, Trans. Amer. Math. Soc. 89 (1958), 325-377.
• [12] A. Ju. Levin, Some questions on the oscillations of solutions of linear differential equations, Doklady Akad. Nauk. 148 (1963), 512-515 (Russian).
• [13] W. T. Reid, Oscillation criteria for self-adjoint differential systems, Trans. Amer. Math. Soc. 101 (1961), 91-106.
• [14] G. F. Simmons, Introduction to topology and modern analysis, McGraw-Hill, New York, 1963.
# How do you find the square root of 15?

Jun 26, 2016

$\sqrt{15}$ is not simplifiable. We can find rational approximations such as $\frac{31}{8}$ and $\frac{244}{63}$.

#### Explanation:

$15 = 3 \times 5$ has no square factors, so $\sqrt{15}$ cannot be simplified. It is not expressible as a rational number; it is an irrational number a little less than $4$.

Since $15 = 4^2 - 1$ is of the form $n^2 - 1$, $\sqrt{15}$ has a fairly simple continued fraction expansion:

$\sqrt{15} = [3;\overline{1,6}] = 3+\cfrac{1}{1+\cfrac{1}{6+\cfrac{1}{1+\cfrac{1}{6+\cfrac{1}{1+\cdots}}}}}$

We can truncate this continued fraction expansion early to get rational approximations to $\sqrt{15}$. For example:

$\sqrt{15} \approx [3;1,6,1] = 3+\cfrac{1}{1+\cfrac{1}{6+\cfrac{1}{1}}} = 3+\cfrac{1}{1+\frac{1}{7}} = 3+\frac{7}{8} = \frac{31}{8} = 3.875$

$\sqrt{15} \approx [3;1,6,1,6,1] = 3+\cfrac{1}{1+\cfrac{1}{6+\cfrac{1}{1+\cfrac{1}{6+\cfrac{1}{1}}}}} = 3+\cfrac{1}{1+\cfrac{1}{6+\frac{7}{8}}} = 3+\cfrac{1}{1+\frac{8}{55}} = 3+\frac{55}{63} = \frac{244}{63} = 3.\overline{873015}$

Actually: $\sqrt{15} \approx 3.87298334620741688517$
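The truncations above can be cross-checked with a short script; the `convergent` helper name is mine, and the folding-from-the-right evaluation uses exact rational arithmetic from the standard library:

```python
from fractions import Fraction
import math

# Fold a truncated continued fraction [a0; a1, ..., an] from the right
# into an exact rational number.
def convergent(terms):
    value = Fraction(terms[-1])
    for a in reversed(terms[:-1]):
        value = a + 1 / value
    return value

print(convergent([3, 1, 6, 1]))          # 31/8
print(convergent([3, 1, 6, 1, 6, 1]))    # 244/63
print(float(convergent([3, 1, 6, 1, 6, 1, 6, 1])), math.sqrt(15))
```

Each extra `1, 6` pair of terms gives a noticeably better rational approximation.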
Plot a circle with an edgecolor in Matplotlib To plot a circle with an edgecolor in matplotlib, we can take the following Steps − • Create a new figure or activate an existing figure using figure() method. • Add a subplot method to the current axis. • Create a circle instance using Circle() class with an edgecolor and linewidth of the edge. • Add a circle path on the plot. • To place the text in the circle, we can use text() method. • Scale the X and Y axes using xlim() and ylim() methods. • To display the figure, use show() method. Example

import matplotlib
from matplotlib import pyplot as plt, patches

plt.rcParams["figure.figsize"] = [7.00, 3.50]
plt.rcParams["figure.autolayout"] = True

fig = plt.figure()
ax = fig.add_subplot(111)
# facecolor="none" keeps only the orange edge visible
circle = patches.Circle((0, 0), radius=1, facecolor="none", edgecolor="orange", linewidth=7)
ax.add_patch(circle)
ax.text(0, 0, "Circle", ha="center", va="center")
plt.xlim(-1.25, 1.25)
plt.ylim(-1.25, 1.25)
plt.show()
[Tex/LaTex] LyX: why does figure-wrap-float overrun the page

Tags: floats, lyx, positioning, wrapfigure

I find that when I add figure wrap floats to LyX documents, often they will be placed poorly such that they overrun the bottom of the page. This is always accompanied by some of the text on the next page wrapping around nothing, as if the float had continued on that page. Example:

Can anyone tell me why this happens and how to avoid it? In the settings for the wrap float, I see 'Allow floating', so I presume the typesetting engine should be free to find a more suitable location. Note that this does not always happen, but in a document with several figures densely placed in the text, it is very likely to occur.

The floating capabilities of wrapfig and relatives are restricted; in particular, placement near a page break is problematic. The wrapfig documentation mentions these facts:
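A minimal plain-LaTeX sketch of the usual workaround (sizes illustrative; `example-image` is a placeholder graphic from the `mwe` package): wrapfigure's optional first argument overrides the number of narrow lines reserved for the wrap, which often tames overruns near a page break.

```latex
\documentclass{article}
\usepackage{graphicx}
\usepackage{wrapfig}
\begin{document}
% [12] reserves exactly 12 narrow lines beside the figure; tune this
% value (and avoid placing the environment near a page break) when the
% wrap overruns the bottom of the page.
\begin{wrapfigure}[12]{r}{0.4\textwidth}
  \centering
  \includegraphics[width=0.38\textwidth]{example-image}
  \caption{A wrapped figure.}
\end{wrapfigure}
Surrounding paragraph text flows around the figure here.
\end{document}
```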
Age and relativity [closed]

This is a question, where I start with some assertions: Try to consider the universe as a four-coordinate system, x,y,z,t where t is time and where we view a change in t as a change in position, and as a velocity in the same way as any other change in either of the other components of the coordinate. Everything is then moving at the speed of c. The difference between a photon and another particle would be that a photon "spends" its velocity in any of the x, y or z dimensions - but not in the time dimension. This implies that all photons are on the same coordinate in the t-dimension - only the x,y,z values change while t remains 0. It also implies that time would appear to stop for something travelling in only the other dimensions, while it would pass fastest for something completely stationary. You can then also consider space-time as a function of acceleration. A physical object's coordinate could be derived from its velocity, which would have to be derived from its acceleration. Acceleration would be a function of entropy, or "age" I believe. Is this the basis of general relativity? Is it at least an initial starting point for general relativity, or is it completely wrong? Would spin have to be part of the coordinate system?

- closed as too broad by Alexey Bobrick, Eduardo Serra, astromax, TildalWave, called2voyage♦ Jan 13 at 16:10

There are either too many possible answers, or good answers would be too long for this format. Please add details to narrow the answer set or to isolate an issue that can be answered in a few paragraphs. If this question can be reworded to fit the rules in the help center, please edit the question.

No, it is completely wrong. Everything about this is mistaken. The first mistake is confusion between four-velocity and speed, aided by ignorance of the metric. Sorry, but I don't think this is salvageable. 
–  Stan Liou Jan 10 at 12:28
@StanLiou It would be better if you could help him understand instead of just saying he's wrong –  Eduardo Serra Jan 10 at 13:19
@StanLiou Can't you just help me and provide one observation or calculation where this view does not match? It allows my intuition to explain Einstein shift and relative time, and why nothing can move faster than the speed of light; it even explains the twin paradox in an intuitive way. –  frodeborli Jan 10 at 13:25
@frodeborli: The problems in your question start with the sentence "Everything is then moving at the speed of c." and continue further on. If you explained what you mean in terms of 4-velocities, world lines and 4-dimensional spacetime, it would perhaps be easier to understand your question. All these notions are standard and are explained in most introductory books. –  Alexey Bobrick Jan 10 at 17:33
@EduardoSerra: I provided two things he should start with to understand this better, but I obviously can't teach someone GTR in a comment section. So what's the problem? –  Stan Liou Jan 10 at 21:47
@frodeborli: That's not the correct relation. The correct relation is $(mc^2)^2 = E^2 - (pc)^2$, which has the geometrical meaning of mass being the magnitude of the four-momentum vector. –  Stan Liou Jan 11 at 2:44
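The relation Stan Liou quotes can be spot-checked numerically. In this sketch (units with c = 1; the sample mass, momentum, and boost speed are arbitrary choices of mine), the quantity E² − p² comes out the same before and after a Lorentz boost:

```python
import math

# Mass-shell relation (m c^2)^2 = E^2 - (p c)^2, in units with c = 1,
# checked for invariance under a Lorentz boost.
m, p = 2.0, 3.0
E = math.sqrt(p**2 + m**2)           # energy on the mass shell

beta = 0.6                           # boost speed
gamma = 1 / math.sqrt(1 - beta**2)
E_boost = gamma * (E - beta * p)     # transformed energy
p_boost = gamma * (p - beta * E)     # transformed momentum

print(E**2 - p**2, E_boost**2 - p_boost**2)  # both approximately m**2 = 4
```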
# Thread: Find the (trigonometric) limit

1. ## Find the (trigonometric) limit

How do I find the following limit?:

lim t --> 0 (tan 6t)/(sin 2t)

(The answer is 3 and I could also figure that out by plugging in t = 0.01 but I would like to know the legitimate way of doing this. My teacher taught us similar stuff but I don't think he went over this particular type of problem)

Any help would be greatly appreciated!

2. Originally Posted by s3a

How do I find the following limit?: lim t --> 0 (tan 6t)/(sin 2t) (The answer is 3 and I could also figure that out by plugging in t = 0.01 but I would like to know the legitimate way of doing this. My teacher taught us similar stuff but I don't think he went over this particular type of problem) Any help would be greatly appreciated!

$\lim_{t\rightarrow 0} \frac{\tan (6t)}{\sin (2t)}=\lim_{t\rightarrow 0} \frac{\tan (6t)}{6t}\cdot\frac{2t}{\sin(2t)}\cdot\frac{6}{2}=1\cdot 1\cdot 3=3$
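A quick numerical sanity check of the same limit (not a substitute for the algebra, just the "plug in a small t" idea from the question made systematic):

```python
import math

# tan(6t)/sin(2t) should approach 3 as t -> 0.
for t in (1e-2, 1e-4, 1e-6):
    print(t, math.tan(6 * t) / math.sin(2 * t))
```

The printed ratios settle toward 3 as t shrinks.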
# Uniform convergence and equicontinuity

Given a sequence of functions which is not uniformly convergent, can we deduce that none of its subsequences is uniformly convergent and therefore, by Arzelà-Ascoli, say that the family of functions is not equicontinuous? I think it is true in the case that the limit function is not continuous (because all the subsequences must converge pointwise to that function, and then the convergence cannot be uniform). But what about when the limit is continuous?

• You seem to be asking if there is a sequence of functions that isn't equicontinuous converging to a continuous limit? The answer is obviously yes. You can approximate the straight line (in a pointwise sense) $f(x) = x$ very easily using functions that aren't even continuous themselves (let alone being equicontinuous as a sequence). – Frank Apr 5 '15 at 16:19
• Suppose I have the sequence of functions $\{f_{n}\}$ where $f_{n}(x)=x^n, x\in [0,1]$. It is easy to see that its limit function is not continuous. Therefore, we cannot have uniform convergence. Can I say that all the subsequences of $\{f_{n}\}$ cannot be uniformly convergent? – Bill Apr 5 '15 at 16:31
• The answer to your first question is obviously no. Define $f_n(x)=(-1)^n$ for all $x$. This is not (uniformly) convergent, but it is equicontinuous. – Vincent Boelens Apr 5 '15 at 16:31
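Bill's example can be made concrete with a tiny script (an illustration, not a proof): for f_n(x) = x^n on [0, 1], the witness point x_n = 2^(−1/n) shows the sup-distance to the pointwise limit never drops below 1/2 for any n, so no subsequence can converge uniformly.

```python
# The pointwise limit of f_n(x) = x^n on [0, 1] is 0 on [0, 1) and 1 at x = 1.
# At x_n = 2**(-1/n), which lies in (0, 1], we get f_n(x_n) = 1/2 while the
# limit there is 0, so sup |f_n - limit| >= 1/2 for every n.
for n in (1, 10, 100, 10**6):
    xn = 2 ** (-1 / n)
    print(n, xn, xn ** n)  # last column stays at 0.5
```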
# zbMATH — the first resource for mathematics Restricted tangent bundle on space curves. (English) Zbl 0859.14011 Teicher, Mina (ed.), Proceedings of the Hirzebruch 65 conference on algebraic geometry, Bar-Ilan University, Ramat Gan, Israel, May 2-7, 1993. Ramat-Gan: Bar-Ilan University, Isr. Math. Conf. Proc. 9, 283-294 (1996). Let $$H(n,d,g)$$, $$g\geq 1$$, be the variety of smooth connected curves of genus $$g$$ and degree $$d$$ embedded in $$\mathbb{P}^n$$. For $$X\in H(n,d,g)$$, let $$R_X$$ denote the restriction of the tangent bundle of $$\mathbb{P}^n$$ to $$X$$. In this paper, the authors study semistability and simplicity of $$R_X$$. Theorem 1: (Bogomolov) For $$n=2$$, $$R_X$$ is stable if $$d\geq 3$$ ($$g\geq 1$$). The splitting type of $$R_X$$ is (3,3) if $$X$$ is a conic and $$(2,1)$$ if $$X$$ is a line in $$\mathbb{P}^2$$. Theorem 2: For $$n=3$$, $$d\geq g+3$$, there exists a nonempty dense open subset of $$H(3,d,g)$$ consisting of $$X$$ with $$R_X$$ semistable and moreover, simple if $$g\geq 2$$. Theorem 2 is proved by induction on $$(d,g)$$ using degenerations of smooth curves $$X$$ to reducible reduced curves $$Y$$ with ordinary double points. For this purpose the notions of Harder-Narasimhan polygons and strata are generalized to vector bundles of $$Y$$. J. Simonis [Math. Ann. 192, 262-278 (1971)] and D. Laksov [Astérisque 87/88, 207-219 (1981; Zbl 0489.14008)] had independently proved that if $$X$$ is a closed subvariety of $$\mathbb{P}_n$$ which is nonsingular in codimension 1 and linearly normal, then $$R_X$$ is decomposable if and only if $$X$$ is a rational curve. – For rational curves, $$R_X$$ was studied by L. Ramella [Thesis (Nice 1988)] and F. Ghione, A. Iarrobino and G. Sacchiero [preprint (1988)]. For the entire collection see [Zbl 0828.00035]. ##### MSC: 14H50 Plane and space curves 14H60 Vector bundles on curves and their moduli 14H10 Families, moduli of curves (algebraic) 14F05 Sheaves, derived categories of sheaves, etc. 
(MSC2010) ##### Keywords: space curves; tangent bundle
# Electromagnetic Wave Equation

1. Oct 5, 2014

### rmjmu507

1. The problem statement, all variables and given/known data

Show that the solution $\textbf{E}=E(y,z)\textbf{n}\cos(\omega t-k_xx)$ substituted into the wave equation yields $\frac{\partial^2 E(y,z)}{\partial y^2}+\frac{\partial^2 E(y,z)}{\partial z^2}=-k^2E(y,z)$ where $k^2=\frac{\omega^2}{c^2}-k_x^2$

2. Relevant equations

See above.

3. The attempt at a solution

I plugged the given solution into $\frac{\partial^2 \textbf{E}}{\partial y^2}+\frac{\partial^2 \textbf{E}}{\partial z^2}=\frac{1}{c^2}\frac{\partial^2 \textbf{E}}{\partial t^2}$ and got:

$\textbf{n}\cos(\omega t-k_xx)[\frac{\partial^2 E(y,z)}{\partial y^2}+\frac{\partial^2 E(y,z)}{\partial z^2}]=-\frac{\omega^2}{c^2}E(y,z)\textbf{n}\cos(\omega t-k_xx)$

Now, canceling like terms I get:

$\frac{\partial^2 E(y,z)}{\partial y^2}+\frac{\partial^2 E(y,z)}{\partial z^2}=-\frac{\omega^2}{c^2}E(y,z)$

But I'm missing a $k_x^2$ term on the RHS, and cannot figure out where this could/would have come from...can someone please explain?

2. Oct 5, 2014

### rmjmu507

I was able to get the $k_x^2$ term by determining $\nabla^2\textbf{E}$ and rearranging, thus obtaining the desired relation. However, I'm not entirely sure why it's necessary to determine $\nabla^2$. Can someone please explain this to me?

3. Oct 5, 2014

### RUber

You had to evaluate the $\nabla^2$ operator because that is the definition of the wave equation: $\nabla^2 \vec{E} = \frac{1}{c^2}\frac{\partial^2 \vec{E}}{\partial t^2}$ Adding an $x$ dependence into your function for $\vec{E}$ meant you had to fully evaluate the Laplacian.

4. Oct 5, 2014

### rmjmu507

I see...I was considering this equation as only a two-dimensional one...for some reason I was overlooking the x component in the cosine function. Not entirely sure why, perhaps because of the E(y,z) term, but I now realize this is simply a coefficient corresponding to the amplitude. Thanks!
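The full-Laplacian point can be spot-checked numerically. In this sketch the transverse profile E(y,z) = sin(ay)·sin(bz) and all constants are my own illustrative choices; choosing ω from the dispersion relation ω²/c² = k_x² + k² makes the 3D wave equation hold, which central second differences confirm:

```python
import math

# E(x, y, z, t) = F(y, z) cos(w t - kx x) with F(y, z) = sin(a y) sin(b z),
# so Fyy + Fzz = -(a^2 + b^2) F = -k^2 F.
a, b, kx, c = 1.3, 0.7, 0.9, 2.0
k2 = a**2 + b**2                      # transverse wavenumber squared, k^2
w = c * math.sqrt(kx**2 + k2)         # dispersion: w^2/c^2 = kx^2 + k^2

def E(x, y, z, t):
    return math.sin(a * y) * math.sin(b * z) * math.cos(w * t - kx * x)

def d2(f, h=1e-4):
    # central second difference of a one-variable slice f(offset)
    return (f(h) - 2 * f(0) + f(-h)) / h**2

x0, y0, z0, t0 = 0.2, 0.4, 0.6, 0.1   # arbitrary evaluation point
lap = (d2(lambda s: E(x0 + s, y0, z0, t0))
       + d2(lambda s: E(x0, y0 + s, z0, t0))
       + d2(lambda s: E(x0, y0, z0 + s, t0)))
Ett = d2(lambda s: E(x0, y0, z0, t0 + s))
print(abs(lap - Ett / c**2))  # small: the 3D wave equation is satisfied
```

Dropping the x-derivative term from `lap` breaks the balance, which is exactly the missing $k_x^2$ in the original attempt.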
{}
latex vector brackets

… inside the brackets will increase the spacing between rows by a factor of 1.5. If you want matrices with round brackets, use \begin{pmatrix}...\end{pmatrix}. A matrix in LaTeX can be generated with the help of a math environment for typesetting matrices. In this tutorial, we will discuss how to generate a matrix and make amendments with regard to the brackets. Refer to the external references at the end of this article for more information.

LaTeX Mathematical Symbols: the more unusual symbols are not defined in base LaTeX (NFSS) and require \usepackage{amssymb}. These symbols are organized into seven classes based on their role in a mathematical expression. This is not a comprehensive list; Greek and Hebrew letters include α \alpha, κ \kappa, ψ \psi, ϝ \digamma, Δ \Delta, Θ \Theta, β \beta, λ \lambda, ρ \rho, ε \varepsilon, Γ \Gamma, Υ \Upsilon. The amsmath package provides commands \lvert, \rvert, \lVert, \rVert which change size dynamically.

Matrices and Vectors — matrices in LaTeX: pmatrix, bmatrix and vmatrix set the body 1 2 3 / 4 5 6 / 7 8 9 with round brackets, square brackets, and vertical bars respectively.

I'd like to be able to write long curly brackets around a set of comma-separated vectors that form a basis, in LaTeX. I know how to create a vector in LaTeX, as such: $\begin{bmatrix} 1\\ 1 \end{bmatrix},\begin{bmatrix} 1\\ 1 \end{bmatrix}$ But how can I make the curly brackets larger, such that they will look more proper?

Tables: (b) Centering — include \begin{center} and \end{center} to center the table. (c) Adding vertical lines — if you wish to have lines between columns or around the sides, add | between the "{rcl}", or …

TeX - LaTeX Stack Exchange is a question and answer site for users of TeX, LaTeX, ConTeXt, and related typesetting systems. Alternatives to the LaTeX angle bracket symbols \langle and \rangle. 
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example: $F = G \left( \frac{m_1 m_2}{r^2} \right)$ Notice that to insert the parentheses or brackets, the \left and \right commands are used. LaTeX symbols have either names (denoted by a backslash) or special characters. For generating a matrix in LaTeX we have to use the package amsmath by giving the command \usepackage{amsmath}. Angle brackets are used in various sorts of mathematical expressions: on the typewriter they are rendered using the greater-than and less-than signs, < and >, but LaTeX provides \langle and \rangle. $\langle x, y\rangle$ can denote an inner product or other such pairing, $\langle a, b \mid ab = ba^2\rangle$ a presentation of a group, and $k\langle X\rangle$ a free associative algebra.
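A minimal sketch of what the question seems to ask for (entries illustrative): \left\{ ... \right\} grows the curly braces to match the column vectors inside.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% The braces are sized automatically to the tallest bmatrix between them.
\[
  \mathcal{B} = \left\{
    \begin{bmatrix} 1 \\ 1 \end{bmatrix},
    \begin{bmatrix} 1 \\ -1 \end{bmatrix}
  \right\}
\]
\end{document}
```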
# Need help with thisss

###### Question: need help with thisss

### What percent of 50 is 15

### Select the correct answer from each drop-down menu. How are evidence and counterexamples used in proofs? In a direct proof, evidence is used to . On the other hand, a counterexample is a single example that .

### How is Wilde making fun of the Victorian rules for entertaining guests?

### Help me finish this question!!!!1

### Help Simplify. 8^2 + 6 (14 – 12)

### For which function does f (14) = 23? A. f(x) = 1/2x - 20 B. f(x) = 3x - 7 C. f(x) = x + 9 D. f(x) = 14x + 23

### 3. Rose shares a lot of information about Susanna, including things she has said, kind things she has done, and information about her life. Based on this, what conclusion can be made? A. Rose and Susanna are the same age. B. Rose and Susanna are close friends. C. Rose and Susanna have a lot in common. D. Rose and Susanna are strangers.

### During the conversation at the fence, how does Elisa show she has begun to trust the stranger?

### En la ciudad hay tres puentes y hay tres cruces de calles. En la ciudad hay ________ puentes ________ cruces de calles. más; que / menos; que / tantas; como / tantos; como

### Q (x + 40) (3x – 20) R S Find m

### Find the measure of a

### How many moles of N2 is 224 L of N2?

### Molecular Polarity depends on which of the following. Choose all correct answers. A. Electronegativities B. Molecular Shape (bond angles) C. Bond Polarity

### G(x)=4x+4 h(x)=x^2-x-2

### Music has been described as the ordering of sound in time. Composers have shaped music into countless forms, many of which have endured for hundreds of years, such as the symphony or the string quartet. Compare and contrast TWO musical forms in detail from the list below. Circle your choice. Be sure to include in your discussion: 1. Defining characteristics of these forms 2. When in musical history these forms developed and how they have evolved 3. Major composers who wrote in these styles.

### 17. The Unified Coordination Group: A. Directs the incident command structure established at the incident. B. Is a temporary Federal facility. C. Provides coordination leadership at the Joint Field Office. D. Is a state-level group comprised of the heads of each Emergency Support Function (ESF).
Last edited by Kazragul, Friday, July 24, 2020 | History

12 editions of Differential and Integral Equations found in the catalog.

# Differential and Integral Equations

## by Peter Collins

Written in English

The Physical Object — Number of Pages: 386. ID Numbers — Open Library: OL7400642M; ISBN 10: 0198533829; ISBN 13: 9780198533825.

Integro-differential equations model many situations from science and engineering, such as in circuit analysis. By Kirchhoff's second law, the net voltage drop across a closed loop equals the voltage impressed $E(t)$. Buy Analysis of Approximation Methods for Differential and Integral Equations (Applied Mathematical Sciences) online at best prices in India; read book reviews and author details. Author: Hans-Jürgen Reinhardt. Now, a new unified presentation and extensive development of special functions associated with fractional calculus are necessary tools, being related to the theory of differentiation and integration of arbitrary order (i.e., fractional calculus) and to fractional-order (or multi-order) differential and integral equations. Inequalities for Differential and Integral Equations has long been needed; it contains material which is hard to find in other books. Written by a major contributor to the field, this comprehensive resource contains many inequalities which have only recently appeared in the literature and which can be used as powerful tools in the development of applications. It's impossible to find explicit formulas for solutions of some differential equations. Even if there are such formulas, they may be so complicated that they're useless. Book: Elementary Differential Equations with Boundary Value Problems (Trench) — a direction field and integral curves for $y'= \frac{x^2-y^2}{1+x^2+y^2}$. 
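The circuit-analysis remark above can be sketched numerically; the component values below are illustrative choices of mine, not from any of the books listed:

```python
# Series RLC loop: Kirchhoff's second law gives the integro-differential
# equation  L i' + R i + (1/C) * integral(i dt) = E(t).
# Introducing the charge q (with q' = i) turns it into the ODE
# L q'' + R q' + q/C = E, integrated here with a plain explicit Euler scheme.
L_, R_, C_, E_ = 1.0, 2.0, 0.5, 1.0   # inductance, resistance, capacitance, step voltage
q, i, dt = 0.0, 0.0, 1e-3
for _ in range(20_000):               # integrate out to t = 20 s
    di = (E_ - R_ * i - q / C_) / L_
    q += dt * i
    i += dt * di
print(round(q, 3))  # approaches the steady-state charge C*E = 0.5
```

With a step voltage, the current decays to zero and the capacitor charge settles at C·E, which the simulation reproduces.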
topics covered include differential equations of the first order, the Riccati equation and existence theorems, second order equations, elliptic integrals and functions, the technique of continuous analytical continuation, the phenomena of the phase plane, nonlinear mechanics, nonlinear integral equations, problems from the calculus of variations and more.

### Differential and Integral Equations by Peter Collins — Download PDF EPUB FB2

…differential equations, mathematical physics, integral equations, engineering mathematics, nonlinear mechanics, theory of heat and mass transfer, and chemical hydrodynamics. 
He obtained exact solutions for several thousands of ordinary differential, partial differential, and integral equations. Lectures on Differential and Integral Equations, Paperback – April 1, by Kosaku Yosida (Author). Partial Differential Equations of Mathematical Physics and Integral Equations (Dover Books on Mathematics) — Kindle edition by Guenther, Ronald B., Lee, John W. Differential Equations Books: This elementary text-book on Ordinary Differential Equations is an attempt to present as much of the subject as is necessary for the beginner in Differential Equations, or, perhaps, for the student of Technology who will not make a specialty of pure Mathematics. Integral And Differential Equations. Differential and integral equations are a major aspect of mathematics, impacting a wide range of the natural and social sciences. Our extensive and low-priced list includes titles on applied partial differential equations, basic linear partial differential equations, differential manifolds, linear integral equations, ordinary differential equations, singular integral equations, and more. Integral And Differential Equations. This book covers the following topics: Geometry and a Linear Function, Fredholm Alternative Theorems, Separable Kernels, The Kernel is Small, Ordinary Differential Equations, Differential Operators and Their Adjoints, G(x,t) in the First and Second Alternative and Partial Differential Equations. 
Preface; Contents; How to use this book; Prerequisites; 0 Some Preliminaries; 1 Integral Equations and Picard's Method; 2 Existence and Uniqueness; 3 The Homogeneous Linear Equation and Wronskians; 4 The Non-Homogeneous Linear Equation; 5 First-Order Partial Differential Equations; 6 Second-Order Partial Differential Equations; 7 The Diffusion. Many important phenomena are described and modeled by means of differential and integral equations; to understand these phenomena necessarily implies being able to solve the differential and integral equations. These notes follow the widely used textbook "Elementary Differential Equations and Boundary Value Problems" by Boyce & DiPrima (John Wiley & Sons, Inc., Seventh Edition). Many of the examples presented in these notes may be found in this book. The material of Chapter 7 is adapted from the textbook "Nonlinear dynamics and chaos" by Steven. The book is divided into four chapters, with two useful appendices, an excellent bibliography, and an index. A section of exercises enables the student to check his progress. Contents include Volterra Equations, Fredholm Equations, Symmetric Kernels and Orthogonal Systems of Functions, Types of Singular or Nonlinear Integral Equations, and more. Constructive and Computational Methods for Differential and Integral Equations: Symposium, Indiana University, February 17–20. Contributions include "Automatic solution of differential equations" (Chang) and "Integral operators for parabolic equations and their application" (David Colton). Volterra-Stieltjes Integral Equations and Generalized Ordinary Differential Expressions (Lecture Notes in Mathematics) by Angelo B. Mingarelli, and a great selection of related books, art and collectibles available now. Most mathematicians, engineers, and many other scientists are well-acquainted with theory and application of ordinary differential equations.
This book seeks to present Volterra integral and functional differential equations in that same framework, allowing readers to parlay their knowledge of ordinary differential equations into theory and application of the more general problems. The book is divided into four chapters, with two useful appendices, an excellent bibliography, and an index. A section of exercises enables the student to check his progress. Contents include Volterra Equations, Fredholm Equations, Symmetric Kernels and Orthogonal Systems of Functions, Types of Singular or Nonlinear Integral Equations, and more. Differential and integral equations involve important mathematical techniques, and as such will be encountered by mathematicians, and physical and social scientists, in their undergraduate courses. This text provides a clear, comprehensive guide to first- and second-order ordinary and partial differential equations, whilst introducing important … Advanced Differential Equations. A First Course in Ordinary Differential Equations by Norbert Euler (Bookboon): the book consists of lecture notes intended for engineering and science students who are reading a first course in ordinary differential equations and who have already read a course on linear algebra, general vector spaces and integral calculus. On an Integrating Machine having a new Kinematic Principle (James Thompson) WITH On an instrument for calculating the integral of the product of two given functions (William Thomson) WITH Mechanical Integration of the Linear Differential Equations of the Second Order with Variable Coefficients (William Thomson) WITH Mechanical Integration of.
Comprised of chapters, this book begins with an introduction to transformations as well as general ideas about differential equations and how they are solved, together with the techniques needed to determine if a partial differential equation is well-posed or what the "natural" boundary conditions are. History: differential equations first came into existence with the invention of calculus by Newton and Leibniz. In Chapter 2 of his work Methodus Fluxionum et Serierum Infinitarum, Isaac Newton listed three kinds of differential equations: $$\frac{dy}{dx}=f(x),\qquad \frac{dy}{dx}=f(x,y),\qquad x_1\frac{\partial y}{\partial x_1}+x_2\frac{\partial y}{\partial x_2}=y.$$ In all these cases, $y$ is an unknown function of $x$ (or of $x_1$ and $x_2$), and $f$ is a given function. He solves these examples and others. Ordinary differential equations, partial differential equations, Laplace transforms, Fourier transforms, Hilbert transforms, analytic functions of complex variables and contour integrations are expected on the part of the reader. The book deals with linear integral equations, that is, equations involving an unknown function under an integral sign. Book 3a, Calculus and Differential Equations, by John Avery, H. C. Ørsted Institute, University of Copenhagen (Denmark). This book, like the others in the series, is written in simple English. Contents: 3 Integral calculus; 4 Differential equations; 5 Solutions to the problems; A Tables. Differential and Integral Equations: Boundary Value Problems and Adjoints by S. Schwabik, M. Tvrdy, O. Vejvoda (Academia Praha): the book is devoted to certain problems which belong to the domain of integral equations and boundary value problems for differential equations. Its essential part is concerned with linear systems of.
{}
# $f: \mathbb{R} \to \mathbb{R}$ that takes each value in $\mathbb{R}$ twice Does there exist a continuous function $f: \mathbb{R} \to \mathbb{R}$ that takes each value in $\mathbb{R}$ exactly two times? • I think by the Intermediate Value Theorem this isn't possible, but proof seems a little confusing. – Seth Mar 28 '14 at 22:01 • @Seth You're right. Pick two points with $f(a)=f(b)$; there's a (not necessarily unique) maximum of $f$ in $[a,b]$ (assume the max isn't at a,b for convenience, otherwise look at $-f$); if this max is achieved once in this interval then it must be achieved elsewhere, use IVT to show that points between $f(a)$ and $f(max)$ are achieved at least 3 times. If it's achieved twice in the interval do the same thing, but noting first that there's a min between $f(max_1)$ and $f(max_2)$. – user98602 Mar 28 '14 at 22:05 • Yes, that sounds good. I was trying to write up a proof (it's obvious in my head but hard to write down on paper) but I think you explained it pretty well. – Seth Mar 28 '14 at 22:07 • Actually I just realized that if you view the map as a path (so the domain is time) then it is even more intuitively clear. – Seth Mar 28 '14 at 22:29 • Any idea where the problem comes from? A colleague mentioned it to me a couple of months ago. – Andrés E. Caicedo Mar 29 '14 at 1:56 Suppose $f(a)=f(b)=0$. Then on each of $(-\infty,a)$, $(a,b)$, $(b,\infty)$ the function $f$ is either positive or negative by the intermediate value theorem. By continuity $f$ has either a max on $[a,b]$ which is strictly positive or a min which is strictly negative. WLOG say it has a max which is positive. The left and right intervals must have opposite signs or $f$ can't be surjective. So say WLOG the left side is positive. Then some (very small) positive value is achieved three times, once on the left interval and twice in the middle interval. • Would this solution be considered rigorous enough for an exam? It feels like it relies on geometric intuition too much. 
– Akash Gaur Nov 7 '18 at 4:54 Suppose, for the sake of contradiction, that such a function exists. Let $a,b$ be two real numbers such that $f(a)=f(b)$ and $a<b$. Then either $f(x)>f(a)$ for all $x\in (a,b)$ or $f(x)<f(a)$ for all $x\in (a,b)$. If that were not the case, we would have $c,d\in (a,b)$ such that $f(c)\le f(a)\le f(d)$, so $f$ would take the value $f(a)$ a third time. We may assume that $f(x)<f(a)$ for all $x\in (a,b)$. Now we choose some $x_0\in (a,b)$ (whatever works); thus $f$ takes all the values between $f(a)=f(b)$ and $f(x_0)$ twice, once in $[a,x_0]$ and once in $[x_0,b]$. For $x<a$ or $x>b$ we cannot have $f(x)<f(a)$, because this would imply that $f$ takes these values yet a third time (if there were some $x$ with $f(x)<f(a)$, assume for concreteness $x<a$; then all the values between $y=\max\{f(x),f(x_0)\}$ and $f(a)=f(b)$ are taken by $f$ three times, since $f(x)\le y<f(a)$, $f(x_0)\le y < f(a)$ and $f(x_0)\le y < f(b)$, once in each of $[x,a]$, $[a,x_0]$ and $[x_0,b]$). Hence for $x<a$ and $x>b$ we must have $f(x)> f(a)$. Thus, by what we have seen, $f$ is bounded below by its minimum on $[a,b]$, and so $f$ does not take every value in $\mathbb{R}$; in particular, it does not take any value less than the minimum value of $f$ on $[a,b]$, a contradiction. $\mathbb{R}$ cannot have any nontrivial connected covering space, including a $k$-sheeted covering map from itself for any finite $k > 1$, because it is simply connected. • Hmm this is a good proof, but don't you need invariance of domain to prove that such a map would be a two sheeted cover? – Seth Mar 28 '14 at 22:27 • I think you just prove that there is a locally continuous pair of pre-images of every point. Maybe invariance of domain could play a role if you wanted to make the same argument for $\mathbb{R}^n$, but then again it might not. Not having thought through the general case, I stated it only for $\mathbb{R}$. – zyx Mar 28 '14 at 22:33 If possible, let $f$ be such a continuous function on $\Bbb R$ which attains every value exactly twice.
Let $f(x_1)=f(x_2)=b$, with $x_1 \neq x_2$. Then $f(x)\neq b$ for every other $x$. So either $f(x)\gt b$ for all $x\in (x_1,x_2)$ or $f(x)\lt b$ for all $x\in (x_1,x_2)$. In the case $f(x)\gt b$: let $f(x_0)=\max\{f(x):x\in (x_1,x_2)\}$. Now we claim that $f$ attains its maximum exactly once in $(x_1,x_2)$, otherwise $f$ would attain some values more than two times. So let $f$ attain its maximum exactly once, at $c\in (x_1,x_2)$. Again, since $f$ attains every value exactly two times, there exists $x_3$, outside of the interval $[x_1,x_2]$, s.t. $f(x_3)=f(c)=d$ (say) $\gt b$. Then by the IVP we can conclude that $f$ attains every value between $b$ and $d$ at least three times, a CONTRADICTION. The same argument applies for $f(x)\lt b$. Thus there does not exist any such continuous function. • $f$ might not be differentiable. – user99914 May 8 '17 at 8:56 • I actually use the intermediate value property: if a function $f$ is continuous on a closed and bounded interval ...... – gobinda chandra May 8 '17 at 9:01 • Even if a function is not continuous, it may still have the IVP. – gobinda chandra May 8 '17 at 9:06 • I see; now you need to fix your answer, as you invoke the mean value theorem at some point (but don't really use it). – user99914 May 8 '17 at 9:13 • Ok. I actually used $f'(x)$ to show that $f$ attains its maximum, but one need not use it. See the next answer below... – gobinda chandra May 8 '17 at 9:16
{}
# Reason for the obvious movement of a block 1. May 2, 2015 ### mooncrater 1. The problem statement, all variables and given/known data Consider the given system. It is obvious that the weight of the block will lead the system to move in the leftward direction. But the reason for that is not clear to me. 2. Relevant equations Using the FBD of the block (B): $mg-T=ma$ where $a$ is its assumed downward acceleration. $N=mA$ where $A$ is the rightward acceleration, as the normal force from the bigger block A on B is in the rightward direction (which is the problem). $T+N=M\alpha$ where I assume that $\alpha$ is the acceleration of block A in the leftward direction (as both T and N act leftward on A). 3. The attempt at a solution So there is nothing pulling B towards the left, so why will it go in the left direction? 2. May 2, 2015 ### CWatters What happens if the pulley and block A move to the left and B doesn't? Edit: Seems to me you are overthinking the problem. It would probably be reasonable to assume that the mass of A >> B, so that the acceleration to the left is small and the string remains vertical. 3. May 2, 2015 ### mooncrater No... we can't assume that, since in the question (from where I have asked this part) it's given that the mass of A is 40 kg and the mass of B is 20 kg. 4. May 2, 2015 ### Staff: Mentor What if block B were to lose contact with block A? What could then possibly make block B accelerate to the left? Chet 5. May 2, 2015 ### mooncrater Hmmm..... If we assume that A loses contact with B and moves towards the left a small distance $x$, then the thread will form a small angle $\theta$ with the vertical, because of which the tension will have a leftward component along with the vertical one for B. Due to this, I think block B will move left. Is that what you want to say? 6. May 2, 2015 ### Staff: Mentor Yes. That's what I wanted you to say. Chet 7. May 2, 2015 Me too. 8.
May 4, 2015 ### aardwolf.sg Assuming there is no friction between block A and the surface it rests on, it will move. 9. May 4, 2015 ### nasu Isn't the net force on the pulley pushing left even without contact? 10. May 4, 2015 ### mooncrater Yes. Then what? 11. May 4, 2015
{}
# Page:A Treatise on Electricity and Magnetism - Volume 1.djvu/206 $\begin{array}{ll} Q_{i} & =\mu^{i}-\frac{i(i-1)}{2.2}\mu^{i-2}\nu^{2}+\frac{i(i-1)(i-2)(i-3)}{2.2.4.4}\mu^{i-4}\nu^{4}-\mathrm{etc}.\\ \\ & =\sum_{n}\left\{ (-1)^{n}\frac{|\underline{i}}{2^{2n}|\underline{n}\ |\underline{n}\ |\underline{i-2n}}\mu^{i-2n}\nu^{2n}\right\} .\end{array}$ (30) In this expansion the coefficient of $\mu^i$ is unity, and all the other terms involve $\nu$. Hence at the pole, where $\mu=1$ and $\nu=0$, $Q_{i}=1$. It is shewn in treatises on Laplace’s Coefficients that $Q_i$ is the coefficient of $h^i$ in the expansion of $\left(1-2\mu h+h^{2}\right)^{-\frac{1}{2}}$. The other harmonics of the symmetrical system are most conveniently obtained by the use of the imaginary coordinates given by Thomson and Tait, Natural Philosophy, vol. i. p. 148, $\xi=x+\sqrt{-1}y,\ \eta=x-\sqrt{-1}y.$ (31) The operation of differentiating with respect to $\sigma$ axes in succession, whose directions make angles $\tfrac{\pi}{\sigma}$ with each other in the plane of the equator, may then be written $\frac{d^{\sigma}}{dh_{1}\dots dh_{\sigma}}=\frac{d^{\sigma}}{d\xi^{\sigma}}+\frac{d^{\sigma}}{d\eta^{\sigma}}.$ (32) The surface harmonic of degree $i$ and type $\sigma$ is found by differentiating $\tfrac{1}{r}$ with respect to $i$ axes, $\sigma$ of which are at equal intervals in the plane of the equator, while the remaining $i-\sigma$ coincide with that of $z$, multiplying the result by $r^{i+1}$ and dividing by $|\underline{i}$.
Hence $Y_{i}^{(\sigma)}=(-1)^{i}\frac{r^{i+1}}{|\underline{i}}\frac{d^{i-\sigma}}{dz^{i-\sigma}}\left(\frac{d^{\sigma}}{d\xi^{\sigma}}+\frac{d^{\sigma}}{d\eta^{\sigma}}\right)\left(\frac{1}{r}\right),$ (33) $=(-1)^{i-\sigma}\frac{|\underline{2\sigma}}{2^{2\sigma}|\underline{i}\ |\underline{\sigma}}\left(\xi^{\sigma}+\eta^{\sigma}\right)r^{i+1}\frac{d^{i-\sigma}}{dz^{i-\sigma}}\frac{1}{r^{2\sigma+1}}.$ (34) Now $\xi^{\sigma}+\eta^{\sigma}=2r^{\sigma}\nu^{\sigma}\cos(\sigma\phi+\beta),$ (35) and $\frac{d^{i-\sigma}}{dz^{i-\sigma}}\frac{1}{r^{2\sigma+1}}=(-1)^{i-\sigma}\frac{|\underline{i+\sigma}}{|\underline{2\sigma}}\frac{1}{r^{i+\sigma+1}}\vartheta_{i}^{(\sigma)}.$ (36) Hence $Y_{i}^{(\sigma)}=2\frac{|\underline{i+\sigma}}{2^{2\sigma}|\underline{i}\ |\underline{\sigma}}\vartheta_{i}^{(\sigma)}\cos(\sigma\phi+\beta),$ (37) where the factor 2 must be omitted when $\sigma=0$. The quantity $\vartheta_{i}^{(\sigma)}$ is a function of $\theta$, the value of which is given in Thomson and Tait’s Natural Philosophy, vol. i. p. 149. It may be derived from $Q_i$ by the equation $\vartheta_{i}^{(\sigma)}=2^{\sigma}\frac{|\underline{i-\sigma}\ |\underline{\sigma}}{|\underline{i+\sigma}}\nu^{\sigma}\frac{d^{\sigma}}{d\mu^{\sigma}}Q_{i},$ (38) where $Q_i$ is expressed as a function of $\mu$ only.
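The page states that $Q_i$ is the coefficient of $h^i$ in the expansion of $\left(1-2\mu h+h^2\right)^{-1/2}$, i.e. the Legendre polynomial $P_i(\mu)$, and that $Q_i=1$ at the pole. This is easy to check numerically; the sketch below (not part of the treatise, with illustrative function names) expands the series (30) with $\nu^2 = 1-\mu^2$ and compares it, in exact rational arithmetic, against $P_i$ built from the standard three-term recurrence.

```python
from fractions import Fraction
from math import comb, factorial

def series_30(i):
    """Coefficients of Q_i (index = power of mu) from eq. (30), with nu^2 = 1 - mu^2."""
    coeffs = [Fraction(0)] * (i + 1)
    for n in range(i // 2 + 1):
        c = Fraction((-1) ** n * factorial(i),
                     4 ** n * factorial(n) ** 2 * factorial(i - 2 * n))
        # expand nu^(2n) = (1 - mu^2)^n by the binomial theorem
        for k in range(n + 1):
            coeffs[i - 2 * n + 2 * k] += c * comb(n, k) * (-1) ** k
    return coeffs

def legendre(i):
    """Legendre polynomial P_i via (m+1) P_{m+1} = (2m+1) mu P_m - m P_{m-1}."""
    p_prev, p_cur = [Fraction(1)], [Fraction(0), Fraction(1)]   # P_0, P_1
    if i == 0:
        return p_prev
    for m in range(1, i):
        nxt = [Fraction(0)] * (m + 2)
        for k, c in enumerate(p_cur):       # (2m+1) * mu * P_m
            nxt[k + 1] += (2 * m + 1) * c
        for k, c in enumerate(p_prev):      # - m * P_{m-1}
            nxt[k] -= m * c
        p_prev, p_cur = p_cur, [c / (m + 1) for c in nxt]
    return p_cur

for i in range(8):
    assert series_30(i) == legendre(i)   # eq. (30) gives the Legendre polynomial
    assert sum(series_30(i)) == 1        # Q_i(1) = 1: "at the pole ... Q_i = 1"
```

For instance, `series_30(2)` yields $-\tfrac12 + \tfrac32\mu^2$, the familiar $P_2(\mu)$.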
{}
Categories # system of linear equations matrix conditions To solve nonhomogeneous first order linear systems, we use the same technique as we applied to solve single linear nonhomogeneous equations. Section 2.3 Matrix Equations. Objectives. Developing an effective predator-prey system of differential equations is not the subject of this chapter. First, we need to find the inverse of the $A$ matrix (assuming it exists!). If the rows of the matrix represent a system of linear equations, then the row space consists of all linear equations that can be deduced algebraically from those in the system. A system of linear equations is as follows. The solution to a system of equations having 2 variables is given by: let the equations be $a_1x+b_1y+c_1 = 0$ and $a_2x+b_2y+c_2 = 0$. The whole point of this is to notice that systems of differential equations can arise quite easily from naturally occurring situations. Characterize the vectors $b$ such that $Ax = b$ is consistent, in terms of the span of the columns of $A$. Using the Matrix Calculator we get this (I left the 1/determinant outside the matrix to make the numbers simpler); then multiply $A^{-1}$ by $B$ (we can use the Matrix Calculator again), and we are done! However, systems can arise from $$n^{\text{th}}$$ order linear differential equations as well. Typically we consider $B \in \mathbb{R}^{m \times 1} \cong \mathbb{R}^{m}$, a column vector. Let $$\vec {x}' = P \vec {x} + \vec {f}$$ be a linear system of ODEs. Think of “dividing” both sides of the equation $Ax = b$ or $xA = b$ by $A$; the coefficient matrix $A$ is always in the “denominator.” Solve several types of systems of linear equations. A necessary condition for the system $AX = B$ of $n + 1$ linear equations in $n$ unknowns to have a solution is that $|A\ B| = 0$, i.e. the determinant of the augmented matrix equals zero. To sketch the graph of a pair of linear equations in two variables, we draw two lines representing the equations. Find the inverse of the matrix.
How To Solve a Linear Equation System Using Determinants? Example 1: Solve the equations: $4x+7y-9 = 0$, $5x-8y+15 = 0$. Key Terms. The following cases are possible: i) if both the lines intersect at a point, then there exists a unique solution to the pair of linear equations. Consistent System. System of Linear Equations Involving Two Variables Using Determinants. Theorem 3.3.2. Solving systems of linear equations. Solution: the given equations can be written in matrix form as … Solve the equation by the matrix method with the formula and find the values of $x, y, z$. Systems of Linear Equations. 0.1 Definitions. Recall that if $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{m \times p}$, then the augmented matrix $[A|B] \in \mathbb{R}^{m \times (n+p)}$ is the matrix $[A\ B]$, that is, the matrix whose first $n$ columns are the columns of $A$, and whose last $p$ columns are the columns of $B$. Row space: the set of all possible linear combinations of its row vectors. $$a_{11}x_1 + a_{12}x_2 + \dots + a_{1n}x_n = b_1 \\ a_{21}x_1 + a_{22}x_2 + \dots + a_{2n}x_n = b_2 \\ \cdots \\ a_{m1}x_1 + a_{m2}x_2 + \dots + a_{mn}x_n = b_m$$ This system can be represented as the matrix equation $A \cdot \vec{x} = \vec{b}$, where $A$ is the coefficient matrix. The matrix valued function $$X (t)$$ is called the fundamental matrix, or the fundamental matrix solution. Theorem. 1. Understand the equivalence between a system of linear equations, an augmented matrix, a vector equation, and a matrix equation. The dimension compatibility conditions for x = A\b require the two matrices A and b to have the same number of rows. Enter coefficients of your system into the input fields. In such a case, the pair of linear equations is said to be consistent. The solution is: $x = 5$, $y = 3$, $z = -2$. This calculator solves systems of linear equations using the Gaussian elimination method, inverse matrix method, or Cramer's rule. You can also compute the number of solutions of a system of linear equations (analyse the compatibility) using the Rouché–Capelli theorem.
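Example 1 above can be checked directly. The sketch below (illustrative helper name, exact rational arithmetic rather than a matrix calculator) applies Cramer's rule to the pair $4x+7y-9=0$, $5x-8y+15=0$:

```python
from fractions import Fraction

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y + c1 = 0 and a2*x + b2*y + c2 = 0 by Cramer's rule.

    Returns (x, y) as exact rationals; raises ValueError when the coefficient
    determinant vanishes (no unique solution).
    """
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("coefficient determinant is zero: no unique solution")
    # Rewrite as a*x + b*y = -c, then apply Cramer's rule.
    x = Fraction(-c1 * b2 + c2 * b1, det)
    y = Fraction(-a1 * c2 + a2 * c1, det)
    return x, y

# Example 1 from the text: 4x + 7y - 9 = 0 and 5x - 8y + 15 = 0.
x, y = cramer_2x2(4, 7, -9, 5, -8, 15)
assert 4 * x + 7 * y - 9 == 0 and 5 * x - 8 * y + 15 == 0   # x = -33/67, y = 105/67
```

Using `Fraction` keeps the determinant arithmetic exact, so the check against both equations holds with equality rather than up to floating-point error.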
{}
# Outlook plug-in for Evernote, attachments ## Recommended Posts I use the Outlook plug-in for Evernote, when an email has an attachment it is always placed at the end of the Evernote note that is created by the plug-in.  is there a setting I can change that would place the attachment(s) at the beginning of the note? • Level 5* Hi.  I have Evernote and Outlook too.  AFAIK,  there's no way to alter where the attachment goes from the (logical) end of document position,  except that you could manually create a new attachment to Evernote at the start of a note and delete the original add-in at the end. Hi.  I have Evernote and Outlook too.  AFAIK,  there's no way to alter where the attachment goes from the (logical) end of document position,  except that you could manually create a new attachment to Evernote at the start of a note and delete the original add-in at the end. Thanks for the info.  I currently manually move the attachment(s) from the end of the note to the beginning. #### Archived This topic is now archived and is closed to further replies.
{}
# How to prove that $m_\alpha ^*(\mathcal C)\geq 1$ where $\mathcal C$ is the Cantor set, and $m_\alpha$ the Hausdorff measure. Let $\mathcal C$ be the Cantor set and $m_\alpha ^*$ the $\alpha$-Hausdorff measure with $\alpha =\frac{\ln(2)}{\ln(3)}$, i.e. $$m_\alpha ^*(E)=\lim_{\delta\to 0}\mathcal H_\alpha ^\delta(E)$$ where $$\mathcal H_\alpha ^\delta(E)=\inf \left\{\sum_{i}\operatorname{diam}(F_i)^\alpha \mid E\subset \bigcup_{i=1}^\infty F_i, \operatorname{diam}(F_i)<\delta\right\}$$ and $\operatorname{diam}(A)=\sup\{\|x-y\|: x,y\in A\}$ is the diameter. I had no problem showing that $m_\alpha ^*(\mathcal C)\leq 1$, but I have to show that $m_\alpha ^*(\mathcal C)=1$; therefore, I would like to show that $m_\alpha ^*(\mathcal C)\geq 1$. I really have no idea on how to proceed. • Consider adding a tag for a broader subject area to which the question belongs. Some of these tags might fit. (from a bot) – user147263 Mar 6 '16 at 20:46 • if $A_i \subset A_j$ then $\mathcal H_\alpha ^\delta(A_i) \le \mathcal H_\alpha ^\delta(A_j)$. and the Cantor set is defined as the limit of a sequence of sets – reuns Mar 6 '16 at 21:03 • @user1952009: And so? Where does it go? Thanks. – idm Mar 6 '16 at 21:20 • what? en.wikipedia.org/wiki/… – reuns Mar 6 '16 at 21:23 For my own sake, I will use the more common notation $\mathcal{H}^{\alpha}$ for the $\alpha$-Hausdorff measure. To show that $\mathcal{H}^{\alpha}(\mathcal{C}) \geq 1$ it is enough to show that $\sum_{j}\operatorname{diam}(I_j)^\alpha \geq 1$, whenever open intervals $I_j$ cover $\mathcal{C}$. As $\mathcal{C}$ is compact, it can be covered by finitely many $I_j$'s, so without loss of generality we can assume we only have $I_1, \dots, I_n$, as this can only decrease the sum. Moreover, we can take each $I$ ($I=I_j$ for some $j=1,\dots,n$) to be the smallest interval containing two intervals $J$ and $J'$, which appear in the construction of the Cantor set (they don't need to appear at the same stage of the construction).
Again, to do so we may shrink $I$, but this only decreases the sum. If $J$ and $J'$ are the largest such intervals, by the construction of the Cantor set we know that $I$ must be made up of $J$, followed by an interval $K$ in the complement of $\mathcal{C}$, followed by $J'$. By construction, we also have $\operatorname{diam}(K) \geq \operatorname{diam}(J), \operatorname{diam}(J')$ and therefore $\operatorname{diam}(K) \geq \frac12 (\operatorname{diam}(J) + \operatorname{diam}(J'))$. Then, we have \begin{align} \operatorname{diam}(I)^\alpha & = (\operatorname{diam}(J) + \operatorname{diam}(K) + \operatorname{diam}(J'))^\alpha \geq \\ & \geq \left(\frac32 (\operatorname{diam}(J) + \operatorname{diam}(J'))\right)^\alpha= \\ & = 2 \left(\frac12 (\operatorname{diam}(J) + \operatorname{diam}(J'))\right)^\alpha \geq \\ & \geq \operatorname{diam}(J)^\alpha + \operatorname{diam}(J')^\alpha, \end{align} using the fact that $3^\alpha=2$ and that $t^{\alpha}$ is a concave function. This allows us to replace each $I_j$ with $J_j$ and $J'_j$, without increasing the sum of the $\alpha$-th powers of the diameters. We proceed in this way until, after finitely many steps, we can cover $\mathcal{C}$ with intervals of equal length $3^{-i}$. This must include all the intervals in the $i$-th stage of the construction of $\mathcal{C}$. Since for such intervals we have $\sum \operatorname{diam}(I)^\alpha \geq 2^i 3^{-\alpha i}= 2^i 3^{\log_3 2^{-i}}= 1$, the same inequality must hold for the initial intervals. This proof can be found almost verbatim in "The geometry of fractal sets", by Falconer.
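The closing identity, $2^i\,3^{-\alpha i} = 1$ with $\alpha = \ln 2/\ln 3$, is easy to verify numerically for the stage-$i$ covers (a small sketch, not part of the original answer):

```python
import math

alpha = math.log(2) / math.log(3)   # the exponent used in the question

for i in range(1, 25):
    # stage i of the construction: 2^i intervals, each of diameter 3^-i
    stage_sum = 2 ** i * (3.0 ** -i) ** alpha
    assert abs(stage_sum - 1.0) < 1e-9   # sum of diam^alpha is 1 at every stage
```

Since $(3^{-i})^\alpha = 2^{-i}$ exactly, the sum is identically 1; the tolerance only absorbs floating-point rounding.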
{}
In a previous installment of our Measuring series, we considered the question of measuring subsets of the natural numbers. We came up with a solution using a non-principal ultrafilter. This is an elegant solution, but it has some drawbacks. One of these is that, assuming we are assigning a measure of 1 to the entire set, it seems natural to want to assign a measure of $\frac{1}{2}$ to both the set of even numbers and the set of odd numbers. However, depending on which of these two sets is in our ultrafilter, the ultrafilter measure will assign a measure of 1 to one of these sets and 0 to the other. We discussed an ad hoc fix to this, but, even then, similar problems arise. The ultrafilter measure is simply not what we want in many instances. Today, we are going to discuss another approach to thinking about the size of subsets of $\mathbb{N}$, an approach that exploits the fact that it is really easy to measure subsets of a finite set. Our strategy will be to approximate the natural numbers by larger and larger finite sets, which we can easily deal with, and then see if their behavior converges to a limit. Recall that, given a natural number $n$, we think of $n$ as the set of all smaller natural numbers, i.e. $n = \{0,1,\ldots,n-1\}$. Given a subset $A \subseteq \mathbb{N}$ and a natural number $n$, let $d_n(A) = \frac{|A \cap n|}{n}$. $d_n(A)$ thus gives us the measure of $A$ relative to the first $n$ natural numbers. We might hope that, as $n$ gets larger, we get better and better information about the distribution of $A$. This happens precisely when the limit $\lim_{n \rightarrow \infty} d_n(A)$ exists. When it does, we let $d(A) = \lim_{n \rightarrow \infty} d_n(A)$. (For those of you who forget pre-calculus, recall that, intuitively, $\lim_{n \rightarrow \infty} d_n(A) = L$ if, as $n$ gets larger and larger, $d_n(A)$ gets closer and closer to $L$.) $d(A)$, when it exists, is called the asymptotic density of $A$. 
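The finite densities $d_n$ are easy to compute directly. The sketch below (illustrative helper names, not from the original post) evaluates $d_n$ for two familiar sets at a few cutoffs:

```python
def d_n(indicator, n):
    """Finite density d_n(A) = |A ∩ {0, ..., n-1}| / n, with A given by its indicator."""
    return sum(map(indicator, range(n))) / n

def is_prime(k):
    # simple trial division; fine for the small cutoffs used here
    return k >= 2 and all(k % p for p in range(2, int(k ** 0.5) + 1))

for n in (100, 10_000, 100_000):
    print(n, d_n(lambda k: k % 2 == 0, n), d_n(is_prime, n))
# the density of the evens stays at 1/2, while the prime density slowly decays
```

At $n = 10^4$ the prime density is $0.1229$ and still falling, consistent with the Prime Number Theorem's $1/\log(n)$ rate, while the evens sit at exactly $\tfrac12$ for every even cutoff.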
Asymptotic density can be a useful way of measuring subsets of $\mathbb{N}$, and it has some nice properties. For example, the following are easily seen. • $d(\emptyset) = 0$. • $d(\mathbb{N}) = 1$. • $d(\mathrm{Evens}) = d(\mathrm{Odds}) = \frac{1}{2}$. The celebrated Prime Number Theorem states that, asymptotically, $d_n(\mathrm{Primes}) \sim \frac{1}{\log(n)}$. Therefore, since $\log(n)$ gets arbitrarily large as $n$ increases, we immediately get $d(\mathrm{Primes}) = 0$. Asymptotic density also has the nice feature of being finitely additive. This means that, if $A,B \subseteq \mathbb{N}$, $A \cap B = \emptyset$, and $d(A)$ and $d(B)$ both exist, then $d(A \cup B)$ exists and $d(A \cup B) = d(A) + d(B)$. A big drawback to asymptotic density, though, is that, in general, there is no guarantee that the limit $\lim_{n \rightarrow \infty} d_n(A)$ will exist. Thus, for many sets $A \subseteq \mathbb{N}$, $d(A)$ is simply undefined. This leads us to consider generalizations of asymptotic density, called upper and lower density, that will be defined for more sets (in fact, for all of them). To do so, we need to review some key facts about real numbers. Recall that $[0,1]$ denotes the closed interval of real numbers $r$ such that $0 \leq r \leq 1$. If $X \subseteq [0,1]$, then the supremum of $X$, denoted $\sup(X)$, is the least real number $s$ such that, for all $r \in X$, $r \leq s$. Intuitively, $\sup(X)$ is the least real number that bounds $X$ from above. Dually, the infimum of $X$, denoted $\inf(X)$, is the largest real number $s$ such that, for all $r \in X$, $s \leq r$. $\inf(X)$ is thus the largest real number that bounds $X$ from below. It is one of the defining features of the real numbers that suprema and infima of bounded sets always exist. Now suppose we have a sequence $\langle x_n \mid n \in \mathbb{N} \rangle$, where, for all $n$, $x_n \in \mathbb{R}$. 
The limit $\lim_{n \rightarrow \infty}x_n$ might fail to exist; for example, this will be the case if the sequence oscillates between two different numbers. To help analyze the behavior of bounded sequences without limits, we can consider variations on the limit, known as the limit superior ($\limsup$) and limit inferior ($\liminf$). These are defined as follows:

$\limsup_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty}(\sup\{x_m \mid m \geq n\})$

$\liminf_{n \rightarrow \infty} x_n = \lim_{n \rightarrow \infty}(\inf\{x_m \mid m \geq n\})$

Take a minute to think about what these definitions are saying. The following image may be useful. It should be noted that $\limsup_{n \rightarrow \infty} x_n$ is the limit of a non-increasing sequence and $\liminf_{n \rightarrow \infty} x_n$ is the limit of a non-decreasing sequence, so both of them are guaranteed to exist (if the sequence is not bounded, one or both might be $\pm \infty$).

With this in mind, we can now define the upper density of a set $A \subseteq \mathbb{N}$ to be

$\overline{d}(A) = \limsup_{n \rightarrow \infty} d_n(A)$

and the lower density to be

$\underline{d}(A) = \liminf_{n \rightarrow \infty} d_n(A)$.

Upper and lower density have the nice feature that they are defined for all subsets of natural numbers. In addition, it is easily seen that, if $A \subseteq \mathbb{N}$ and $d(A)$ exists, then $\overline{d}(A) = d(A) = \underline{d}(A)$.

Upper and lower density can be useful ways of quantifying the “size” of subsets of $\mathbb{N}$. In particular, as we will see in our next post, the statement $\overline{d}(A) > 0$ can in many contexts be interpreted as stating that $A$ is a “non-negligible” subset of $\mathbb{N}$. However, upper and lower density suffer from a fatal flaw that prevents them from being considered to be true “measures” of subsets of $\mathbb{N}$: they are not even finitely additive.
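To see concretely how $\lim_{n \rightarrow \infty} d_n(A)$ can fail to exist, a standard example is the set of numbers whose binary representation has an odd number of digits, so that $A$ consists of longer and longer blocks separated by longer and longer gaps. The sketch below (my own code; the set is a common textbook choice, not one from the post) samples $d_n$ along powers of 2 and shows oscillation rather than convergence:

```python
def d_n(in_A, n):
    """Density of A relative to {0, ..., n-1}; in_A is a membership test."""
    return sum(1 for k in range(n) if in_A(k)) / n

# A = numbers whose binary representation has an odd number of digits:
# A = {1} ∪ {4,...,7} ∪ {16,...,31} ∪ {64,...,127} ∪ ...
in_A = lambda k: k.bit_length() % 2 == 1

# d_n is near 1/3 at n = 2^(even exponent) and near 2/3 at n = 2^(odd
# exponent), so the d_n have no limit, but the limsup and liminf exist.
for e in range(12, 18):
    print(2 ** e, round(d_n(in_A, 2 ** e), 4))
```

For this set the subsequence $d_{2^{2k}}(A)$ tends to $\frac{1}{3}$ while $d_{2^{2k+1}}(A)$ tends to $\frac{2}{3}$, so $\underline{d}(A) \leq \frac{1}{3} < \frac{2}{3} \leq \overline{d}(A)$, and $d(A)$ is undefined.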
I leave you for now to explore this phenomenon on your own; solutions to the following two exercises and further investigation of density will come next week.

Exercise 1: Find subsets $A,B \subseteq \mathbb{N}$ such that:

• $A \cap B = \emptyset$;
• $A \cup B = \mathbb{N}$;
• $\overline{d}(A) = \overline{d}(B) = 1$;
• $\underline{d}(A) = \underline{d}(B) = 0$.

Exercise 2: Show that, if $n \in \mathbb{N}$ and $A_1, A_2, \ldots, A_n$ are subsets of $\mathbb{N}$ such that $A_1 \cup A_2 \cup \ldots \cup A_n = \mathbb{N}$, then there is $k \leq n$ such that $\overline{d}(A_k) > 0$.

Cover image: Wallace Bournes by Gerhard Richter
## CCC '10 J5 - Knight Hop

Points: 7 · Time limit: 1.0s · Memory limit: 64M

##### Canadian Computing Competition: 2010 Stage 1, Junior #5

Below is an 8-by-8 chessboard on which we will designate square locations using ordered pairs as indicated. For example, notice the positions of piece ~A~ and piece ~B~ in the diagram.

(board diagram: the top rows of the board, labelled 8 and 7, with pieces ~B~ and ~A~ marked)

A knight is a special game piece that can leap over other pieces, moving in an "L" pattern. Specifically, in the diagram below, ~K~ represents the knight's starting position and the numbers 1 through 8 represent possible places the knight may move to.

(board diagram: the knight ~K~ with its eight possible destination squares numbered 1 through 8)

Your program will read the starting location of the knight and output the smallest number of jumps or moves needed to arrive at the location specified by the second pair of integers in the input.

#### Input Specification

Your program will read four integers, where each integer is in the range 1 to 8. The first two integers represent the starting position of the knight. The second two integers represent the final position of the knight.

#### Output Specification

Your program should output the minimum (non-negative integer) number of moves required to move the knight from the starting position to the final position. Note that the knight is not allowed to move off the board during the sequence of moves.

#### Sample Input 1

2 1 3 3

#### Output for Sample Input 1

1

#### Sample Input 2

4 2 7 5

#### Output for Sample Input 2

2

• commented on Nov. 28, 2020, 1:34 p.m.

A chess knight will take at most 5 moves to reach a given square. I don't know if that bit of chess knowledge can be helpful for this problem.

• commented on Dec. 11, 2020, 11:31 a.m.

It's actually 6 in some cases, such as from corner to corner.

• commented on Jan. 8, 2021, 8:29 p.m.

Ah right, it is 6 moves. Sorry.

• commented on Dec. 1, 2020, 12:36 a.m.

It helps you to become a grandmaster, but in reality, it has absolutely nothing to do with this question.

• commented on Dec. 1, 2020, 9:29 p.m.
I was thinking you could pass the problem by setting an exit condition on the recursion at depth 5, instead of using a visited cache, which is what I assume the problem intended.

• commented on June 15, 2020, 3:44 p.m. edited

Using std::queue with pair<int,int>: https://dmoj.ca/submission/2145895 still MLE's. Edit: For comment below.

• commented on June 15, 2020, 3:52 p.m.

One MB ://

• commented on June 15, 2020, 12:32 p.m. edited

I MLE by one MB on case 6 :(. Any optimization tips?

• commented on June 15, 2020, 1:58 p.m. edit 3

Instead of making separate queues per dimension, try using pairs to reduce the number of possible queue entries.

• commented on June 15, 2020, 3:51 p.m.

A queue of pair<int,int> still MLE's. Maybe my BFS algorithm is inefficient? Best solutions only use like 1-2 MB.

• commented on June 15, 2020, 4:14 p.m. edit 2

Your program enters an infinite loop for the test case: 3 1 1 3

• commented on June 15, 2020, 4:33 p.m.

Thanks! I got it now!

• commented on June 15, 2020, 4:08 p.m.

Then it is most likely that you have a logical error/edge case you did not account for. The runtime for that case took abnormally long as well. I would advise you to look over the semantics.

• commented on June 15, 2020, 4:35 p.m. edit 3

"Then it is most likely that you have a logical error/edge case you did not account for." You were right, I had if (visited.at(new_h).at(new_v) == 0) instead of if (visited.at(new_v).at(new_h) == 0). *facepalm* Edit: edits were glitching, idk why.

• commented on May 7, 2020, 10:45 a.m.

Did the point total for this question decrease? I recall this being worth 10 points.

• commented on May 8, 2020, 3:30 p.m.

Yes, the points rewarded for this problem were reduced from 10 points to 7 points to match the other classical graph theory problems.

• commented on May 8, 2020, 3:13 p.m.

I think

• commented on Feb. 9, 2019, 11:29 a.m.

What does IR mean and why do I get it when I submit?

• commented on Jan. 4, 2019, 10:43 a.m.
Which would be better: using a queue or recursion?

• commented on Aug. 1, 2017, 9:34 p.m.

wleung_bvg you're getting it too?

• commented on Aug. 2, 2017, 1:34 p.m. edited

Your code has variables that have indeterminate ("random") values. This can sometimes happen when declaring a variable in a non-global scope without initializing it with a value. Simply initializing the boolean variable with the value false in your spot struct will allow your code to pass.

• commented on Aug. 2, 2017, 3:37 p.m.

Ah, thanks. Why does the first case work, though?

• commented on Aug. 1, 2017, 11:28 a.m.

No TLE when I submit in Java. Why does my C++ code get stuck when I submit, but never when I try cases myself?

• commented on July 31, 2017, 10:36 p.m. edited

TLE. Can anyone tell me why I'm getting TLE? I've tested my code with all the test data, and I get all the right answers. Given the fact that it's an 8-by-8 board, I find it very hard to believe my code is actually too slow. I even used a timer, with all cases under 0.1. Is my code getting stuck with the input or something?

• commented on Aug. 16, 2017, 4:15 p.m.

TLE means Time Limit Exceeded. The time limit has nothing to do with your answers, only with how you've coded the solution.

• commented on Aug. 16, 2017, 4:25 p.m.

I know... My code wasn't too slow, as mentioned above. The struct bool wasn't initialized.

• commented on April 4, 2020, 7:28 p.m.

The judges can be wonky at times, making your code run abnormally slow.

• commented on April 5, 2020, 10:05 a.m.

This was not the judge's fault, as explained by wleung_bvg's reply. The judges may have been wonky in the past, but have lately been upgraded to provide very stable and also very fast run times.
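The approach several commenters converge on — breadth-first search over board squares, using a queue of coordinate pairs and a visited structure rather than depth-limited recursion — can be sketched as follows. This is an illustrative sketch in Python, not any particular submission; the function and variable names are mine:

```python
from collections import deque

# The eight "L"-shaped knight offsets.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2),
         (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def knight_moves(start, goal, size=8):
    """Minimum number of knight moves between two squares on a
    size-by-size board with 1-based coordinates, via BFS."""
    if start == goal:
        return 0
    dist = {start: 0}          # doubles as the visited set
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if 1 <= nxt[0] <= size and 1 <= nxt[1] <= size and nxt not in dist:
                dist[nxt] = dist[(x, y)] + 1
                if nxt == goal:
                    return dist[nxt]
                queue.append(nxt)
    return -1  # unreachable (does not happen on the standard 8-by-8 board)

print(knight_moves((2, 1), (3, 3)))  # 1 (Sample Input 1)
print(knight_moves((4, 2), (7, 5)))  # 2 (Sample Input 2)
print(knight_moves((1, 1), (8, 8)))  # 6 (corner to corner, per the comments)
```

Because the queue never holds more than the 64 board squares, memory use is tiny, which sidesteps the MLE issues discussed above; marking a square visited at the moment it is enqueued (not when it is dequeued) is what keeps the queue from blowing up.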